Whatfix Information Architecture Revamp Banner

Note: Due to NDA restrictions, some parts of this case study have been omitted or anonymized. Reach out to adithkvn@gmail.com for a detailed walkthrough.

Designing for Discoverability: Rethinking IA at Whatfix

Timeline

6 months

Contributors

2 designers, 1 UX writer, 1 PM, 10 developers

My role

As lead designer, I owned end-to-end design, led user research and testing, and scoped the project for roadmap delivery.

๐Ÿšจ The problem

Whatfix, a SaaS platform for in-app guidance and performance support, was expanding beyond its core DAP (Digital Adoption Platform) into new offerings like Product Analytics and Enterprise Admin.

At the time, the Whatfix information architecture represented only our DAP offering and had three key problems:

It didn't scale - not only within our DAP product, but also across new product verticals like Product Analytics and Enterprise Admin.

It didn't match customers' mental models - in both feature grouping and terminology.

It had low discoverability of product offerings.

We thought our IA was logical - until card sorting told a different story. Customers grouped things in surprising ways, revealing overlaps and blind spots.

Tree testing of the current IA confirmed this: the benchmark came in at a failing score of 27%.

The old Whatfix dashboard UI
The previous dashboard UI that was tested.
Tree test success rate chart
The overall success score for the old IA was only 27%.

๐ŸŽฏ Goal

50

Target tree test score without UI

67

Target tree test score with UI

To set the benchmark, we used two methods: tree testing and card sorting.

๐Ÿงช Learnings From Initial Product Testing

Tree Testing

The tree test for the DAP product had two objectives: to define a baseline benchmark against which to measure DAP's success, and to assess users' mental models so those insights could guide the future IA. To achieve this, 17 participants were given 14 tasks and asked where they would go in our current dashboard IA to complete each one.
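For context, a tree test success score like those reported here is typically just correct destinations over total attempts, per task and overall. A minimal sketch of that arithmetic, using hypothetical task names and counts rather than the actual study data:

```python
# Sketch: per-task and overall tree test success rates.
# Task names and counts are hypothetical, not the study data.
results = {
    "Create a new flow":        {"correct": 5, "attempts": 17},
    "Find adoption analytics":  {"correct": 3, "attempts": 17},
    "Change widget visibility": {"correct": 6, "attempts": 17},
}

for task, r in results.items():
    print(f"{task}: {r['correct'] / r['attempts']:.0%}")

overall = sum(r["correct"] for r in results.values()) / sum(
    r["attempts"] for r in results.values()
)
print(f"Overall success score: {overall:.0%}")  # -> 27% for this sample
```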


This was a wake-up call: tasks we assumed were simple turned out to be the most confusing. It showed us exactly where people got lost.

Diagram of tree test insights
A look at the insights from the tree test.

Card Sorting

The goal of the card sort for the DAP product was to understand how users categorize Whatfix information and features, to inform the future IA. 17 participants completed an open card sort, organizing 33 cards into categories of their own making and labeling them; we then analyzed the resulting themes against customer mental models.

Card sorting analysis data
Analysis of the card sorting exercise.
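For readers unfamiliar with the method, the standard analysis behind an open card sort is a co-occurrence (similarity) matrix: how often each pair of cards landed in the same participant-created group. A minimal sketch, with hypothetical cards and sorts rather than the actual study data:

```python
# Sketch: card-pair co-occurrence counts from an open card sort.
# Cards and participant sorts are hypothetical, not the study data.
from collections import Counter
from itertools import combinations

sorts = [  # one dict per participant: card -> group label they created
    {"Flows": "Content", "Smart Tips": "Content", "Surveys": "Feedback"},
    {"Flows": "Guidance", "Smart Tips": "Guidance", "Surveys": "Guidance"},
    {"Flows": "Authoring", "Smart Tips": "Help", "Surveys": "Help"},
]

pairs = Counter()
for sort in sorts:
    for a, b in combinations(sorted(sort), 2):
        if sort[a] == sort[b]:
            pairs[(a, b)] += 1

for (a, b), n in pairs.most_common():
    print(f"{a} + {b}: grouped together by {n}/{len(sorts)} participants")
```

High-agreement pairs suggest features that belong together in the new IA; low-agreement pairs flag the overlaps and blind spots mentioned above.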

The improvement opportunity areas discovered from these tests were as follows:

  • Organize tasks and categories around user mental models
  • Refine navigation, labels, and terminology for clarity
  • Provide multiple entry points and simplify crowded categories
  • Separate and clarify settings, theming, and visibility rules
  • Reorganize integrations, analytics, and reporting for clearer structure
  • Build a focused, purposeful settings section (no catch-alls)

๐Ÿ› ๏ธ How I Solved It

As this project is covered by an NDA, contact me at adithkvn@gmail.com for the full details.

I used insights from initial customer testing to adjust the existing IA, then revisited the areas that had failed. Collaborating with stakeholders and subject matter experts, I facilitated brainstorming sessions that shaped a new IA for the next round of testing.

Information architecture mapping diagrams and user flow wireframes
Comprehensive IA mapping exercise showing various navigation structures and content organization layouts
Three different navigation approaches: Product switcher, Contextual product nav, and Global Utility
Exploring different navigation structures with color-coded sections for various product areas
Detailed spreadsheet matrix showing user testing results and analytics data
Comprehensive testing results matrix tracking task completion rates and user behavior patterns

I brought these back to customers to see if the changes really moved the needle, while in parallel building early navigation prototypes so I could test the new IA in context.

Interface prototypes showing DAP and Analytics navigation structures
Early navigation prototypes demonstrating the new IA structure across different product areas
Navigation types explanation showing Product switcher, Account switcher, Contextual product navigation, and Global utility navigation
Detailed breakdown of different navigation types and their specific use cases within the user experience

Thankfully, these were the results after the final round of testing:

76

Tree test score without UI

88

Tree test score with UI

86.3

SUS score for new navigation experience
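For reference, SUS is a standard ten-item questionnaire on a five-point scale: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch with hypothetical responses:

```python
# Sketch: computing a System Usability Scale (SUS) score.
# Responses below are hypothetical, not the study data.
def sus_score(responses: list[int]) -> float:
    """Ten responses, each 1-5; returns a 0-100 SUS score."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0 is item 1 (odd)
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # -> 90.0
```

A reported score of 86.3 would typically be the mean of such per-participant scores; anything above roughly 80 is generally considered excellent.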

๐Ÿš€ What Went Into The Product

The final IA was simpler, sharper, and tested clean. For the first time, customers could find what they needed without second-guessing.

Approximately 80% of the modifications in the new IA, including terminology adjustments, regrouping, and the introduction of a new navigation and multi-product switcher, were successfully integrated into the product.

๐Ÿ“ˆ Impact

Post implementation, the new experience received strong positive customer feedback, with multiple customers specifically praising the product team as a whole for the improved experience.

Positive customer feedback
A customer expressing their love for the new dashboard experience.

๐Ÿ’ช Challenges

  • Recruiting target users for testing.
  • Overcoming team inertia for adoption.
  • Technical debt limited full rollout (80% implemented initially).

๐Ÿ”„ What Could Have Been Done Better

  • Improved R&D-wide communication: Could have facilitated quicker alignment and reduced implementation time.
  • Pre-recruitment of users for testing at project initiation: Would have streamlined the testing process and improved efficiency.

๐Ÿ“š Learnings

  • The right test type (tree test vs. open/closed card sort) depends on the problem.
  • Terminology research is critical to clarity.
  • Early stakeholder alignment speeds implementation.

THE END