The Assessment, Total Brain
Assessing our users' mental health as a starting point to effectively guide them to a healthier space.
Total Brain is a mobile and web app that enables users to understand, measure, and track their mental health over time. The platform is centered around the assessment, which enables users to evaluate their strengths and weaknesses across 4 functions (comprising 12 capacities). After a user takes the assessment, they are given a customized training program that includes meditations, brain games, exercises, and more.
Lead Product Designer
Product Design, User Flows, Wireframes, User Research and Testing, Iconography, Prototyping, Dev Handoff
Main feature for mobile, tablet, web app
(focus for this case study: mobile)
Sketch, Proto.io, Usertesting.com
1 Senior Product Manager, 2 software engineers, 1 UX writer
// BY THE NUMBERS
ACTIVE TOTAL BRAIN USERS
As it was originally designed, the assessment had low completion rates and engagement. We knew from user feedback that people found the assessment long and cumbersome, with unclear instructions and too little context around the tasks they were being asked to do.
We set the goal of increasing conversion.
I designed a more accessible, usable flow built around user goals, letting the data and feedback we had received from user testing guide our decisions.
We saw a 5% increase in conversion.
// STAKEHOLDER SYNC
I met with our internal stakeholders: 1 product manager, 1 UX writer, 2 developers, the COO, and the head of design. We reviewed expectations around timelines, goals, and a unified vision for moving forward. Our goals were to increase the completion rate of the assessment, increase retention, make it more user-friendly, and add strong context and understanding around what was being measured.
I reviewed our existing research, old designs, and user feedback about the current assessment. I found user fatigue (the assessment took 20 minutes at the time), confusion around instructions, and a lack of context.
I also looked at our existing personas, and thought about how I could make them more specific to the assessment.
With the existing research and personas, I created two personas specific to the assessment: Samuel and Marissa. Marissa seeks immediate relief and wants to regain control of her emotions, while Samuel seeks to uncover patterns and trends about himself and take a longer-term approach to self-improvement. Understanding and specifying these goals was essential to designing effectively.
// USER TESTING
Testing Round 1
I wrote a script to test what was motivating users to come back to the assessment, what was making it difficult for them to complete it, and what they wished to change. Here is a sample of the questions we asked, with responses.
Is there anything else that would motivate and excite you to return to complete the assessment?
"If you were to show the capability for personalized content based around my metrics."
" Yes, if I was given a preview of what the assessment is going to test, and what is measured in these results."
"I would also like a reward of more content after completing various parts of the assessment."
" Reminders via email, and results that are promising."
While a testing environment is not always the most accurate measurement, it's great for getting a sense of direction on the basics. In our research, we found that users would be most motivated by better understanding the insights the assessment provides, unlocking content as they progress, and personalized metrics. At the same time, we found that users felt overwhelmed by the amount of information they received during onboarding. I understood that this would be a balance of sharing information without overwhelming the user.
I synthesized our research findings into a deck for our stakeholders, presenting my recommendations for how to move forward.
User Journey Mapping
Now that our research had given us some direction, I moved to what a user journey would look like. By journey mapping, I was able to concretely define the purpose of each step and align them with user goals.
The basis for my sketches came from existing user feedback. We wanted to account for our established user goals: (1) give the user more context around what was being assessed and why it mattered, (2) modularize the assessment to make it less cumbersome and tedious, and (3) make it easier to use and more accessible.
The main differences between the sketches were the breadth of information presented and the order of the flow and functions.
We ended up choosing two, and those sketches helped shape the next step: low-fidelity wireframes.
// LOW FIDELITY
I started with low-fidelity wireframes in Sketch. I made about 3 low-fidelity versions for onboarding and the assessment. The most pivotal screens are below (onboarding flow, assessment overview, function overview / task intros).
Overview of Flow, Low-Fidelity:
Sync Round 2
It was important to share these to gain clarity on how to move forward while also accounting for development, product road maps, adherence to existing design flows, and more.
// TESTING #2
We tested two flows: Flow A (the first shown) and Flow B. Here they are with user feedback indicated in red and green.
Dark, colored background with graphics makes text hard to read
There was a lot of information on this page. Users felt overwhelmed.
A second CTA leads users away from completing the assessment
Users enjoyed the illustration, especially as a relief from text
Users enjoyed the balance of text and illustrations, alongside an overview of what came next.
Clear, concise instructions over a light background / dark text was easier for users to read and understand.
Users were ushered into the next part of the assessment automatically, and were presented with a clear path forward.
Users overwhelmingly preferred Flow B. We chose to move forward to work on a full, high-fidelity flow using this structure for the assessment.
I designed high-fidelity wireframes in Sketch. Based on what we discovered in our A/B test, we moved forward with Flow B. Since there were 80+ screens, I've attached example screen designs from each part of the flow below (onboarding, function introductions, task introductions / tasks).
Function Introduction Samples:
Task Introduction / Task Samples:
Full Flow Preview:
// DEV HANDOFF
I annotated the flows to explain behaviors, and walked our engineering team through the specs to make sure everything was clear. There was no need to build a prototype since we adhered closely to our design system (which I had also assessed for accessibility), and all component behavior was documented and readily available.
Here are a few examples of the information I annotated:
error message handling
functionality when the user pauses, goes back, or exits
differences between the flow variants for the different populations the assessment serves
behavior for Android and iOS
responsiveness for different screen sizes
differences in functionality across web, mobile, and tablet
Here are examples of specifications I shared with our engineering team.
Sample Video of Onboarding Flow
// LAUNCH + CONTINUOUS ITERATION
We launched the changes to the assessment in phases (0, 1, 2). All changes shipped and were maintained successfully, and the team was all hands on deck for any needs or iterations. Iteration is continuous as we receive feedback from our users, and we took that feedback into account no matter how far out from launch we were. We were on a mission to build the best product we could for our users, and that is a never-ending process!
Since we released the project in phases, it was hard to nail down the exact percentage increase over time, but we found that the initial release drove a 5% higher engagement rate.
While the design got a huge upgrade in accessibility, usability, education, and responsiveness to user feedback during testing, we didn't truly get to address the real barriers to higher engagement.
Using Total Brain is often incentivized by a user's employer, and it wasn't within the project scope to address other factors that could have been more impactful, such as reminders, more hand-holding throughout onboarding and the assessment itself, and education for more context (i.e., answering "why should I do this?").
Keeping Up with the Kardashians
Total Brain was featured on Keeping Up with the Kardashians when Khloé took the Total Brain assessment (that I designed!). Here is a clip from the episode; the clip starts at 2:45!