
The Assessment, Total Brain

Assessing our users' mental health as a starting point to effectively guide them to a healthier space.

Overview

Total Brain is a mobile and web app that enables users to understand, measure, and track their mental health over time. The platform is centered around the assessment, which enables users to evaluate their strengths and weaknesses across four functions (comprising 12 capacities). After a user takes the assessment, they are given a customized training program that includes meditations, brain games, exercises, and more.


Role

Lead Product Designer 

 

Contributions

Product Design, User Flows, Wireframes, User Research and Testing, Iconography, Prototyping, Dev Handoff 

 

Type
Main feature for mobile, tablet, web app
(focus for this case study: mobile)


Tools
Sketch, Proto.io, Usertesting.com

Team
1 Senior Product Manager, 2 software engineers, 1 UX writer 

Timeline
6 months


 

// BY THE NUMBERS

2
USER TESTS

2
HIGH-FIDELITY ITERATIONS

4
LOW-FIDELITY ITERATIONS

8% est.*
HIGHER ENGAGEMENT

750,000
ACTIVE TOTAL BRAIN USERS

The Problem

As originally designed, the assessment had low completion rates and engagement. We knew from user feedback that people found the assessment long and cumbersome, lacked context around the tasks they were asked to complete, and found the instructions unclear.

We set the goal of increasing conversion.

The Solution

I designed a more accessible, usable flow built around user goals, letting the data and feedback from user testing guide our decisions.

We saw a 5% increase in conversion.

(more on this later)

Process Overview

[Image: process overview diagram]

// STAKEHOLDER SYNC

I met with our internal stakeholders: one product manager, one UX writer, two developers, the COO, and the head of design. We reviewed expectations around timelines and goals and aligned on a unified vision for moving forward. Our goals were to increase the completion rate of the assessment, increase retention, make it more user-friendly, and add strong context and understanding around what was being measured.

Gathering

I reviewed our existing research, old designs, and any user feedback we had about the current assessment. I found user fatigue (the assessment at the time took 20 minutes), confusion around instructions, and a lack of context.

I also looked at our existing personas, and thought about how I could make them more specific to the assessment.
 

User Personas

With the existing research and personas, I created two personas more specific to the assessment: Samuel and Marissa. Marissa seeks immediate relief and wants to regain control of her emotions, while Samuel seeks to uncover patterns and trends about himself and takes a longer-term approach to his self-improvement. Understanding and specifying these was important for designing effectively.

 

// USER TESTING

Testing Round  1

I wrote a script to test what was motivating users to come back to the assessment, what was making it difficult for them to complete it, and what they wished to change. Here are some sample questions and responses.

Verbatim
Is there anything else that would motivate and excite you to return to complete the assessment?

"If you were to show the capability for personalized content based around my metrics."
 
"Yes, if I was given a preview of what the assessment is going to test, and what is measured in these results."

"I would also like a reward of more content after completing various parts of the assessment."

"Reminders via email, and results that are promising."

Insights
While a testing environment is not always the most accurate measurement, it's great for getting a sense of direction on the basics. In our research, we found that users would be most motivated by a better understanding of the insights the assessment provides, content that unlocks as they progress, and personalized metrics. At the same time, we found that users felt overwhelmed by the amount of information they received during onboarding. I understood that this would be a balance of sharing information without overwhelming the user.
 

Stakeholder Sync

I consolidated our research findings into a deck for our stakeholders presenting my recommendations on how to move forward.
 

User Journey Mapping

 

Now that our research had given us some direction, I moved to what a user journey would look like. By journey mapping, I was able to concretely define the purpose of each step and align them with user goals.

[Images: assessment onboarding and assessment journey maps]

//SKETCH

The basis for my sketches came from existing user feedback. We wanted to account for our established user goals: (1) give the user more context around what was being assessed and why it mattered, (2) modularize the assessment to make it less cumbersome and tedious, and (3) make it easier to use and more accessible.

The main differences between the sketches were the breadth of information presented and the order of the flow and functions.

We ended up choosing two, and those sketches helped shape the next step: low-fidelity wireframes.

// LOW FIDELITY

Wireframes​

I started with low-fidelity wireframes in Sketch, making about three versions for onboarding and the assessment. The most pivotal screens are below (onboarding flow, assessment overview, function overview / task intros).


Overview of Flow, Low-Fidelity:


Sync Round 2

It was important to share these to gain clarity on how to move forward while also accounting for development, product road maps, adherence to existing design flows, and more.

// TESTING #2

A/B Testing

We tested two flows, Flow A and Flow B. Here they are with user feedback indicated in red and green.

Dark, colored background with graphics makes text hard to read

Flow A

There was a lot of information on this page. Users felt overwhelmed.


A second CTA leads users away from completing the assessment

Users enjoyed the illustration, especially as a relief from text 

Flow B


Users enjoyed the balance of text and illustrations, alongside an overview of what came next.

Clear, concise instructions over a light background / dark text was easier for users to read and understand.

Users were ushered into the next part of the assessment automatically, and were presented with a clear path forward.

I also started to bridge into a newer, modern styling.

A/B Testing Results

Users overwhelmingly preferred Flow B. We chose to move forward with a full, high-fidelity flow using this structure for the assessment.

// HIGH-FIDELITY

I designed high-fidelity wireframes in Sketch. Based on what we discovered in our A/B test, we moved forward with Flow B. Since there were 80+ screens, I've attached sample screen designs from each part of the flow below (onboarding, function introductions, task introductions / tasks).

Onboarding Samples:


Function Introduction Samples:


Task Introduction / Task Samples:


Full Flow Preview:


// DEV HANDOFF

Handoff​

I annotated the flows to explain behaviors, and walked our engineering team through the specs to make sure everything was clear. There was no need to build a prototype, since we adhered closely to our design system (which I had also assessed for accessibility), and all component behavior was documented and easily available.

Here are a few examples of information I notated:

  • error message handling

  • functionality if the user pauses, goes back, or exits

  • differences between the flow variants for the different populations the assessment serves

  • accessibility

  • Android and iOS differences

  • responsiveness across screen sizes

  • differences in functionality across web, mobile, and tablet

  • field usability

Here are examples of specifications I shared with our engineering team. 


Sample Video of Onboarding Flow

// LAUNCH + CONTINUOUS ITERATION

We launched the changes to the assessment in phases (0, 1, and 2). All changes shipped successfully, and we were all hands on deck for any needs and iterations. Iteration is continuous as we receive feedback from our users, and we took that feedback into account no matter how far out from launch we were. We were on a mission to build the best product we could for our users, and that is a never-ending process!

Impact

Since we released the project in phases, it was hard to nail down the exact percentage increase over time, but we found that the initial release drove a 5% higher engagement rate.

Reflections

While the design got a huge upgrade in accessibility, usability, education, and responsiveness to user feedback during testing, we didn't truly get to address the real barriers to higher engagement.

Use of Total Brain is often incentivized by a user's employer, and it wasn't within the project scope to address other factors that could have been more impactful, such as reminders, more hand-holding throughout onboarding and the assessment itself, and education for more context (i.e., answers to "Why should I do this?").

// FEATURED

Keeping Up with the Kardashians

Total Brain was featured on Keeping Up with the Kardashians when Khloé took the Total Brain assessment (which I designed!). Here is a clip from the episode; it starts at 2:45.

Explore Other Projects 
