Testing a streaming service’s onboarding to improve conversion and experience
Major TV Streaming Service / UNDER NDA
Role: Lead UX Researcher
Reading Time: 3 mins
Challenge
IDENTIFY UNMET NEEDS AND PAIN POINTS OF USERS SUBSCRIBING TO THE PLATFORM
An online streaming service approached us while updating one of its add-on subscription offers. The client wanted to understand the unmet needs and pain points of users subscribing to the platform and purchasing the current add-on across multiple devices, including how well users understood the current subscription packages. These insights would help improve the experience for users and increase conversion for both subscriptions and the add-on service.
The result
Preliminary analysis workshop
After stakeholders had viewed the sessions in person, I facilitated a preliminary analysis workshop with them to discuss their observations, prioritise the findings and reflect on the challenges of the testing before writing and presenting a report.
Presented insight report to WIDER TEAM
I remotely presented prioritised findings and design recommendations to cross-functional teams across the organisation and delivered a written report of all findings, including a barrier that prevented users from purchasing the add-on subscription after browsing the streaming service's content.
The full story
The majority of the client's product teams are siloed: user research is often conducted in isolation and insights aren't shared effectively across the organisation. This has resulted in an inconsistent experience across devices, especially during onboarding.
When deciding the approach to the usability testing during alignment meetings, the key stakeholder and I weighed up the benefits of testing the live journeys rather than a prototype. We chose the live journeys so that users wouldn't be limited to a prototype's happy path, which would have restricted our findings.
Tailoring the discussion guide
Before crafting the testing script, and in addition to alignment meetings with key stakeholders, I ran a short stakeholder workshop to identify the client's assumptions, questions and any known issues. Stakeholders annotated these on a Miro board for each screen of the onboarding journey on each device. This allowed me to tailor the discussion guide to the client's needs and explore key areas.
For each project, a shared working board was created with the client in Miro. I drafted the discussion guide in this shared board, which allowed the client to give feedback on each iteration before it was finalised for testing.
User testing the live journeys
I conducted in-person usability tests with 15 participants, evaluating the live journeys across four devices, including an Amazon Firestick, an Xbox and a mobile phone. Over three days, multiple stakeholders from various product teams visited our London lab to observe the sessions first-hand or watched remotely via livestream. For some, it was the first time they had worked together directly, which created buy-in to the project and confidence in its outputs.
The live stream also allowed observers, both in person and remote, to send me additional questions whenever they wished to dig a little deeper.
Full session notes were documented in a closed Miro board by a supporting consultant, while observers had access to a shared working board with an identical feedback section. This allowed them to record key observations and questions, which could be referenced during daily debriefs and future analysis.
Preliminary analysis
After the sessions were complete, I facilitated a preliminary analysis 'debrief' workshop with stakeholders to align on their observations and thoughts and to discuss the challenges faced during testing before writing and presenting the report.
Unfortunately, the debrief was held a week after the sessions due to stakeholder commitments. This delay impacted the clarity and effectiveness of sharing thoughts and observations.
Ideally, debrief sessions should take place shortly after testing to maintain momentum and ensure clear context. Setting expectations and preparing stakeholders in advance can also improve engagement and the overall quality of the discussions.
DELIVERED
Presented an INSIGHT report to stakeholders
I delivered a one-hour presentation on prioritised insights and design recommendations to an audience of over 20 stakeholders from various product teams across the organisation, followed by a short Q&A to clarify any findings.
Unfortunately, we haven’t yet gathered any quantifiable metrics on the impact of this study.
The following insights aimed to help the product teams improve the experience for users and increase subscription conversion, in particular for the new add-on.
Some of the key findings were:
Because we tested the live journeys across different devices rather than prototypes, we identified a critical error on Xbox that prevented users from purchasing a subscription until they performed a hard reboot and reinstalled the app.
Copy on the mobile subscription offers page confused participants: they didn't understand what was included in each subscription, or why some offers they perceived to include fewer benefits were more expensive than others.
The majority of participants were surprised that the current add-on's features were not already included in the base subscription tier, and didn't feel the add-on was good value for money.
Key takeaways
Green-light login and payment methods before testing
Prior to testing, the client and I spent half a day creating dummy credentials and payment details for each scenario so that participants could purchase subscriptions during testing. Unfortunately, these were sometimes blocked by third-party services due to suspected suspicious activity. This created additional friction for participants, affected the testing and was added as a key consideration in the report.
Live journeys come with updates
I raised the risk of live updates during the stakeholder workshop and suggested pausing any updates to the specific journeys to avoid inconsistent findings, which the client agreed to. Unfortunately, during the three days of testing the journeys still received multiple updates, which affected questioning and required flexibility and quick decision-making during moderation to deliver balanced, consistent findings.