Why and When to Experiment

Experimentation has become a critical tool for modern product teams striving to deliver user-centered, data-informed experiences. Far beyond a simple A/B test, a well-run experiment helps teams validate assumptions, uncover user preferences, and reduce risk before scaling new ideas. In this post, we'll explore the key benefits experimentation brings to product development, the risks to watch out for, and an overview of the process designers and cross-functional teams should prepare for when launching experimentation programs.

Benefits 

Confident decision-making:

When the goal is to change a metric rather than to ship features, we can focus on enhancing the user experience. A data-driven approach to developing marketing and product ideas helps the team make more confident decisions. Experimentation gives us data, grounded in actual user behavior and journeys, to back up the choices we make in the design process.

Boosting the culture of innovation and creativity:

Experimentation instills a culture of curiosity and risk-taking. A/B testing acts as a safety net, helping the team think outside the box by reducing the fear of failure. Even an unsuccessful experiment expands our understanding of the user. This feedback loop encourages the team to keep proposing and testing bold ideas.

Increasing efficiency:

The experimentation process lets the team explore multiple possibilities before committing to a final experience. Unlike a linear process, it validates several potential paths in parallel, making it a more efficient way to converge on the optimal user experience.

Reducing risks & testing the boundaries of user tolerance:

Testing new experiences on a small scale before launching them to all users helps prevent big setbacks. When dealing with bold ideas, experimenting with limited exposure reduces the potential negative impact on metrics. The results can also help us identify patterns in user tolerance, informing our UX guidelines.

 

Risks

Misinterpretation of results:

The wrong conclusion might be drawn from an experiment if the research methodology and statistics are not validated by experts. Having researchers and analysts who can help you interpret behaviors and results is crucial to finalizing the learnings.

Bias:

If the team is not aware of potential biases (for example, novelty effects or a non-representative user sample), the experiment might lead to inaccurate conclusions.

Missing the big picture:

Focusing only on optimizing for short-term impact can result in an inconsistent product experience that does not align with UX standards or the developer data platform vision.

 

When should we use experimentation?

Before adopting experimentation, there are a few considerations that determine whether it is the right methodology for your project, team, and time frame. Running experiments requires a highly collaborative culture among engineers, analysts, product managers, and designers/content producers. The team can decide based on the following criteria:

  • Your product/feature needs sufficient user traffic to adopt experimentation as a tool.
    To achieve statistically significant results, each experiment variant requires a large number of users.
    When working with only a few clients or when your targeted user segment is small, it's better to use other product development methods (a rough sizing sketch follows this list).

    This problem is more common in B2B products, which by nature have lower user traffic.

  • To use experimentation as a methodology, you need to commit to a few weeks or months of waiting and monitoring performance. For a one-off, fast-paced release cycle, this method cannot serve its purpose and you risk calling a test prematurely. The first few days of user interaction with a new feature or design do not indicate what users will do in the long run; behavior often changes after a while. Depending on the target metric, the analytics team can define the minimum time each variant needs to run before we can call the result.

  • Assess the feasibility of sustaining the project across various iterations throughout the testing phase. Ensure that the engineering and support expenses align with the desired outcomes.

  • When the product is at the growth stage, we can dedicate resources to testing the next move. Starting experimentation prematurely prevents us from collecting the necessary baseline metrics on the original iteration's performance and takes away the chance to develop business-backed features. Keeping an eye on ROI might also rule out pouring resources into developing multiple variants.

  • Experimentation should be used when the UXR, product, and design teams understand users well enough to formulate an informed hypothesis, yet are not confident in the assumptions they hold about user behavior. Experimentation does not replace user research; it is best used when considering multiple paths or when qualitative research fails to provide a definitive answer to a hypothesis.

    User research can serve as a catalyst and help fill in gaps in pre-experimentation by answering “The Why”. It can also be very powerful post-analysis to validate or counter any surprises that might come up during experimentation.

  • Some changes are not big enough to pour your resources into testing. Bugs and obvious flaws do not need to go through experimentation to be launched. For small changes, move forward with a traditional launch.

    When considering a completely new feature, consider a beta release or bias-to-ship method, which is similar to experimentation but where the final decision does not depend on a positive outcome.
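
To make the traffic and duration considerations above more concrete, here is a minimal sketch of estimating the required sample size and run time for a conversion-rate experiment. The baseline rate, expected lift, and weekly traffic are illustrative assumptions, not figures from a real product; in practice the analytics team or the experimentation platform would own this calculation.

    # Minimal sketch: sizing a two-variant conversion experiment with statsmodels.
    # All numbers below are illustrative assumptions, not real product figures.
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    baseline_rate = 0.10   # assumed current conversion rate (control)
    expected_rate = 0.12   # assumed rate if the variant works (+2 percentage points)

    effect_size = proportion_effectsize(baseline_rate, expected_rate)
    n_per_variant = NormalIndPower().solve_power(
        effect_size=effect_size,
        alpha=0.05,   # significance level
        power=0.80,   # chance of detecting the lift if it is real
        ratio=1.0,    # equal split between control and variant
    )
    print(f"Users needed per variant: {n_per_variant:,.0f}")

    # With an assumed 5,000 eligible users entering each variant per week,
    # the minimum run time before calling the result is roughly:
    weekly_users_per_variant = 5_000
    print(f"Estimated duration: {n_per_variant / weekly_users_per_variant:.1f} weeks")

If the estimated duration is longer than the team can commit to, that is a signal to choose one of the other product development methods mentioned above.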

 

Experimentation process:

01. Foundational UX research 

UX research can provide valuable insights and data that can inform and guide the experimentation process. It can be used to identify user needs/behaviors/pain points, illuminate gaps and opportunities within the user experience, validate assumptions or hypotheses, and ultimately provide a basis and directional focus for experimentation. Insights from UX research can ensure that design experiments stay grounded in user-centric insights, providing a solid foundation for designers to make data-driven decisions during experimentation. Additionally, it can help fuel creativity and ideation, helping bring to life a range of potential solutions. 

02. Define the metrics to improve + create a hypothesis (problem/opportunity)

Every experiment must start with a hypothesis to validate and defined metrics to track. The product team can lead this conversation in collaboration with the analytics team so the impact of the designs can be measured at the end of the process.
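
One way to capture the output of this step is a short experiment brief that pairs the hypothesis with its primary and guardrail metrics. The sketch below is a hypothetical example; every field value is a placeholder the team would fill in together.

    # Minimal sketch of an experiment brief captured as structured data.
    # All field values are hypothetical placeholders, not real product metrics.
    experiment_brief = {
        "hypothesis": "Surfacing pricing earlier in onboarding will increase "
                      "free-to-paid conversion without hurting signup completion.",
        "primary_metric": "free_to_paid_conversion_rate",
        "guardrail_metrics": ["signup_completion_rate", "support_tickets_per_user"],
        "target_lift": 0.02,          # +2 percentage points, an assumption
        "minimum_run_time_weeks": 4,  # to be confirmed by the analytics team
    }
    print(experiment_brief["hypothesis"])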

03. Define the MVP scope for testing 

Aim to test a Minimum Viable Experiment. Break the hypothesis down to the smallest possible assumption that impacts the user's experience. Stay focused on validating the hypothesis, isolating the experiment from the full product experience. Adding features or changing parts of the user experience that won't help validate the hypothesis can contaminate the results and will require more resources to design, build, and analyze.

04. Design for your business hypothesis

Your job as a designer is to design for testing the hypothesis. Create an experience that tests the business hypothesis without sacrificing a seamless user experience. Don't seek perfection in your design process when it comes to experimentation, since every change you make can affect the experiment's results. Anomalies and edge cases can be addressed in the wrap-up phase of the experiment. When executing the ultimate design is difficult within the scope, take notes for the wrap-up plan. (This means that if the experiment is a winner and the decision is to launch it to all users, you will dedicate resources to implement the missing interactions and experience refinements.)

05. Development & Pre-analytics

Support the engineering team in this phase to develop the UI as designed. Work with them to walk through the flow differences between the control and the variant; this will help define the data tracking and the development work (a sample tracking-plan sketch follows).
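
As a hedged example of what the pre-analytics output of this step could look like, the sketch below defines a shared tracking plan for the control and variant flows. The event names, properties, and the log_event helper are hypothetical placeholders; the real event schema and analytics SDK calls should be agreed with the analytics and engineering teams.

    # Minimal sketch of a shared tracking plan for an experiment.
    # Event names, properties, and the helper below are hypothetical placeholders.
    TRACKING_PLAN = {
        "experiment_exposure": {
            "properties": ["experiment_id", "variant", "user_id"],
            "fired_when": "the user first sees the control or variant UI",
        },
        "primary_cta_clicked": {
            "properties": ["experiment_id", "variant", "cta_location"],
            "fired_when": "the user clicks the call to action under test",
        },
        "flow_completed": {
            "properties": ["experiment_id", "variant"],
            "fired_when": "the user completes the flow (the target metric)",
        },
    }

    def log_event(name: str, **properties) -> None:
        """Stand-in for the real analytics call (e.g. an Amplitude SDK method)."""
        assert name in TRACKING_PLAN, f"Event not in the tracking plan: {name}"
        print(name, properties)

    # Example: firing the exposure event when the variant renders.
    log_event("experiment_exposure", experiment_id="exp_onboarding_cta",
              variant="treatment", user_id="u_123")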

06. Launch

Make sure to QA the code and activate the experiment in the admin panel. Work with the development team to address any misalignment with the original design before launching the experiment.
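
When the experiment is activated, the platform typically splits traffic by hashing each user into a bucket so the same user always sees the same variant. The sketch below illustrates that idea under assumed names and a 50/50 split; the actual assignment logic lives inside whichever experimentation platform the team uses.

    # Minimal sketch of deterministic user bucketing for a 50/50 split.
    # Function and identifier names are illustrative, not a real platform API.
    import hashlib

    def assign_variant(user_id: str, experiment_id: str) -> str:
        """Hash user + experiment so assignment is stable across sessions."""
        digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100
        return "treatment" if bucket < 50 else "control"

    print(assign_variant("u_123", "exp_onboarding_cta"))  # always the same result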

07. Monitor

Keep an eye on the metrics that come in on the experimentation platform or Amplitude to catch anomalies or surprises. Do not judge the performance of an experience in the early days, but if a crucial metric is taking a hit, especially a monetization-related one, the team might consider stopping the experiment to prevent a negative business impact.
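
One lightweight way to formalize that "crucial metric taking a hit" check is a simple guardrail threshold, sketched below with made-up numbers; real monitoring would normally come from the experimentation platform's own alerts rather than an ad hoc script.

    # Minimal sketch of a guardrail check during monitoring.
    # Metric values and the 5% tolerance are illustrative assumptions.
    def guardrail_breached(control_value: float, variant_value: float,
                           max_relative_drop: float = 0.05) -> bool:
        """Flag the variant if a guardrail metric drops more than the tolerance."""
        relative_change = (variant_value - control_value) / control_value
        return relative_change < -max_relative_drop

    # Example: revenue per user in control vs. variant (assumed numbers).
    if guardrail_breached(control_value=4.20, variant_value=3.80):
        print("Guardrail breached: consider pausing the experiment.")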

08. Analyze the insights

The dedicated analytics team will lead this phase by reviewing the detailed data and comparing the variant and control numbers. The report will include a suggestion for the next steps and a detailed overview of the tracked interactions.
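
For a sense of what that comparison involves, here is a minimal sketch of a two-proportion z-test on conversion counts. The exposure and conversion counts are invented for illustration; the analytics team's tooling or the experimentation platform would normally run this analysis, along with checks this sketch omits.

    # Minimal sketch: comparing control vs. variant conversion with a z-test.
    # The exposure and conversion counts below are invented for illustration.
    from statsmodels.stats.proportion import proportions_ztest

    conversions = [412, 468]      # control, variant
    exposures = [5_000, 5_010]    # users who saw each experience

    z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
    control_rate, variant_rate = (c / n for c, n in zip(conversions, exposures))

    print(f"Control: {control_rate:.2%}  Variant: {variant_rate:.2%}  p-value: {p_value:.3f}")
    if p_value < 0.05:
        print("The difference is statistically significant at the 5% level.")
    else:
        print("No significant difference; consider iterating or archiving.")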

09. Decide on the next steps:

After analyzing the results, the next steps can be to iterate on, archive, or productionize the experiment. It's also the time to run usability tests, conduct additional stakeholder and cross-team validation, or hold a qualitative follow-up session to gain a deeper understanding of the users' reactions and understanding.
