
eComm Conversion Challenges

How mixed-method research influenced Product's strategy to better focus on the customer.

The details below are a synopsis of a large project.

I would love the opportunity to talk with you about this in greater detail and answer any questions you might have.

The launch of a revamped eCommerce experience resulted in a significant increase in incomplete reservations, raising concerns about user experience and interface design. Only 60% of site visitors made a reservation through the channel, and 40% of completed reservations were cancelled. Using a live-session monitoring tool, Product concluded these issues were caused by a flawed user interface and presented the Design team with design requests they believed would resolve them. As the Head of Experience Research, I asked for three weeks to conduct a mixed-method study to get to the root cause of the behaviors.

What follows is a year-long series of research and design projects, each strategically building on the previous, from the moment the opportunity was identified until the revised feature was released 12 months later.

Root Cause Analysis

Defining the opportunities

We began with a root cause analysis. Product's hypothesis focused on a flawed user interface making it difficult to complete the rental experience. Research and Design were not convinced, and Product agreed to a three-week research study. With so many potential places in the user experience for problems to arise, a variety of research methodologies was needed to determine why the channel was underperforming.

Data analysis

The work began with a thorough clickstream analysis to identify usage anti-patterns. Our analysis revealed elevated drop-off rates at specific stages, as well as pogo-sticking behaviors, indicating points of friction in the experience.
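To make the method concrete, here is a minimal sketch of the kind of analysis involved, written in Python with pandas. The file name, column names and funnel steps are hypothetical stand-ins, not the project's actual instrumentation.

```python
import pandas as pd

# Hypothetical clickstream export: one row per page view, with a
# session identifier, a timestamp, and the funnel step viewed.
events = pd.read_csv("clickstream.csv", parse_dates=["timestamp"])
events = events.sort_values(["session_id", "timestamp"])

# Drop-off: the share of sessions that reach each funnel step.
funnel = ["search", "select_equipment", "checkout", "confirmation"]
sessions = events["session_id"].nunique()
for step in funnel:
    reached = events.loc[events["step"] == step, "session_id"].nunique()
    print(f"{step}: {reached / sessions:.0%} of sessions")

# Pogo-sticking: an A -> B -> A sequence within a session, suggesting
# the user bounced back after not finding what they needed on B.
def pogo_count(steps: pd.Series) -> int:
    s = steps.tolist()
    return sum(1 for i in range(len(s) - 2) if s[i] == s[i + 2] != s[i + 1])

pogo = events.groupby("session_id")["step"].apply(pogo_count)
print(f"Sessions showing pogo-sticking: {(pogo > 0).mean():.0%}")
```

An analytics platform typically surfaces these numbers directly; the sketch simply makes the logic explicit: locate where sessions stall, then flag the back-and-forth patterns worth probing in user testing.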

It was also important to understand how usability factored into the equation. The experience was evaluated using the Single Usability Metric (SUM). The usability score not only revealed information about task completion time, error rates, and perceived ease of use, but also provided a baseline for measuring the impact of future designs.

Single Usability Metric (SUM) report of the current experience.
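For readers unfamiliar with SUM, it standardizes several usability measures and averages them into a single score. Below is a deliberately simplified three-component sketch (completion, satisfaction and time on task, after Sauro and Kindlund's method); every number in it is invented for illustration and none reflects the study's actual data.

```python
from statistics import NormalDist

z = NormalDist().cdf  # standard normal CDF

# Completion: observed proportion of successful task attempts.
completion = 21 / 30

# Satisfaction: share of users expected to rate ease above a 5.6
# target, given a 5.9 mean and 0.8 standard deviation (7-point scale).
satisfaction = z((5.9 - 5.6) / 0.8)

# Time on task: share of users expected to finish within a 240 s spec
# limit, given a 205 s mean and 60 s standard deviation. (Log-transformed
# times are more defensible in practice; raw values keep the sketch short.)
time_score = z((240 - 205) / 60)

# SUM is the average of the standardized component scores.
sum_score = (completion + satisfaction + time_score) / 3
print(f"SUM: {sum_score:.0%}")
```

A single score like this is what makes before/after comparison possible: the same tasks can be re-benchmarked against the baseline after a redesign, as happens later in this case study.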

Moderated user testing

Once we knew where the greatest points of friction occurred, we shifted our focus to determining why they occurred.

Through moderated user testing, we discovered a gap between our flow and what users expected. This resulted in the creation of the "elements of availability," which became the framework for all future design.

Map of the greatest points of friction as explained by users during moderated user testing

Employee interviews

Employees shared their perspectives on customer feedback and recurring issues. As the ones expected to fulfill online reservations, they were uniquely positioned to hear about issues a customer encountered during the online experience. Employees were already adept at determining how to make rental elements meet a customer's needs. The focus became clear: how can we create an online experience that simulates an interaction with an employee?

Diagram explaining the elements of availability and their impact on the experience

Several themes emerged as the facts and key findings from the various research methodologies were synthesized, including the following: 


  • Difficulty recovering from errors was the most urgent finding. Caused by miscommunications during front-end development, as well as breaks from UX/UI best practices, key points in the flow made it easy for a user to misstep, and unclear messaging made it difficult to correct course. This highlighted the need for clear communication in the interface and a more forgiving error-recovery system.


This is only one of several insights. Please reach out directly to learn about the other findings. 

Originally unsure of how much could be learned in a three-week timeline, the Product team was blown away by the findings and agreed with Research's message that the root cause of the issue went far deeper than the user interface. Backed by a breadth of data, Product & Technology confidently took a step back and focused on a game plan aimed at remedying the root causes. Several key outcomes were realized:

1

Prioritization of Technical Issues

Numerous technical issues were identified while watching users interact with the site, such as unresponsive clicks and broken links. Development on any new features was paused and resources were reallocated to fixing problems in the live environment. 

2

UI Best Practices

Focus was placed on making the existing experience more intuitive by fixing clear breaks from UI best practices. This included explicit error messaging, help text, improved hover states and other heuristic best practices. 

3

Rebuild the Reservation Flow

It was clear that the overall flow of the reservation process deviated from what the users were expecting. Each subsequent step in the flow caused confusion and added to frustration. Product, Technology and Design agreed to clear their calendars for a two-day workshop to rethink the flow. 

Design Workshop

Aligning on the future state strategy

With a shared view of where the key issues lay, it was time to figure out how to address them. A passionate group, the Product team had strong opinions about the experience to deliver for the customer. Simultaneously, Design was overwhelmed by the number of UX possibilities. I determined that a Design Workshop would be a good way to get everybody on the same page regarding primary users and design priorities.

It wasn't enough for the new experience to be well designed, it also had to be technologically feasible and financially viable. Product, Operations, Technology and Design were called together for a two-day workshop facilitated by the Research team. 


The overall purpose was to document a high-level experience, rooted in past research, that all hypothesized would please customers while also aligning with business objectives and being technologically feasible.

The Three Lenses of Innovation served as the cornerstone for the workshop.

Workshop Objectives

  1. Align the known facts and collective assumptions to create a view of the current user experience.

  2. Ideate on a future state experience through the lens of ‘desirability.’ What is the customers’ idea of an easy and useful experience?

  3. Review the new future-state experience through the lenses of feasibility and viability. Discuss what would need to be done to accommodate each idea.

  4. Incorporate the alternative ideas into a future-state experience. Ensure all are confident that the hypothesized experience can be implemented. 

A portion of the running order of events created to inform the workshop agenda

The workshop took place over two days, with day one focused on aligning the group on existing assumptions and the current experience. Once we had everyone singing from the same hymnal, focus switched to what we thought the future-state experience might be. Four activities were identified that built on this progression.

Activity 1

Assumption post-up

Product, Technology & Design had each formed their own ideas of the problems and what customers wanted. Given the quantity and variety, this activity set out to align the team on which assumptions were most reliable and which posed the greatest risks. These were prioritized and called on throughout the two days as a reminder of where to focus.

Activity 2

Mapping the current state

Before design could begin, there needed to be agreement on where the current experience was failing the user. The goal was to agree on the pain points and the impact they were having on users. Looking at pain points in the existing flow helped the team consider what the preferred experience might be once we reached future-state mapping.

Activity 3

Future State: Map for desirability

We had aligned on the users' wants and points of friction. The next step was to design the experience we hypothesized the user wanted, regardless of technology or business constraints. This was done independently by each person to limit groupthink.

Activity 4

Future State: The lenses of feasibility and viability

Next, the users' ideal experience was viewed through the lenses of what was possible. Technology and Product were able to think about how to meet users' needs while also keeping business and technology goals in mind. The team now had a user flow that everyone was confident could be built.

There were numerous successes at the end of the two days:


  1. A new-found respect across Design, Technology and Product with regard to how each group thought about user needs.

  2. A high-level flow centered on the users' mental model: that equipment availability should be shown as early as possible and throughout the experience.

  3. A commitment from all groups for continued partnership to deliver a user experience that everyone had a say in.


The next step was to begin designing the agreed-upon user experience. 

Parallel Design

Unifying design ideas

Following the workshop, it became clear there were multiple ways to provide equipment availability throughout the reservation flow. Three alternative designs were therefore created in parallel and tested with users.

 

The successful elements of each design were then merged into a unified design.

Three possible designs were created for the first round of parallel design testing.

Overall, testing revealed highly positive reactions to the broad experience-design strategy. This gave the team much-needed confidence that they were headed in a direction that was right for the user. Additionally, several design principles emerged that formed the backbone of future pages.

  • Users appreciated the ability to adjust rentals in real time to their individual circumstances.

  • Users wanted to easily change dates, times and locations without navigating away from the current page.

  • Clear messaging about why equipment was unavailable, and how to remedy it, was highly valued.

The team proceeded to flesh out the designs further, including the secondary pages, with confidence.

Iterative Design Testing

Continued improvement

As the team set about designing secondary pages, every new component of the experience went through at least one round of user testing.  

ROUND 1

Wireframes were used for exploratory research. Remote moderated user testing was chosen because it allowed us to delve deeper into "why" consumers felt the way they did about the design. We discovered that while users understood the overall design concept, several of the components, such as the use of a drawer to display alternative equipment, did not sit well with them. They wanted all the information needed to make a decision on one page.


ROUND 4

With round 4, we began conducting user testing with mid-fidelity wireframes that included small amounts of functionality. We continued to run remote moderated user tests because we could answer any questions participants had in the moment, guide the tester past missing or non-functioning features and refocus the session on the features of interest. Many of the learnings at this point focused the team on the content strategy over UI. How could we clearly communicate availability and errors to minimize the confusion uncovered in the original experience?

ROUND 8

As the experience solidified, the fidelity increased and unmoderated usability testing was conducted. The goal was to test the end-to-end experience: could a user get through the entire rental process? We wanted to understand how easy or hard the experience was and identify possible points of friction that impeded a successful reservation. Error recovery had been a significant cause of dissatisfaction in the previous experience, so seeing how well people could recover from the unexpected was of chief importance. In the end, perceived ease of use skyrocketed.

In total, over the course of a year, 12 rounds of user testing were conducted. The iterative nature of the testing enabled Design to address usability issues and test new, revised designs throughout the design process. Early issue identification saved time, effort and resources. More importantly, it increased Product's confidence that the launch of this experience would not face the same decline in site performance encountered with the previous website release.

Performance Benchmarking

Proving value through quantitative measurements

Recall that during the root cause analysis, almost a year earlier, Single Usability Metric (SUM) benchmarking had been conducted to establish baselines for time on task, task completion and perceived ease of use. With the newly designed experience in a staging environment, the SUM was collected again.


The updated experience saw a 9-point improvement in the overall SUM score. 


We also took the opportunity to break the rental task down into two parts: finding the item to rent and completing the checkout process. This deeper look revealed that the process of searching for and selecting the item to rent posed the greatest opportunity to improve the overall time on task.


The SUM benchmark has since been adopted as the primary metric by which to measure success. Five additional tasks within the digital experience now have SUM baselines.


What's more, as we continue to evolve this experience, the SUM will serve as a barometer for Design's impact.

Summary

Solid learnings served as the foundation for monumental change

Within weeks of launch, the website showed a 39% increase in funnel conversion. 


For me, this project was the first time in my career I witnessed true partnership and alignment between Product, Design and Technology. Early recognition of the root causes was key to creating this synergy, and completing that work in a short, three-week window showed all teams that Research was capable of delivering solid learnings in a reasonable timeframe.


Because of this, all parties went into the workshop events ready to participate and move the experience forward. It is also why Product and Technology were willing to pause development to allow time for thorough usability testing throughout the design lifecycle.
