Every millisecond matters: Performance monitoring in the instant gratification era
Five billion dollars. That’s roughly what a split-second delay in load time would cost Amazon today. Way back in 2007, the company discovered that just 100 milliseconds of latency cost an eye-watering 1% in sales—and customers have only become more impatient since then. A more recent study found:
A 100 millisecond delay in webpage load time can hurt conversion rates by 7%
A two-second delay in load time increases bounce rates by 103%
53% of mobile site visitors will leave a page that takes longer than three seconds to load
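To put the 100-millisecond figure in concrete terms, the rule of thumb can be expressed as a naive linear model. The function below is purely illustrative—the linear scaling and the revenue figure are assumptions for the sketch, not numbers from the studies cited above:

```python
def estimated_revenue_loss(annual_revenue, added_latency_ms,
                           loss_pct_per_100ms=0.01):
    """Naive linear model: each 100 ms of added latency costs
    `loss_pct_per_100ms` of annual revenue (illustrative only)."""
    return annual_revenue * loss_pct_per_100ms * (added_latency_ms / 100)

# Illustrative: ~$500B in annual revenue, 100 ms of extra latency
loss = estimated_revenue_loss(500e9, 100)
print(f"${loss / 1e9:.1f}B")  # -> $5.0B
```

Real latency-revenue curves are rarely linear—the point of the sketch is simply that at this scale, fractions of a second translate into billions.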
This probably doesn’t come as a surprise. Companies have spent years working to perfect a seamless—and speedy—customer experience. This typically requires a tangled web of analytics tools to measure and optimize different functions across several teams. Real user monitoring, or RUM, has become the standard for monitoring technical performance, enabling developers to keep a close watch on latency and other events that could cause friction for users.
The problem is, RUM was only built to record and report on the technical events behind individual sessions. It can’t then connect those events to other aspects of the customer journey, including conversion. This makes it impossible to measure any kind of technical performance against actual revenue—a risky blind spot, considering how much a glitch or delay could potentially cost.
Flagging errors isn’t enough—organizations also need the ability to correlate technical issues with their impact on customer experience, conversion, retention and revenue. It sounds like a tall order, but it's very much possible—so long as you have the right tools.
We've coined the term 'real user experience' or RUX to describe a more business-centric approach to digital application performance management. An evolution from standalone RUM, RUX combines technical performance analytics with digital experience analytics. In this post, we’re taking a closer look at how forward-thinking companies can use RUX to meet customer needs faster than their competitors.
How speed impacts revenue
Before we dive into RUX, let’s take a quick look at its predecessor. RUM technically emerged in the early aughts but gained popularity in the 2010s, as a surge of Single Page Applications (SPAs) complicated browser experiences for users.
More companies also began to realize that speed impacts revenue. Amazon’s infamous latency discovery made them a pioneer of sorts, but they weren’t the first to experiment with load times. In 2006, Google uncovered similarly profound insights: every 100 milliseconds of lag cost them 1% in revenue, while a half-second delay caused search engine traffic to plummet by 20%.
It’s no surprise that Amazon and Google both went on to heavily invest in and eventually dominate the cloud infrastructure market—AWS and Google Cloud collectively hold 45% of cloud market share. Nowadays, however, it’s not just tech behemoths placing serious stock in speed. Customer expectations for increasingly fast and effortless experiences have made technical performance a non-negotiable:
53% of website visitors will abandon any site that takes longer than three seconds to load.
A site that loads in one second has a 3X higher conversion rate than one that loads in five seconds, and a 5X higher rate than a site that loads in ten seconds.
An e-commerce site that loads in one second has a 2.5X higher conversion rate than one that loads in five seconds.
A Deloitte study found that increasing mobile site speed by 0.1 seconds boosted retail conversions by 8.4% and average order value by 9.2%.
The bounce rate increases by 123% if a page takes longer than one second to load.
90% of users have stopped using an app due to poor performance.
Companies have responded with steep investments in analytics tools so they can measure, optimize and improve user interactions. Nearly 88% of organizations increased their data investments in 2022, exploding the big data analytics (BDA) market size to $272 billion. And yet—while teams have the capability to refine an individual session down to the millisecond—customers still aren’t happy.
In 2022, 49% of consumers abandoned a brand due to poor customer experience.
According to AWS, e-commerce businesses miss out on 35% of sales due to poor user experiences—worth roughly $1.4 trillion per year.
Slow-loading websites are costing business owners $6.8 billion per year.
It’s time to get real. Teams may be swimming in analytics technology, but there’s still a major gap between fancy tech stacks and what customers are actually experiencing on the other side.
Limitations of RUM
Like most analytics tools, RUM enables deep insights, but only in one particular area. It’s certainly capable of compelling technical insights, but it doesn’t help form a complete picture of the customer journey—at least not independently or at scale. This presents some serious challenges for teams still struggling to keep up with customers’ demands:
1. Siloed environment
Delivering lightning-fast, frictionless customer experiences requires wildly different contributions across multiple teams, who each use their own analytics tools to measure, analyze and improve performance. CXMs may map out user journeys with web or mobile analytics solutions, while product teams leverage heatmaps and session replays and developers turn to their trusty application performance monitoring (APM) solution for RUM data.
The problem with this approach is that each team holds a different piece of the puzzle, with no single, unified view of the overall customer experience. Even if you can technically integrate various analytics tools, it’s not realistic to expect teams to be equally familiar, proficient or even comfortable with each one. APMs, for example, have a notoriously steep learning curve, which makes it unlikely that marketers would start poking around in error logs.
As a result, teams often wind up operating—and making critical decisions—with a limited understanding of how different technical events and interactions may impact one another... or even the bottom line. Engineers, for example, can only see what is happening—technically speaking—but not why.
“Engineering teams are constantly pressured to identify, prioritize and resolve issues. But many are still working in siloes or boxed into error monitoring,” says Liran Tal, business lead, performance analytics at Glassbox. “However, some errors are more damaging than others. If engineers spot an error in the logs, there’s usually no way to tell if it actually impacts the user experience. Their ability to effectively prioritize is extremely limited, so they need to make an educated guess. But guesswork usually delays the resolution, and—in a live environment—your business will continue losing revenue until the problem is solved.”
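The prioritization problem Liran describes can be sketched in a few lines: rank errors by the value stranded in sessions that hit them and failed to convert. Everything below—the event shape, error IDs and cart values—is hypothetical illustration, not a Glassbox API:

```python
from collections import defaultdict

# Hypothetical event records: (error_id, session_id, session_converted, cart_value)
events = [
    ("ERR_TIMEOUT", "s1", False, 120.0),
    ("ERR_TIMEOUT", "s2", False, 80.0),
    ("ERR_404_IMG", "s3", True, 45.0),
    ("ERR_404_IMG", "s4", True, 60.0),
    ("ERR_TIMEOUT", "s5", True, 30.0),
]

def rank_errors_by_impact(events):
    """Rank errors by cart value stranded in sessions that saw the
    error and did not convert—a rough proxy for business impact."""
    impact = defaultdict(float)
    for error_id, _session_id, converted, cart_value in events:
        if not converted:
            impact[error_id] += cart_value
    return sorted(impact.items(), key=lambda kv: kv[1], reverse=True)

print(rank_errors_by_impact(events))  # ERR_TIMEOUT tops the list: $200 stranded
```

In this toy data, ERR_404_IMG fires more visibly in the logs but never blocks a purchase, while ERR_TIMEOUT quietly strands $200 in abandoned carts—exactly the distinction raw error counts can’t make.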
2. No context
RUM records technical events in sequence, but it can’t then determine the consequences or outcome of performance at different stages of the customer journey. This makes it impossible for engineers to measure errors against revenue.
“Traditional analytics tools show you what customers are doing on your website or mobile app, but you’re basically clueless as to why they’re behaving in a certain way. Context is important. Otherwise, it’s difficult to interpret and act on your observations,” says Liran.
3. Technical bloat
The median web page loaded 22 different scripts in 2022.
The average enterprise organization has 400 data sources.
The thing is, customers don’t know or care that your site is loading 0.2 seconds slower because of all the tools you’ve invested in to help improve their experience. All they know is that they don’t have all day to sit around and wait. “Web pages don’t have loading bars. So when the page is slow, the visitor doesn’t know if the delay will be another 500 milliseconds or 15 seconds. Maybe it will never load. And the back button is right there,” says Andy Crestodina, CMO and co-founder of Orbit Media.
Advantages of focusing on real user experience
Meeting customer expectations for a fast and effortless experience requires multiple teams to effectively measure, analyze and optimize performance. This means go-to-market, product, support and engineering teams need a holistic view of the entire customer journey to get the job done. In bridging web and performance analytics with digital experience intelligence, RUX doesn’t just document technical events, but actually reveals how and why their performance impacts the overall experience—and bottom line. This provides some serious advantages:
1. More clarity for engineers
RUX affords greater visibility than a typical error log, enabling engineers to see which issues actually cause a change in user behavior or impact revenue. For example, let’s say an online shopper adds a product to their cart, but when they navigate to the checkout page, a technical glitch delays the loading time. The customer gets annoyed and leaves, ending the session without a conversion.
With RUM, engineers can only see where this error occurred in relation to a sequence of technical events on the checkout page. RUX, on the other hand, contextualizes these events with user actions or behavioral data leading up to and immediately after the glitch, determining how much revenue was consequently lost. RUX represents a complete shift for engineering teams, where front-end errors aren’t just defined by technical attributes, but also a quantifiable cost.
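A minimal sketch of the kind of correlation described above: compare conversion rates for sessions that hit the checkout glitch against clean sessions, then price the gap at an assumed average order value. All session records and figures here are hypothetical:

```python
# Hypothetical session records: (saw_checkout_glitch, converted, order_value)
sessions = [
    (True, False, 0.0), (True, False, 0.0), (True, True, 50.0), (True, False, 0.0),
    (False, True, 60.0), (False, True, 40.0), (False, False, 0.0), (False, True, 55.0),
]

def glitch_revenue_impact(sessions, avg_order_value=50.0):
    """Compare conversion rates for sessions with vs. without the glitch,
    then price the gap at an assumed average order value."""
    hit = [s for s in sessions if s[0]]
    clean = [s for s in sessions if not s[0]]
    rate_hit = sum(1 for s in hit if s[1]) / len(hit)
    rate_clean = sum(1 for s in clean if s[1]) / len(clean)
    lost_conversions = (rate_clean - rate_hit) * len(hit)
    return lost_conversions * avg_order_value

print(f"${glitch_revenue_impact(sessions):.2f} estimated revenue lost")  # -> $100.00
```

In this toy data, glitch-affected sessions convert at 25% versus 75% for clean ones, so the error carries an estimated $100 price tag—a number an engineer can actually use to triage.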
2. Faster technical performance insights for GTM and product teams
RUM is typically managed through complex APM solutions, which means nuanced technical insights are often out of reach for GTM and product teams—even when errors have serious repercussions for their own KPIs.
RUX pulls web and app performance analytics and digital experience analytics together, creating a centralized view of the customer journey: specifically, how users are behaving at different stages, and what technical events are simultaneously occurring in the background.
Technical performance data isn’t siloed in an engineering bubble, but naturally baked into regular product and conversion optimization efforts. For marketers, that could mean leveraging web performance analytics in their conversion rate optimization (CRO) process to pinpoint sources of friction stemming from technical errors, while CXMs can run ad hoc “tech checks” to quickly determine if there’s a glitch behind drop-offs at a particular stage in the customer journey.
3. Greater efficiency
RUX really steps up to the plate when organizations need to respond quickly to changes in the market. Engineering teams can easily locate which errors have the greatest business impact, freeing up valuable time for troubleshooting. At the same time, marketers, CXMs and product managers can identify technical issues themselves, instead of waiting on engineers to find conversion-related errors by process of elimination or guesswork. Most importantly, RUX enables teams to replace disparate or duct-taped data sources with a robust, centralized view of the entire customer journey.
Customer demands for warp-speed experiences—and their willingness to immediately abandon any brand that doesn’t deliver—have changed the rules of the game. Technical performance is inseparable from revenue, and can’t be treated like an isolated engineering problem. It needs to be consistently measured, monitored and optimized as an integrated and critical component of the customer journey, just like other KPIs.
For all its capabilities, RUM still can’t supply the context and visibility necessary to effectively measure, analyze and optimize customer experiences. RUX isn’t so much a replacement for RUM as it is a means of drastically expanding its capabilities—and impact. Its real superpower is weaving technical performance and digital experience analytics together to form a 360-degree view of the customer journey, so teams can move and improve faster.
Watch the on-demand webinar The Split Second Effect: Performance Analytics Deep Dive to learn more.