Real User Monitoring (RUM): The Complete Guide

In this guide we’ll break down what real user monitoring is, how it works, why businesses use it, the limitations of real user monitoring as a standalone tool and more.

Real User Monitoring

A Comprehensive Guide to Real User Monitoring

It doesn’t matter if you’re marketing enterprise accounting software, airline tickets or canine enrichment puzzles. Customers expect a seamless digital experience when they interact with your brand. Every. Single. Time. That means even a solitary source of friction can compromise your bottom line. Think about it:

  • 80% of B2B purchase decisions are based on direct or indirect customer experiences, while only 20% are based on actual product/service or price.

  • 61% of U.S. consumers surveyed said they would stop engaging with brands that offer frustrating experiences.

  • 12% of users would go as far as to warn others against using a site or app if it doesn’t perform well.

Okay fine, we get it. A quality digital experience is important—obviously. What’s less obvious, however, is what you may need to fix or fine-tune in order to provide one. A single digital session generates around 1,000 technical events on average, each with the potential to smooth out the path to purchase—or stand in the way. So what are you supposed to do? How do you isolate sources of friction in the vast, complex and nonlinear sequence of interactions that make up a typical customer journey?

You’ll need to get a lot closer than a bird’s eye view to effectively assess how customers are experiencing your website or app. Otherwise, you’re just optimizing with assumptions and guesswork. 🤷‍♀️

The remedy? Real user monitoring—or RUM—is a type of monitoring technology that enables teams to measure and analyze website and app performance based on real user interactions. This data can uncover critical insights to vastly improve the overall digital experience for your customers—so long as you know how to leverage it. In this handy guide, we’re doing a deep dive into RUM and covering all the foundations you’ll need to get started:

  • What is real user monitoring?

  • Benefits of RUM

  • Examples of RUM

  • How does real user monitoring work?

  • Real user monitoring vs. synthetic monitoring

  • Limitations of real user monitoring

  • RUM best practices

  • RUM tools

  • FAQs

Let’s jump in!

What is real user monitoring?

Real user monitoring (RUM) records how users interact with a website or native mobile app to help assess performance, functionality, reachability and responsiveness. As a “passive” monitoring technology, RUM collects data in the background, without impacting operations. It may also be referred to as:

  • Real user measurement

  • Real user metrics

  • End-user experience monitoring

A key component of digital experience monitoring, RUM is commonly used by developers to detect errors and glitches. However, it can also generate invaluable insights for marketers looking to decrease sources of friction in the customer journey.

3 main benefits of RUM

RUM breaks down user interactions into a series of technical events, providing deeper context to broader performance metrics (like average view time or bounce rate). This helps you form a more thorough understanding of how users are experiencing your website or app, so you can:

1. Boost conversions

Correcting low conversion rates is both an art and a science, but standard tools like Google Analytics only offer a high-level overview to aid in the process. Sure, you know X number of users bounced from your landing page after 0.2 seconds—but why? Did they take one look at that 15-field form and say hard pass? Or did they leave before even noticing the form, because the landing page took so long to load?

RUM supplies that critical data to identify or rule out any technical causes, making it a natural ally in your conversion rate optimization (CRO) efforts. Instead of just flagging a slow page, for example, RUM will list all the page elements (e.g. images and files) by associated loading times—so if there’s an issue, you can make sure you’re correcting the right thing. RUM also typically includes filtering capabilities, so you can assess performance by device, browser, geographic location and internet service provider (ISP).
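To make this concrete, here’s a minimal sketch of how a RUM tool might rank a page’s resources by load time. The entries mimic the browser’s PerformanceResourceTiming objects (a `name` and a `duration` in milliseconds); the function name and sample data are purely illustrative, not any particular vendor’s API.

```javascript
// Hypothetical sketch: surface the resources that are slowing a page down,
// the way a RUM dashboard might. Entries mimic PerformanceResourceTiming.
function slowestResources(entries, topN = 3) {
  return [...entries]
    .sort((a, b) => b.duration - a.duration) // longest load time first
    .slice(0, topN)
    .map((e) => ({ name: e.name, durationMs: Math.round(e.duration) }));
}

// Example: an oversized hero image dominates the page load.
const ranked = slowestResources([
  { name: '/hero.jpg', duration: 1842.4 },
  { name: '/app.js', duration: 417.9 },
  { name: '/styles.css', duration: 96.2 },
]);
```

With a ranking like this in hand, you know to compress the hero image before touching anything else.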

2. Improve customer retention

RUM helps identify sources of friction that may eventually cause customers to churn. For instance, RUM can rate user satisfaction by recording how long it takes for your website or app to respond to specific requests or actions (such as clicking a button). This helps developers keep on top of front-end performance issues—especially after pushing new features, when apps are more vulnerable to errors. RUM can also be a boon to technical customer support, enabling speedier troubleshooting.

3. Provide a better digital experience

A single technical event may seem microscopic in the grand scheme of things, but it can still wreak havoc on your customers’ digital experience. Case in point: web pages that take five seconds to load have a 4X higher bounce rate than pages that load in two seconds. RUM gives you the ability to pinpoint even the tiniest sources of friction undermining your users’ ease and enjoyment.

How does real user monitoring work?

Typically, DevOps teams implement RUM by injecting a bit of JavaScript into a web page or app’s code. This enables “passive tracking,” so any time a user begins a new session, it triggers the script to automatically run in the background. RUM then captures a wide range of data to form a comprehensive record of the technical events, durations and actions that occurred.
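As a rough illustration, the injected script’s job boils down to turning raw browser timing entries into a compact payload and sending it home. The sketch below keeps the summarizing logic pure so it runs anywhere; the function name and payload fields are assumptions, not any specific RUM vendor’s implementation.

```javascript
// Turn a PerformanceNavigationTiming-like entry into a compact beacon payload.
// The field names (ttfbMs, domMs, loadMs) are illustrative.
function summarizeNavigation(entry) {
  return {
    url: entry.name,
    ttfbMs: entry.responseStart - entry.requestStart,        // time to first byte
    domMs: entry.domContentLoadedEventEnd - entry.startTime, // DOM ready
    loadMs: entry.loadEventEnd - entry.startTime,            // full page load
  };
}

// In a browser, a RUM snippet would wire this up roughly like so
// (endpoint '/rum-collect' is hypothetical):
//   new PerformanceObserver((list) => {
//     for (const e of list.getEntries()) {
//       navigator.sendBeacon('/rum-collect', JSON.stringify(summarizeNavigation(e)));
//     }
//   }).observe({ type: 'navigation', buffered: true });
```

Because the observer fires in the background on each new session, none of this blocks the page the user is actually looking at, which is what makes the tracking “passive.”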

RUM is difficult to use as a standalone technology. Instead, most teams leverage application performance monitoring (APM) solutions, which typically enable RUM alongside a range of frontend, backend and synthetic monitoring capabilities.

Examples of RUM

RUM collects and compiles several different types of data to piece together the technical events behind each session. Some of the most common data sources include:

  • Sessionization: RUM chronicles the sequence of actions a user performs, like reading a blog post and then downloading a report. This makes it easier to connect technical events to specific user interactions, and what may have prompted different outcomes, such as a rage click or bounce.

  • Page load events: RUM records how long it takes for a web page or app to load all its resources—including fonts, stylesheets, images, media and JavaScript files—and in what order. This helps isolate any files potentially slowing things down.

  • HTTP requests: Any website or app is only as good as its underlying server connection. RUM assesses server performance by measuring the sequence and volume of HTTP requests—incoming requests from the browser prompted by different user actions, like submitting a form or selecting an option from the navigation menu. It also times the server’s response.

  • AJAX / XHR calls: With Asynchronous JavaScript and XML (AJAX), single-page applications (SPAs) can prompt browsers to push and retrieve data to and from the server, and then update only the relevant elements—instead of reloading an entire webpage or app screen. This enables dynamic responses to real-time interactions. RUM can break down SPAs to look at the sequence, volume and response time of various AJAX calls—outgoing requests from the browser to the server—to assess the speed and reliability of its connection.

  • Application Performance Index (Apdex): Apdex rates user satisfaction based on how quickly a web page or app responds to their actions. Most RUM/APM solutions let you set your own threshold, e.g. 0.5 seconds. RUM would then categorize any requests handled in 0.5 seconds or less as “satisfied,” while requests that took longer would either be “tolerating” (if the time lapse is ≤ 4X the threshold) or “frustrated” (if > 4X the threshold). Finally, RUM tallies the results to generate an Apdex score, so you can quantify technical events as a customer experience metric.

  • Core Web Vitals score: Google established Core Web Vitals as a set of criteria for assessing user experience quality, e.g. loading speed, visual stability and how quickly users can interact with different page elements. RUM can measure the technical events that occur in a single session against these benchmarks.
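The Apdex tally described above reduces to a small formula: score = (satisfied + tolerating / 2) / total samples. Here’s a sketch using the 0.5-second threshold from the example (the function name is ours, but the formula is the standard Apdex calculation):

```javascript
// Compute an Apdex score from a list of response times (in seconds).
// `threshold` is your "satisfied" target T; 4T is the standard tolerating ceiling.
function apdex(responseTimes, threshold) {
  let satisfied = 0;
  let tolerating = 0;
  for (const t of responseTimes) {
    if (t <= threshold) satisfied++;
    else if (t <= 4 * threshold) tolerating++;
    // anything slower is "frustrated" and contributes nothing to the score
  }
  return (satisfied + tolerating / 2) / responseTimes.length;
}

// 3 satisfied, 2 tolerating, 1 frustrated out of 6 requests:
// (3 + 2/2) / 6 ≈ 0.67
const score = apdex([0.3, 0.4, 0.5, 1.0, 1.9, 3.0], 0.5);
```

The resulting number between 0 and 1 is what lets a team compare, say, this week’s release against last week’s in a single customer-experience metric.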
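Similarly, rating a session against Core Web Vitals is just a comparison against Google’s published thresholds. The numbers below are the good/poor cut-offs Google documents for LCP, CLS and INP at the time of writing; the helper itself is a hypothetical sketch, not part of any RUM product:

```javascript
// Google's documented good / poor cut-offs for three Core Web Vitals.
const CWV_THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 }, // Largest Contentful Paint, ms
  CLS: { good: 0.1, poor: 0.25 },  // Cumulative Layout Shift, unitless
  INP: { good: 200, poor: 500 },   // Interaction to Next Paint, ms
};

// Classify a single measurement as good / needs improvement / poor.
function rateVital(metric, value) {
  const t = CWV_THRESHOLDS[metric];
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs improvement';
  return 'poor';
}
```

A RUM tool would apply this classification to each real session and report the distribution, typically focusing on the 75th percentile of users.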

Real user monitoring vs. synthetic monitoring

RUM isn’t the only option for measuring website and app performance. Synthetic monitoring—also called active monitoring or synthetic transaction monitoring (STM)—is another well-known method. Both RUM and synthetic monitoring are:

  • Intended to prevent or detect potential sources of friction for users

  • Implemented through JavaScript code

  • Usually managed through an APM

However, there are some notable differences. While RUM generates data from live interactions, synthetic monitoring “simulates” user behaviors, then measures how long it takes for an app or website to respond. Synthetic monitoring can mimic different scenarios, like a particular sequence of interactions across different combinations of geographic locations, browsers and devices. It functions more like a traditional science experiment, testing predetermined variables over a limited period of time.

Instead of measuring a website or app’s technical capabilities against theoretical interactions, RUM continually gathers data from real sessions as they occur, giving a more accurate assessment of the technical components of the user experience.

Limitations of real user monitoring

RUM is invaluable for identifying sources of friction based on real user interactions. However, there are still some serious limitations to be aware of:

Data overload

RUM may be a beast when it comes to collecting data—but data isn’t worth much if it’s not actionable or accessible. Data also gets more difficult to manage as it increases in volume, which can be a serious problem. Over 80% of data coming through an enterprise organization is unusable, while poor data quality costs U.S. businesses $600 billion per year.

To uncover real insights, you need a way to actually organize, visualize, contextualize and analyze the data you collect—none of which RUM is capable of facilitating on its own.

Restricted visibility and context within the total customer journey

RUM records the technical events that occurred during a particular session, but there’s no way to actually see what users experienced on the other side. This limited visibility can lead to false positives. For example, a page may be technically perfect, but still a major source of friction for customers—like a white paper that downloads at lightning speed, but contains irrelevant messaging.

RUM also gathers its data in isolation, rather than plugging it into a holistic, 360-degree view of the overall customer journey. This makes it impossible to contextualize how one interaction may have influenced another. For example, there’s no way for an APM solution to distinguish between a user who leaves your site because they’re simply not interested in the offer and one who has bounced around between several pages and still can’t find what they’re looking for.

Relying on technical data alone—without factoring in user behavior—will inhibit your ability to effectively assess the overall user experience.

Insights not easily shared between teams

Technical events can have a serious impact on your bottom line. Back in 2009, Amazon discovered it was losing 1% in sales for every 100ms of latency or delayed response to server requests, while a 2017 study reported that every 100-millisecond delay in website load time can decrease conversion rates by 7%.

However, RUM insights usually live in APM solutions that were built for developers. That means there’s a significant barrier to entry for marketing, product, CX and support teams who may also need RUM to identify technical events impacting their own KPIs.

These gaps can negatively impact technical teams as well. RUM can report on errors like no one’s business—but it can’t actually quantify their impact. Say, for instance, a customer is perusing your online store, adds a new product to their cart and then tries to increase the quantity, but your site is unresponsive. After two attempts, they leave. RUM will report an AJAX error, but there’s no way to then connect it to loss of revenue—in this case, missing out on a sale of multiple items.

This makes it exceptionally difficult for developers to determine which errors are having the biggest impact on revenue—and prioritize accordingly.

Zero pre-production insights

RUM may have numerous advantages over synthetic monitoring, but it can’t compete in a pre-production environment. While synthetic monitoring can use simulations to test new product features prior to deployment, RUM requires real user interactions to generate insights. Its data collection also ebbs and flows with traffic throughout the day, whereas synthetic monitoring can function 24-7.

👉 Download the white paper to learn why RUM is not enough and what you need in order to truly optimize your digital customer experience.

RUM best practices

RUM can easily turn into an unruly mess of data without the right processes in place. Some suggested best practices to follow:

  1. Use business goals to establish measurable RUM objectives: What do you hope to accomplish with RUM? Instead of something vague like “keep tabs on performance,” work backwards from business goals to flesh out concrete RUM objectives. “Increase new user registrations by 25%” might break down into “achieve an Apdex score of 0.85” and “decrease shopping cart abandonment by 50%.” This helps focus your RUM efforts on specific technical events and user behaviors, so you can prioritize gathering the right data.

  2. Determine high-level performance benchmarks: Once you’ve determined what you want the data to help you achieve—and the RUM objectives necessary to get there—do a preliminary analysis to establish high-level performance benchmarks. Depending on your goals, this may include average load time, median load time, Apdex score, or the ratio of resolved bugs to new ones.

  3. Create a clear, consistent RUM process: It’s important to have a process in place to establish when, why and how often you’re looking at RUM data—and where the results will be documented. This helps ensure internal alignment so nothing falls through the cracks and you don’t wind up comparing apples to oranges.

  4. Establish lines of communication between DevOps and other teams: Although RUM is managed by DevOps, the insights can hugely benefit marketing, product, CX and customer support teams as well. Create a centralized document or visual dashboard to record trends around popular features. Flag performance-related issues that may lead to an uptick in technical support requests, or could interfere with paid digital marketing campaigns. Make sure anyone who needs to understand how users are experiencing your website or app knows where to go for information.

Real user monitoring tools

RUM can be a standalone tool, but is typically used in Application Performance Monitoring (APM) platforms. These solutions bundle a broad range of performance analytics and diagnostic capabilities, including RUM, synthetic monitoring, application tracing and database tracking. Although APMs can undoubtedly offer compelling insights, one notable limitation is that they are exclusively designed for technical teams. This means there’s no easy way to connect website or app performance with the total customer journey and—ultimately—revenue.

One way for organizations to address this problem is by adopting a more business-centric approach to performance monitoring. We like to call it ‘real user experience’ or RUX, since it combines technical performance analytics with digital experience analytics, including user behavior data, journey maps and session replays. This provides a deeper and more nuanced insight into website and app performance and how technical events actually impact users throughout the customer journey.

For instance, while APM tools will commonly enable data filtering by browser or device, teams using a digital experience intelligence platform with RUX capabilities can create funnels associated with different behavioral patterns—including drop-off rates and abandonment sequence—and drill down into session replays to uncover common sources of friction. RUX platforms also don’t have the same barrier to entry for non-technical teams.

FAQs

Frequently Asked Questions about Real User Monitoring (RUM).

What is real user monitoring?

Real user monitoring or RUM is a type of monitoring technology that measures real user interactions with a website or mobile app. A subset of digital experience monitoring, RUM is implemented through injected JavaScript code and used by DevOps teams to monitor technical performance, including reachability, functionality and responsiveness.

What are the benefits of real user monitoring?

Every session includes a vast number of technical events (averaging over a thousand!), from clicking buttons to loading files. RUM provides a systemized approach to regularly monitoring these technical events, so teams can identify and decrease sources of friction, and provide a more seamless digital experience for customers.

What are the limitations of real user monitoring?

RUM is chiefly concerned with gathering technical data, so there’s no way to easily connect performance during an individual session with the total customer journey. As a result, RUM affords extremely limited visibility and context around digital experiences from the users’ point of view. On its own, RUM also lacks the ability to visualize data for reporting and analysis, which is why it is seldom used in isolation.

What are real user monitoring tools?

RUM is most commonly used through application performance monitoring (APM) and real user experience platforms. APM solutions support RUM within a range of technical monitoring and diagnostic capabilities. Real user experience solutions integrate RUM with digital experience analytics to not only measure technical performance, but assess its impact on user behavior throughout the customer journey.