The World Wide Web became publicly available in August of 1991. Within a few years, PCs became an essential household item, and shortly after it was already clear that the existing infrastructure was not built to scale. Web pages returning "404 error" codes were a common, aggravating part of the user experience.
To help the web scale and ease user frustration, CDNs (Content Delivery Networks) were launched. Websites with CDN support could now serve many more users simultaneously, and over the years CDNs became a mandatory part of web infrastructure.
The CDN value proposition was simple: faster, more reliable content delivery translates to better business: more transactions, more ad impressions, and higher user retention. To accomplish this, vendors such as Akamai distributed servers at major Internet junctions and cached static content such as images and videos. Content delivery was priced based on the amount of content served.
One could argue that since CDNs help increase revenue, it should be relatively simple to verify their ROI. CDN vendors, however, have consistently pushed back against using revenue impact as the measure of success. As a result, customers who wanted to validate the value of the solution began retaining third-party monitoring and testing services, and the first generation of performance monitoring technology was born.
Monitoring the Web: Synthetic vs. RUM
Performance Monitoring started with a naïve approach called Synthetic Monitoring, in which companies such as Keynote Systems and Gomez (now part of Dynatrace) distributed test servers and executed scripts on an ongoing basis from well-known data centers around the world. Together with the customer, they could perform A/B testing and determine the marginal contribution of a CDN to a website's performance and reliability.
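The core of a synthetic test is a script that fetches a page from a fixed probe location on a schedule, times the response, and checks it against a threshold. A minimal sketch of that idea is below; the URL, the SLO threshold, and the injected `fetch` callable are illustrative assumptions, not details from the original vendors' products (the fetcher is injected so the probe logic can be demonstrated without real network access).

```python
import time
from typing import Callable, Tuple

def synthetic_check(url: str, fetch: Callable[[str], int]) -> Tuple[int, float]:
    """Run one synthetic test: fetch the URL and time the response.

    `fetch` is a hypothetical injected dependency returning an HTTP
    status code; in a real probe it would wrap an HTTP client such
    as urllib.request.urlopen.
    """
    start = time.perf_counter()
    status = fetch(url)
    elapsed = time.perf_counter() - start
    return status, elapsed

def is_available(status: int, elapsed: float, slo_seconds: float = 2.0) -> bool:
    """A check 'passes' if the site responded OK within the SLO."""
    return 200 <= status < 400 and elapsed <= slo_seconds

# Stubbed fetcher standing in for a real HTTP call from the probe:
status, elapsed = synthetic_check("https://example.com", lambda url: 200)
print(is_available(status, elapsed))  # → True
```

In production, a scheduler runs this check every few minutes from each data center and records the results, which is what enables the A/B comparison with and without the CDN.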
The problem with Synthetic Monitoring was that, knowing the location of the test servers, CDNs learned to optimize for those locations, making the performance results unrepresentative. Over the years, Synthetic Monitoring became more popular for assessing website or API availability than for measuring speed.
Real User Monitoring (RUM) was invented to fill this gap in performance testing. In RUM, measurements are taken on the devices of real users, with the results sent to a repository that the website owner can access. While RUM requires end-user permission and collects data only when users are active, it is more accurate and representative for performance testing.
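Part of what makes RUM representative is that it captures the full distribution of load times across real devices and networks, not just a handful of probe locations. A minimal sketch of how a RUM backend might summarize beaconed page-load times follows; the sample values and the nearest-rank percentile method are illustrative assumptions.

```python
def percentile(samples, p):
    """Nearest-rank percentile of page-load times beaconed from real users."""
    if not samples:
        raise ValueError("no RUM samples collected")
    ordered = sorted(samples)
    # nearest-rank: ceil(p/100 * n), clamped so p=0 still yields a rank of 1
    rank = max(1, -(-p * len(ordered) // 100))
    return ordered[rank - 1]

# Hypothetical page-load times (ms) collected from real user sessions:
load_times = [420, 980, 310, 1500, 650, 720, 2900, 540, 860, 1100]
print(percentile(load_times, 50))  # → 720  (median)
print(percentile(load_times, 95))  # → 2900 (tail latency, which RUM surfaces well)
```

Synthetic probes from well-connected data centers tend to report numbers near the median; the slow tail, visible only in real-user data, is where much of the user frustration lives.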