Creating Sites that Sizzle
An Overview of Web Performance Monitoring
Why wait? If one site doesn’t deliver what you need when you need it, there are a dozen — or a hundred — more queued up to serve you. Today’s Internet users are looking to make a purchase, conduct business, and get information now, not five seconds from now and certainly not ten. And increasingly, they expect a “rich” experience wherever they are online. Performance is a critical Web site success driver with real bottom-line ramifications, and an accurate, ongoing perspective on site performance is key to creating an experience that will satisfy users. In this overview, Keynote experts offer insights into building an effective web performance monitoring process.
The cost of poor site performance is not just lost visitors, it’s lost money. In a recent survey, nearly three-quarters of Internet retailers correlated poor site performance with lost revenue, and more than half with lost traffic.¹

¹ Mark Brohan, “Bigger picture: By keeping an open eye, retailers see better ways to monitor and improve web site performance,” Internet Retailer, May 2008.
Just a few short years ago, evaluating website performance was a fairly simple affair. “How fast did the page load?” was often the first and last question that needed to be asked. User expectations were far lower, and patience much higher, when the experience of accessing information or making a purchase online was new and different and amazingly convenient.
Today, however, user expectations are stratospherically higher. With the Internet now tightly woven into the fabric of everyday life, and a multitude of Web sites available to satisfy any given need or desire, users expect not only virtually instant page-loads, but fast and flawless execution of transactions and enhanced functionality that delivers a “rich” site experience. In the intense competition to attract and keep site visitors, web performance is now a critical business driver for site success.
Behind The New Emphasis On Performance
“Web performance used to be an afterthought,” explains Ben Rushlo, senior consulting manager for Keynote Systems. “But two things have happened that have dramatically increased the need for performance measurement.
“One is that users are much less tenacious, much less tolerant of poor performance. Five, six years ago there was still the sense of novelty. Today, though, they’re using the Internet for very critical things, critical utility, be that trading stocks or looking at bank accounts or making purchases.
“Then there is the increased complexity. Web 2.0 is the buzzword, but the reality is that the Internet is becoming more and more complex. A site today is often a collection of the owner’s content, third-party content, different technologies, different hosting situations. Sites are becoming more like applications. We’re moving away from the idea of Web ‘pages’ and into the world of applications. With all this, there are many, many more challenges around performance.”
Delivering complex functionality in a manner that satisfies high user expectations requires a tremendous infrastructure, which in itself exponentially multiplies the opportunity for slow performance or outright failures. To deliver a customized “my” page on a site such as Yahoo! or Google or one of the news portals, for example, may require hundreds of servers. And running a search-and-transaction site such as eBay takes a huge amount of processing horsepower.
The bottom line is that, for businesses that depend on the Web — and that’s just about any business today — performance matters, to the bottom line. “Technical site quality is a critical business metric, and should be considered as such,” says Rushlo. “You use it just like any other metric to take action to drive your business.” Whether the objective is to reduce abandonment rates, to increase self service and reduce call center loads (and costs), to increase average sales or repeat purchases, performance monitoring is critical to acquiring the data needed to formulate sound Web strategies and tactics. Performance is the common denominator underneath every Web site metric and is fundamental to achieving any Web site goal.
Things can and do go wrong at any step of the way — in the site’s own internal network, over the Internet backbone, across the last mile of the local ISP, or on the user’s desktop. Site operators employ a number of strategies to monitor this complex path and pinpoint the many problems that inevitably come up.
Performance Monitoring Methodologies
With so much riding on the availability and performance of e-commerce Web sites — and with competition continuing to grow every day — one would expect performance monitoring to be integral to every site’s operations plan. But according to an Internet Retailer survey, while most retailers conduct some kind of site monitoring, just 16 percent test performance from different geographic locations or during different dayparts, and only 16 percent perform specific transaction monitoring. The same survey indicates that fully a quarter of respondents performance-test their sites monthly or less, and more than 37 percent test weekly or less frequently.²

² Ibid.
Those who do monitor their site performance — and that number is growing — generally follow one of three methodologies.
Internal Measurement. The first is a strictly internal measurement procedure that consists of monitoring the internal servers and network. This provides a clear view of what is leaving the host site — including how reliably and how fast data is being served out onto the Internet — but it provides no perspective on what the experience is for the user at the other end. Passive monitoring can be added to track user sessions, and the data that is gathered can be extrapolated to create a generalized picture of the user experience. But such a picture is at best a second-hand approximation and offers little actionable data.
End-User Monitoring. A second methodology puts software out on actual users’ machines to monitor their sessions, collect performance data, and run a synthetic browser to test performance of specific aspects of a site. This method has the advantages of testing at the other end of the transaction, with real end users, as well as scalability to encompass large numbers of users. However, the fact that it uses real end users is both an advantage and a disadvantage. There is no way to control for the type of computer, processor speed, task load, or connection, among other critical variables. For example, a user could be engaged in a processor-intensive task, or perhaps relocate from office to home or wireless café, any of which can dramatically impact performance measurement. So while this method provides more data, it is only incrementally more actionable than data gathered by the first method.
“The inherent problem with this is that you’re using a synthetic process, not a real browser,” comments Abelardo Gonzalez, Web performance product manager for Keynote. “So you’re not going to get the Flash, you’re not going to get the video, you’re not going to get all that additional Web 2.0 experience. And then you have it running on different users’ desktops. So if you see a significant slowdown, is it because they just started AutoCAD to render a huge file? Did they start downloading a video? Are they streaming music? You can’t really know what’s going on, what’s causing the slowdown.”
Browser-Based Dedicated Agents. The third and arguably the “gold standard” methodology is to deploy computers around the globe and have them use real Internet browsers to log on to the site, view pages, perform tasks, stream the video — to actually work the site as a user would work it. All the computers are identical, all configured with exactly the same processor, memory, etc. No other tasks are performed except for the testing. The types of connection are identified and constant — dial-up, DSL, cable, 3G. It is essentially a global “clean room” environment for real-world, real-time testing of site performance, with all variables controlled, so a clear and objective measurement of performance can be taken.
“In essence, it is the equivalent of having a user sitting in front of his computer and hacking away at his Internet Explorer browser,” explains Gonzalez. “It’s going through a business process. For example, if you are in the airline industry, the agents would be going onto your home page and trying to book a ticket and then make that reservation, and do it over and over and over again from all these different locations.
“Then we know, when there’s changes or performance problems, everything has been kept in a very scientific, controlled study. So those changes are due either to network changes or application changes or data center changes.”
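The scripted “business process” such an agent replays can be illustrated with a rough sketch. The snippet below is an assumption-laden illustration only — it uses the Python standard library and hypothetical URLs, and times raw HTTP fetches rather than driving a real browser the way Keynote’s dedicated agents do — but it shows the core idea: replay the same fixed sequence of steps, over and over, from a controlled machine, and record a timing and success flag for each step.

```python
import time
import urllib.request

# Hypothetical step URLs for a simple "business process" (home page, then a
# search). Real monitoring agents drive a full browser; this stdlib sketch
# only times raw HTTP fetches to illustrate replaying a fixed script.
STEPS = [
    ("home",   "https://example.com/"),
    ("search", "https://example.com/search?q=flights"),
]

def run_transaction(steps, timeout=10):
    """Fetch each step in order; return {step: (seconds, succeeded)}."""
    timings = {}
    for name, url in steps:
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                resp.read()  # pull the full body, as a browser would
                ok = 200 <= resp.status < 300
        except OSError:      # connection refused, DNS failure, timeout, etc.
            ok = False
        timings[name] = (time.perf_counter() - start, ok)
    return timings
```

A dedicated agent would call something like `run_transaction(STEPS)` on a fixed schedule from each geographic location and ship the per-step timings to a central store, so that any change in the numbers can be attributed to the site rather than to the measuring machine.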
The Business Value of Performance Data
More and more leading sites are recognizing that site performance is a critical business driver, and as such merits management attention.
“The sites we work with that do well are those that a) have bought in at the business level that performance matters and b) have a structure around that,” says Keynote’s Rushlo. “There has to be a team that consumes the data and then can take action on the data.”
But how that data is used is critical to the real value it delivers to the business. “If all you’re using the data for is alerting, that’s fine, that’s useful. But it’s not really a performance management process. You’re probably not going to drive continuous improvement that way. Use the data to drive improvement, not just for alerting. The data can tell you strategically where to focus.
“The best sites out there have a real disciplined performance management program,” Rushlo continues. “It takes into account all the moving parts. It’s setting SLAs with vendors. It’s working with the creative content team to understand how their work impacts performance. It’s testing before launch when things are still in development.”
Best Performance Practices
Getting actionable data that can be used to optimize Web site quality is a significant endeavor, but the principles that guide data collection and analysis are classically simple. Basically, it is a scientific process that involves real-world simulations with tightly controlled variables. These are the fundamental guidelines recommended by Keynote’s web performance specialists.
1. Measure from Where Your Users Are. You cannot get a true read of what the user is experiencing unless you are gathering data from the “other end” of the Internet connection. Ideally, your measurement agents are deployed across geographies that represent your typical user base.
2. Measure What Your Users Are Actually Doing. Create meaningful testing scripts and stick with them. If you have video streams, activate and measure them. If you have Flash functionality, test how it plays. Perform search, shopping cart, check-out, and any other transaction functions that are part of your site. Page-load speeds alone simply cannot tell you what kind of experience users are having.
3. Use a Real Browser. It’s nearly impossible to accurately measure any complex functionality unless you are using a real browser. Internet Explorer is the de facto standard.
4. Control the Variables. Know the CPU, memory, Internet connection, software version, etc., and keep all of these variables constant across all the testing agents and throughout the testing period. Variations will skew the results and undermine the ability to read the data for strategic decision-making.
5. Establish Performance Benchmarks. Create both internal benchmarks and best-of-breed benchmarks for competitive and leading sites, for insightful management reporting.
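As a toy illustration of guidelines 4 and 5, the sketch below rolls per-geography measurements up against a single internal benchmark target. The region names, sample timings, and 4-second target are all invented for illustration; a real program would feed in data collected by controlled agents as described above.

```python
import statistics

# Hypothetical response-time samples (seconds) per measurement geography,
# and an assumed internal benchmark target of 4.0 seconds.
TARGET_SECONDS = 4.0
samples = {
    "us-east": [2.0, 2.3, 1.9, 2.1, 2.2],
    "us-west": [5.2, 6.1, 5.8, 5.5, 5.9],
    "eu-west": [3.8, 4.1, 3.9, 4.0, 4.2],
}

for geo, times in sorted(samples.items()):
    median = statistics.median(times)   # median resists outlier skew
    status = "OK" if median <= TARGET_SECONDS else "MISS"
    print(f"{geo}: median {median:.1f}s [{status}]")
```

With these sample numbers the report flags us-west as missing the 4-second target — a gap a single site-wide average could easily have hidden.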
Formulating and executing a performance testing program that meets all of these parameters is a rigorous process that requires the commitment of the network operations and management teams. But it is a process that pays high dividends in terms of optimized user experience, better site utilization and, ultimately, greater revenue flow to the bottom line.
Caveat: Beware the “Average”
If your goal is a four-second response time and the data says your average response time is four seconds, that’s a good thing, right? Maybe.
“What if it takes two seconds from the East Coast but takes six seconds from the West Coast?” asks Keynote’s Ben Rushlo. “The average just happens to be four seconds.” But the West Coast users are waiting three times as long for a response. “Or what if response runs at two seconds during off-peak times but slows down to ten seconds during peak times, but averages out to four seconds?” Not a very good scenario for users during the busiest times.
“The averages can hide so much meaningful data around what the real performance is,” Rushlo continues. “You have to look at variability — across the country, across hours, across days. An ‘average’ number might lull you into thinking the site is fine, when in fact it’s not working in a way that meets your objectives. The data is all there, but care must be taken in how it’s consumed.”
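Rushlo’s point is easy to demonstrate numerically. In this sketch, the sample values are invented to mirror his East Coast/West Coast example: the overall mean hits a four-second target even though every West Coast user waits roughly three times as long as an East Coast user.

```python
import statistics

# Invented response-time samples (seconds) mirroring the two-coast example.
east_coast = [2.1, 1.9, 2.0, 2.2, 1.8]   # mean 2.0s
west_coast = [6.0, 5.8, 6.3, 5.9, 6.1]   # mean ~6.0s

overall = statistics.mean(east_coast + west_coast)
print(f"overall mean: {overall:.1f}s")                      # 4.0s -- looks fine
print(f"east mean:    {statistics.mean(east_coast):.1f}s")  # 2.0s
print(f"west mean:    {statistics.mean(west_coast):.1f}s")  # 6.0s
```

The same breakdown applies across dayparts: a 2-second off-peak response that slows to 10 seconds at peak also averages out to 4 seconds — while failing users exactly when traffic is highest.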