Webcast

Mobile App Performance: The Critical Testing You’re Not Doing

About the Webcast

Thanks to major outages, companies like Target, Southwest Airlines, and Best Buy have experienced firsthand the importance of mobile app performance testing. Even if a mobile app functions perfectly, no one will use it if it takes twenty seconds to launch.

When performance is bad, it’s all that matters to customers. If performance testing is not part of your test plan, include it—whether it’s a requirement or not. 

Join the webcast to learn how to set performance targets for mobile applications and discover what types of testing will ensure you release high-quality applications that work—and work fast—every time. 

You’ll learn:

  • Performance testing strategies for the real world
  • Approaches for finding bottlenecks and improving mobile app responsiveness
  • How to build performance testing into the mobile application lifecycle
  • How mobile performance influences user behavior

Webcast Transcription

Josiah:

Hello and welcome to today’s web seminar, Mobile App Performance:  The Critical Testing You’re Not Doing, featuring Michael Kopp, head of digital at Keynote, and Rachel Obstler, VP of Product Management, Mobile Testing and Monitoring at Keynote.  I’m your host and moderator, Josiah Renaudin.  Thank you for joining us today.  Before I hand it off to the speakers, let me explain the controls on this console.

If you experience any technical issues, please send a note through the question field located beneath the panel and we’ll be on hand to help as quickly as possible.  Our responses can be read in the Q&A panel to the left of the slides.  This is also the same way to submit questions.  We welcome all comments and inquiries during today’s event so feel free to ask as many as you like and we’ll answer as many as possible during the Q&A session after the presentation.

This event contains a polling question which can be answered from the slide view.  To optimize your bandwidth for the best viewing experience possible, please close any unnecessary applications that may be running in the background.  At the bottom of your console is a widget toolbar.  These widgets open panels.  The console opens with three on your screen, the speaker, Q&A and the slide panel.  All of these panels can be moved around and resized to your liking.  If you accidentally close one, click the respective widget to reopen the panel in its original place.

Hover your cursor over these icons to view a label identifying each widget.  Through these icons, you can download the speakers’ slides and handouts and share this event with friends and colleagues.  The event is being recorded and will be made available for on-demand viewing.  Once the recording is ready, you will receive an email with instructions on how to access the presentation slides on demand.  With that said, I’d like to introduce and pass the controls to our speaker Michael.

Michael:

Thanks Josiah.  Hey everybody, happy Tuesday and welcome to this web seminar.  As he mentioned, the title is Mobile App Performance:  The Critical Testing You’re Not Doing.  We believe this is a really great topic, especially timely right now coming up on the holiday season, so let’s go ahead and get started.  First, I’m just gonna jump in and talk a little bit about context, and so first off, I wanna kick off and talk about who Keynote is and why we can be talking about mobile app performance.

So, if you’re not aware of Keynote, we’ve been a pioneer in mobile testing and monitoring since 2003 through DeviceAnywhere.  We have patented technology in this space.  We have solutions for testing and monitoring that go from single developers doing functional testing all the way through global enterprises, the kinds of environments doing fully automated, scripted testing.

Our platform is integrated with the leading CI platforms.  We have thousands of devices in our cloud, allowing you to do testing and monitoring, and that also speaks to our customers.  So, we have a wide list that goes from some of the largest Fortune 1000 companies down to some of the smallest individual testers.  So, that’s who we are and that’s why we believe we can talk about this subject.  So, let’s talk a little bit about context, about what’s going on out there.  So, there have been some very recent public examples of some good companies getting it wrong when it comes to mobile performance.

So, if you’ve been reading the news lately, we’ve heard about recent slips by Southwest Airlines, Target and Best Buy.  You know, these are only a few among many cases where mobile apps don’t live up to performance, and really what it does is just highlight how important performance is for your mobile app and for your customer experience.  And as some of these stories also play out, the consequences were real: when people were not able to transact, companies lost money, customers got unhappy and started clogging up other websites and phone lines, and it’s just a really ugly situation.

And so really what we’re trying to do here is help you understand how you can avoid being next on this list, how you can make sure that your mobile app is up to performance, how you can constantly measure and monitor that, and what the best practices are around that.  So, what really are the user expectations?  Because performance can be a moving target.  Well, these stats come from studies that we’ve collected from around the Internet.  So, on the cost of poor performance, users really have an expectation; you’ll see there three seconds of tolerance, and this number is always coming down, so three seconds of tolerance for any digital interaction to load.

As Rachel will go into, clearly the more important the interaction, the longer somebody is willing to wait, but three seconds is a good benchmark to use.  You start losing people, and when you lose people you then are losing the potential revenue from them if you’re a shopping site, or losing any of the other interactions that you’re really building your app for.  The other stat is that only 16 percent of people are willing to give a slow app more than one attempt, and I think this is pretty obvious.  I know from my perspective, when I open up a new app and start using it, if it doesn’t load up pretty quickly and isn’t pretty snappy, it’s already deleted and gone from my view forever, probably never to return.

And what these performance issues really have an effect on is the end business outcomes that you’re really building that app for.  One example would be that shopping carts have a 65 percent abandonment rate, so anything you can do to lessen that abandonment rate is going to be critical.  So, really, the cost here of all this poor performance, you know, this is just one number and it’s a big number: $155 billion, the expected 2015 mobile commerce revenue.  This is from the Internet Retailer Mobile 500 study, so this is a huge number.  This is what the top 500 mobile companies out there expect from mobile-only commerce.

Now, this is up 67 percent from 2014.  So, the mobile space, mobile commerce, is hugely expanding and growing rapidly.  Now, even if your mobile app is not a commerce-related app, I can guarantee you that somebody in your marketing or product management department knows exactly what a customer costs and what those interactions are worth.  So, even if it’s not shopping revenue, if you have slow site performance and your app is not responding correctly, you’re missing out on that revenue, because certainly if it doesn’t load quickly the first time, they’re never coming back to your app.

And if certain transactions don’t load within three seconds, you’re gonna be losing a higher percentage of your audience and of course then your business revenue.  So, that’s really the context, and with that, I wanna pass it over to Rachel.  Rachel will take it from here and she will talk about what you can do … Rachel.

Rachel:                       

Thanks Michael.  So, what is performance?  We’ve talked a bit about the impact of performance, but how are we gonna define performance for purposes of what we’re doing here?  We like to think about performance as the customer’s perception of how long they had to wait.  So, there’re a couple of interesting points I wanna bring up about what this statement means, and one is that it’s the customer’s perception.  What that means is that how a customer views things is, first of all, what they actually see and what they experience.  So you can’t just look at, for instance, how responsive your API is or how fast your backend system is.

You really need to understand what they’re experiencing from the point of view of the handset, what they see and what they can touch.  The other thing, too, is that performance for a customer, their perception, is not exactly reality.  It’s very close to reality, and I have some data that I can show on that, but it’s not exactly reality.  So, for instance, perception can be uneven.  Perception can be that certain transactions just should take longer, like submitting a payment and getting feedback, you know, that you’re making a credit card purchase; those can take longer.

Also, customers can’t necessarily tell the difference between something that takes 1.2 seconds and something that takes 1.3 seconds; 100 milliseconds is too short of a difference for a customer really to discern.  So, one piece of this is about customer perception, and that’s the thing that’s important.  The second piece is that you’ve probably noticed that this is a pretty negative way to make this statement.  I’m not saying here, for instance, that it’s the customer’s perception of how quickly they were able to do their transaction; I’m saying it in a very negative way.  And why am I focusing on the negative?  I’m gonna get back to that in just a few slides, but first let’s look a bit more at customer perception.

So, this data that you’re seeing right now comes from a study that we ran that looked at the process of applying for auto insurance online.  As you can see, most of the companies here are companies that do insurance.  So, we did two things here.  One is that we had a thousand panelists, and these panelists were asked to go through the process of applying for insurance and rate their perception of performance, of how fast this process was.  At the same time, we were measuring performance, so we took over 100,000 measurements of actual performance and then we correlated those two things.  So, what was the customer’s perception versus what was the actual performance?

And as you can see, for the most part when a site was slower, customers perceived it as being slower and when a site was faster, customers perceived it as being faster.  So, there is actually a very strong correlation between perception and reality so that’s a good thing.  And then the other thing just to point out here is that people really do notice when your site is slow.  So, on the next slide, so does performance matter?  Okay, so we’ve established that people know when a site is slow and they notice but does this really matter?  And the answer is performance in and of itself doesn’t matter but business results do.

So, when performance impacts your business outcome, that’s when it makes a difference.  And so some of the things to think about here is we’ve seen some things on the Internet that say hey, every 100 milliseconds makes a big difference in revenue.  That’s not always true.  Google is not necessarily the goal; you don’t have to be so fast, necessarily as Google.  As I mentioned earlier, there’re certain transactions, certain things that customers expect to take a bit longer.

There’re also certain transactions where, like, you’ve gone through a whole process, you’re at the very end of your process, and it’s okay for it to take a bit longer, because at that point the customer’s almost done with what they’re doing; they’ve put in the time already and they’re willing to be a little bit more patient.  So, really, what you need to know here is when performance impacts your business results, and the goal is really to make sure that the customer is not impacted by poor performance.  So, getting back to that idea of why performance is a negative: why do we talk about it in terms of its negative impact and why is it a negative factor?

So, this data is from another study that we did, and in this study we were looking at certain events that happened on a customer’s website, and we correlated those events to performance and the likelihood of the customer completing that event.  And what we saw is that as the customer experienced poor performance, and here specifically poor performance was defined as the number of screens they experienced that took longer than five seconds to load, as they experienced more and more of those screens, the likelihood that they completed the transaction went down.

So, one thing to point out here is that if you think about it, someone goes to a mobile app, they download it from the app store, they launch it on their phone; they’re doing that because they have an intent to do something with that mobile app.  It could be that they intend to buy something, it could be that they intend to use your app to play a game; whatever they’re doing in the app, they have a specific intent.  Now, once they’re inside the app, if performance is really good, their intent does not get stronger.  So, for instance, let’s say they’re coming into your app and they wanna buy a shirt.

Having really good performance is not gonna make them also want to buy pants.  Now, if you show the shirt with pants and show what a nice outfit it is, that may make them wanna buy the pants, but just having good performance isn’t gonna make them buy pants.  So, performance can’t make a customer do something, but poor performance can keep a customer from doing something and make them leave the site and not complete what their intention was.  So, that’s why we say that performance is really a negative factor.

And so this gets us back to the goal which is to make sure that customers don’t notice performance, that basically performance does not negatively impact what the customer’s intention is.  Okay, so now that we’ve said that and we’ve established what performance is and the goals around performance, which is to ensure that it doesn’t negatively impact the customer’s intent, how do you do that?  So, we’re gonna talk about four best practices to ensure that you have great mobile app performance.

The first one is designing for performance, so really making sure that from the very beginning of the process you have performance goals and you measure them; that you validate performance from the customer’s point of view; that you make sure you understand the impact of load on performance; and then lastly that you build a performance practice, so it’s not just a one‑time thing but an ongoing thing that you do.  And this will ensure that performance does not negatively impact your business outcomes.  So, before we move into the best practices – oops, got one more slide.

So, on the first best practice, designing for performance, why is that important?  If you look at this chart here, and I’m sure many of you in the testing world have seen information like this before, basically the earlier that you can catch an issue in your development process, the less it costs you to fix it.  A great example here is that a lot of companies actually outsource their mobile application development.  We’ve worked with companies who have done this, and what happens is the app gets delivered, they did not specify any performance requirements, and it costs a lot of money; sometimes you have to completely re‑architect an application to make sure that it performs well.

So, what you have is you’ve been delivered a beautiful application that has all the functionality required, but if it takes 20 seconds to do everything, no one’s gonna use it.  So, it’s incredibly important early on in the process to make sure that you specify performance.  So, now we’re gonna move to a poll, and we’re gonna ask about what the state of performance awareness is at your organization.  We’ll give about a minute here, so please pick one of the options.  The first option is that performance is built into the development process, so essentially part of the whole lifecycle, and there are formal metrics that you use.

The second one is that performance is considered at some stages of the development process but that there are no formal metrics or a formal process.  The third one is we only take performance into consideration or you only take performance into consideration at production.  And the fourth is you’re not currently taking performance into consideration in the development process.  So, I think this is an interesting question.

Now, by the way, when we talk to most of our customers, most of our customers are actually either in the third or the fourth category, right, so I think we find that most customers do not necessarily take performance into consideration and certainly don’t really take performance into consideration early in the process and this is something that we see organizations starting to do but very few are currently doing it.  So, I think we’re gonna give just a few more seconds so please click on one of the bullets and tell us where you are.

Michael:                     

Yeah, and Rachel, I think it’s interesting that for the three examples I brought up of mobile apps that have had performance issues, if you notice, they were companies that do something else, be it a store, be it an airline, and then have a mobile app.  And so it’s interesting that a lot of the companies where the mobile app really is the product have gone through this a long time ago and solved most of their challenges, so I think there certainly is a continuum of awareness of performance.

I think everybody inherently knows that performance is important, but yeah, how does that play out in the organization?  And I think we’ve seen some of our customers for whom the mobile app is their business, the social apps, right?  They are much more ahead of the curve and really take performance much more seriously.

Rachel:                       

Right, okay, so I’m gonna give everyone five more seconds to get their last answer in … five, four, three, two, one, all right.  And then we’re going to – good, we got a couple more votes.  I’m gonna submit it now and we can see what the results are.  These are great results.  So, we have 29 percent of people saying performance is built into the development process and there are formal metrics, which is great.  I do think we may have a little bit of self-selection with the subject of the webinar, so these are probably people who are more concerned about performance.

So, the second one, performance is considered at some stages of the development process but no formal process or metrics so that’s 54 percent.  We have 10 percent saying that they only take performance into consideration at production and 6 percent don’t take performance into consideration.  So, certainly room for improvement.  So, we’ll talk about, again, some more best practices of how we can improve taking performance into account.  Okay, back to the first best practice which is design for performance.  So, really what’s important here is to make sure that you set performance targets.

And there’re a number of different ways that you can set performance targets from the very beginning of the process.  So, that means even when you’re designing, asking what are the requirements for the product?  The product requirements should include a requirement around performance.  So, how do you set performance targets?  There’re a couple different ways that you can do it.  One is standard best practices: you know, Michael at the beginning of the presentation talked about the three-second rule.  In the absence of any other data, the three-second rule is actually a reasonable goal to use.

Now, there’s also benchmarking.  So, looking at your competitors, looking at companies that you consider best in class, seeing what their performance is for key transactions and then setting your targets based on those benchmarks.  That’s another good way to do it.  And the last one is actual performance correlation.  So, I showed a slide a bit earlier where we had done a study where we actually correlated performance to business outcome using performance data together with, in that case, Adobe Analytics data on actual transactions.  So, this is the hardest one to do, but it’s possible to do it and I can talk more about that as well.
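
To make the target-setting idea concrete, here is a minimal sketch of what written-down performance targets can look like once they are machine-checkable.  Everything in it is illustrative rather than prescriptive: the transaction names, the per-transaction budgets, and the idea of falling back to the three-second rule when no benchmark data exists.

    # Minimal sketch of machine-checkable performance targets.
    # Transaction names, budgets, and measured timings are illustrative;
    # in practice the timings would come from handset-level measurements.

    DEFAULT_BUDGET_S = 3.0  # the "three-second rule" as a fallback target

    # Per-transaction budgets, e.g. derived from competitive benchmarking.
    BUDGETS_S = {
        "app_launch": 2.0,
        "search_results": 5.0,   # users tolerate longer waits for search
        "checkout_submit": 4.0,  # end-of-flow steps can run a bit longer
    }

    def check_targets(measured_s: dict[str, float]) -> list[str]:
        """Return a list of human-readable budget violations."""
        failures = []
        for transaction, seconds in measured_s.items():
            budget = BUDGETS_S.get(transaction, DEFAULT_BUDGET_S)
            if seconds > budget:
                failures.append(
                    f"{transaction}: {seconds:.1f}s exceeds {budget:.1f}s budget"
                )
        return failures

    if __name__ == "__main__":
        # Example run with made-up measurements.
        print(check_targets({"app_launch": 2.6, "search_results": 4.1}))

A check like this can run in CI against whatever timings your handset tooling produces, which turns "we have a performance requirement" into something a build can actually fail on.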

So, looking at best practices, standard best practices around benchmarking, you can see here some data on mobile applications.  What we did here is, this is actually data from a travel hospitality study that we did.  We looked at ten prominent companies across a number of different functions: one was launching the app, one was searching for results, one was viewing details of results, and then the last one was going through a booking process.

And what we’re showing here is the best practice result, that’s the blue bar, and the average performance, which is the light blue bar.  This was done for a particular company, so their site is the red line.  And then lastly, worst-case performance is the end of the bar, the gray.  And so what you can see here is that generally best in class is coming in at around one second for all of these things, even the search results.  And we also see that average performance is typically coming in at around three to four seconds.

So, that interestingly bears out the whole best practice rule: with the exception of search results, all the other ones are around three to four seconds, so it means that three seconds is probably a pretty decent target if you don’t have any other data to look at.  So, let’s then talk about correlating performance to business outcomes.  This, as I said, is a more complex thing to do.  It’s a little bit harder, but it’s something that Keynote has done.  And what you can do is you can take data from, it could be Adobe Analytics, Google Analytics, or even any other solution that records what your users are doing on your site, and you can actually correlate performance to those outcomes.

And so what we did here is we looked at over a million records, sessions, of users on this site, and we actually correlated how performance impacts their likelihood to complete a task that they had started.  And in this case, this was actually a self-service event.  So, as Michael was talking about earlier, not everything you do in an app is about revenue.  Sometimes it’s about going on and changing your account or doing something that doesn’t generate revenue but is still important.  It may be something that reduces cost for the business.  It may be something that keeps someone from later on making a phone call, which costs much more than them doing a transaction in the app.

So, in this case, it was true that there was a real cost when customers did not finish their self-service event, because they would instead make a phone call.  So, here you can see that there’s a pretty drastic change in the likelihood of someone completing their self-service event every time we added a poor performing screen to the transaction.  So, what we’re seeing is that if performance was good across the board, nothing took more than five seconds, there was a 38 percent chance of the customer completing the transaction.

If you had half of your pages taking more than five seconds, that goes down to 18 percent.  And if you have all of them taking more than five seconds, it was dismal.  I forget the number, but I think it was more like 8 percent.  So, there’s a real impact of performance on actually completing a transaction, and if you can do these correlations, it actually helps you pinpoint exactly what to fix, so then you can find all of your five-second pages.  You can also see which of your pages or screens are used the most in these key transactions, and then you can really prioritize and pinpoint where you have to make your performance improvements.
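
For readers who want to try this correlation themselves, here is a rough sketch of the bucketing described above, assuming you can export per-session screen timings and a completion flag from your analytics tool.  The field names and the demo data are hypothetical.

    # Sketch of correlating performance to outcomes: bucket each session by
    # how many of its screens took longer than 5 seconds, then compute the
    # share of sessions in each bucket that completed the transaction.
    from collections import defaultdict

    SLOW_THRESHOLD_S = 5.0

    def completion_by_slow_screens(sessions):
        """sessions: iterable of dicts like
        {"screen_times_s": [1.2, 6.3, ...], "completed": True}"""
        totals = defaultdict(int)
        completed = defaultdict(int)
        for s in sessions:
            slow = sum(1 for t in s["screen_times_s"] if t > SLOW_THRESHOLD_S)
            totals[slow] += 1
            completed[slow] += 1 if s["completed"] else 0
        return {n: completed[n] / totals[n] for n in sorted(totals)}

    # Example with made-up sessions: completion drops as slow screens pile up.
    demo = [
        {"screen_times_s": [1.1, 2.0, 1.4], "completed": True},
        {"screen_times_s": [6.2, 1.9, 1.4], "completed": False},
        {"screen_times_s": [6.2, 7.0, 5.5], "completed": False},
    ]
    print(completion_by_slow_screens(demo))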

All right, so let’s move to the second best practice, which is validate performance from the customer’s point of view.  So, the first thing to talk about here is that it’s really important to test from the handset, and I’m not saying that you should not test internal APIs and test backend systems, because you also need to do that, but to really understand what performance the customer is experiencing, you have to test from the handset.  There are things that can happen on the handset that can make something render really slowly, and it’s important to know that, because you can have the fastest API in the world, but if the customer’s actual experience is slow then you haven’t achieved anything.

There’re a number of different ways to test from a handset; I mean the easiest is you have a handset, just test on it.  There are better ways to do it though.  So, Keynote actually offers a device cloud, so there’s testing in a device cloud.  You can do that manually.  You can also write a script, and that will allow you to test automatically, so you can run a test over and over that way without having to do any manual work.  And then you can also even utilize in‑app stats.  So, there are some SDKs you can put inside of your app that allow you to collect performance information as well.
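
As one illustration of the scripted approach, here is a minimal sketch using Appium’s Python client, one common handset-automation stack (not the Keynote tooling discussed here).  The app path and element ids are placeholders; the point is measuring from the user action to the moment the result is actually visible on screen.

    # Sketch of timing a transaction from the handset via scripted automation.
    import time

    from appium import webdriver
    from appium.options.android import UiAutomator2Options
    from appium.webdriver.common.appiumby import AppiumBy
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    options = UiAutomator2Options()
    options.app = "/path/to/app.apk"  # placeholder

    driver = webdriver.Remote("http://localhost:4723", options=options)
    try:
        # Measure from the tap to when results are visible on the screen,
        # which is what the customer actually experiences.
        start = time.monotonic()
        driver.find_element(AppiumBy.ID, "com.example:id/search_button").click()
        WebDriverWait(driver, 30).until(
            EC.visibility_of_element_located(
                (AppiumBy.ID, "com.example:id/results"))
        )
        elapsed = time.monotonic() - start
        print(f"search results visible after {elapsed:.2f}s")
    finally:
        driver.quit()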

So, again, one common piece here is making sure that you’re testing from the handset; whatever methodology you end up using, that’s really critical.  So, I wanted to show some more data here as well, and this data comes from a study that we did across a number of big retailers, where we really wanted to understand what performance was across these applications.

And so, again, we looked at a number of different things that you can do in these applications like launch the app, search for results, look at product detail, submit a review.  We had a couple other things in there like a store locator or adding something to a wish list.  And one of the things that’s really interesting about this data is the difference in many cases between performance on the iOS device and performance on the Android device.

And so that’s another thing to think about when you’re validating from the customer’s point of view, it’s gonna be really important to test both on iOS and Android because those platforms are so different.  And in many cases, and we’ve seen this with some customers as well, you have one of those being developed by an outsource company so maybe your iOS app is being developed by a third party but your Android app is being developed in‑house or vice versa.

And so when that happens, performance can be vastly different between the two, so it’s really key to make sure that you understand performance from both perspectives, and in fact, you may wanna test across different OS versions.  You may even wanna test on a tablet separately from a phone, if your application is being used on tablets.  All right, so that was designing for performance, making sure you set metrics even very early in the process and you know what performance you’re targeting, and validating from the customer’s point of view.  And now we’re gonna talk about understanding the impact of load on performance.

So, Michael talked a bit earlier and showed some of those poor performing apps or events that had happened on certain applications, and the Southwest one is actually a really great example, because Southwest had offered a sale, and that pushed a lot of traffic to their mobile application, and that traffic overloaded their mobile application.  And then what happened is people gave up and went to the website instead.  And then that surge of traffic overloaded their website.  And then they went to the phone lines, and it did the same thing on the phone lines.  So, what started as an issue in the app caused a cascading effect, because the mobile app couldn’t handle the load.

And so really the last thing that you want is for the moment of your greatest success, when you have the most people coming to your app and using your app, to be the moment of your greatest failure.  So, load testing is actually really critically important.  And the key with load testing on mobile applications is also to test from the handset, but it’s not feasible to have hundreds, thousands, even millions of handsets hitting a site at the same time.  That’s a very expensive test and it really doesn’t make a lot of sense.

So, you test from the handset but you drive traffic from the cloud, and what that means is you use a cloud service, Keynote has one of these too, where you can drive traffic against the backend, and at the same time you take a few handsets and have those handsets access the site.  That way you can see from those few handsets what the impact of load is from the customer’s point of view, while you’re driving all of the load in a much more efficient way on the backend.
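
A rough sketch of the cloud half of that setup, assuming the app talks to an HTTP API: a simple async load generator (shown here with Python’s aiohttp, an illustrative choice) hammers the backend while a few real handsets measure what users would see.  The URL and concurrency figures are placeholders.

    # Sketch of "drive load from the cloud": generate backend traffic while
    # a handful of real handsets measure user-perceived performance.
    import asyncio
    import time

    import aiohttp

    API_URL = "https://api.example.com/v1/search?q=shirts"  # placeholder
    CONCURRENCY = 100
    REQUESTS_PER_WORKER = 50

    async def worker(session, latencies):
        for _ in range(REQUESTS_PER_WORKER):
            start = time.monotonic()
            async with session.get(API_URL) as resp:
                await resp.read()
            latencies.append(time.monotonic() - start)

    async def main():
        latencies = []
        async with aiohttp.ClientSession() as session:
            await asyncio.gather(*(worker(session, latencies)
                                   for _ in range(CONCURRENCY)))
        latencies.sort()
        p95 = latencies[int(0.95 * len(latencies)) - 1]
        print(f"{len(latencies)} requests, p95 backend latency {p95:.2f}s")

    asyncio.run(main())

The backend numbers from a generator like this only tell half the story; the handset measurements taken during the same run are what show whether customers would actually feel the load.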

All right, and then lastly I wanna talk about building a performance practice.  So, building a performance practice actually doesn’t sound that hard, but as I’m sure many of you have experienced, it’s very difficult, and the reason for that is silos.  And no, I don’t mean grain silos but organizational silos.  So, I’m sure many of you are familiar with organizational silos.  They can be between different teams internally; they can be between outside agencies that may be developing apps, or third parties that you may be using within your application, like maybe your cart is from a third party.  There are gonna be silos between the business and the technology side.

There can be silos between QA, operations and development, and all of these silos make it difficult to really manage performance across the whole application lifecycle.  So, it’s important to be able to speak the same language and have the same goals and so the best thing to do to work through silos is to measure performance.  And so what that means is you need to agree on metrics across teams.

The other thing that’s really important about metrics is, if you’re starting this process out, don’t go crazy on metrics; you’re gonna have a data overload, so pick very few metrics, maybe one or two key transactions, and measure them across maybe one iOS and one Android device.  So, start small, and then make sure that you measure before and after release, and also make sure that these measurements are published to all teams.  That way, when there’re performance issues, everyone knows what they are and the teams can work together to resolve them, because typically performance issues can’t be resolved by just one group.
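
As a small illustration of "start small and publish the numbers," here is a sketch of a before-and-after release comparison over a handful of agreed metrics.  The transactions, devices, timings, and the 10 percent regression threshold are all made up for the example.

    # Sketch of a shared before/after release report: a couple of agreed
    # transactions on one iOS and one Android device, published to all teams.
    METRICS = [("login", "iPhone 6"), ("login", "Galaxy S6"),
               ("checkout", "iPhone 6"), ("checkout", "Galaxy S6")]

    before_s = {("login", "iPhone 6"): 1.8, ("login", "Galaxy S6"): 2.4,
                ("checkout", "iPhone 6"): 2.9, ("checkout", "Galaxy S6"): 3.6}
    after_s = {("login", "iPhone 6"): 1.9, ("login", "Galaxy S6"): 3.1,
               ("checkout", "iPhone 6"): 2.8, ("checkout", "Galaxy S6"): 3.5}

    # One simple table everyone sees; flag regressions over 10 percent.
    for key in METRICS:
        delta = (after_s[key] - before_s[key]) / before_s[key]
        flag = "  <-- regression" if delta > 0.10 else ""
        print(f"{key[0]:<10} {key[1]:<10} {before_s[key]:.1f}s -> "
              f"{after_s[key]:.1f}s ({delta:+.0%}){flag}")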

Now, another key thing here is that what I’m talking about, and a lot of the things I’ve talked about in this presentation, are not wholly owned by the testing organization, right?  We’re talking about making sure that you set targets early on, or making sure that the whole organization is involved in this process, but I do think that testing can play a leading role in ensuring performance through the development process.  So, for instance, if there are no performance requirements, ask for them, demand them.  It’s well within your rights to demand that if you’re going to be testing a mobile app, the product team, the business, tell you what performance they’re expecting.

And when it comes to putting a performance practice in place, you have to work with other organizations to do it, but the QA or testing organization can really take a lead in this: you know, figure out which metrics should be tested and then make sure that they get published to all the teams.  So, I think that’s a key in this too: it does require a whole practice, it does require all teams, but testing can really take a lead role.

Okay, so we’ve talked about four best practices for ensuring great mobile app performance.  Designing for performance, so making sure that you know up front, from the very beginning, what performance you’re expecting and that you set the performance requirements.  Validating performance from the customer’s point of view, so you really understand what the customer’s experiencing.  Testing for load, so you know what’s going to happen when you’re at the moment of your greatest success and can make sure that it doesn’t turn into the moment of your biggest failure.

And then lastly, building a performance practice that cuts across the functional organizations, so you continue to manage performance both before, in the early stages of development, and after it’s out in the field, and make sure everyone’s aware of the targets and the actual performance so they can work together to achieve good performance.  And if you can do all those things, then your customer that’s very annoyed at poor performance will turn into a happy customer, maybe a little bit of a drunk happy customer.  All right, so with that, I think we’re gonna open it up to questions.

I also wanted to mention that there’re a number of ways that you can find out more about the products that Keynote has to help you manage performance in mobile apps.  One is a mobile testing product.  We offer free trials.  Same thing for our monitoring product.  The great thing about these products is that they allow you to measure performance before, in the testing phase, and after, in the monitoring phase, with the same script.  So, that really ties together your performance process.

And then you can also reach us or learn more about these products on our website, and if you wanna talk to us specifically about a performance practice, feel free to send us an email anytime at mobiletesting@keynote.com.

Michael:                     

Yeah, and if you guys wanna talk now, you can go ahead and put in questions for us, and if you want us to call you to talk about these things, you can also just let us know that by entering a question into the question interface.

Also, one thing is if any of you are attending STARWEST coming up, we will be there showing off both of these products that we’ve talked about and talking about best practices around mobile app performance and we’ll be there through the 13th so we hope to see you there live and in person.  So, with that, I think we will go to the questions.

Josiah:                        

All right, well, thank you very much.  Before we start Q&A, remember, like Michael said, you can ask Michael and Rachel questions by typing in the field beneath the panels and then clicking the submit button.  We’ll try to get through as many questions as possible, and for those questions we’re unable to answer during the Q&A, some will be answered offline.  So, let’s start with this one:  Are you able to run the kinds of tests you’ve been talking about throughout the web seminar on iOS 9 and Android 6?

Rachel:                       

Yeah, thanks.  So, the answer is yes.  I mean obviously whatever OS version is out, we’re going to support.  We typically support it either the day it’s launched or very soon afterwards.  iOS 9, I think, is not quite out yet; it’s a few days from now.

Michael:                     

September 9, I was told, the GM was released.  It’s still just for developers only.  It’s not been pushed live yet.

Rachel:                       

Yeah, so as soon as that’s available live, we’ll support it.  We actually do, for some of our customers, load up beta versions on some private devices and allow them to test in our system before it’s live.  But for the general shared environment, we have a device cloud, and we typically wait until the OS is live and then we load it up on the phone.

Josiah:                        

All right, thank you very much.  Next question:  This person would like to know at what point you should decide that you need to build performance tools in‑house.

Rachel:                       

Okay, so that’s a tough question.  I guess what I would say is that if you are building performance tools in‑house, you may have a good reason for doing so.  Maybe there are certain things that you can’t measure with what you can get on the open market.  Maybe there’s some uniqueness to your application that means it’s important to be able to do things in‑house.  But what I would say is, even if you find that and you do build things in‑house, it’s still important to have a view of performance from external tools, and there’re a couple reasons for that.  One is that they’re just a bit more standard, and you’re also gonna be able to do things like benchmarking.

If you have an internal tool, it’s gonna be much harder to measure all of your competitors, but if you have an external tool, typically with external tools, you can do benchmarking studies like some of the ones that I showed earlier.  And it is really kind of important to understand your performance in relation to your competitors and also to what’s best practice in the industry.  The other thing is, just from most of our customers, we find that it’s somewhat inefficient to develop tools in‑house when there are a lot of tools actually on the market that allow you to measure performance.

So, I think, like I said, if there’re really some things that most of the tools on the market don’t support or there’s some uniqueness to your app, it may make sense to build things internally but for the most part, I think it makes more sense to go with external tools because you’re gonna get just a better view of performance across the market that way.

Josiah:                        

All right, great.  We’re getting a lot of good questions still coming in.  So, for the mobile device cloud, is it possible to simulate different network conditions in terms of bandwidth and other factors?

Rachel:                       

So, in the device cloud, we have devices that are on different operators and also have access to Wi‑Fi, so what you could do, for instance, is change a device from a Wi‑Fi network to a carrier network, you could go back, you could pick different devices that are on different networks, but we don’t have a way to simulate poor network performance, and the reason is because we very specifically put all of the devices in a place where we can get as good network quality as possible, because that’s more conducive to being able to do functional and performance testing.  You wanna know how your application is going to react when there’s good performance.

So, there are ways to do simulation of network.  I know there are tools out there.  They’re typically things that you don’t have out in a cloud but you have in a very controlled environment and you can do things like that.

Josiah:                        

All right, great.  Next question:  How do you measure performance of an app without degrading it by logging?

Rachel:                       

So, I’m not 100 percent sure what’s meant in this case by logging, but let me answer the question of how you measure the performance of an app without degrading performance by how you’re measuring it.  I think that’s kind of what’s meant here.  So, there’re a lot of different ways to measure the performance of an app without impacting it.  For instance, the way Keynote works is that we’re really just looking at the screens that – well, that’s not the only thing we’re looking at, but one thing we do is we look at the screens that appear on the device.  And so we are not degrading the device in any way; we’re just seeing what appears on the device and how long it took to get there.

So, we’re not putting anything in the app, we’re not putting anything on the phone, and so you’re not degrading or impacting in any way the actual performance of the application based on how you’re measuring it.  Now, there are some other techniques to measure performance where you put an SDK in the app, things like that.

In most cases, they don’t have a big impact on the app or they make sure that they always send data asynchronously so you’re still probably not gonna have a big impact on the application.  But I think it’s a good point that’s raised which is that whatever technique you use you wanna make sure that how you’re measuring performance does not impact actual performance.
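
To illustrate the asynchronous-reporting idea in the answer above, here is a minimal sketch of in-app instrumentation that hands measurements to a background thread so the recording call never blocks the app.  The ingest URL is a placeholder, and the design is deliberately simplified.

    # Sketch of "send data asynchronously": instrumentation puts measurements
    # on a queue and returns immediately; a background thread does the upload.
    import queue
    import threading
    import urllib.request

    _events: queue.Queue = queue.Queue()

    def record(name: str, millis: float) -> None:
        """Called from app code; cheap and non-blocking."""
        _events.put((name, millis))

    def _uploader() -> None:
        while True:
            name, millis = _events.get()
            data = f"{name}={millis:.0f}".encode()
            try:
                urllib.request.urlopen(
                    "https://metrics.example.com/v1/ingest",  # placeholder
                    data=data, timeout=5)
            except OSError:
                pass  # drop on failure rather than ever blocking the app

    threading.Thread(target=_uploader, daemon=True).start()

    # In a real app the process keeps running; this call returns immediately.
    record("screen.checkout.render", 412.0)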

Josiah:                        

All right, next question:  What are the most valuable metrics other than measuring time?

Rachel:                       

So, I think there’re two things.  One is availability and the other is time, and when you think about time, that is probably the most important metric, but the question is what time do you mean, right?  And so the important thing with time is to understand when the customer can do what they need to do, right, so when the screen comes up and when they can act on it in the way that they intended to act on it, and that’s not always straightforward.

So, it’s very different from saying hey, the backend delivered the data that was needed to the phone.  It’s when that data was presented on the phone that’s important and it’s when the customer can act on that next screen, whatever button they need to click actually is alive and available to be clicked on, that’s important.  So, I think that’s the concept of time that gets a little bit more complex is making sure that the time you’re measuring is really from the customer’s point of view, when they can actually see what they wanna see or do what they wanna do.

Michael:                     

Yeah, and I would also say, from my perspective from a marketing side, that time is the measurement, especially if you’re talking about performance, but if you’re talking about mapping key customer journeys in your app, think about what those multiple-page, multiple-screen transactions are that are really important to you, and really set those up as goals.

And so if you understand what the goals are, you can understand what the timing is for those complete transactions, and that really then gives you a much better indicator, you know, not just a number but how long it takes for this goal to be completed.  And, as you mentioned, if you can correlate it with other data, you can start really understanding exactly how much a second costs, and then you can start making decisions that are more informed, you know, having a budget for performance, right?  So, at what point do you want to spend the time to fix something, those kinds of things, making much more informed decisions based on business outcomes and performance.

Josiah:                        

All right, and this person would like to know what you would say your main differentiator is compared with other similar cloud-based solutions.

Rachel:                       

So, a couple of things.  I mean, one is that some cloud-based solutions are really designed to do offline automation, and that’s something that ours can do as well.  But the other thing that ours can do is allow you to interact manually with devices and also see what’s happening on the device.  And so, getting back to this idea that it’s about the customer’s perception, if you offline run a bunch of automation and you pull back some performance metrics, they can be indicative of, but are not necessarily the same as, what the customer’s experiencing.

So, you really wanna be able to get metrics and look at things from the customer’s point of view: see when the screen comes up, see when they can act on their next step, and so we allow customers to do that both in an automated fashion and also in a manual fashion.  You know, sometimes you run a test and you get a weird result that takes really long, and what do you do when that happens?  Well, you wanna go try it for yourself.  And so our cloud system allows you to do that.

Josiah:                        

All right, thank you very much.  This person would like to know how to replicate [inaudible][00:46:18] your bandwidth and a Wi‑Fi performance type and condition?

Rachel:                       

So, I think we’ve talked about that one a bit before.  I mean, in the cloud, all devices are hooked up to Wi‑Fi and a lot of them also have carrier SIMs as well, and so you can very easily, just like with any other device, go onto the device and say okay, put it on the Wi‑Fi network, and now turn off Wi‑Fi and then it’ll go to the carrier network, and you can test in both modes very easily.

Josiah:                        

All right, great.  We have a little bit of a longer one here.  So, what are some examples of performance metrics, like loading and log-in screens should be about 500 milliseconds, log in should happen at 1 millisecond, etc.?  This person is trying to get an idea of what the typical benchmarks are.

Rachel:                       

Yeah, so that’s a good question.  So, maybe can we go back to Slide – what slide was that?  So, the easiest answer is the three-second rule.  You’re not gonna go horribly wrong with the three-second rule.  I think it was, yeah, this one, thank you.  So, again, it depends on a lot of things.  It can depend on your industry, it can depend on what the transactions are that you’re looking at, different transactions have different expectations, but if you look at this benchmark data, what this really showed is that for most mobile apps, and this was in the travel industry, each thing that you do, so each new screen that you go to, is averaging between three to four seconds.

So, if you use three seconds as a general rule that everything has to happen within three seconds, I don’t think you’ll go horribly wrong.  Now, that being said, there’s always the danger of using one metric all the time because you can see here in this example of search results, you actually see that on average it’s taking over eight seconds or is that, yeah, around seven seconds, more than seven seconds, and I think most customers expect search sometimes to take a little bit longer.  Same thing with doing a transaction like you’ve put in all your credit card information and you’re hitting buy now.

I think sometimes people expect that transaction where it’s checking your credit card information and coming back to maybe take a little bit longer.  So, use three seconds in the absence of having anything else, but if you can do benchmarking so if you can look at your competitors, look at what’s out there in the industry, get data on what the best performers are doing, if you can use that instead, you should.

And then lastly, getting back to what I was talking about earlier, if you can do a correlation between actual business results and your performance and see where performance is actually impacting your business results and how performance is impacting your business results, that’s by far the best thing to do but that’s a harder thing to do so in the absence of being able to do that then benchmarking is great and in the absence of benchmarking go with the three-second rule.

Michael:                     

Yeah, and I would say that probably nobody would really know from the get‑go what your performance-to-business-outcome relationship is, and really what you have to do is pick a number and start measuring, and certainly use an outside-in tool like Keynote offers, being able to come from a real handset over a real carrier network, and then also correlate that with some of your business outcomes data.

So, if you had Mixpanel or something like that running in your app, to be able to get the actual in-app metrics and be able to track through those user journeys and see where the successes happened, because then you’ll start being able to correlate, you’ll start noticing hey, you know, when we saw this one-second uptick in this one user journey, we saw a downtick in conversions.  And that’s where you can start really putting a cost to the actual numbers, you know, what does half a second cost, what does a second cost?

Josiah:                        

Thank you very much for that explanation.  So, we do still have some time for a few more questions if anyone would like to ask any more.  We have one right here:  Ideally, how many devices should you test performance on?

Rachel:                       

So, that’s another one that depends.  I mean, ideally you should test on as many devices as you can, or as many devices as your customers use, but practically no one necessarily has time to do that.  So, I would say look at the traffic from all of your devices.  If your app is targeted at iOS, measure on as many devices and OS versions as you can.  So, if most of your customers are on iOS 8 and some percentage are on iOS 7, maybe measure on one of each of those.  If you have an iOS app and an Android app, pick the most popular Android devices as well that your customers are using, and try to get coverage across the OS versions.

If you’re targeting tablets as well, then make sure that you measure on a tablet.  So, I think that you’d probably get very good coverage if you’re somewhere between five and ten devices, and worst case, make sure you do two, because you wanna at least have a view of iOS versus Android at the very minimum.
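
One way to make that device picking systematic is sketched below: sort device/OS combinations by traffic share and take the top entries until you reach a coverage target, checking that both platforms are represented.  All of the traffic numbers are invented for illustration.

    # Sketch of picking a small test matrix from traffic data.
    traffic = [  # (platform, device, OS version, share of sessions)
        ("iOS", "iPhone 6", "iOS 8", 0.34),
        ("Android", "Galaxy S6", "Android 5.1", 0.18),
        ("iOS", "iPhone 5s", "iOS 7", 0.12),
        ("Android", "Nexus 5", "Android 6.0", 0.09),
        ("iOS", "iPad Air", "iOS 8", 0.08),
    ]

    def pick_matrix(rows, target_coverage=0.80, max_devices=10):
        chosen, covered = [], 0.0
        for row in sorted(rows, key=lambda r: r[3], reverse=True):
            chosen.append(row)
            covered += row[3]
            if covered >= target_coverage or len(chosen) == max_devices:
                break
        platforms = {r[0] for r in chosen}
        assert {"iOS", "Android"} <= platforms, "cover both platforms minimum"
        return chosen

    for row in pick_matrix(traffic):
        print(row)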

Michael:                     

Yeah, a good trick also is, of course the best data would be if you had analytics put in the app, but if that’s not done yet, you can also go to whoever runs the website and get the data from them, because they most likely will have the data on which mobile devices, which versions and which OSs are hitting the site, and you can use that as a quick proxy to understand what devices and what OSs you need to test against.  And it’ll probably be close enough until you can actually get some in‑device measurements, in-app measurements, going.

Josiah:                        

All right, and we have another question over here.  This person would like to know how to distinguish between the performance of the UI and the performance of the backend API.

Rachel:                       

Yeah, so the easiest way to do that is to measure two ways.  So, one way is to measure the way I’ve been talking about, which is at the handset, to see what the actual customer experience is, and then separately ping the API.  You can do just API testing and see how long that’s taking, and if you see a big difference between what’s happening on the handset and what’s happening with the API, then you know it’s probably some sort of UI issue; and if you see that they’re really very close in time, you know, that the API is taking 500 milliseconds less than what’s happening on the handset, then that’s probably an issue with the API.
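
A bare-bones sketch of that two-way comparison, with a placeholder API URL and an assumed handset measurement: time the API directly, subtract it from the end-to-end handset time, and the size of the gap suggests which layer to investigate.

    # Sketch of separating UI time from backend time: time the API directly
    # and compare against the end-to-end time measured on the handset.
    import time
    import urllib.request

    API_URL = "https://api.example.com/v1/search?q=shirts"  # placeholder

    start = time.monotonic()
    urllib.request.urlopen(API_URL, timeout=10).read()
    api_s = time.monotonic() - start

    # End-to-end time measured on the device, e.g. via scripted automation;
    # this figure is assumed for the example.
    handset_s = 3.2

    gap = handset_s - api_s
    layer = "UI/rendering" if gap > 0.5 else "backend/API"
    print(f"API {api_s:.2f}s, handset {handset_s:.2f}s -> look at {layer}")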

Josiah:                        

All right, well, that’s actually all the time we have for questions today so that’ll end our event.  I’d like to thank our speakers, Michael and Rachel, for their time.  I’d like to thank Keynote for sponsoring the event.  Also a special thank you goes out to you, the audience, for spending the last hour with us.  Have a great day and we hope to see you at a future event.

Duration:  54 minutes
