Seven Steps to Remove Barriers and Accelerate Mobile Testing | Keynote

Seven Steps to Remove Barriers and Accelerate Mobile Testing

About the Webcast

Even though we’re seven years into the “mobile revolution,” companies have various approaches to mobile testing, and many are still trying to figure it out.

If you look at highly effective mobile testing programs, you'll find testing happening earlier in the process and often integrated with open source frameworks and tools. You see greater collaboration between dev and QA in mobile project teams, and you see QA professionals dedicated to the special requirements of mobile.

Join us to discover new techniques for speeding up testing by removing barriers—to collaboration, to coverage, to builds, and to change.

You’ll learn how to:

  • Move to continuous integration in mobile release cycles
  • Apply test automation to the mobile application environment
  • Prioritize the steps to take in your journey toward better mobile quality
  • Create process harmony between development and QA teams
  • Prepare for the demands of mobile form factors yet to be imagined

Webcast Transcription

Josiah Renaudin:        

Hello and welcome to today’s web seminar, Seven Steps to Remove Barriers and Accelerate Mobile Testing, featuring Chris Karnacki, Senior Solutions Consultant at Keynote.  I’m your host and moderator Josiah Renaudin.  Thank you for joining us today.  I’d like to introduce and pass the controls to our speaker Chris.

Chris Karnacki:

Thank you, Josiah.  Good morning and good afternoon everyone.  Welcome and thank you for joining our webinar today, Seven Steps to Remove Barriers and Accelerate Your Mobile Testing.  I’ve been in the mobile space for many years.  In the last few, I’ve focused primarily on the quality area since mobile is such a disruptive change and an opportunity for all of you out there dealing with quality and being tasked with looking at how we do and deal with mobile.

I’d like to comment that there are a few different flavors of mobile out there.  There are, of course, mobile web-powered applications that are built very much like, and can be viewed as an extension of, everything that’s going on in the desktop world.  However, there are also native mobile applications, and from a technology perspective, if you look at some of the other trends going on in this space, there’s a lot of debate over which is the best approach.

But regardless, the number of native mobile apps that are being built and deployed today is tremendous.  It’s almost unmistakable that there’s a general trend that is starting to happen and there are reasons for that that we won’t explore today.  The core of today’s focus that we’ll talk about is how to speed up your mobile testing regardless of whether you’re creating a mobile website or native application or both.

So, the first premise that I have is that you can look at mobile testing in two ways.  And when you think about native apps, you can say well, we need to do things as we’ve always done things or maybe we need to look at it differently.  Again, the premise that I have here is that mobile testing is different.  It’s fundamentally different, and that’s for a few reasons.  That’s what we’re gonna talk through here.

First, the process is different.  If you think of traditional applications and how they’ve been delivered over the years, agile has really taken over from waterfall, and there’s even an accelerated version of agile that I see happening with most of our customers and the communities I work with today when it comes to their native mobile apps.  Clearly, it’s a very, very quick iterative structure, but it’s the velocity at which the apps are created that drives the different process.

If you think about the process itself, I believe the life cycle is getting redefined for native mobile apps.  There is an ideation phase, and the interaction between designers, developers and the business owners of those applications is very different from traditional development.  It’s driven by user experience, because that user experience is so important.  So, there’s the ideation phase.

And once that phase is complete, it moves to something called design-driven development: quick iterations between mockups that need to be translated into different kinds of frameworks, where you separate the logic of an application from the actual user interface, and where designers can work completely independently of the developers.

The point here is if you’re in a quality role, you need to understand that iterative design process that’s happening.  The notion that just taking an existing web application and trying to cram it into a small form factor tablet or smartphone isn’t going to work.  Processes and the overall flow of an application need to be rethought, as well as the capabilities, many of which didn’t exist in the world before.

So, ideation flows to design-driven development, which many times flows to something you’re all familiar with: continuous integration, in a broader category that I call continuous delivery.  This is where a lot of testing can start to participate in the process.  Quality participates in an iterative approach where you’re integrating your build and SCM system with your automated testing, so that when a check-in of code happens from your developers, it can kick off a test or a set of tests as part of what is hopefully an automated process that drives that continuous flow.

But it doesn’t stop there.  If you think about after the design-driven development and continuous delivery and you think about the notion of what kinda feedback do you get from production, there’s no other type of system application delivery of technology to date where there’s so much visible feedback as there is with native mobile applications.  Whether it’s from an enterprise app store or more importantly a public app store, getting feedback directly to the teams of developers and testers is key.  We call that experience-driven analytics and this loop reflects that back.

Think of this almost like a window for developers and testers to look into production to see how the apps are being used, what the adoption rate is, what are the crashes and exceptions that are happening and what are the breadcrumbs or trails that lead people to those problems that we should be thinking about from a development or user journey experience that we need to account for during development and testing.  And we can potentially create regression scripts to use later once those issues are found and fixed.  As I’ve shown here, the process for testing mobile is quite different.

The next point I would make is that the team sizes are very different.  What we see most often is there’s not a typical makeup of a native mobile app team.  It’s usually two to three developers, one to two testers typically doing manual testing.  This is what we see more often than not for these projects and there’s a designer and an architect that float between different projects so they’re not necessarily synced to a specific application or service.  It’s a very different size team that we traditionally see with desktop applications or desktop web.  It’s much smaller and what I think is interesting is there’s a different interaction based on that smaller team size as well as the different process you see at the top.

Next is the frequency of releases of the applications is very different between traditional desktop and now mobile.  There was a survey where we asked over 5,000 enterprises – these are enterprise customers not consumer developers or developers who are creating apps for B2C, these are B2B apps.  They’re enterprise centric and these 5,000 project teams that are working on the mobile applications responded.  In the survey, the question that we asked was very simple.

It was how frequent are your application releases?  Are they weekly, biweekly, monthly, quarterly or yearly?  The red section on that chart shows that their releases are more frequent, and these are not just bug fixes.  These are new releases with new features and capabilities that are being put into the native app on a monthly basis or even faster.

Greater than 50 percent of those 5,000 enterprise project teams that were queried said they’re releasing their native mobile apps on at least a monthly basis, if not weekly.  This is, of course, much different than traditional desktop applications or traditional desktop web.  Now, that kind of velocity puts a lot of pressure on quality.  It puts a lot of pressure on the development team and on the iteration and innovation that you put into your applications.

The next reason I believe testing for mobile is fundamentally different is that the complexity and the fragmentation of devices and form factors is, of course, much greater on mobile than in the desktop world, with all that you ultimately have to test for.  All the different configurations, the different versions of the operating systems, screen sizes, resolutions, different carrier overlays or OEM overlays from the likes of Samsung and HTC and others.  If you think of all those configurations and things you need to test for, it’s really starting to explode in mobile.

Those variations and the fragmentation in Android, not only again on form factors but device manufacturers, how the actual API and Java are implemented can be different.  It becomes a very, very big challenge.  In iOS, it is a little bit better.  And people used to say, “Well, it’s so much simpler in iOS.”  When you just look at all the different devices that Apple’s expanding into, not to mention the new Apple Watch that just came out, add all that together and iOS is no longer such a friendly environment for testing.

And then we look at someone like Microsoft, fiercely making a run at being the third Tier 1 player in the market, especially in the enterprise space.  Testing on all these creates a new headache that wasn’t felt prior to the explosion of mobile and mobile applications.  These all need to be accounted for and tested prior to delivery.

So, one of the most important elements, and why this is different, is user expectations.  When HTML and the web first really came out, there was a seven-second rule.  Users would tolerate up to seven seconds for a site to load or for a page to be rendered or usable.  Now, with new users, mobile devices, mobile applications and mobile sites comes a whole new set of expectations.  Many of these users weren’t around at the beginning of the Internet era and have only ever used mobile devices.  Now, we see generally a three-second tolerance.  Keynote may argue that it’s really two seconds of tolerance on mobile devices.

The tolerance is at an all-time low and the expectations are, of course, at an all-time high.  The reaction to a poor user experience is very clear.  People not only stop using a particular app if they have a bad experience; more often than not they’ll never go back to it, or they’ll actually just uninstall it from their mobile device.  That’s, of course, the consumer world, but there is a direct correlation to the enterprise world as well.  People just won’t use the application if they have a poor experience.  I’ve seen it happen many times.

Enterprises kinda made their first foray into making a mobile application, and they’re confused as to why nobody was using it.  Why were their employees, their partners, their vendors not using that application?  Because people are experts in their own domain and know their own subject matter, but they don’t want to struggle to use a mobile application on a device.  It should just work.  They don’t want to have to spend time figuring out horrible interfaces on enterprise-developed applications.

So, expectations have changed drastically with mobile.  The implication of all this is that quality is quite different.  The expectations are different, the process is different and the team sizes are different.  This means there has to be much more collaboration in the mobile world between developers and designers and the folks responsible for quality.  That quality role is still a pivotal role to make sure the app is not just working, but working better than expected, that it blows away the expectations of the users both from a functional as well as a performance aspect.

So, we need to mix together our skills of each team, we need to collaborate and we need to be consistent.  So, that’s where you are today.  The question then, of course, that comes to mind is great, so what do I need to do whether I’m general quality, whether I’m mobile-specific quality or even those of you out there that are a developer?  What do I need to do in order to be successful?

When I think about quality and mobile apps, I think there’s a typical Rule of Seven.  And as most of you probably know, there’re a lot of Rules of Seven out there.  We’ll list a few of them:  Seven Steps to Seven Figures, Seven Rules of Life, Seven Steps of True Love.  Today, we’re gonna add a new rule of seven: our Seven Steps to Mobile Quality, and I’ve seen it done at some really innovative customers.  Step 1 is to think like mobile users.  There’re some incredible statistics out there.  In the next five years, over half of the workforce will be millennials.

When they look at their tablets or phones, they have a much different expectation because they grew up using native mobile apps and mobile devices.  The implication here is that the user is very different and their needs and requirements are very different than those of the past.  One of our customers today who works in the communications area, they actually go out and they look specifically to recruit for their mobile quality individuals, people with gaming type backgrounds.

These are people who always try to work around and improve scores.  They look for workarounds maybe around a given path, a user journey.  They may try to create things, find Easter eggs, not just follow down the traditionally beaten path.  It’s very interesting to think of it that way, but one of the common threads is that when it comes to the world of mobile, I shouldn’t expect any kind of manual to come with an application that I’m going to use.  I should just be able to pick it up and use it without having to read anything, without having to learn anything and that’s really important.

That means that the intuitive nature of what’s being created has to be there, and that’s one of the core approaches from a quality standpoint.  I need to think like a user of the application that I’m creating and developing.  So, it’s important to understand what the key user journeys are, and you can only know that if you’re part of the design-driven development phase that’s done upfront early in the ideation process.  So, that can be a part of whatever type of test plan you put together.  Then there’s the negative aspect.  Again, one of the reasons why this one particular customer is hiring a different profile of testers is because they think differently.

They’re thinking about what’s not working, and for that, I think it’s very interesting to think about different types of testing.  Of course, there seems to be this notion of traditional, functional and regression testing that needs to happen, whether manual or automated, but there are other types of testing to be done as well, exploratory testing, negative testing, all these types can be an important help to meeting higher expectations for mobile.

Another step, Step 2, and one that comes up quite often is that in order to test and think like a user, you must use real devices.  Now, emulators play a key role too.  I get asked a lot, “Can’t I just use an emulator?  Do I really need a real device?”  My opinion, when it comes to emulator versus real devices is that you have to use both.  Both play a very critical role and both should be leveraged by your teams.

When people are going through that first step I mentioned in the process, before design-driven development, when they’re going through continuous integration, there’s a level of responsibility falling increasingly on the developers of mobile applications, where they need to be doing some testing.  The iterative nature of development and design requires using emulators.  Testing can be a key part of that, so it’s important that you add an element or step in that process, but nothing is a substitute for real devices.

And I think at some point, it becomes problematic for any organization unless you’re in the Telco Industry and you have access to an unlimited number of devices, but you need to have those devices and have those devices available for you, especially with new versions and form factors coming out all the time.  It’s important to manage that, and you don’t want to have critical key expensive resources standing in line at the Apple store, standing in line at your favorite retailer to get the latest and greatest devices.  You need access to those devices.

The second thing to think about is that a lot of testing organizations are working in a distributed manner, so it becomes even more complicated if you’re trying to provision those devices locally yourself.  Why wouldn’t you wanna use and leverage the Cloud as part of that also?  My opinion is both emulators and real devices are necessary and needed to successfully test your mobile applications.  As more and more application development is outsourced or done by different teams in different parts of the world, you need to be sure that you have the ability to collaborate and give feedback between development and QA and, of course, production.

And that can only be done by using real devices and really my opinion is real devices and the Cloud so that all of your testers, your developers, even production individuals have access to the same consistent set of devices.  There’s no worry that someone in the U.S. is using one version of an iPhone or an Android device and someone offshore may be using something slightly different and not realize it.  Using real devices out in the Cloud eliminates any of the ambiguity or uncertainty.

So, just imagine trying to provide an Apple Watch for all of your local and remote resources and then hoping they don’t forget to put them back in the drawer at the end of the day.  You need access to real devices without the grief, the hassle, the overhead and expense of managing a lab or managing a drawer or a closet full of devices.

So, we talked a bit earlier about the types and profiles of the people doing the testing.  Now, it’s time to talk about what types of testing to do.  The broad categories of functional and regression are important, but when you really get into the velocity of these apps that are rolling out, it’s critical to understand that there are some types of tests that are really important and more often than not, I’ve seen a lot of people do some level of smoke testing upfront before more detailed versions of test plans are engaged in just to make sure it’s really QA worthy.

Is it really worthy, or should it be worked on before we engage and spend time and expensive resources on a more rigorous testing process and run through an entire test plan?  I do, of course, think that smoke tests and sanity tests, which are different, play a really important role in that area.

Another question that comes up a lot is what should that smoke test include?  Well, it should include the basics.  It should include key user journeys, maybe login, logout, common actions you can expect all of your users to perform.  Example of a retail banking app, you want to make sure that your smoke test includes, again, launching the application, logging in, maybe doing an ATM location or find an ATM, checking an account balance.

These are things that all users are going to do so we should make sure that those are working and they’re functional before we engage in and spend those precious resources doing a more rigorous, detailed test.  If those do run, then more importantly you can automate them.  It gives you a good foundation to move forward with more investment of time and resources and expanding your testing.
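As a sketch, the smoke test described above, launch, log in, find an ATM, check a balance, can be captured as a short automatable script. This is a hedged example: the journey names are hypothetical placeholders for whatever device-automation client your team actually uses (an Appium session, for instance), not a real API.

```python
# Hypothetical smoke test for a retail banking app. The journey
# names below stand in for real device-automation calls (e.g.
# methods driving an Appium session); only the structure matters.
SMOKE_JOURNEYS = [
    "launch_app",     # app starts without crashing
    "log_in",         # credentials accepted
    "find_atm",       # ATM locator returns results
    "check_balance",  # account balance renders
]

def run_smoke_test(driver):
    """Run each key journey in order; stop at the first failure.

    Returns (passed, failed) lists so the team can decide whether
    the build is worth more rigorous, expensive testing."""
    passed, failed = [], []
    for journey in SMOKE_JOURNEYS:
        try:
            getattr(driver, journey)()
            passed.append(journey)
        except Exception:
            failed.append(journey)
            break  # later journeys depend on earlier ones
    return passed, failed
```

If every journey passes, the build is QA worthy; a failure on something this basic means the build goes back to development before any detailed test plan is run.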

So, if you’re gonna spend time to run through this, why wouldn’t you automate it?  I think it’s important for QA testers to engage their development counterparts.  So, as this slide says, you want to get your developers involved.  There’s so much that can be done upfront when it comes to unit testing, when it comes to integration testing, when it comes to continuous delivery process that I mentioned before and, of course, there’s this long-held belief that he or she who created the application should not be trusted to test it.  There may be truth to that.

In the world of mobile apps, there’s a level of quality that has to be there, that has to be in place before you as the formal quality organization get this in your hands.  And you know the truth in mobile is there’re a lot of different elements in the quality process given the release velocity and the relatively small size of the teams that I speak with.  This puts the developers in a robust, unique position to participate in this process with a lot of different and new changes, new innovations in the world of mobile, advances that make this much more relevant for both quality and development.

I’ll give you an example.  The chart I’m showing here is the cost of fixing a defect.  So, what does it really cost to fix some bug or defect that’s found during testing?  What are the number of defects that are injected in different parts of the process?  And what is the cost of fixing it at the point in which that defect is injected into the process?  So, if you find a defect during the coding phase, for example, it costs 25 bucks to fix that.  The majority of problems and defects are found during the coding phase versus problems that are identified during functional unit tests, system tests, etc.

You can see that the cost of fixing those defects later skyrocket so why wouldn’t you want to do and perform that testing?  So, we get the developers involved to start catching those bugs and defects during that coding phase, that development phase when it’s much, much cheaper to fix those defects and generally easier.
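The arithmetic behind that chart can be sketched in a few lines. The $25 coding-phase figure comes from the talk; the later-phase costs here are assumed multipliers, chosen only to illustrate how fixes get more expensive the later a defect is caught, not quoted from the chart.

```python
# Illustrative defect-cost arithmetic. Only the $25 coding-phase
# figure is from the talk; the later-phase costs are assumptions
# used to show how late-caught defects compound in cost.
COST_PER_DEFECT = {
    "coding": 25,
    "unit_test": 100,
    "system_test": 250,
    "production": 1000,
}

def total_fix_cost(defects_by_phase):
    """Sum the cost of fixing each defect at the phase it was caught."""
    return sum(COST_PER_DEFECT[phase] * count
               for phase, count in defects_by_phase.items())

# Under these assumed numbers, catching ten bugs during coding
# costs $250, versus $2,500 if the same ten slip to system test.
```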

I mentioned earlier advances in innovations in mobile that can help developers and testers upfront.  You may have heard of the Application Performance Management or APM space.  There’s now, of course, a quickly evolving mobile Application Performance Management space or MAPM.  Here, the notion of real user monitoring is key.  The information you can capture from production and provide to your testing and development teams is absolutely astounding.  There has never been this level of insight in the past.

You can actually see where an app crashed and why it crashed.  You can see what the user was doing, what the device was doing, even what the network on that device was doing.  In the world of native mobile apps, you can see down to the line of code that caused that crash and then be sure to not only fix it but use it to create a test to ensure it does not reappear and use it as a regression test.  All of this leads to faster, better feedback making it back to development and QA from production.

Step 5, I don’t think I can stress enough the importance and the relevance of automation.  I do think the No. 1 problem is how people dive headfirst into automation when they get started.  Everyone understands the reasons why:  the savings on costs, the ability to manage the complexity I talked about earlier, the need for regression sweeps, building it into their continuous integration process.  But what I see far more often than not is a lot of folks pick up a new automation product, dive headlong into it and start creating very, very complex automation test scripts.  And my opinion would be to start very simple.

Start with a very object-oriented approach of creating smaller snippets of the test and then being able to reuse those over again and again.  It just doesn’t make sense to create very complex, very painful automation scripts.  This poor approach to automation has led to a black eye for automation in general.

So, start slow with the basics and create a very simple set of automation assets around those things we spoke about earlier.  It could be launching an application, logging in, maybe checking a balance if it’s a retail banking application.  If you’re going to take the time to create automation tests, why not start with the things users do most frequently?  That’s a foundation that makes sense to build on.
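The object-oriented, reusable-snippet approach mentioned above can be sketched like this. It is a minimal illustration: the driver and its single `do` method are hypothetical stand-ins for a real automation client, and the point is only that tests compose from small steps rather than re-implementing each flow.

```python
# Sketch of the "small reusable snippets" idea: each step is a tiny
# function taking a driver, and full tests are compositions of steps.
# The driver's `do` method is a hypothetical placeholder for real
# automation calls.

def launch(driver):
    driver.do("launch_app")

def log_in(driver):
    driver.do("log_in")

def check_balance(driver):
    driver.do("check_balance")

def compose(*steps):
    """Chain reusable steps into one runnable test."""
    def test(driver):
        for step in steps:
            step(driver)
    return test

# The same snippets are reused across many tests instead of every
# script re-creating launch and login from scratch.
smoke_test = compose(launch, log_in)
balance_test = compose(launch, log_in, check_balance)
```

When a login screen changes, you fix one snippet and every composed test picks up the change, which is what keeps the scripts from becoming the "very complex, very painful" kind.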

So, Step 6.  And I kinda walked us into continuous integration.  I mentioned this master process for a native app, which has the ideation phase, design-driven development, continuous delivery and the analytics-driven feedback from production.  Well, that continuous delivery phase has at its core continuous integration.  And you know who needs it?  Everybody.  Everyone needs continuous integration.

If you’re going to take the time to create automation around smoke or sanity tests, why wouldn’t you put in that extra effort and build it into the continuous integration process?  It doesn’t really make sense not to extend your existing CI process into mobile.  The video playing here shows how you can extend your existing process using Jenkins into mobile and trigger a set of automated test scripts at the end of an application build job.  So, what you’re seeing here is a video of a Jenkins build process or job, building a native application, maybe it’s an IPA file for an iOS device or an APK file for an Android device, and traditionally, the build job or the Jenkins job stopped when that application was built.

But what we’re doing here is tying that in and extending it into our mobile testing process.  So, now rather than just stopping with the output of an APK or an IPA file, we’re actually taking that file, we’re installing to real mobile devices in the Cloud and then triggering a series or set of automated test scripts.  And now the results of those test scripts will be brought back into Jenkins so that you can see the stability of that build.

So, it’s really taking what you’re doing today, probably for both desktop and hopefully mobile, using Jenkins, Bamboo, some other CI system and extending beyond its walls and further automating that process, not stopping with the output of an application but stopping with the output of results of your actual tests, again, whether it’s smoke and sanity tests or functional regression tests against those real devices in the Cloud.
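The post-build flow just described can be sketched as a short script. This is a hedged example: the `cloud` object is a hypothetical device-cloud client, not a real Keynote or Jenkins API; the point is the shape of the flow, upload the artifact, run the scripts on real devices, and let the results decide the build status.

```python
# Hedged sketch of extending a CI job past "the APK/IPA was built".
# The `cloud` client and its methods are assumptions, standing in
# for whatever device-cloud API your team actually integrates with.

def post_build_stage(cloud, artifact_path, scripts):
    """Install the freshly built app to cloud devices and run scripts."""
    app_id = cloud.upload(artifact_path)  # e.g. build/app.apk or .ipa
    # Run each automated script against the installed app; collect
    # a pass/fail result per script to report back into the CI job.
    return {name: cloud.run_script(app_id, name) for name in scripts}

def build_status(results):
    """The build fails unless every device-side script passed."""
    return "SUCCESS" if results and all(results.values()) else "FAILURE"
```

In a Jenkins or Bamboo job this would run as the step after packaging, so the job's green/red state reflects how the build behaved on real devices, not merely whether it compiled.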

Step 7 is to enable you and your development and QA teams to do functional testing in this Cloud.  You’re going to need a platform for both developers and quality to use to achieve your desired outcomes.  Of course, those outcomes as we spoke about earlier, are to not only provide a fully functional app but that it’s functioning and performs not just to expectations but well beyond expectations because those expectations are constantly getting higher and higher.

You need to enable both your local and remote resources to test, of course, both manual testing and automated on a complete library of real devices.  I mentioned earlier that this can help take away the uncertainty that testers are not using the proper or same devices from test cycle to test cycle.  It helps to improve your coverage and quality of your functional testing for both native mobile apps as well as mobile websites.  We, of course, have been talking about native applications so far, but the same sort of testing in the Cloud can be leveraged against your mobile websites as well.

Now, your teams may have a variety of skill sets.  You may have very skilled automation engineers that want to get their hands dirty in code and do things programmatically.  You may have more junior members of your team that would prefer a UI or a WYSIWYG interface.  So, you need a testing platform that fits that varying skill set.  So, it’s important to have the ability to quickly record the test scripts, especially if you’re following agile methodology, you’re only running two-week long sprints.  You don’t want to spend a lot of time creating an automated test only to throw it away later.

So, you need the ability to quickly record scripts, maybe use a very visual drag and drop WYSIWYG interface or to write scripts at the code level for those individuals, developers or quality professionals that want to get their hands dirty at a code level.

Now, revisiting the SDLC for mobile applications, we recall that it’s imperative to have integration with existing tools and frameworks.  So, you already have a set of tools that you’re using for mobile, desktop and desktop web, and we want to be able to utilize those existing resources and skill sets.  You need a platform that allows you to supplement or add on to your existing process and doesn’t force you to learn a whole new tool, process, procedure and skill set.

You need something that’s gonna plug right into what you’re already doing today but allow you to do it more efficiently and with less cost.  And, of course, at the heart of any automation platform, you need flexibility to both do GUI and object-level interactions and validations.  This is really two‑fold.  First, it allows you to create a very resilient and robust test strategy.  Object-level testing will help you create a single test for all devices in your test plan and save you precious, expensive resources and time.

The other thing this flexibility does is it allows you to not only test the functions of your application and kinda the flow test, can I get from Point A to B to C and so on, but you also want to make sure that your user is being presented with the proper information, the proper branding and ultimately the proper experience because that’s what reflects back on your enterprise and your brand.

So, having the ability to do GUI-based testing so validating that the pictures on the screen, the buttons are rendered properly, that the text is readable, all those sorts of things are what the user, your end customer and end user, are actually using to judge your application.  So, you need a platform that has the ability and the flexibility to let you do both.

Most of my customers are doing just that.  They’re using object-level testing to get from, again, Point A to B to C, but they will also add visual checkpoints because at the end of the day, that’s how your customer, your user, experiences your brand and your application.  They have their iPhone; they have their iPad, their Android phone, Windows phone, whatever it may be.  They’re concerned about what they see on the screen.
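A visual checkpoint of the kind described can be illustrated with a toy pixel comparison. Real tools compare actual screenshots with far more sophistication; here a "screenshot" is just a flat list of pixel values, and the 2 percent tolerance is an assumed threshold, not a recommendation.

```python
# Toy illustration of a visual checkpoint to pair with object-level
# steps. A "screenshot" here is a flat list of pixel values, and the
# default 2% tolerance is an assumption for the example only.

def visual_checkpoint(screenshot, baseline, tolerance=0.02):
    """Pass if no more than `tolerance` of pixels differ from baseline."""
    if len(screenshot) != len(baseline):
        return False  # different resolution: treat as a failure
    diff = sum(a != b for a, b in zip(screenshot, baseline))
    return diff / len(baseline) <= tolerance
```

The object-level steps get the test from screen to screen on any device; a checkpoint like this is what catches a button rendered off-screen or a logo that failed to load, which object locators alone would miss.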

Now, you also need access to devices with both old and new versions of the operating systems.  So, just because Android Lollipop 5.0 came out doesn’t mean that your users still aren’t using Android 2.3 and 4.0 and 4.4, whatever versions they may have.  And even with iOS, although it has a much quicker adoption rate of new versions of iOS, not everybody’s running iOS 8.  Some people can’t.

Not everybody’s running 8.1.3 or whatever today’s latest may be so you need to have the ability to access devices that have both old and new versions of the operating system.  Again, this can be done in the Cloud.  You also need support for new OS on Day 1 or earlier.  So, you need to find a Cloud that gives you the ability to test on pre‑released versions of the operating systems.

So, for iOS 9 let’s say, when Apple releases the beta for iOS 9, you’d like to start testing that on real devices, not just on simulators, not just on phones in your hand because you’d like to, again, enable your local and remote resources to do that testing on that new version of the operating system before it’s available.  If you wait until the day it’s available and something happens, your application does not work with that new version of that operating system, your customer’s experience is going to be horrendous.  So, support for testing of that new operating system early or before Day 1 is crucial.

So, I hope you found today’s webinar useful.  Again, my hypothesis is that mobile is truly different.  It’s opened up not only a new set of challenges but also opportunities.  When you think about these new user expectations, the process of engagement, the value of automation, and what you want in a partner and vendor to help deliver these quality applications, there are really a lot of new opportunities to raise and elevate your own roles within your organization.  So, I think we can open up for any questions that are out there.

Josiah Renaudin:

All right, thank you very much.  Before we start Q&A, everybody can ask Chris questions by typing them in the field beneath the panel and then clicking the submit button.  We’ll try to get through as many questions as possible, but for those questions we’re unable to answer during Q&A, some will be answered offline.  All right, the first question, how could enterprises test pre‑released applications or mobile sites that are not publicly accessible from a Cloud?

Chris Karnacki:

That’s a very good question.  Most organizations and enterprises that I deal with have their pre‑released applications, their pre‑released mobile sites, of course, that they wanna test.  They’re not gonna wait until it’s out to the public, but they are firewalled.  They are not accessible from the outside world from a traditional Cloud.

So, you need to find a Cloud that gives you the ability to be flexible with where those devices reside.  Keynote can actually provide devices that are in a Cloud behind that customer’s firewall so they have access to any of those protected test assets.  Or alternatively, Keynote can actually host those devices in a Cloud for the customer and provide a VPN access for those mobile devices, be it Smart phones or tablets, to access those pre-released test assets.

Josiah Renaudin:

All right, thank you very much Chris.  Next question, how do enterprises fit mobile testing into their existing test processes?

Chris Karnacki:

That’s another great question.  I touched on it a little bit, but what you need to do is find a platform for mobile testing that allows you to utilize the existing process you have today.  So, if today you’re testing your desktop applications, maybe your desktop websites, using HP’s UFT product for automated testing, you may be rolling all of those tests up into the HP ALM platform for test management.  You need to find a platform that gives you the ability to continue to use that process, because you’ve already invested in the resources, be it people, licenses, or hardware, and you need to extend that into mobile.

And Keynote can help you do that with our HP-certified add‑ins and plug‑ins and other integrations into test tools and harnesses you may already have, be it from IBM or Worksoft, or, even earlier in the development or testing phase, with Selenium or Appium.  It allows you to, again, fit your existing process into mobile rather than create a whole new process just for mobile, so it really lets you reuse the valuable resources you’ve already got working today.

Josiah Renaudin:

All right, next question.  How do functional tests work on the Cloud?

Chris Karnacki:

So, that’s a great question.  So, in the Cloud, Keynote can provide access to a large library of real mobile devices.  These devices are just like having a phone in your hand except you’re gonna control them with your mouse and your keyboard.

So, say you have a functional test to make sure a user can log in to the application and do a balance transfer, again, retail banking.  You have the ability to install your application on this device because, again, it is a real device with a real data plan and either a cellular or Wi‑Fi connection.  You can install your application either from the public application stores, the Google Play Store or the Apple App Store, or maybe from an internal build system, wherever it’s hosted, and then you step through that application just as a user would.  You can do that manually, and you have the ability to take screenshots and create videos of that manual test.

You also have the ability, as I mentioned earlier, to create an automated test.  You can record your interaction with that device in the Cloud as you step through that functional test, again, launching the application, logging in, doing a balance transfer.  You can then play that test back across multiple devices, so you record against one phone and play it back as an automated test against many devices.
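The record-once, play-back-everywhere pattern can be sketched as replaying one list of recorded steps across a matrix of device capabilities.  Everything here is hypothetical: the device names, the recorded steps, and the `FakeDriver` stand-in; in a real Appium-style setup each capability dict would be handed to something like `webdriver.Remote(cloud_url, caps)` instead.

```python
# Sketch of "record once, play back across many devices": the same
# recorded steps are replayed against a matrix of device capabilities.
# Device names, steps, and the driver are hypothetical stand-ins.

RECORDED_STEPS = [            # captured once against a single phone
    ("launch", None),
    ("tap", "login_button"),
    ("type", ("username_field", "demo_user")),
    ("tap", "balance_transfer"),
]

DEVICE_MATRIX = [             # hypothetical cloud device pool
    {"platformName": "Android", "platformVersion": "4.4", "deviceName": "Galaxy S5"},
    {"platformName": "Android", "platformVersion": "5.0", "deviceName": "Nexus 6"},
    {"platformName": "iOS", "platformVersion": "8.1", "deviceName": "iPhone 6"},
]

def replay(steps, capabilities, driver_factory):
    """Replay recorded steps on one device; driver_factory stands in for
    something like `lambda caps: webdriver.Remote(cloud_url, caps)`."""
    driver = driver_factory(capabilities)
    for action, arg in steps:
        driver.perform(action, arg)   # dispatch each recorded step
    return driver.log

class FakeDriver:
    """Stand-in driver so the sketch runs without a real device cloud."""
    def __init__(self, caps):
        self.log = [f"session:{caps['deviceName']}"]
    def perform(self, action, arg):
        self.log.append(action)

if __name__ == "__main__":
    for caps in DEVICE_MATRIX:
        log = replay(RECORDED_STEPS, caps, FakeDriver)
        print(log[0], "->", len(log) - 1, "steps replayed")
```

The design point is that the recorded steps stay device-independent; only the capability dict changes per device, which is what makes one recording reusable across the whole matrix.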

Josiah Renaudin:

All right, next question, how do enterprises choose what device to test on given the fragmentation and large variety of devices?

Chris Karnacki:

Sure, so there are a couple of ways we see enterprises doing this today.  First, most of the enterprises we work with have their own analytics teams that can tell them which devices are most popular among the users of their application.  It may be something as simple as going to the Google Play Store and seeing which devices are installing the application; Apple just released some analytics to give you a bit of that same insight.  We also see that marketing organizations generally have insight into which devices are hitting their mobile websites or their mobile applications.

So, customers generally come to us with a rough list of what they’re looking for.  Keynote can then use its years of industry expertise to help you narrow down which devices you should really be testing on and what mix of devices, be it form factors, manufacturers, platforms, or operating systems, is going to give you the most coverage for your given resources, be it the number of people or the budget.  Keynote can help do that.

We’ve been doing this for 15 to 20 years, so we have a lot of expertise here, and we also have a lot of real-world examples from our existing enterprise customers of what they’re testing against.

A good example: again, a bank comes to me and says they want to do some mobile testing.  I can go back and look at what devices Keynote’s other banking or financial customers are testing on.  There’s something to be said for making sure you’re testing on the same set of devices, not the same physical devices, but the models, carriers, and all of those different iterations that the competition is testing on.  You want to make sure you don’t leave anything out, and Keynote can help whittle down your rough idea of what you may want to test on.
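The device-selection approach described above, merging usage data from several analytics sources and shortlisting the most common models, can be sketched in a few lines.  The source names and counts below are hypothetical, standing in for things like Play Store install data and marketing web analytics.

```python
# Sketch: combine per-device usage counts from two hypothetical analytics
# sources and pick the top-N device models to test on.
# All device names and counts are illustrative, not real data.

from collections import Counter

def shortlist_devices(*sources, top_n=3):
    """Merge per-device usage counts from several sources and return the
    top-N models by combined count."""
    combined = Counter()
    for source in sources:
        combined.update(source)   # Counter sums counts per device model
    return [model for model, _ in combined.most_common(top_n)]

store_installs = {"Galaxy S5": 5200, "iPhone 6": 4800, "Nexus 5": 900}
web_analytics  = {"iPhone 6": 3100, "Galaxy S5": 1500, "Lumia 930": 400}

if __name__ == "__main__":
    print(shortlist_devices(store_installs, web_analytics, top_n=3))
```

A real shortlist would also be weighted by the industry-peer data Chris describes, but the merge-and-rank step is the core of it.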

Josiah Renaudin:

All right, this next one’s a little bit longer.  We often test mobile devices hooked up to logging tools such as Charles so we can ensure our Omniture tags, etc. are firing correctly.  It’s to root out error codes that we see coming in but can’t pinpoint.  Is it possible to use the real devices in your Cloud to look at logs such as Charles or Conviva in real time?

Chris Karnacki:

That’s actually a really good question.  It’s becoming more and more of a real use case that we see.  So, the devices can, of course, be connected to or through a proxy, let’s say Charles, and the traffic from those devices as you’re using your application will go through that proxy and then a tester or developer can, either in real time or kind of post test, go through those Charles logs and see if the proper tags were firing, if the proper APIs were hit, whatever it is that you’re looking for.

And you can actually make sure that all of those different things are firing, but it would be used in conjunction with the Keynote Cloud, the Cloud of devices from Keynote.  You point the devices at that proxy, using an existing one or creating one, and then, again, watch as that test is being run, either in real time or post test.
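The post-test check Chris describes can be sketched against a proxy capture: Charles can export captured traffic as a HAR file, which is plain JSON, so verifying that the expected tag requests fired is a small parsing job.  The entries below are a hypothetical, trimmed-down capture, not real traffic.

```python
# Sketch: after running a device test through a proxy such as Charles,
# export the captured traffic (Charles can export HAR, which is JSON)
# and verify the expected analytics tags fired.
# The sample capture below is hypothetical.

import json

def tags_fired(har_text, tag_host):
    """Return the URLs of captured requests that went to the tag host."""
    har = json.loads(har_text)
    return [entry["request"]["url"]
            for entry in har["log"]["entries"]
            if tag_host in entry["request"]["url"]]

SAMPLE_HAR = json.dumps({"log": {"entries": [
    {"request": {"url": "https://example.2o7.net/b/ss/report?pageName=login"}},
    {"request": {"url": "https://api.example.com/balance"}},
    {"request": {"url": "https://example.2o7.net/b/ss/report?pageName=transfer"}},
]}})

if __name__ == "__main__":
    hits = tags_fired(SAMPLE_HAR, "2o7.net")   # Omniture-style tag host
    print(len(hits), "tag requests captured")
```

Running this kind of check after each automated test run turns the "did our tags fire" question into a pass/fail assertion rather than a manual log read.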

Josiah Renaudin:

All right and this one is a little bit broader.  This person would like to know what are some of your best practices in mobile testing.

Chris Karnacki:

Sure, so I think that, at a very high level, you need to test on real devices.  Again, there is a place for emulators; there will always be a place for emulation and simulation.  But you need to test on real devices.  We’ve seen many a time when the application works fine on an emulator or simulator but, installed on a real device, for whatever reason does not behave the same way.  So, real devices are huge.

Two is, again, engage.  Quality professionals should engage the development team for all the reasons I mentioned earlier.  One, it’s the design-driven development we’re seeing a lot of, and the testing and development teams need to collaborate to make sure they’re providing a quality, consistent application and experience.  And the biggest thing, really, I think, and this is for both testing and development, is to think like a mobile user.

If you think like a mobile user, you’ll test better, because a mobile user doesn’t always just follow what I’ll call a happy path, right?  You may build a test plan or a test script that follows some specific user journey that you know works, because that’s how you want someone to use your application, but not everybody uses your application the way you want them to.

So, think like a mobile user to make sure you cover a broader set of scenarios, because, believe it or not (although most of you probably believe it), your users will find those scenarios.  They will try those edge cases, not on purpose, but they will.  So: use real devices, collaborate between development and QA, and think like a mobile user in everything you do, be it development, testing, or analyzing the data and feedback from production.

Josiah Renaudin:

All right, thank you very much for those answers Chris.  That’s actually going to end our event today.  I’d like to thank our speaker, Chris, for his time and I’d like to thank Keynote for sponsoring this event.  Also a special thank you goes out to you, the audience, for spending the last 45 minutes with us.  Have a great day.  We hope to see you at a future event.

Duration:  48 minutes
