Towards a Better Quality App
Interview with Martin Wrigley, Executive Director of the App Quality Alliance (AQuA)
Imagine that you were a first-time developer of mobile apps and wanted to know how to test your app to ensure the highest levels of quality and reliability before putting it on the market. Would you know who to turn to? Considering the relative youth of the mobile app industry, is there a widely accepted set of criteria upon which to base your tests, or a trusted third-party to turn to for guidance?
Benchmark recently had the opportunity to speak with Martin Wrigley, who, as a mobile industry veteran and Executive Director of the App Quality Alliance (AQuA), is in a unique position of authority to address these questions. Mr. Wrigley has more than 25 years of experience in the mobile and IT industries, with an extensive background in IT development, solutions architecture and delivery. In addition to his current role as head of AQuA, he is also a highly sought-after mobile technology consultant.
As Director of Developer Services at Orange Partner, from 2004 to 2012, Wrigley worked with telco industry partners, app shops and mobile app developers, providing strategic direction and tactical support to the Orange developer community. Now with AQuA, he is bringing his years of personal experience working with mobile app developers to bear in promoting app quality and delivering the message that consistent quality and standards-based testing is an absolute necessity for the continued growth of advanced mobile technology.
“AQuA is not selling anything, they just want to promote mobile QA best practices. So for a tool vendor like Keynote, if we align with their best practices it provides a real value for the end customer,” says Thomas Gronbach, Director of Marketing for Keynote Europe. “They provide the methodology, the best practices and the testing criteria and then they have also verified that we have aligned our tools to that framework. Taken together, this is what app developers need to have a successful mobile QA practice.”
In our conversation with Mr. Wrigley, he addressed the question of why it is that ensuring app quality is so central to the continued growth and adoption of advanced mobile technology and how AQuA came into being to drive home that message, among many other related topics.
Benchmark: Tell our readers about the origins of AQuA and how the organization came to focus specifically on mobile apps and improving app quality.
Martin Wrigley: First of all I’d like to say it’s very nice to welcome Keynote-DeviceAnywhere as the newest member of AQuA. Having Keynote on board as an endorsed testing service is really great and we’re very pleased about it. I think the two organizations are going to work very well together, and we’ve already held our first joint webinar, on November 12th. [Click here to view the archived webinar: AQuA-Keynote-Webinar]
We know that the AQuA testing criteria are a perfect script for developers to use, in addition to the functional testing that they do when they’re using DeviceAnywhere, to actually check that their app is working correctly on a range of different devices -- to see that it’s working in the DeviceAnywhere environment in exactly the same way as it would if they were holding the device in their hand. So the approach that DeviceAnywhere has fits very nicely with the AQuA approach and I think the two organizations have a very bright future together.
But back to the origins of AQuA. AQuA is an organization that is coming up on its tenth birthday in 2014. It was started by a group of handset manufacturers together with what was then Sun – now Oracle – back in the Java world. People like Nokia and Siemens and Sony Ericsson and Sun and Motorola. They realized that they were causing a lot of confusion for developers and duplicating work in that they all had their own accreditation or certification schemes for the Java ME apps that were running on phones back then in 2004. Despite the fact that everyone believes that Apple invented “the app,” in those days we had Java MIDlets, and you could have a very good business creating simple MIDlets that ran on phones -- effectively, apps.
But developers were confused as to how to get those early apps certified and how to get them tested, and then get them re-tested for certification. So the industry and other related bodies appreciated that they were duplicating effort, not only on their part, but also causing cost and difficulty for the developers. So they combined their existing schemes, things like the Motorola TIC and the Nokia OK, and collectively defined Java Verified. The idea for Java Verified was that it was a single unified testing initiative -- adopted by half a dozen companies -- with a set of standards that they could all get behind and all stand up and say, “Yes, this is the way you should test your Java applications and this is the way you need to get them certified and if you get them signed through this Java Verified signing scheme then it will work on any of our devices.” And that was a really big move forward. In those days Java was fragmenting faster than you can imagine and having one certification scheme across the whole industry was a really beneficial development.
So that was all very well and good and we ran with Java Verified for a number of years. In fact, Java Verified is still up and running today and the Nokia handsets going out into emerging markets, running the Nokia stores, are using all Java Verified apps. However, a few years ago we realized that phones were moving on from Java and with the advent of Android devices we saw that there was a need to create a new set of testing criteria, a new way of looking at testing apps. The Android world is quite different from the Java world and there isn’t the same level of gatekeeping or control. There isn’t the same level of influence that the operators and handset manufacturers had back then and it’s basically much more of a free-for-all. So we approached it in a very different way. But we found that the experience that we had from Java was still very relevant to the creation of a set of testing criteria and best practices for Android apps, which we subsequently released and put out into the market a few years ago. And now in fact we’ve gone one step further and in October of this year we released a set of testing criteria for Apple iOS applications as well.
Benchmark: I saw that. Congratulations!
Martin Wrigley: Thank you very much. It has gone down very well. The reason we released the AQuA testing criteria for Apple iOS apps is because developers were asking us to. These are the two major platforms at the moment that developers are focusing on – clearly there are others around as well – but these two platforms are where we see the need for the testing help that we provide.
So AQuA provides a number of different things in its role of promoting the quality of mobile apps. We have a set of best practice documents and these best practice documents allow developers to read and learn from all the errors that have been made before them. So we are really bringing together the collected wisdom of all our members. And I think it’s worth pointing out that we are totally funded by the fees that our members pay to be a part of the organization and we charge developers nothing at all, all of the AQuA materials are free for developers to use.
We provide best practice documents and for each of the platforms we provide a set of testing criteria. The testing criteria are a set of tests or test cases, which allow the developer to go through and test those things that many developers tend to miss. When testing their apps, developers -- in whatever technology – tend to behave pretty much the same. They will check that their weather app gets the weather right. They will check that their stock price app gets the stock prices accurately and rapidly, etc. They’ll check the main functionality of their app quite well, but what they’ll forget to check is all the peripheral stuff. Does the app download and start up on a new device in an appropriate amount of time without any nasty error messages? Does it cope correctly with an incoming phone call or text message and then recover from it afterwards? Does it deal happily with the user who manages to remove the memory card from a device that has a memory card? So it’s these sorts of things that developers know not to do to a handset because it’s a daft thing to do to a handset, but they’re the sort of thing that users do all the time. And we know that developers keep making the same errors and keep on missing these types of things if they don’t use our criteria because we have a set of ‘Top-10 Errors,’ and it’s remarkably consistent. A month ago we tested all of the entries to the Apps World London - Appsters Awards. And lo and behold, the errors for the ones that failed were consistently in our Top-10. It’s a simple set of QA testing that can be done by the developer without any real difficulty -- it’s very easy to do -- that improves their app quality. It’s not rocket science. It’s just basic, sound, QA testing.
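The kinds of peripheral checks described above lend themselves to a simple, repeatable checklist that a developer works through on each device. As a rough illustrative sketch -- the check names below are paraphrased examples of the categories Wrigley mentions, not the official AQuA testing criteria -- such a checklist might be recorded like this:

```python
# Illustrative sketch only: paraphrased examples of "peripheral" checks,
# NOT the official AQuA testing criteria.
PERIPHERAL_CHECKS = [
    ("install-launch", "App installs and launches on a fresh device without errors"),
    ("call-interrupt", "App survives an incoming call or text and recovers afterwards"),
    ("media-removal", "App copes with the memory card being removed mid-use"),
]

def failed_checks(results):
    """Return the IDs of checks that did not pass.

    `results` maps a check ID to True (pass) or False (fail); any check
    missing from `results` is treated as not yet run, i.e. failed.
    """
    return [check_id for check_id, _ in PERIPHERAL_CHECKS
            if not results.get(check_id, False)]

# A developer might record one test run per device like this:
run = {"install-launch": True, "call-interrupt": False}
print(failed_checks(run))  # -> ['call-interrupt', 'media-removal']
```

Treating unrecorded checks as failures mirrors the point made above: the tests developers forget to run are exactly the ones that cause the recurring Top-10 errors.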
Up until now we’ve recommended that developers take a real device and test their app on it. Now clearly it can be difficult for developers to get ahold of some of the devices. To get ahold of devices that aren’t available in the region in which they’re working, for instance. And that is why we’re delighted to be working with Keynote-DeviceAnywhere.
Benchmark: Well that’s great and that was a great introduction and overview of how AQuA came about and the types of issues it helps developers to address. To follow up on that, perhaps you could speak more specifically to the question of why it’s so important that there are clear and consistent quality assurance guidelines for developers of mobile apps, and why it’s important for the growth of the mobile platform?
Martin Wrigley: Well, the truth is that the mobile software development industry is still a very young industry. It hasn’t reached the same levels of maturity that the more established software development platforms have reached. So having those simple testing guidelines is a real benefit to the mobile developer community.
Now the beauty of the mobile development world is that, if you have a great idea, you can write an app and you can put it out there. And that is fabulous. And it really does work well for innovative thinkers and people with very clear views of what their customers want and what their app should be doing. But many of these guys are new to development and new to the mobile industry, so what they aren’t are expert testers. What they know is the functionality of their app. What they don’t know is all the lessons that we’ve learned over the last 10 or 15 years about apps on mobile devices that we can bring in to help them. So the testing criteria that we provide really catches all of those errors that you find when you’re dealing with a different environment on a mobile phone, compared to writing for a PC or a desktop computer. You’ve got all the issues relating to variable levels of connectivity -- you don’t know when the phone is going to lose its connection in terms of data. You’ve got all of the difficulties surrounding the fact that they are designed primarily to make phone calls and receive text messages. Apps are secondary things. So you get interrupted by a phone call. You don’t get things like that on a PC. All of the programs [on a PC] have their own dedicated resources, so you don’t have the same issues of limited connectivity, limited battery life, limited screen size, limited memory and a nasty environment where you’re interrupted by stuff all the time. So these things are common to all mobile devices and bringing in the lessons from the last 10 years in a way that is laid out straightforwardly, understandably and easy for developers to use has great value. And we find that developers keep coming back and saying “Why didn’t we know about this before? This has been really, really helpful.”
Benchmark: If a developer wants to get started doing some testing I know they can go, if they’re developing for Android, to the Android developer site (developer.android.com). How do those app quality guidelines fall short? Or is that a good place for developers to start when looking for guidance on assuring app quality?
Martin Wrigley: It’s a great place to start actually. There’s lots of very detailed technical advice there. There are lots of very detailed and specific things that you should and shouldn’t do provided there. What’s lacking, though, is a laid-out set of steps for actually going through a logical, consistent set of testing, which is what the AQuA testing criteria provide on top of that. So yes, absolutely, there’s some great information there that’s really good. We’ve gone through all that and made sure it’s all entirely consistent. So we’re actually very pleased that it’s there and we’re very pleased that Google are actually encouraging people to test their applications. In the early days they were saying there’s no need to test, just throw your app on the market and let users see it. But that doesn’t work anymore. We’re not in the early adopter phase anymore. People expect apps to work and to work well. And if the developer hasn’t done due diligence in terms of testing the application, it’s going to fail. At the very best, it’s going to get a one-star rating. At the very worst, users will never look at the app again.
Benchmark: Yes and I think there’s clearly a benefit for the whole industry to have these kinds of standards so that people know that it really is ready for prime time and I think it’s definitely going to help it gain more traction.
Martin Wrigley: Absolutely. And there’s a huge benefit in having a consistent message from the quality and testing industry, and the handset manufacturers, the network operators and the platform owners to actually push back on the developers and say, “You DO need to do testing. Here’s a minimum set of testing that you should do in order to make users happy. You don’t need to pay anybody else to do it, but if you do get an independent third-party that’s even better. But at the very least make sure you’ve gone through this on a few handsets and you can then be sure that your app is going to work and you’re not going to disappoint your users, because there’s nothing worse than a disappointed user.”
Benchmark: What’s your position on using real devices or emulators for testing, do you think there’s an ideal mix where you may want to use an emulator for certain testing scenarios and real devices for validation? Or is it always necessary to test on real devices to get a true sense of end user experience?
Martin Wrigley: A lot of that depends on the quality of the emulator. If you look at the Apple platform, the emulator is kept pretty well up-to-date, so you’ve got a fairly good chance of finding a lot of errors there. If you look at other platforms the emulators that you find are really not that good. There are some cases where emulators work well -- if you’re just looking at how your layout adapts to different screen sizes, for instance. But in terms of doing that final user acceptance testing, that final quality assurance testing -- which is where the AQuA testing criteria are focused -- there is no substitute for testing on a real device. It really is the key thing. The emulators aren’t close enough to a real device to give you the understanding of whether it’s really going to work or not. Emulators are useful in the early stages. They’re useful for a part of it. But they don’t give you that confidence, that final certainty that your app really is going to work on a particular device.
Benchmark: Tell us a little about the Quality App Directory that AQuA maintains and that was recently expanded to include iOS. Who is the directory targeted at and who are the main beneficiaries of such a directory?
Martin Wrigley: We realized when we released the Android testing criteria that there was no way for developers to actually highlight the fact that they had been using our testing criteria, that they had gone through the test and done the work to show that they had a quality app. So we created the Quality App Directory to enable developers to post details of their apps there and post information about what they have done in terms of testing. And the purpose of that is two-fold and maybe even three-fold. It’s partly to allow the developers to show off. It’s great that they’ve done the testing and that they are quality developers. They should be proud of what they’ve done and proud of the fact that they are doing good testing. And many developers are. They’re building reputations for delivering quality apps to their customers. It’s partly for people to check to see what testing developers have done on a particular app. It’s partly for our members to refer to, to see if there are applications and developers that they are particularly keen on picking up for whatever reason. And it’s partly to enable us to give developers an endorsement badge for their application if they’ve gone through and done all that testing and actually had that testing independently verified as well. So we provide recognition to the developers and to their app, because we endorse the app that has gone through that testing, and the work that they’ve done and the effort they’ve put in to make sure that their app is a quality app. So we give them a badge that they can use to advertise to their customers, to people commissioning apps from them. 
And in an environment like today’s where there are more professional software companies that are working on a commissioned app basis, rather than necessarily trying to develop and release them for themselves – if I was commissioning an app, I would want to make sure that the guys I was working with had a history of producing quality apps; that they knew about quality and they knew how to test apps. Because if I’m commissioning an app I probably don’t know how to test it myself and I’m relying on them to do it. I don’t know what the standards are so I’m going to want to make sure they’ve got some accreditation or some sort of quality badge that they can present. And that’s exactly what we provide through the Quality App Directory.
In addition to which we also provide all of the testing criteria online. We provide an online management tool for developers to go through and they can use this quite happily in conjunction with DeviceAnywhere and they can record all the tests that they’ve gone through and print out a test report for their app and use that on an iterative basis so they can go in and improve and fix any issues that they may find. And all of that is free of charge for developers. The only area where a developer may pay is if they wish to have a third-party test house take their app and test it on a device and have it independently verified. And if I were a developer paying somebody to do that I’d sure as heck want to make sure I had gone through those tests myself first.
Benchmark: Great. That brings me to a question about the AQuA-Endorsed Testing Services. I know you added a new membership tier recently and I know Keynote is very proud to be selected as one of the endorsed testing services. Perhaps you could describe how and under what criteria AQuA-endorsed testing services have been validated and endorsed by AQuA.
Martin Wrigley: Absolutely. This is where AQuA is recognizing companies that are working in the developers’ interests and are promoting messages of good mobile quality. There are many testing organizations out there right now and we wanted to make sure that we recognized those that are promoting good practices and working in line with the aims and ambitions that AQuA has as an organization. So we created the idea of Endorsed Testing Services where we have looked at a testing company – and it might be the provider of a testing tool or set of tools, like DeviceAnywhere, or it might be a traditional test house where they use people and local devices. In fact some of the test houses we talked to use the DeviceAnywhere tools as part of their testing technique, including one that has scripted the entire Android testing criteria into DeviceAnywhere as a semi-automated set of testing. So the Endorsed Testing Services are companies that wish to work with AQuA to help promote the good quality message that together we can promote a clearer and more consistent message to developers globally about testing. It’s all too easy to get confused about how all the various bits and pieces of testing fit together. It’s all too easy to believe a message from an obscure organization that you may have had no dealings with before who may be unscrupulous enough to say that their tool or testing service will do absolutely everything you want, until you sign a contract with them and you find all the limitations. AQuA wants to work with the organizations that provide good service to developers, who provide valuable service to developers and that fit in with the tried and tested practices that AQuA is promoting. Keynote DeviceAnywhere very much fits into that mold, and as such we were able to recognize that with the Endorsed Testing Service designation. 
We have a comprehensive assessment process that we carry out looking at the practices, procedures and standards that are used by the testing service that wishes to be endorsed by AQuA. And that is carried out by our QA Manager who undertakes that assessment process and really separates the wheat from the chaff.
Benchmark: How did your experience as Director of Developer Services at Orange Partner from 2004 to 2012, shape your thinking about app development, testing, and the role of the developer in shaping the future of mobile?
Martin Wrigley: I was at Orange for nearly 20 years and in the latter years I was working at Orange Partner, the developer program, and one of the roles I had was bringing applications in from developers into the Orange application shops. It was very clear in the early days of app development that the quality of apps just wasn’t there. We would find that we were given applications that just didn’t work. This was unacceptable and it was wasting everybody’s time. We didn’t have the time or the desire to be a QA department for each and every developer that we were working with. But rather than taking the traditional route that many people took at the time, which was to go through publishers and aggregators and limit the number of people we were dealing with, we wanted to encourage more people to come on board with applications. One of the things we found to help us was adopting the testing criteria that were being produced by what ended up being AQuA. So we adopted those and we joined the organization and we started promoting the message out -- trying to make sure that developers were clear on our criteria for testing applications before we accepted them and had an opportunity to actually go through those criteria themselves. Prior to that everything was very muddied and not transparent, and as a developer you didn’t know what any given organization was going to test your application for or how they were going to consider the quality of it, so you would have to go through different hoops for each and every organization you wanted to deal with. This was what we at Orange picked up on and found that the consistency of the message, “This is the basic level of testing that you need to go through,” was really important.
From an Orange point of view it saved us a lot of time and effort, but I think from the developers’ point of view it was helpful in that it was a clear consistent message we were giving and now they knew where they stood and they knew they had to go through this testing and they had to do it before they submitted any application to us. We even set up a lab with DeviceAnywhere to enable developers to do that testing. And that really helped the quality of applications that we were getting into the shops and really improved the flow, so that more applications were accepted the first time around when developers were using the testing criteria. As an industry-wide message I felt this was very important. The more I saw of applications and the more things became open as the Android environment became more prevalent -- as developers started using the Google Play marketplace, where unlike many of the other shops it’s not what we call a “curated” shop in that there’s no one gatekeeping it -- we wanted to make sure that the message went out to developers that this testing was still vitally important. Now you might ask why do people like Orange and AT&T and Sony and Samsung care, if it’s not their shop? And the answer is, if the app looks bad on the handset it makes the handset manufacturer look bad, it makes the network that’s carrying that handset look bad and it costs everybody money as well because a lot of handsets are returned because the user feels they’re faulty when in fact there’s nothing wrong with the handset, there’s nothing wrong with the network. It can often be a faulty app that’s causing the handset to lose battery very quickly, or to not behave properly, or to any of these other things that we actually test for in the AQuA testing criteria. So it’s in everybody’s interest that the quality of apps continues to improve and that developers have every possible advantage to create better apps. 
And because of that I was very pleased to be able to embrace the opportunity and take on this role as Executive Director of the App Quality Alliance and continue to get the message out into the industry.
Benchmark: So this really was something very personal for you and arose out of some of the pain points you experienced first-hand at Orange Partner. That’s excellent. Now let’s shift gears a little and talk about Enterprise Mobility for a bit. How will mobile become truly enterprise-relevant or enterprise-ready? When will mobile become enterprise-relevant, or have we already hit that tipping point?
Martin Wrigley: I think you raise an interesting question. The difference between the consumer use of mobile apps and enterprise use is largely one of sophistication. Not only of the functionality of the app but also the reality that in an enterprise environment you expect something to work; you haven’t got time to play with something, to mess around with something that doesn’t work properly. You want an app that works, that works well and that delivers value to you as a businessperson. While there have been apps in the enterprise for a long while, they’ve been relatively simple. What we’re now starting to see is more complex apps on larger screens that are much more enterprise friendly. At the Appsters Awards we saw a whole range of apps in the enterprise category, ranging from tools to help make sure that salespeople have the most up-to-date presentations for their customers, to internal applications that make sure business reps have all the right details, up to very complicated and complex environments. There were some in financial environments, for instance, where they were looking at the right financial packages to sell to their customers. And that is a very advanced app, especially considering the complexity of some of the financial instruments these days. And that is great, but then it has got to work and it’s got to continue to work. So the improved quality of those apps is now enabling businesses to think about using them in more serious ways. I was interested to see recently that some of the airlines have eliminated all the reams of paper for the checklists their pilots must go through and are now using tablets. So when you’re getting to that level of complexity and importance, people are starting to see that quality is vitally important, reliability is important.
Now that we’re getting that level of reliability in mobile apps it’s important that app developers are putting the effort in to ensure that the quality level of their apps is consistently high so that enterprises can really make use of what they’re delivering.
Benchmark: What are the special challenges associated with mobile testing for the enterprise, for enterprise-grade apps?
Martin Wrigley: Well it turns the tables somewhat in that you cannot rely on what people did in the early days of Android, which is having your users test your app for you. As a developer you have to be able to test for every scenario that your user might encounter before the user finds it themselves. You have to find the bugs, not let your user find them. So it does push a much higher level of quality requirement back on the app developer. If the developer hasn’t gone through the range of devices using something like DeviceAnywhere or indeed the range of scenarios using the AQuA testing criteria they will miss things. And they do continue to miss things all the time, as we demonstrate with our Top-10 Errors. So we’re helping them with that. Developers have to have responsibility for testing the more complex functionality of the apps, but they also can’t forget all the other things that will make or break their application. So the quality bar is much higher for the enterprise. The toleration of errors in the field is much lower, so they [developers] have to do that testing for themselves.
[ Ed. Note -- Mobile testing solutions for the enterprise also need to be integrated with existing or legacy QA workbenches, such as IBM Rational, HP UFT (aka QTP) or Tricentis Tosca. Integrated mobile testing solutions like Keynote’s allow enterprises to seamlessly extend their established testing procedures to mobile.]
Benchmark: What do you think the future holds for enterprise mobility and how do you think companies should handle BYOD and the related questions of security and device/platform diversity in the workplace?
Martin Wrigley: Clearly any apps that a company is using, or allowing or even encouraging their employees to use, need to be reliable, high-quality applications. They should be looking for the AQuA Quality App badge to see that the developers have tested those apps properly. That will save hours of time for the IT department, because the last thing you want is an IT department having to go out and check every single app that their employees are bringing into the workplace. By letting the standards take the place of that IT department trying to become a mobile testing expert, you can save a huge amount of time and effort and have a good amount of confidence that the apps employees are using are going to work -- and that employees won’t be spending a lot of time on crashed apps or fiddling around trying to make things work when they should be delivering profitable time to the business.
Benchmark: It used to be that EMEA and Asia were leading the charge as far as mobile innovation and consumer adoption, but now we’re seeing the real flourishing of the mobile ecosystem here in the U.S. What do you think was the gating factor that led to the U.S. consumer market lagging behind at the outset, and what has changed so that now the U.S. is catching up and arguably leading the way in the development of mobile apps?
Martin Wrigley: It’s interesting. There was a switch a few years ago, you’re absolutely right. Five years ago I would have said that Europe was years ahead of the US, but now I think the US has overtaken Europe. I think there are a couple of things that have happened. One is the leapfrogging of standards in the networks, the U.S. going to LTE before the European networks have done so. We are only just beginning to roll out LTE here in Europe. So there’s higher capacity and higher bandwidth available to mobile users in the States. I think it’s a geographical challenge in the States from a network point of view because the distances are so much greater than in Europe, and it becomes difficult until you get that ubiquity of connectivity to actually take mobile seriously, which is I think what held the States back for a long, long time.
But the other thing -- the real take-off point, in all honesty, was Apple. Apple’s advertising in the States was huge and the Apple iPhone was the big thing. They really made smartphones simpler to use. It’s fairly easy to argue that the Apple device is simpler for customers, which made smartphones accessible to the less technically savvy, and that broke the barrier. Smartphones existed a long time before the iPhone, but they were not straightforward to use. Apple’s attention to usability -- making things simple to use, simple to approach, so that if you’ve used one Apple product you can use other Apple products -- made a huge difference in the take-up. That approach helped the States pick up on smartphones really quickly and really accelerated adoption. And the fact that you essentially skipped a generation helped too: all of the Symbian smartphones that we had over here in Europe were great, but they were slightly complicated, and they never really gained traction in the States. So in effect U.S. smartphone usage basically jumped a generation.
There’s always been a thriving mobile development community across the States -- in Silicon Valley and on the East Coast and various other places -- and they have really grabbed the advantage and run with it tremendously and it’s been a joy to watch.
Benchmark: What are your thoughts on Native vs. mobile Web vs. Hybrid in terms of the relative quality of each experience or environment? What are the benefits of each and do you think that the hybrid approach is destined to win out in the long run?
Martin Wrigley: Interesting question, and this is one where the whole industry is basically holding its breath and trying to sort it out. You sometimes find it phrased: “Will HTML5 take over the world?” Personally, I think that while HTML5 is a great technology and has some real advantages, we will still see an awful lot of hybrid apps. To have truly mobile web, and only web, working on devices, you have to have a solid, ubiquitous connection. That isn’t there yet, and probably never will be, because devices are mobile: you go into parking garages where there’s no signal, and into other areas where connectivity is limited. You want your apps to keep working, so you’ve got to have a good local client still present.
In many ways you can look at the similar pattern of what happened in the PC world. I remember about seven or eight years ago there were a lot of people saying you don’t need smart PCs, you don’t need all the intelligence on the desktop, you can do it all in the cloud -- sounds familiar -- you can have a very thin client on the desktop, just a web browser. Well, that didn’t actually take off in the PC world, so why should it take off in the mobile world? There is actually a stronger argument for having local intelligence in a mobile device, because you are continuously losing and re-making your connection: you move in and out of the network and have variability between 2G, 3G and 4G connectivity, let alone Wi-Fi when you’re at home. So I don’t think that any single pure technology will take over everything; I think we will see a combination of the different elements. There are some apps that are totally suitable for mobile web only, some apps that are always going to be based on the device itself, and a whole range in the middle that will end up being a hybrid one way or the other.
Whatever the right technology is to solve the problems that you’re trying to solve for your customer is probably the right one, and if you believe in any one single technology as the silver bullet you’re probably making a mistake.
About Martin Wrigley
Martin has more than 25 years of experience in telecoms and IT, with a wide background in IT development, solutions architecture and delivery, and is now an independent consultant and Executive Director of AQuA.
As Director of Developer Services at Orange Partner, from 2004 to 2012, Martin worked with telco industry partners, app shops and mobile app developers, providing strategic and tactical direction and support to the Orange developer community. Martin brings all this experience to his consultancy work at Moexco. Prior to that, Martin spent more than ten years in a variety of technical roles at Orange.
Martin is also part-time Executive Director of AQuA, the not-for-profit App Quality Alliance, whose members include AT&T, LG, Motorola, Nokia, Orange, Oracle, Samsung and Sony Mobile. AQuA develops and promotes quality and testing standards for mobile apps, developers and test houses, and runs the Quality App Directory.