Tuesday, July 21, 2015

iOS 9 - The Challenges and Facts Developers Must Know

Our resident mobile expert and guru Peter Rogers shares his insights on the challenges presented by iOS 9 in this guest article.  Enjoy!
*****
Peter Rogers, Cognizant Mobile Guru
Web and App developers often live in fear of the latest iOS release because of the challenges it brings to porting their software. This time, however, iOS 9 really has outdone itself. This article will focus on the core changes that developers are going to need to know and address.

Apple has spent a lot of time focused on the interoperability between Apps and Safari. A key deliverable of this is the new Safari View Controller. The idea is that HTML content rendering requests are handed over to Safari itself, as opposed to using UIWebView or WKWebView (which never reached its true potential and remains largely unloved). How this impacts PhoneGap, which struggled to implement WKWebView support, will be very interesting to see.

http://www.macstories.net/stories/ios-9-and-safari-view-controller-the-future-of-web-views/

This web interoperability theme is continued with the ability to use universal links, which securely lead users directly to a specific part of your App from a website link. This is achieved by means of a signed JSON association file that has to be stored on your server. The net result is a seamless interchange between web and App environments that bypasses Safari, of which developers will want to take full advantage. There is also an extensive Search API that exposes mobile and web data seamlessly.
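For illustration, the association file is a small JSON document served from your domain. The sketch below follows the iOS 9 apple-app-site-association layout; the team ID, bundle ID and paths are placeholders, not real values:

```json
{
  "applinks": {
    "apps": [],
    "details": [
      {
        "appID": "TEAMID.com.example.myapp",
        "paths": ["/products/*", "/offers/summer"]
      }
    ]
  }
}
```

The App, in turn, declares the associated domain in its entitlements, and iOS checks the two against each other before routing a link into the App.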

https://developer.apple.com/videos/wwdc/2015/?id=509

The controversy arrives in the form of downloadable Safari extension Apps that can be used, amongst other things, to offer mobile Ad Blockers. This is something that Purify aims to take full advantage of, and the end result is amazing for web users but terrible for advertising agencies. The panic has already started to spread, and I predict we will see the typical counter-strategies from the desktop space: “beg to whitelist” and in-content advertising.

Swift 2.0 has been released, but it is interesting to see that a lot of developers have not adopted Swift 1.0 due to the lack of tooling support. There are also standard updates to HomeKit, HealthKit, CloudKit and MapKit. CloudKit offers a new web interface, CloudKit JS, that can be used to share your Cloud data between Mobile Apps, Desktop Apps and Web Apps. Games support has been massively overhauled, and Metal continues to offer the lowest-overhead access to the GPU, spurning OpenGL portability for raw performance. The News App is also worthy of note because Apps that are too similar to it may get blocked from publishing on the App Store. We also have App Transport Security (ATS), which forces HTTPS by default and makes you declare any insecure network connections.

http://stackoverflow.com/questions/30751053/ios9-ats-what-about-html5-based-apps
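For Apps that still need to reach a plain-HTTP endpoint, ATS can be relaxed per domain in Info.plist rather than disabled wholesale. A hedged sketch (the domain name is a placeholder):

```xml
<key>NSAppTransportSecurity</key>
<dict>
  <key>NSExceptionDomains</key>
  <dict>
    <key>legacy.example.com</key>
    <dict>
      <key>NSExceptionAllowsInsecureHTTPLoads</key>
      <true/>
    </dict>
  </dict>
</dict>
```

There is also a blanket NSAllowsArbitraryLoads override, but per-domain exceptions are the better-behaved option.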

The second piece of controversy arrives in the form of multitasking on Tablets. Suddenly every Tablet App can be run in a multitasking context, with a secondary App taking up half the screen and a third App running video in Picture in Picture. The problem for developers is that the onus is on them to deliver a constrained application that has been fully tested in a multitasking environment. This means that suddenly System Testing – in particular Performance Testing – and Non-Functional Requirements become more important, and that of course adds to the cost.

The options available are as follows:
  1. Slide Over provides a user-invoked overlay view on the right side of the screen (or on the left side in a right-to-left language version of iOS) that lets a user pick a secondary app to view and interact with.
  2. Split View displays two side-by-side apps, letting the user view, resize, and interact with both of them.
  3. Picture in Picture lets a user play video in a moveable, resizable window that floats over the apps onscreen.
“From a development perspective, the biggest change is about resource management. Every iOS app—even one that opts out of using multitasking features—needs to operate as a good citizen in iOS 9. Now, even full-screen apps don’t have exclusive use of screen real estate, the CPU, memory, or other resources. To participate effectively in this environment, an iOS 9 developer must carefully tune their app’s resource usage. If an app consumes too much time per frame, screen updates can fall below 60 frames per second. Under memory pressure, the system terminates the app consuming the most memory.”

An application can opt out of appearing in the Slide Over selector bar or being available for PiP as long as there is a good reason; if you are submitting Apps to the Apple App Store, you will most probably have to support multitasking if you do not want to be rejected. Here is the major rub though…you cannot stop another application from being run at the same time as your application in Slide Over, Split View or PiP. That means you have to make sure your App is a well-behaved multitasking citizen, and that means a lot of System Testing on actual devices. You need to watch your frame rate and memory consumption, handle window-based resizing as opposed to screen-based resizing (Auto Layout helps here) and handle system state events (such as temporarily being put in the background and releasing memory-intensive resources).
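For those with a good reason, opting out of Slide Over and Split View is a one-line Info.plist declaration:

```xml
<key>UIRequiresFullScreen</key>
<true/>
```

Note that this only exempts your own App from being resized; it does not stop other Apps from appearing alongside or on top of it.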

The following can happen even if you opt out of multitasking in your application:
  • A user adds a Picture in Picture window that plays over whatever else is onscreen (including a full-screen app), while the app that owns the video continues to run in the background.
  • A user employs Slide Over to use a secondary app. While it is visible, the secondary app runs in the foreground.
  • A user invokes a keyboard from a secondary app in Slide Over, thereby obscuring a portion of the primary app.
One key challenge here is automated testing. Often the devices are remote and accessed through some Cloud-based testing setup like CloudBees, DeviceAnywhere or Xamarin Test Cloud. This means that remote device testing vendors now have to offer the ability to run three applications on a remote iPad and provide performance and memory logs. If such vendors do not offer these capabilities then you have to acquire the devices yourself and run the tests manually, and that adds to the hardware costs of the project.

If that wasn’t enough we also have the third and final controversy: App Thinning. In order to help developers streamline their Apps for the new multitasking environment and to handle Apple Watch, Apple have introduced three new concepts that come under the banner of App Thinning:
  1. Slicing
  2. Bitcode
  3. On Demand Resources
Slicing is the process of creating optimised variants of the application for specific target devices. Rather than every device downloading one Universal App bundle carrying every executable architecture and every resolution of every resource, each device receives only the code and resources that its architecture and screen size actually require. You still build and submit a single App; the App Store generates and delivers the device-specific variants for you. Welcome to Slicing.

The creation side of things is handled by Xcode 7, which lets you specify target devices and provide multiple resolutions of an image in the asset catalog. Xcode also allows you to test the local variants of the application on a simulator. You then create an archive of the App and send it off to the App Store, which handles the deployment side of things, offering up the precise variant of your App to iOS 9 App Store clients. The question remains how this works for Enterprise App Stores, but that one is for a later article.
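For illustration, the tagging that drives slicing lives in the asset catalog's Contents.json; the per-device `idiom` and `scale` entries (the filenames here are made up) are what let the App Store ship only the relevant image to each device:

```json
{
  "images": [
    { "idiom": "iphone", "filename": "header@2x.png", "scale": "2x" },
    { "idiom": "iphone", "filename": "header@3x.png", "scale": "3x" },
    { "idiom": "ipad",   "filename": "header~ipad.png", "scale": "1x" }
  ],
  "info": { "version": 1, "author": "xcode" }
}
```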

Bitcode is an intermediate representation of a compiled program. Supplying bitcode in your application allows Apple to re-optimise your App binary automatically in the future, for example when new chip architectures appear. I can only assume that it is impossible to introduce errors in this process or there would be a large outcry.

On-demand resources are those that can be fetched when required, as opposed to being bundled with the application. The reason this works less well on Android is the Java heap, where dynamically loaded resources often end up costing you memory. The App Store hosts the resources for you on Apple servers and manages the download for you, and the operating system purges on-demand resources when they are no longer needed and disk space is low. If you use an Enterprise App Store instead of the Apple App Store then you must host the on-demand resources yourself, and that is again something worth exploring in future articles here.

Hopefully, when your customer asks you for a “quick iOS 9 update”, you will now at least be prepared.

************************************************************************
Kevin Benedict
Writer, Speaker, Senior Analyst
The Center for the Future of Work, Cognizant
View my profile on LinkedIn
Read more at Future of Work
Learn about mobile strategies at MobileEnterpriseStrategies.com
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the LinkedIn Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

Monday, July 20, 2015

Mobility, Sensors, Robotic Process Automation and the Principle of Acceleration

If you have spent any time working on IT projects, you will have heard the comment, "The system is only as good as the data." It's an accurate and necessary statement, as it describes a prerequisite for many technological innovations. Many system designs fail in the face of reality. Reality is often a cloaked term for implementing a digital solution in a physical world without a sufficient understanding of how the physical world operates. This is one problem where sensors can really help.

Sensors fill in the blind spots in our systems and operations by measuring the physical world and providing us with the data. Where previously we operated on conjecture or false assumptions, sensors provide real data on how the real world functions. Operating on real data allows for new and different approaches and IT strategies: strategies that utilize artificial intelligence or, in more complex environments, robotic process automation solutions. These automated processes know exactly what to do in a complex process given specific data. Robotic process automation offers operational speeds and levels of accuracy never before possible with humans alone.

In a world of ubiquitous mobility, businesses must learn to operate in real-time. Marketing, sales and commerce must all evolve to operate in real-time. Think about an LBS (location-based service) where retailers want to inform their customers, via SMS, of nearby discounts or special offers. If the SMS is delayed, the customer will likely have moved on and the SMS will be irrelevant. Payments must operate in real-time. Real-time is a speed deemed impossible just a few years ago, and it remains a future goal for most companies. Today, however, with mobile devices and real-time wireless sensors updating complex systems, it is often the humans in a process that are the sources and causes of bottlenecks. Think about how slow a credit or debit card transaction would be if every transaction ended up in a human's inbox to review and approve before it could be completed. Global and mobile commerce would stop. The credit and debit card processes were automated long ago. Enterprises are now feeling the pressure to automate more processes to enable an operational tempo that runs at the speed of mobility.

What does it take to automate and run at real-time operational tempos? First, it takes accurate data that has not expired on the shelf. Data that has expired on the shelf means the value it once had no longer remains. For example, the weather forecast for last weekend is not useful for this weekend; the value of the data has expired. Second, it takes IT infrastructures capable of supporting real-time transactions and processing speeds. Third, it takes defining decision trees, business rules and processes to the level where they can be coded and automated. This then enables artificial intelligence to be added and utilized. Once enough of the process is automated intelligently, the pieces can be connected together into a complete RPA (robotic process automation) solution. Now you have a chance at real-time speeds.
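To make the third point concrete, here is a minimal sketch (in JavaScript, with made-up rules and thresholds) of what "defining business rules to the level where they can be coded" looks like. Once a decision is expressible this way, it can run at machine speed with no human inbox in the loop:

```javascript
// Illustrative business rules for a card transaction, evaluated in order.
// The thresholds and field names are invented for this example.
const rules = [
  { when: tx => tx.amount > 5000, then: () => "manual-review" },
  { when: tx => tx.cardExpired,   then: () => "decline" },
  { when: () => true,             then: () => "approve" },  // default rule
];

// Return the outcome of the first rule whose condition matches.
function decide(tx) {
  return rules.find(r => r.when(tx)).then();
}

console.log(decide({ amount: 120, cardExpired: false })); // approve
```

The point of the sketch is the shape, not the rules: each branch of the decision tree becomes an explicit, testable condition-action pair.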

In summary, accurate and real-time data, especially in a physical environment, will require sensors to fill data blind spots and replace data that has expired on the shelf. This is just one of the many ways enterprises can take advantage of the IoT (Internet of Things).

Mobile apps are driving the demand for real-time interactions and information. Real-time demand drives a need to change business processes and IT (digital transformations). Digital transformation increases the demand for real-time IT infrastructures and processes, which in turn will increase the demand for IoT and robotic process automations. In economic circles this is known as the principle of acceleration: if demand for a product or solution increases, then investment in the production capability for supplying that demand increases by an even greater amount. What does that mean for us? Mobile is going to drive all kinds of increasing changes in business and IT. Mobile technologies are having an acceleration effect across enterprises and IT today. This effect is driving digital transformation initiatives toward reaching the "real-time" benchmark, which will require more enterprise IoT and robotic process automations to achieve.


Monday, July 13, 2015

Laws for Mobility, IoT, Artificial Intelligence and Intelligent Process Automation

If you are the VP of Sales, it is quite likely you want and need to know up-to-date sales numbers, pipeline status and forecasts.  If you are meeting with a prospect to close a deal, it is quite likely that having up-to-date business intelligence and CRM information would be useful.  Likewise, traveling to a remote job site to check on the progress of an engineering project is an obvious trigger that you will need the latest project information.  Developing solutions, integrated with mobile applications, that can anticipate your needs based upon your Code Halo data (the information that surrounds people, organizations, projects, activities and devices) and act upon it automatically is where a large amount of productivity gains will be found in the future.

There needs to be a law, like Moore's famous law, that states, "The more data that is collected and analyzed, the greater the economic value it has in aggregate," i.e. as Aristotle is credited with saying, "the whole is greater than the sum of its parts." I believe this law is accurate, and my colleagues at the Center for the Future of Work wrote a book titled Code Halos that documents evidence of its truthfulness as well.  I would also like to submit an additional law, "Data has a shelf-life and the economic value of data diminishes over time."  In other words, if I am negotiating a deal today but can't get the critical business data I need for another week, the data will not be as valuable to me then.  The same is true if I am trying to optimize, in real-time, the schedules of 5,000 service techs but don't have up-to-date job status information. Receiving job status information tomorrow does not help me optimize schedules today.
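One illustrative way to make the shelf-life law concrete (the half-life model and the numbers here are my own, not from any study) is to treat data value as exponential decay:

```javascript
// Illustrative model of the shelf-life law: value halves every halfLifeHours.
// initialValue, ageHours and halfLifeHours are all assumed units.
function dataValue(initialValue, ageHours, halfLifeHours) {
  return initialValue * Math.pow(0.5, ageHours / halfLifeHours);
}

// Job-status data with an assumed 24-hour half-life:
console.log(dataValue(100, 0, 24));  // 100 (fresh data, full value)
console.log(dataValue(100, 24, 24)); // 50 (a day old, half the value)
```

The specific curve is debatable; the law only requires that the function be decreasing in age.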

Mobile devices are powerful sensor platforms.  They capture, through their many integrated sensors, information useful for establishing context.  Capturing GPS coordinates, for example, enables managers to see the location of their workforce.  Using GPS coordinates and geo-fencing techniques enables a software solution to identify the job site where a team is located.  The job site is associated with a project, budget, P&L, schedule and customer.  Using this captured sensor data and merging it with an understanding of the needs of each supervisor, based upon their title and role on the project, enables context to be established.  If supervisor A is responsible for electrical, then configure the software systems to recognize his/her physical approach to a jobsite and automatically send the latest information on the relevant component of the project.
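As a sketch of the geo-fencing step (in JavaScript; the coordinates and radius are invented for illustration), the check reduces to a great-circle distance against a site radius:

```javascript
// Great-circle distance between two lat/lon points via the haversine formula.
function haversineMeters(lat1, lon1, lat2, lon2) {
  const R = 6371000; // mean Earth radius in meters
  const toRad = d => d * Math.PI / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// A worker is "on site" when inside the site's radius.
function insideGeofence(worker, site) {
  return haversineMeters(worker.lat, worker.lon, site.lat, site.lon) <= site.radiusMeters;
}

const jobSite = { lat: 51.5007, lon: -0.1246, radiusMeters: 200 };
console.log(insideGeofence({ lat: 51.5008, lon: -0.1245 }, jobSite)); // true
```

Everything downstream (timesheet entry, supervisor notification, work-assignment push) hangs off this boolean.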

I submit for your consideration yet another law, "The economic value of information multiplies when combined with context, meaning and right time delivery."  As we have seen, mobile technologies are critical for all of the laws discussed so far in this article.

Once sensors are deployed, sensor measurements captured, data wirelessly uploaded, and context understood, business rules can be developed whereby intelligent processes can be automated. Here is an example: workers arrive at a jobsite, their arrival is captured via the GPS sensors in their smartphones, it automatically registers in the timesheet app, and their supervisor is notified.  As they near the jobsite in the morning, using geo-fencing rules, each worker is wirelessly sent their work assignments, instructions and project schedules for the day.  The right data is sent to the right person on the right device at the right time.

The IoT (Internet of Things) is a world of connected sensors.  These sensors feed more sources of captured data into the analytics engine that is used to find meaning and to provide situational awareness.  If smartphones are mobile sensor platforms, then smartphones and IoT are two peas in the same pod.

Intelligent automated processes, like the ones mentioned above, are called "software robots" by some. These are "aware" processes acting upon real-time data in a manner that supports human activities and increases productivity.  Here is what we all need to recognize - mobile applications and solutions are just the beginning in this value chain.  Rule: Mobile apps provide only as much value as the systems behind them.  Recognizing mobile devices are sensor and reporting platforms that front systems utilizing artificial intelligence and automated processes to optimize human productivity is where the giant leaps in productivity will be found.

If you agree with my premises, then you will understand the urgency to move beyond the simple testing and deployment of basic mobile apps and jump into building the real value in the intelligent systems behind them.

Summary of Laws:
  • The more data that is collected and analyzed, the greater the economic value it has in aggregate
  • Data has a shelf-life and the economic value of data diminishes over time
  • The economic value of information multiplies when combined with context, meaning and right time delivery
  • Mobile apps provide only as much value as the systems and intelligent processes behind them

Tuesday, July 07, 2015

Making the Web Run Faster via WebAssembly

Digital and Mobile Guru, Peter Rogers
Like most of us, my colleague at Cognizant and technical mobile and web expert, Peter Rogers, spends his warm summer evenings pondering how to make the Internet run faster.  In this guest blog, Peter shares the latest developments in "WebAssembly."  Enjoy!

*****

You probably saw the Blogosphere explode the other day when WebAssembly was announced. [Sorry Peter, I missed that one]. For the uninitiated, WebAssembly is ‘a new low-level binary compile format that will do a better job at being a compiler target than JavaScript’. It is basically a binary form of an AST (https://en.wikipedia.org/wiki/Abstract_syntax_tree), which means that it is much faster to load, process and potentially run. Half of the problem with JavaScript is that we need to load a text file and then wait for it to be interpreted. Web browsers try to use JIT (Just In Time) and AOT (Ahead Of Time) strategies to speed things up, but the language itself makes life hard. Suddenly Brendan Eich comes up with a project to deliver an actual binary AST (https://medium.com/javascript-scene/why-we-need-webassembly-an-interview-with-brendan-eich-7fb2a60b0723), and what really excites people is that Google, Mozilla and Microsoft have all agreed to work on the format. Browsers will understand the binary format, which means we'll be able to compile binary bundles that compress smaller, and that means faster delivery. Depending on compile-time optimization opportunities, the WebAssembly bundles may run faster too. A quick overview can be found here (https://medium.com/javascript-scene/what-is-webassembly-the-dawn-of-a-new-era-61256ec5a8f6) and the community page is now open (https://www.w3.org/community/webassembly/).

What followed was a whole load of confused articles about JavaScript being dead, about writing everything in WebAssembly, and questions as to why JavaScript itself cannot compile to WebAssembly the way C/C++ can.

Well here are some interesting bullet points for you:

  • You can actually run WebAssembly today, and it actually uses JavaScript
  • WebAssembly is going to be a slow evolution, not an overnight sensation
  • This solution is really useful for game developers and advanced web applications, but it probably won’t be applicable in most cases

The whole WebAssembly idea has actually evolved from Emscripten (https://github.com/kripken/emscripten), ASM.js (https://hacks.mozilla.org/2015/03/asm-speedups-everywhere/) and NaCl/PNaCl (https://developer.chrome.com/native-client/overview). ASM.js is a subset of JavaScript that has been designed to be highly optimised by compilers, but it is a textual format. If a web browser supports ASM.js and you have somehow managed to load this textual format then you can see execution within about 1.5x of native speed, depending on the browser and the code itself. Just to put that in perspective, that is pretty much the same speed as Java and C#. Sounds great, but how do I get my code into ASM.js? This is where it starts to get nasty…you have to code in C/C++. You use Clang to compile your C/C++ into LLVM bytecode (https://en.wikipedia.org/wiki/LLVM) and then use Emscripten to convert the LLVM bytecode into a subset of JavaScript (http://ejohn.org/blog/asmjs-javascript-compile-target/) called ASM. The Unreal 3 Engine was ported to ASM and it ran surprisingly well in ASM-capable web browsers.
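A toy example of what the ASM subset looks like (hand-written in the asm.js idiom for illustration; real ASM is normally compiler output): the "use asm" directive plus the `| 0` coercions tell an ASM-aware engine that everything here is int32 arithmetic it can compile ahead of time. In a non-ASM engine it still runs as ordinary JavaScript, which is exactly why the approach degrades gracefully:

```javascript
function MyAsmModule(stdlib, foreign, heap) {
  "use asm";
  function sum(n) {
    n = n | 0;                     // parameter coerced to int32
    var i = 0;
    var total = 0;
    for (i = 0; (i | 0) < (n | 0); i = (i + 1) | 0) {
      total = (total + i) | 0;     // int32 addition, no boxing
    }
    return total | 0;              // return type declared as int32
  }
  return { sum: sum };
}

// An ASM module takes the global object, a foreign-function table and a heap.
var mod = MyAsmModule(globalThis, {}, new ArrayBuffer(0x10000));
console.log(mod.sum(10)); // 45 (0 + 1 + ... + 9)
```

The readability cost is obvious even at this size, which is why ASM is usually generated rather than written.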

In a nutshell, you have to write in C/C++ and then use a few tools to output a highly optimised subset of JavaScript (albeit still a textual one) that can be accelerated on web browsers that support it. The rendering part needs to be considered though, and normally we use WebGL because it is hardware accelerated and perfect for dealing with the ASM data structures. Unity have been quick to support ASM, NaCl and now WebAssembly. Any WYSIWYG tool or dedicated programming environment can always spit out ASM. Of course there will always be an overhead if you are switching ASM contexts on and off, so you have to consider your application structure. Ideally one huge lump of heavy processing gets handed over to ASM and the rest of the application carries on with normal JavaScript. You could write the whole application in ASM, but the whole thing becomes totally unreadable unless you use a tool like Unity: and that is probably more suited towards a canvas-type approach like a Widget or a game rather than a full web application.

Quite a few browsers already support ASM, according to this excellent article (https://hacks.mozilla.org/2015/03/asm-speedups-everywhere/).

Exciting…but what if you do not wish to program in C/C++? Here is where it gets very interesting. As it stands there is no native WebAssembly support, but there is a Polyfill (https://github.com/WebAssembly/polyfill-prototype-1) that uses ASM. This means you can run WebAssembly today using ASM, but you are still actually using JavaScript, only a highly optimisable version in a textual form. The roadmap basically starts here, with the logical progression being a binary form of the existing ASM before we move to a whole new language. At the moment you have to use C/C++ in order to generate ASM, but there is nothing to stop you hand-coding it - other than patience and sanity. Anyone familiar with writing video game pokes in machine code back in the 80s will probably smile at the challenge of hand-coding ASM.

So why C/C++? Well, the problem is that scripting languages are very high level. This excellent article talks you through the argument tremendously well (http://mrale.ph/blog/2011/11/05/the-trap-of-the-performance-sweet-spot.html). Probably the best scripting language I know is Ruby, and even that does not have the low-level capabilities of C/C++ required for performance optimisation. Indeed, the whole web was founded on a mix of declarative languages (CSS, HTML) and simple scripting languages (ECMAScript, JavaScript, JScript and even VBScript). There is a reason why game developers use C/C++, and that is performance; if you really want to get close to the metal then a traditional scripting language is probably not going to cut it. Most of this is down to the data structures and the overhead of storing objects.

However…what about a next-generation scripting language…he said mischievously. I was amazed to find a new scripting language that fitted the bill called LLJS (http://lljs.org/), which stands for Low Level JavaScript. They have just been able to compile LLJS into ASM (http://jlongster.com/Compiling-LLJS-to-asm.js,-Now-Available-). This is a very exciting glimpse into the future. I can see tool vendors like Unity and next-generation scripting languages like LLJS all being able to spit out ASM and deliver a much improved web experience. Soon you will be able to write a 3D application in Unity, export it to WebAssembly and use the WebAssembly Polyfill to actually run it in most modern web browsers. LLJS will probably not be the only next-generation scripting language, and ECMAScript 7/ECMAScript 2016 along with new APIs are already adding features that make JavaScript much easier to accelerate, such as Typed Objects (http://wiki.ecmascript.org/doku.php?id=harmony:typed_objects) and SIMD (https://hacks.mozilla.org/2014/10/introducing-simd-js/).

My guess is that ECMAScript will start to evolve into a much lower-level language, and this will rapidly accelerate as soon as a few next-generation scripting languages start to challenge it. It will be very interesting to see how low-level a scripting language can actually become. Swift is arguably an initial attempt at just this, embracing the best practices of scripting along with much deeper control.
*****
Thanks for sharing this article with us Peter!


Wednesday, July 01, 2015

The Evolution of IoT and a Look at the Future

Peter Rogers
My colleague at Cognizant, digital guru and mad scientist Peter Rogers shares his experiences and insights on the IoT (Internet of Things) in this guest blog.  These are his personal opinions. Enjoy!
*****
We hear a lot about Internet of Things but the million dollar question is, how does anybody actually make any money?
  1. The Cloud based vendors will add in IoT support in order to retain or grow their customer base within their MBaaS, MADP or API Gateway solutions.
  2. The developers will try and cash in on wearables as a new platform.
  3. Random new wearable devices will appear from disparate vendors.
Once the dust settles, the impression I get is that networked hardware sensors that can be integrated with regular consumer products will be the next big thing. The size, price and functionality of sensors are now so attractive that we can literally integrate them into our lives in a frictionless way.

I created a hardware demo of a consumer product (a squeezable Mayonnaise bottle) that could detect when and where it was shaken, and then send that information to a marketing micro-website. You can watch the video here (https://www.youtube.com/watch?v=2yaE6-KuHgs) and add your thoughts as to how I accomplished this feat and if you want to help me Kickstarter fund one. I was considering that the bottle could also detect when it was nearly empty and automatically order another one.

A few days later the Amazon Dash Button was announced (https://www.youtube.com/watch?v=NMacTuHPWFI), letting people actually press a remote button to directly order something. It did not stop there: a few weeks later Google announced Project Soli (https://www.youtube.com/watch?v=0QNiZfSsPc0), which is effectively a small radar sensor that can detect small finger movements and map them into user interactions. I was so excited that I ordered a Flic (https://flic.io/), a remote button which you can program to do just about anything. The possibilities seem endless and the sensors are only going to get smaller. Indeed, while the current trend is for phones, and even watches, to get bigger, it is left to the sensors to shrink and integrate seamlessly.

I would therefore predict that the real money is in these small integrated sensors which can offer us digital experiences without us touching a PC, phone, tablet or watch. Interestingly, this fits into the Post App World vision that Apple and Google are allegedly eyeing up (http://www.wired.com/2015/06/apple-google-ecosystem/). In the Post App World it is the API that rules supreme, offering us frictionless services integrated into our consumer products. This vision of hardware sensors offering user interaction without a traditional screen is intriguing and fits the multiple touch points described in that article.

After the sensors the real money lies surely with what the sensors produce…which is data. Suddenly Big Data just got a whole lot more interesting. There will be reams and reams of data from hardware sensors everywhere which are just crying out for Big Data processing solutions. But what do we do with the data? This is where the algorithms come in…highly intelligent algorithms that can analyse consumer data and use predictive analytics in order to offer us services before we even know that we need them. And then what? The algorithms start to use artificial intelligence and we end up with automated agents that operate on data models using M2M, freely trading data with each other, in order to analyse us and then directly offer us new targeted services. Cold Calling has already replaced humans with static voice recordings, but how long before that becomes dynamic? Imagine an autonomous agent somewhere processes enough of your data to work out that you need double glazing and then dynamically records a sales pitch and sends it to you.

My predictions, in a nutshell, are an integration of sensors with everyday consumer products, with the result driving the Big Data market some 12 months later. And what of Virtual Reality? The more I think about it, the more I see an augmented digital reality powered by sensors where the ‘screen’ is our lives. I think the mistake that AR vendors made in the past was to think that we actually needed a screen…
