Tuesday, December 22, 2015

My Best Articles on Mobile Commerce Strategies 2015

In 2015, a master strategy for mobile commerce emerged.  Mobile apps need to be personalized, but that is not enough. Personalization without context, relevance, value to the customer and permission is just creepy and/or obnoxious.  We recognized that a new kind of partnership is required between customers and trusted vendors, one built on a deeper level of earned trust and one that provides mutual benefits through the sharing of data.  We call this relationship an MME Data Partnership.

Elements of MME Data Partnerships can be found within many existing loyalty and rewards programs, although their purpose is rarely understood.  These programs define how specific data will be collected and used to provide mutual benefits.  They are an overt agreement by both parties to share and use data in return for defined rewards.  Within an MME Data Partnership we found that three types of data, which we call 3D-Me, are needed to optimize a mobile user experience:
  • Digital data - online and mobile activities and behaviors
  • Physical data - sensor and IoT data
  • Personal data - personal information shared directly through MME Data Partnerships
For each of these categories, purposeful strategies need to be developed and implemented to collect, analyze and utilize the data in order to provide the best experiences for customers.

Personalization, as we have learned, is not enough. Personalization needs to be combined with CROME Triggers (contextually relevant opportunities, moments and environments), which are bits of data that, when collected and analyzed in real-time, identify the need for specific and relevant personalized content.

All of these strategies and more are discussed in "The Best of Mobile Commerce 2015" articles listed below:
  1. Strategies for Personalizing Mobile Apps
  2. Special Report: Cutting Through Chaos in the Age of "Mobile Me"
  3. Mobile Strategies for Combining IoT, CROME, 3D-Me and Artificial Intelligence
  4. Mobile Commerce Strategies and CROME Triggers
  5. What Does the Age of Mobile Me - Mean for Retailers?
  6. Mobile Commerce Strategies and Tactics
  7. Retail Evolution, Mobile Experiences and MME Strategies
  8. Mobile Commerce, Speed and Operational Tempos, Part 1
  9. Mobile Commerce, Speed and Operational Tempos, Part 2
  10. Mobile Commerce, Speed and Operational Tempos, Part 3
  11. Latest Research on Mobile Commerce Trends and Strategies
  12. The New Mobile Consumer - Latest Research
  13. Mobile Consumer Behaviors - The Questions to Ask
  14. Video: Age of Mobile Me
Download the full report, "Cutting Through Chaos in the Age of Mobile Me" here: http://www.cognizant.com/InsightsWhitepapers/Cutting-Through-Chaos-in-the-Age-of-Mobile-Me-codex1579.pdf.

************************************************************************
Kevin Benedict
Writer, Speaker, Analyst and World Traveler
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the LinkedIn Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

Tuesday, December 08, 2015

Instantly Personalizing Mobile Apps - Cutting Through Chaos

Unique Consumers and Unique Profiles
Smartphones, laptops, PCs and in-store visits have made path-to-purchase journeys very complex and confusing for online retailers to recognize and support.  Consumers can search for and discover products and services using a smartphone on their way to work.  In the evening they can pull out a tablet and engage in immersive research while lying in bed.  They may decide to do more research on their desktop at work, then at lunchtime stop at a brick-and-mortar store to look at the product in more detail.  That evening, they purchase the product online using a laptop.  How is a retailer or e-tailer going to cut through this chaos and recognize individual consumers and their needs along their path-to-purchase journey?

In our research at Cognizant's Center for the Future of Work, we found online shoppers use different devices for different categories of products.  In fact, 56% of online shoppers use multiple devices on many online path-to-purchase journeys.  A common pattern is on-the-go search and discovery on smartphones, immersive research on tablets, and completed transactions on laptops.

Consumers are comfortable purchasing some products on a smartphone, but not others.  We found online shoppers of different ages exhibit markedly different shopping behaviors, and we found significantly different online shopping behaviors across education levels, genders, ethnicities and technology preferences (laptop/desktop vs. mobile).

Our findings reveal that these variables, all added up, equate to thousands, if not millions, of different combinations of needs, preferences, activities and behaviors.  These unique sets of variables, which we call Mobile Me Profiles (MME-Ps), require different personalized content, at different times and locations, for each consumer in order to provide an optimal experience.  In this age of "mobile me," where customers demand personalized and relevant user experiences, it is necessary to identify these differences precisely and instantly.
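
As a purely illustrative sketch (the field names below are invented for illustration and are not taken from the report), an MME-P might capture variables along these lines:

  var mmeProfile = {
    ageBand: '25-34',
    educationLevel: 'graduate',
    devicePreference: 'smartphone',                    // vs. 'laptop/desktop'
    comfortableBuyingOnMobile: ['books', 'apparel'],   // categories trusted to a small screen
    typicalJourney: ['smartphone:discovery', 'tablet:research', 'store:inspection', 'laptop:purchase'],
    preferredEngagementTimes: ['07:30-08:30', '20:00-22:00']
  };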

If you are going to compete and win in mobile commerce today, you must target markets of one.  It is no longer an effective strategy to treat your customers as one homogeneous market of unknown consumers.  In today's world of mobile commerce, where devices are intimate extensions of unique individuals, knowing those individuals, as individuals, is key.

Read more on how to deliver these strategies in my new report, "Cutting Through Chaos in the Age of Mobile Me."

Download the report here http://www.cognizant.com/InsightsWhitepapers/Cutting-Through-Chaos-in-the-Age-of-Mobile-Me-codex1579.pdf.

************************************************************************
Kevin Benedict
Writer, Speaker, Analyst and World Traveler
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the LinkedIn Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

Mobile Technologies Revealed: Web and Native App Development Strategies

Our resident Cognizant mobile and digital technology guru, Peter Rogers, shares his insights into web and native app development strategies in this guest post.  Enjoy!
********
Peter Rogers
I often meet customers who want to transition web developers into mobile application developers. Apple has clearly tried to address this market with Swift, but that does not offer a cross-platform solution. Developers who have come through this transition will traditionally wrap the latest and greatest web framework (like Angular 2 or React) using Apache Cordova through initiatives like Ionic. However great the latest web frameworks are, they can never compete with pure native mobile user interfaces powered by dedicated hardware acceleration. It may be a simple solution, but the net result is never going to be the best possible user experience, and there will always be problems with Apple App Store submission and with changes to WebView technologies designed to gently nudge developers towards pure native Apps.

Appcelerator Titanium has long offered an excellent solution in this space, but the one downside is the lack of a combined desktop and mobile solution.

Recently, three exciting new initiatives arrived to offer Titanium-like solutions in this space:

  1. React Native (http://www.reactnative.com/)
  2. Angular 2 Native Apps (https://www.youtube.com/watch?v=4SbiiyRSIwo)
  3. NativeScript (https://www.nativescript.org/)

The benefit of the first two is that the technology can be shared effectively across both mobile and desktop. There is no need to learn a new framework. For web developers trained in Angular 2 or React, this is a very attractive path into mobile development without having to go anywhere near Cordova. In fact, in most cases all you have to do is swap out the final Cordova wrapping process for a dedicated web-native development phase, which means you don't have to throw anything away.

How does this magic work? Well, advanced web developers have already started to mix Angular and React, combining the big-framework quality of Angular with the high-speed rendering of React. This architecture is made even simpler with Angular 2, in which there is platform-agnostic template parsing and platform-specific rendering. This makes it possible to plug in React Native or NativeScript as the underlying rendering engine. It offers a future in which Angular 2 can create cross-platform desktop or cross-platform mobile applications, allowing you to choose your programming language (ECMAScript 5.1, ECMAScript 2015, TypeScript, Dart or CoffeeScript) and choose your platform-specific rendering engine (React Native, NativeScript, Angular 1, Angular 2 or React). For those who wrote off Angular 2 due to its radical design changes, that decision is suddenly looking incredibly hasty, for this is nothing short of genius.
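
To make the idea of pluggable rendering concrete, here is a toy JavaScript illustration (it is not the actual Angular 2, React Native or NativeScript API): the same platform-agnostic component description is handed to either a DOM renderer or a stand-in for a native proxy renderer.

  // Platform-agnostic description of a tiny UI component.
  var loginButton = { type: 'button', text: 'Log in', onTap: function () { console.log('tapped'); } };

  // Renderer targeting the browser DOM.
  var domRenderer = {
    render: function (desc) {
      var el = document.createElement(desc.type);
      el.textContent = desc.text;
      el.addEventListener('click', desc.onTap);
      return el;
    }
  };

  // Stand-in for a native rendering engine: a real engine would create a proxy
  // to a native widget rather than a plain object.
  var nativeRenderer = {
    render: function (desc) {
      return { nativeWidget: desc.type, label: desc.text, tapHandler: desc.onTap };
    }
  };

  // The same component description works with either renderer.
  var webView = domRenderer.render(loginButton);
  var nativeView = nativeRenderer.render(loginButton);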

If you watch the Angular 2 Native Apps video then you will see the focus is on NativeScript. The question is: why not consider Titanium or React Native? Whilst that is perfectly possible using the plug-and-play nature of the new Angular 2 rendering engine, there is a clear advantage offered by NativeScript. To understand this advantage we need to take a slight diversion into the Hybrid App world. As you may recall, there are three main models for Hybrid Apps: wrapped web; runtime interpreters; and cross-compilers. If we start with cross-compilers then we find Xamarin ruling the roost, but I would not call this a Rapid Application Development approach: you gain performance at the cost of a slightly longer development time and a more difficult programming language. The interesting thing with Xamarin is the 100% API coverage available within a few days of a platform release. There are also a few HTML5 canvas cross-compilers, like those found in Intel XDK, but these are specific to canvas technology, which works best for the specific use cases of widgets and games. We all know the most popular wrapped web solution is Cordova, with another notable entry being IBM Worklight.

Runtime Interpreter solutions do not quite offer the performance of a cross-compiler, but they do support rapid application development through JavaScript. Appcelerator Titanium is the most popular Runtime Interpreter solution and has teased a cross-compiler solution called HyperLoop for a long time, but it is offered only in a restricted capacity. I am a huge fan of Titanium and have used it a lot for various customers. I was really looking forward to HyperLoop, but looking at the software repository it seems to have slowed to a halt. The only downside of Titanium is the lack of 100% API coverage, but this is a limitation shared with most other portable native solutions, with Xamarin and NativeScript being the notable alternatives. In the case of Xamarin the API wiring has to be performed by hand, whereas in NativeScript it is automatic.

So what is the magic of the Runtime Interpreter solution powering Titanium, Kony, React Native and NativeScript? Well, Telerik (who created NativeScript) provide the best explanation that I have quite possibly ever read online (http://developer.telerik.com/featured/nativescript-works/). In a nutshell, the two core JavaScript engines that power iOS (JavaScriptCore) and Android (V8) both expose a very advanced set of APIs that power the JavaScript bridge (http://izs.me/v8-docs/namespacev8.html):

  • Inject new objects into the global namespace
  • JavaScript function callbacks
  • JNI to talk with the C layer on Android

NativeScript offers the following explanation of how it uses these APIs in order to build the JavaScript bridge:

  1. Metadata is injected into the global namespace at build time.
  2. The V8/JavaScriptCore function callback runs.
  3. The NativeScript runtime uses its metadata to know that the JavaScript function call means it needs to instantiate an Android/iOS native object.
  4. The NativeScript runtime uses the JNI to instantiate an Android object and keeps a reference to it (iOS can talk directly to the C layer).
  5. The NativeScript runtime returns a JavaScript object that proxies the Android/iOS object.
  6. Control returns to JavaScript, where the proxy object gets stored as a local variable.

This is probably quite similar for most of the other vendors, but the additional step NativeScript adds is the ability to dynamically build the API set at build time using reflection (introspection). Because generating this data is non-trivial from a performance perspective, NativeScript does it ahead of time and embeds the pre-generated metadata during the Android/iOS build step. This is why NativeScript can offer 100% API coverage immediately: it does not involve the manual step required in Xamarin. To be accurate, it is unlikely that NativeScript can offer 100% API coverage; instead it offers all of the APIs that can be discovered through reflection, a subtle difference that those who have used reflection programmatically will pick up on.
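
To make the proxy mechanism concrete, here is a minimal sketch of what this looks like from the JavaScript side, based on the runtime behaviour described in the Telerik article (the file path is just an example):

  // Android: the java.io.File class is exposed directly to JavaScript; the returned
  // object is a proxy whose method calls are forwarded to the native object via JNI.
  var file = new java.io.File('/data/local/tmp/example.txt');
  console.log(file.exists());

  // The same pattern applies on iOS, talking directly to the native runtime, e.g.:
  // var str = NSString.stringWithString('hello');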

NativeScript offers two different modes of operation:

  1. Use the low-level iOS and Android objects directly
  2. Use high-level abstraction APIs

The high-level abstraction APIs are provided as require()-based modules and let you work without touching platform specifics. If you were wiring this into Angular 2, you would probably have an Angular component which calls either a Browser Object or an NS Module, which itself talks to either an iOS proxy object or an Android proxy object through NativeScript. Of course, there is nothing to stop you having an Angular component that calls out to React Native, and that option is being explored as well.
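
Here is a short sketch of that higher-level module approach, assuming the early NativeScript module layout (module names may differ between releases):

  var labelModule = require('ui/label');   // cross-platform abstraction module
  var label = new labelModule.Label();
  label.text = 'Hello from NativeScript';
  // Under the covers the Label maps to android.widget.TextView on Android and
  // UILabel on iOS via the proxy mechanism described above.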

This is not to say that NativeScript is better than React Native, Titanium or Xamarin. In fact I can see the main use case of NativeScript as being used inside of Angular 2 as its platform specific rendering solution. I can actually see more people using React Native as a standalone solution even though it is in a much earlier state. I can also see Titanium carrying on as one of the most popular mobile solutions on the market today. I can however see native mobile web applications becoming a hot new topic and a great place to transition web developers towards.

Download the latest mobile strategies research paper, "Cutting Through Chaos in the Age of Mobile Me," here http://www.cognizant.com/InsightsWhitepapers/Cutting-Through-Chaos-in-the-Age-of-Mobile-Me-codex1579.pdf
************************************************************************
Kevin Benedict
Writer, Speaker, Analyst and World Traveler
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the LinkedIn Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

Monday, December 07, 2015

Latest Research on Mobile Consumer Behaviors and Mobile App Requirements

I just finished a major research paper titled, "Cutting Through Chaos in the Age of Mobile Me."  Our findings reveal current mobile consumer behaviors, the challenges in creating mobile apps for them, and specific recommendations and business strategies for winning in an age of "Mobile Me."  Download the full report here http://www.cognizant.com/InsightsWhitepapers/Cutting-Through-Chaos-in-the-Age-of-Mobile-Me-codex1579.pdf.

Video Link: https://youtu.be/IqN6NbY_Q0A
************************************************************************
Kevin Benedict
Writer, Speaker, Analyst and World Traveler
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the LinkedIn Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

Tuesday, November 24, 2015

Beacon Essentials You Must Quickly Learn

Our resident Cognizant digital/mobile expert, Peter Rogers, asked me to recommend a digital strategies topic to share, and I suggested Beacons for this week.  I confess to reading about them daily without knowing much about them, so I want to thank Peter for this article!  Enjoy!
********
Digital & Mobile Expert
Peter Rogers

Let's start with a Basic Beacons 101 class:

  1. Beacons do not push out notifications. They broadcast an advertisement of themselves (traditionally their UUID, major and minor values) and can be detected by Bluetooth Low Energy (BLE) devices.
  2. Proximity to a number of Beacons can be estimated from signal strength and combined using typical triangulation techniques to get a (very) rough idea of (typically indoor) location; see the sketch after this list.
  3. The Beacon UUID, major and minor values are typically used for identification and are mapped to a message, service, media content, website, application or location inside the Native App.
  4. Beacons can have their UUID, major and minor values (and indeed power level) modified statically before deployment or dynamically over WiFi connectivity. A Beacon Management App is often provided by a Beacon Platform Vendor to allow you to manage these values dynamically.
  5. Updating the Beacon major and minor values changes the identity of the Beacons and subsequently what they map to inside the Native App. This does mean there is a security risk of somebody remotely hacking your Beacons and changing their values to take down or corrupt your service.
  6. iBeacon is Apple's proprietary BLE profile, but their patents seem to cover more than just the profile aspect. There were Beacons before iBeacons; Apple did not invent the Beacon. What they did do is an incredibly good job of integrating Beacon support into iOS. iBeacon is not a piece of hardware; it is a BLE profile that is loaded onto a piece of hardware, and this profile is what makes the Beacon an iBeacon.
  7. There are many Beacon vendors who offer various capabilities such as: BlueCats; BlueSense; Gelo; Kontakt.io; Glimworm; Sensorberg; Sonic Notify; beaconstac; mibeacon (Mubaloo); estimote; Gimbal (Qualcomm); Apple; and Google, etc. 
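
As a rough illustration of point 2 above, here is a sketch of the RSSI-based distance estimate that proximity and triangulation techniques start from. It uses the common log-distance path-loss model; real SDKs smooth and filter the signal, so treat this as an approximation only.

  // txPower is the calibrated RSSI at one metre (advertised by many Beacons);
  // n is an environmental attenuation factor, typically somewhere around 2 to 4.
  function estimateDistanceMetres(rssi, txPower, n) {
    n = n || 2.5;
    return Math.pow(10, (txPower - rssi) / (10 * n));
  }

  console.log(estimateDistanceMetres(-75, -59));   // roughly 4.4 metres
  // Estimates from three or more Beacons at known positions can then be combined
  // (trilateration) to get the (very) rough indoor location described above.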

Beacon vendors differentiate themselves with offerings such as:

  • hardware
  • proprietary BLE Beacon profiles
  • support for popular profiles
  • remote Beacon management
  • analytics
  • associated content management
  • marketing campaigns
  • software version management
  • profile switching
  • client side SDKs
  • professional support services
Most do not offer the whole solution, and so it was interesting to see Apple and then Google throw their hats into the ring. Most people are still really excited about Apple's iBeacons, but they look set to become a closed ecosystem, which could possibly even mean being physically undetectable to non-Apple hardware.  Today, Beacon vendors are simply not allowed to provide library-based support for iBeacons on Android hardware (http://beekn.net/2014/07/ibeacon-for-android/).

At the start of 2015 Google created a new form of Beacon called UriBeacon (http://uribeacon.io/) which was able to actually advertise a URL pointing to a website or a URL that could be processed locally. This was in stark contrast to all the previous forms of Beacon which could only advertise their identity (UUID, minor, major). UriBeacons also promised to be cheaper and easier to configure, which was largely down to their more limited use case of just being used to advertise a URL/URI. The killer concept, however, was that of The Physical Web. The Physical Web is an approach to unleash the core superpower of the web: interaction on demand. People should be able to walk up to any smart device and not have to download an app first. A small pre-installed App (like the Web Browser or something Operating System level) on the phone scans for URLs that are nearby. Google previously used the UriBeacon format to find nearby URLs without requiring any centralized registrar.

This was a major breakthrough, because having to download an App for each Beacon vendor completely breaks the organic, intelligent, evolutionary Smart City model. Notice that I used the words ‘without having to download an App’. You still need an App to process the UriBeacons; however, this can be built into the Web Browser (Chrome offers this for iOS) or the Operating System (Android M offers this). The following vendors offer UriBeacons: Blesh; BKON; iBliO; KST; and twocanoes, etc.

Recently Google updated its single-use-case UriBeacon specification into Eddystone. Eddystone is an open-source, cross-platform beacon format that supports broadcasting of UUID, URL, EID and Telemetry data. Previously Beacons had only supported UUID, until UriBeacons offered the single option of URL advertisement. Eddystone offers two additional frame types: Ephemeral ID (EID), an ID which changes frequently and is only available to an authorised app; and Telemetry, data about the beacon or attached sensors, e.g. battery life, temperature and humidity. Unlike iBeacons, which must be approved by Apple, anyone can make an Eddystone-compatible beacon. Current beacon manufacturers include Estimote, Kontakt and Radius Networks, etc.

The Eddystone-URL frame broadcasts a URL using a compressed encoding format in order to fit more within the limited advertisement packet. Once decoded, the URL can be used by any client with access to the Internet. For example, if an Eddystone-URL beacon were to broadcast a URL, then any client that received the packet and had an Internet connection could choose to visit that URL (probably over WiFi). You can use an App to manage that experience and either go directly to the URL or process a URI internally to perform some other function without network connectivity. Better still, The Physical Web initiative has moved away from UriBeacon to the open initiative of Eddystone.
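
For illustration, here is a simplified JavaScript decoder sketch for an Eddystone-URL frame, assuming the published layout of frame type (0x10), TX power, scheme prefix and encoded URL bytes; the expansion table is abbreviated here, so consult the Eddystone specification for the full list.

  var SCHEMES = ['http://www.', 'https://www.', 'http://', 'https://'];
  var EXPANSIONS = ['.com/', '.org/', '.edu/', '.net/', '.info/', '.biz/', '.gov/',
                    '.com', '.org', '.edu', '.net', '.info', '.biz', '.gov'];

  function decodeEddystoneUrl(bytes) {            // bytes: array of numbers from the advertisement
    if (bytes[0] !== 0x10) { return null; }       // 0x10 marks the Eddystone-URL frame type
    var url = SCHEMES[bytes[2]] || '';
    for (var i = 3; i < bytes.length; i++) {
      var b = bytes[i];
      url += b < EXPANSIONS.length ? EXPANSIONS[b] : String.fromCharCode(b);
    }
    return url;
  }

  // Example: [0x10, 0xEB, 0x02, 0x67, 0x6F, 0x6F, 0x2E, 0x67, 0x6C, 0x2F] decodes to 'http://goo.gl/'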

Now one thing to realise is that Eddystone may support iOS but that obviously does not include integration with CoreLocation as per iBeacons. Eddystone beacons only interact with iOS devices via CoreBluetooth which means you have more work to do. Likewise, on Android M there are a whole bunch of new APIs and those will not be available on iOS.

  • The Nearby API makes it easy for apps to find and communicate with beacons to get specific information and context. Apparently it uses a combination of Bluetooth, Wi-Fi, and inaudible sound.
  • Nearby provides a proximity API called Nearby Messages in which iOS devices, Android devices and Beacons can discover, communicate and share data/content with each other.
  • The Proximity Beacon API helps developers manage data and content associated with Beacons. Once Beacons are registered with Google's Proximity Beacon API, we can map data and content that can be pulled from the Cloud using a REST interface. This makes Content Management Solutions much easier and gives us the ability to dynamically map content to Beacons. This functionality will most probably be supported in the Physical Web through Web Browser clients that support this API through JavaScript.
  • Place Picker is an extension of the Places API that can show Beacons in your immediate vicinity. The Places API is also able to read and write Beacon positioning information (GPS coordinates, indoor floor level, etc.) from/to the Google Places database using a unique Place ID based around the Beacon UUID, and then have the Beacons navigable through Google Maps. This would provide a much better retail solution where customers could literally Google "Hair Shampoo" inside a Boots store and be taken directly to the product using indoor positioning.

I am sure you have many questions, such as: can a Beacon run iBeacon and Eddystone simultaneously? At the moment the Beacon vendors offer the ability to support both profiles, but not simultaneously; this is apparently due to battery usage. Most vendors do seem to support simultaneous broadcast of UUID, URL and Telemetry within Eddystone, though. For any other questions, here is a fantastic Q&A on Eddystone from Kontakt.io (http://kontakt.io/blog/eddystone-faq/).

The Physical Web has now moved away from UriBeacon and onto Eddystone-URL frames. A few months ago, Chrome for iOS added a Today widget. The new Chrome for iOS integrates the Physical Web into the Chrome Today widget, enabling users to access an on-demand list of web content that is relevant to their surroundings. The Physical Web displays content that is broadcasted using Eddystone-URL format. You can add your content to the Physical Web by simply configuring a beacon that supports Eddystone-URL to transmit your URL of choice. When users who have enabled the Physical Web open the Today view, the Chrome widget scans for broadcasted URLs and displays these results, using estimated proximity of the beacons to rank the content.

The Physical Web also supports finding URLs over WiFi using mDNS (and uPnP). The multicast Domain Name System (mDNS) resolves host names to IP addresses within small networks that do not include a local name server. It is a zero-configuration service, using essentially the same programming interfaces, packet formats and operating semantics as DNS. While designed by Stuart Cheshire to be stand-alone capable, it can work in concert with DNS servers. The mDNS protocol is implemented by Apple's Bonjour and by the Linux nss-mdns services. In other words, rather than waiting for your client to discover a Beacon advertising a UUID or URL, you could actually start searching for local services hosted on Beacons using a multicast form of DNS. Beacons are actually more powerful than most people realise and can often run micro-services. In fact, if we think about it, Beacon-based services are the ultimate form of a micro-service architecture. Brillo is an upcoming Android-based operating system for IoT devices, and this lightweight OS could theoretically run on a Beacon, which would enable a portable way of deploying a Beacon-based micro-service architecture.

When you woke up this morning did you honestly think that Beacons were that powerful?

************************************************************************
Kevin Benedict
Writer, Speaker, Analyst and World Traveler
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the LinkedIn Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

Monday, November 23, 2015

The 18 Laws for Winning with Data, Speed and Mobility

I have given nine presentations in the past 10 days on mobile and data strategies.  I have met with companies in the energy, media, insurance and banking industries.  I have brainstormed and discussed these laws for winning with data, speed and mobility, and they have held up.  In the age of mobile me, where information is the prize, a new set of laws and strategies is required to win.  In my new report, "Cutting Through Chaos in the Age of Mobile Me," I discuss many of these laws and how they are applied in mobile apps and mobile commerce.
  1. Data is the modern commercial battlefield.
  2. Information dominance is the strategic goal.
  3. Real-time operations and tempos are the targets.
  4. Advantages in speed, analytics and business operational tempos determine the winners.
  5. Real-time business speed is enabled by advances in mobile information, sensors and wireless communications.
  6. Competition is now focused on optimizing information logistics systems (the systems involved in maximizing information advantages).
  7. Businesses that can “understand and act with speed” dominate those which are slower.
  8. In order to win or gain superiority over competitors in the age of information, you must operate information logistics systems at a faster tempo, and get inside your competitor's decision curves. (Adapted from John Boyd)
  9. Situational awareness enables insights, innovations and operations to be conducted faster and at lower cost.
  10. Principle of Acceleration & Mobility – As demand for mobile apps increases, an even greater demand for changes will occur across business processes, operations and IT.
  11. The more data that is collected and analyzed, the greater the economic value and innovation opportunity it has in aggregate.
  12. Data has a shelf-life, and the economic value of data diminishes quickly over time.
  13. The economic value of information multiplies when combined with context and right time delivery.
  14. Mobile apps provide only as much value as the systems behind them.
  15. Full Spectrum Information: Winners will dominate by collecting, transmitting, analyzing, reporting and automating decision making faster and better.
  16. The size of opponents and their systems and platforms is less representative of power today than the quality of their sensor systems, mobile communication links and their ability to use information to their advantage.
  17. Information is a new asset class, in that it has measurable economic value.  There are significant strategic, operational and financial reasons for investing in it, and optimizing it. (Douglas Laney, Gartner)
  18. If I can develop and pursue my plan to defeat you faster than you can execute your plan to defeat me, then your plan is unimportant. ~ Robert Leonard
These laws need to be known, and their relevance intimately understood and applied to every aspect of business and IT today.

Download the new report "Cutting Through Chaos in the Age of Mobile Me" - http://www.cognizant.com/InsightsWhitepapers/Cutting-Through-Chaos-in-the-Age-of-Mobile-Me-codex1579.pdf

************************************************************************
Kevin Benedict
Writer, Speaker, Analyst and World Traveler
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the LinkedIn Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

Tuesday, November 17, 2015

Data Collection and the Modern Battlefields of Business

Dr. John Snow's Map
In 1854, cholera broke out in the Soho neighborhood of London.  Hundreds of people were struck down and died within days.  No one at the time understood where the disease came from, how to treat it or how it was transmitted.

A local physician, Dr. John Snow, spent every possible moment of his day studying the victims and data in an attempt to understand the disease.  His biggest challenge was a lack of data.  He had only the list of the dead and a blank map of the neighborhood.  What he needed was more data.  This was solved when he met the local priest, Henry Whitehead.  Whitehead had recorded the times of death and the locations where the affected families lived and died.  When these sources of data were combined, and then overlaid on a map, visual patterns emerged which ultimately led the two to see that the common denominator for all the victims was drinking contaminated water from the Broad Street water pump.

The pump handle was removed, people stopped drinking its water, and the disease burned out.  Dr. John Snow is now recognized as one of the fathers of modern epidemiology.  The data that led to his discoveries were:
  • Victims
  • Relationships
  • Locations
  • Time of illness
  • Time of death
  • Behaviors and patterns of life
Adding all of these data sources to a map, for visual reference and clarity, enabled the insight that ultimately revealed the source and means of transmission of the disease.  Without key data sources, the disease would have remained a mystery and many more people would have died.

In business, many challenges and obstacles today can also be solved with better data collection strategies and enhanced analytics.  We have all heard the phrase, "knowledge is power."  Knowledge comes from data, so data is power.

I sincerely believe that the battlefields of business today are around data.  The winners of today and tomorrow will be those better able to collect, analyze, understand and apply data to the customization and personalization of digital interactions.  My colleagues Malcolm Frank, Paul Roehrig and Ben Pring wrote the book "Code Halos" last year to dive deep into these ideas.

Last week I published a new thought leadership whitepaper on the application of real-time data strategies and analytics to mobile commerce and consumer facing mobile applications.  The paper is titled, "Cutting Through Chaos in the Age of Mobile Me."  You can download the whitepaper here http://www.cognizant.com/InsightsWhitepapers/Cutting-Through-Chaos-in-the-Age-of-Mobile-Me-codex1579.pdf.

************************************************************************
Kevin Benedict
Writer, Speaker, Analyst and World Traveler
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the LinkedIn Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

Thoughts on Delivering Digital Transformation with Agile

My colleague, digital expert and programming guru Peter Rogers, shares his insights on the role of Agile in digital transformation, and the major debates around methodology.  WARNING!  This article dives deep into programming strategies, so go NO further if this frightens you.  Enjoy!
********
Peter Rogers
Most digital projects are going to end up being run as Agile. With that said it is well worth knowing which style of Agile you are actually using to ensure that: it is indeed a recognised style; and that you play to its strengths. Often projects fall between different styles or offer a random approximation of what the team thinks an Agile project should look like. In either case you may find the project goes inexplicably sideways.

XP (or Extreme Programming) is a suite of practises, principles and values invented by Kent Beck in the late 90s. A lot of people remember XP for the controversies, such as pair programming, fights to the death over curly brace placement, and confusion over 40 hours a week being a Sustainable Pace.

What people forget about XP is that it actually came up with most of the practises that we use today in Agile:

-Planning Poker
-Collaborative multi-discipline teams
-Acceptance Tests
-Continuous Integration
-Test Driven Development
-Simplest Design

XP is one of many Agile methods, but it remains the most well defined, broadest and most disciplined. Am I suggesting you use XP? I think you probably do use a few of its practises already. It is certainly interesting that much of what you probably considered Agile is actually an XP practise that you have bolted onto your chosen Agile method.

In 2001 a large developer summit took place in Utah and the result was the Manifesto for Agile Software Development.

This created 4 values:

-Individuals and interactions over processes and tools
-Working software over comprehensive documentation
-Customer collaboration over contract negotiation
-Responding to change over following a plan

The set of values was accompanied by a set of 12 principles which you can find at www.objectmentor.com who offer great debate on this topic.

Now those of us who live in the real world are already laughing at the third point, "Customer collaboration over contract negotiation".

Sadly, in a world of fixed-price, fixed-deliverable, fixed-time, fixed-capacity, heavily negotiated digital contracts, it is hard to see how Agile even has room to breathe.  One solution in this situation is to use longer-term Agile planning techniques (the Agile Planning Onion) as opposed to software estimation techniques (like UCP). You plan how to deliver the high-level work packages inside the shell of an agreed set of sprint/release-based payment milestones. When most of the variables are already fixed, you look towards planning, as opposed to estimation, as a beacon for delivery confidence.

At this juncture, as a technologist, solution architect and offshore/nearshore specialist, I must take issue with two of the Agile principles:

(1) The most efficient and effective method of conveying information to and within a development team is face-to-face conversation

The world has moved on since then and there is no reason to believe Agile projects have to be run all in one room. We live in a time of offshoring and nearshoring that just wasn't considered back then.  That said, I do believe (from personal experience) that teaching is far more effective face to face. So maybe I partly agree, but I object to the implications of the wording.

(2) The best architectures, requirements and designs emerge from self-organizing teams

Unfortunately, I do not subscribe to the theory that we do not need subject matter experts and specialists, that anyone should be able to do any role, and that a team can self-repair without any external guidance. This is not what I have seen in the real world.

I equally disagree that teams can autonomously find and complete training as and when required, in order to self-heal and repair themselves during a project without any external influence.

Instead I believe in mentors and subject matter experts who train the team in order to improve them as a collective. If these people are in the team, then that's great but there is nothing wrong with external mentors. This is not a communist state and everyone does not have to be equal in anything other than respect. There have to be knowledge leaders to help spread the knowledge and in most cases there is an external support framework.

SEAL Team 6 has repeatedly proven that a small specialist team can do what a thousand generalists could not even hope to achieve. I certainly believe in empowering developers and motivating a team, but everybody has roles. If you give a job to somebody who isn't good at that job, then it will take much longer. I certainly do not believe in emergent architectures and emergent problem solving either.

Indeed, I feel solution architects will probably agree with me on these two points, but will feel even more strongly about some of the concepts in Lean.

Lean originates from Lean Manufacturing, where you eliminate waste, and The Toyota Way, where you improve the flow or smoothness of work.

This is achieved through the following principles:

-Pull Processing
-Perfect First Time Quality
-Waste Minimization
-Continuous Improvement
-Automation
-Long Term Relationships
-Production Flow
-Visual Control

Mary and Tom Poppendieck adapted the principles from Lean Manufacturing to create Lean Software Development with the following premises:

-Eliminate Waste
-Build Quality In
-Create Knowledge
-Defer Commitment
-Deliver Fast
-Respect People
-Optimise The Whole

Lean tells us to eliminate anything not adding value at this moment in time. That can mean eliminating features, documents, meetings and even future known requirements. If it doesn't add immediate value, then it's out.

One thing Lean gets spot on is the concept of optimising the whole. Too many companies reward partial goals like lines of code or defects found. These goals are often mutually exclusive as many managed service organisations found out a few years back. Focus your goals and reward system around optimising the whole system and not part of it.

The first thing I disagree with Lean on is that it expects teams to miraculously self-heal without any external intervention. I personally believe there is a difference between healthy respect and failing to intervene when a team is clearly struggling and needing help. Some would disagree but I don't agree with teams operating in a bubble as they are always surprised by the effects of what lives outside.

A second thing I adamantly disagree with Lean on as a solution architect is the notion of deferring decisions to the last minute and waiting for an emergent solution to appear. Never in my life has such an emergent solution appeared. Those who fail to plan, plan to fail. I doubt I am the only solution architect to take issue with this point if only for fear of job security.

A third and final point is if you can show me a Lean project that can work with reusable components on separate delivery timescales then I will literally buy and eat a hat.

Lean appears to promote a lack of future planning, risk management and dependency management. Its "live in the moment" style is popularised by Lean Startup, which promotes Continuous Deployment and Fail Fast. Originally developed in 2008 by Eric Ries with high-tech companies in mind, it remains a good technique for Start Ups to test a product on a target audience without burning all their VC money in month one.

Ries offers these lean startup principles:

-A Minimum Viable Product
-Continuous Deployment
-A/B Testing
-Actionable Metrics
-Pivots (course correction)
-Innovation Accounting
-Build, Measure and Learn loop

Later business model templates arose:

-Business Model Canvas
-Lean Canvas
-Crowdfunding Canvas

I like Lean Startup, although I take issue with some of the Lean thinking, because it offers some solid principles straight out of the Extreme Programming domain. That said, outside of Start Ups you will rarely find companies doing Continuous Deployment (as opposed to Continuous Delivery), willing to float an unpolished MVP, or able to accept the financial risk of a failed product.

So where does that leave us? I believe we need to pick an Agile methodology, and there are three main contenders:

-Scrum
-Kanban
-Scrumban

We all know and love Scrum. The Sprint Backlog details the User Stories (or Features in BDD) to do, and the Product Owner provides a prioritisation based on the business needs. Each Sprint is fixed in duration (normally two weeks), with its content decided just beforehand using Story Points or Ideal Days in a game of Agile Planning Poker.

The benefit of the fixed Sprint sizes is that you can work out the Velocity of the team based on the completed Story Points in order to fit just enough User Stories into the next Sprint with a few Stretch Targets. Burndown charts visualise the Story Point progress and a Scrum Master is in charge of keeping the team motivated and clear of distractions. We show the Product Owner a fully working product at least every Sprint and so we get regular feedback and validation.
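
As a back-of-the-envelope illustration of the Velocity calculation just described (the numbers are invented):

  function velocity(completedPointsPerSprint) {
    var total = completedPointsPerSprint.reduce(function (sum, points) { return sum + points; }, 0);
    return total / completedPointsPerSprint.length;
  }

  var recentSprints = [21, 18, 24];        // Story Points completed in the last three Sprints
  console.log(velocity(recentSprints));    // 21: plan roughly this many points for the next Sprint,
                                           // plus a few Stretch Targets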

I must say that I love Scrum, but I do agree with some of the points raised by the Kanban camp. Fixed Sprint sizes can be too long to get validation, and during those two weeks the business may have changed direction. If you are working with a highly dynamic business, then you may need to either adopt one-week Sprints or consider Kanban.

The second problem with Scrum is that there is no measure of how long it takes for a feature to go live. If you are doing Behaviour Driven Development (BDD) and/or Continuous Delivery, then Feature Cycle Time is a particularly useful measure of efficiency.

The third and final problem with Scrum is that often the Product Owner role is part-time and does not have a strong connection with the business, and the Scrum Master role is part-time or doesn't exist for cost reasons.

Kanban is the second popular approach, although I must admit here and now that I have often seen this border precariously on chaos. Kanban has no time boxed Sprints but instead it limits how many features a team can work on at any given time. There is no Product Owner or Scrum Master and the team just decide priorities collectively.

When a feature is completed then it can be made immediately available to release into production (or a suitable safer environment) and the Cycle Time can be recorded as a measure of team efficiency. The team can then collectively choose to work on whatever the next highest priority item is in the Backlog.

The cool part is that we can use simple tools like Trello, which offers a customisable Kanban board, as opposed to heavyweight tools like Jira. Traditionally we would have something like 'To Do', 'In Progress' and 'Done' columns. We normally break 'In Progress' into some workflow states, but you get the general picture of the simplicity offered here.

We now limit the number of items in each column of the Kanban board. This is critical to avoid multi-tasking and context switching, which are highly inefficient. The Kanban board gives us a visualisation of the team's workflow, and it keeps the team prioritised without a Scrum Master role.

Seeing as we can release a feature whenever we want, we tie into Continuous Delivery much more closely. We also get much faster feedback from the customer, which allows us to roll back badly thought out features and correct course rapidly without destroying the product.

There are also some very nice Kanban visualisation techniques for Feature Cycle Time and Accumulative Status.

My first issue is how the team democratically chooses the next item to work on. The second issue is that without Sprints there is no guarantee of what will actually be delivered at any stage of progress.  The third issue is that there is an even stronger notion of unmanaged teams, without a Product Owner or Scrum Master to offer a helping hand if things go south.

The solution, in my view, is the third option, which is Scrumban. This takes Scrum, with its fixed Sprint sizes, Product Owner and Scrum Master, but adds in a few concepts from Kanban:

-Kanban Feature Cycle Time
-Kanban Feature Limits

If I were to personalise this further into "Peter's Real World Scrumban" then I would add the following:

-Remove the Scrum Master role and use Feature Limits to stop context switching if cost is an issue
-Make sure you have a 100% Product Owner role
-Ignore Lean advice about not planning for the future
-Use internal or external mentors for team learning; don't expect this to happen magically overnight
-Use both Sprint Burndown Charts and Kanban Cycle Time Graphs
-Add in some of the best Extreme Programming practises like TDD and Continuous Delivery
-Avoid Continuous Deployment unless you are in a Start Up that can afford to Fail Fast
-Embrace some of the BDD features like Gherkin, Three Amigos and Automated Acceptance Tests
-Add in some of Lean Startup's best practises like MVP, A/B testing and Pivots
-If a process isn't working for you then fix the process before you try to fix the people
-It isn't an Agile project just because somebody decides to call it that. Embrace the Agile values rather than a few bullet points

Special thanks to the amazing Abby Fichtner for her excellent website at hackerchick.com that gave me a lot of inspiration for this debate.

Good luck and I hope this helps you make the right choice in your Agile adventures.

************************************************************************
Kevin Benedict
Writer, Speaker, Analyst and World Traveler
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the LinkedIn Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

Monday, November 02, 2015

Cutting Through Chaos in the Age of Mobile Me - New Report

Supporting real-time enterprise mobility that is personalized and contextually relevant takes a lot of work. In fact, it takes digital transformation. We have all grown accustomed to personal consumer apps (think airline apps and Netflix) that know and understand us and our preferences, and that provide contextually relevant content. Today, we expect the same from all of our apps, both consumer and enterprise.

Download the full report, "Cutting Through Chaos in the Age of Mobile Me," here.

Ninety percent of mobile users highly value personalized mobile experiences. In order to deliver these experiences, one must have real-time data collection, analytics, personalization engines and mobile applications capable of supporting real-time personalization. One must also have an operational tempo within IT systems and business processes capable of supporting real-time interactions. These capabilities make possible innovative new business processes that provide significant competitive advantages for businesses that embrace them.

Delivering a personalized experience, however, requires data, and lots of it. We have identified three key information-rich sources of this data, which we call 3D-Me data sources:

  1. Digital – online activities, preferences, sentiment and profiles
  2. Physical – data collected from IoT sensors (on vehicles, buildings, equipment, wearables, smartphones, etc.)
  3. Personal – user preferences, roles, jobs, skills, locations, etc.
3D-Me data sources enable enterprises to collect the right data to gain an understanding of real-time activities, and insights into the needs of their users. One of the key ingredients of a 3D-Me data source strategy is that users must agree to share personal data in exchange for value. This requires a new kind of enterprise/user relationship that we call an MME Data Partnership.

Personalized experiences are not the whole story. End users want contextually relevant personalization. Personalization becomes relevant when you add time, context and location to it. Sending me an SMS alert that my local coffee shop is offering my favorite hot drink at a 50% discount for the next 45 minutes is not relevant if I am on the other side of the country. Relevant personalization requires the use of data triggers that identify contextually relevant opportunities, moments and environments (CROME). CROME triggers are bits of data that provide context, which can be used to provide relevant personalization at a specific time and place. Think geo-fencing jobsites.
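
As a purely hypothetical illustration (names and thresholds are invented, not taken from the report), a CROME-style relevance check for the coffee shop offer above might combine a geo-fence test with the offer's expiry time:

  function haversineKm(lat1, lon1, lat2, lon2) {              // great-circle distance in km
    var toRad = function (d) { return d * Math.PI / 180; };
    var dLat = toRad(lat2 - lat1), dLon = toRad(lon2 - lon1);
    var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) * Math.sin(dLon / 2);
    return 2 * 6371 * Math.asin(Math.sqrt(a));
  }

  function isOfferRelevant(user, offer, now) {
    var withinFence = haversineKm(user.lat, user.lon, offer.lat, offer.lon) <= offer.radiusKm;
    var stillValid = now < offer.expiresAt;
    return withinFence && stillValid && user.favoriteDrink === offer.drink;
  }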

These CROME triggers provide the data that, when analyzed, understood and integrated with relevant personalization engines, can optimize the user's experience and productivity on the job.

CROME triggers can automatically deliver the right content at the right time. They can be connected to tasks, jobs, timesheets, etc. There are at least six tasks/challenges when implementing a CROME strategy:
  • Identify the required CROME triggers
  • Understand the meaning of each CROME trigger
  • Understand where and how CROME triggers can be placed, collected and transmitted
  • Monitor and analyze CROME triggers in real-time
  • Connect specific CROME triggers to specific personalization options and business value
  • Provide CROME powered personalization in mobile experiences
CROME triggers signal that something different, and perhaps significant, is happening. Finding the meaning, and then relating it to a particular personalization task or action, follows.

The implementation of 3D-Me enabled data and personalization strategies and CROME triggers, all supported by IT systems and business processes running at real-time operational tempos, will help companies deliver on the highest expectations of mobile users today and tomorrow.

Download the full report here http://www.cognizant.com/InsightsWhitepapers/Cutting-Through-Chaos-in-the-Age-of-Mobile-Me-codex1579.pdf.
************************************************************************
Kevin Benedict
Writer, Speaker, Analyst and World Traveler
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the LinkedIn Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

Wednesday, October 21, 2015

Mobile Expert Interviews: Dan Bricklin, Co-Developer of the First "Killer App"

I am excited to share an interview I conducted yesterday in Boston with a member of software programming royalty, Dan Bricklin.  Dan was the co-developer of the world's first software "killer app," VisiCalc.  VisiCalc, a spreadsheet app for the Apple II series of personal computers, was so popular in the 1980s that companies spent thousands of dollars on computers just to run the $100 software program.  Dan worked closely with Steve Jobs, Bill Gates and many others in the early years of personal computers.  His life is outlined here on Wikipedia, https://en.wikipedia.org/wiki/Dan_Bricklin.

Dan still programs and designs productivity apps.  He is the CTO of Alpha Software, the developers of sophisticated digital forms for mobile devices.

Dan has received many honors for his contributions to the computer industry from the ACM, IEEE, MIT, PC Magazine, the Western Society of Engineers, and others. In 1981, he was given the Grace Murray Hopper Award for VisiCalc.  In 1996, the IEEE Computer Society gave Bricklin the Computer Entrepreneur Award for pioneering the development and commercialization of the spreadsheet and the profound changes it fostered in business and industry.  In 2003, Bricklin was given the Wharton Infosys Business Transformation Award for being a technology change leader; he was recognized for having used information technology in an industry-transforming way. He has received an Honorary Doctor of Humane Letters from Newbury College.  In 2004, he was made a Fellow of the Computer History Museum "for advancing the utility of personal computers by developing the VisiCalc electronic spreadsheet." Bricklin appeared in the 1996 documentary Triumph of the Nerds, as well as the 2005 documentary Aardvark'd: 12 Weeks with Geeks, in both cases discussing the development of VisiCalc. His book, Bricklin on Technology, was published by Wiley in May 2009.

Video Link: https://youtu.be/ucDlFmrHfpk
************************************************************************
Kevin Benedict
Writer, Speaker, Analyst and World Traveler
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the LinkedIn Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

Friday, October 16, 2015

Code Halos and Rethinking Data Analytics - Spark is HOT!

Technical Guru
Peter Rogers
Our resident mobile technology and digital guru, Peter Rogers, introduces us to his latest research and findings.  If understanding how to incorporate big data analytics in mobile and IoT apps is not your thing, go no further.  Seriously, go NO further.  Peter gets excited about...actually I have no idea.  He digs deep into Spark - an open source big data processing framework built around speed, ease of use and sophisticated analysis, with support for modern programming languages.  Enjoy!
********

Three years ago, a professor told me Spark was the future, but I never really thought to look into it until now.  I am amazed at how powerful it is!  If you look at the top three books on Safari Books Online today, you will see Spark, JavaScript and Micro-Services.  I am not the only one who is impressed!

In a Digital World the currency is data, but that currency requires an extensive processing capability. It is not just IoT and its associated world of sensors that are propagating tidal waves of data; analytics is also being built into the majority of software, and our whole lifestyles are being set against a backdrop of continual statistical analysis, predictive analytics and artificial intelligence. This is why the single most read technical book at the moment is on Spark, the future of data processing.

Unfortunately, before I can even introduce Spark I have to give you a Big Data 101, and this will either be the most painful or the most incredible (free) guide you will find on the subject. I hereby thank the referenced blogs in advance.

We have to start at a place called Functional Programming. You may have heard of the following functional programming languages: Haskell; Erlang; Elixir; Lisp; D; R; Scala; Wolfram; Standard ML; and Clojure [Kevin B: No, sorry, missed that class]. You would probably be more surprised to know that CoffeeScript, the later versions of JavaScript and Underscore.js have aspects of functional programming too.

If this strikes fear into your heart then do not panic, as we have an easy path. Even though functional programming has its roots in Lambda Calculus, you really do not need to know anything about that. I am going to teach you everything you need to know, but you will have to do some reading, and there is no better place to start than this guide:

http://www.smashingmagazine.com/2014/07/dont-be-scared-of-functional-programming/

The purpose of functional programming is to break our programs down into smaller and simpler units that are more reliable and easy to understand. This is the perfect paradigm for being able to distribute tasks across a network of dedicated computers all working together to solve the task faster.

The core concepts of functional programming are as follows:

  • Functional programs are immutable – nothing can change
  • Functional programs are stateless – there is an ignorance of the past
  • Functions are first class and higher order – functions can be passed in as arguments to other functions, can be defined in a single line and are pure (have no side effects)
  • Iteration is performed by recursion as opposed to loops
  • Referential transparency – there are no assignment statements, as the value of a variable never changes once defined
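
To make these concepts concrete, here is a tiny JavaScript sketch (my own illustration, not from the guide above; the names are invented):

// Nothing changes once it is defined (immutability / referential transparency).
var scores = Object.freeze([3, 1, 4]);

// A pure, single-line function: the same input always gives the same output, with no side effects.
var double = function (n) { return n * 2; };

// Functions are first class: double is passed as an argument to the higher-order map().
var doubledScores = scores.map(double);            // [6, 2, 8]

// Iteration by recursion rather than a loop.
var countItems = function (arr) {
  return arr.length === 0 ? 0 : 1 + countItems(arr.slice(1));
};

console.log(doubledScores, countItems(scores));    // [6, 2, 8] 3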

We generally start by defining functions that perform all the grunt work and these follow the general rules:

  • All functions must accept at least one argument
  • All functions must return data or another function
  • Use recursion instead of loops
  • Use single line (arrow) functions where utility methods require a function as a parameter
  • Chain your functions together to achieve the final result
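
The last two rules are easy to see in a single line (again my own illustrative snippet, using the arrow syntax from later versions of JavaScript):

// Chain single-line arrow functions: square each number, then sum the results.
var sumOfSquares = [1, 2, 3].map(n => n * n).reduce((a, b) => a + b, 0);   // 14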

As an example, let us consider a 3D visualisation that takes an API response from a Mensa API Web Service and visualises it to show average IQs against towns. We start by defining a set of utility functions that are combined to do the grunt work, and we will end up with a single line of code that obtains the coordinates for drawing the graph.

  • A simple function that adds two numbers together
  • A recursive function that adds everything up in a numerical array
  • A simple function that takes an average given a total and a count
  • A function that builds on the last to take the average for a whole array
  • A function that returns a function that finds an item in a data structure
  • A recursive function that combines arrays

With this in place we can start to look at the built-in methods that often exist in seemingly non-functional languages. In JavaScript, we can find the map() and reduce() methods on the Array object. These respectively use a processing function to convert a set of values into another set of values, and to reduce an array to a single value.  Now we can use our freshly defined utility functions along with the Array methods to create a single line of code that retrieves the coordinates - of course we would probably be using Scala here - but WOW!

var pts = combArr( find(data, 'IQ').map(averageForArray), find(data, 'town') );
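
The article does not spell out those utility functions, so here is a sketch of what they might look like. These implementations are my own assumptions, purely for illustration, and they assume the API response is an array of records, each holding a 'town' name and an array of 'IQ' values:

// A simple function that adds two numbers together.
var add = function (a, b) { return a + b; };

// A recursive function that adds everything up in a numerical array.
var sumArray = function (arr) {
  return arr.length === 0 ? 0 : add(arr[0], sumArray(arr.slice(1)));
};

// A simple function that takes an average given a total and a count.
var average = function (total, count) { return total / count; };

// A function that builds on the last to take the average for a whole array.
var averageForArray = function (arr) { return average(sumArray(arr), arr.length); };

// A function that returns a function that finds an item (by key) in a record.
var findIn = function (key) {
  return function (record) { return record[key]; };
};
var find = function (data, key) { return data.map(findIn(key)); };

// A recursive function that combines two arrays into pairs.
var combArr = function (a, b) {
  return a.length === 0 ? [] : [[a[0], b[0]]].concat(combArr(a.slice(1), b.slice(1)));
};

With records shaped like { town: "Cambridge", IQ: [130, 128, 135] }, the one-liner above returns pairs of (average IQ, town), ready for plotting.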

See? Functional programming was not that hard after all. Now we come to MapReduce, which is "a programming model and an associated implementation for processing and generating large data sets with a parallel, distributed algorithm on a cluster." Google famously used MapReduce to regenerate its index of the World Wide Web, but has since moved on to streaming solutions (Percolator, Flume and MillWheel) for live results.

https://en.wikipedia.org/wiki/MapReduce

A lot of people regularly mention MapReduce in conversation, but few actually know what it is or how it works. Luckily for you, I will provide a few guides to get you up and running. It helps to remember that the map() function is used for converting a set of values into a new set of values, and the reduce() function is used to combine a set of values into a single value. Now imagine you can split all of your tasks into map() and reduce() functions that run across a distributed network of computers. Each worker node runs the map() function against its local data and writes the output to temporary storage. Next comes a shuffle, where the worker nodes redistribute data based on the output of the map() function, so that all data belonging to one key ends up on the same worker node. Each worker node then runs the reduce() function in parallel and a combined result is returned.

https://hadoop.apache.org/docs/r1.2.1/mapred_tutorial.html

In a nutshell, we take the map() and reduce() functions from functional programming and apply them in a five-step distributed program context (sketched in code after the steps below):

  1. We prepare the map() input by designating map processors and provide the processor with all the input data 
  2. We run the custom map() code exactly once for each input value
  3. We shuffle the map() output to the reduce processors
  4. We run the custom reduce() code exactly once for each value from the map() output
  5. We collect all the reduce() output, sort it and return it as the final outcome
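
To make the shape of the model concrete, here is a toy, single-process JavaScript simulation of the map → shuffle → reduce flow (a sketch of the idea only, not real Hadoop or any distributed runtime):

var docs = ['to be or not to be', 'to see or not to see'];

// Map: each "worker" emits (word, 1) pairs for its local document.
var mapped = docs.map(function (doc) {
  return doc.split(' ').map(function (word) { return [word, 1]; });
});

// Shuffle: group every emitted pair by key so all data for one key lands in one place.
var shuffled = {};
mapped.forEach(function (pairs) {
  pairs.forEach(function (pair) {
    shuffled[pair[0]] = (shuffled[pair[0]] || []).concat(pair[1]);
  });
});

// Reduce: combine the values for each key into a single result, then collect and print.
var counts = Object.keys(shuffled).map(function (key) {
  return [key, shuffled[key].reduce(function (a, b) { return a + b; }, 0)];
});
console.log(counts);   // [["to", 4], ["be", 2], ["or", 2], ["not", 2], ["see", 2]]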

This has worked fine in the past, but the problem is that the shuffling takes time due to reading and writing to the file system, along with HTTP network connections pulling in remote data. It is also a sequential batch processing system, so it cannot handle live data, which is why Google moved on some time ago.

Finally we arrive at Spark, which is an open source big data processing framework built around speed, ease of use and sophisticated analysis, with support for modern programming languages like Java, Scala and Python.

http://www.infoq.com/articles/apache-spark-introduction

Spark has the following advantages over Hadoop and Storm:

  1. It offers a comprehensive framework for big data processing with a variety of diverse data sets
  2. It offers real-time streaming data support as well as batch processing
  3. It enables applications in Hadoop clusters to run up to 100 times faster in memory and 10 times faster even when running on disk
  4. It supports Java, Scala and Python, and comes with a built in set of high level operations in a concise API for quickly creating applications
  5. In addition to Map and Reduce operations, it supports SQL queries, streaming data, machine learning and graph data processing.

Excited yet? Well, beyond the core API there are a number of additional libraries:

  • Spark Streaming for processing real-time data
  • Spark SQL exposes Spark datasets over JDBC and runs SQL-like queries
  • Spark MLlib offers common learning algorithms for Machine Learning
  • Spark GraphX offers graphs and graph-parallel computation
  • BlinkDB allows running interactive SQL queries on large volumes of data
  • Tachyon offers a memory-centric distributed file system enabling reliable file sharing at memory-speed across cluster frameworks
  • Integration adaptors connect other products like Cassandra and Kafka

Now if I told you that the following two lines of code could print out the word count of a file of theoretically any size imaginable, you would probably be left scratching your head at how the code can be so short, and then immediately ask for a programming guide to this wizardry.

// t is assumed to be an RDD of lines, e.g. val t = sc.textFile("somefile.txt")
val wcData = t.flatMap(l => l.split(" ")).map(w => (w, 1)).reduceByKey(_ + _)
wcData.collect().foreach(println)

http://spark.apache.org/docs/latest/programming-guide.html

************************************************************************
Kevin Benedict
Writer, Speaker, Analyst and World Traveler
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the LinkedIn Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

Monday, October 12, 2015

Mobile App Development Strategies for Cross Platform Developers

Cognizant's mobile app development expert and guru Peter Rogers shares his latest insights on app development strategies.  Enjoy!
********
In previous blogs I have discussed the diversification of iOS 9 and Android 6 and predicted a move back to platform-specific coding or HTML 5. The corollary is that you end up with two or three development teams: Objective-C (or Swift); Android; and potentially Windows 10.

The challenge therefore is how you manage to keep the applications aligned with the original use cases, functional (and non-functional) requirements and class design models within these technologically diverse areas.

I think there are three different solutions to this problem:

  1. Apply the same rules for each development team across technologies and platforms. That is, the developer does the unit testing and is accountable for the code coverage score, with regular reporting.
  2. Look to a reusable component model, but carefully managed at a program management layer across different projects. I recommend some dedicated tool to help manage the components.
  3. Cross-train resources in both iOS and Android so they at least have an appreciation of each technology.

It is the last point that I decided to test through a unique and experimental training program in Spain. I decided to teach both Objective-C and Android to our mobile developers, who were originally either iOS or Android specialists. This way, I figured, with an appreciation of both technologies the development teams could work more closely together.  It is a controversial proposal, because Technical Architects are normally SMEs (subject matter experts) in one particular technology. This is where it all breaks down in the mobile space, and how we end up with 5-star iOS apps and 1-star Android apps.

By teaching two competencies rather than one, you elevate the associates to somewhere between Technical and Solution Architect. Whilst still hands-on technical architects in their field (iOS, Android or Windows), they also gain a slightly bigger-picture appreciation of the mobile world around them.

My experiment seemed to work. We were able to build iOS and Android Apps, nobody had a mental breakdown, and everybody seemed happy. I think this works because the technologies are so distinct. It will be very interesting to see how they get on in the future, and if this leads to much better App Store ratings for both sets of Apps.

If you can get the developers and the designers talking as well, then you will really have something magical...
************************************************************************
Kevin Benedict
Mobile Technology and Business Writer, Speaker, Analyst and World Traveler
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the LinkedIn Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

Wednesday, October 07, 2015

Strategies for Combining IoT, Mobility, AI, CROME and 3D-Me

None of us like slow mobile applications or those that ask us stupid questions. Our time has value. Google reports that 82% of smartphone owners research and compare prices in stores, and we don't want to be standing in the aisle answering questions the mobile app and vendor should already know the answers to. We want our apps to recognize us and the context, and to understand our needs. We want real-time mobile applications connected to mobile commerce vendors running at real-time operational tempos.

In addition to speed, 90% of 18-34 year olds strongly value personalization in their mobile applications. Personalization comes in at least two forms: latent and real-time. Latent personalization lies dormant, waiting for an application to be launched, and then applies a stored personalized content profile. Real-time personalization, however, means dynamic real-time data, consisting of digital, physical and personal (3D-Me) data, is always being collected and combined with CROME triggers (real-time contextually relevant opportunities, moments and environments) to instantly provide a personalized experience that is relevant now! For example, a security gate automatically opens because it is integrated with a mobile application that geo-fences it. When you are 100 meters away, the app notifies the security system to open your front security gate, raise the garage door, turn on the inside and outside lights, deactivate the home security system and notify your family members that you are home.  An AI algorithm understands the real-time meaning and context of the data it is receiving.

Real-time data collected via GPS on your smartphone automatically triggered a real-time, relevant event using real-time artificial intelligence algorithms. Combining real-time 3D-Me data, CROME triggers and artificial intelligence with smart devices connected to the Internet of Things (IoT) means more and more of your daily activities and behaviors can be understood and digital conveniences developed.
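
Stripped of the AI layer, the core geo-fence trigger is simple to sketch. The JavaScript below is purely illustrative: the 100-meter radius comes from the scenario above, while the coordinates, function names and the list of actions stand in for whatever APIs the phone's location service and the home's security and IoT vendors actually expose.

// Hypothetical home coordinates and the geo-fence radius from the scenario above.
var HOME = { lat: 47.6205, lon: -122.3493 };
var RADIUS_METERS = 100;

// Haversine distance between two lat/lon points, in meters.
function distanceMeters(a, b) {
  var R = 6371000;
  var toRad = function (d) { return d * Math.PI / 180; };
  var dLat = toRad(b.lat - a.lat);
  var dLon = toRad(b.lon - a.lon);
  var h = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Called on every GPS update from the smartphone; fires the home-arrival actions once inside the fence.
function onLocationUpdate(position, home) {
  if (distanceMeters(position, home) <= RADIUS_METERS) {
    ['open security gate', 'raise garage door', 'turn on lights',
     'disarm security system', 'notify family'].forEach(function (action) {
      console.log('Trigger:', action);   // placeholder for the real vendor API call
    });
  }
}

// Example GPS reading roughly 45 meters north of home: all five actions fire.
onLocationUpdate({ lat: 47.6209, lon: -122.3493 }, HOME);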

The scenario above requires an intimate understanding of the customer, their security systems, smart devices, passwords, locations and behaviors.  I predict that soon consumer scenarios will justify extending enterprise mobile security systems out to consumers.  This means enterprise mobile security vendors may soon expand beyond the enterprise into the integrated consumer mobility/IoT/AI markets as the entire integrated system needs to be secured.

************************************************************************
Kevin Benedict
Mobile Technology and Business Writer, Speaker, Analyst and World Traveler
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the LinkedIn Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

Monday, October 05, 2015

Code Halos for Mobile Apps, iOS 9 Universal Links and Search

One of the new features in iOS 9 is Universal Links.  What is the significance, and what does it mean for mobile commerce and mobile apps in general?  Our resident mobile expert, Peter Rogers, shares the details with us here.  Enjoy!

********
Mobile & Digital Expert
Peter Rogers

Universal Links in iOS 9 are used for deep linking between applications (which is great for user experience), and in particular for deep linking from websites to mobile apps. This is quite similar to the notion of "Intents" in Android, but it extends the concept with a strong security model.

Apple has deemphasized the idea of "app searching" and replaced it with the more secure concept of Universal Links between Apps and Websites.

Search in iOS 9 effectively tries to rival Google, offering a holistic search that covers both inside your app and outside your app on the Internet. This is delivered through Spotlight and Safari search results, Handoff and Siri suggestions. You can decide what content gets indexed, what information to show in search results, which mobile apps show your search results, and where users are seamlessly transported after tapping a Universal Link.

There are private search indexes on-device, and an Apple server-side index for publicly available data. A CloudKit JS library is also offered to allow communication between the app and its cloud-based datastore. You can make your content searchable through NSUserActivity, the Core Spotlight framework, Web Markup and Universal Links.

If we look at what this actually represents, you can see that each mobile app has a Code Halo of data surrounding it. Code Halos are the data that accompany people, organizations, mobile apps and devices.  A mobile app in iOS 9 may be singular in its purpose and separated from other mobile apps, but its sphere of influence is extended out through the Internet for the purposes of data exchange and searchability.

Suddenly the most valuable currency is data, be it app-specific data or supporting website data. The Code Halo that surrounds the mobile app can radiate out from the app to the far reaches of the Internet.

************************************************************************
Kevin Benedict
Mobile Technology and Business Writer, Speaker, Analyst and World Traveler
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the LinkedIn Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

Thursday, September 24, 2015

Mobile Expert Interviews: VMware's Sanjay Poonen, PT 2

In Part 2 (watch Part 1 here) of my interview with VMware's Sanjay Poonen, we discuss VMware's strategies toward the enterprise mobility market, recent announcements and plans going forward.  In addition, Sanjay announces the new AirWatch-led Mobile Security Alliance with 10 initial members. This alliance supports customers seeking to mitigate the growing mobile threat landscape by providing advanced security solutions. Charter AirWatch Mobile Security Alliance members include Palo Alto Networks, Check Point, FireEye, Appthority, Lookout, Pradeo, Proofpoint, Skycure, Veracode and Zimperium.

Also, SAP and VMware plan to integrate the ACE (App Configuration for the Enterprise) approach to enable secure, instant deployment and login of SAP's SuccessFactors and Concur mobile applications on iOS and Android devices. Enjoy!

Video Link: https://youtu.be/JPptgrVmGTY

************************************************************************
Kevin Benedict
Writer, Speaker, Senior Analyst
The Center for the Future of Work, Cognizant
View my profile on LinkedIn
Read more at Future of Work
Learn about mobile strategies at MobileEnterpriseStrategies.com
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the LinkedIn Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.