
How Good is Your Mind at Predicting?

My friend Peter Rogers, who lives in the UK, got Brexit wrong but correctly predicted that Donald Trump would win.  How did he get one wrong and the other right?  Read about his experiences here.

Guest Blogger - Peter Rogers
Peter Rogers Predicted Donald Trump

I always thought I was particularly good at prediction, having worked as a technologist most of my life, but my world was turned upside down after Brexit. It took me a long time to work out why I got Brexit so wrong, but eventually I brushed myself off and started to read a lot of material on Super-Forecasters.

I learned I had been misleading myself for many years.  I thought I was good at non-technical decision-making. I recall looking at the Ladbrokes Swingometer for Brexit and being so sure of a "remain" vote that I was going to place a large bet.  I was, however, wrong. I made the classic mistake of polluting the decision-maker mindset.

In order to forecast accurately I needed to consider a wide range of diverse opinions without being overly drawn to any one particular source. This of course, is where social media makes fools of us all. We are typically drawn to a small group of close friends for inspiration, and these friends typically share our opinions.  People rarely fact check on social media. We also read newspapers, which have an increasingly political bias, and a high percentage of us fail to fact check.

I decided if I was going to truly escape from newspapers and social media bias, then I was going to have to train myself to be able to forecast independently. As a first step, I built a website that enabled me to place forecasts and to track whether I was right or wrong. I added a scoring system so there was feedback for my predictions.  This was important as most people don't keep track of their predictions and the results.

Every day I made forecasts on politics, sports, weather, finances, entertainment, and just about anything else I could think of.  I thought anybody could make correct guesses in their own field of expertise, but how many people can make correct predictions outside of it?  Even that prediction was wrong!  In fact, it turns out that Subject Matter Experts (SMEs) are bad forecasters even in their own field!

I learned there are two parts to being a good forecaster:

  1. A good gut feel
  2. Being able to show your "thought process": how you worked through an "Outside Model" that is then refined by an "Inside Model".

I started out remarkably bad at forecasting. I soon learned to differentiate the things I wasn't so sure of, which I marked at a lower percentage, from those I was quite sure about, which I placed at a higher percentage. I also began regularly adjusting my predictions as new evidence became available. It actually started to feel a lot like betting, because I used a simple gamification hook with an avatar who gets weaker or stronger depending on my average score.
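
For the curious, a scoring system like this can be as simple as a Brier score, which penalises you by the squared distance between the probability you stated and what actually happened. A minimal sketch in JavaScript (the forecasts below are made-up examples, and my site's exact scoring may differ):

    // Brier score: mean of (probability - outcome)^2, lower is better.
    // probability is the chance you gave the event (0..1);
    // outcome is 1 if it happened, 0 if it did not.
    function brierScore(forecasts) {
      var total = forecasts.reduce(function (sum, f) {
        return sum + Math.pow(f.probability - f.outcome, 2);
      }, 0);
      return total / forecasts.length;
    }

    // Made-up examples: a confident correct call, a hedged miss, an overconfident miss.
    var score = brierScore([
      { probability: 0.95, outcome: 1 },
      { probability: 0.60, outcome: 0 },
      { probability: 0.90, outcome: 0 }
    ]);
    console.log(score.toFixed(3)); // 0.391 - closer to 0 means better calibrated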

The bottom line: after 50 bets, I was able to predict with 95% confidence, many months before the actual election, that Donald Trump would be the next President.

My goal now is to help other people improve their predictive powers as I did. The system still needs a lot of work.  Today it helps people improve their gut instinct, which is an improvement: I went from 25% accuracy to 75% in just three months.  My plan now is to roll the system out to the general public as a beta.  You can register by simply emailing peterzrogers@hotmail.com, and I will send you the website address and a secure login token.

I am also very interested in talking to people who would like to take the system forward because I strongly believe that digital systems to enhance forecasting are in demand.

************************************************************************
Kevin Benedict
Senior Analyst, Center for the Future of Work, Cognizant
Writer, Speaker and World Traveler
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the Linkedin Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

Mobile Technologies Revealed: Web and Native App Development Strategies

Our resident Cognizant mobile and digital technology guru, Peter Rogers, shares his insights into web and native app development strategies in this guest post.  Enjoy!
********
Peter Rogers
I often meet customers who want to transition web developers into mobile application developers. Apple has clearly tried to address this market with Swift, but that does not offer a cross-platform solution. Developers who have come through this transition will traditionally wrap the latest and greatest web framework (like Angular 2 or React) using Apache Cordova through initiatives like Ionic. However great the latest web frameworks are, though, they can never compete with pure native mobile user interfaces powered by dedicated hardware acceleration. It may be a simple solution, but the net result is never going to present the best possible user experience, and there will always be problems with Apple App Store submission and with changes to WebView technologies designed to gently nudge developers towards pure native Apps.

Appcelerator Titanium has long offered an excellent solution in this space, but the one downside is the lack of a combined desktop and mobile solution.

Recently three new exciting initiatives arrived to offer new Titanium-like solutions in this space:

  1. React Native (http://www.reactnative.com/)
  2. Angular 2 Native Apps (https://www.youtube.com/watch?v=4SbiiyRSIwo)
  3. NativeScript (https://www.nativescript.org/)

The benefit of the first two is that the technology can be shared effectively across both mobile and desktop. There is no need to learn a new framework. For web developers who are trained in Angular 2 or React, this is a very attractive route into mobile development without having to go anywhere near Cordova. In fact, in most cases all you have to do is swap out the final Cordova wrapping process for a dedicated web-native development phase, which means you don't have to throw anything away.

How does this magic work? Well, advanced web developers have already started to mix Angular and React: using the big-framework quality of Angular and the high-speed rendering of React. This architecture is made even simpler with Angular 2, in which there is platform-agnostic template parsing and platform-specific rendering. This makes it possible to plug in React Native or NativeScript as the underlying rendering engine. It offers a future in which Angular 2 can create cross-platform desktop or cross-platform mobile applications, allowing you to choose your programming language (ECMAScript 5.1, ECMAScript 2015, TypeScript, Dart or CoffeeScript) and your platform-specific rendering engine (React Native, NativeScript, Angular 1, Angular 2 or React). For those who wrote off Angular 2 due to its radical design changes, that decision is suddenly looking incredibly hasty, for the new architecture is nothing short of genius.

If you watch the Angular 2 Native App video then you will see the focus is on NativeScript. The question is: why not consider Titanium or React Native? Whilst that is perfectly possible, given the plug-and-play nature of the new Angular 2 rendering engine, there is a clear advantage offered by NativeScript. To understand this advantage we need to take a slight diversion into the Hybrid App world. As you may recall, there are three main models for Hybrid Apps: wrapped web; runtime interpreters; and cross-compilers. If we start with cross-compilers then we find Xamarin ruling the roost, but I would not call it a Rapid Application Development approach. You trade a slightly longer development time and a more difficult programming language for better performance. The interesting thing with Xamarin is the 100% API coverage available within a few days. There are also a few HTML 5 canvas cross-compilers, like those found in Intel XDK, but these are specific to Canvas technology, which works better for the specific use case of widgets and games. We all know the most popular wrapped-web solution is Cordova, with another notable entry being IBM Worklight.

Runtime Interpreter solutions do not quite offer the performance of a cross-compiler, but they do offer support for rapid application development through JavaScript. Appcelerator Titanium is the most popular Runtime Interpreter solution and has long teased a cross-compiler solution called HyperLoop, but it is offered in a restricted capacity. I am a huge fan of Titanium and have used it a lot for various customers. I was really looking forward to HyperLoop, but looking at the software repository it seems to have slowed to a halt. The only downside of Titanium is the lack of 100% API coverage, but this is a limitation shared with most other portable native solutions, Xamarin and NativeScript being the notable alternatives. In the case of Xamarin the API wiring has to be performed by hand, whereas in NativeScript it is automatic.

So what is the magic of the Runtime Interpreter solution powering Titanium, Kony, React Native and NativeScript? Well, Telerik (who created NativeScript) provide quite possibly the best explanation I have ever read online (http://developer.telerik.com/featured/nativescript-works/). In a nutshell, the two core JavaScript engines that power iOS (JavaScriptCore) and Android (V8) both expose a very advanced set of APIs that power the JavaScript bridge (http://izs.me/v8-docs/namespacev8.html):

  • Inject new objects into the global namespace
  • JavaScript function callbacks
  • JNI to talk with the C layer on Android

NativeScript offers the following explanation of how it uses these APIs in order to build the JavaScript bridge:

  1. Metadata is injected into the global namespace at build time.
  2. The V8/JavaScriptCore function callback runs.
  3. The NativeScript runtime uses its metadata to know that the JavaScript function call means it needs to instantiate an Android/iOS native object.
  4. The NativeScript runtime uses the JNI to instantiate an Android object and keeps a reference to it (iOS can talk directly to the C layer).
  5. The NativeScript runtime returns a JavaScript object that proxies the Android/iOS object.
  6. Control returns to JavaScript, where the proxy object gets stored as a local variable.

This is probably quite similar for most of the other vendors, but the additional step that NativeScript adds is the ability to dynamically build the API set at build time using Reflection (introspection). Because generating this data is non-trivial from a performance perspective, NativeScript does it ahead of time and embeds the pre-generated metadata during the Android/iOS build step. This is why NativeScript can offer 100% API coverage immediately: it does not involve the manual step required in Xamarin. To be accurate, it is unlikely that NativeScript can offer 100% API coverage; rather, it offers all of the APIs that can be discovered through reflection. There is a subtle difference here, as those who have used reflection programmatically will appreciate.
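
To make that concrete, here is a minimal sketch of what steps 3 to 6 look like from the developer's side: plain JavaScript that appears to instantiate an Android class directly, while the NativeScript runtime does the JNI work and hands back a proxy object behind the scenes (the file path is just an example):

    // Inside a NativeScript app running on Android: the java/android namespaces
    // are available because the metadata was injected at build time.
    var file = new java.io.File("/sdcard/example.txt"); // runtime creates the real Java object over JNI
    console.log(file.exists());                         // the call is forwarded to the native object
    console.log(file.getAbsolutePath());

    // The same idea applies on iOS, where JavaScriptCore plus the metadata
    // exposes the Objective-C classes (e.g. UIAlertView) to JavaScript.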

NativeScript offers two different modes of operation:

  1. Use the low-level iOS and Android objects directly
  2. Use high-level abstraction APIs

The high-level abstraction APIs are provided as RequireJS modules and hide the platform split from you. If you were wiring this into Angular 2, you would probably have an Angular component which calls either a Browser Object or an NS Module, which itself talks to either an iOS proxy object or an Android proxy object through NativeScript. Of course, there is nothing to stop you having an Angular component that calls out to React Native, and that option is being explored as well.
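
As a rough illustration of the two modes, here is the same lookup done first against the raw Android API and then through a NativeScript cross-platform module (module and property names follow the NativeScript docs as I understand them, so treat them as indicative):

    // Mode 1: talk to the low-level platform API directly (Android flavour shown).
    console.log("Device model via raw Android API: " + android.os.Build.MODEL);

    // Mode 2: use a high-level cross-platform module, which hides the iOS/Android split.
    var platform = require("platform");
    console.log("Device model via NativeScript module: " + platform.device.model);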

This is not to say that NativeScript is better than React Native, Titanium or Xamarin. In fact I can see the main use case of NativeScript as being used inside of Angular 2 as its platform specific rendering solution. I can actually see more people using React Native as a standalone solution even though it is in a much earlier state. I can also see Titanium carrying on as one of the most popular mobile solutions on the market today. I can however see native mobile web applications becoming a hot new topic and a great place to transition web developers towards.

Download the latest mobile strategies research paper, "Cutting Through Chaos in the Age of Mobile Me," here http://www.cognizant.com/InsightsWhitepapers/Cutting-Through-Chaos-in-the-Age-of-Mobile-Me-codex1579.pdf
************************************************************************
Kevin Benedict
Writer, Speaker, Analyst and World Traveler
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the Linkedin Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

Beacon Essentials You Must Quickly Learn

Our resident Cognizant digital/mobile expert, Peter Rogers, asked me to recommend a digital strategies topic to share, and I suggested Beacons for this week.  I confess to reading about them daily without knowing much about them, so I want to thank Peter for this article!  Enjoy!
********
Digital & Mobile Expert
Peter Rogers

Let's start with a Basic Beacons 101 class:

  1. Beacons do not push out notifications. They broadcast an advertisement of themselves (traditionally their UUID, major and minor values) and can be detected by Bluetooth Low Energy (BLE) devices.
  2. The proximity from a number of Beacons can be measured using typical triangulation techniques in order to get a (very) rough idea of (typically) indoor location (a rough sketch of the maths appears after this list).
  3. The Beacon UUID, major and minor values are typically used for identification and map to a message, service, media content, website, application or location inside the Native App.
  4. Beacons can have their UUID, major and minor versions (and indeed power level) modified statically before deployment or dynamically using WiFi connectivity. A Beacon Management App is often provided by a Beacon Platform Vendor to allow you to manage these values dynamically.
  5. Updating the Beacon major and minor values can be used to update the identity of the Beacons and subsequently change what they map to inside the Native App. This does mean there is a security risk of somebody remotely hacking your Beacons and changing their values to take down or corrupt your service.
  6. iBeacon is Apple’s proprietary BLE profile but their patents seem to cover more than just the profile aspect. There were Beacons before iBeacons. Apple did not invent the Beacon. What they did is an incredibly good job of integrating Beacon support into iOS. iBeacon is not a piece of hardware. It is a BLE profile that is loaded onto a piece of hardware. This profile makes the Beacon an iBeacon.
  7. There are many Beacon vendors who offer various capabilities such as: BlueCats; BlueSense; Gelo; Kontakt.io; Glimworm; Sensorberg; Sonic Notify; beaconstac; mibeacon (Mubaloo); estimote; Gimbal (Qualcomm); Apple; and Google, etc. 
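
As promised in point 2, here is a minimal sketch of the kind of maths behind proximity estimation. It uses the common log-distance path-loss approximation (txPower is the calibrated RSSI at one metre that most Beacon vendors broadcast; the environmental exponent n is a tuning guess), so treat it as indicative rather than any vendor's actual algorithm:

    // Rough distance estimate (in metres) from a single beacon's signal strength.
    // rssi: measured signal strength in dBm; txPower: calibrated RSSI at 1 metre;
    // n: path-loss exponent, roughly 2 in free space and 2.5-4 indoors.
    function estimateDistance(rssi, txPower, n) {
      n = n || 2.5;
      return Math.pow(10, (txPower - rssi) / (10 * n));
    }

    // Example: a beacon calibrated at -59 dBm, currently measured at -75 dBm.
    console.log(estimateDistance(-75, -59).toFixed(1) + " m"); // roughly 4.4 m

    // With three or more beacons at known positions, these distances can be fed
    // into a standard trilateration / least-squares step to estimate indoor position.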

Beacon vendors offer various different capabilities, such as:

  • hardware
  • proprietary BLE Beacon profiles
  • support for popular profiles
  • remote Beacon management
  • analytics
  • associated content management
  • marketing campaigns
  • software version management
  • profile switching
  • client side SDKs
  • professional support services

Most do not offer the whole solution, and so it was interesting to see Apple and then Google throw their hats into the ring. Most people are still really excited about Apple's iBeacons, but these look like they will become a closed eco-system, which could possibly even mean being physically undetectable to non-Apple hardware.  Today Beacon vendors are simply not allowed to provide library-based support for iBeacons on Android hardware (http://beekn.net/2014/07/ibeacon-for-android/).

At the start of 2015 Google created a new form of Beacon called UriBeacon (http://uribeacon.io/) which could advertise a URL pointing to a website, or a URL that could be processed locally. This was in stark contrast to all the previous forms of Beacon, which could only advertise their identity (UUID, major, minor). UriBeacons also promised to be cheaper and easier to configure, largely because of their more limited use case of just advertising a URL/URI. The killer concept, however, was that of The Physical Web. The Physical Web is an approach to unleash the core superpower of the web: interaction on demand. People should be able to walk up to any smart device and not have to download an app first. A small pre-installed App (like the Web Browser or something at the Operating System level) on the phone scans for URLs that are nearby. The Physical Web initially used the UriBeacon format to find nearby URLs without requiring any centralized registrar.

This was a major breakthrough because having to download an App for each Beacon vendor completely breaks the organic, intelligent, evolutionary Smart City model. Notice that I used the words ‘without having to download an App’. You still need an App to process the UriBeacons, however, this can be built into the Web Browser (Chrome offers this for iOS) or the Operating System (Android M offers this). The following vendors offer UriBeacons: Blesh; BKON; iBliO; KST; and twocanoes, etc.  

Recently Google updated their single-use-case UriBeacon specification into Eddystone. Eddystone is an open-source, cross-platform beacon specification that supports broadcasting of UUID, URL, EIDs and Telemetry data. Previously Beacons had only supported UUID, until UriBeacons offered the single option of URL advertisement. Eddystone offers an additional two frame types: Ephemeral ID is an ID which changes frequently and is only available to an authorised app; Telemetry is data about the beacon or attached sensors, e.g. battery life, temperature, humidity. Unlike iBeacons, which must be approved by Apple, anyone can make an Eddystone-compatible beacon. Current beacon manufacturers include Estimote, Kontakt and Radius Networks, among others.

The Eddystone-URL frame broadcasts a URL using a compressed encoding format in order to fit more within the limited advertisement packet. Once decoded, the URL can be used by any client with access to the internet. For example, if an Eddystone-URL beacon were to broadcast a URL, then any client that received the packet and had an Internet connection could choose to visit that URL (probably over WiFi). You can use an App to manage that experience and either go directly to the URL or process a URI internally to perform some other function without network connectivity. Better still, The Physical Web initiative has moved away from UriBeacon to the open Eddystone initiative.
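
To show how compact that encoding is, here is a minimal sketch of a decoder for the Eddystone-URL frame, based on the published open specification (the scheme-prefix and expansion tables below come from that spec; the example payload is made up):

    // Decode an Eddystone-URL frame (the service data that follows the 0xFEAA UUID).
    // Byte 0: frame type (0x10 for URL); byte 1: TX power at 0 m (raw signed byte);
    // byte 2: URL scheme prefix; remaining bytes: the compressed URL.
    var SCHEMES = ["http://www.", "https://www.", "http://", "https://"];
    var EXPANSIONS = [".com/", ".org/", ".edu/", ".net/", ".info/", ".biz/", ".gov/",
                      ".com", ".org", ".edu", ".net", ".info", ".biz", ".gov"];

    function decodeEddystoneUrl(bytes) {
      if (bytes[0] !== 0x10) throw new Error("Not an Eddystone-URL frame");
      var url = SCHEMES[bytes[2]];
      for (var i = 3; i < bytes.length; i++) {
        var b = bytes[i];
        url += (b < EXPANSIONS.length) ? EXPANSIONS[b] : String.fromCharCode(b);
      }
      return { txPower: bytes[1], url: url };
    }

    // Example frame advertising https://goo.gl/abc123 (made-up payload for illustration).
    var frame = [0x10, 0xEB, 0x03].concat(
      "goo.gl/abc123".split("").map(function (c) { return c.charCodeAt(0); }));
    console.log(decodeEddystoneUrl(frame).url); // "https://goo.gl/abc123"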

Now one thing to realise is that Eddystone may support iOS but that obviously does not include integration with CoreLocation as per iBeacons. Eddystone beacons only interact with iOS devices via CoreBluetooth which means you have more work to do. Likewise, on Android M there are a whole bunch of new APIs and those will not be available on iOS.

  • The Nearby API makes it easy for apps to find and communicate with beacons to get specific information and context. Apparently it uses a combination of Bluetooth, Wi-Fi, and inaudible sound.
  • Nearby provides a proximity API called Nearby Messages in which iOS devices, Android devices and Beacons can discover, communicate and share data/content with each other.
  • The Proximity Beacon API helps developers manage data and content associated with Beacons. Once Beacons are registered with Google's Proximity Beacon API, we can map data and content to them that can be pulled from the Cloud using a REST interface. This makes Content Management Solutions much easier and gives us the ability to dynamically map content to Beacons. This functionality will most probably be supported in the Physical Web through Web Browser clients that expose this API through JavaScript.
  • Place Picker is an extension of the Places API that can show Beacons in your immediate vicinity. The Places API is also able to read and write Beacon positioning information (GPS coordinates, indoor floor level, etc.) from/to the Google Places database, using a unique Place ID based around the Beacon UUID, and then have the Beacons navigable through Google Maps. This would provide a much better retail solution, where customers could literally Google "Hair Shampoo" inside a Boots store and be taken directly to the product using indoor positioning.

I am sure you have many questions, such as: can a Beacon run iBeacon and Eddystone simultaneously? At the moment the Beacon vendors offer the ability to support both profiles, but not simultaneously; this is apparently due to battery usage. Most vendors do seem to support simultaneous broadcast of UUID, URL and Telemetry within Eddystone, though. For any other questions, here is a fantastic Q&A on Eddystone from Kontakt.io (http://kontakt.io/blog/eddystone-faq/).

The Physical Web has now moved away from UriBeacon and onto Eddystone-URL frames. A few months ago, Chrome for iOS added a Today widget. The new Chrome for iOS integrates the Physical Web into the Chrome Today widget, enabling users to access an on-demand list of web content that is relevant to their surroundings. The Physical Web displays content that is broadcasted using Eddystone-URL format. You can add your content to the Physical Web by simply configuring a beacon that supports Eddystone-URL to transmit your URL of choice. When users who have enabled the Physical Web open the Today view, the Chrome widget scans for broadcasted URLs and displays these results, using estimated proximity of the beacons to rank the content.

The Physical Web also supports finding URLs over WiFi using mDNS (and UPnP). The multicast Domain Name System (mDNS) resolves host names to IP addresses within small networks that do not include a local name server. It is a zero-configuration service, using essentially the same programming interfaces, packet formats and operating semantics as DNS. While designed by Stuart Cheshire to be stand-alone capable, it can work in concert with DNS servers. The mDNS protocol is implemented by Apple Bonjour and by the Linux nss-mdns service. In other words, rather than waiting for your client to discover a Beacon advertising a UUID or URL, you could actually start searching for local services hosted on Beacons using a multicast form of DNS. Beacons are actually more powerful than most people realise and can often run micro-services. In fact, if you think about it, Beacon-based services are the ultimate form of a micro-service architecture. Brillo is an upcoming Android-based operating system for IoT devices, and this lightweight OS could theoretically run on a Beacon, which would enable a portable way of deploying a Beacon-based micro-service architecture.
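
To give a flavour of the mDNS side, here is a minimal Node sketch using the bonjour package from npm (the choice of library and the "http" service type are my own illustration, not part of the Physical Web spec):

    // npm install bonjour
    var bonjour = require("bonjour")();

    // Advertise a tiny "micro-service" the way a beacon or Brillo-class device might.
    bonjour.publish({ name: "shelf-7-beacon", type: "http", port: 3000 });

    // On the client side, browse the local network for such services instead of
    // (or as well as) scanning for BLE advertisements.
    bonjour.find({ type: "http" }, function (service) {
      console.log("Found service:", service.name, "at", service.host + ":" + service.port);
    });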

When you woke up this morning did you honestly think that Beacons were that powerful?

************************************************************************
Kevin Benedict
Writer, Speaker, Analyst and World Traveler
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the Linkedin Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

Mobile Insights - Feeling the Force (Force Touch) with iOS 9

My friend and Cognizant's mobile and digital technical guru, Peter Rogers, has been playing again. In this "must read" article he shares how iOS 9 handles touch and sensing.  Enjoy!
**********

Every time there is a new games console release (especially when Nintendo is involved), rumours are always floating around about technology that would let you actually feel textures on your touch screen: basically, the ability to sense different materials through the screen. It is a lovely idea, and the closest we have come yet is probably haptics (https://en.wikipedia.org/wiki/Haptic_technology) and electric shock feedback (https://www.youtube.com/watch?v=MRQAijNKSEs).

Well, we are not quite there yet, but Apple certainly came close with the iPhone 6S announcement of 3D Touch (http://www.apple.com/iphone-6s/3d-touch/). After revolutionising the touch-screen world with multi-touch, it made perfect sense to add a force element to touches in order to offer different types of touch depending on the applied pressure. In fact, something called Force Touch was already available on the Apple Watch; however, it has less capability to measure your touches and doesn't react as quickly to your input. The new 3D Touch can instantly measure microscopic changes and feed them back from the hardware to the software in real time. 3D Touch is highly sensitive and reacts immediately, also allowing different types (or levels) of press depending on the pressure applied. Apple have included this feature in iOS 9, but the hardware is only present in the 6S devices.

“When you press the display, capacitive sensors instantly measure microscopic changes in the distance between the cover glass and the backlight. iOS uses these measurements to provide fast, accurate, and continuous response to finger pressure, which could only happen with deep integration between software and hardware. iPhone 6s also provides you with responsive feedback in the form of subtle taps, letting you know that it’s sensing the pressure you’re applying.” [Apple]

I have already fallen in love with 3D Touch but we have to remember that it is only available on 3D Touch devices and the feature may also be turned off by the user. Currently the only devices supporting this are the 6S and 6S Plus, which is surprising given that the new iPad Pro would be perfect for pressure sensitive art packages. The Apple Human Interface Guidelines state that “When 3D Touch is available, take advantage of its capabilities. When it is not available, provide alternatives such as by employing touch and hold. To ensure that all your users can access your app’s features, branch your code depending on whether 3D Touch is available.” This gives a glimpse of a future whereby most Apps are using 3D Touch even if it is faked on non-3D Touch devices.

As well as being built into some preinstalled applications, 3D Touch can also be used within third-party applications. It enables three new types of capability:
  1. Pressure sensitive applications, such as art packages
  2. Peek and pop, to preview content without opening it
  3. Quick actions, to offer a short cut to different services offered by the same App
Mobile & Gaming Expert
Cognizant's Peter Rogers
The first is realised by two new properties in the UITouch class: ‘force’ and ‘maximumPossibleForce’. These properties allow ‘UIEvent’ events to convey touch pressure information to the App. A typical example is an art package whereby you press harder to get a thicker line.
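
For web developers, the same pressure information surfaces in Mobile Safari as a normalised force property on Touch objects, rather than the UITouch force/maximumPossibleForce pair. A minimal sketch, assuming a browser and device that expose Touch.force (on hardware without pressure sensing it is simply reported as 0):

    // Listen for touch pressure in a web page and map it to brush thickness.
    document.addEventListener("touchmove", function (event) {
      var touch = event.touches[0];
      if (touch && typeof touch.force === "number" && touch.force > 0) {
        var thickness = 1 + touch.force * 20; // force is roughly 0..1
        console.log("pressure:", touch.force.toFixed(2), "brush px:", thickness.toFixed(1));
      }
    });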

The second is true genius in my opinion. The UIViewController class can respond to three phases of applied pressure to offer 'Peek and Pop' functionality. When you first apply a little pressure, a visual indication appears (the rest of the content blurs) to show whether a content preview is available. If it is, a little more pressure shows you a preview of the content, called a 'Peek'. If you release your finger at this stage, the preview is dismissed and you return to the original user interface without having wasted time loading content you didn't need. The email client is a perfect use case, as you can imagine. If, however, you swipe upwards on the Peek, you are shown the 'Peek Quick Actions', which let you perform quick actions associated with the content (explained in the Quick Actions section later on). If you apply the final level of pressure, you can optionally navigate to the previewed content, and this is referred to as a 'Pop'. The analogy here is of a stack of visual elements that allows you to peek at an element before popping it off the stack.

This is where Apple have been really clever in iOS 9 and their rollout of information. We had previously seen the capability to switch between Apps transparently, but it becomes very clear why this is so useful when we see 'Peek and Pop'. For example, the new Safari View Controller actually uses Safari to do the rendering without launching it. Likewise, the new hot-linking between Web Browser and Apps is seamless, without any App loading or closing. This enables the Peek preview to show you a preview of a Web URL or Apple Map contained in an email without having to clumsily swap between applications. It is built into a few of the native applications: email; web links in email; locations in email; and the camera.

The third is probably the most contentious. By pressing on an App icon on a 3D Touch device, you are presented with a menu of options called Quick Actions. These actions allow you to use the App to quickly perform a given service, for example "Take a Selfie" in the pre-installed Camera App. If you can anticipate between three and five common tasks that your App performs (typically the items within a menu shown on the first screen are good candidates), then you can offer these as Quick Actions, either statically (in your app's Info.plist file) or dynamically (using UIApplicationShortcutItem). A Quick Action can show a small amount of text and an optional icon.

The only downside to all of this wonderfulness is how Xcode 7 supports 3D Touch development. Sadly the Simulator in Xcode 7 does not support 3D Touch and neither does Interface Builder. That pretty much means you need to develop on the device for testing 3D Touch. It also adds a whole layer of entropy for automated testing using systems like Calabash.

As wonderful as iOS 9 is, and I truly believe it is wonderful now, the bottom line is that developers are going to face three issues:
  1. They will need to be doing a lot more on-device testing for 3D Touch and Multi-Tasking
  2. They will be increasingly going in different directions for iOS and Android development
  3. They will be increasingly waiting for cutting edge features to be supported in cross-platform solutions 

iOS 9 may go down in history as the operating system that finally broke cross-platform development and actually differentiated between native Apps and HTML 5.

************************************************************************
Kevin Benedict
Writer, Speaker, Senior Analyst
The Center for the Future of Work, Cognizant
View my profile on LinkedIn
Read more at Future of Work
Learn about mobile strategies at MobileEnterpriseStrategies.com
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the Linkedin Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

The Latest on Microsoft's Windows Phone 8.1 for Enterprise Mobility and IoT

By Guest Blogger and Cognizant Mobile Expert Peter Rogers

A lot of attention lately has been given to Android and iOS, but let's not forget developments from Microsoft. Microsoft made some exciting announcements at Build 2014 that we should consider.

The Windows 8.1 update was given an imminent release date (April 8th), and the Windows Phone 8.1 Dev Preview Program is just starting. There was a nice quote reflecting their intentions with Windows Phone 8.1: "We believe Windows Phone is the world's most personal smartphone."  Microsoft is bringing Windows Phone 8.1 to all Lumia devices running WP8, and the next generation of Lumia devices was shown with ridiculously good cameras and a Snapdragon 400/800 chip inside (1.2/2.2 GHz).

Cortana is Microsoft's version of Siri (with a husky voice); it is powered by Bing and has been fully integrated into the phone experience. Windows Phone 8.1 also comes with an enterprise VPN and Internet Explorer 11. The desktop version of Internet Explorer now has an enterprise mode for improved compatibility (white-listing of sites) and finally supports WebGL (3D).

The first announcement of keen interest to me was the new Universal Apps. These are based on the Windows runtime environment and are portable across PCs, tablets and even Xbox. There is an update to Visual Studio 2013 that allows you to build such Universal Apps. A demo showed the same App running on both Xbox and Windows Phone, and there was also a demo showing the improvements in DirectX 12.

The second thing of interest for me was that ‘The Internet of Things’ got a lot of air time and Microsoft were very keen to talk about Intel and their new Quark chip. It's the smallest SoC the company has ever built, with processor cores one-fifth the size of Atom's, and is built upon an open architecture. Quark is positioned to put Intel in wearables and they even showed off a prototype smartwatch platform Intel constructed to help drive wearable development. Intel President Renee James pointed out that Quark's designed for use in integrated systems, so we'll be seeing Quark in healthcare too. The link for Microsoft was of course their Azure Cloud platform and the shock announcement that Windows will be available for free for Internet of Things-type devices - and indeed for phones and tablets with screens smaller than 9 inches.

The third thing that sparked my interest was from one of the questions in the Q&A, “What's the vision for Microsoft? The vision twenty some-odd years ago was ‘a computer on every desk’. But that's basically been achieved.”  Microsoft's answer, “Mobile First, Cloud First, and a world based on concepts like machine learning.”

I like “Mobile First, Cloud First” as a concept because it stresses the important relationship between the two. Microsoft may not see the success they desire with Windows 8.1 (even when the start menu returns) but it is clear that they are still a force to be reckoned with, and Windows 9 will have all the necessary learning in place to succeed.


*************************************************************
Kevin Benedict
Senior Analyst, Digital Transformation Cognizant
View my profile on LinkedIn
Learn about mobile strategies at MobileEnterpriseStrategies.com
Follow me on Twitter @krbenedict
Join the Linkedin Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

IoT and M2M Cloud Controlled Programmable Hardware

My friend and Cognizant colleague, the ever-opinionated Peter Rogers, shares more of his insights into the world of IoT (Internet of Things) geekdom and how it really works under the covers.
__________

Facebook invested more cash this week when they acquired one of my favourite Kickstarter projects, Oculus VR, for a seemingly ridiculous $2b. The VR (virtual reality) headset was the best-in-class technology (in its price range) and had just added a head-tracking software solution to reduce motion sickness. Of course it wasn't just the VR headset that Facebook acquired, but also the CTO of Oculus VR, who is none other than the legendary game creator John Carmack.

There is every indication that Facebook will let Oculus VR do their own thing, but I do worry about the lack of support from game developers, so John Carmack needs to rally the forces. We all agree there is money in wearable technologies in 2014, right? I have always classed virtual reality headsets as my favourite form of wearable technology, but then I am a gamer at heart and spent a lot of time playing VR games in the local arcades as a teenager and adult. With the problems of motion sickness being alleviated and refresh rates getting faster, we can all look forward to recreating scenes from Disclosure very soon, and this acquisition proves it is happening now.

IFTTT

I was recently looking into IFTTT (if this then that), which is a service that lets you create powerful connections between Internet services. Channels are the basic representation of online services (Facebook, LinkedIn, Evernote, etc.). Triggers are events that take place on a channel, such as "I check in on Foursquare" or "I am tagged in a ridiculous picture of the office party on Facebook". Actions are the tasks to perform, such as "send me a text message to warn me of photos I am tagged in on Facebook". Recipes are the final 'if this then that' statements, which combine triggers on channels with actions to perform. You can have personal recipes, one example being a text-message warning system for photos that you are tagged in on Facebook in the days after an office party.

I didn't realise until recently that some of the non-enterprise MBaaS (mobile backend as a service) systems offer a similar IFTTT-like construct. Take Firebase, which is probably more of a real-time connectivity platform than an MBaaS but has come into the spotlight after a strong partnership with Famo.us: it offers hooks to inject conflict-resolution logic into the proceedings. Likewise, Telerik allows you to inject custom JavaScript code to be executed before/after CRUD (create/read/update/delete) operations on data items. This offers a simpler alternative to a Node/GAE service tier, and with the merging of API Gateways and Enterprise MBaaS on the horizon (a topic for a later Blog) I have a strong feeling we will see this level of configuration-programmatic control in the near future, especially in the wearable space.
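
The exact hook names vary by vendor, so here is a purely illustrative, self-contained sketch of the shape of such before/after CRUD hooks (nothing below is a real Telerik or Firebase API):

    // Hypothetical in-memory hook registry, in the spirit of MBaaS cloud code.
    var hooks = { beforeCreate: [], afterCreate: [] };
    function beforeCreate(fn) { hooks.beforeCreate.push(fn); }
    function afterCreate(fn) { hooks.afterCreate.push(fn); }

    function createItem(item) {
      // "Before" hooks can veto or massage the write (validation, conflict resolution).
      for (var i = 0; i < hooks.beforeCreate.length; i++) {
        if (hooks.beforeCreate[i](item) === false) return null;
      }
      var saved = { id: Date.now(), data: item };               // pretend this hit the datastore
      hooks.afterCreate.forEach(function (fn) { fn(saved); });  // the "then that" part
      return saved;
    }

    // An IFTTT-flavoured recipe: if a photo tagged with me is created, then warn me.
    beforeCreate(function (item) { return typeof item.photoUrl === "string"; });
    afterCreate(function (saved) {
      if (saved.data.taggedMe) console.log("Text message: you were tagged in " + saved.data.photoUrl);
    });

    createItem({ photoUrl: "http://example.com/party.jpg", taggedMe: true });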

Tessel

Tessel starts by asking a great question, "How do you teach web developers about hardware?", and it is a question I have long been pondering from a resourcing perspective. The answer they give is fantastic: "You don't. You teach hardware about web developers." You use a familiar web development language such as JavaScript and Node to make programming hardware a much higher-level affair.

Tessel is a micro-controller that runs Embedded JavaScript. The guys at Tessel seem to think that JavaScript is the perfect embedded language, and I am inclined to agree. Tessel are targeting the affordable embedded processor range of Cortex-M0 to Cortex-M4, which sit at the lower end of the performance spectrum but come in at the $4@1k range. The options are either to run a JavaScript VM (which comes in at around 10MB of memory) or a Lua VM (which is highly portable and comes in at around 30KB). I was curious what Embedded JavaScript actually was, and I guess we will see quite a few definitions of cut-down versions of ECMAScript, but Tessel have a unique take on all this. Originally there would be a JavaScript file and a g-zipped Lua file on a local computer, which was then sent to a Tessel micro-controller running a Lua interpreter. To improve performance they have now moved to having a JavaScript file on the local computer; on the actual Tessel they compile the JavaScript to Lua bytecode and run it through a LuaJIT (just-in-time compiler) based custom RTOS.
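
To give a flavour of how high-level that is, here is the canonical Tessel 'blink' example, more or less as it appears in Tessel's own getting-started material (so treat the exact API as theirs to change):

    // Deployed from the host machine with `tessel run blink.js`; the code itself runs on the Tessel.
    var tessel = require("tessel");

    // Tessel exposes its on-board LEDs as GPIO-like objects.
    var led1 = tessel.led[0].output(1);  // start on
    var led2 = tessel.led[1].output(0);  // start off

    setInterval(function () {
      console.log("toggling the LEDs");
      led1.toggle();
      led2.toggle();
    }, 200);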

I remember all of the MEAPs (mobile enterprise application platforms) used to support Lua, and everyone soon moved away to the more familiar JavaScript language. Corona was the first to see an exodus of game developers, due to the closed nature of the solution. Now in the MCAP space everyone is moving away from JavaScript VMs to cross-transpiled / cross-compiled JavaScript solutions (Hyperloop, Cocoon, Intel XDK). This means you get to write in JavaScript but end up with native code, which is a win-win, unless you hate JavaScript. The future is that it will become feasible to embed in every product a micro-controller powerful enough to run a high-level language, but for now JavaScript (or Embedded JavaScript, as it will be called) seems to be the language of choice.

Firmata

I later discovered trailr which allows you to build and deploy Arduino ‘environment-aware’ sketches over WebSockets. This basically means that you can effectively reprogram the hardware by sending an environment configuration over the air. This led me onto Firmata, which is a generic protocol for communicating with microcontrollers from software on a host computer. It is intended to work with any host computer software package. Basically, this firmware establishes a protocol for talking to the Arduino from the host software. The aim is to allow people to completely control the Arduino from software on the host computer. Firmata is therefore a simple Arduino sketch that allows you to control all of the pins on the micro-controller dynamically without loading a new program on the board every time you want to do something.
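
On the host side, one popular Node library that speaks Firmata to the board is Johnny-Five (my choice of example; it is not mentioned above). A minimal sketch, assuming an Arduino already flashed with the StandardFirmata sketch and connected over USB:

    // npm install johnny-five
    var five = require("johnny-five");
    var board = new five.Board();   // auto-detects the serial port in most cases

    board.on("ready", function () {
      // Drive pin 13 (the Uno's on-board LED) over the Firmata protocol,
      // without loading a new program onto the board.
      var led = new five.Led(13);
      led.blink(500);               // toggle every 500 ms
    });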

SkyNet and Cloud Programmable Hardware

I have to mention SkyNet once again, after they amazed me by lighting up their office with Philips Hue light bulbs that change colour (red or green) as their stock price fluctuates (using the Yahoo Stock Market API). You can see the video here: https://www.youtube.com/watch?v=ZNiHQXmawys. SkyNet have firmware that allows an Arduino to automatically connect to SkyNet and await Firmata instructions. SkyNet then becomes the compute cloud for controlling devices and collecting sensor data without CPUs or custom device apps.

As Chris from SkyNet says, “You could literally duct-tape an Arduino, MicroArduino (https://www.kickstarter.com/projects/microduino/microduino-arduino-in-your-pocket-small-stackable), Spark device (https://www.spark.io/), or RFduino (http://www.rfduino.com/) to a light pole with a small rechargeable battery and solar cell.  It connects to SkyNet allowing you to stream sensor data from connected sensors or you could turn on pins for lights, relays, motors, etc. via SkyNet messages. SkyNet messages could be sent from people all around the world.”

I must admit that I find the whole concept of Cloud controlled programmable hardware very exciting.


*************************************************************
Kevin Benedict
Senior Analyst, Digital Transformation Cognizant
View my profile on LinkedIn
Learn about mobile strategies at MobileEnterpriseStrategies.com
Follow me on Twitter @krbenedict
Join the Linkedin Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.
