In previous articles I predicted that wearable technology would be powered by lightweight operating systems, citing Samsung’s decision to go with Tizen instead of Android. That decision was apparently based on battery life and user interface considerations. However, just after the article hit the Internet, Google executive Sundar Pichai announced an Android SDK for wearables. Android is used in many different ways, as demonstrated by the Kindle and the Nokia X (the Nokia X appears to layer a Windows Phone-style look and feel on top of Android). Indeed, arguably in response to such forks, Google has been moving many key Android 4.4 APIs out of the open platform and into its cloud-backed Google Play Services.
Wearable device developers will want to know which APIs are available to them. Turn the clock back to the J2ME days and there was a dedicated user interface (UI) API called javax.microedition.lcdui, a small UI library compared with today's Android libraries. Just as you wouldn’t run Java Swing on Android, a wearable device needs a more constrained UI API. Even though a wearable device may run a full operating system, it will most probably have a constrained UI, and that means a slightly different programming style.
Recently an interesting article in the Washington Post supported my claim that wearable devices and the Internet of Things require different skill sets. The article listed the new skills required as data analytics and enterprise data analysis: essentially, you need to know how to capture the data, read the data and then apply the data to your specific business domain. Real-time analytics and visualisation tools will surely become critical in the wearable space, and this is where Fog Computing, a term recently coined by Cisco, comes in:
“Fog Computing is a paradigm that extends Cloud computing and services to the edge of the network. Similar to Cloud, Fog provides data, compute, storage, and application services to end-users. The distinguishing Fog characteristics are its proximity to end-users, its dense geographical distribution, and its support for mobility. Services are hosted at the network edge or even end devices such as set-top-boxes or access points. By doing so, Fog reduces service latency, and improves QoS, resulting in superior user-experience. Fog Computing supports emerging Internet of Everything (IoE) applications that demand real-time/predictable latency (industrial automation, transportation, networks of sensors and actuators). Thanks to its wide geographical distribution the Fog paradigm is well positioned for real time big data and real time analytics. Fog supports densely distributed data collection points, hence adding a fourth axis to the often mentioned Big Data dimensions (volume, variety, and velocity).”
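The "capture, read and apply" skills mentioned above, and the real-time edge analytics Cisco describes, can be made concrete with a minimal sketch. This is plain Java, and every class and method name here is invented for illustration: a fixed-window moving average over heart-rate samples, with a simple alert rule applied on the device itself rather than in the cloud.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Illustrative sliding-window analytic for a wearable sensor stream. */
public class HeartRateMonitor {
    private final Deque<Integer> window = new ArrayDeque<>();
    private final int windowSize;
    private int sum = 0;

    public HeartRateMonitor(int windowSize) {
        this.windowSize = windowSize;
    }

    /** Capture one sample and return the current moving average. */
    public double addSample(int bpm) {
        window.addLast(bpm);
        sum += bpm;
        if (window.size() > windowSize) {
            sum -= window.removeFirst();   // evict the oldest sample
        }
        return (double) sum / window.size();
    }

    /** Apply the data: a simple real-time alert rule run at the edge. */
    public boolean isElevated(double threshold) {
        return !window.isEmpty() && (double) sum / window.size() > threshold;
    }
}
```

The point of the sketch is latency: because the window and the rule live on the device, an elevated reading can trigger an alert without a round trip to a cloud service, which is exactly the Fog argument.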
In trying to predict what will be in the Android wearable Software Developer Kit (SDK), it is very interesting to note that Google acquired the Android smartwatch vendor WIMM Labs last year. WIMM Labs released its first smartwatch, the WIMM One, back in 2011; it ran Android and included an SDK for developers. Interestingly, WIMM has since removed all of the SDK documentation from its website, but many WIMM One developers downloaded it before it was taken offline and so got a potential glimpse of what Google is planning. WIMM had a Micro App Store featuring the following categories: entertainment; productivity; health; shopping; travel; utilities; watch faces; and games. As well as the Software Developer Kit there was a Hardware Developer Kit, which allowed you to make accessories that wrap around the WIMM module.
One of the most interesting features was that location information could be retrieved from one or more sources: built-in GPS, network-based IP-location lookup, or a paired Android or BlackBerry smartphone. This shows that the WIMM One could keep working even when not paired to a phone, and that it was equally able to pair with a BlackBerry.
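That multi-source behaviour amounts to a priority-ordered fallback chain. Here is a minimal sketch of the idea in plain Java — the class name, the use of Supplier/Optional, and the string-valued "fix" are all my own invention for illustration, not anything from the WIMM SDK:

```java
import java.util.Optional;
import java.util.function.Supplier;

/** Hypothetical multi-source location fallback: try each source in
 *  priority order and take the first one that returns a fix. */
public class LocationResolver {
    private final Supplier<Optional<String>>[] sources;

    @SafeVarargs
    public LocationResolver(Supplier<Optional<String>>... sources) {
        this.sources = sources;
    }

    /** Returns the first available fix, or "unknown" if every source fails. */
    public String resolve() {
        for (Supplier<Optional<String>> source : sources) {
            Optional<String> fix = source.get();
            if (fix.isPresent()) {
                return fix.get();
            }
        }
        return "unknown";
    }
}
```

A paired smartphone is simply one more supplier in the chain, which is why the device degrades gracefully when pairing drops: the GPS and IP-lookup sources remain available.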
Looking ahead to the Android wearable SDK, it is heavily rumoured to support Google Now, the voice control feature. It will also have to support Bluetooth Low Energy for communication with mobile devices, both for pairing and for detecting other sensors. It is also worth looking at the Glass Development Kit (GDK) for hints at what may be revealed. The GDK was the alternative to the Mirror API, which only really supported REST calls to a Google cloud service. The GDK is an add-on that builds on top of the Android SDK and offers voice commands, a gesture detector and cards. It is safe to assume that voice control, local networking, touch control and potentially gesture control are all on the cards. Google will show its hand at Google I/O, and Samsung has already shown its Gear 2 devices at MWC. Next it is time for Apple to show its hand, and if history has taught us anything, it may well be a decisive one.
Senior Analyst, Digital Transformation Cognizant
View my profile on LinkedIn
Learn about mobile strategies at MobileEnterpriseStrategies.com
Follow me on Twitter @krbenedict
Browse the Mobile Solution Directory
Join the LinkedIn Group Strategic Enterprise Mobility
***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.