IoT Trends: Four Key Capabilities for Third-Wave Products

Computers used to look like computers. Of course, we saw an evolution of form factors from large mainframes to portable mobile devices, but this was a measured evolution with user interface paradigms anchored around screens and manual input methods (e.g. keyboard, touch, voice). If we transported a DEC VT100 terminal user from 1978 to the present day, they might be surprised by the vivid color and graphics of a laptop or touch screen computer, but they would instantly understand how to interact with it.

How do you define your product and tune its capabilities when there is no common thread to anchor them? It turns out third wave products share four key capabilities that make them unique – Sensing, Connecting, Thinking and Interacting. Understanding these commonalities is key to designing new products.

What makes the third wave unique is the ability for technology to observe people, the environment, machinery, and the rest of the physical world on a whole new scale. An explosion of devices is constantly collecting observations, increasing the amount of data generated globally.

Today people create a vast trove of data from the social Web – Facebook posts, LinkedIn profiles, YouTube videos, etc. Think of this data as comprising the social graph. The physical graph will comprise the data generated by the new web of sensors and machines. Analysis of the raw data collected from these sensors provides the means for generating insights. As GE Software puts it: “What tracking cookies in a browser do for understanding what Web ads to put in front of you, a turbine rich with sensors and complementary software will do for better operating an electrical grid.”

The technical capability to collect and analyze data is not new. What is new is the scale of data from different sources and our ability to combine these disparate data streams to generate insights. With more data to review and analyze, the breadth and depth of analysis improves. Different types of data from the same context also improve analysis. By combining different types of observations, a richer picture of the world emerges.

The human brain quickly paints a rich picture full of context and understanding by combining inputs from our five senses. Sensors can do the same. Microsoft’s Kinect combines audio, video, and infrared in a way that vastly improved a machine’s understanding of people and objects in a physical environment. This capability to combine disparate yet contextually related data streams is sensor fusion.
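To make sensor fusion concrete, here is a minimal sketch of a complementary filter, one common fusion technique: it blends a gyroscope's short-term precision with an accelerometer's drift-free gravity reference to estimate a device's pitch angle. The function and parameter names are illustrative, not from any particular product.

```python
import math

def fuse_pitch(prev_pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Complementary filter: combine two disparate but contextually
    related data streams into one better estimate of pitch (degrees)."""
    # Integrate angular rate for a responsive short-term estimate
    # (accurate over milliseconds, but drifts over minutes).
    gyro_pitch = prev_pitch + gyro_rate * dt
    # Derive an absolute (but noisy) pitch from gravity's direction.
    accel_pitch = math.degrees(math.atan2(accel_x, accel_z))
    # Weighted blend: trust the gyro short-term, the accelerometer long-term.
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# A stationary device tilted 10 degrees: both streams agree,
# so the fused estimate stays at 10 degrees.
pitch = 10.0
pitch = fuse_pitch(pitch, gyro_rate=0.0,
                   accel_x=math.sin(math.radians(10.0)),
                   accel_z=math.cos(math.radians(10.0)),
                   dt=0.01)
```

Neither sensor alone gives this result: the gyro drifts and the accelerometer is noisy, but fused together they produce a stable, responsive estimate – the same principle Kinect applies across audio, video, and infrared.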

While connectivity is not new, the ubiquity and low cost of connectivity are the key enablers driving connected products. Consider that connectivity makes it possible to dial down the intelligence (and thereby the cost) of endpoints, because computing workloads can move to a central location for storage and analysis. There is a trade-off between capabilities: in this case, dialing up connectivity allows dialing down endpoint intelligence.

As is often the case with new technology, there is a proliferation of ways in which devices connect and communicate: wired, wireless, low-energy Bluetooth, and even dedicated IoT networks for multiple use cases and industries. Connectivity options exist that are optimized for the security, bandwidth, and energy consumption a given product requires.

A device’s intelligence is dictated by a combination of hardware and software. Like the other capabilities, intelligence can be dialed up or down as needed.

  • On the low end, wearables like Fitbit or Misfit Shine have limited intelligence. They are built for a very specific function – to collect fitness activity – and have little capacity for anything else.
  • In the middle, Nest has dialed up the intelligence in its thermostats by using smartphone hardware and machine learning software to optimize temperature control based on users’ living patterns.
  • On the high end, autonomous vehicles have the equivalent hardware of multiple PCs with highly advanced software for making real time driving decisions.

This wide range of computing horsepower, from multi-core processors to microcontrollers, means product designs need to account for the needs of the product and the constraints of the environment. Can you offload computing workloads to another system to save cost on hardware? Will you have access to enough power to run a multi-core processor, or will you need a power-efficient microcontroller that can run on batteries for three years?
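One common way to answer the offloading question is to summarize raw readings on the endpoint and transmit only compact aggregates, leaving heavier analysis to a central system. The sketch below illustrates that pattern with hypothetical names; real products would tune the window and summary statistics to their own workload.

```python
def summarize_on_device(readings, window=60):
    """Reduce a window of raw sensor readings to a compact summary so a
    battery-powered endpoint transmits a few bytes per minute instead of
    a raw sample per second, deferring deep analysis to a central system."""
    summaries = []
    for start in range(0, len(readings), window):
        chunk = readings[start:start + window]
        summaries.append({
            "min": min(chunk),
            "max": max(chunk),
            "mean": sum(chunk) / len(chunk),
        })
    return summaries

# 120 one-second temperature samples become two per-minute summaries.
samples = [20.0 + (i % 60) * 0.01 for i in range(120)]
uplink = summarize_on_device(samples)
```

The design choice here is the connectivity/intelligence trade-off in miniature: a cheaper, dumber endpoint is viable precisely because the network carries its workload elsewhere.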

Third wave products are not simply a digital manifestation of the physical world. Rather, insights drawn from the physical graph trigger corresponding changes back in the physical world. For example, irrigation sensors in agriculture drive watering patterns that ensure only the crops that really need water get it.
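The irrigation example above can be sketched as one cycle of a sense–think–act loop. This is an illustrative toy, not any vendor's implementation; the moisture thresholds and function name are assumptions chosen for clarity.

```python
def irrigation_step(moisture_pct, valve_open, low=30.0, high=45.0):
    """One sense-think-act cycle: open the valve when soil is too dry,
    close it once moisture recovers. The gap between `low` and `high`
    (hysteresis) prevents the valve from rapidly toggling."""
    if moisture_pct < low:
        return True          # too dry: start watering
    if moisture_pct > high:
        return False         # wet enough: stop watering
    return valve_open        # in between: keep the current state

# A drying field triggers watering; recovery shuts the valve again.
states = []
valve = False
for reading in [50.0, 40.0, 28.0, 35.0, 47.0]:
    valve = irrigation_step(reading, valve)
    states.append(valve)
```

Each pass through the loop is a change in the physical graph (a moisture reading) producing a change in the physical world (water flowing, or not).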

In the prior eras of computing, people were the primary interface with computers using keyboards, mice and screens. However, as technology becomes a part of our environment, interactions will move from screens to rooms, buildings, and ultimately an entire city. Smart thermostats are a simple form of interactivity for adjusting climate control, whereas an asteroid detection and prevention system is more complex. Both are examples of machine-based interactivity made possible by the feedback loop of sensing, connecting, and thinking. Interacting is the final step that closes the loop by feeding back into the physical world.

Come back later this week for more insights on New Architecture (Wed Feb 25), New Software Skills and Security (Thu Feb 26) and New Market Dynamics (Fri Feb 27) that will shape the third wave of computing. For more perspective on IoT product engineering, please click here to access Aricent’s full white paper titled, “Never Mind the IoT, Here Comes the Third Wave.”

Editor’s Note: This is the second in a five-part series on #IoT #Engineering trends that will shape how companies design, build, and bring products to market. To learn more, please register here to meet with our team at #MWC15.

