You’ve heard about it. Now it’s time to develop for it.
Contextual Computing is completely changing the way people interact with their devices, and the way those devices respond in turn. The trackpad has given way to a new, two-way exchange of information in which computers can recognize and understand a user’s eye movements, voice commands, and hand gestures, and then take intelligent action based on that information. This contextual information is acquired from a variety of sources, such as microphones, cameras, GPS, and other sensors that reside on the device.
Now, here’s where you come in:
What kinds of applications could you develop knowing that devices have become aware of their current states, surroundings, and user commands? Could you leverage the capabilities of a 3D camera and develop a new kind of office productivity app? How will voice control inspire you? The possibilities are endless.
The Intel® RealSense™ camera brings embedded depth sensing to Lenovo’s PCs and tablets. In addition to being able to see in 3D and in low-light conditions, it has dual-array microphones that serve as the ‘ears’ of the device. Combined with the Intel® Perceptual Computing SDK, this opens up a host of possibilities.
Intel’s new 3D camera holds incredible potential for productivity apps, and Lenovo sees this as one of the hottest areas for developers. If you have an app, or are currently developing one, for the 3D camera, now is the time to let us know.
Non-verbal communication comprises a large part of our interaction with others, and Gesture Control now allows our devices to understand hand gestures as well. Using its built-in camera, a device can detect hand motions, or gestures, and execute commands without the user ever having to touch the device.
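At its simplest, the last step of that pipeline is a dispatch from a recognized gesture label to a device command. The sketch below is purely illustrative; the gesture names and command table are hypothetical, not the Intel Perceptual Computing SDK API, which handles the actual camera-based detection.

```python
# Hypothetical sketch: dispatching recognized hand gestures to commands.
# Gesture labels and commands are illustrative assumptions.

def execute(gesture: str) -> str:
    """Map a recognized gesture label to a device command."""
    commands = {
        "swipe_left": "previous_slide",
        "swipe_right": "next_slide",
        "thumbs_up": "confirm",
        "wave": "wake",
    }
    # Unrecognized gestures are ignored rather than misfired.
    return commands.get(gesture, "ignore")

print(execute("swipe_right"))  # next_slide
print(execute("fist"))         # ignore
```

A table-driven dispatch like this keeps the gesture vocabulary easy to extend as the recognizer learns new motions.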
For a device to become ‘aware,’ it needs to know what’s going on with it and happening around it. It gleans this information through different sensors, and the aggregation of those sensor inputs conveys the ‘state’ of the device and its environment at a given time.
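One way to picture that aggregation is a snapshot structure that fuses raw sensor readings into a single device state. The field names and thresholds below are assumptions for illustration only.

```python
# Illustrative sketch (names and thresholds are hypothetical):
# fusing raw sensor inputs into one "state" snapshot.
from dataclasses import dataclass

@dataclass
class DeviceState:
    ambient_light: float  # lux, from the light sensor
    moving: bool          # derived from the accelerometer
    user_present: bool    # derived from the camera

def aggregate(light_lux: float, accel_g: float, faces: int) -> DeviceState:
    return DeviceState(
        ambient_light=light_lux,
        moving=accel_g > 1.2,   # 1.2 g threshold is an assumption
        user_present=faces > 0,
    )

state = aggregate(light_lux=15.0, accel_g=1.0, faces=1)
# Low light plus a present user: the device might brighten its screen.
print(state)
```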
Sensors available today can also measure a person’s point of gaze, or the motion of an eye relative to the head. Combining knowledge of what a person is looking at, and for how long, with other contextual information can enable a device to achieve a whole new level of understanding.
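A common use of the “what plus for how long” combination is dwell-time selection: a gaze held on one target long enough counts as a choice. This is a minimal sketch under assumed sample and threshold values, not any particular eye-tracking API.

```python
# Hypothetical dwell-time sketch: a sustained gaze on one target
# is treated as a selection. Sample format and threshold are assumptions.

def dwell_select(samples, threshold=0.5):
    """samples: list of (timestamp_sec, target) gaze fixes.
    Return the first target gazed at continuously for >= threshold seconds."""
    current, start = None, None
    for t, target in samples:
        if target != current:
            current, start = target, t  # gaze moved: restart the clock
        elif t - start >= threshold:
            return target
    return None

gaze = [(0.0, "icon_a"), (0.2, "icon_a"), (0.4, "icon_b"),
        (0.6, "icon_b"), (0.8, "icon_b"), (1.0, "icon_b")]
print(dwell_select(gaze))  # icon_b
```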
Not only can your device hear what a user says, but now with Natural Language Understanding, it also knows what that user means and intends. For those times when using a keyboard, mouse, or touch isn’t convenient or desirable, voice control is an excellent alternative.
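Knowing what a user *means* comes down to mapping an utterance to an intent, and sometimes a slot value such as a direction or a name. Real natural language understanding uses statistical models, but the hypothetical keyword-based sketch below shows the shape of the idea.

```python
# Hedged sketch of intent extraction. Keyword matching stands in for a
# real NLU model; intent names and rules are illustrative assumptions.

def parse_intent(utterance: str) -> dict:
    text = utterance.lower()
    if "email" in text and ("send" in text or "write" in text):
        return {"intent": "compose_email"}
    if "volume" in text:
        direction = "up" if ("up" in text or "louder" in text) else "down"
        return {"intent": "set_volume", "slot": direction}
    return {"intent": "unknown"}

print(parse_intent("Make the volume louder"))
```

The point is the output contract, not the matching: downstream code acts on a structured intent rather than on raw words.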