By Oded Lilos, CMO of Ginger Software
One of the hallmarks of recent technology is the way it is designed to respond to the people who use it. Familiar examples include Google Now cards (or Waze) that follow the user’s travels, notifications of upcoming appointments, app preferences, and accurate contact selection when the user enters only a few letters. In the industry, this responsiveness is known as “contextual awareness”: a computing device uses user-specific data, such as location and sensor readings, to determine its user’s circumstances. The device then seems to “know” where the user is and what he or she needs. Although desktop computers make use of this technology to some degree, mobile devices are by far the greatest providers of contextual awareness.
Coming Soon to a Keyboard Near You
The mobile keyboard has come a long way from its humble push-button origins. If once the keyboard was an unresponsive piece of unintelligent hardware, the latest versions feature software touchscreens seemingly capable of anticipating and responding to your needs before you’re even aware of them.
Users have already come to expect a certain level of proactive responsiveness from their keyboards. Whether it’s word predictions that change according to the person you’re texting with, or auto-learning of the names, locations, and slang you frequently use, contextual awareness, albeit in a very limited form, is already a standard requirement in any self-respecting keyboard app. However, that’s just the tip of the iceberg when it comes to the value-adding capabilities that AI-based contextual awareness could potentially provide.
How can a keyboard do all that?
Well, let’s take a step back and consider how keyboards store information for sophisticated AI-based analysis, and we’ll see how the future is nearly here.
The simplest form of language processing is the one-to-one, word-for-word mapping of meaning that appears in stilted, mechanical, bad translations. In the past, even more sophisticated attempts at language processing depended on inputting vast amounts of data, along with rules governing how that content could be manipulated. The results may have been functional, but they were not natural.
Using AI for translation improves on that approach by recognizing how words are actually used, including when the same word is used in different ways. Machine-learning algorithms examine vast numbers of real-world examples and derive from them the rules that govern natural language. These statistical inferences determine the best possible match for the word in question, and they are much more successful at generating language that sounds like that of real, live human beings. Indeed, in the best case, the computer seems to be learning the language from its users and responding as if it had human understanding.
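The statistical inference described above can be sketched in miniature as a bigram model: count which word follows which in a small corpus, then predict the most frequent follower. The corpus and sentences here are invented for illustration; real systems train on vastly more data and richer models.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count word pairs across example sentences -- the 'vast amounts
    of real-world examples' the article describes, in miniature."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the statistically most likely next word, if any."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = [
    "let us meet for lunch",
    "meet me for lunch today",
    "meet me at noon",
]
model = train_bigram(corpus)
print(predict_next(model, "for"))  # "lunch" follows "for" most often
```

Even this toy version shows the key property: the rules are derived from usage rather than hand-entered, so more input yields better predictions.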
In applying these algorithms, AI breaks every sentence down into definable segments that function as commands, and then determines what action is an appropriate response. The idea behind smart keyboards is comparable: they will study users’ keystrokes and the sequence of events that follows them, and apply that “knowledge” the next time those same keystrokes are made.
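The keystrokes-to-events learning loop might look like the following minimal sketch: record which action the user takes after typing a given phrase, and suggest the most frequent one the next time that phrase appears. The phrase and action names are invented for illustration, not drawn from any real keyboard API.

```python
from collections import Counter, defaultdict

class ActionLearner:
    """Toy sketch of a keyboard that learns which action tends to
    follow a typed phrase, then suggests it proactively."""

    def __init__(self):
        # phrase -> Counter of actions observed after that phrase
        self._observed = defaultdict(Counter)

    def observe(self, typed_phrase, action_taken):
        """Record one (phrase, subsequent action) event."""
        self._observed[typed_phrase.lower()][action_taken] += 1

    def suggest(self, typed_phrase):
        """Suggest the most frequently observed follow-up action."""
        actions = self._observed.get(typed_phrase.lower())
        return actions.most_common(1)[0][0] if actions else None

learner = ActionLearner()
learner.observe("meet for lunch", "open_calendar")
learner.observe("meet for lunch", "open_calendar")
learner.observe("meet for lunch", "open_maps")
print(learner.suggest("Meet for lunch"))  # most common: open_calendar
```

Note that the suggestions sharpen with use, which is exactly the user-driven improvement described later in the article.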
Let’s consider the following scenarios:
· You text a friend to meet you for lunch. In doing so, of course, you have used your smartphone’s keyboard. If your keyboard were contextually aware, it would process your sentence and send an invitation to your friend, suggest locations for you to meet, recommend restaurants, provide directions, make a reservation, and more.
· You begin typing the letters of a particular client’s company name. Your keyboard recognizes those keystrokes and calls up your previous correspondence and other history with that client.
· On Mother’s Day, you open your contacts list to reach your mother, and your contextually aware keyboard suggests sending flowers to her.
· Your users type in your product’s name, and you are able to target their attention with advertisements that draw on those keystrokes.
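The client-name scenario above boils down to a prefix lookup against stored context. A minimal sketch, with invented names and history entries standing in for real correspondence data:

```python
def suggest(history, typed):
    """Return stored context entries whose name starts with what the
    user has typed so far (case-insensitive prefix match)."""
    prefix = typed.lower()
    return [ctx for name, ctx in history.items()
            if name.lower().startswith(prefix)]

# Hypothetical client history a contextually aware keyboard might hold.
history = {
    "Acme Corp": "last email: contract renewal",
    "Acorn Ltd": "last call: pricing follow-up",
    "Zenith Inc": "last meeting: kickoff notes",
}

print(suggest(history, "Ac"))   # both Acme Corp and Acorn Ltd match
print(suggest(history, "Zen"))  # only Zenith Inc matches
```

A production keyboard would index far more context and rank results, but the principle is the same: a few keystrokes are enough to surface relevant history.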
As keyboards are programmed to incorporate contextual awareness, their usefulness will become nearly boundless. The time saved by the simple act of the keyboard recognizing your need to set up an appointment and proactively opening your calendar app is just one obvious example.
Moreover, as you use your smart keyboard, it will analyze your input and adapt to your usage. Once the software is in place, it falls to you, the user, to increase the degree of sophistication in its suggestions and recommendations for you. That is, the more you use it, the more attuned and nuanced it will become.
The development of contextually aware keyboards will draw on the sophistication of AI, and as the technology is built into widely used products like smartphones and used with regularity, the user experience will be personally tailored beyond anything we have seen thus far. It is exciting to realize that this future is almost here.
Oded Lilos is the Chief Marketing Officer of Ginger Software, a mobile keyboard app developer.