In recent years, traditional computing peripherals have changed and evolved. While smartphones and tablets made touch-screen technology commonplace, on-screen keyboards were still developed for these devices for tasks such as messaging.
At this point, few could imagine that the traditional keyboard so many of us use daily is about to become obsolete. This device, the physical interface between user and computer, is so common that it is hard to imagine it ever going out of fashion or falling out of use.
And yet, contrary to popular belief, this is exactly what is happening. How we interface with computing devices is changing faster than we can imagine. From eye-tracking technology to haptic gloves, manufacturers are developing a range of devices to make computing, and the way we interact with it, more natural.
Researchers are also exploring a more advanced level of communication that combines AI, neural transmission, and voice.
For many advanced research facilities, the ultimate goal is neural transmission: brain waves sending signals directly to computing devices, where an interpreter can analyze those signals and respond as efficiently as our bodies do when the brain sends signals to control millions of muscle fibers.
The outcomes of this technology could be phenomenal, especially in medicine, where chronic physical disabilities may be treated through neural and physical augmentation. A number of technologies would have to work cohesively to achieve this, but the biggest step towards this reality will be the ability to connect our brains, non-invasively, to computing circuits: processors that can ultimately understand us and translate that understanding into any number of functions.
Over the past century or more, scientists have shown through exhaustive research that instructions guide our brains and bodies to function in certain ways, and that these sets of biological code are embedded within our DNA.
The computing process is not too dissimilar: instructions are passed to circuits and semiconductors, which ultimately tell the device how to function.
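The instruction-driven model of computing described here can be pictured with a toy sketch: a minimal "machine" that reads a list of simple instructions and updates its state, loosely analogous to a processor executing a program. The instruction names and structure below are invented purely for illustration.

```python
def run(instructions):
    """Execute a list of (operation, argument) pairs on a single accumulator."""
    accumulator = 0
    for op, arg in instructions:
        if op == "SET":
            accumulator = arg          # load a value directly
        elif op == "ADD":
            accumulator += arg         # add to the current state
        elif op == "SUB":
            accumulator -= arg         # subtract from the current state
        else:
            raise ValueError(f"unknown instruction: {op}")
    return accumulator

# A tiny "program": the machine only does what its instructions say.
program = [("SET", 10), ("ADD", 5), ("SUB", 3)]
print(run(program))  # 12
```

Real processors work with far richer instruction sets, but the principle is the same: behavior emerges entirely from the instructions fed in.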
But the most remarkable part of our design, compared with other creatures, is language. Voice, how it is used to communicate, and how our brains are able to interpret it is perhaps one of the most advanced biological processes in existence. It is hard to fathom that this arose through pure coincidence, as there are known gaps in the theory of evolution, especially when it comes to the evolution of intellect.
Voice technology has risen sharply in popularity over the past decade, and devices like Alexa and Google Assistant are now very much mainstream. Voice is probably the easiest transitional path towards complete abandonment of the traditional keyboard, with instructions spoken to computers rather than typed. The only downside is that people look a little strange talking to themselves when in fact they are talking to their devices.
Voice does have practical limitations in certain scenarios, however. If your career is in programming, for example, speaking commands into a computer is challenging: algorithms sound like gibberish when spoken aloud, and what makes perfect logical or mathematical sense when typed can sound nonsensical when voiced.
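Both sides of this trade-off can be sketched in a few lines: mapping a spoken transcript to a device command is easy for short natural phrases, but a line of source code read aloud matches nothing. This assumes speech-to-text has already produced the transcript; the phrase table and command names are hypothetical.

```python
# Hypothetical table mapping normalized spoken phrases to device commands.
COMMANDS = {
    "turn on the lights": "lights_on",
    "what time is it": "tell_time",
    "play some music": "play_music",
}

def interpret(transcript):
    """Return the command matching a transcript, or None if unrecognized."""
    normalized = transcript.lower().strip(" ?.!")
    return COMMANDS.get(normalized)

print(interpret("Turn on the lights!"))            # lights_on
print(interpret("for i in range(10): print(i)"))   # None: code reads as gibberish
```

Everyday requests map cleanly, while dictated code falls straight through, which is exactly the limitation voice-first interfaces run into with programming.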
Artificial intelligence is enhancing voice by building an ever-growing database of language and interaction as people around the world engage with #VoiceFirst devices such as Alexa.
What the AI cannot process or understand is handled by thousands of people working for companies like Amazon. Their primary focus is to take daily human interaction, emotion, and behavior and feed it into the AI engine being developed, so that devices like Alexa become better at understanding how real people communicate.
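The human-in-the-loop flow described here can be sketched roughly as a routing step: utterances the model cannot classify confidently go into a review queue for human annotators, whose labels later feed back into training data. The stand-in classifier, threshold, and labels below are all invented for illustration.

```python
def classify(utterance):
    """Stand-in classifier returning a (label, confidence) pair."""
    known = {"turn on the lights": ("lights_on", 0.97)}
    return known.get(utterance.lower(), ("unknown", 0.20))

def route(utterances, threshold=0.8):
    """Split utterances into auto-handled results and a human review queue."""
    handled, review_queue = [], []
    for text in utterances:
        label, confidence = classify(text)
        if confidence >= threshold:
            handled.append((text, label))
        else:
            review_queue.append(text)  # a human annotator labels these later
    return handled, review_queue

handled, queue = route(["Turn on the lights", "hmm what's the weather like eh"])
print(handled)  # [('Turn on the lights', 'lights_on')]
print(queue)    # ["hmm what's the weather like eh"]
```

The design point is the feedback loop: everything the machine cannot handle becomes labeled training material, so the system's coverage grows with use.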
Augmented reality solutions are likely to bridge some of the practical challenges of working with technology in a more natural way. It is entirely plausible, though, that in twenty or so years keyboards will be sitting in a history museum as relics of the past.
As technologies driven by neuroscience, AI, AR, and voice mature, and data processing evolves from ‘input/output’ to instinctive models of processing, delivery, and consumption, the augmented human will become a reality.