The Exciting (?) Future of Artificial Intelligence
By now you’ve undoubtedly heard the terms artificial intelligence, machine learning, deep neural networks, and their related derivatives ad nauseam. Even though they’ve become buzzwords, there is substantial real promise in these technologies. A quick, high-level overview can be found here in case you are unfamiliar.
The current landscape largely revolves around predictive analytics: often multiple machines learning on their own as they process immense unstructured data sets. Predicting that an engine may fail and performing preventive maintenance ahead of time at lower cost, or personalizing and targeting your marketing for the greatest effect, are just a few of the ways AI helps us make more efficient and cost-effective decisions.
It should come as no surprise that the “High Tech & Communications” sector leads in AI adoption and growth, given that AI was born there. What may be surprising, though, is that financial services sits right next to it; regardless, nearly every industry is looking to expand its capabilities.
When I decided to write about this topic I was very eager to learn about all the crazy new ways we are facilitating takeover by our robot overlords. Terminator and The Matrix aren’t that far out of our memories, right? After sitting down and doing some research, however, I discovered that we’ve pretty much hit peak AI.
This doesn’t mean that we won’t continue benefiting from the technology. After all, PwC threw a big number out there: we are going to reap $30 trillion in value from AI by 2030. Who doesn’t like $30 trillion?
The slightly less exciting fact is that most of the AI technology that will facilitate this has already been “invented”; at this point it’s really a case of tailoring combinations of algorithms to particular situations and refining some of the more complex models. Gartner does an excellent job of displaying this visually for us:
If you’ve kept current on recent news, you might notice that as I write this in April of 2019, we’re starting to enter the “Trough of Disillusionment” phase, but that isn’t necessarily a bad thing. What is important is that the ball is already rolling on numerous systems that will allow us to build the next wave of technological innovations.
These range from flying autonomous vehicles (like in the Jetsons, maybe), to enhancements of our biological capabilities, to widespread, distributed, off-device graphics rendering that, enabled by 5G, permits insanely life-like audiovisual experiences anywhere we go.
Less exhilarating, surely, is discussing the adoption, practical, and societal implications we need to work through to ensure AI can scale to the masses. Klaus Schwab, in his 2016 book, The Fourth Industrial Revolution, wrote, “Shaping the fourth industrial revolution to ensure that it is empowering and human-centered, rather than divisive and dehumanizing, is not a task for any single stakeholder or sector or for any one region, industry, or culture. The fundamental and global nature of this revolution means it will affect and be influenced by all countries, economies, sectors, and people.”
Mr. Schwab highlights the radical scope of the task, which may be even more difficult to realize than those flying cars mentioned earlier. It requires being fully aware of the aspects of AI that, left unchecked, could inflict damage rather than help us. One of the greatest threats is the quality of data: if you feed incomplete or biased information into an algorithm, it will only get you to bad results faster.
Generative Adversarial Networks (GANs) are a key aid in preventing this from occurring. To keep it simple: a “generator” creates fake data instances in an attempt to trick a primary “discriminator” algorithm. Because the two models are optimized for opposite goals, they form a double feedback loop, which yields a better understanding of the capabilities and flaws of our primary model. GANs provide an additional degree of assurance, but they do not eliminate the need for humans to validate our data sources for bias and completeness.
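To make that double feedback loop concrete, here is a minimal toy sketch in plain Python (not any particular library): the generator is a one-line model that learns to imitate a 1-D Gaussian, and the discriminator is a tiny logistic classifier trying to tell real samples from fakes. Every name, parameter, and the target distribution here are illustrative assumptions, not a production GAN.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

REAL_MU, REAL_SIGMA = 4.0, 1.25  # the "real data" distribution to imitate

# Generator G(z) = g_mu + g_sigma * z, initialised far from the real data.
g_mu, g_sigma = 0.0, 1.0
# Discriminator D(x) = sigmoid(d_w * x + d_b), a tiny logistic classifier.
d_w, d_b = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(1500):
    # --- Discriminator update: push D(real) toward 1 and D(fake) toward 0 ---
    x_real = [random.gauss(REAL_MU, REAL_SIGMA) for _ in range(batch)]
    x_fake = [g_mu + g_sigma * random.gauss(0.0, 1.0) for _ in range(batch)]
    grad_w = grad_b = 0.0
    for xr, xf in zip(x_real, x_fake):
        dr, df = sigmoid(d_w * xr + d_b), sigmoid(d_w * xf + d_b)
        grad_w += (-(1 - dr) * xr + df * xf) / batch  # d(BCE loss)/d(w)
        grad_b += (-(1 - dr) + df) / batch            # d(BCE loss)/d(b)
    d_w -= lr * grad_w
    d_b -= lr * grad_b

    # --- Generator update: push D(fake) toward 1 (fool the discriminator) ---
    zs = [random.gauss(0.0, 1.0) for _ in range(batch)]
    grad_mu = grad_sigma = 0.0
    for z in zs:
        df = sigmoid(d_w * (g_mu + g_sigma * z) + d_b)
        grad_mu += -(1 - df) * d_w / batch
        grad_sigma += -(1 - df) * d_w * z / batch
    g_mu -= lr * grad_mu
    g_sigma -= lr * grad_sigma

print(f"generator mean after training: {g_mu:.2f} (target {REAL_MU})")
```

The point of the sketch is the alternation: each side's improvement is the other side's training signal, which is exactly the double feedback loop described above. It also shows the caveat: nothing in this loop inspects whether the "real" data itself was biased to begin with.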
Source: Thalles Silva
Practical implications revolve around how we accustom the general population to using AI in everyday life. You can’t just expect someone to walk into a smart office and instantly know how to operate all of its applications. Simplicity and generally agreed-upon principles will be vital to ensuring we all know how to derive the most benefit from what this new technology has to offer.
On a societal level, we also mustn’t neglect the need for human involvement in such systems. Although computers far exceed us in sheer calculating power and don’t require sleep, the human mind possesses unique characteristics that may never be replicable by computers. For this reason, we need a broad framework to govern what degrees of autonomy are permissible. Human consciousness still has an important part to play in assuring AI is used to its fullest capabilities.
All in all, the future looks extremely bright, with a solid foundation already in place for enabling what we once thought possible only in science fiction. Proactive steps will ultimately be needed to facilitate the deployment of sophisticated mass-application technologies. Whether this manifests through governments working together voluntarily (less likely), profit maximization in the private sector (more likely), or some other combination remains to be seen. We can all do our part, however, by becoming more knowledgeable on the subject matter and helping shape our collective future.