While Machine Learning and Artificial Intelligence in general are often discussed in terms of their influence on consumers, businesses and the tech we use, there is far less discussion of the effect they will have on the programming languages we use today, and on the diversity of software engineering more generally.
Languages belonging to certain paradigms, such as functional programming with Scala, or the multi-paradigm F# language, are increasingly touted as potential game changers, especially for their ability to produce clean and concise code (which, in the end, influences the bottom line). However, there is a possibility that AI — one of the largest (if not the largest) influences on businesses, economies and societies in the 21st century — could propel one lucky programming language to stratospheric levels, where Stack Overflow surveys will simply become irrelevant.
ML has long been seen as expensive, not least because it is GPU-heavy. The Google I/O release of TensorFlow.js in 2018 took a dramatic step toward easing this constraint by enabling client-side ML, which may turn both ML and one specific language on their heads.
Running client-side means less demand on servers and bodes well for serverless architectures, which will no longer necessarily have to accommodate the data that feeds the training and inference done in ML.
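To make the idea of client-side inference concrete, here is a minimal sketch — a hand-rolled toy, not TensorFlow.js itself. The model is nothing more than a pair of weights shipped to the client as plain data, and the prediction is computed entirely on the device, with no server round trip; all names here are illustrative.

```typescript
// A tiny pre-trained "model": weights for a linear classifier,
// delivered to the client as plain data -- no server involved at runtime.
const weights: number[] = [0.8, -0.5];
const bias = 0.1;

// Run inference entirely on the client device.
function predict(features: number[]): number {
  const score = features.reduce((sum, x, i) => sum + x * weights[i], bias);
  return score > 0 ? 1 : 0; // binary decision, computed locally
}

console.log(predict([1.0, 0.2])); // → 1, classified without contacting a server
```

A real deployment would download a serialized model once and then serve every prediction locally, which is precisely what shifts load off the server.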
“Andrej Karpathy [Director of AI at Tesla] refers to AI as ‘Software 2.0’: a revolution in how we approach the craft of software. I think we will see AI increasingly become a core component of software development.”
“AI generally exists in one of two phases: Training and Inference,” Kevin Scott explains. “Training is building the model, and Inference is using the model to make a prediction.
“Training will largely remain the domain of servers. It’s expensive, data-hungry, and demands enormous hardware. When it comes to Inference, however, I think we’re seeing a huge shift in that moving onto consumer devices, off of servers.
“Companies are investing huge sums of money in developing specialized chips for AI. For instance, Apple’s recently released NPU (on the iPhone XS) is a staggering 9x faster than last year’s chip. Today’s consumer devices have access to hardware that was, not even a decade ago, purely the province of servers. With increasingly powerful hardware, it becomes easier to run powerful models locally.
“You’ve also got the twin concerns of Privacy and Latency. In our increasingly data-conscious era, keeping consumer data on the consumer’s device — running the AI model locally — is compelling. And trying to achieve 60 fps performance is rarely possible on a typical mobile connection; anything demanding real time processing power — whether it’s video, audio, or text analysis — benefits from being computed locally.
“So I think more and more Inference will happen locally, on-device, wherever possible.”
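Scott’s two phases can be sketched in a few lines. Here a toy linear model is fitted by gradient descent — standing in for the expensive, server-side training phase — and the resulting weight is then frozen for cheap inference, the phase that can move on-device. This is an illustrative sketch, not any framework’s API.

```typescript
// Training phase (server-side): fit y = w*x to data by gradient descent.
function train(xs: number[], ys: number[], epochs = 200, lr = 0.01): number {
  let w = 0;
  for (let e = 0; e < epochs; e++) {
    for (let i = 0; i < xs.length; i++) {
      const err = w * xs[i] - ys[i]; // prediction error on one sample
      w -= lr * err * xs[i];         // gradient step
    }
  }
  return w; // the trained "model" is just this single weight
}

// Inference phase (can run on-device): apply the frozen model.
const model = train([1, 2, 3, 4], [2, 4, 6, 8]); // learns y = 2x
const infer = (x: number) => model * x;
console.log(infer(5)); // close to 10
```

The asymmetry is visible even at this scale: training loops over all the data many times, while inference is a single multiplication — exactly why the latter is the natural candidate to move onto consumer devices.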
“In most AI frameworks (for instance, TensorFlow) the framework serves as a translation layer between high level abstractions and the actual mathematical operations being farmed out to the GPU. Since the underlying mathematical layer is C++, this gives software developers the freedom to build the abstractions in whatever language they’re comfortable in.
“However, I see this less as a barrier and more of an opportunity to help build the next generation of AI tools.”
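The “translation layer” Scott describes can be sketched in miniature: a high-level API whose methods dispatch to low-level numeric kernels, which stand in here for the C++/GPU backend. The names are illustrative and are not TensorFlow’s actual API.

```typescript
// Low-level kernels: stand-ins for the C++/GPU math backend.
const kernels: Record<string, (a: number[], b: number[]) => number[]> = {
  add: (a, b) => a.map((x, i) => x + b[i]),
  mul: (a, b) => a.map((x, i) => x * b[i]),
};

// High-level abstraction: a Tensor whose methods translate into kernel calls.
class Tensor {
  constructor(public data: number[]) {}
  private run(op: string, other: Tensor): Tensor {
    return new Tensor(kernels[op](this.data, other.data)); // farm out the math
  }
  add(other: Tensor): Tensor { return this.run("add", other); }
  mul(other: Tensor): Tensor { return this.run("mul", other); }
}

// Developers work only with the high-level layer; the kernels could be
// reimplemented in C++ or on a GPU without changing this code at all.
const y = new Tensor([1, 2]).mul(new Tensor([3, 4])).add(new Tensor([10, 10]));
console.log(y.data); // → [13, 18]
```

Because only the kernel table touches raw numbers, the high-level layer could be rebuilt in any language — which is Scott’s point about developers choosing the abstraction language they prefer.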
“AI is going to increasingly become a part of the software developer’s toolbox, if not replace the toolbox entirely. And I think it will become a part of every language, everywhere, because software lacking intelligence will quickly become irrelevant.