In January, we saw a wide range of predictions on how AI would affect our lives in 2018: people continuing to adopt virtual assistants (Alexa, Siri, Google Assistant, and more), increased use of facial recognition for security, and better, more personalized media recommendations for the average person. At the same time, we worried about how AI would be used in healthcare, and whether decisions made using AI were always fair, or even right.
In February, we started defining AI for the public. This was likely in anticipation of increased adoption of AI by the average person, as we saw in January. We also started thinking about potential malicious uses and global impacts, especially in China.
In March, we took a look at the people creating AI and how their biases might affect outcomes. We also started thinking about AI policy in the era of fake news: how did legislation fit into development?
April: Elon Musk expressed concerns about AI ruling mankind forever in the documentary Do You Trust This Computer? Europe began developing regulations to boost investment in AI and to set ethical guidelines for developers. The AI community pushed back on South Korea after it announced that it was developing killer AI robots. Finally, Apple hired a new chief of AI.
May: The White House created a Select Committee on Artificial Intelligence. Google decided not to renew Project Maven, an AI partnership with the Department of Defense, after employees protested en masse. On a positive note, scientists at the Allen Institute for Cell Science in Seattle created a 3D model of the inside of a living human cell.
June: Following the end of Project Maven, Google pledged not to develop AI for weaponry, reigniting a debate in Silicon Valley over the role of companies in war. MIT researchers created a "psychopathic" AI bot, trained on the darkest depths of Reddit, to caption inkblots. Wrapping up the first half of the year, Google researchers developed a program that predicted risk of death during hospital stays with 95% accuracy.
July: Tech leaders around the world took a pledge against developing autonomous killer weapons. In the aftermath of the Google controversies, Google released Duplex, an AI system that makes phone calls on a user's behalf, to mixed reactions.
August: August was a bit slower. Samsung announced plans to spend $22B on AI over the next three years, and Google gave $1M to fund summer camps aimed at bringing more women and underrepresented minorities into AI, in an effort to combat bias.
September: September focused on AI and money. Healthcare AI was projected to surpass $34B by 2025, and Microsoft pledged $40M toward AI for humanitarian issues. Burger King released an ad marketed as AI-generated that wasn't actually created using AI. And Joy Buolamwini, founder of the Algorithmic Justice League, continued to speak out about combating bias in AI.
October: Google pledged $25M to the AI Global Impact Challenge, run by its AI for Social Good initiative. A couple of months after Google told the Pentagon that it would not provide AI for military uses, Microsoft announced that it would sell AI (and any other tech) to the Pentagon for "whatever advanced technologies they needed to build a strong defense". MIT announced a $1B investment to create a new college focused on computing and AI. Lastly, a piece of art made using AI sold for $432K at a Christie's auction, and researchers in Australia developed an algorithm to identify galaxies in space.
November: In November, Waycare improved highway safety in Las Vegas by using predictive AI to understand road conditions in real time, helping traffic agencies take preventative action. China debuted its first AI news anchor at the World Internet Conference. A Florida hospital used AI to monitor blood loss to prevent deaths during childbirth. And indie musician Grimes, who you may know as Elon Musk's girlfriend, released a song called "We Appreciate Power" written from the perspective of a pro-AI propaganda group.
December: And now it’s December. Most of the news has focused on what AI will look like in 2019, from healthcare, to virtual assistants, to job markets, to fairness. I’ve seen a lot of predictions, and I have a few of my own. But that’s for the next video, where we’ll talk about AI in 2019. What did you think of AI in 2018? What are your predictions for 2019?