If AI isn’t the most hyped technology of the 21st century, it’s certainly right up there with earlier manias for mobile, virtual reality, the internet of things, and big data. Companies large and small feel pressure to claim they use AI in some key way to drive their business. But does AI deserve this level of hype? On one end of the spectrum are the doomsayers (including heavyweights like Stephen Hawking and Elon Musk) who see the technology as an existential threat to the future of humanity. At the other end are those who see AI as the breakthrough that could solve many of the world’s most intractable problems. Visionary Ray Kurzweil believes AI will soon enhance virtually everyone’s mental capabilities. Musk’s own startup, Neuralink, is reportedly developing a brain-to-machine interface that could improve memory or allow more direct interfacing with computing devices.
Reading recent media coverage of AI induces forecasting whiplash, making it difficult to determine what is real and where the technology is headed. For example, here are two recent article titles from the same week: “AI image recognition fooled by single pixel change” (due to “adversarial images”) and “AI and Machine Learning to Revolutionize U.S. Intelligence Community.” We must hope the Pentagon can overcome adversarial images so that it can correctly differentiate a stealth bomber from a dog.
This discrepancy points to a broader truth about AI. While the technology has come far, the singularity — the point in time at which machine intelligence surpasses human intelligence — remains a long way off. Kurzweil believes this will occur in the next 30 years. Similarly, Softbank CEO Masayoshi Son predicts machines will surpass human intelligence by 2047.
They’re not alone. During a panel held earlier this year, renowned AI experts (including both Kurzweil and Musk) were asked whether it would be possible for machines to develop superintelligence, the permanent state beyond the singularity where computer intelligence surpasses that of the brightest humans. The answer was unanimous: Yes.
Take X and add AI
In the meantime, companies are rushing to incorporate AI technologies into their enterprise or consumer apps. Futurist Kevin Kelly famously asserted that the business plans of the next 10,000 startups are “Take X and add AI.” That certainly seems to be playing out. A PwC and CB Insights report for Q3 2017 noted that funding exceeded $1 billion for the third straight quarter across 91 deals for AI companies in the U.S.
It’s not just a U.S. phenomenon, either. Kai-Fu Lee, chairman and chief executive officer of Sinovation Ventures, an early-stage venture fund in China, recently claimed at MIT’s AI and the Future of Work event that “in the age of AI, a U.S.-China duopoly is not just inevitable, it has already arrived.” The Chinese government sees AI as an imperative and plans to become the world leader in AI by 2030. And it’s not only startups exploring the technology. Analyst firm Gartner reported recently that inquiries to the firm regarding AI — usually from established corporations — had grown 500 percent over the past year.
Trillion-dollar industry or fantasy bubble?
Venture capital site AngelList currently lists more than 3,500 AI startups with an average valuation of nearly $5 million. That marks a 75 percent increase in the number of AI startups on the site since earlier this year. Analyst firm CB Insights notes that corporate giants like Google, IBM, Yahoo, Intel, Apple, and Salesforce are competing in the race to acquire private AI companies, with Ford, Samsung, GE, and Uber emerging as new entrants. Top talent is a rare commodity because Facebook, Google, and a handful of other tech companies have cornered the market on experts.
Despite the current market frenzy, AI is still in its infancy. For example, AI systems require a tremendous amount of data to properly understand what is being viewed, through a process known as training the application. People do not need to look at thousands of images of cats to identify a cat. AI systems today are nowhere close to replicating how the human mind learns.
Nevertheless, AI is growing in sophistication and range of applications. Financial services companies use it to block potential fraud: given a large sample of purchases labeled as fraudulent or genuine, an algorithm can learn to flag unusual activity in current transactions. Many AI applications are also emerging in health care, the leading industry for AI investment deals. Researchers at the University of Nottingham, for example, created an AI application that scanned routine medical data to predict which patients would have strokes or heart attacks, and the system proved more accurate than doctors using standard techniques.
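To make the fraud-detection idea concrete, here is a minimal sketch of learning from labeled transactions. It uses a toy nearest-centroid classifier on invented (amount, hour-of-day) pairs; the data, feature choice, and technique are illustrative simplifications, not how any real financial institution’s fraud models work.

```python
import math

# Illustrative labeled transactions: (amount in dollars, hour of day), label.
# All values are invented for the sketch, not drawn from any real dataset.
TRAINING_DATA = [
    ((25.0, 13), "genuine"),
    ((40.0, 11), "genuine"),
    ((32.0, 15), "genuine"),
    ((18.0, 9), "genuine"),
    ((980.0, 3), "fraud"),
    ((1200.0, 2), "fraud"),
    ((850.0, 4), "fraud"),
]

def train(samples):
    """Compute the mean feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(v / counts[label] for v in acc)
            for label, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is nearest in Euclidean distance."""
    return min(centroids,
               key=lambda label: math.dist(features, centroids[label]))

centroids = train(TRAINING_DATA)
print(classify(centroids, (1100.0, 3)))  # large 3 a.m. purchase -> "fraud"
print(classify(centroids, (30.0, 12)))   # small midday purchase -> "genuine"
```

Production systems replace the toy centroid rule with far richer features and models, but the workflow is the same: fit on purchases already labeled fraudulent or genuine, then score each new transaction as it arrives.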
While these and other applications are impressive and deliver immediate benefits, they are not exactly “intelligent.” AI systems today are very good at pattern recognition and probability calculation, along with natural language processing, speech recognition, and computer vision. But a considerable amount of work remains before the singularity is achieved.
Future AI advances will require further substantial gains in computing, in both performance and energy efficiency. AMD and Nvidia are both driving this forward for AI, but for how long? Moore’s Law, the shorthand label for the doubling in transistor density every 18-24 months, is slowing. There are perhaps two additional cycles remaining before the current silicon-based digital computing paradigm runs into very hard physical limits. Performance beyond that could require breakthroughs that today remain mostly theoretical, such as quantum computing.
Early signs of this next advance are already appearing. Volkswagen recently announced it will use a quantum computer from Google to improve its AI and machine learning efforts. Though some dismiss Google’s quantum computer as little more than a science project at this stage, the hope is that the collaboration will eventually match the accuracy of current AI while training on far smaller data sets than the vast image libraries required today.
Is the AI hype justified? Absolutely. In the near term, AI technologies will lead to dramatic and possibly disruptive applications and ROI. However, substantial breakthroughs in AI theory and algorithms and in computing hardware and software must still occur before artificial intelligence becomes superintelligence.
Gary Grossman is a futurist and public relations and communications marketing executive with Edelman.