What do you think your field will look like in 100 years? Speculating about the world a century from now may be too challenging, so what if instead a community took it upon itself to periodically assess its progress and potential nearer-term futures over time? How might such reflections influence the rate of progress, the types of problems that the field focuses on, the public perception of the work, or the ability to anticipate and address thorny ethical or policy questions?
Titled “Artificial Intelligence and Life in 2030,” this year-long investigation is the first product of the 100 Year Study on Artificial Intelligence (AI100), an ongoing project hosted by Stanford to inform societal deliberation and provide guidance on the ethical development of smart software, sensors and machines.
This study was launched through the inspiration of Eric Horvitz, a past president of the Association for the Advancement of Artificial Intelligence (AAAI).
The inaugural study panel, of which I was a member, spanned an impressive breadth of areas, from machine learning to language, from robotics to healthcare, and from technology to economics and law. Each of us was challenged to provide our perspective on a 30-year span of technological progress, from 2000 to the present and then forward to 2030, in an area with which we were familiar. From this, we tried to elicit ideas, lessons, and predictions for what we might see in the future.
In doing so, it is striking how few new and disruptive ideas or changes emerge over such a span. At the same time, it is hard to anticipate which existing ideas will grow or converge to have an impact. For example, autonomous driving research has been progressing steadily for 30 years, and today's automated vehicles are not all that different from those of fifteen years ago. Yet the convergence of several factors led to an explosion of interest and progress that surprised even experts in the field. A similar story can be told of the convergence of cloud computing, deep learning, and the enormous data archives accumulated via the Web, which together have transformed fields such as computer vision and language understanding.
What fields stand on the brink of similar convergence and related disruptive progress? Could it be healthcare, now that electronic health records, machine learning, genomics, and precision medicine are aligning to create more powerful evidence-driven medicine? Might it be technologies for an aging society, driven by powerful economic and societal trends? Or perhaps it will be enhancements in urban transportation networks that reduce traffic congestion and increase access for all citizens?
And, perhaps as the study progresses, it will be forced to wrestle with some of the more difficult societal, ethical, and philosophical implications of future, more powerful AI technologies. However, to quote Isaac Asimov, writing in the preface to his first book, "Minds of Steel," about the then-prevalent fears surrounding robotics:
Even as a youngster, though, I could not bring myself to believe that if knowledge presented danger, the solution was ignorance. To me, it always seemed that the solution had to be wisdom. You did not refuse to look at danger, rather you learned how to handle it safely.
“Until now most of what is known about AI comes from science fiction books and movies,” said Peter Stone, who chaired the study panel. “This study provides a realistic foundation to discuss how AI technologies are likely to affect society.”
Of course, this is an uncontrolled experiment: we can never really know how these reports affect the future choices of AI researchers. Yet in the coming decades, as we look back on this first report and those of subsequent panels, I am convinced that this unique activity will serve to bring wisdom to the field, and through it, benefits to society as a whole.