Freelance Data Scientist, PhD Mathematics, Rhodes Scholar. I help you succeed with nearly all things data and AI.
Published Sep 03, 2020
Companies engaging with artificial intelligence (AI) occupy an unenviable position in the trajectory of science and technology: they sit in the middle of an AI revolution that promises to disrupt the way businesses operate just as Galileo’s revolution in the 17th century upended the earth-centered view of the universe. It is only a matter of years before the human-level or better performance achieved by deep learning on vision and text-based tasks expands to most other human tasks, resulting in automation on an unprecedented scale.
Source: An illustration of the Moon from Sidereus Nuncius, published in Venice, 1610
Or not. Perhaps instead the warning signs—such as fatalities involving autonomous driving [1, 2, 3, 4], the proliferation of “adversarial examples” showing best-in-class deep learning models mistaking fox squirrels for sea lions and dragonflies for manhole covers with 99% confidence [5], and concerns that China’s deployed AI may not even pay for the computer chips it runs on [6]—show that we are in an AI “winter” rather than a revolution [7, 8]. Perhaps the latest headlines about AI successes in natural language processing [9, 10, 11], protein folding [12], and the video game Dota 2 [13] are just final parlor tricks from a circus that is already folding up its tents.
This uncertainty makes it difficult for companies to decide how to engage with AI, whether investing externally in AI-powered software, services or even companies, or investing internally in teams to develop AI assets. To help businesses chart a course, a critical look at earlier scientific revolutions provides valuable guidance.
In his classic The Structure of Scientific Revolutions, Thomas Kuhn coined the phrase “paradigm shift” to distinguish between two regimes of scientific development: normal and revolutionary. In normal science, the scientific community largely agrees on priorities and questions, and focuses on improving existing models. Revolutionary science, by contrast, throws the existing rules and metrics out the window. In business terms, a decision in the normal regime might be choosing new procurement software, where established cost-benefit analyses and service-level agreements help select one among many options. A revolutionary business decision might be choosing an AI-powered insurance claim fraud solution, where months of time and many tens of thousands of dollars can be spent on a “proof of concept” before knowing whether an AI solution even has a chance of working.
Kuhn’s views on scientific revolutions provide businesses with two main lessons: first, in a revolutionary regime the revolutionaries and their skeptics no longer share the same questions or criteria of success, so it matters greatly which skeptics you listen to; second, revolutions are not decided by a single crucial experiment, but harden or dissolve gradually through more mundane factors.
An example of the first point can be seen in Galileo’s arguments for a sun-centered model of the universe against the prevailing Aristotelian earth-centered model. Galileo’s skeptics came in two kinds: philosophical and technical. The philosophical skeptics are better known, with objections to Galileo’s theories coming from the Catholic Church as well as from Aristotelian philosophers. For Galileo, the elegance and simplicity of the sun-centered model of planetary motion mattered more than the skeptics’ questions.
While the philosophical skeptics make for good headlines—like the 2017 AI Twitter war between Elon Musk and Mark Zuckerberg [14]—for us buyers of AI, it is the technical skeptics who deserve attention, for they are the ones concerned with solutions to concrete problems. For Galileo, the chief technical skeptic was the astronomer Giovanni Battista Riccioli, who published a treatise in 1651 detailing numerous arguments against Galileo’s sun-centered universe. Two of them, one involving estimates of the size of stars based on telescope observations, the other about projectiles on a rotating Earth, remained unsolved research problems [15, 16]. So if you were in the star-measuring or cannon-firing business in the seventeenth century, you would have done well to take Galileo with a grain of salt and stick to Aristotelian science.
For AI revolutionaries, the central question is how to capitalize on their deep learning (or other AI) assets, whereas most businesses are more interested in solving concrete pain points with AI. For example, during vendor selection for a proof of concept at Allianz Global Corporate and Specialty Insurance, I asked an expert from a big tech company whether he thought our business problem and available data were well suited to their deep learning proposal. He seemed not to understand the question, and instead kept discussing the new research developments he would try.
As a business corollary of Kuhn’s first point, if your business need matches one of the clear successes of AI, such as counting parked cars at a shopping center from satellite imagery, then you need not worry about AI revolution or winter, and can focus instead on estimating your return on investment. If it does not, say because you want to estimate natural catastrophe damage to industrial sites with deep learning and satellite imagery, then tune out the AI philosophers and seek out the technical critics, or you may end up funding a research problem.
Kuhn’s second point can be seen both in scientific revolutions, like Galileo’s, and in the several AI revolutions and winters since John McCarthy’s watershed 1956 AI workshop (see [17] for a succinct and lively account of AI history). While historians point in retrospect to episodes like Galileo observing Jupiter’s moons, in real time it is often mundane factors like generational change (Kuhn) or repeated failure to pay the bills (AI expert systems in the 80s, IBM Watson’s struggles to sell its cancer prediction AI [18], and Turing Award winner Yoshua Bengio’s company Element AI being sold off to ServiceNow [19]) that determine whether these triumphs harden into a revolution or not. As Alan Chalmers puts it, the Galilean revolution “did not take place at the drop of a hat or two from the Leaning Tower of Pisa.” [20]
Hence we should not expect a watershed moment or conclusive proof of an AI revolution, nor should we expect a knock-down counterexample to make business decisions for us. Here and now, businesses engaging with AI need help dealing with this uncertainty. We need experts who can judge which research problems are good ones, and who ensure that the important business questions are prioritized independent of buzz value. Keeping up with blog posts and deep learning headlines helps generate enthusiasm and spur new ideas, but hard-nosed business decisions require technically savvy skeptics. For behind the sci-fi predictions and stock images of circuit-wired brains, the real reason to buy AI remains unchanged: to generate sustainable value for the company and its customers, whether revolutionary or otherwise.
Author biography
Dr. Paul Larsen is the head of Data Analytics Practices at Allianz Insurance Headquarters in Munich, Germany. He has a doctorate in mathematics from Humboldt-Universität zu Berlin, a Master’s in the History of Science and Technology from Oxford University as a Rhodes Scholar, and undergraduate degrees in mathematics and physics. He has published research articles in machine learning, risk modeling, algebraic geometry and computational physics.
[1] Mark Matousek. A Tesla Model X caught on fire after crashing into a highway barrier – and Tesla has a theory about why the crash was so bad. Business Insider, https://www.businessinsider.de/tesla-fatal-model-x-crash-response-2018-3?r=US&IR=T, March 2018.
[2] Associated Press. Tesla Model S crash into California pond killed driver, police say. USA Today, https://eu.usatoday.com/story/tech/news/2018/05/21/tesla-model-s-crash-into-california-pond-killed-driver-police-say/630798002/, May 2018.
[3] Chris Isidore. Family of Apple engineer sues Tesla, saying Autopilot caused his fatal crash. CNN Business, https://edition.cnn.com/2019/05/02/tech/telsa-autopilot-crash-suit/index.html, May 2019.
[4] David Shepardson and Heather Somerville. Uber not criminally liable in fatal 2018 Arizona self-driving crash: prosecutors. Reuters, https://www.reuters.com/article/us-uber-crash-autonomous/uber-not-criminally-liable-in-fatal-2018-arizona-self-driving-crash-prosecutors-idUSKCN, March 2019.
[5] Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. arXiv preprint arXiv:1907.07174, 2019.
[6] Louise Lucas. Cold water hits China’s AI industry. Financial Times, https://www.ft.com/content/973bfc08-a15f-11e9-a282-2df48f366f7d, July 2019.
[7] Kathleen Walch. Are We Headed For Another AI Winter? Forbes, https://www.forbes.com/sites/cognitiveworld/2019/10/20/are-we-heading-for-another-ai-winter-soon/#6a015d4756d6, October 2019.
[8] Filip Piekniewski. AI winter is well on its way. https://blog.piekniewski.info/2018/05/28/ai-winter-is-well-on-its-way/, May 2018.
[9] Steve Guggenheimer. Microsoft’s MT-DNN Achieves Human Performance Estimate on General Language Understanding Evaluation (GLUE) Benchmark. Microsoft Blog, https://blogs.msdn.microsoft.com/stevengu/2019/06/20/microsoft-achieves-human-performance-estimate-on-glue-benchmark/, June 2019.
[10] Myle Ott, Michael Auli, Nathan Ng, Sergey Edunov, Omer Levy, Amanpreet Singh, and Ves Stoyanov. New advances in natural language processing to better connect people. Facebook AI Blog, https://ai.facebook.com/blog/new-advances-in-natural-language-processing-to-better-connect-people/, August 2019.
[11] John Thornhill. The astonishingly good but predictably bad AI program. Financial Times, https://www.ft.com/content/51f1bb71-ce93-4529-9486-fec96ab3dc4d, August 2020.
[12] Robert F. Service. Google’s DeepMind aces protein folding. Science, https://www.sciencemag.org/news/2018/12/google-s-deepmind-aces-protein-folding, December 2018.
[13] Nick Statt. OpenAI’s Dota 2 AI steamrolls world champion e-sports team with back-to-back victories. The Verge, https://www.theverge.com/2019/4/13/18309459/openai-five-dota-2-finals-ai-bot-competition-og-e-sports-the-international-champion, April 2019.
[14] Elon Musk and Mark Zuckerberg. Elon Musk - Mark Zuckerberg tweets. Twitter, https://twitter.com/elonmusk/status/889743782387761152?lang=en, July 2017.
[15] Christopher M. Graney. Teaching Galileo? Get to know Riccioli! – What a forgotten Italian astronomer can teach students about how science works. arXiv preprint arXiv:1107.3483, 2011.
[16] Macgregor Campbell. Coriolis-like effect found 184 years before Coriolis. New Scientist, https://www.newscientist.com/article/dn19979-coriolis-like-effect-found-184-years-before-coriolis/, January 2011.
[17] Melanie Mitchell. Artificial intelligence: a guide for thinking humans. Penguin UK, 2019.
[18] Angela Chen. IBM’s Watson gave unsafe recommendations for treating cancer. The Verge, https://www.theverge.com/2018/7/26/17619382/ibms-watson-cancer-ai-healthcare-science/, July 2018.
[19] Sean Silcoff. Element AI hands out pink slips hours after announcement of sale to U.S.-based ServiceNow. The Globe and Mail, https://www.theglobeandmail.com/business/article-once-touted-as-a-technology-world-beater-montreals-element-ai-sells/, November 2020.
[20] Alan F Chalmers. What is this thing called science? Hackett Publishing, 2013.