Why Artificial Intelligence? And Why Now?
The announcement a few weeks ago that OpenAI had just received a $1 investment from Microsoft made tsunami-sized waves in the AI industry. It wasn't the one dollar that got everyone's attention, but rather the nine zeroes following it. By the way, that's a whole round billion, for those of us confused by very large numbers. The gigantic investment now makes OpenAI and Microsoft partners in developing Artificial General Intelligence (AGI), which can be likened to a clone of the human brain, fitted inside a computer.
While AGI attracts all the attention, what is already here is AI in specific, tactical use cases: helping real companies access their own data, and public data, to give themselves competitive advantages. This is obviously not AGI, but rather humans using machine learning to help them make decisions smarter and faster.
If the math underlying AI has been around since before man went to the moon, why did AI suddenly become such a flavor in the years leading up to 2020? For that, a little background is in order.
Origins of AI
Highlighting just a few steps along the way: we start back in 1950, with Alan Turing asking, "Can machines think?" In 1958, John McCarthy, who defined the field devoted to the development of intelligent machines, invented LISP, the standard AI programming language. It is still used in voice-recognition technology, including Siri (who knew she was in her sixties?), and in services like airline scheduling and fraud detection. Dr. McCarthy spent his career at Stanford mobilizing math for machines whose time had not yet come.
A major leap arrived a couple of decades later through Geoffrey Hinton of the University of Toronto. His germinal 1986 paper, "Learning Representations by Back-Propagating Errors," laid the groundwork for modern AI. The paper, only four pages long, explained back-propagation for networks of neuron-like units: something like the chain rule with a bow tie on, repeatedly adjusting the weights of the connections in the network as it 'trains'.
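To make the idea concrete, here is a minimal sketch of that weight-adjusting loop for a single sigmoid "neuron-like unit," in plain Python with no ML libraries. The toy data and learning rate are illustrative assumptions, not anything from Hinton's paper; the paper's real contribution was doing this for whole networks of such units.

```python
import math

def sigmoid(x):
    """Squashing function used by the neuron-like unit."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical toy training set: inputs near 0 should map to 0,
# inputs near 1 should map to 1.
data = [(0.0, 0.0), (0.25, 0.0), (0.75, 1.0), (1.0, 1.0)]

w, b = 0.0, 0.0   # connection weight and bias, to be trained
lr = 5.0          # learning rate (assumed value for this toy problem)

for epoch in range(2000):
    for x, target in data:
        y = sigmoid(w * x + b)   # forward pass: the unit's prediction
        error = y - target       # how far off the prediction is
        # Backward pass: the chain rule gives the gradient of the squared
        # error with respect to each weight; nudge weights down the gradient.
        grad = error * y * (1.0 - y)
        w -= lr * grad * x
        b -= lr * grad

# After training, the unit separates low inputs from high inputs.
low = sigmoid(w * 0.1 + b)    # should end up below 0.5
high = sigmoid(w * 0.9 + b)   # should end up above 0.5
```

The same recipe, applied layer by layer through a deep network, is what 'back-propagating errors' means.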
Commercialization of AI
We've now come to the point where Oral-B advertises an AI toothbrush. How did AI become available to enterprises over the last few decades?
Neural networks have been academically interesting all this time, but to make them practically usable, industry had to slowly develop an infrastructure of computing power that could catch up with the knowledge. This is not unusual: it mostly takes academic research about twenty years to go from being well understood to being practically implementable.
Over time, investment boomed as ever more powerful computing allowed applied engineering to increase the speed and memory of machines. Tech behemoths, with their steady supply of user data, began to use AI to solve behind-the-scenes problems: data mining, industrial robotics, speech recognition, and even Google's very famous search engine, which over time has come to rely heavily on AI.
GPUs are now readily available on almost every laptop, providing parallel-processing capacity to anyone who wants it. The cloud services (Microsoft, AWS, Google Cloud, and IBM) all now offer their own ML capabilities.
AI's victories over humans at chess, at Go, and even at Jeopardy! (answering questions about human culture and history) between 1997 and 2015 captured broad attention from the general public, but it is this last decade that has seen the marriage of math and computing finally produce results. Now, around ten years into the commercialization of AI, enough people who understand and appreciate its benefits hold positions in the senior leadership ranks of the world's companies.
As smaller, non-tech companies organize and access their data in order to use AI, gaining insight into their customers, forecasting their prices (prediction), and responding ever faster to customer needs (classification), we will see ML keep its promise.
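The two tasks named above can be sketched in a few lines of plain Python. All of the data here is a hypothetical illustration, not real business data: prediction is shown as a least-squares price estimate, and classification as a 1-nearest-neighbor lookup.

```python
def linear_fit(points):
    """Least-squares fit of y = a*x + b over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Prediction: past (units_ordered, unit_price) pairs -> estimate a new price.
history = [(10, 105.0), (20, 95.0), (30, 85.0), (40, 75.0)]
a, b = linear_fit(history)
estimated_price = a * 25 + b   # = 90.0 on this perfectly linear toy data

def nearest_label(query, examples):
    """1-nearest-neighbor: copy the label of the closest known example."""
    return min(examples, key=lambda e: abs(e[0] - query))[1]

# Classification: route a support ticket using message length as a toy feature.
tickets = [(12, "billing"), (15, "billing"), (80, "technical"), (95, "technical")]
label = nearest_label(90, tickets)   # closest example is (95, "technical")
```

Real deployments use far richer features and models, but the shape of the work is the same: learn from past examples, then estimate a number or assign a category for a new case.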
Though AGI may still be a ways away, companies looking to stay competitive in their territories will need to access ever larger amounts of data, with ever zippier computers, armed with advanced machine-learning techniques. Whether they choose to do this on their own or bring in specialists, not moving forward with AI will leave them irrelevant faster than they think.