Op-Ed
Creating a Meaningful Definition of AI
Artificial intelligence is a misleading name, as we can't even define natural intelligence.
There are many competing theories and definitions, but none have stuck, as we haven’t solidly defined the edges of sentience, consciousness, and self-awareness.
AI can be impressive: it exceeds human capabilities in several domains, and it will only improve. Nonetheless, most of today's AI is little more than glorified statistics.
Branches of AI
The most common form of AI is Machine Learning (ML): algorithms designed to perform tasks without being explicitly programmed for them. Within ML you also have Neural Networks, most commonly used for supervised pattern recognition, made of interconnected "neurons" whose connections ("synapses") carry adjustable weights. A neural network with many layers, learning from large amounts of data, is called Deep Learning.
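To make that concrete, here is a minimal sketch of the idea, a toy example of my own rather than anything from a particular library: a neural network is just layers of weighted connections whose weights are adjusted by gradient descent to fit data, i.e., statistical curve-fitting. The XOR dataset, layer sizes, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: inputs and labels for the XOR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 "neurons": the interconnections are just weight matrices.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: weighted sums passed through a nonlinearity.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error (plain calculus).
    dp = (p - y) * p * (1 - p)
    dh = (dp @ W2.T) * h * (1 - h)

    # Gradient-descent updates of the weights.
    W2 -= 0.5 * h.T @ dp
    b2 -= 0.5 * dp.sum(axis=0)
    W1 -= 0.5 * X.T @ dh
    b1 -= 0.5 * dh.sum(axis=0)

print(np.round(p, 2))  # typically converges toward [[0], [1], [1], [0]]
```

Everything "learned" here lives in a handful of numbers tuned to minimize an error statistic, which is the sense in which even deep learning remains statistics at heart.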
However, the goalposts keep changing. The first chatbots, little more than if-else statements, were considered AI. Nowadays, some consider ML, which dynamically optimizes statistical parameters, a form of artificial intelligence, while many don't. Since neural networks are just a more complex flavor of ML, they're also a statistical tool, even if the results can be far more impressive than what you'll find in SPSS (edit: it turns out you can use neural networks in SPSS).
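As an illustration of how modest those early "AI" chatbots were, here is a sketch of an ELIZA-style bot reduced to hard-coded if-else rules; the keywords and canned replies are invented for this example.

```python
def chatbot(message: str) -> str:
    # Conditional logic that was once marketed as artificial intelligence.
    text = message.lower()
    if "hello" in text or "hi" in text:
        return "Hello! How are you feeling today?"
    elif "sad" in text or "unhappy" in text:
        return "Why do you think you feel that way?"
    elif "mother" in text or "father" in text:
        return "Tell me more about your family."
    else:
        return "Please, go on."

print(chatbot("Hi there"))          # Hello! How are you feeling today?
print(chatbot("I feel sad today"))  # Why do you think you feel that way?
```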
Artificial General Intelligence or Synthetic Intelligence
AI has a strong grip on culture, taking on almost mythical qualities in the imagination of many. Perhaps we should redefine the goalposts, as "AGI," or artificial general intelligence, still implies that a human-like machine intelligence would be "artificial," an imitation.
Some prefer "SI," or synthetic intelligence, instead, but that term only works as a replacement for AGI, not for the levels of AI we see today.
Meeting Expectations
We need a term whose real-world applications match the expectations it creates, if only so that the average person knows what it means (good user-experience design).
Since today's systems don't exhibit self-governance or novelty, it would be technically accurate to refer to them as "cognitive computing": increasing the ability of computing machines to emulate, or surpass, human solution-finding through logic and calculation (computing).
Still, there's some hype connotation even in that term, so why not just stick with computing or statistics?
This article was written by Frederik Bussler, former CEO at bitgrit. Join our data scientist community or our Telegram for insights and opportunities in data science.