Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by a machine, in contrast to the natural intelligence demonstrated by humans.
The study of AI covers capabilities ranging from a machine's ability to convincingly imitate human intelligence all the way to creativity and autonomy. To examine AI more closely, researchers break it into facets such as reasoning, natural language processing, and perception. For many, the ultimate goal of AI is artificial general intelligence (AGI): the ability of a machine to comprehend and master any cognitive task a human can.
Toward that end, the study of AI also covers the technologies applied to reach those milestones in a machine's capability. The goalposts for whether a machine counts as intelligent keep shifting forward (increasing in difficulty): as particular "tests" of AI's effectiveness are passed, they are retired from what is deemed intelligent. Well-functioning chatbots, for example, are now commonplace; they no longer impress or surprise users, and so these machines are dismissed as "dumb".
British multidisciplinary scientist Alan Turing was one of the world's most celebrated thinkers on AI, and indeed helped define it. In 1950 Turing published his pioneering paper, "Computing Machinery and Intelligence". In it, he posits that if a simple game, in which a computer imitates a person in conversation, results in human judges failing to recognize that they are conversing with a machine, then we may conclude that the machine is demonstrating intelligence.
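To make the structure of the test concrete, here is a minimal sketch of the imitation game in Python. Everything in it is a hypothetical placeholder rather than anything from Turing's paper: human_reply, machine_reply, and judge_guess stand in for a real person, a conversational program, and a human judge. The point is only the shape of the protocol: when the judge cannot identify the machine more often than chance, the condition Turing proposed is met.

```python
import random

def human_reply(question: str) -> str:
    """Placeholder for a human participant's answer."""
    return f"(a person's answer to: {question})"

def machine_reply(question: str) -> str:
    """Placeholder for a machine's answer, e.g. a chatbot."""
    return f"(a machine's answer to: {question})"

def judge_guess(answer_a: str, answer_b: str) -> str:
    """Placeholder judge. Here it can only guess at random, which is
    exactly the failure mode Turing describes as passing the test."""
    return random.choice(["A", "B"])

def run_imitation_game(questions, trials=1000):
    """Run repeated rounds; return how often the judge finds the machine."""
    correct = 0
    for _ in range(trials):
        question = random.choice(questions)
        # Hide which respondent sits behind each label.
        machine_label = random.choice(["A", "B"])
        human_label = "B" if machine_label == "A" else "A"
        answers = {
            machine_label: machine_reply(question),
            human_label: human_reply(question),
        }
        if judge_guess(answers["A"], answers["B"]) == machine_label:
            correct += 1
    return correct / trials

if __name__ == "__main__":
    rate = run_imitation_game(["What is your favorite memory?"])
    # A rate near 50% means the judge cannot tell machine from human,
    # the outcome Turing proposed as grounds for inferring intelligence.
    print(f"Judge identified the machine in {rate:.0%} of rounds")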
Example:
“Hollywood depicts leaps in AI in dystopian terms, but in reality AI has manifested in chatbots and other convenience features that are benign and commonplace.”