AI is the construction of computers, algorithms and robots that mimic aspects of human intelligence, such as learning, problem solving and reasoning. Unlike traditional software, which follows rules explicitly programmed by a human, AI can make decisions in a range of situations that have not been anticipated in advance.
Much of AI is about systems that learn and improve through experience, often to carry out specialised tasks such as driving, playing a strategy-based game, or making investment decisions. This subset, commonly known as machine learning and sometimes marketed as cognitive computing, typically needs to be trained on data, often drawing on the knowledge of human experts.
Looking to the future, the focus is on creating an Artificial General Intelligence (AGI) that can apply itself to a broad range of tasks in a much less structured way.
As part of a wave of automation, we are seeing widespread and rapid adoption of early AI technologies that are transforming industries across every sector. This will have wide-ranging implications for the global economy, for countries and for organisations, and will only accelerate as parallel technologies, such as the Internet of Things, unlock more opportunities.
Although AI may be very different from human intelligence, high-profile commentators such as Stephen Hawking and Elon Musk have pointed to the potential risks should its performance and capabilities dramatically exceed those of humans. AI also raises questions about how its decisions should reflect a moral and ethical code.
There is significant debate and little consensus about when (and indeed whether) true AGI can be achieved, although many industry observers believe it is unlikely before 2030. However, given the current rapid evolution of these technologies, we may be at the start of a dramatic acceleration in AI development.