Artificial Intelligence and Robotics

AIs desperately seeking an imagination model

What is imagination? How does it work? And what role does it play in consciousness?

Cognitive scientists have long hypothesised that imagination amounts to a "mental workspace" within which information can be processed across specialised subdomains. In other words, it's a place where the brain can simulate the physical world around it and manipulate that simulation according to new variables in order to anticipate future scenarios. What has always been a mystery, however, is how the mind knows which rules to use when manipulating that simulation.

Don't worry. FT Alphaville isn't turning into a bad science blog (Pluto fraud excluded). We raise the query mainly because of the current hubbub surrounding artificial intelligence systems and existential risk. On which note, do see John Gapper's latest on killer robots and our own post from Wednesday.

In any case, the "imagination" issue is important because it's a key factor differentiating a conscious AI system from a dumb one. And, weirdly enough, it's also an area where economic modelling (the art of creating simplified theoretical constructs that represent the complex economic processes and relationships of the real world for the purpose of extrapolating future paths) can play a role in building AI systems.

Hence, we thought, it was a good time to showcase the work of Dr Simon Stringer and his team at the Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, precisely because of how it differs from the engineering and big data-led approaches currently being touted as existentially threatening.

As Stringer has told us in the past, AI is a broad category covering many kinds of techniques. Yet a true AI, in his opinion, is one capable of learning about its world without human supervision. Despite all the media attention, the techniques being developed by the likes of Google's DeepMind venture are not capable of learning this way.
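To make the "mental workspace" idea a little more concrete, here is a minimal, purely illustrative sketch in Python. It is not Stringer's model and not anyone's actual AI system; every name and number in it is our own assumption. It simply shows the bare mechanic described above: a forward model of a bit of physics is rolled out under different candidate variables, and the scenario whose anticipated outcome best matches a goal is selected.

```python
import math

# Toy "mental workspace": a forward model that can be rolled out under
# different candidate variables to anticipate future scenarios.
# All names and parameters here are hypothetical, for illustration only.

def simulate_throw(angle_deg, speed=10.0, dt=0.01, g=9.81):
    """Roll the simulation forward: where would a projectile land?"""
    vx = speed * math.cos(math.radians(angle_deg))
    vy = speed * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while True:
        x += vx * dt
        vy -= g * dt
        y += vy * dt
        if y <= 0.0:        # back at ground level: the imagined scenario ends
            return x        # anticipated landing distance

def imagine_best_angle(target_distance, candidate_angles):
    """Manipulate the simulation across candidate variables and pick the
    scenario whose anticipated outcome best matches the goal."""
    return min(candidate_angles,
               key=lambda a: abs(simulate_throw(a) - target_distance))

if __name__ == "__main__":
    best = imagine_best_angle(target_distance=8.0,
                              candidate_angles=range(5, 90, 5))
    print(f"Imagined best angle: {best} degrees, "
          f"landing at ~{simulate_throw(best):.2f} m")
```

Note that the mystery flagged above, namely how the mind knows which rules to use when manipulating its simulation, is precisely the part this toy hard-codes: the physics inside simulate_throw is handed to the program rather than learned without supervision.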
