So our minds CAN be read: Magnetic scanner produces these actual images from inside people’s brains
Process reproduces visual images from analysis of blood flow to brain
Experts believe it could be used in future to analyse dreams and memories
By CHRIS PARSONS and ROB WAUGH
Last updated at 9:44 AM on 28th October 2011
Scientists have created a revolutionary brain imaging process which allows them to ‘see’ moving images inside people’s minds. As the test subjects watch a video, the researchers ‘see’ a reconstruction of it on screen.
It’s the most astonishing demonstration of ‘mind reading’ technology yet.
The academics from the University of California, Berkeley, managed to decipher brain activity by measuring blood flow through the brain’s visual cortex, and used this information to construct images of what the test subjects were seeing.
Reproduction: An image, left, of Steve Martin in Pink Panther 2 is amazingly recreated through analysis of blood flow into the brain’s visual cortex to produce the representation on the right
They then converted this information into visual patterns after feeding it through a computer, in a process which scientists say ‘opens a window into the movie of our minds’.
As yet, the technology can only recognise and reconstruct movie clips shown to the test subjects before they braved the scanner.
However, the breakthrough paves the way for reproducing the movies inside our heads that no one else sees, such as dreams and memories, according to researchers.
Professor Jack Gallant, a UC Berkeley neuroscientist, said: ‘This is a major leap toward reconstructing internal imagery.’
Test subjects watched two separate sets of Hollywood movie trailers, while an MRI scanner was used to measure blood flow through the visual cortex, the part of the brain that processes visual information.
On the computer, the brain was divided into small, three-dimensional cubes – known in computer imaging as volumetric pixels, or ‘voxels’.
Shinji Nishimoto, one of the scientists involved in the procedure, said: ‘We built a model for each voxel that describes how shape and motion information in the movie is mapped into brain activity.’
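The per-voxel models Nishimoto describes can be illustrated with a simplified sketch – not the researchers’ actual code, and with toy random data standing in for the real movie features and fMRI recordings. The idea is to fit, for each voxel, a set of weights that maps visual features of the film to that voxel’s measured activity:

```python
import numpy as np

# Toy stand-ins: in the real study, the features come from motion and
# shape filters applied to movie frames; here we use random numbers.
rng = np.random.default_rng(0)
n_seconds, n_features, n_voxels = 200, 50, 10
features = rng.normal(size=(n_seconds, n_features))   # movie features, per second
responses = rng.normal(size=(n_seconds, n_voxels))    # fMRI signal, per voxel

# Fit one regularised linear model per voxel (ridge regression):
# predicted_response = features @ weights
lam = 1.0  # regularisation strength (an assumed value)
gram = features.T @ features + lam * np.eye(n_features)
weights = np.linalg.solve(gram, features.T @ responses)

# The fitted model predicts brain activity for any new clip's features.
predicted = features @ weights
print(predicted.shape)  # one prediction per second, per voxel
```

Once such a model exists for every voxel, it can be run ‘in reverse’: given a new pattern of brain activity, the researchers can ask which video clips the model predicts would have produced it.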
The brain activity recorded while subjects viewed clips was fed into a computer program that learned, second by second, to associate visual patterns in a particular film with the corresponding brain activity.
The computer was then tasked with constructing its own ‘versions’ of the trailers the subjects were watching – without using the original material. To give it footage to draw on, the researchers loaded 18 million seconds of random YouTube videos into the program.
The computer then cross-referenced the two sets of data – and the subjects were shown an entirely new set of film trailers.
The 100 YouTube clips that the computer program decided were most similar to the trailer the subject was watching were merged, creating a blurry but recognisable image of what was ‘happening’ inside their mind.
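That final merging step can be sketched in rough form (all names and data below are invented for illustration; the real study used predicted fMRI responses for millions of clips): rank the library clips by how closely their predicted brain activity matches the measured activity, then average the frames of the 100 best matches.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical library: for each candidate clip, the brain activity the
# model predicts it would evoke, plus its frame as a flat pixel array.
n_clips, n_voxels, n_pixels = 1000, 10, 64
library_activity = rng.normal(size=(n_clips, n_voxels))
library_frames = rng.uniform(size=(n_clips, n_pixels))

# Activity actually measured while the subject watches an unseen trailer.
measured = rng.normal(size=n_voxels)

def correlation(a, b):
    """Similarity between a predicted and a measured activity pattern."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Score every library clip, keep the 100 best matches...
scores = np.array([correlation(row, measured) for row in library_activity])
top100 = np.argsort(scores)[-100:]

# ...and average their frames into one blurry composite reconstruction.
reconstruction = library_frames[top100].mean(axis=0)
print(reconstruction.shape)
```

Averaging many roughly similar clips is what gives the reconstructions their characteristic blurred, dream-like look: shared features reinforce each other, while the details that differ between clips wash out.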