Visualization of dreams

How scientists visualize dreams: methods and results

The ability to read thoughts in one form or another has long been a staple of science fiction novels. But recently, the visualization of mental images, including dreams, has become accessible to scientists.

Jack Gallant's study

In the early 2000s, researchers made the first attempts at "reverse retinotopy" using functional magnetic resonance imaging (fMRI). The early attempts were modest: subjects were shown images while the activity of various brain areas was recorded with fMRI. Having collected enough data, the researchers tried to solve the inverse problem: to infer from the brain activity map what the person was looking at.

With simple pictures, where the main role was played by spatial orientation, the location of objects, or their category, this worked well, but it was still a long way from "technical telepathy." In 2008, however, scientists at the University of California, Berkeley, led by psychology professor Jack Gallant, tried the same trick with photographs. They divided the studied brain region into small elements of a three-dimensional image (voxels) and tracked their activity while the subjects (two of the paper's authors served in this role) viewed 1,750 different photographs.

From these data the scientists built a computer model, which they "trained" by showing it 1,000 other photographs and obtaining 1,000 different voxel activation patterns as output. It turned out that by showing the same 1,000 photographs to the subjects and comparing the patterns recorded from their brains with the computer's predictions, one could determine with fairly high accuracy (up to 82%) which photograph the person was looking at.
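The identification step described above can be sketched in a few lines: take the measured voxel pattern, correlate it with the pattern the trained model predicts for each candidate photograph, and pick the best match. This is a toy illustration with simulated numbers, not the real encoding model or fMRI data; all sizes and variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_voxels = 1000, 500  # toy sizes, stand-ins for the real study

# Hypothetical stand-in: voxel patterns the trained model predicts
# for each of the 1,000 test photographs (simulated, not real data).
predicted = rng.normal(size=(n_images, n_voxels))

# Simulate a measured voxel pattern: the response to photo #42 plus noise.
true_idx = 42
measured = predicted[true_idx] + 0.5 * rng.normal(size=n_voxels)

def identify(measured, predicted):
    """Pick the photo whose predicted voxel pattern best matches the
    measured one (Pearson correlation across voxels)."""
    m = measured - measured.mean()
    p = predicted - predicted.mean(axis=1, keepdims=True)
    corr = (p @ m) / (np.linalg.norm(p, axis=1) * np.linalg.norm(m))
    return int(np.argmax(corr))

print(identify(measured, predicted))  # recovers 42 here
```

With realistic noise levels the true photo's correlation stands far above the chance-level correlations of the other 999 candidates, which is why identification accuracy can be so high even among many images.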

Moving pictures

In 2011, a team led by the same Professor Gallant at the University of California, Berkeley, achieved far more interesting results. Showing the subjects "training" excerpts from films with a total duration of 7,200 seconds, the scientists recorded the activity of a multitude of brain voxels with fMRI.

But here they faced a serious problem: fMRI responds to the absorption of oxygen by brain tissue (hemodynamics), a much slower process than the change of neural signals. For studying reactions to still images this hardly matters, since a photo can be shown for several seconds, but with dynamic video it causes serious trouble. The scientists therefore created a two-stage model linking slow hemodynamics to the fast neural processes of visual perception.
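The mismatch between fast neural activity and the slow signal fMRI measures is usually modeled by convolving the neural time series with a hemodynamic response function (HRF). Below is a minimal sketch using one common double-gamma-style approximation of the HRF; the exact constants vary between toolkits and are an assumption here.

```python
import numpy as np

TR = 1.0  # sampling interval in seconds (assumed)
t = np.arange(0, 30, TR)

def hrf(t):
    """Rough double-gamma-style HRF: a peak near 5 s followed by a
    small undershoot (constants are illustrative, not canonical)."""
    peak = t ** 5 * np.exp(-t)
    undershoot = t ** 15 * np.exp(-t)
    return peak / peak.max() - 0.35 * undershoot / undershoot.max()

# A fast "neural" signal: brief bursts of activity driven by the movie.
neural = np.zeros(120)
neural[[10, 11, 50, 51, 52, 90]] = 1.0

# The slow BOLD signal that fMRI actually measures is (approximately)
# the neural signal convolved with the HRF.
bold = np.convolve(neural, hrf(t))[: len(neural)]

# The BOLD response peaks several seconds after the first neural burst.
print(int(np.argmax(bold[:40])))
```

This lag of several seconds is exactly why a still photo poses no problem (the response has time to develop) while second-by-second video decoding needs an explicit model connecting the two time scales.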

Having built a computer model of the brain's "response" to various videos, the researchers trained it on 18 million one-second clips selected at random from YouTube. Then the subjects were shown "test" films (different from the "training" ones) while their brain activity was recorded with fMRI, and the computer selected from those 18 million the hundred clips that produced the closest activity patterns, averaged their frames, and output the "mean result." The correlation between the image a person sees and the one generated by the computer is about 30%. But for a first attempt at "mind reading" this is a very good result.
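The reconstruction step can be sketched as follows: score every clip in the library by how well its model-predicted voxel pattern matches the measured one, then average the frames of the top hundred clips. The sizes, the dot-product similarity, and all names below are toy assumptions standing in for the real 18-million-clip pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clips, n_voxels, n_pixels = 2000, 300, 64  # toy sizes, not 18 million

# Hypothetical clip library: a flattened frame and a model-predicted
# voxel pattern for each candidate clip (simulated).
frames = rng.uniform(size=(n_clips, n_pixels))
patterns = rng.normal(size=(n_clips, n_voxels))

# Measured pattern while the subject watches clip #7 (plus noise).
measured = patterns[7] + 0.5 * rng.normal(size=n_voxels)

# Rank all clips by similarity of predicted pattern to the measured one,
# then average the frames of the best-matching clips (here the top 100).
scores = patterns @ measured
top = np.argsort(scores)[::-1][:100]
reconstruction = frames[top].mean(axis=0)

# The true clip should sit at the top of the ranking.
print(int(top[0]))  # 7
```

Averaging over a hundred near-matches is what produces the blurry, roughly 30%-correlated reconstructions: the result is not a retrieved frame but a statistical blend of many plausible ones.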

Japanese studies

But the achievement of Japanese researchers from the neuroscience laboratories of the Advanced Telecommunications Research Institute in Kyoto, the Nara Institute of Science and Technology, and the National Institute of Information and Communications Technology is far more significant. In May 2013, they published in the journal Science the paper "Neural decoding of visual imagery during sleep." Yes, scientists have learned to see dreams. Or rather, not to see them, but to spy on them!

Recording brain activity with fMRI, the researchers awakened three subjects (about 200 times each) during shallow stages of sleep and asked them to describe the content of their last dream. From these reports key categories were extracted and, using the lexical database WordNet, combined into groups of semantically close terms (synsets) organized into hierarchical structures. The fMRI data (the nine seconds before waking) were then sorted by synset.
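The bookkeeping in this step amounts to mapping report keywords to broader synset groups and binning the pre-awakening fMRI windows accordingly. The sketch below fakes the WordNet step with a tiny hand-made mapping (the real study derived these groups from the WordNet hierarchy automatically); all keywords and labels are illustrative.

```python
from collections import defaultdict

# Hypothetical stand-in for WordNet: keyword -> broader "synset" group.
synset_of = {
    "car": "vehicle", "truck": "vehicle",
    "man": "person", "woman": "person",
    "house": "building", "hotel": "building",
}

# Each awakening: keywords from the verbal report, plus a label for the
# fMRI window recorded in the seconds before waking (placeholder here).
awakenings = [
    (["car", "man"], "fmri_window_1"),
    (["hotel"], "fmri_window_2"),
    (["woman", "truck"], "fmri_window_3"),
]

# Sort the fMRI windows by the synsets their reports mention.
by_synset = defaultdict(list)
for words, window in awakenings:
    for w in words:
        by_synset[synset_of[w]].append(window)

print(sorted(by_synset["vehicle"]))  # ['fmri_window_1', 'fmri_window_3']
```

Grouping raw words into synsets is what makes training feasible: "car" and "truck" dreams pool into one "vehicle" class with enough fMRI samples to learn from.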

To train the recognition model, the awake subjects were shown images from ImageNet corresponding to the synsets while the map of activity in their visual cortex was recorded. After that, the computer could predict with 60–70% probability, from the activity of various brain areas, what exactly a person was seeing in a dream. This, incidentally, indicates that a person sees dreams using the same areas of the visual cortex that serve ordinary vision in the waking state. Why we see dreams at all, however, scientists still cannot say.
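The decoding logic can be sketched as nearest-template matching: learn a waking visual-cortex pattern for each synset, then label a pre-awakening dream pattern with the synset whose template it correlates with best. This is a simplified stand-in for the study's classifier; the templates, sizes, and names are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 200
synsets = ["person", "vehicle", "building"]

# Hypothetical templates: mean visual-cortex response while awake
# subjects viewed ImageNet pictures from each synset (simulated).
templates = {s: rng.normal(size=n_voxels) for s in synsets}

def decode(dream_pattern, templates):
    """Guess the synset whose waking response pattern correlates best
    with the pre-awakening fMRI pattern (nearest-template decoding)."""
    def corr(a, b):
        a, b = a - a.mean(), b - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(templates, key=lambda s: corr(dream_pattern, templates[s]))

# A dream about vehicles should re-activate the "vehicle" pattern.
dream = templates["vehicle"] + 0.8 * rng.normal(size=n_voxels)
print(decode(dream, templates))  # vehicle
```

That this works at all is the article's closing point: decoding succeeds only because dreaming re-uses the same visual-cortex patterns that waking vision produces.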