Scientists translate dreams into YouTube videos

Gallant Lab (UC Berkeley), Wisdom Quarterly, The Consumerist
Left clip: segment of a Hollywood movie trailer the subject viewed while in the fMRI machine. Right clip: reconstruction of that segment from brain activity measured using functional magnetic resonance imaging.

Translating brain activity into YouTube videos

BERKELEY, California - Science is getting closer to letting people see through the eyes of others. UC Berkeley scientists have designed a way to read brain activity and reconstruct it as YouTube videos, in effect broadcasting the mental images people create, meaning you could soon watch others' dreams.
  • It would be amazing to hook up Buddhist saints (Noble Ones), or at least monastics, and ask them to meditate on or "see" nirvana. We know those who have seen it exist.

The Gallant Lab at UC Berkeley had people watch videos while hooked up to an fMRI machine, then matched the measured brain activity to the shapes, movement, and colors in the videos. The computer model the researchers built produced blurry, dreamlike images.
Procedure
1. Record brain activity while subject views hours of movie trailers.
2. Build dictionaries (regression models) to translate between shapes, edges, and motion in these movies and measured brain activity. A separate dictionary is constructed for each of several thousand points in the brain at which brain activity was measured.
3. Record brain activity while the subject views a new set of movie trailers, which will be used to test the quality of the dictionaries and reconstructions.
4. Build a library of approximately 18,000,000 seconds of video downloaded at random from YouTube (with no overlap with the movies the subjects saw in the magnet).
5. Put each of these clips through the dictionaries to generate predictions of brain activity.
6. Select the 100 clips whose predicted activity is most similar to the observed brain activity.
7. Average those clips together. This is the reconstruction.
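To make the procedure concrete, here is a minimal sketch in Python/NumPy of steps 2 and 5-7, under stated assumptions: the feature extraction is not shown (the Gallant Lab used motion-energy filters; here the feature arrays are placeholders), the per-voxel "dictionaries" are written as simple ridge regressions, and all variable names and shapes are illustrative, not the lab's actual code.

import numpy as np

# --- Step 2: fit one regression "dictionary" per voxel ----------------------
# train_features : (T, F) shape/edge/motion features of the training trailers
# train_activity : (T, V) measured fMRI activity at V points in the brain
def fit_voxel_dictionaries(train_features, train_activity, ridge=1.0):
    """Return an (F, V) weight matrix mapping stimulus features to voxel activity."""
    F = train_features.shape[1]
    gram = train_features.T @ train_features + ridge * np.eye(F)
    return np.linalg.solve(gram, train_features.T @ train_activity)

# --- Step 5: predict brain activity for every clip in the random library ----
# library_features : (N, F) features of the ~18,000,000 one-second YouTube clips
def predict_library_activity(library_features, weights):
    """Predicted activity: one row of V voxel values per library clip."""
    return library_features @ weights

# --- Steps 6-7: match observed activity to the library and average clips ----
def reconstruct(observed_activity, predicted_activity, library_clips, top_k=100):
    """
    observed_activity  : (V,) activity measured for one second of test movie
    predicted_activity : (N, V) output of predict_library_activity
    library_clips      : (N, H, W, C) raw frames of each library clip
    Returns the average of the top_k best-matching clips: the reconstruction.
    """
    # similarity = correlation between observed and predicted voxel patterns
    obs = observed_activity - observed_activity.mean()
    pred = predicted_activity - predicted_activity.mean(axis=1, keepdims=True)
    sims = (pred @ obs) / (np.linalg.norm(pred, axis=1) * np.linalg.norm(obs) + 1e-12)
    best = np.argsort(sims)[-top_k:]          # step 6: the 100 most similar clips
    return library_clips[best].mean(axis=0)   # step 7: average them together

Averaging the 100 best-matching YouTube clips is what gives the reconstructions their blurry, dreamlike look.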