A team of scientists at the Swiss Federal Institute of Technology Lausanne has developed a novel machine-learning algorithm called Cebra (pronounced “zebra”) that translates brain signals into video – in effect, turning thoughts into video.
The new artificial intelligence tool was tested on rodents, predicting and reconstructing what they see by mapping their neural activity to specific frames in videos, according to the researchers who carried out the study, published in the journal Nature on May 3.
“Cebra excels compared to other algorithms at reconstructing synthetic data, which is critical to compare algorithms,” said Steffen Schneider, co-first author of the paper, as reported by Neuroscience News.
“Its strengths also lie in its ability to combine data across modalities, such as movie features and brain data, and it helps limit nuisances, such as changes to the data that depend on how they were collected,” he added.
Cebra’s 95% accuracy
The study from the Swiss university, which is also known as École Polytechnique Fédérale de Lausanne (EPFL), comes shortly after scientists at the University of Texas reportedly used AI to read people’s minds and turn that into text in real-time.
For their study, the EPFL researchers had Cebra learn the real-time brain activity of mice as they watched a movie, as well as arm movements in primates. Part of the brain activity was measured directly with electrode probes inserted into the visual cortex.
The rest was obtained using optical probes on genetically modified mice, engineered so that their neurons glow green each time they are activated or receiving data. Cebra used this data to learn the brain signals associated with particular frames of a movie.
“You can then take a new mouse whose neural data we’ve never seen, run this algorithm, and you can predict which frame the mouse is actually watching in this movie,” explained Mackenzie Mathis, the study’s principal investigator, in a video posted on YouTube.
Researchers were then able to convert this data into a movie of its own, the EPFL assistant professor added. She said her team used open-source data collected from the brains of mice via electrophysiological signals.
“We don’t predict each pixel, but rather the frame. Chance level would be 1/900, so over 95% accuracy is, we think, quite exciting. But this pixel-wise decoding is something we plan to do next,” Mathis later told MailOnline.
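Cebra itself learns its embeddings with a contrastive method described in the paper; as a purely illustrative sketch of why 95% is striking against a 1/900 chance level, the toy example below treats frame identification as a nearest-neighbour lookup in an embedding space. Everything here – the noise level, embedding dimension, and synthetic data – is an assumption for illustration; only the 900 frames and the resulting chance level of 1/900 come from the quote above.

```python
import numpy as np

rng = np.random.default_rng(0)

n_frames = 900   # frames in the movie clip, so chance level = 1/900 (~0.11%)
dim = 8          # illustrative embedding dimensionality (not from the paper)

# Stand-in for a learned embedding: one reference vector per movie frame.
frame_embeddings = rng.normal(size=(n_frames, dim))

# Simulated held-out "neural" samples: the true frame's embedding plus noise.
n_test = 500
true_frames = rng.integers(0, n_frames, size=n_test)
test_embeddings = frame_embeddings[true_frames] + 0.1 * rng.normal(size=(n_test, dim))

# 1-nearest-neighbour decoding: each sample is assigned the closest frame.
dists = np.linalg.norm(
    test_embeddings[:, None, :] - frame_embeddings[None, :, :], axis=2
)
decoded = dists.argmin(axis=1)

accuracy = (decoded == true_frames).mean()
chance = 1 / n_frames
print(f"decoding accuracy: {accuracy:.1%}  (chance level: {chance:.2%})")
```

With well-separated embeddings the decoder is nearly always right, which mirrors the point Mathis makes: because chance is only about 0.11%, frame-level accuracy above 95% means the embedding carries almost all of the frame identity.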
AI disrupting industries
As seen in the video above, the mouse was shown an old black-and-white movie clip – possibly from the mid-20th century – of a man running to a car to open the trunk. A second, almost identical screen shows what the mouse is looking at from Cebra’s perspective.
According to Mathis, the AI tool was able to do this using fewer than 1% of the roughly 0.5 million neurons in a mouse’s visual cortex.
“We wanted to show how little data – both in terms of movie clips and neural data – we could use,” she was quoted as saying.
“Notably, the algorithm can run in real-time, so it takes less than one second for the model to predict the whole video clip.”
So, is it possible to reconstruct what someone sees from brain signals alone? Not yet, according to the study. But the EPFL researchers “have made a step in that direction by introducing a new algorithm for building artificial neural networks that capture brain dynamics with an impressive degree of accuracy.”
In the U.S., scientists at the University of Texas at Austin used AI to read people’s brain scans and recreate a whole story from their brain activity alone, per a recently published study.
In that study, participants lay in an fMRI brain scanner while listening to, watching, or imagining a story. The artificial intelligence tool was able to read their brain activity and recreate the story accurately.
However, concerns have been raised about accuracy: the AI can easily be fooled if the subject decides to think of something other than the piece they are listening to.