May 20, 2024

New Evidence Shows Self-Supervised Learning Models Generate Brain-Like Activity Patterns

Researchers from the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT have conducted a pair of studies that provide new evidence supporting the theory that self-supervised learning models generate brain-like activity patterns. Self-supervised learning is a type of machine learning that allows computational models to learn about visual scenes based solely on the similarities and differences between them, without any labels or additional information.
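
To make that idea concrete, the sketch below shows one common self-supervised objective: a contrastive (InfoNCE-style) loss that pulls together the embeddings of two views of the same scene and pushes apart embeddings of different scenes, with no labels involved. This is an illustrative stand-in, not the training code from the MIT studies; the encoder, dimensions, and toy data are placeholder assumptions.

```python
# Minimal sketch of a contrastive self-supervised objective (InfoNCE).
# Hypothetical illustration only -- not the code used in the MIT studies.
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Pull embeddings of two views of the same scene together,
    push embeddings of different scenes apart. No labels required."""
    z1 = F.normalize(z1, dim=1)          # (batch, dim)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature     # pairwise similarities
    targets = torch.arange(z1.size(0))   # the matching view is the positive
    return F.cross_entropy(logits, targets)

# Toy usage: embeddings of two augmented views of the same batch of scenes.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
views1 = torch.randn(16, 3, 32, 32)      # stand-ins for augmented images
views2 = views1 + 0.05 * torch.randn_like(views1)
loss = info_nce_loss(encoder(views1), encoder(views2))
loss.backward()
```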

In the studies, the researchers trained neural networks using self-supervised learning to learn representations of the physical world. They found that the resulting models generated activity patterns that closely resembled those seen in the brains of animals performing the same tasks as the models. This suggests that these models are able to learn representations of the physical world and make accurate predictions, similar to how the mammalian brain operates.
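
One widely used way to quantify this kind of model-brain resemblance is representational similarity analysis (RSA), sketched below with toy data. The studies report related neural-predictivity comparisons, so this should be read as a simplified, assumed stand-in for their analysis rather than the method they used.

```python
# Sketch of one common model-brain comparison: representational similarity
# analysis (RSA). Illustrative stand-in with random toy data, not the
# studies' actual analysis pipeline.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Representational dissimilarity matrix: pairwise distances between
    responses to each stimulus (rows = stimuli, columns = units)."""
    return pdist(responses, metric="correlation")

model_acts = np.random.randn(50, 256)   # 50 stimuli x 256 model units (toy)
neural_acts = np.random.randn(50, 80)   # 50 stimuli x 80 recorded neurons (toy)

# A brain-like model yields a high rank correlation between the two RDMs.
score, _ = spearmanr(rdm(model_acts), rdm(neural_acts))
print(f"RSA similarity score: {score:.3f}")
```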

The researchers trained the models to predict the future state of their environment using naturalistic videos. They then evaluated the models’ performance in a task called Mental-Pong, in which the ball disappears before reaching the paddle, requiring the player to estimate its trajectory. The models were able to track the hidden ball’s trajectory with accuracy similar to that of neurons in the mammalian brain, and the activation patterns within the models were similar to those seen in animals’ brains.
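
A minimal sketch of the underlying idea: a recurrent model updates an internal state from the visible frames, then rolls that state forward on its own predictions once the ball is occluded, so the hidden trajectory can be read out from the latent state alone. The architecture and names below are illustrative assumptions, not the paper's model.

```python
# Hedged sketch of the "predict the future state" idea behind Mental-Pong.
# Architecture, dimensions, and names are assumptions for illustration.
import torch
import torch.nn as nn

class LatentDynamics(nn.Module):
    def __init__(self, obs_dim=2, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRUCell(obs_dim, hidden_dim)
        self.readout = nn.Linear(hidden_dim, obs_dim)  # decode ball position

    def forward(self, obs_seq, n_hidden_steps):
        h = torch.zeros(obs_seq.size(1), self.rnn.hidden_size)
        preds = []
        for obs in obs_seq:                 # visible frames: update on input
            h = self.rnn(obs, h)
        for _ in range(n_hidden_steps):     # occluded frames: feed the model's
            pred = self.readout(h)          # own prediction back in as input
            preds.append(pred)
            h = self.rnn(pred, h)
        return torch.stack(preds)           # estimated hidden trajectory

model = LatentDynamics()
visible = torch.randn(10, 4, 2)             # 10 visible frames, batch of 4
hidden_trajectory = model(visible, n_hidden_steps=5)
```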

Another study focused on grid cells, specialized neurons involved in navigation. The researchers trained a self-supervised model to perform path integration, a task that involves predicting an animal’s next location based on its starting point and velocity. The model learned to represent space efficiently and formed activation patterns similar to those seen in grid cells.
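
The sketch below sets up path integration as a toy problem, assuming a simple recurrent network that receives velocity inputs and is trained to report integrated position, where the position targets are derived from the trajectory itself rather than external labels. Networks trained this way at scale have been reported to develop grid-like hidden units; this is not the study's actual architecture.

```python
# Minimal path-integration training sketch. Hyperparameters and setup are
# illustrative assumptions, not the study's architecture or training code.
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=2, hidden_size=128, batch_first=True)
readout = nn.Linear(128, 2)                     # decode (x, y) position
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

for step in range(100):                         # toy training loop
    velocity = 0.1 * torch.randn(32, 50, 2)     # batch of 50-step random walks
    position = velocity.cumsum(dim=1)           # target: integrated path
    hidden, _ = rnn(velocity)
    loss = nn.functional.mse_loss(readout(hidden), position)
    opt.zero_grad()
    loss.backward()
    opt.step()
# After training at scale, hidden units in such networks have been reported
# to develop spatially periodic, grid-cell-like activation patterns.
```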

These findings suggest that self-supervised learning models have the potential to emulate natural intelligence and provide insights into how the brain develops an intuitive understanding of the physical world. The studies were presented at the 2023 Conference on Neural Information Processing Systems (NeurIPS) in December 2023.

The research was funded by the K. Lisa Yang ICoN Center, the National Institutes of Health, the Simons Foundation, the McKnight Foundation, the McGovern Institute, and the Helen Hay Whitney Foundation.
