Meta has announced a major advance in neuroscience: the TRIBE v2 AI model, which can simultaneously interpret and predict human brain activity. Trained on fMRI data from 720 people, the model could prove valuable for treating disease, for education, and for assistive technology for people with disabilities.
Until now, scientists studied the brain by dividing it into separate audio, visual, and language systems. This new model combines all three: if a person is watching a movie, it can track at the same time what they are seeing, what they are hearing, and what they are understanding.
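The idea of combining modalities can be sketched in code. The following is a minimal, hypothetical illustration, not Meta's actual architecture: per-second video, audio, and text features are concatenated into one joint representation, which a single linear map turns into predicted activity for a handful of brain regions. All names, feature sizes, and weights here are invented for the sketch.

```python
import numpy as np

# Hypothetical trimodal sketch in the spirit of the article, not TRIBE itself.
rng = np.random.default_rng(0)
T = 10                                # 10 one-second time steps of a movie clip
video = rng.standard_normal((T, 8))   # invented video features
audio = rng.standard_normal((T, 6))   # invented audio features
text = rng.standard_normal((T, 4))    # invented dialogue/text features

# Fuse the three modalities by concatenation, then map the joint
# representation to predicted activity in 5 brain regions.
fused = np.concatenate([video, audio, text], axis=1)  # shape (T, 18)
W = rng.standard_normal((fused.shape[1], 5)) * 0.1    # random stand-in weights
predicted_activity = fused @ W                        # shape (T, 5)

print(predicted_activity.shape)  # (10, 5)
```

A real model would learn the fusion and the weights from data; the point of the sketch is only that one representation carries all three streams at once.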
Brain scans of 720 people were used
The model was trained on fMRI brain scans from 720 people, comprising more than 1,000 hours of data, which helps it learn how the brain responds to different situations. Meta has open-sourced the model so that researchers around the world can use it and improve it further.
Works on new people without additional training
According to Meta’s research paper, the model is more accurate than older methods, and notably, it works on new people and new situations without additional training. Its biggest feature is that it enables in-silico experiments, that is, brain-related experiments conducted entirely on a computer. Previously, scientists had to test on people every time; now many experiments can be run in simulation.
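An in-silico experiment of the kind described above could look roughly like this. The sketch below is purely illustrative and assumes a stand-in "trained" encoder (a fixed linear map with invented weights): two candidate stimuli are fed through it and their predicted brain responses compared, with no scanner time needed.

```python
import numpy as np

# Hypothetical in-silico experiment: compare predicted brain responses to two
# stimuli using a stand-in "trained" encoder (a fixed random linear map).
rng = np.random.default_rng(42)
W = rng.standard_normal((18, 5)) * 0.1  # invented weights standing in for a trained model


def predict_response(stimulus_features):
    """Predict activity in 5 brain regions from fused stimulus features."""
    return stimulus_features @ W


stimulus_a = rng.standard_normal((10, 18))  # e.g. features of a lecture clip
stimulus_b = rng.standard_normal((10, 18))  # e.g. features of an animated clip

resp_a = predict_response(stimulus_a)  # shape (10, 5)
resp_b = predict_response(stimulus_b)  # shape (10, 5)

# Mean predicted activity per region, and the difference between stimuli.
diff = resp_a.mean(axis=0) - resp_b.mean(axis=0)
print(diff.shape)  # (5,)
```

The appeal is exactly what the article notes: once a model generalizes to new stimuli, many comparisons like this can be screened on a computer before anyone enters a scanner.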
The potential benefits, in 5 points
1. Research will be cheaper
In the future, brain-related research could become faster and cheaper, meaning new discoveries would arrive sooner and their benefits would reach people more quickly.
2. Help in treatment
This technology could help doctors better understand brain-related conditions such as Alzheimer’s and depression. If a person’s brain responses can be understood in advance, treatment and testing methods can be improved.
3. Brain Computer Interface
The model could play an important role in the development of brain-computer interfaces, making it easier to build brain-controlled devices for people with disabilities. It could also help explain how a patient’s brain is combining information.
4. In education
If a student is learning from both video and audio, the model could make it easier to understand how his or her brain integrates the two streams of information. By simulating which content generates more attention and understanding in students’ minds, educators could create better courses and videos.
5. In AI development
By understanding the brain better, this model could help make AI more natural and intelligent. With it, applications such as voice assistants and VR/AR could understand people’s emotions in a better way.