Oct 26 (ABC): Scientists at the University of Texas have developed an AI-powered algorithm that decodes what people are hearing and thinking from fMRI brain scans.
The method, the first of its kind that is non-invasive, could give people who cannot speak a way to communicate for the first time.
The technology extracts data from three brain regions associated with natural language, and the model reconstructs the sounds or ideas a person hears or has while using natural language.
As a result, the system can generate plain text describing the person's thoughts.
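To illustrate the general idea only, here is a minimal conceptual sketch of how such a decoder can work: a model proposes candidate word sequences, a separate "encoding model" predicts the brain response each candidate would evoke, and the candidate whose predicted response best matches the recorded scan is kept. This is not the researchers' code or data; the voxel counts, toy bag-of-words embedding, and vocabulary below are illustrative assumptions.

```python
import numpy as np

# Conceptual sketch only -- not the study's actual decoder.
rng = np.random.default_rng(0)

N_VOXELS = 500      # voxels pooled from language-related cortex (assumed)
EMBED_DIM = 64      # toy feature dimensionality (assumed)

vocabulary = ["the", "dog", "ran", "home", "music", "played", "loudly"]
word_vectors = {w: rng.standard_normal(EMBED_DIM) for w in vocabulary}

def embed(words):
    """Toy bag-of-words embedding (a real system would use a language model)."""
    return np.sum([word_vectors[w] for w in words], axis=0)

# Linear encoding model: predicted brain response = W @ stimulus features.
W = rng.standard_normal((N_VOXELS, EMBED_DIM))

def score(candidate, recorded):
    """Correlation between the predicted and the recorded fMRI response."""
    return np.corrcoef(W @ embed(candidate), recorded)[0, 1]

# Simulate a scan evoked by a hidden sentence, plus measurement noise.
hidden = ["the", "dog", "ran", "home"]
recorded = W @ embed(hidden) + 0.1 * rng.standard_normal(N_VOXELS)

# Greedy decoding: at each step, keep the word whose addition makes the
# predicted response match the recorded scan best. (This toy embedding
# ignores word order; a real decoder searches over fluent sequences.)
decoded = []
for _ in range(len(hidden)):
    decoded.append(max(vocabulary, key=lambda w: score(decoded + [w], recorded)))

print("decoded words:", decoded)
```

Running the sketch recovers the words of the hidden toy sentence, though not their order, which is the job the language-model component handles in a full system.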