Brain-machine interfaces can predict a bird's next song. Humans are next: get ready to chat without typing.

We're surrounded by machines these days, but have you ever thought about a machine that can read the human brain, or an artificial intelligence that can type what you think? Let's see.
The idea that a machine might be able to read and translate your thoughts may sound like science fiction. Buckle up, buddy. You're in the future now. Researchers have found a way to read a bird's brain and predict what song it's going to sing next.
University of California, San Diego scientists announced they were able to "decode realistic synthetic birdsong directly from neural activity." The team claims this is the first prototype of a decoder for complex, natural communication signals from neural activity. And it's the first step towards some pretty ambitious goals.

The Bird's Words

The experiment used machine learning to "decode" the pattern of neural firing in the zebra finch, a songbird that learns its language from older birds – much like how humans learn language from adults. The researchers analyzed both the pattern of neural firing and the actual song that resulted, including its stops and starts and changing frequencies. The idea was to train their software to match the brain activity to the sound produced, in what they termed "neural-to-song spectrum mappings."
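To make that idea concrete, here's a minimal, purely illustrative sketch of a "neural-to-song spectrum mapping": synthetic firing rates stand in for real recordings, and a ridge regression learns to predict spectrogram frames from them. The array sizes, the Poisson "firing rates," and the choice of ridge regression are assumptions made for the example, not the actual UC San Diego pipeline.

```python
# Illustrative sketch only: a toy "neural-to-song spectrum mapping" on
# synthetic data. Shapes, features, and the ridge-regression model are
# assumptions for illustration.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_frames, n_neurons, n_freq_bins = 2000, 60, 64

# Fake neural firing rates (one row per time frame) and the spectrogram
# frames of the song produced at the same moments.
neural = rng.poisson(lam=3.0, size=(n_frames, n_neurons)).astype(float)
true_map = rng.normal(size=(n_neurons, n_freq_bins))
spectrogram = neural @ true_map + rng.normal(scale=0.5, size=(n_frames, n_freq_bins))

X_train, X_test, y_train, y_test = train_test_split(
    neural, spectrogram, test_size=0.2, random_state=0)

# Learn a linear mapping from neural activity to song spectrum,
# then predict spectrogram frames for held-out activity.
decoder = Ridge(alpha=1.0).fit(X_train, y_train)
predicted_spectrogram = decoder.predict(X_test)
print("decoding R^2:", decoder.score(X_test, y_test))
```

In the real experiment the predicted spectrum would then be turned back into audible, "synthetic" birdsong; the point here is just the core step of mapping brain activity to the sound produced.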
"We have demonstrated a [Brain Interface Machine] for a complex communication signal, using an animal model for human speech," the researchers wrote. But why birds? In 2014, the results of a massive, four-year genetic research study published in a package of articles in "Science" confirmed that bird songs are closely related to human speech. The research implications were that "you can study in song birds and test their function in a way you can't do in humans," Erich Jarvis – one of the lead researchers of the international effort – told Scientific American.
This could be the first step towards developing a brain-to-text interface, but what are the implications? First, there's an economic incentive from Silicon Valley: both Elon Musk and Mark Zuckerberg have announced that they're working on brain-machine interfaces (BMIs) to let people do things like text or tweet directly from their brains, with Musk launching the brain-interface company Neuralink and Zuckerberg launching the research lab Building 8 in early 2016 to pursue such technologies. Second, as UC San Diego's report states, this development could advance biomedical speech-prosthetic devices for patients who have lost their voice.

Finding A Voice

To better understand the medical application, it helps to first know how prosthetics work. Dr. Sliman Bensmaia, a neuroscientist and principal investigator at the University of Chicago, recently joined us on the Curiosity Podcast to explain how he and other researchers are developing more advanced prosthetics with a realistic sense of touch.
Bensmaia works on two different types of prosthetics. "For amputees, where part of the arm is missing and part of the arm is still there, the nerve that used to innervate the hand is still there," he explained. "When you electrically stimulate that nerve and create patterns of neural activation in that nerve, you basically wake that nerve up. That evokes a sensation in the amputee of something touching his hand that is no longer there." The science behind this type of prosthetic is different from the science behind the BMIs used in the other type.
"The other type of prosthesis is directed at people who are tetraplegic, so they're paralyzed and insensate from the neck down. That means the nerve is still there, but it's no longer attached to the brain," Bensmaia continued. "So you can stimulate the nerve, but it doesn't have any consequence. For those patients, the only solution is to interface directly with the brain." And to do that, researchers have to learn how to decode the neural code – one experiment at a time.
"We present stimuli, we record the neural activity, and then we try to understand how information is encoded in those neural signals," Bensmaia told us. "If we do our job right, you can give me a pattern of neural activation and I can tell you a lot about the thing that was touched." This is what the UC San Diego team accomplished when they were able to "decode realistic synthetic birdsong directly from neural activity." Researcher Makoto Fukushima told MIT Technology Reviewthat the richer range of birdsong is why the new results have "important implications for application in human speech."

I'm Thinking Of A Number...

Current brain-machine interfaces can track neural signals that reflect a person's imagined arm movements, allowing users to move a robot or a cursor on a screen. "BMIs hold promise to restore impaired motor function and, because they decode neural signals to infer behavior, can serve as powerful tools to understand the neural mechanisms of motor control," the full report explains.
"Yet complex behaviors, such as vocal communication, exceed state-of-the-art decoding technologies which are currently restricted to comparatively simple motor actions. Here we present a BMI for birdsong, that decodes a complex, learned vocal behavior directly from neural activity." So while the idea of a helmet or brain implant that can effortlessly pick up what you're trying to say remains pretty far from being realized, this research shows that it's not strictly impossible.
