Abstract

A decade after speech was first decoded from human brain signals, accuracy and speed remain far below that of natural speech. Here we show how to decode the electrocorticogram with high accuracy and at natural-speech rates. Taking a cue from recent advances in machine translation, we train a recurrent neural network to encode each sentence-length sequence of neural activity into an abstract representation, and then to decode this representation, word by word, into an English sentence. For each participant, data consist of several spoken repeats of a set of 30-50 sentences, along with the contemporaneous signals from ~250 electrodes distributed over peri-Sylvian cortices. Average word error rates across a held-out repeat set are as low as 3%. Finally, we show how decoding with limited data can be improved with transfer learning, by training certain layers of the network under multiple participants' data.
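The headline metric in the abstract is word error rate (WER), the standard measure for speech decoding and machine translation: the word-level edit distance (substitutions, insertions, deletions) between the decoded sentence and the reference, divided by the reference length. As a minimal illustration (not the paper's evaluation code), WER can be computed with a rolling-row Levenshtein dynamic program:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    r, h = reference.split(), hypothesis.split()
    # d[j] holds the edit distance from the current reference prefix
    # to the first j hypothesis words (rolling row of the DP table).
    d = list(range(len(h) + 1))
    for i, rw in enumerate(r, 1):
        prev = d[0]          # d[i-1][j-1] for the inner loop
        d[0] = i             # deleting all i reference words so far
        for j, hw in enumerate(h, 1):
            cur = d[j]       # save d[i-1][j] before overwriting
            d[j] = min(d[j] + 1,            # deletion
                       d[j - 1] + 1,        # insertion
                       prev + (rw != hw))   # substitution (or match)
            prev = cur
    return d[len(h)] / len(r)
```

On this scale, the reported 3% average WER means roughly three word errors per hundred reference words, comparable to professional human transcription of speech.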

Sources
https://www.nature.com/articles/s41593-020-0608-8
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10560395
https://doi.org/10.1038/s41593-020-0608-8
