Intel Helped Dr. Peter Scott-Morgan to Give “World’s First Human Cyborg” a Voice


In 2017, British roboticist Dr. Peter Scott-Morgan was diagnosed with motor neurone disease (MND), also known as ALS or Lou Gehrig’s disease. MND attacks the brain and nerves, eventually paralyzing every muscle, including those that enable breathing and swallowing.

Doctors told the 62-year-old scientist he would likely be dead by the end of 2019, but Scott-Morgan had other plans: he wanted to replace failing parts of his body with machinery and become the “world’s first full cyborg.” Scott-Morgan began his transformation late last year, when he underwent a series of operations to extend his life using technology.

He now relies on synthetic speech and has developed a lifelike avatar of his face for more effective communication with others. “Peter 2.0 is now online,” Scott-Morgan announced after his surgeries late last year. “This is MND with attitude.”

“I will continue to evolve, dying as a human, living as a cyborg.”

Among the team of technologists working with Scott-Morgan is Lama Nachman, an Intel fellow and director of Intel’s Anticipatory Computing Lab. She helped famed physicist Stephen Hawking speak; now she and her team are helping Scott-Morgan.

Lama Nachman, an Intel fellow and director of Intel’s Anticipatory Computing Lab, works to help Dr. Peter Scott-Morgan communicate. Previously, Nachman helped physicist Stephen Hawking speak. Nachman and her team developed the Assistive Context-Aware Toolkit, software that helps people with severe disabilities communicate through keyboard simulation, word prediction, and speech synthesis. (Credit: Intel Corporation)

For almost eight years, Nachman helped Hawking communicate his almost mythical intellectual achievements through an open-source platform she and her team helped develop, called the Assistive Context-Aware Toolkit (ACAT). The software helps people with severe disabilities communicate through keyboard simulation, word prediction, and speech synthesis. Hawking twitched a tiny muscle in his cheek to trigger a sensor on his glasses, which interfaced with his computer to type sentences. For Scott-Morgan, Nachman’s team added gaze tracking, which lets him form sentences by looking at letters on his computer screen, along with improved word prediction.
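The article doesn’t describe how ACAT’s gaze input works internally, but gaze-typing systems commonly use a “dwell” rule: a key is pressed once the eyes rest on it long enough. Here is a minimal, purely illustrative sketch of that idea (the class, threshold, and simulated gaze samples are all assumptions, not ACAT code):

```python
class DwellKeyboard:
    """Toy model of dwell-based gaze typing: a key is 'pressed' once
    the gaze has rested on it for a fixed dwell threshold (seconds)."""

    def __init__(self, dwell_seconds=1.0):
        self.dwell_seconds = dwell_seconds
        self.current_key = None
        self.dwell_start = None
        self.typed = []

    def update(self, key_under_gaze, now):
        # Restart the timer whenever the gaze moves to a different key.
        if key_under_gaze != self.current_key:
            self.current_key = key_under_gaze
            self.dwell_start = now
            return None
        # Fire a keypress once the gaze has dwelled long enough.
        if now - self.dwell_start >= self.dwell_seconds:
            self.typed.append(key_under_gaze)
            self.dwell_start = now  # re-arm so a held gaze can repeat
            return key_under_gaze
        return None

kb = DwellKeyboard(dwell_seconds=1.0)
# Simulated (key, timestamp) gaze samples: gaze rests on 'H', then 'I'.
for key, t in [("H", 0.0), ("H", 0.5), ("H", 1.1), ("I", 1.2), ("I", 2.3)]:
    kb.update(key, t)
print("".join(kb.typed))  # prints "HI"
```

Real systems tune the dwell threshold per user, since a shorter dwell is faster but produces more accidental presses.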

“How can technology empower people? That’s been a thread in my life all along.”

A Palestinian who grew up in Kuwait, Nachman recalls neighbors calling on her to fix their broken electronics and appliances. “I’ve always had this interest in figuring out the latest and greatest technologies and playing with them and breaking them and fixing them,” Nachman says.

Nachman’s team works on context-aware computing and human–artificial intelligence (AI) collaboration technologies that can help the elderly in their homes, students who might not thrive in standard classrooms, and technicians in manufacturing facilities. “I’ve always felt that technology can empower people who are most marginalized,” Nachman says. “It can level the playing field and bring more equity into society, and that is most obvious for people with disabilities.”

Intel’s Anticipatory Computing Lab team that developed Assistive Context-Aware Toolkit includes (from left) Alex Nguyen, Sangita Sharma, Max Pinaroc, Sai Prasad, Lama Nachman, and Pete Denman. Not pictured are Bruna Girvent, Saurav Sahay, and Shachi Kumar. (Credit: Lama Nachman)

While Hawking wanted more control over his conversations, Nachman says, “Peter is open to greater experimentation and the idea of him and the machine learning together. As a result, we have been researching how to build a response-generation capability that can listen to the conversation and suggest answers that he can quickly choose from or nudge in a different direction.”
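Nachman describes a response-generation capability that listens to the conversation and suggests replies the user can pick from or nudge. The team’s actual system is not described in detail; as a loose illustration of the ranking step only, here is a hypothetical sketch that scores a few candidate replies against the other speaker’s utterance by simple word overlap (a real system would use a trained language model):

```python
# Hypothetical sketch: rank candidate replies by word overlap with the
# incoming utterance, so the user can choose one instead of typing.
def suggest_responses(utterance, candidates, top_k=3):
    words = set(utterance.lower().split())

    def score(reply):
        # Count shared words between the utterance and this reply.
        return len(words & set(reply.lower().split()))

    return sorted(candidates, key=score, reverse=True)[:top_k]

candidates = [
    "Yes, the new wheelchair is working well.",
    "I would love a cup of tea.",
    "Could you repeat that, please?",
]
ranked = suggest_responses("How is the new wheelchair working?", candidates)
print(ranked[0])  # the wheelchair reply ranks first
```

The design point this illustrates is the one Nachman describes: the system proposes, and the human stays in the loop by quickly selecting or redirecting, trading some authorship for speed.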

This approach gives him less precise control than Hawking preferred, but Nachman says Scott-Morgan is willing to forgo some control in exchange for intuitive collaboration with his AI-powered communication interface, because of the speed it affords him.

“My ventilator is a lot quieter than Darth Vader’s.”

Scott-Morgan is known for his wit and self-effacing humor, and he wants to be able to show that with his artificial voice. In addition to decreasing the latency, or “silence gaps,” between Scott-Morgan and his conversation partner, Nachman’s team is looking into how Scott-Morgan can express emotion. When we converse normally, we rely on multiple cues, such as facial expressions and tone, not just words. For Scott-Morgan, the team is researching an AI system that listens to what’s going on and then prompts alternative suggestions and tones according to different criteria.

Someday, Scott-Morgan and others might use brainwaves to control their voices.

Nachman said some of her team’s research focuses on people who cannot move any part of their body, not even a twitch of their cheeks or eyes. For them, Nachman says, brain-computer interfaces (BCIs) include skullcaps equipped with electrodes that monitor brainwaves, much like an electroencephalogram (EEG) test. Nachman says she and her team are looking to add BCIs to ACAT to ensure no one is left behind.
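The article doesn’t say how such a BCI would map brainwaves to commands. One common idea is to watch the power of a target frequency band and treat elevated power as a “select” signal. The sketch below is illustrative only, using a synthetic signal, a single-frequency power estimate, and an arbitrary threshold; real BCIs use multi-channel EEG and trained classifiers:

```python
import math

def band_power(samples, sample_rate, freq):
    """Estimate signal power at one frequency via a single DFT bin."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * i / sample_rate)
             for i, s in enumerate(samples))
    im = sum(-s * math.sin(2 * math.pi * freq * i / sample_rate)
             for i, s in enumerate(samples))
    return (re * re + im * im) / n

def is_select(samples, sample_rate=256, target_hz=10.0, threshold=5.0):
    # Arbitrary threshold: strong 10 Hz activity counts as "select".
    return band_power(samples, sample_rate, target_hz) > threshold

rate = 256
t = [i / rate for i in range(rate)]  # one second of samples
idle = [0.1 * math.sin(2 * math.pi * 3 * x) for x in t]    # weak background
active = [1.0 * math.sin(2 * math.pi * 10 * x) for x in t] # strong 10 Hz rhythm
print(is_select(idle), is_select(active))  # prints: False True
```

Even this toy version shows the core engineering problem: choosing a threshold that separates intentional signals from background activity, which in practice requires per-user calibration.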

As AI gets smarter, Nachman is particularly interested in exploring ways to preserve human control while giving the AI system greater agency so “the two diverse actors are working in concert to achieve better outcomes together.”

