When asked how it would represent itself doing good things, ChatGPT said “I think it would be a blend of symbols and gentle humor — a sort of ‘ChatGPT mascot’ that hints at my purpose. It would be less about me as a machine and more about the feeling I want to give: help, clarity, and a bit of delight.” (Image generated by ChatGPT)

5 Ways UC Davis Says AI Is Surprisingly Good

How Artificial Intelligence Can Help in Diagnostics, Weather Forecasting and More

  • by UC Davis Magazine staff

The capabilities of artificial intelligence are expanding every day. UC Davis is harnessing AI in many beneficial ways. Here, researchers in veterinary medicine, health, climate science, engineering and education ask how AI can help in their fields — and discover the good side of the growing technology.

How Can AI Read the Body’s Signals?

Whenever we move a muscle, it gives off a small electrical signal that can be picked up by electrodes on the skin, a technique called surface electromyography. Clenching your fist, for example, sets off electrical signals from muscles in the forearm. For years, researchers have sought ways to use these signals to control prosthetic limbs.

“Using electromyography to let someone seamlessly control a robotic prosthetic hand is actually really challenging,” said Jonathon Schofield, associate professor in the UC Davis Department of Mechanical and Aerospace Engineering.

“The electrical signals we record from muscles are often small, the patterns of muscle activity are complex, and everything needs to work reliably and without noticeable delay for the user to feel like they are naturally controlling the prosthesis,” Schofield said.

Neuroengineers like Schofield and Lee Miller at the Department of Neurobiology, Physiology and Behavior use a form of artificial intelligence — machine learning — to solve this problem. The machine learning interface can be trained on sensory data until it “understands” the pattern of signals that relates to a particular hand gesture. 
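In rough outline, that kind of gesture decoder can be sketched in a few lines of Python. Everything below is illustrative rather than drawn from the labs' actual pipeline: synthetic recordings stand in for real electrode data, and the feature choices and classifier are just one common starting point.

    # A minimal sketch on synthetic data; not the labs' actual pipeline.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_windows, n_channels, window_len = 600, 8, 200
    emg = rng.standard_normal((n_windows, n_channels, window_len))  # fake electrode windows
    gestures = rng.integers(0, 4, size=n_windows)  # e.g. rest, fist, point, pinch

    # Two classic per-channel EMG features: mean absolute value and waveform length.
    mav = np.abs(emg).mean(axis=2)
    wl = np.abs(np.diff(emg, axis=2)).sum(axis=2)
    features = np.hstack([mav, wl])  # shape: (600, 16)

    X_train, X_test, y_train, y_test = train_test_split(features, gestures, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))  # near chance on random data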

In this 2022 photo, Peyton Young, a PhD candidate in mechanical engineering, works in the Schofield lab with a robotic limb driven by an electromyography forearm controller. (Gregory Urquiaga / UC Davis)

Miller’s lab is also using a similar approach to recover the natural voices of people who have lost them due to injury or surgery on their larynx. By training an algorithm on recordings of someone’s voice, together with electromyography from movements of their face and jaw as they attempt to speak, they hope to one day recreate someone’s authentic natural speech from a voice synthesizer.

“Restoring speech by decoding electromyography signals on the face or neck has been a research goal for decades. Only now is it within reach, largely thanks to recent advances in machine learning and in particular, neural networks,” Miller said.

Speech involves coordinating more than 100 muscles over time, Miller added.

“To translate muscle signals into speech, the algorithm must understand both the complexity of each moment and how moments are stitched together across time. It’s like making sense of a symphony solely from the orchestral score,” he said.
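One way to picture that stitching across time is a small recurrent neural network that turns a sequence of muscle-signal frames into a sequence of acoustic features. The sketch below is a hypothetical illustration, not Miller's model; the channel counts, feature sizes and random data are all assumptions.

    # Hypothetical illustration of sequence-to-sequence decoding; all sizes assumed.
    import torch
    import torch.nn as nn

    class EMGToSpeech(nn.Module):
        def __init__(self, n_emg_channels=16, n_acoustic=80, hidden=128):
            super().__init__()
            self.rnn = nn.GRU(n_emg_channels, hidden, num_layers=2, batch_first=True)
            self.head = nn.Linear(hidden, n_acoustic)

        def forward(self, emg):            # emg: (batch, time, channels)
            context, _ = self.rnn(emg)     # carries information across time
            return self.head(context)      # (batch, time, acoustic features)

    model = EMGToSpeech()
    emg = torch.randn(4, 300, 16)          # four fake utterances, 300 frames each
    target = torch.randn(4, 300, 80)       # stand-in acoustic features
    loss = nn.functional.mse_loss(model(emg), target)
    loss.backward()                        # the whole mapping is trainable end to end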

Machine learning can also allow prosthetics to adapt to their users. Not all prosthetic users are the same, and age, weight, skin tone and the amount of muscle in the limb can all affect electromyography signals. A prosthetic limb that can “learn” to work better with its user will be more acceptable.
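One common way such adaptation can work, sketched here purely as an assumption rather than as the labs' method, is fine-tuning: a classifier pretrained on many users is briefly retrained on a short calibration session recorded from the new wearer.

    # Assumed adaptation strategy (fine-tuning), not attributed to the labs.
    import torch
    import torch.nn as nn

    classifier = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
    # ... imagine the classifier was pretrained on data pooled across many users ...

    for p in classifier[0].parameters():   # freeze the shared early layer
        p.requires_grad = False

    opt = torch.optim.Adam(classifier[2].parameters(), lr=1e-3)
    calib_x = torch.randn(32, 16)          # new user's calibration windows (synthetic)
    calib_y = torch.randint(0, 4, (32,))   # gestures prompted during calibration
    for _ in range(50):                    # a few quick adaptation steps
        opt.zero_grad()
        nn.functional.cross_entropy(classifier(calib_x), calib_y).backward()
        opt.step()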

In addition to electrical signals, Schofield’s lab has been experimenting with measuring pressure changes in the forearm as muscles tense and relax. Feeding the algorithm this pressure data alongside the electrical signals improves results, they have found.
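The fusion step itself can be as simple as concatenating the two kinds of features before classification. The sketch below uses synthetic numbers only to show the mechanics; the improvement the lab reports comes from real sensor data, not from an example like this.

    # Synthetic numbers only; real pressure and EMG data would be used in practice.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n = 400
    emg_feats = rng.standard_normal((n, 16))      # per-channel EMG features
    pressure_feats = rng.standard_normal((n, 8))  # forearm pressure readings
    gestures = rng.integers(0, 4, size=n)

    fused = np.hstack([emg_feats, pressure_feats])  # simple concatenation fusion
    for name, X in [("EMG only", emg_feats), ("EMG + pressure", fused)]:
        acc = cross_val_score(LogisticRegression(max_iter=1000), X, gestures).mean()
        print(name, f"accuracy: {acc:.2f}")  # with real signals, fusion should help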

— Andy Fell

The full original article was published in UC Davis Magazine.
