If you have ever felt an uncomfortable, involuntary tightening inside while speaking with air traffic controllers, perhaps asking them to “please say again” for the second or third time, you are not alone. Other pilots, including seasoned professionals, as well as veteran controllers, occasionally have similar difficulty making sense of radio transmissions that may be plagued by signal interference, cabin noise, and terminology that is not always familiar.
Many have wished for technology that could somehow clear up the messages coming through headsets and speakers to make them easier to comprehend. The good news is that researchers at Embry-Riddle Aeronautical University are developing a system that uses artificial intelligence to clarify aviation radio communications.
“We see an opportunity here for another leap forward to help controllers and pilots have safer radio communication,” Schneider said.
The system uses speech recognition technology to transcribe spoken radio transmissions into text. After that, the text is processed and refined using standard terminology. The Embry-Riddle technology also formats spoken numbers and call signs, removes extraneous words, and flags possible errors.
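The refinement steps described above can be sketched as a simple post-processing pass over a raw transcript. This is a minimal illustration, not the Embry-Riddle system: the vocabulary lists, the `refine` function, and the sample transmission are all invented for the example.

```python
# Hypothetical post-processing pass mirroring the described steps:
# drop filler words, collapse spoken digits into numbers/call signs,
# and flag tokens outside a small standard-phraseology vocabulary.

SPOKEN_DIGITS = {
    "zero": "0", "one": "1", "two": "2", "tree": "3", "three": "3",
    "four": "4", "fife": "5", "five": "5", "six": "6", "seven": "7",
    "eight": "8", "niner": "9", "nine": "9",
}
FILLERS = {"uh", "um", "ah", "er"}
STANDARD_TERMS = {"cleared", "runway", "contact", "tower", "wilco", "roger",
                  "heading", "climb", "maintain", "descend", "left", "right"}

def refine(transcript: str):
    """Return (cleaned text, list of possibly nonstandard words)."""
    tokens = [t for t in transcript.lower().split() if t not in FILLERS]
    out, flags = [], []
    for tok in tokens:
        if tok in SPOKEN_DIGITS:
            # Merge consecutive spoken digits into one numeric group.
            if out and out[-1].isdigit():
                out[-1] += SPOKEN_DIGITS[tok]
            else:
                out.append(SPOKEN_DIGITS[tok])
        else:
            if tok not in STANDARD_TERMS:
                flags.append(tok)  # candidate phraseology error
            out.append(tok)
    return " ".join(out), flags

text, flags = refine("uh cleared runway two seven left contact tower")
# text  -> "cleared runway 27 left contact tower"
# flags -> []
```

A real system would of course need far richer vocabularies and context-aware rules (a call sign prefix such as “Delta” changes how the following digits are grouped), but the pipeline shape is the same.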
The system is designed to allow broad analysis of communication between pilots and ATC. As a result, it could reveal patterns, phraseology errors, and safety concerns that previously were difficult to study, according to Embry-Riddle. The system could also, at some point, provide feedback to student pilots and instructors seeking to identify and correct communication problems.
To clarify radio communications, Schneider is collaborating with Jianhua Liu, an associate professor of electrical and computer engineering, on developing the AI-based system.
“With this research, we can improve the efficiency of communications and reduce errors,” said Liu, whose background includes machine learning.
Two grants totaling $30,000 from the university’s Boeing Center for Aviation and Aerospace Safety have helped support Schneider and Liu’s research. “We simply couldn’t have launched this work without that support — it enabled us to move from concept to reality,” Schneider said.
Their research, which began with a $15,000 grant in 2023, included the collection of radio recordings from 12 U.S. airports with high traffic levels. The researchers used automatic speech recognition tools such as OpenAI’s Whisper to transcribe the recordings. A high rate of errors convinced them that they needed aviation-specific tools, according to Schneider, who is a third-generation pilot and linguist.
“Aviation English isn’t standard conversational grammar—it’s a condensed, highly specific phraseology spoken over a noisy radio where words get clipped and specialized jargon abounds,” he said.
Liu used his expertise in signal processing to tailor a speech recognition tool that cut the error rate from 80 percent to less than 15 percent.
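Error rates like those cited are conventionally measured as word error rate (WER): the word-level edit distance between a reference transcript and the recognizer’s output, divided by the reference length. The sketch below shows the standard computation; the sample strings are invented for illustration and are not from the study.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[len(ref)][len(hyp)] / len(ref)

# A general-purpose model can mangle clipped aviation phraseology:
ref = "delta four five one niner contact tower one one eight point three"
hyp = "delta for five one nine or contact our one one eight point three"
print(round(wer(ref, hyp), 3))  # 3 substitutions + 1 insertion over 12 words
```

Tailoring the recognizer to aviation English, as Liu did, drives this ratio down by reducing exactly these substitution and insertion errors on domain-specific words.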