
Hari Garudadri: Improving the World Through Signal Processing, Step by Step by Step

By: Tiffany Fox

Throughout Hari Garudadri’s career, three things have remained constant: a fascination with signal processing, a love of work with a humanitarian focus and (fittingly) the number three.

The University of California, San Diego electrical engineer never planned it this way, but “every three years, either I change the city I live in or I change the field I’m working in,” he says. Signal processing has been the through-line, whether he was developing wireless communications systems for coal miners in India, improving speech recognition tools or pursuing his current project at the UC San Diego Qualcomm Institute: outfitting a football helmet with a suite of sensors to characterize how sports-related impacts change brain patterns (and how brain injury can be prevented).

Doing “meaningful work” is Garudadri’s primary passion, and fortunately for him signal processing has its place in nearly every aspect of modern human life. It’s taken him from deep underground into the ether of the wireless spectrum, from processing human speech to processing brain waves.

“My current focus is to extend secondary and tertiary care know-how to primary and home care situations in order to improve healthcare outcomes and improve quality of life,” he explains. “Notwithstanding the recent advances in sensing modalities, wireless technologies, consumer electronics and signal processing, this is a hard problem. The Qualcomm Institute and the School of Medicine at UC San Diego provide me an ideal platform for the necessary innovations. I have found home and I see myself at this a lot longer than three years.”

Connected in a Coal Mine

After earning his undergraduate and graduate degrees in Electronics and Communications Engineering at Sri Venkateswara University and the Indian Institute of Technology, Bombay, Garudadri moved on to the Indian Institute of Technology at Kanpur in 1981. While there he went underground — literally. Garudadri was part of a three-man team that spent hours laying 1,000-yard antennas for a wireless communications system for coal miners that could withstand a collapse of the mine.

“When there is a collapse in a coal mine, the first things that give way are the wires used for lighting and communications,” he explains, noting that ordinary wireless signals cannot penetrate the ground. “We decided to use extremely low-frequency (ELF) communications signals, and tested up to 1,000 feet below ground. For the downlink transmitter, we had full voice communications — a home stereo amplifier with the 1,000-yard antenna connected to the speaker wires. For the uplink receiver, we had a contraption that looked like a metal detector to pick up Morse code from trapped miners.

“The miners had a small unit powered by the battery they used for lighting, capable of sending Morse code and receiving voice from the surface. In this way, in the event of a collapse, we would be able to track the miners underground, keep up their morale and coordinate rescue.”
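
The article leaves the engineering details of that uplink out, but the scheme it describes is essentially on-off keying of Morse code over a very low-frequency link. Purely as an illustration, with every timing parameter assumed rather than taken from the actual system, a sketch of turning a message into a keying schedule might look like this:

```python
# Illustrative sketch of the Morse-code uplink idea described above.
# All timing parameters are assumptions; the article does not specify
# the actual design of the miners' unit.

MORSE = {
    "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".",
    "H": "....", "L": ".-..", "O": "---", "P": ".--.", "S": "...",
}

DOT_SEC = 0.2  # assumed duration of one "dot" (illustrative)

def keying_schedule(message):
    """Return a list of (carrier_on, duration_sec) segments for a message."""
    segments = []
    for word in message.upper().split():
        for letter in word:
            for symbol in MORSE[letter]:
                on_time = DOT_SEC if symbol == "." else 3 * DOT_SEC
                segments.append((True, on_time))    # key down for dot/dash
                segments.append((False, DOT_SEC))   # gap between symbols
            segments.append((False, 2 * DOT_SEC))   # extra gap between letters
        segments.append((False, 4 * DOT_SEC))       # extra gap between words
    return segments

if __name__ == "__main__":
    for on, dur in keying_schedule("SOS"):
        print("ON " if on else "off", f"{dur:.1f}s")
```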

Garudadri calls his job working in the coal mines of India his “most meaningful job to date.”

“We experienced the hardships of the miners first-hand and connected at a personal level with hundreds of them. I was surprised to learn that their average life span was only 40 years.” He said the miners took turns hiking four miles to bring the engineers a cup of tea every afternoon they were at the site.

“During one field visit, the mine manager came to the site, told us our permit was cancelled, escorted us to the nearest train station at Dhanbad and made sure we left the area. It took us six months to sort out the paperwork and resume our work. At that point, the mine manager took all three of us home for lunch and explained that a ‘kill-contract’ was out on us, as the local mafia thought our equipment was just a front for law enforcement. They did more for us than we did for them.” Garudadri says he still likes to brag about his brush with Indian organized crime.

Synthesis and Recognition

Garudadri was inspired to work on speech recognition by the work of inventor and futurist Ray Kurzweil at MIT, who developed reading aids for blind people. Garudadri later obtained his Ph.D. in electrical engineering from the University of British Columbia in Vancouver, where he worked on speech analysis and developed sophisticated signal-processing tools that could analyze speech in time and frequency beyond the conventional Heisenberg uncertainty limit on time-frequency resolution.

“When you say ‘cat’ and I say the word ‘cat’ it’s understandable, but our accents are slightly different,” explains Garudadri, whose native language is Telugu. “I wanted to know if there were acoustic signatures to tell them apart, and I quickly realized I needed more background in speech and audiology.” He worked with advisors in the UBC School of Audiology and Speech Sciences and the Department of Electrical Engineering to analyze and compare English sounds spoken by native and non-native language speakers for invariant patterns. He studied English, French and Telugu and discovered patterns in the acoustic signal that change meaning in some languages but not in others.

Garudadri spent the next 12 years establishing a successful career in the field of speech recognition, switching companies every three years as his interests changed. He began at the research arm of BCTel (the Canadian equivalent of a Baby Bell telephone company), developing speech recognition and audio coding. (At the time, BCTel had a vested interest in using speech recognition to reduce the number of operators in its call centers.) He moved to the Institut National de la Recherche Scientifique (INRS) in 1991 to develop a 60,000-word continuous, real-time digital speech recognizer for transcription. Those were the early days of commercial speech recognition systems.

In 1994, Garudadri was hired at a Boston-based start-up out of MIT called Voice Control Systems, which aimed to develop a speech-recognition system for the telephony network. His systems were deployed in many Baby Bell networks. Since that time, many companies specializing in speech recognition have been consolidated: VCS merged with another company and was eventually bought by Philips, which was bought by ScanSoft, which merged with Nuance, the company that helped Apple develop Siri. Some of Garudadri’s speech recognition code also made it into high-end automobiles.

San Diego Calling

From Boston, Garudadri made his way to San Diego to work for “a little company everyone was talking about”: Qualcomm, which had about 3,000 employees at the time. Two years after giving a presentation to Qualcomm on how cell phones might incorporate directory assistance services, Garudadri was asked to join its CDMA Technologies division in 1997 (by then the company had doubled in size). He spent the next 16 years there working in systems engineering but, of course, per his modus operandi, dabbling in several different projects along the way.

“I came to San Diego to put speech recognition technologies inside cell phones, first in English. What’s funny is that there were no native English speakers on our team, so we’d have our boss, who was a native speaker, test the systems we developed. At that time, the companies setting the pace for cell phone features were in Japan and Korea, so even though none of us could speak Japanese or Korean, we built systems in their languages anyway and they put them in their phones. We had the honor of having the most deployed speech recognizers in the world until Siri came along.”

When the traditional three-year mark was up, Garudadri switched fields to video processing technologies for Qualcomm, developing coding that would allow video playback on cell phones. Three years after that, in 2005, he contributed to the multimedia protocol standards that would make it possible for an iPhone to make a video call to an Android phone, for example, or for a Droid user to text a photo to a Samsung.

“I then stumbled on a field called Body Area Networks (BAN), and we wanted to stream music to ear buds wirelessly. I told Qualcomm I would help develop the audio codecs provided they would let me continue with biomedical signal processing such as sending an electrocardiography (ECG) signal from a person’s chest to his cell phone.”

Garudadri and his team worked with a New York-based medical device company called Welch Allyn to develop a wireless ECG prototype that provided “wired” resting-ECG quality in the presence of packet losses and motion artifacts (i.e., noise that can otherwise interfere with the signal). Qualcomm CEO Paul Jacobs gave a TEDMED talk about the device in 2009. Says Garudadri: “Medical professionals were very excited by the device because it was accurate. Everybody was asking, ‘When can we have it?’”
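
How the prototype actually achieved “wired” quality over a lossy wireless link isn’t described in the article. Purely as an illustration of the problem, a streaming ECG receiver could conceal short packet losses by interpolating across the missing samples; the sketch below is a hypothetical toy, not the Qualcomm/Welch Allyn method:

```python
# Hypothetical illustration only: conceal short packet losses in a
# streamed ECG by linear interpolation across the gap. The prototype
# described in the article is not publicly detailed.

def conceal_losses(samples):
    """samples: list of floats, with None where a packet was lost."""
    out = list(samples)
    i = 0
    while i < len(out):
        if out[i] is None:
            # find the end of the run of lost samples
            j = i
            while j < len(out) and out[j] is None:
                j += 1
            left = out[i - 1] if i > 0 else (out[j] if j < len(out) else 0.0)
            right = out[j] if j < len(out) else left
            gap = j - i + 1
            for k in range(i, j):
                frac = (k - i + 1) / gap
                out[k] = left + frac * (right - left)   # straight-line fill
            i = j
        else:
            i += 1
    return out

if __name__ == "__main__":
    # two lost samples between 0.2 and 0.8 are filled with 0.4 and 0.6
    print(conceal_losses([0.0, 0.2, None, None, 0.8, 1.0]))
```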

The device caught the attention of UC San Diego Computer Science and Engineering Chair Rajesh Gupta, who invited Garudadri to give a talk about the wireless ECG device at UC San Diego. This year, Garudadri was invited to join the Medical Systems and Device group in the university’s Electrical and Computer Engineering Department, and QI Director Ramesh Rao asked him to join the research faculty at the Qualcomm Institute, which is the UC San Diego division of the California Institute for Telecommunications and Information Technology (Calit2).

For Garudadri, the opportunity was too good to pass up. “I’d been part of a project at Qualcomm for a year on neuromorphic computing to build systems that mimic the human brain. But the prospect of developing technologies to improve healthcare delivery solidified the decision.”

Processing Signals, Protecting Lives

Garudadri isn’t wasting any time pursuing that passion: He was just awarded two grants from the Calit2 Strategic Research Opportunities (CSRO) program, one of which will look at improving computer vision technologies for object recognition and tracking.

“For the last six decades there have been lots of improvements in video and audio processing, but it’s all been work with bits coming in and bits going out,” Garudadri notes. “We want to broaden the scope by incorporating sensing and rendering of real world (analog) signals. There are many lessons from how humans generate and consume information that can be incorporated in machines and robots.”

He and his collaborator on the project, ECE Chair Truong Nguyen, have purchased a “liquid lens” for auto-focus and a low-power motion sensor inspired by the human retina, which responds to changes in the visual field. The lens can be used to augment robot vision with auto-focus to improve the quality of the images the robot is ‘seeing.’ “We are also investigating computer vision algorithms to characterize tremors in patients with neurological disorders,” he adds. “Currently, there are no accepted objective metrics to aid diagnosis and treatment of essential tremors.”
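
The retina-inspired sensor is described only at a high level: like the retina, it reacts to changes in the scene rather than to absolute brightness. A toy sketch of that change-detection idea on ordinary image frames follows; the threshold and the event format are assumptions for illustration, not the behavior of the actual device:

```python
# Toy illustration of retina-like change detection: report only the
# pixels whose brightness changed by more than a threshold between two
# frames. The real sensor is different hardware that signals changes
# per pixel and asynchronously; parameters here are illustrative.
import numpy as np

def change_events(prev_frame, curr_frame, threshold=10):
    """Return (row, col, polarity) for pixels that changed significantly."""
    diff = curr_frame.astype(np.int16) - prev_frame.astype(np.int16)
    rows, cols = np.where(np.abs(diff) > threshold)
    return [(int(r), int(c), int(np.sign(diff[r, c]))) for r, c in zip(rows, cols)]

if __name__ == "__main__":
    prev = np.zeros((4, 4), dtype=np.uint8)
    curr = prev.copy()
    curr[1, 2] = 50               # a bright spot appears
    print(change_events(prev, curr))  # -> [(1, 2, 1)]
```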

And then comes Garudadri’s most clinically directed project to date: developing a football helmet embedded with EEG (electroencephalography) sensors and accelerometers to measure how tackles and other high-impact hits affect the human brain.

The long-term consequences of concussions and other football-related injuries are drawing increasing attention from the media and the public following the publication of the book “League of Denial” and the subsequent documentary film about traumatic brain injury in the National Football League.

For Garudadri’s project, which has received $50,000 in CSRO funds, he will work with sports medicine specialists Dr. Naznin Virji-Babul of the University of British Columbia and Dr. Amelia Eastman of the UC San Diego Health System to determine if there’s a way to develop “some kind of intelligence as to whether a player should stay out of the game or if it’s safe for them to go back in.” They will be testing the helmet on middle school and high school ice hockey and football players as a proof of concept for further research on helmets that could be used to monitor and minimize impact on the brain.
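
The decision logic for keeping a player out of a game is exactly what the research aims to establish, so nothing below reflects the team’s actual criteria. As a purely hypothetical illustration of the accelerometer side, one could flag impacts whose peak resultant acceleration exceeds an assumed threshold:

```python
# Hypothetical illustration only: flag head impacts whose peak resultant
# acceleration exceeds a threshold. The helmet's real sensors, sampling
# rate and decision criteria are not described in the article.
import math

IMPACT_THRESHOLD_G = 60.0   # assumed flag level in g; purely illustrative

def peak_resultant_g(samples):
    """samples: list of (ax, ay, az) accelerations in g."""
    return max(math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples)

def flag_impact(samples, threshold_g=IMPACT_THRESHOLD_G):
    peak = peak_resultant_g(samples)
    return peak, peak >= threshold_g

if __name__ == "__main__":
    hit = [(2.0, 1.0, 9.0), (40.0, 30.0, 50.0), (5.0, 2.0, 8.0)]
    print(flag_impact(hit))   # peak ~70.7 g -> flagged
```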

The project has also received $20,000 from Canada’s Natural Sciences and Engineering Research Council, $30,000 from the UC San Diego Center for Brain Activity Mapping and another $30,000 from the company MaXentric through the Department of Defense’s Small Business Innovation Research program. The team is awaiting news about whether it will also receive funds from the NFL’s Head Health Challenge.

“The Qualcomm Institute has the expertise in sensing, circuit design, signal processing and wireless technologies, and the UC San Diego School of Medicine is on the cutting edge of translating innovations into the clinical flow,” notes Garudadri. “This combination is perfect for us to do this kind of work. I’m very happy to work here.”
