Matt Winn, Au.D., Ph.D.
University of Washington

Research

Here you will find information about the Listen Lab at the University of Washington in Seattle. The lab is supported by the National Institute on Deafness and Other Communication Disorders (NIDCD) at the NIH.

Matt Winn directs the Listen Lab. He previously worked at the Waisman Center, primarily in the Binaural Hearing and Speech Lab and the Learning to Talk Lab, and was also affiliated with the Department of Surgery. He is supported by the NIH Division of Loan Repayment.

These are the main areas of research in the lab:

Listening effort

Cochlear implants

Speech perception

Acoustic cue weighting

Binaural hearing

Data visualization


Listening effort


People with hearing impairment are known to report elevated levels of listening effort for routine activities, such as going to work and engaging in conversation. As a result, by the end of the day there can be little energy left for socialization, recreational activities or adventure. Audiologists frequently hear stories from patients who once enjoyed fun things like the theater, dining out, game night, church and the comedy club, but who now think it’s simply not worth the hassle. Numerous publications suggest that listening effort likely plays a role in the increased prevalence of sick leave, unemployment, and early retirement among people with hearing impairment.
At the Listen Lab, we evaluate listening effort by measuring pupil dilation, which for decades has been regarded as a reliable index of cognitive load.
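For readers curious about the mechanics: a pupillometry analysis typically begins by normalizing each trial's pupil trace against a pre-stimulus baseline, so that dilation reflects the effort evoked by the sentence rather than overall pupil size. Here is a minimal sketch in R (the data frame, column names, and baseline window are hypothetical, not our actual pipeline):

```r
# Minimal sketch of baseline correction for pupillometry data.
# Assumes a data frame `pupil_data` with columns: subject, trial,
# time (seconds, 0 = stimulus onset), and pupil (diameter in mm).
library(dplyr)

baseline_corrected <- pupil_data %>%
  group_by(subject, trial) %>%
  mutate(
    # mean pupil size in a pre-stimulus window (here, -0.5 to 0 s)
    baseline = mean(pupil[time >= -0.5 & time < 0]),
    # dilation relative to baseline: the usual index of effort
    dilation = pupil - baseline
  ) %>%
  ungroup()

# average the evoked response over trials for each subject
mean_response <- baseline_corrected %>%
  group_by(subject, time) %>%
  summarize(dilation = mean(dilation), .groups = "drop")
```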


We are interested in the factors that affect listening effort, and how effort can carry forward to other kinds of functioning. This includes the absolute level of effort and engagement in listening, the speed of understanding sentences, and the amount of time a person needs to spend thinking about a sentence, trying to restore words that were misheard.

There are SO MANY questions that remain to be explored. For example:

Does the sound distortion produced by a cochlear implant demand that a person use greater effort to understand speech?

How does effort get deployed over the course of a sentence and throughout a conversation?

How do effort and language processing interact?

Can we change the processing parameters of a CI or a hearing aid so that effort is reduced?

Can listening effort measures be a tool to capture benefits or weaknesses of listeners for whom absolute percent-correct scores do not tell the whole story?


Cochlear implants

Cochlear implants (CIs) provide a sensation of hearing to people who have severe to profound deafness and who choose to participate in the hearing/speaking community. The microphone and speech processor receive sound through the air and convert it into a sequence of electrical pulses that stimulate the auditory nerve. This is designed to parallel the normal process of hearing, in which mechanical movements in the ear are translated into electrical stimulation for the nerves.
The Listen Lab research on CIs focuses on the representation of spectral (frequency) information, which is known to be severely degraded. We are interested in the perceptual consequences of degraded spectral resolution in terms of success in speech perception, acoustic cue weighting, and the effort required to understand speech.
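To give a flavor of how degraded spectral resolution is often simulated in this line of research, here is a sketch of a classic noise vocoder in R. This illustrates the general technique only; it is not the processing in any actual implant, and the channel count and filter settings are arbitrary:

```r
# Sketch of a noise vocoder: split speech into frequency channels,
# keep each channel's temporal envelope, discard its fine structure.
library(signal)  # for butter() and filtfilt()

vocode <- function(x, fs, cutoffs) {
  out <- numeric(length(x))
  lp  <- butter(2, 300 / (fs / 2), type = "low")  # envelope smoother
  for (i in seq_len(length(cutoffs) - 1)) {
    # band-pass filter one analysis channel
    bp   <- butter(4, c(cutoffs[i], cutoffs[i + 1]) / (fs / 2), type = "pass")
    band <- filtfilt(bp, x)
    # extract the envelope: rectify, then low-pass filter
    env <- filtfilt(lp, abs(band))
    env[env < 0] <- 0
    # re-impose the envelope on band-limited noise
    out <- out + env * filtfilt(bp, runif(length(x), -1, 1))
  }
  out / max(abs(out))  # normalize
}

# e.g., 4 channels spanning 200-7000 Hz (boundaries chosen arbitrarily):
# y <- vocode(x, fs = 22050, cutoffs = c(200, 560, 1400, 3200, 7000))
```

Fewer channels mean coarser spectral detail, which is one way to approximate what CI listeners experience.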


Using speech to learn about the auditory system

A central technique of the lab is to use speech sounds to learn about the auditory system. Speech contains a variety of acoustic components that stimulate the auditory system in the spectral, temporal, and spectro-temporal domains. There are so many dimensions to explore! By exploiting these properties of speech, we can learn a lot about the corresponding properties of the auditory system.

Such exploration is typically done with non-speech sounds in psychoacoustic experiments. There is a rich history of psychoacoustics, but a surprising lack of connection to real components of speech sounds. We hope to bridge that gap and, in the process, learn something valuable about how to understand the perception of speech by people with normal hearing and by people with hearing impairment.


For any particular acoustic cue that you’re interested in, there is usually at least one speech contrast that depends on that cue. For example, voice onset time (the brief delay between the release of a consonant and the onset of voicing) is a primary cue distinguishing “ba” from “pa”.


Acoustic cue weighting

With so many aspects of speech changing at the same time, listeners can adopt different “strategies” to identify speech sounds.
A good analogy to understand cue weighting is the different cues at a traffic light.

[Images: a green traffic light and a red traffic light]
Different people can use different cues (for example, the color of the light or its position) to obtain the same exact information – that it’s okay to go, or that it’s time to stop. Other cues include seeing other cars move around you, or hearing blaring horns from the cars behind you (but not in the Midwest!)

The Listen Lab studies how multiple cues in speech sounds can be decoded by people who try to identify the speech. This is particularly important for understanding hearing impairment, which can force some people to tune in to cues that are different than the ones used by people with normal hearing.

Some of my previous work with Monita Chatterjee and Bill Idsardi suggests that listeners increase reliance upon temporal cues when frequency resolution is degraded. This has particular implications for people who use cochlear implants, because they are known to experience especially poor frequency resolution.
Ongoing work explores how cue weighting is connected with listening effort.
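For the technically inclined: one common way to quantify cue weighting is logistic regression, where the size of each cue's standardized coefficient indexes how heavily the listener relies on that cue. Here is a minimal sketch in R, assuming trial-level responses to a hypothetical two-cue continuum (the variable names are placeholders):

```r
# Sketch: estimate cue weights from a phoneme-labeling experiment.
# Assumes `resp` is a data frame with one row per trial:
#   spectral, temporal - values of the two acoustic cues
#   said_pa            - 1 if the listener reported "pa", 0 for "ba"
fit <- glm(said_pa ~ scale(spectral) + scale(temporal),
           data = resp, family = binomial)

# larger |coefficient| = heavier reliance on that cue; comparing the
# two coefficients summarizes the listener's weighting strategy
coef(fit)
```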


Binaural Hearing

Binaural hearing refers to the coordination of both ears to learn about sounds in the environment. It's more than just hearing on the left and right; it's the ability to know where sounds are coming from, and to distinguish one sound from a background of noise. Successful binaural hearing relies on some of the fastest and most precise neural coding in the brain.

At the Listen Lab, we explore binaural hearing sensitivity using high-speed eye-tracking methods that reveal the speed, certainty, and reliability of judgments about sound cues. We hope to use this paradigm to test basic psychoacoustic abilities, especially in populations that are difficult to test (e.g., children), in cases where it is difficult to coordinate the two ears (e.g., hearing with cochlear implants), or in cases where the binaural system might have been damaged by traumatic brain injury or blast exposure.


Statistical modeling and data visualization

Statistical modeling is an essential part of any research. In the lab, we are enthusiastic about finding the most effective ways to describe data and the behavior that gives rise to it. The main tools in my work include generalized linear mixed-effects (hierarchical) models and growth curve analysis.
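As a quick illustration, here is a generic sketch of both kinds of models using the lme4 package in R; the formulas and variable names are placeholders, not a specific analysis from the lab:

```r
library(lme4)

# Generalized linear mixed-effects model: word recognition accuracy
# as a function of condition, with random intercepts and slopes by
# listener (hypothetical variables: correct, condition, subject)
m1 <- glmer(correct ~ condition + (1 + condition | subject),
            data = d, family = binomial)

# Growth curve analysis: model a time course (e.g., a pupil response)
# with polynomial terms for time, crossed with condition
m2 <- lmer(dilation ~ poly(time, 2) * condition + (1 | subject),
           data = d)
```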


Thoughtful data visualization is an effective way to help others understand your research. I strive to create visualizations that reveal unexpected patterns and convey information in ways that facilitate learning.

To create visualizations, I prefer to use the ggplot2 package in the R programming environment.
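For example, a few lines of ggplot2 can show every subject's time course alongside the group average, keeping individual patterns visible. This sketch reuses the hypothetical mean_response data frame from the pupillometry example above:

```r
library(ggplot2)

# individual subjects as faint lines, with the group mean on top
ggplot(mean_response, aes(x = time, y = dilation)) +
  geom_line(aes(group = subject), alpha = 0.3) +
  stat_summary(fun = mean, geom = "line", linewidth = 1.2) +
  labs(x = "Time (s)", y = "Pupil dilation (mm)") +
  theme_minimal()
```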

[Example figures: spectral tilt continuum group comparisons, effort release plot, temporal cues, MDT, slope differences]