Introduction:

Speech understanding in background noise has been extensively studied in both individuals with normal hearing and individuals with hearing loss, and studies investigating speech perception in background noise in infants are now emerging. Infants' speech and language development occurs concurrently with brain development and depends on their exposure to the spoken language around them: the more spoken language infants are exposed to, the more communication skills they develop. However, many sounds in the environment overlap with speech sounds.

This paper outlines current research on infants' listening and speech understanding in background noise across various types of speech stimuli; cortical and brainstem responses to speech discrimination in noise; the subtle visual, acoustic, and other perceptual cues infants use to achieve better speech perception; and some of the factors that influence speech discrimination in noise in children with hearing loss.

Types of background noise:

Infants are exposed to a whole range of sounds in their environment.

Their ability to discriminate speech from other background sounds depends on how well their brains can group sounds into different sound sources. The infant brain continues to develop after birth, so environmental factors play an important role in neural development. Infants encounter two different types of masking, in which speech information is masked by surrounding non-speech sounds or babble. The first type is energetic masking, where the acoustic energy of the background noise directly masks the target speech, i.e., the spectrum of the noise is large and broad enough to cover the spectrum of the speech.

The second type of masking is referred to as informational masking, where attention or perception is drawn away by the presence of the background noise. In this type of masking, the background noise may be more interesting than the target speech, so the infant loses attention to the target.
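
As a point of reference for the signal-to-noise ratio (SNR) values cited throughout this paper, the sketch below shows how a target signal can be mixed with broadband noise at a chosen SNR. It is a hypothetical illustration, not taken from any of the studies discussed: the sine tone standing in for speech and the white-noise masker are assumptions made purely for exposition.

```python
import numpy as np

def mix_at_snr(target, noise, snr_db):
    """Scale `noise` so the target-to-noise power ratio equals `snr_db`."""
    p_target = np.mean(target ** 2)                 # average power of the target
    p_noise = np.mean(noise ** 2)                   # average power of the noise
    # SNR_dB = 10 * log10(P_target / P_noise)  ->  required noise power:
    p_noise_wanted = p_target / (10 ** (snr_db / 10))
    return target + noise * np.sqrt(p_noise_wanted / p_noise)

fs = 16000                                          # sampling rate in Hz
t = np.arange(fs) / fs                              # one second of signal
target = np.sin(2 * np.pi * 1000 * t)               # placeholder "speech" tone
masker = np.random.randn(fs)                        # broadband (energetic) masker
mixture = mix_at_snr(target, masker, snr_db=10.0)   # e.g., a 10 dB SNR condition
```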

Infants' response to their own name:

Many studies have evaluated infants' responses to their own names in the presence of background babble. Newman (2009) reported that by 9 months of age infants can discriminate and identify the phonetic details of their own name against multi-talker background speech. A more recent study by Bernier and Soderstrom (2018) used background babble more like that encountered by infants in everyday life and found that infants as young as 5 months of age were able to discriminate their name at a signal-to-noise ratio of 10 dB. Unlike Newman's findings, Bernier and Soderstrom's findings indicate that by 9 months of age infants can distinguish speech other than their own name in the presence of more naturalistic multi-talker babble. This study also examined infants' preference for linguistic information by recording their responses to words similar to their own names: a stress-matched name and a non-matched name, one with no phonetic or phonemic resemblance to the infant's name. Infants at both 5 and 9 months of age preferred their own name over its stress-matched counterpart but not over a non-matched name. This could mean that infants find it easier to discriminate between words when the differences are subtle than when the two words are completely different, even though the child's auditory experience with his or her own name far exceeds that with the non-matched name. If the non-matched name is acoustically more appealing in terms of its pattern or prosody, it might draw more attention than the infant's own name.

More research in this area is needed to further analyze children's responses to their own names against target words that differ in linguistic detail. The study also examined the age at which infants begin to discriminate their own name. In quiet conditions, previous research has shown that infants recognize their own name by 4.5 months of age (Mandel, 1995) and discriminate it by 6 months of age. In the presence of background noise, however, infants show only some sensitivity to the phonetic aspects of their name by 9 months of age, which could be due to the still-developing infant brain. More research on infants' responses to their names is needed to generalize this finding. A longitudinal study assessing the ability to recognize and discriminate phonetic details in background noise could shed more light on when infants or children fully develop the skill to discriminate their own names from other words in noise.

Cortical responses in background noise:

It is well established that children require a higher signal-to-noise ratio (SNR) than adults to perform well under adverse listening conditions. This can be attributed to differences in the maturation of brain structures. As discussed earlier, the type of masker plays an important role in determining the SNR that infants or young children need for good speech understanding. Several studies have examined the neural underpinnings of speech understanding in background noise and their implications. Cortical auditory-evoked potentials (CAEPs) are often used to study maturation effects in infants: infants show a positive peak at about 200 ms followed by a variable, broad negative trough at approximately 250-450 ms. In contrast, older children and adults with mature auditory systems show a triad response, with a positive peak at 50 ms, a negative trough at 75 ms, and another positive peak at 200 ms.

A study by Small, Sharma, Bradford, and Vasuki (2018) examined the effects of different SNRs on CAEPs elicited with speech stimuli. The authors found that CAEP responses in infants depend on the type of stimulus, namely low-frequency (/m/) versus high-frequency (/t/) stimuli. They concluded that increasing the intensity of the background noise (white noise) resulted in decreased amplitudes and longer latencies for low-frequency stimuli in infants. It is important to note that the site of electrode placement can also affect the responses elicited in infants. Small et al. (2018) found that infants required at minimum a 10 dB better SNR than adults for understanding low-frequency speech stimuli in noise, in line with behavioral studies that report a similar increase in the SNR infants need under adverse listening conditions. However, why cortical response changes were observed across SNR conditions for low-frequency but not high-frequency stimuli needs further analysis.

The implications of these results for infants' behavioral responses to varying SNRs, and whether cortical responses can predict infants' behavioral performance in adverse listening conditions, remain to be explored. Billings, McMillan, Penman, and Gille (2013) reported that in adults the cortical potentials elicited at 5 dB SNR for a low-frequency stimulus presented at 70 dB SPL predicted behavioral performance of 50% of sentences correct in background noise. It is important to analyze the correlation between perceptual and cortical responses, as 25% of infants with absent cortical responses in the aided condition actually perceived the stimulus (Chang et al., 2012). It could also be that the maturing infant brain yields cortical responses that cannot be considered reliable predictors of speech-in-noise performance. More research is needed before the implications of cortical responses to speech in noise in infants can be interpreted with confidence.
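
Combining two of the figures cited above gives a rough sense of what the infant-adult difference could mean in practice. The arithmetic below is an illustrative extrapolation under the assumption that the 10 dB penalty from Small et al. (2018) can simply be added to the adult figure from Billings et al. (2013); it is not a value reported by either study.

```python
# Illustrative arithmetic only: assumes the infant SNR penalty adds directly
# to the adult figure, which neither study measured as such.
signal_level_db_spl = 70.0   # presentation level of the low-frequency stimulus
adult_snr_db = 5.0           # SNR linked to ~50% sentences correct in adults
infant_penalty_db = 10.0     # minimum extra SNR infants appear to require

infant_snr_db = adult_snr_db + infant_penalty_db            # ~15 dB SNR
adult_noise_db_spl = signal_level_db_spl - adult_snr_db     # noise at ~65 dB SPL
infant_noise_db_spl = signal_level_db_spl - infant_snr_db   # noise at ~55 dB SPL

print(f"Adult condition:  {adult_snr_db:.0f} dB SNR (noise ~{adult_noise_db_spl:.0f} dB SPL)")
print(f"Infant condition: {infant_snr_db:.0f} dB SNR (noise ~{infant_noise_db_spl:.0f} dB SPL)")
```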

Brainstem responses to speech in noise:

Background noise affects different parts of the speech signal in different ways. Stop consonants, in which a sudden stoppage of airflow is followed by its release, are more likely to be affected by the random acoustic content of background noise. Vowels, on the other hand, are longer and periodic, and therefore less susceptible to the randomness present in noise, providing more opportunities for analyzing and encoding the speech signal (Miller and Jusczyk, 1989). Behavioral measures have thus far been used to identify speech-processing deficits in noise, but these measures are of limited benefit with non-verbal children and have reduced objectivity. The complex auditory brainstem response (cABR) has been deemed a reliable measure for assessing speech-processing deficits in infants and young children. A study by Musacchia, Ortiz-Mantilla, Roesler, Rajendran, Morgan-Byrne, and Benasich (2018) examined brainstem responses to speech in noise in younger and older infants. The authors concluded that infants aged 7-12 months were more vulnerable than infants aged 12-24 months to the disruption of the fundamental frequency of vowel sounds by noise. Differences between the two groups in noise effects on high-frequency consonant discrimination could not be established, as both groups showed a similar reduction in response amplitude. Waveform morphology was also more coherent in the older group, which may reflect maturational differences in speech-in-noise processing. Maturation of the brainstem is thought to continue after the first year of life, meaning that auditory processing, especially in background noise, continues to improve (Musacchia et al., 2018). Speech-in-noise deficits could be used to predict language outcomes in children; cABR could serve this purpose if further research establishes its reliability in predicting language outcomes, aiding early identification of and intervention for speech-processing deficits. Larger-scale research is also needed to establish normative values for cABR amplitudes and latencies in infants of different age ranges, so that infants falling outside the normal range can be identified.
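
The acoustic advantage that periodicity confers can be illustrated with a small simulation; this is a simplified stand-in constructed for this paper, not an analysis from Musacchia et al. (2018). Even when a vowel-like periodic signal is buried in broadband noise, its fundamental frequency can still be recovered by autocorrelation, because the noise, being random, does not add up coherently at the signal's period.

```python
import numpy as np

fs = 16000                                   # sampling rate in Hz
t = np.arange(int(0.2 * fs)) / fs            # a 200 ms segment
f0 = 120.0                                   # fundamental frequency of the "vowel"
vowel = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 6))  # crude harmonic stack
noisy = vowel + 1.0 * np.random.randn(len(t))                         # add broadband noise

# Autocorrelation-based F0 estimate over a plausible pitch range (80-300 Hz)
ac = np.correlate(noisy, noisy, mode="full")[len(noisy) - 1:]
lag_lo, lag_hi = int(fs / 300), int(fs / 80)
best_lag = lag_lo + np.argmax(ac[lag_lo:lag_hi])
print(f"Estimated F0: {fs / best_lag:.1f} Hz (true F0: {f0:.1f} Hz)")
```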

Use of visual cues:

Individuals of all ages use visual cues, consciously or unconsciously, for speech understanding, and these cues are especially helpful when background noise is present. However, reliance on visual cues varies with age. From birth until about six months of age, infants pay attention to the speaker's eyes as their visual system undergoes tuning and they become more sensitive to human face patterns (Leppanen, 2016). After this, infants begin to attend selectively to the speaker's mouth as their own production of speech sounds increases. Król (2018) reported that in the presence of background noise, infants increase their fixations on the area around the mouth to compensate for the uncertainty in speech understanding. This increase in mouth fixations comes at the price of decreased attention to other areas of the face. The same study found that when the background noise level was moderate, infants immediately shifted their attention from the eyes, which provide social cues, to the mouth area to improve speech understanding. When the background noise was high, decreased attention was noted not only to the eyes but also to the rest of the face. Infants unconsciously pay more attention to the mouth area, which provides linguistic cues in adverse listening conditions and aids speech understanding. However, this decreased attention to the eyes in the presence of background noise may be detrimental to the development of social behavior, especially in children with language disorders who rely more on linguistic cues (Król, 2018). This decreased attention to the eyes may play a role in some of these infants developing an autism spectrum condition. Klin, Jones, Schultz, Volkmar, and Cohen (2002) found that infants and children diagnosed with autism spectrum disorders paid less attention to the eyes, and among these children, those who attended more to the mouth area developed better social skills than children who fixated on other objects rather than the speaker's face. This attention to the mouth indicated an interest in communication, which led to better social functioning.

More research is needed to analyze how detrimental this trade-off between eye and mouth fixations in background noise is for infants. Whether the trade-off is a cause or a result of decreased social functioning, and which genetic and environmental factors determine why some infants are affected by it while others are not, remains to be evaluated.

Infants reach a point in childhood when they no longer fixate on the mouth, because they have become linguistically competent enough that visual cues play only a minimal role in speech understanding under ordinary conditions. This pattern is reversed in older populations, who rely more on visual cues for speech understanding because of hearing impairment or declining cognitive and speech-processing abilities. That said, Król (2018) found that infants with greater linguistic ability, evaluated using word-recognition scores, focused more on the mouth than on the eyes or other areas of the face. Although this may seem to contradict the observation that infants reduce their reliance on the mouth once they become linguistically competent, until they reach that competency, infants with a greater interest in language acquisition focus their attention more on the mouth area, which leads to improved language abilities.

Vocal characteristics and speech in noise performance:

Adults can use differences in vocal characteristics among talkers to segregate what they want to hear from background babble or side conversations. One cue adults use to separate target speech from noise is the difference in vocal characteristics between male and female talkers (Newman & Morini, 2017). Differences in fundamental frequency and in vocal tract size and shape between male and female talkers may provide subtle cues for distinguishing target speech from background noise when the masker and target differ in talker sex. Leibold, Buss, and Calandruccio (2018) studied infants and adults to evaluate which group can use differences in the sex of the masker and target talkers for speech discrimination in background babble. They reported that infants within the first year of life are not able to use sex-mismatch cues to differentiate the target from the masker speech, indicating that infants are still learning to recognize voice patterns and that this skill does not develop within the first year of life. They also reported that school-age children are able to use vocal differences between male and female talkers to achieve better speech discrimination in background noise (Leibold et al., 2018), arguing that the skill develops somewhere between infancy and childhood, even though its magnitude is larger in adults than in children. A longitudinal study would provide more insight into when the skill reaches its maximum and whether it deteriorates with age. How much this cue contributes to speech-in-noise perception relative to other perceptual cues also needs to be evaluated to determine how important sex-mismatch cues are for speech discrimination in noise. Target and masker talkers do not often differ in sex, so the cue's importance for speech perception in adverse listening conditions is questionable.

Onset asynchrony cues:

When individuals are exposed to multiple sound sources, a process of segregation takes place from the periphery to higher centers in the brain, which work together to encode the stimuli. Temporal processing is one such higher-order function. Onset asynchrony is a temporal cue that can be used to separate the target from background babble; it exploits differences in when competing speech stimuli begin, since it is highly unlikely that two stimuli start at exactly the same instant. Oster and Werner (2018) examined whether infants use onset asynchrony cues to segregate different stimuli. Their study involved infants aged 3 months and 7 months and evaluated their ability to segregate two competing vowel sounds. They argued that infants at both 3 and 7 months of age use onset asynchrony cues to segregate vowel sounds (Oster & Werner, 2018); in particular, 7-month-olds were able to use these cues in a manner similar to adults. A timeline for the development of onset asynchrony use can therefore be put together, incorporating results from Bendixen et al. (2015), who found that newborns do not use onset asynchrony cues for sound segregation even though they have access to them. The ability to use the onset asynchrony cue develops by 3 months of age and is mastered at an adult level by 7 months of age (Oster & Werner, 2018).

Earlier in this paper, it was noted that infants do not make use of differences in vocal characteristics, specifically the fundamental frequency differences between male and female talkers. However, they do use temporal cues such as onset asynchrony to an extent similar to adults. Oster and Werner (2018) also noted that infants in their study required a higher target-to-masker ratio (TMR): the level of the target signal had to be almost 12 dB higher than the masker for infants to segregate vowels using onset asynchrony cues. Their performance in difficult listening conditions needs to be explored further. Even though the ability to use this temporal cue is as developed in 7-month-old infants as in adults, infants require a higher TMR than adults, who can use the cue at lower TMRs as well. The factors contributing to this difference need to be evaluated.
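
To put the reported ~12 dB figure in perspective, the short calculation below converts the target-to-masker ratio from decibels into linear amplitude and power ratios. It is an illustrative aside added here for exposition, not part of Oster and Werner's analysis.

```python
infant_tmr_db = 12.0                             # approximate TMR reported for infants
amplitude_ratio = 10 ** (infant_tmr_db / 20)     # ~4.0x the masker's amplitude
power_ratio = 10 ** (infant_tmr_db / 10)         # ~15.8x the masker's power
print(f"At {infant_tmr_db:.0f} dB TMR the target is ~{amplitude_ratio:.1f}x the masker "
      f"in amplitude and ~{power_ratio:.1f}x in power; at 0 dB TMR they would be equal.")
```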
