jkenny
Full Member
Posts: 83
About Me: Audio equipment designer forever in pursuit of more realistic & engaging music reproduction purely because of the extra enjoyment of music created by such reproduction.
http://Ciunas.biz
|
Post by jkenny on Aug 29, 2019 8:48:44 GMT 10
I thought this video by James Johnston (J_J on most audio forums) was a useful overview (with plenty of detail) of the mechanics of hearing (not the ASA/brain side). J_J is a well-respected researcher in psychoacoustics and was behind the development of lossy codecs that use psychoacoustic masking (MP3, etc.).

One thing to note in this video is how often he states how unsure research is about the actual physical mechanics of how hearing works; he often stresses that what he is showing is an explanatory model, NOT how the ear actually does it.

One big issue he doesn't touch on is the now almost universally accepted notion that a travelling-wave mechanism in the fluid of the inner ear is the starting point for the physics, and that everything else (the frequency filters, i.e. ERB, equivalent rectangular bandwidth, etc.) follows from that. But there are many unknowns which can't be explained using this as the starting point. I've read about another model for the underlying mechanism which involves Bernoulli fluid mechanics rather than a travelling wave; see the PDF book "Applying physics makes auditory sense". This excerpt may pique interest and show that this idea is not just some crackpot's notion of a different paradigm; it has historic significance.

On a more general point, it emphasises how we should always be open to questioning scientific models. This may all be a bit down in the weeds, but some may be interested in it?
|
|
|
Post by ROWUK on Sept 5, 2019 4:29:00 GMT 10
I do wonder however how much of the "original" pressure wave gets translated into what we "hear". Can we be sure that it is possible to turn off a lifetime of "interpretation" to hear absolutely? How is our hearing augmented by tactile sensation?
Many musicians even "hear" with their eyes - prejudices against specific instruments or finish (gold plated trumpets for instance) are very common...
|
|
|
Post by Audiophile Neuroscience on Sept 5, 2019 8:25:01 GMT 10
I do wonder however how much of the "original" pressure wave gets translated into what we "hear". Can we be sure that it is possible to turn off a lifetime of "interpretation" to hear absolutely? How is our hearing augmented by tactile sensation? Many musicians even "hear" with their eyes - prejudices against specific instruments or finish (gold plated trumpets for instance) are very common...

Cross-modal interactions and multisensory integration are well-known phenomena, such as the cooperation between sight and hearing in the McGurk effect, "eating with our eyes", or taste by smell. These usually take the form of enhancement, but they can work the other way round, or create illusions. "Hearing with our eyes" is of course sometimes raised as a pejorative in audio circles, but it relates more to expectation bias, in just the same way as "deafness" can result from looking at a set of measurements.

David ----------------------------------------------------------------------------------------------------------------------------- "All music is folk music. I ain't never heard no horse sing a song." - - Louis Armstrong
|
|
jkenny
Full Member
Posts: 83
About Me: Audio equipment designer forever in pursuit of more realistic & engaging music reproduction purely because of the extra enjoyment of music created by such reproduction.
http://Ciunas.biz
|
Post by jkenny on Sept 6, 2019 16:08:18 GMT 10
I do wonder however how much of the "original" pressure wave gets translated into what we "hear". Can we be sure that it is possible to turn off a lifetime of "interpretation" to hear absolutely? How is our hearing augmented by tactile sensation? Many musicians even "hear" with their eyes - prejudices against specific instruments or finish (gold plated trumpets for instance) are very common...

To go a bit deeper into this question: all our perceptions are useful constructions, not necessarily a true reflection of the outside world, so the question of hearing "absolutely" is moot. Our hearing does not work internally like a microphone recording, just as our vision does not work like a camera; we construct our perceptions internally.

For instance, about one third of our brain is devoted to visual perception. This would not be needed if vision were simply a movie of the external world playing inside our head; the need for such vast brain processing comes from the analysis and interpretation that must be performed on the nerve signals arriving from our eyes.

The same applies to hearing: our auditory perception is a construct, and that construct uses all sorts of tricks to do its analysis, including prediction and multi-modal signals. As to the higher level of bias that you mention, I doubt we can avoid it completely.

There's an even deeper philosophical question which Donald Hoffman proposes: our perceptions are just a hack, and we don't actually see reality, just a version that is useful to us. It's like looking at a map of the underground: it's not a true version of the reality of the underground, just a useful representation that allows us to navigate it. But that's a whole 'nother topic.
|
|
|
Post by Audiophile Neuroscience on Sept 8, 2019 16:02:33 GMT 10
STC, I saw this question in my notifications (the post was later deleted):

The thread The mechanics of hearing that you participated in has been updated by STC. I have a general question about the auditory cortex. 1) I know where it is located but is it located exactly in the center? 2) is there a difference in the distance between the left and...

...I think the other question related to time differences, if any, between the left and right auditory pathways. Anyway, as far as I know, the left and right auditory pathways are anatomically symmetrical and have similar topographical and tonotopic (frequency) representations (maps). The human primary auditory cortex is located in the superior temporal gyrus (of the temporal lobe) and is surrounded by secondary auditory cortex, the so-called belt and parabelt regions. So these areas are not "in the center", but the difference between left and right, and the time for neural conduction, should be the same, again AFAIK.

If you look at evoked potentials, a test neurologists perform at different levels of the nervous system, in the auditory area there are the ABR (Auditory Brainstem Response) and OAE (OtoAcoustic Emissions). The ABR is about 10 ms.

Auditory Brainstem Response Audiometry
Updated: Mar 12, 2019
Author Neil Bhattacharyya, MD Associate Professor of Otology and Laryngology, Harvard Medical School; Consulting Surgeon, Department of Surgery, Division of Otolaryngology, Brigham and Women's Hospital
Neil Bhattacharyya, MD is a member of the following medical societies: American Academy of Otolaryngology-Head and Neck Surgery, American Bronchoesophagological Association, American College of Surgeons, American Medical Association, American Rhinologic Society, Society of University Otolaryngologists-Head and Neck Surgeons, The Triological Society
Disclosure: Nothing to disclose. Specialty Editor Board Francisco Talavera, PharmD, PhD Adjunct Assistant Professor, University of Nebraska Medical Center College of Pharmacy; Editor-in-Chief, Medscape Drug Reference
Overview
Auditory brainstem response (ABR) audiometry is a neurologic test of auditory brainstem function in response to auditory (click) stimuli. First described by Jewett and Williston in 1971, ABR audiometry is the most common application of auditory evoked responses. Test administration and interpretation are typically performed by an audiologist. This article provides an overview of the test and its most common applications. For purposes of clarity and brevity, specialized ABR techniques and more technical issues have been omitted.
ABR audiometry refers to an evoked potential generated by a brief click or tone pip transmitted from an acoustic transducer in the form of an insert earphone or headphone. The elicited waveform response is measured by surface electrodes typically placed at the vertex of the scalp and ear lobes. The amplitude (microvoltage) of the signal is averaged and charted against the time (millisecond), much like an EEG. The waveform peaks are labeled I-VII. These waveforms normally occur within a 10-millisecond time period after a click stimulus presented at high intensities (70-90 dB normal hearing level [nHL]). (See image below.)
Normal adult auditory brainstem response (ABR) audiometry waveform response.
Although the ABR provides information regarding auditory function and hearing sensitivity, it is not a substitute for a formal hearing evaluation, and results should be used in conjunction with behavioral audiometry whenever possible.
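The averaging step described above is what pulls the sub-microvolt ABR out of ongoing EEG noise that dwarfs it. A minimal Python sketch of the idea follows; the waveform template, noise level, sampling rate, and epoch count are invented for illustration (a real ABR has waves I-VII, not one Gaussian bump), but the principle (noise cancels by roughly the square root of the number of time-locked epochs) is the same:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 20_000                              # sampling rate in Hz (assumed)
t = np.arange(int(0.010 * fs)) / fs      # 10 ms post-click window

# Invented stand-in for a single ABR peak (~0.5 uV near 5.5 ms)
template = 0.5 * np.exp(-((t - 0.0055) / 0.0004) ** 2)

# Each click presentation yields the tiny response buried in EEG
# noise roughly ten times larger than the signal itself.
n_epochs = 2000
epochs = template + rng.normal(0.0, 5.0, size=(n_epochs, t.size))

# Averaging time-locked epochs reduces the noise by ~sqrt(n_epochs),
# so the peak becomes visible even though no single sweep shows it.
avg = epochs.mean(axis=0)

peak_ms = 1000 * t[np.argmax(avg)]
print(f"recovered peak latency: {peak_ms:.2f} ms")  # should land near 5.5 ms
```

With 2000 sweeps the residual noise drops from 5 to about 0.11 (5/sqrt(2000)), which is why clinical ABR protocols average over a thousand or more click presentations.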
Physiology
Auditory brainstem response (ABR) audiometry typically uses a click stimulus that generates a response from the basilar region of the cochlea. The signal travels along the auditory pathway from the cochlear nuclear complex proximally to the inferior colliculus. ABR waves I and II correspond to true action potentials. Later waves may reflect postsynaptic activity in major brainstem auditory centers that concomitantly contribute to waveform peaks and troughs. The positive peaks of the waveforms reflect combined afferent (and likely efferent) activity from axonal pathways in the auditory brain stem.
In the United States, the waveforms are typically plotted with the vertex site electrode in the positive voltage input of the amplifier, resulting in I, III, and V wave peaks. In other countries, the waves are plotted with a negative voltage.
Waveform components
Wave I
The ABR wave I response is the far-field representation of the compound auditory nerve action potential in the distal portion of cranial nerve (CN) VIII. The response is believed to originate from afferent activity of the CN VIII fibers (first-order neurons) as they leave the cochlea and enter the internal auditory canal.
A study by Lin et al indicated that in the assessment of ABR in patients with idiopathic sudden sensorineural hearing loss (ISSNHL), wave I latency is significantly associated with hearing outcomes, with a trend toward prolongation found between patients with complete hearing recovery and those experiencing only slight recovery. [1]
A study by Bramhall et al indicated that in persons with normal pure-tone auditory thresholds, those with a history of greater noise exposure tend to have smaller ABR wave I amplitudes at suprathreshold levels. The study included military veterans exposed to high levels of military noise and non-veterans with a history of firearm use, as well as veterans and non-veterans with less noise exposure. Suprathreshold ABR measurements were made at 1, 3, 4, and 6 kHz, using alternating polarity tone bursts, with the ABR wave I amplitudes at suprathreshold levels being smaller at all four frequencies in the high-noise-level groups. The amplitude differences between the groups could not be attributed to either sex or outer hair cell function variability. The investigators could not confirm whether the differences were due to synaptopathy without postmortem temporal bone examination. [2]
However, a literature review by Barbee et al suggested that ABR wave I amplitude, as well as the summating potential-to-action potential ratio and speech recognition in noise with and without temporal distortion, offers an effective nonbehavioral measure of cochlear synaptopathy. [3]
A study by Silva et al indicated that heart rate variability interacts with the ABR, specifically with regard to wave I and particularly in the right ear, suggesting that autonomic control of the heart rate is associated with brainstem auditory processing and that vagal tone/cochlear nerve interaction occurs. [4]
Wave II
The ABR wave II is generated by the proximal VIII nerve as it enters the brain stem.
Wave III
The ABR wave III arises from second-order neuron activity (beyond CN VIII) in or near the cochlear nucleus. Literature suggests wave III is generated in the caudal portion of the auditory pons. The cochlear nucleus contains approximately 100,000 neurons, most of which are innervated by eighth nerve fibers.
Wave IV
The ABR wave IV, which often shares the same peak with wave V, is thought to arise from pontine third-order neurons mostly located in the superior olivary complex, but additional contributions may come from the cochlear nucleus and nucleus of lateral lemniscus.
Wave V
Generation of wave V likely reflects activity of multiple anatomic auditory structures. The ABR wave V is the component analyzed most often in clinical applications of the ABR. Although some debate exists regarding the precise generation of wave V, it is believed to originate from the vicinity of the inferior colliculus. The second-order neuron activity may additionally contribute in some way to wave V. The inferior colliculus is a complex structure, with more than 99% of the axons from lower auditory brainstem regions going through the lateral lemniscus to the inferior colliculus.
A study by Spitzer et al of 71 preschoolers aged 3.12-4.99 years found a systematic decrease in wave V latency in these subjects, indicating that the ABR is not fully mature by age 2 years, as has been thought, but instead continues to develop through a child’s preschool years. [5]
Waves VI and VII
Thalamic (medial geniculate body) origin is suggested for generation of waves VI and VII, but the actual site of generation is uncertain.
Applications
Identification of retrocochlear pathology
Auditory brainstem response (ABR) audiometry is considered an effective screening tool in the evaluation of suspected retrocochlear pathology such as an acoustic neuroma or vestibular schwannoma. However, an abnormal ABR finding suggestive of retrocochlear pathology indicates the need for MRI of the cerebellopontine angle.
Symptoms of eighth nerve pathology
Clinical symptoms may include but are not limited to the following:
Asymmetrical or unilateral sensorineural hearing loss
Asymmetrical high-frequency hearing loss
Unilateral tinnitus
Unilaterally or bilaterally poor word recognition scores as compared with degree of sensorineural hearing loss
Perceived distortion of sounds when peripheral hearing is essentially normal
Auditory brainstem response evaluation
In addition to retrocochlear pathologies, many factors may influence ABR results, including the degree of sensorineural hearing loss, asymmetry of hearing loss, test parameters, and other patient factors. These influences must be factored in when performing and analyzing an ABR result. Findings suggestive of retrocochlear pathology may include any 1 or more of the following:
Absolute latency interaural difference of wave V (IT5) - prolonged
I-V interpeak interval interaural difference - prolonged
Absolute latency of wave V - prolonged as compared with normative data
Absolute latencies and interpeak interval latencies I-III, I-V, III-V - prolonged as compared with normative data
Absent auditory brainstem response in the involved ear
In general, ABR exhibits a sensitivity of over 90% and a specificity of approximately 70-90%. Sensitivity for small tumors is not as high. For this reason, a symptomatic patient with a normal ABR result should receive a follow-up audiogram in 6 months to monitor for any changes in hearing sensitivity or tinnitus. The ABR may be repeated if indicated. Alternatively, MRI with gadolinium enhancement, which has become the new criterion standard, can be used to identify very small (3-mm) vestibular schwannomas.
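The latency criteria above reduce to a few numeric comparisons against normative data. A toy sketch of such a screening rule is below; the 0.3 ms interaural (IT5) cutoff and the normative mean/SD are invented placeholders, since real clinics apply their own normative data and criteria:

```python
# Hypothetical normative values -- illustration only, not clinical criteria.
NORM_WAVE_V_MS = 5.6      # assumed normative wave V latency (mean)
NORM_SD_MS = 0.25         # assumed normative standard deviation
IT5_CUTOFF_MS = 0.3       # assumed interaural difference cutoff

def suggests_retrocochlear(wave_v_left_ms: float, wave_v_right_ms: float) -> bool:
    """Flag a result if either ear's wave V latency is prolonged
    (> mean + 2 SD) or the interaural difference (IT5) exceeds the cutoff."""
    it5 = abs(wave_v_left_ms - wave_v_right_ms)
    prolonged = max(wave_v_left_ms, wave_v_right_ms) > NORM_WAVE_V_MS + 2 * NORM_SD_MS
    return it5 > IT5_CUTOFF_MS or prolonged

print(suggests_retrocochlear(5.6, 5.7))   # symmetric, within norms -> False
print(suggests_retrocochlear(5.6, 6.4))   # 0.8 ms interaural gap -> True
```

A real evaluation would also check the I-III, I-V, and III-V interpeak intervals and the absence of a response, per the list above.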
The ABR sensitivity in the diagnosis of CN VIII tumors by size according to several studies is as follows:
In a 1994 study by Dornhoffer, Helms, and Hoehmann, the sensitivity was 93% for tumors smaller than 1 cm. [6]
In 1997, Zappia, O'Connor, Wiet, and Dinces reported a sensitivity of 89% for small tumors smaller than 1 cm, 98% for medium tumors 1.1-2 cm, and 100% for tumors larger than 2 cm. The overall sensitivity was 95%. [7]
In a 1995 study, Chandrasekhar, Brackmann, and Devgan reported a sensitivity of 83.1% for tumors smaller than 1 cm and a sensitivity of 100% for tumors larger than 3 cm. Overall sensitivity was 92%. [8]
In 1995, Gordon and Cohen reported the following sensitivities: 69% for tumors smaller than 9 mm, 89% for tumors 1-1.5 cm, 86% for tumors 1.6-2 cm, and 100% for tumors larger than 2 cm. [9]
In a 2001 report by Schmidt, Sataloff, Newman, Spiegel, and Myers, the sensitivity was 58% for tumors smaller than 1 cm, 94% for tumors 1.1-1.5 cm, and 100% for tumors larger than 1.5 cm. The overall sensitivity was 90%. [10]
In a large prospective study that compared ABR with contrast-enhanced MRI (the criterion standard) in 312 patients with asymmetric sensorineural hearing loss, Cueva found that ABR yielded a sensitivity and specificity of 71% and 74%, respectively, in revealing the cause of lesions for asymmetric sensorineural hearing loss (including, but not limited to, vestibular schwannoma). The ABR-positive predictive value was only 23%, whereas its negative predictive value was 96%. Seven of 31 positive cases had other lesions that ABR could not identify as a cause of the hearing loss. [11]
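The gap between Cueva's respectable sensitivity/specificity and the poor 23% positive predictive value is a base-rate effect: predictive values depend on lesion prevalence, not just test accuracy. A short sketch of the Bayes'-rule arithmetic follows; the ~10% prevalence is an assumption chosen for illustration (the text does not state it), but it happens to reproduce the reported predictive values closely:

```python
def ppv(sens: float, spec: float, prev: float) -> float:
    """Positive predictive value: true positives over all positives."""
    tp = sens * prev
    fp = (1 - spec) * (1 - prev)
    return tp / (tp + fp)

def npv(sens: float, spec: float, prev: float) -> float:
    """Negative predictive value: true negatives over all negatives."""
    tn = spec * (1 - prev)
    fn = (1 - sens) * prev
    return tn / (tn + fn)

# Cueva's sensitivity (71%) and specificity (74%) with an assumed
# ~10% lesion prevalence yield predictive values close to the
# reported 23% and 96%.
print(round(ppv(0.71, 0.74, 0.10), 2))  # 0.23
print(round(npv(0.71, 0.74, 0.10), 2))  # 0.96
```

This is why a positive ABR in a low-prevalence population mostly produces false alarms, while a negative result is fairly reassuring.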
Although traditional ABR measures decrease in sensitivity as a factor of tumor size, recent studies have shown that by using a new stacked derived-band ABR that measures amplitude, very small tumors may be detected more accurately. This new technique, combined with traditional ABR audiometry, may soon make possible the detection of very small tumors with accuracy approaching 100% using ABR audiometry.
Other applications of auditory brainstem response
Other applications of ABR continue to evolve. Recent research suggests that although the overall ABR wave latencies are within normal limits in patients with tinnitus, those patients have longer latencies than control patients without tinnitus. [12] This suggests that ABR may be useful in monitoring and understanding tinnitus. ABR has also been used for prognostication in patients with coma. Researchers have found that patients with a Glasgow Coma Scale score of 3 who also have a significantly abnormal ABR had a greater probability of dying than those with a normal ABR. [13]
A study by Sköld et al indicated that ABR wave patterns are significantly different between patients with bipolar disorder type I (BPI) and those with schizophrenia, suggesting that ABR may be useful as a BPI biomarker. The study, which involved 23 patients with BPI and 20 patients with schizophrenia, as well as 20 controls, found that wave III and VII amplitudes were significantly higher in the patients with BPI than in those with schizophrenia. The report also found that in BPI patients, as well as (somewhat less strongly) those with schizophrenia, the portion of the ABR curve containing waves VI and VII did not correlate well with that of the controls. According to the investigators, the study's results indicate that BPI may be associated with thalamocortical circuitry abnormalities. [14]
Newborn Hearing Screening
Auditory brainstem response (ABR) technology is used in testing newborns. Approximately 1 of every 1000 children is born deaf; many more are born with less severe degrees of hearing impairment, while others may acquire hearing loss during early childhood.
Historically, only infants who met one or more criteria on the high-risk register were tested. Universal hearing screening has been recommended because about 50% of the infants later identified with hearing loss are not tested when neonatal hearing screening is restricted to high-risk groups. Recently, hospitals across the United States have been implementing universal newborn hearing screening programs. These programs are possible because of the combination of technological advances in ABR and otoacoustic emissions (OAE) testing methods and equipment availability, which enables accurate and cost-effective evaluation of hearing in newborns.
Several clinical trials have shown automated auditory brainstem response (AABR) testing (eg, Algo-1 Plus) as an effective screening tool in the evaluation of hearing in newborns, with a sensitivity of 100% and specificity of 96-98%.
When used as a threshold measure to screen for normal hearing, each ear may be evaluated independently, with a stimulus presented at an intensity level of 35-40 dB nHL. Click-evoked ABR is highly correlated with hearing sensitivity in the frequency range from 1000-4000 Hz. AABRs test for the presence or absence of wave V at soft stimulus levels. No operator interpretation is required. AABR can be used on the ward and during oxygen therapy without disturbance from ambient noise.
The 2000 Joint Committee on Infant Hearing has recommended that infants with at least 1 of the following risk indicators for progressive or delayed-onset hearing loss who may have passed the hearing screening should, nonetheless, receive audiologic monitoring every 6 months until age 3 years: [15]
Parental or caregiver concern regarding hearing, speech, language, and/or developmental delay
Family history of permanent childhood hearing loss
Stigmata or other findings associated with a syndrome known to include a sensorineural or conductive hearing loss or eustachian tube dysfunction
Postnatal infections associated with sensorineural hearing loss, including bacterial meningitis
In utero infections such as cytomegalovirus, herpes, rubella, syphilis, and toxoplasmosis
Neonatal indicators, specifically hyperbilirubinemia at a serum level requiring exchange transfusion, persistent pulmonary hypertension of the newborn associated with mechanical ventilation, conditions that require the use of extracorporeal membrane oxygenation (ECMO), bronchopulmonary dysplasia, cytomegalovirus infection, and craniofacial anatomy (Lieu and Champion recently confirmed these results.)
Syndromes associated with progressive hearing loss, such as neurofibromatosis, osteopetrosis, and Usher syndrome
Neurodegenerative disorders, such as Hunter syndrome, or sensory motor neuropathies, such as Friedreich ataxia and Charcot-Marie-Tooth syndrome
Head trauma
Recurrent or persistent otitis media with effusion for at least 3 months
Ototoxic medications (aminoglycosides)
ABRs may be used to detect auditory neuropathy or neural conduction disorders in newborns. Because ABRs are reflective of auditory nerve and brainstem function, these infants can have an abnormal ABR screening result even when peripheral hearing is normal.
Infants that do not pass the newborn hearing screenings do not necessarily have hearing problems. When hearing loss is suspected because of an abnormal ABR screening result, a follow-up diagnostic threshold ABR test is scheduled to determine frequency-specific hearing status. Estimation of hearing at specific frequencies may be obtained through use of brief tone stimulation, such as a tone burst.
Auditory Brainstem Response in Surgery
Intraoperative monitoring
Auditory brainstem response (ABR), often used intraoperatively with electrocochleography, provides early identification of changes in the neurophysiologic status of the peripheral and central nervous systems. This information is useful in the prevention of neurotologic dysfunction and the preservation of hearing postoperatively. For many patients with tumors of CN VIII or the cerebellopontine angle, hearing may be diminished or completely lost postoperatively, even when the auditory nerve has been preserved anatomically.
Auditory brainstem response evaluation
Wave I, which is generated by the cochlear end of CN VIII, provides valuable real-time information regarding blood flow to the cochlea. Because ischemia is a primary cause of surgery-related hearing loss, wave I is monitored closely for any shift in latency or decrease of amplitude.
Wave I-II and I-III interpeak intervals can provide distal and proximal information during CN VIII surgeries.
Wave V and the I-V interpeak interval latencies are monitored for shifts or alterations in latency and amplitude. The I-V latency provides information regarding the integrity of CN VIII to the auditory brain stem.
Limitations
Wave V alterations occurring intraoperatively do not necessarily reflect changes in hearing status. Changes in latency may instead be caused by desynchronization of neurons or other outside factors. Also, a potential time delay exists between the actual occurrence of insult and when the shift in wave V appears. Patients with preexisting sensorineural hearing loss may have poor waveform morphology and no wave I response.
Typical uses of intraoperative auditory brainstem response
Monitoring cochlear function directed at hearing preservation:
Cerebellopontine angle tumor resection (acoustic neuroma surgery)
Vascular decompression for trigeminal neuralgia
Vestibular nerve section for the relief of vertigo
Exploration of the facial nerve for facial nerve decompression
Endolymphatic sac decompression for Ménière disease
Monitoring brainstem integrity:
Brainstem tumor resection
Brainstem aneurysm clipping or arteriovenous malformation resection
Conclusion
Auditory brainstem response (ABR) audiometry has a wide range of clinical applications, including screening for retrocochlear pathology, universal newborn hearing screening, and intraoperative monitoring. Additional applications include ICU monitoring, frequency-specific estimation of auditory sensitivity, and diagnostic information regarding suspected demyelinating disorders (eg, multiple sclerosis). As technology continues to evolve, ABR will likely provide more qualitative and quantitative information regarding the function of the auditory nerve and brainstem pathways involved in hearing.
David ----------------------------------------------------------------------------------------------------------------------------- "All music is folk music. I ain't never heard no horse sing a song." - - Louis Armstrong
|
|
|
Post by Audiophile Neuroscience on Sept 8, 2019 16:10:15 GMT 10
There's an even deeper philosophical question which Donald Hoffman proposes - our perceptions are just a hack, we don't actually see reality just a version that is useful to us - like we look at a map of the underground - it's not a true version of the reality of the underground, just a useful representation that allows us to navigate it. But that's a whole nudder topic

Plato's Allegory of the Cave
David ----------------------------------------------------------------------------------------------------------------------------- "All music is folk music. I ain't never heard no horse sing a song." - - Louis Armstrong
|
|
STC
Junior Member
Posts: 18
|
Post by STC on Sept 9, 2019 10:10:49 GMT 10
STC I saw this question in my notifications..(post was later deleted) The thread The mechanics of hearing that you participated in has been updated by STC. I have a general question about the auditory cortex. 1) I know where it is located but is it located exactly in the center? 2) is there a difference in the distance between the left and... ....I think the other question related to time differences, if any between left and right auditory pathways. Anyway, As far as I know left and right auditory pathways are anatomically symmetrical and have similar topographical and tonotopic (frequency) representations (maps). The human primary auditory cortex is located in the superior temporal gyrus (of the temporal lobe) and is surrounded by secondary auditory cortex, so called belt and parabelt regions. So these areas are not “in the center” but the difference between left and right, and time for neural conduction should be the same, again AFAIK. If you look at evoked potentials, a test neurologists do at different levels of the nervous system, in the auditory area there is ABR Auditory brainstem Response and OAE OtoAcoustic Emissions. ABR is about 10ms. Auditory Brainstem Response Audiometry Updated: Mar 12, 2019
Author Neil Bhattacharyya, MD Associate Professor of Otology and Laryngology, Harvard Medical School; Consulting Surgeon, Department of Surgery, Division of Otolaryngology, Brigham and Women's Hospital
Neil Bhattacharyya, MD is a member of the following medical societies: American Academy of Otolaryngology-Head and Neck Surgery, American Bronchoesophagological Association, American College of Surgeons, American Medical Association, American Rhinologic Society, Society of University Otolaryngologists-Head and Neck Surgeons, The Triological Society
Disclosure: Nothing to disclose. Specialty Editor Board Francisco Talavera, PharmD, PhD Adjunct Assistant Professor, University of Nebraska Medical Center College of Pharmacy; Editor-in-Chief, Medscape Drug Reference
Overview
Auditory brainstem response (ABR) audiometry is a neurologic test of auditory brainstem function in response to auditory (click) stimuli. First described by Jewett and Williston in 1971, ABR audiometry is the most common application of auditory evoked responses. Test administration and interpretation is typically performed by an audiologist. This article provides an overview of the test and its most common applications. For purposes of clarity and brevity, specialized ABR techniques and more technical issues have been omitted.
ABR audiometry refers to an evoked potential generated by a brief click or tone pip transmitted from an acoustic transducer in the form of an insert earphone or headphone. The elicited waveform response is measured by surface electrodes typically placed at the vertex of the scalp and ear lobes. The amplitude (microvoltage) of the signal is averaged and charted against the time (millisecond), much like an EEG. The waveform peaks are labeled I-VII. These waveforms normally occur within a 10-millisecond time period after a click stimulus presented at high intensities (70-90 dB normal hearing level [nHL]). (See image below.)
Normal adult auditory brainstem response (ABR) audiometry waveform response.
Although the ABR provides information regarding auditory function and hearing sensitivity, it is not a substitute for a formal hearing evaluation, and results should be used in conjunction with behavioral audiometry whenever possible.
Physiology
Auditory brainstem response (ABR) audiometry typically uses a click stimulus that generates a response from the basilar region of the cochlea. The signal travels along the auditory pathway from the cochlear nuclear complex proximally to the inferior colliculus. ABR waves I and II correspond to true action potentials. Later waves may reflect postsynaptic activity in major brainstem auditory centers that concomitantly contribute to waveform peaks and troughs. The positive peaks of the waveforms reflect combined afferent (and likely efferent) activity from axonal pathways in the auditory brain stem.
In the United States, the waveforms are typically plotted with the vertex site electrode in the positive voltage input of the amplifier, resulting in I, III, and V wave peaks. In other countries, the waves are plotted with a negative voltage.
Waveform components
Wave I
The ABR wave I response is the far-field representation of the compound auditory nerve action potential in the distal portion of cranial nerve (CN) VIII. The response is believed to originate from afferent activity of the CN VIII fibers (first-order neurons) as they leave the cochlea and enter the internal auditory canal.
A study by Lin et al indicated that in the assessment of ABR in patients with idiopathic sudden sensorineural hearing loss (ISSNHL), wave I latency is significantly associated with hearing outcomes, with a trend toward prolongation found between patients with complete hearing recovery and those experiencing only slight recovery. [1]
A study by Bramhall et al indicated that in persons with normal pure-tone auditory thresholds, those with a history of greater noise exposure tend to have smaller ABR wave I amplitudes at suprathreshold levels. The study included military veterans exposed to high levels of military noise and non-veterans with a history of firearm use, as well as veterans and non-veterans with less noise exposure. Suprathreshold ABR measurements were made at 1, 3, 4, and 6 kHz, using alternating polarity tone bursts, with the ABR wave I amplitudes at suprathreshold levels being smaller at all four frequencies in the high-noise-level groups. The amplitude differences between the groups could not be attributed to either sex or outer hair cell function variability. The investigators could not confirm whether the differences were due to synaptopathy without postmortem temporal bone examination. [2]
However, a literature review by Barbee et al suggested that ABR wave I amplitude, as well as the summating potential-to-action potential ratio and speech recognition in noise with and without temporal distortion, offers an effective nonbehavioral measure of cochlear synaptopathy. [3]
A study by Silva et al indicated that heart rate variability interacts with the ABR, specifically with regard to wave I and particularly in the right ear, suggesting that autonomic control of the heart rate is associated with brainstem auditory processing and that vagal tone/cochlear nerve interaction occurs. [4]
Wave II
The ABR wave II is generated by the proximal VIII nerve as it enters the brain stem.
Wave III
The ABR wave III arises from second-order neuron activity (beyond CN VIII) in or near the cochlear nucleus. Literature suggests wave III is generated in the caudal portion of the auditory pons. The cochlear nucleus contains approximately 100,000 neurons, most of which are innervated by eighth nerve fibers.
Wave IV
The ABR wave IV, which often shares the same peak with wave V, is thought to arise from pontine third-order neurons mostly located in the superior olivary complex, but additional contributions may come from the cochlear nucleus and the nucleus of the lateral lemniscus.
Wave V
Generation of wave V likely reflects activity of multiple anatomic auditory structures. The ABR wave V is the component analyzed most often in clinical applications of the ABR. Although some debate exists regarding the precise generation of wave V, it is believed to originate from the vicinity of the inferior colliculus. The second-order neuron activity may additionally contribute in some way to wave V. The inferior colliculus is a complex structure, with more than 99% of the axons from lower auditory brainstem regions going through the lateral lemniscus to the inferior colliculus.
A study by Spitzer et al of 71 preschoolers aged 3.12-4.99 years found a systematic decrease in wave V latency in these subjects, indicating that the ABR is not fully mature by age 2 years, as has been thought, but instead continues to develop through a child’s preschool years. [5]
Waves VI and VII
Thalamic (medial geniculate body) origin is suggested for generation of waves VI and VII, but the actual site of generation is uncertain.
Applications
Identification of retrocochlear pathology
Auditory brainstem response (ABR) audiometry is considered an effective screening tool in the evaluation of suspected retrocochlear pathology such as an acoustic neuroma or vestibular schwannoma. However, an abnormal ABR finding suggestive of retrocochlear pathology indicates the need for MRI of the cerebellopontine angle.
Symptoms of eighth nerve pathology
Clinical symptoms may include but are not limited to the following:
Asymmetrical or unilateral sensorineural hearing loss
Asymmetrical high-frequency hearing loss
Unilateral tinnitus
Unilaterally or bilaterally poor word recognition scores as compared with the degree of sensorineural hearing loss
Perceived distortion of sounds when peripheral hearing is essentially normal
Auditory brainstem response evaluation
In addition to retrocochlear pathologies, many factors may influence ABR results, including the degree of sensorineural hearing loss, asymmetry of hearing loss, test parameters, and other patient factors. These influences must be factored in when performing and analyzing an ABR. Findings suggestive of retrocochlear pathology may include any one or more of the following:
Prolonged interaural difference in absolute wave V latency (IT5)
Prolonged interaural difference in the I-V interpeak interval
Prolonged absolute latency of wave V as compared with normative data
Prolonged absolute latencies and interpeak intervals (I-III, I-V, III-V) as compared with normative data
Absent auditory brainstem response in the involved ear
In general, ABR exhibits a sensitivity of over 90% and a specificity of approximately 70-90%. Sensitivity for small tumors is not as high. For this reason, a symptomatic patient with a normal ABR result should receive a follow-up audiogram in 6 months to monitor for any changes in hearing sensitivity or tinnitus. The ABR may be repeated if indicated. Alternatively, MRI with gadolinium enhancement, which has become the new criterion standard, can be used to identify very small (3-mm) vestibular schwannomas.
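The latency criteria above lend themselves to a simple programmatic check. The sketch below is illustrative only: the 0.2 ms interaural cutoff and the 4.4 ms I-V upper limit are commonly quoted example values, not universal norms, and any clinical use would require normative data collected on the clinic's own equipment and protocol.

```python
# Sketch of the retrocochlear screening criteria described above.
# Cutoff values are illustrative assumptions, not clinical norms.

IT5_CUTOFF_MS = 0.2           # example interaural latency difference cutoff
I_V_INTERVAL_CUTOFF_MS = 4.4  # example upper limit for the I-V interval

def retrocochlear_flags(wave_v_left_ms, wave_v_right_ms,
                        i_v_left_ms, i_v_right_ms):
    """Return findings suggestive of retrocochlear pathology."""
    flags = []
    if abs(wave_v_left_ms - wave_v_right_ms) > IT5_CUTOFF_MS:
        flags.append("prolonged interaural wave V difference (IT5)")
    if abs(i_v_left_ms - i_v_right_ms) > IT5_CUTOFF_MS:
        flags.append("prolonged interaural I-V interpeak difference")
    for side, i_v in (("left", i_v_left_ms), ("right", i_v_right_ms)):
        if i_v > I_V_INTERVAL_CUTOFF_MS:
            flags.append(f"prolonged I-V interpeak interval ({side})")
    return flags

# Example: right-sided I-V prolongation with interaural asymmetry
print(retrocochlear_flags(5.6, 6.1, 4.0, 4.7))  # three findings flagged
```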
The ABR sensitivity in the diagnosis of CN VIII tumors by size according to several studies is as follows:
In a 1994 study by Dornhoffer, Helms, and Hoehmann, the sensitivity was 93% for tumors smaller than 1 cm. [6]
In 1997, Zappia, O'Connor, Wiet, and Dinces reported a sensitivity of 89% for small tumors smaller than 1 cm, 98% for medium tumors 1.1-2 cm, and 100% for tumors larger than 2 cm. The overall sensitivity was 95%. [7]
In a 1995 study, Chandrasekhar, Brackmann, and Devgan reported a sensitivity of 83.1% for tumors smaller than 1 cm and a sensitivity of 100% for tumors larger than 3 cm. Overall sensitivity was 92%. [8]
In 1995, Gordon and Cohen reported the following sensitivities: 69% for tumors smaller than 9 mm, 89% for tumors 1-1.5 cm, 86% for tumors 1.6-2 cm, and 100% for tumors larger than 2 cm. [9]
In a 2001 report by Schmidt, Sataloff, Newman, Spiegel, and Myers, the sensitivity was 58% for tumors smaller than 1 cm, 94% for tumors 1.1-1.5 cm, and 100% for tumors larger than 1.5 cm. The overall sensitivity was 90%. [10]
In a large prospective study that compared ABR with contrast-enhanced MRI (the criterion standard) in 312 patients with asymmetric sensorineural hearing loss, Cueva found that ABR yielded a sensitivity of 71% and a specificity of 74% in revealing the cause of the lesions underlying asymmetric sensorineural hearing loss (including, but not limited to, vestibular schwannoma). The ABR positive predictive value was only 23%, whereas its negative predictive value was 96%. Seven of 31 positive cases had other lesions that ABR could not identify as a cause of the hearing loss. [11]
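Cueva's figures illustrate how predictive values depend on disease prevalence in the tested population. The quick Bayes' rule check below reproduces the reported 23%/96% predictive values if roughly 10% of the tested patients are assumed to have a causative lesion; that prevalence is an assumption chosen for illustration, not a figure taken from the study.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Compute (PPV, NPV) from test characteristics via Bayes' rule."""
    tp = sensitivity * prevalence          # true positive fraction
    fp = (1 - specificity) * (1 - prevalence)  # false positive fraction
    tn = specificity * (1 - prevalence)    # true negative fraction
    fn = (1 - sensitivity) * prevalence    # false negative fraction
    return tp / (tp + fp), tn / (tn + fn)

# Cueva's reported sensitivity/specificity; the ~10% lesion prevalence
# is an assumption that happens to reproduce the reported figures.
ppv, npv = predictive_values(0.71, 0.74, 0.10)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # PPV = 0.23, NPV = 0.96
```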
Although traditional ABR measures decrease in sensitivity as a factor of tumor size, recent studies have shown that by using a new stacked derived-band ABR that measures amplitude, very small tumors may be detected more accurately. This new technique, combined with traditional ABR audiometry, may soon make possible the detection of very small tumors with accuracy approaching 100% using ABR audiometry.
Other applications of auditory brainstem response
Other applications of ABR continue to evolve. Recent research suggests that although overall ABR wave latencies are within normal limits in patients with tinnitus, those patients have longer latencies than control patients without tinnitus. [12] This suggests that ABR may be useful in monitoring and understanding tinnitus. ABR has also been used for prognostication in patients with coma: researchers have found that patients with a Glasgow Coma Scale score of 3 who also have a significantly abnormal ABR had a greater probability of dying than those with a normal ABR. [13]
A study by Sköld et al indicated that ABR wave patterns differ significantly between patients with bipolar disorder type I (BPI) and those with schizophrenia, suggesting that ABR may be useful as a BPI biomarker. The study, which involved 23 patients with BPI and 20 patients with schizophrenia, as well as 20 controls, found that wave III and VII amplitudes were significantly higher in the patients with BPI than in those with schizophrenia. The report also found that in BPI patients, as well as (somewhat less strongly) in those with schizophrenia, the portion of the ABR curve containing waves VI and VII did not correlate well with that of the controls. According to the investigators, the study's results indicate that BPI may be associated with thalamocortical circuitry abnormalities. [14]
Newborn Hearing Screening
Auditory brainstem response (ABR) technology is used in testing newborns. Approximately 1 of every 1000 children is born deaf; many more are born with less severe degrees of hearing impairment, while others may acquire hearing loss during early childhood.
Historically, only infants who met one or more criteria on the high-risk register were tested. Universal hearing screening has been recommended because about 50% of the infants later identified with hearing loss are not tested when neonatal hearing screening is restricted to high-risk groups. Recently, hospitals across the United States have been implementing universal newborn hearing screening programs. These programs are possible because of the combination of technological advances in ABR and otoacoustic emissions (OAE) testing methods and equipment availability, which enables accurate and cost-effective evaluation of hearing in newborns.
Several clinical trials have shown automated auditory brainstem response (AABR) testing (eg, Algo-1 Plus) as an effective screening tool in the evaluation of hearing in newborns, with a sensitivity of 100% and specificity of 96-98%.
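The same predictive-value arithmetic shows why even a near-perfect newborn screen generates many false referrals: with the roughly 1-in-1000 prevalence cited above and a specificity of about 97% (the midpoint of the 96-98% range), only around 3% of referred infants actually have the condition. The numbers below are assumptions drawn from those figures, used purely for illustration.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """PPV = P(disease | positive screen), via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# ~1 per 1000 newborns deaf; AABR sensitivity ~100%, specificity ~97%
# (midpoint of the 96-98% range reported above).
print(f"{positive_predictive_value(1.0, 0.97, 0.001):.3f}")  # 0.032
```

This is why an abnormal screen is always followed by diagnostic testing rather than being treated as a diagnosis.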
When used as a threshold measure to screen for normal hearing, each ear may be evaluated independently, with a stimulus presented at an intensity level of 35-40 dB nHL. Click-evoked ABR is highly correlated with hearing sensitivity in the frequency range from 1000-4000 Hz. AABRs test for the presence or absence of wave V at soft stimulus levels. No operator interpretation is required. AABR can be used on the ward and during oxygen therapy without disturbance from ambient noise.
The 2000 Joint Committee on Infant Hearing has recommended that infants with at least 1 of the following risk indicators for progressive or delayed-onset hearing loss who may have passed the hearing screening should, nonetheless, receive audiologic monitoring every 6 months until age 3 years: [15]
Parental or caregiver concern regarding hearing, speech, language, and/or developmental delay
Family history of permanent childhood hearing loss
Stigmata or other findings associated with a syndrome known to include a sensorineural or conductive hearing loss or eustachian tube dysfunction
Postnatal infections associated with sensorineural hearing loss, including bacterial meningitis
In utero infections such as cytomegalovirus, herpes, rubella, syphilis, and toxoplasmosis
Neonatal indicators, specifically hyperbilirubinemia at a serum level requiring exchange transfusion, persistent pulmonary hypertension of the newborn associated with mechanical ventilation, conditions requiring extracorporeal membrane oxygenation (ECMO), bronchopulmonary dysplasia, cytomegalovirus infection, and craniofacial anomalies (Lieu and Champion recently confirmed these results)
Syndromes associated with progressive hearing loss, such as neurofibromatosis, osteopetrosis, and Usher syndrome
Neurodegenerative disorders, such as Hunter syndrome, or sensory motor neuropathies, such as Friedreich ataxia and Charcot-Marie-Tooth syndrome
Head trauma
Recurrent or persistent otitis media with effusion for at least 3 months
Ototoxic medications (aminoglycosides)
ABRs may be used to detect auditory neuropathy or neural conduction disorders in newborns. Because ABRs are reflective of auditory nerve and brainstem function, these infants can have an abnormal ABR screening result even when peripheral hearing is normal.
Infants who do not pass the newborn hearing screening do not necessarily have hearing problems. When hearing loss is suspected because of an abnormal ABR screening result, a follow-up diagnostic threshold ABR test is scheduled to determine frequency-specific hearing status. Estimation of hearing at specific frequencies may be obtained through the use of brief tonal stimulation, such as a tone burst.
Auditory Brainstem Response in Surgery
Intraoperative monitoring
Auditory brainstem response (ABR), often used intraoperatively with electrocochleography, provides early identification of changes in the neurophysiologic status of the peripheral and central nervous systems. This information is useful in the prevention of neurotologic dysfunction and the preservation of postoperative hearing. For many patients with tumors of CN VIII or the cerebellopontine angle, hearing may be diminished or completely lost postoperatively, even when the auditory nerve has been preserved anatomically.
Auditory brainstem response evaluation
Wave I, which is generated by the cochlear end of CN VIII, provides valuable real-time information regarding blood flow to the cochlea. Because ischemia is a primary cause of surgery-related hearing loss, wave I is monitored closely for any shift in latency or decrease of amplitude.
Wave I-II and I-III interpeak intervals can provide distal and proximal information during CN VIII surgeries.
Wave V and the I-V interpeak interval latencies are monitored for shifts or alterations in latency and amplitude. The I-V latency provides information regarding the integrity of CN VIII to the auditory brain stem.
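In code, the monitoring logic above reduces to comparing running measurements against a pre-incision baseline. The alert thresholds below (a 0.5 ms wave V latency shift, a 50% amplitude drop) are commonly quoted rules of thumb used here purely for illustration; actual alarm criteria vary by team and procedure.

```python
def wave_v_alert(baseline_latency_ms, current_latency_ms,
                 baseline_amp_uv, current_amp_uv,
                 latency_shift_ms=0.5, amp_drop_fraction=0.5):
    """True if wave V has shifted or faded past the example thresholds.
    Thresholds are illustrative assumptions, not clinical standards."""
    latency_shifted = (current_latency_ms - baseline_latency_ms) >= latency_shift_ms
    amp_dropped = current_amp_uv <= baseline_amp_uv * (1 - amp_drop_fraction)
    return latency_shifted or amp_dropped

# Baseline wave V at 5.6 ms / 0.40 uV; current at 6.3 ms / 0.35 uV:
# the 0.7 ms latency shift alone triggers the alert.
print(wave_v_alert(5.6, 6.3, 0.40, 0.35))  # True
```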
Limitations
Wave V alterations occurring intraoperatively do not necessarily reflect changes in hearing status. Changes in latency may instead be caused by desynchronization of neurons or other outside factors. Also, a potential time delay exists between the actual occurrence of insult and when the shift in wave V appears. Patients with preexisting sensorineural hearing loss may have poor waveform morphology and no wave I response.
Typical uses of intraoperative auditory brainstem response
Monitoring cochlear function directed at hearing preservation:
Cerebellopontine angle tumor resection (acoustic neuroma surgery)
Vascular decompression for trigeminal neuralgia
Vestibular nerve section for the relief of vertigo
Exploration of the facial nerve for facial nerve decompression
Endolymphatic sac decompression for Ménière disease
Monitoring brainstem integrity:
Brainstem tumor resection
Brainstem aneurysm clipping or arteriovenous malformation resection
Conclusion
Auditory brainstem response (ABR) audiometry has a wide range of clinical applications, including screening for retrocochlear pathology, universal newborn hearing screening, and intraoperative monitoring. Additional applications include ICU monitoring, frequency-specific estimation of auditory sensitivity, and diagnostic information regarding suspected demyelinating disorders (eg, multiple sclerosis). As technology continues to evolve, ABR will likely provide more qualitative and quantitative information regarding the function of the auditory nerve and brainstem pathways involved in hearing.
David ----------------------------------------------------------------------------------------------------------------------------- "All music is folk music. I ain't never heard no horse sing a song." - - Louis Armstrong

Rightly, I should take a few months to respond to this post. I have to Google every few words in the paragraph to understand the paper. Hahaha.... Anyway, I asked this question because I started a thread about channel imbalance in the AS forum, and I now suspect it has something to do with how both ears process sound. I read somewhere (IIRC) that there is a difference of 2cm in the pathway. I could be wrong, though. It gets more complicated when you consider the cocktail party effect, which is only possible if there exists a difference in the final processing of sound between the ears. Whatever it is, I think channel imbalance in stereo cannot be resolved. When I say channel imbalance, I refer to a soloist's vocal which is supposed to be dead center. I shall stop before embarrassing myself further with theories on a subject I know nothing of. Thank you David for your reply. Much appreciated. Regards, ST
|
|
|
Post by Audiophile Neuroscience on Sept 9, 2019 12:40:35 GMT 10
Rightly, I should take a few months to respond to this post. I have to Google every few words in the paragraph to understand the paper. Hahaha.... Anyway, I asked this question because I started a thread about channel imbalance in the AS forum, and I now suspect it has something to do with how both ears process sound. I read somewhere (IIRC) that there is a difference of 2cm in the pathway. I could be wrong, though. It gets more complicated when you consider the cocktail party effect, which is only possible if there exists a difference in the final processing of sound between the ears. Whatever it is, I think channel imbalance in stereo cannot be resolved. When I say channel imbalance, I refer to a soloist's vocal which is supposed to be dead center. I shall stop before embarrassing myself further with theories on a subject I know nothing of. Thank you David for your reply. Much appreciated. Regards, ST Hi ST
Yeah, neuroanatomy and neurophysiology are pretty full on, but we are getting down to the basic substrates underlying what we are talking about in perception, whether it be perceiving channel imbalances, spatial localization, ASA (jkenny), ambiophonics, etc.
There do appear to be inter-individual variations in the auditory cortex, but within an individual things are fairly symmetrical for auditory processing, at least in terms of signal transmission times, AFAIK. Obviously there are right-vs-left differences in hemispheric function for sound processing, such as speech being handled by the dominant hemisphere, typically the left.
Not sure about the "2cm pathway difference" but I doubt this would relate to channel imbalance interpretation or how our ears process sound.
So we know the brain uses a combination of frequency, loudness and timing cues to interpret sound localisation and/or to isolate one sound from another.
The tonotopic representation, or map, for frequency-related perception extends from the inner ear (basilar membrane) to the cortex and is better defined in the primary auditory cortex than in the secondary auditory cortical areas. The most medial portion of the auditory cortex contains the representation of the basal end of the basilar membrane (by the oval window), whereas the apical end of the basilar membrane is represented in the lateral portion of the auditory cortex. The response of neurons in the primary auditory cortex to specific sound frequencies is narrow and binaural (responsive to both ears), forming so-called "critical bands", or more precisely critical bandwidths.
In each hemisphere, at right angles to the frequency axis (from low to high), exist alternating stripes or bands (not to be confused with bandwidths) of neurons that respond differently to the two ears. In this striped arrangement of binaural properties, the neurons in one stripe are excited by both ears (EE cells), while the neurons in the next stripe are excited by one ear and inhibited by the other (EI cells). The EE and EI stripes alternate.
The auditory system can extract a desired sound source out of interfering noise, aka the cocktail party effect. This relates to selective attention, possibly partially subserved by this binaural frequency striping of excitation and inhibition at a cortical level, and dovetails into ASA (auditory scene analysis). Sound from interfering directions is attenuated, by up to a claimed 15dB, compared with sound from the desired direction. Other cues helpful to ASA and the cocktail party effect probably relate to sound localisation mechanisms in the form of amplitude and timing.
The secondary auditory cortex and its brainstem connections (the dorsal and ventral cochlear nuclei up to the inferior colliculus of the midbrain) are important for sound localization. Localization occurs via a medial system, arising from the medial superior olivary complex, which responds to slight differences in the timing of sound arrival at each ear, and a lateral system, arising from the lateral superior olivary complex, which responds to slight differences in the sound amplitude arriving at each ear.
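To put a number on those timing cues: the classic Woodworth spherical-head approximation gives the interaural time difference (ITD) as a function of source azimuth. A rough Python sketch, where the head radius and speed of sound are the usual textbook assumptions:

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a distant
    source at azimuth_deg (0 deg = straight ahead), using the Woodworth
    spherical-head model: ITD = (a/c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# A source 90 degrees to one side gives the maximum ITD, ~0.66 ms.
print(f"{itd_woodworth(90) * 1e6:.0f} us")  # 656 us
```

Sub-millisecond differences like this are what the medial olivary system resolves.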
David ----------------------------------------------------------------------------------------------------------------------------- "All music is folk music. I ain't never heard no horse sing a song." - - Louis Armstrong
|
|
|
Post by ROWUK on Sept 9, 2019 14:47:54 GMT 10
I am interested in the cocktail party effect when listening to speakers. We seem to have a major problem because the left speaker's signal travels independently to both the left and right ears, just as the right speaker's does. This appears to confuse the phantom center image and limit our ability to perceive the soundstage.
|
|
STC
Junior Member
Posts: 18
|
Post by STC on Sept 9, 2019 14:54:03 GMT 10
I am interested in the cocktail party effect when listening to speakers. We seem to have a major problem because the left speaker's signal travels independently to both the left and right ears, just as the right speaker's does. This appears to confuse the phantom center image and limit our ability to perceive the soundstage. Yes. Finally, I found someone accepting this as a problem in stereo. I have audio samples to demonstrate it; unfortunately, no one seemed interested in even listening to them.
|
|
|
Post by cj66 on Sept 9, 2019 15:25:36 GMT 10
Is this not more a problem of the created stereo image? The timing difference between the left and right ears receiving the left and right speaker information is essential in perceiving any stereo image at all, surely?
Therefore, the way I see it, any failings in the stereo image are in the mix of the recording itself?
|
|
sandyk (RIP Alex, 1939 - 2021)
Global Moderator
Posts: 226
About Me: Retired ex Principal Telecommunications Technical Officer with 43 years at Telstra (Australia)
I am also a Moderator in Hi Fi Critic Forum
Electronics hobbyist for >65 years with DIY projects including Loudspeakers, Stereo FM tuner, S/W Regen Receiver, Superhet AM ,
Synchrodyne PLL AM tuner (Phase Lock Loop),Stereo Tape Deck, Amplifiers including I.C. types, Class A, Class AB 100W/Ch. (ETI5000) 240W/Ch. Mosfet (AEM6000) ,several DACs , numerous PSUs including VERY low noise (<4uV) types etc.for myself and friends
Audio Industry Affiliation: NIL
|
Post by sandyk (RIP Alex, 1939 - 2021) on Sept 9, 2019 15:59:23 GMT 10
Is this not more a problem of the created stereo image? The timing difference between the left and right ears receiving the left and right speaker information is essential in perceiving any stereo image at all, surely? Therefore, the way I see it, any failings in the stereo image are in the mix of the recording itself? I agree with Chris that the main failings here are due to the mix of the recording itself and, to some extent, to the design of the speakers themselves. My old DCM QED 1A Transmission Line speakers were renowned for their imaging.
|
|
|
Post by Audiophile Neuroscience on Sept 9, 2019 18:21:11 GMT 10
In real life you get one set of inter-aural differences between left and right ear for the various cues (level, time, frequency) per sound source. With stereo reproduction there are two sets of inter-aural differences, as created by the left and right speakers. So it's not the natural way we perceive real sound objects. Some have found that eliminating this 'crosstalk' between stereo channels, i.e. preventing left-channel stereo information from reaching the right ear and vice versa, results in better localisation and an overall more natural, lifelike sound. The reasoning would appear to be that this comes closer to simulating how a real sound source is perceived, i.e. returning to one set of inter-aural differences for the various cues.
STC is the expert on such matters. If I understand it correctly, the effect is like headphones but with the image outside, rather than inside, the head. Nonetheless, by manipulating the mix of sounds in stereo reproduction (and whatever else sound engineers do, like EQ) a very convincing illusion of multiple sound images on a soundstage can be created. Using the stereo phantom center image as an example, perception of location relates to amplitude and delay panning (with phase also affected, but differently for each frequency). Psychoacoustically, the brain perceives the image as located away from the time-delayed speaker and towards the non-delayed speaker, and similarly for the loudness differential. As I understand it, this is then further influenced by room and speaker interactions. According to Art Noxon (acoustics engineer with a master's in physics), omnidirectional speakers in rooms with lots of early reflections will produce a larger image that floats between the speakers and tends to stay fixed even when you get up and move around the room.
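For anyone who wants to play with the amplitude-panning side of this, the standard stereophonic "tangent law" relates the desired phantom-image angle to the two speaker gains. A rough Python sketch, assuming the usual +/-30 degree speaker layout; this is a textbook idealization, not how any particular mixing console implements panning:

```python
import math

def tangent_law_gains(image_deg, speaker_deg=30.0):
    """Left/right speaker gains that place a phantom image at image_deg
    (positive = toward the left speaker) for speakers at +/-speaker_deg.
    Standard stereophonic tangent law, constant-power normalized."""
    ratio = math.tan(math.radians(image_deg)) / math.tan(math.radians(speaker_deg))
    # Solve (gL - gR)/(gL + gR) = ratio with gL^2 + gR^2 = 1
    g_left, g_right = 1 + ratio, 1 - ratio
    norm = math.hypot(g_left, g_right)
    return g_left / norm, g_right / norm

gl, gr = tangent_law_gains(0)      # centered image: equal gains
print(round(gl, 3), round(gr, 3))  # 0.707 0.707
```

Panning fully to the speaker angle sends everything to that one speaker, as you'd expect.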
This is exactly what I heard with mbl Radialstrahler speakers in an untreated room ( sandyk (RIP Alex, 1939 - 2021) may have heard these at Dennis' place?). In most acoustically treated rooms, which damp first reflections, and with more directional speakers (like mine), when you get up and move to the left or right, the image and stage shift smoothly with you. In problem rooms, the image collapses into the loudspeaker closest to you. For me the central image is usually at about the plane of the speakers. The vocals in some recordings push out a bit forward of the speakers. This might be in recordings that have some midrange "presence" boost in the EQ? I think Barry Diament (bdiament) explained that sometimes, due to direct and reflected sound combining in an untreated recording booth, a dip in the midrange can occur and the engineer unnecessarily compensates by boosting the mids. When played back it sounds brighter and edgier than it should. He didn't mention pushing the image forwards. David ----------------------------------------------------------------------------------------------------------------------------- "All music is folk music. I ain't never heard no horse sing a song." - - Louis Armstrong
|
|
STC
Junior Member
Posts: 18
|
Post by STC on Sept 9, 2019 19:25:25 GMT 10
In real life you get one set of inter-aural differences between left and right ear for the various cues (level, time, frequency) per sound source. With stereo reproduction there are two sets of inter-aural differences, as created by the left and right speakers. So it's not the natural way we perceive real sound objects. Some have found that eliminating this 'crosstalk' between stereo channels, i.e. preventing left-channel stereo information from reaching the right ear and vice versa, results in better localisation and an overall more natural, lifelike sound. The reasoning would appear to be that this comes closer to simulating how a real sound source is perceived, i.e. returning to one set of inter-aural differences for the various cues.
STC is the expert on such matters. If I understand it correctly, the effect is like headphones but with the image outside, rather than inside, the head. Nonetheless, by manipulating the mix of sounds in stereo reproduction (and whatever else sound engineers do, like EQ) a very convincing illusion of multiple sound images on a soundstage can be created. Using the stereo phantom center image as an example, perception of location relates to amplitude and delay panning (with phase also affected, but differently for each frequency). Psychoacoustically, the brain perceives the image as located away from the time-delayed speaker and towards the non-delayed speaker, and similarly for the loudness differential. As I understand it, this is then further influenced by room and speaker interactions. According to Art Noxon (acoustics engineer with a master's in physics), omnidirectional speakers in rooms with lots of early reflections will produce a larger image that floats between the speakers and tends to stay fixed even when you get up and move around the room.
This is exactly what I heard with mbl Radialstrahler speakers in an untreated room ( sandyk (RIP Alex, 1939 - 2021) may have heard these at Dennis' place?). In most acoustically treated rooms, which damp first reflections, and with more directional speakers (like mine), when you get up and move to the left or right, the image and stage shift smoothly with you. In problem rooms, the image collapses into the loudspeaker closest to you. For me the central image is usually at about the plane of the speakers. The vocals in some recordings push out a bit forward of the speakers. This might be in recordings that have some midrange "presence" boost in the EQ? I think Barry Diament (bdiament) explained that sometimes, due to direct and reflected sound combining in an untreated recording booth, a dip in the midrange can occur and the engineer unnecessarily compensates by boosting the mids. When played back it sounds brighter and edgier than it should. He didn't mention pushing the image forwards. David ----------------------------------------------------------------------------------------------------------------------------- "All music is folk music. I ain't never heard no horse sing a song." - - Louis Armstrong

You are actually getting three sets of cues. Take the example of a trumpet at 15 degrees to the left: both speakers need to produce sound so that the image emerges at that location. If you turn off the right speaker, the image comes from the left speaker (330 degrees); if you turn off the left speaker, the image is at 30 degrees. As you can see, in the absence of one speaker you localize the sound where it originally emerges without difficulty, because all the cues conform to spatial hearing. In stereo, however, your brain is fighting between two localizations and chooses the third, phantom image. A person experiencing stereo for the first time would find this very unnatural.
The multiple images also corrupt the solidity of the phantom image and bury other information, which is too low in level to stand out on its own. Here is an example of how a stereo system can never retrieve the correct information from the recording using the conventional stereo playback method. In this video, if you use headphones, you will clearly hear the male and female voices separated to the right and left. However, no matter how costly or high-end your system is, it will never produce the separation as well as headphones do, because of crosstalk. This was my early attempt at crosstalk cancellation for normal stereo speakers set at 60 or so degrees. The effect is about 30 percent of that of the true Ambiophonics method, but you will get the general idea. If you wish to experiment with this on your high-end system, forward me your favourite track and I will process it for you to A/B on your own system. The video contains Famous Blue Raincoat for comparison. p.s. I am limited to very few samples due to copyright and the difficulty of finding the correct examples.
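For the curious, the recursive crosstalk-cancellation idea behind Ambiophonics (RACE) can be sketched in a few lines: each output channel subtracts a delayed, attenuated copy of the opposite channel's output, so the cancellation terms themselves get re-cancelled on later passes. The delay and attenuation below are illustrative guesses, not tuned values, and a real implementation would also band-limit the cancellation signal:

```python
def race(left, right, delay=3, atten=0.7):
    """Recursive crosstalk cancellation sketch (RACE-like).
    Each output sample subtracts a delayed, attenuated copy of the
    OTHER channel's output, forming a cross-feedback cancellation loop.
    delay is in samples (~68 us at 44.1 kHz for delay=3); atten is an
    assumed head-shadow attenuation."""
    n = len(left)
    out_l, out_r = [0.0] * n, [0.0] * n
    for i in range(n):
        fb_r = out_r[i - delay] if i >= delay else 0.0
        fb_l = out_l[i - delay] if i >= delay else 0.0
        out_l[i] = left[i] - atten * fb_r
        out_r[i] = right[i] - atten * fb_l
    return out_l, out_r

# An impulse in the left channel generates an alternating train of
# cancellation pulses: out_r[3] = -0.7, out_l[6] = +0.49, ...
out_l, out_r = race([1.0] + [0.0] * 9, [0.0] * 10)
```

Each successive pulse is what cancels the acoustic crosstalk of the previous one at the opposite ear.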
|
|
|
Post by ROWUK on Sept 10, 2019 4:50:11 GMT 10
I am very interested in this. I remember the Hafler/Dynaco L+R/L-R and R-L scheme, Carver's Sonic Holography, Polk Audio SDA, and the Lamm L2 preamp with a phasing scheme that he does not describe, as well as a very convincing demo with stereo speakers centered and the L-R/R-L speakers where we would normally place our main pair. This was absolutely phenomenal from a soundstage point of view. I think an awesome "sound bar" could be based on this technology - without DSP or other electronic manipulation.
|
|