ACOUSTICS AND PSYCHO-ACOUSTICS OF EARPHONES:
In the literature, earphones are a relatively recent field of study compared to loudspeakers, one that emerged with the rise of Portable Electronic Devices (PEDs). A Google® Ngram is a visual representation of the occurrence of a specific string within books indexed by Google over a given time frame. In Figure 1.1, it can be seen that the presence of the word loudspeaker in books peaked in the mid-1950s, when it was 6 times more prevalent than earphones or headphones. The presence of the word headphones increased after 1965 and surpassed loudspeaker in Google Books in the years following the introduction of the iPod® by Apple® in October 2001. The definitions of the words headphones and earphones have changed over time, since both words were sometimes used to describe the same object. The word headphones is becoming more prevalent in the literature while loudspeaker is disappearing. Interestingly, loudspeaker design science is partially transferable to the earphone because of a certain similarity in the application. However, the psycho-acoustics of loudspeakers are not transferable, as will be explored later in section 1.1.2.2. The word earphone will be used in this thesis since it is defined by the International Electrotechnical Commission (2010a) as the generic term for a device closely coupled acoustically to the ear.
Specificities of hearing sound emitted by a distant source versus with earphones:
This section is not intended to be a comprehensive explanation of the human hearing process. However, it is useful to give a general notion of how sound waves are converted into electrical impulses that can be interpreted by the brain, by describing the ear and the basic principles of hearing from a distant source versus when wearing earphones.
Overview of the hearing process:
The human ear is divided into three sections: the outer ear, the middle ear and the inner ear. A sectional view of the ear is presented in Figure 1.3. What is commonly called the ear, the part protruding from the head, is scientifically known as the pinna (also known as the auricle). The pinna collects and transforms incoming sound waves and redirects them into the ear canal (or meatus), identified by (gg) in Figure 1.3. The sound waves then reach the eardrum, also called the tympanic membrane or tympanum (tf). The eardrum is the boundary between the outer ear and the middle ear. It is important to note that this boundary is the limit of non-intrusive physical measurement. As will be explained in section 1.2.1, a measurement apparatus known as a probe tube can be used to measure sound pressure close to the eardrum. The outer ear's function is to collect a sound pressure and transfer this pressure to the eardrum. A sound source close to the pinna or inserted into the ear canal, such as an earphone, alters the transfer function of the pinna. This is explored in sections 1.1.2.2 and 1.1.3.
The middle ear, in a first approximation, is a mechanical system matching the impedance of the sound in the outer ear, a gaseous medium, to the impedance of the fluid in the inner ear. This impedance matching is possible due to the connection between the eardrum (tf) and the oval window created by three bones, the ossicles, called the malleus (h), incus (a) and stapes (s). The malleus (h) is connected to the eardrum while the stapes (s) is connected to the oval window. The incus (a) links the malleus (h) to the stapes (s). The oval window (not explicitly seen in Figure 1.3) is the interface between the middle ear and the cochlea (vh). Two major components contribute to the impedance matching performed by the middle ear between the outer ear and the inner ear. The first is the ratio of the surface area of the eardrum to that of the oval window, which matches the force through a change of surface. The second is the lever effect created by the ossicles: the displacement of the malleus is slightly larger than the displacement of the stapes. This impedance matching leads to the inner ear.
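To make these two mechanisms concrete, the minimal sketch below estimates the resulting middle-ear pressure gain. The numeric values (eardrum effective area of about 55 mm², oval window area of about 3.2 mm², lever ratio of about 1.3) are typical textbook physiology figures assumed for illustration, not values taken from this thesis.

```python
# Rough estimate of the middle-ear pressure gain from the two mechanisms
# described above: the eardrum/oval-window area ratio and the ossicular
# lever effect. All numeric values are assumed textbook figures.
import math

A_eardrum = 55e-6       # effective eardrum area in m^2 (~55 mm^2, assumed)
A_oval_window = 3.2e-6  # oval window area in m^2 (~3.2 mm^2, assumed)
lever_ratio = 1.3       # malleus/stapes lever arm ratio (assumed)

area_gain = A_eardrum / A_oval_window  # same force over a smaller area
total_pressure_gain = area_gain * lever_ratio

print(f"area ratio   : {area_gain:.1f}x")
print(f"combined gain: {total_pressure_gain:.1f}x "
      f"(~{20 * math.log10(total_pressure_gain):.0f} dB)")
```

With these assumed values the combined gain is roughly 22 times, on the order of 27 dB, which is the usual order of magnitude cited for the middle-ear transformer action.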
The inner ear can be seen as the spectrum analyser of the ear. It is a complex system performing the conversion of the fluid pressure variations transmitted by the ossicles (h, a, s) into an electrical signal. This conversion is performed by a snail-shaped organ called the cochlea (vh, vht, tht). Within this organ, a series of hair-like structures move with the fluid pressure variations and activate nerves, which ultimately transmit the electrical signal through the nervous system to the brain. Further explanation of how this process occurs can be found in anatomy [Gelfand (2009)] and otolaryngology manuals [Nadol Jr. and McKenna (2005)].
Hearing a distant source: Sound paths, room reflections and the Head-Related Transfer Function (HRTF):
When someone hears a sound, a complex interaction between the sound source, the environment and the human body transforms the sound wave before it reaches the tympanic membrane. Section 1.1.2.1 describes the sound travelling toward a subject. Then, section 1.1.2.2 explains how that sound is transformed by contact with the body until it reaches the tympanic membrane. These transformations are natural parts of the hearing process, and humans learn and adapt to them. As a result, they occur almost subconsciously and need to be taken into account in an earphone design, as covered in section 1.1.3.
Sound paths and room reflections:
When in a room where one or many sound sources are emitting sound, a subject experiences the effect of the sound wave travelling through space, as presented in Figure 1.4. This illustration is simplified to a single source located in front of the subject, emitting sound toward the subject, with the centre of the source at the same height as the centre of the subject's ear canal entrances (this corresponds to 0° elevation and 0° azimuth). As a result, the sound takes different paths to reach the pinna of the subject. Three of all possible paths are shown in Figure 1.4 to demonstrate the combination of the direct and reverberant fields. The first path to reach the subject is the direct path (1). The second is the so-called early reflection (2). The last sound to reach the subject is the so-called late reflection (3), which travels a greater distance before reaching the subject. The sound travelling by paths 2 and 2+3 will not reach the subject at the same time as the direct sound, since paths 2 and 2+3 are longer and the speed of sound (≈ 343 m/s) is constant. The difference between an early reflection and a late reflection is the time it takes the sound to travel to the measurement point.
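As a rough numerical illustration, the arrival delay of each path follows directly from its length and the speed of sound. In the minimal sketch below, only the ≈ 343 m/s figure comes from the text; the path lengths are hypothetical placeholders.

```python
# Arrival-time differences for the three sound paths of Figure 1.4.
# Path lengths are illustrative placeholders; only the speed of sound
# (~343 m/s at room temperature) comes from the text.
SPEED_OF_SOUND = 343.0  # m/s

paths_m = {
    "direct (1)":            3.0,   # hypothetical source-subject distance
    "early reflection (2)":  5.5,   # hypothetical reflected path length
    "late reflection (2+3)": 12.0,  # hypothetical longer reflected path
}

direct_ms = 1000.0 * paths_m["direct (1)"] / SPEED_OF_SOUND
for name, length in paths_m.items():
    delay_ms = 1000.0 * length / SPEED_OF_SOUND
    print(f"{name:22s}: {delay_ms:5.1f} ms after emission "
          f"({delay_ms - direct_ms:4.1f} ms after the direct sound)")
```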
A room sound field can be quantified in relation to the reverberant field, that is, the quantity of reflections compared to the direct sound. Beranek (1993) states that all the reflections reaching a subject after the first 80 ms following the direct sound are part of the reverberant field. Conversely, the early sound encompasses all the sound reaching the subject within the first 80 ms, including the direct path of sound. From the definition established by Beranek (1993), it can be seen that the two extreme fields are free-field equivalents and diffuse-field equivalents, defined below (a numerical illustration of the 80 ms criterion follows these definitions):
• Free field: The determination of the sound power level radiated in an anechoic or a hemi-anechoic environment is based on the premise that the reverberant field is negligible at the positions of measurement for the frequency range of interest.
• Diffuse field: At any position in the room, energy is incident from all directions with equal intensities and random phases, and the reverberant sound does not vary with the receiver's position.
As noted by Hodgson (1994), several factors, such as surface reflection, surface-absorption distribution and surface-absorption magnitude as well as fitting density, are necessary to achieve a diffuse-field condition. Therefore, from the definition given above, a diffuse field is an ideal reverberant field, and if the required conditions are not met, the field is not diffuse.
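Beranek's 80 ms criterion can also be restated geometrically: since sound travels at ≈ 343 m/s, a reflection belongs to the early sound if its path is at most 0.080 s × 343 m/s ≈ 27.4 m longer than the direct path. The sketch below applies this classification; the detour lengths are illustrative assumptions.

```python
# Classifying reflections with Beranek's (1993) 80 ms criterion: a
# reflection arriving within 80 ms of the direct sound belongs to the
# early sound; later arrivals belong to the reverberant field.
SPEED_OF_SOUND = 343.0  # m/s
EARLY_WINDOW_S = 0.080  # Beranek's 80 ms criterion

def classify(path_length_m: float, direct_length_m: float) -> str:
    """Label a reflection path relative to the direct path."""
    extra_delay = (path_length_m - direct_length_m) / SPEED_OF_SOUND
    return "early sound" if extra_delay <= EARLY_WINDOW_S else "reverberant field"

# The 80 ms window corresponds to an extra path length of
# 0.080 s * 343 m/s = 27.4 m relative to the direct path.
for extra_m in (5.0, 20.0, 30.0):  # hypothetical detours
    print(f"+{extra_m:4.1f} m detour -> {classify(3.0 + extra_m, 3.0)}")
```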
Head-Related Transfer Function:
A Head-Related Transfer Function (HRTF) is a measurement of how a sound emitted from a source located at a specific position in three-dimensional space is modified, filtered and shaped by the human body before it reaches a specific point in the ear canal(s) of a subject. The most common method to acquire an HRTF is to position a microphone or a probe tube at the ear canal entrance and play a sound from a loudspeaker placed at a specific position relative to the subject in a defined acoustic field environment. The HRTF defines the filters required to process the signals sent to each ear in order to recreate the specified location of the virtual sound source in space, as explained later in this section. This binaural approach gives the subject the impression that a virtual sound source is located at the right position in space.
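As a minimal sketch of this binaural approach, a mono signal can be convolved with the left- and right-ear head-related impulse responses (HRIRs, the time-domain counterpart of the HRTF) measured for one source direction. The HRIR arrays below are random placeholders standing in for real measured data, and the sampling rate and filter length are assumptions.

```python
# Minimal sketch of binaural rendering with an HRTF pair for one source
# direction: convolve a mono signal with the left- and right-ear HRIRs.
# The HRIRs here are random placeholders; real ones come from a
# measurement such as the probe-tube method described above.
import numpy as np
from scipy.signal import fftconvolve

fs = 48_000                 # sampling rate in Hz (assumed)
mono = np.random.randn(fs)  # 1 s of test signal (placeholder)

# Hypothetical 256-tap HRIRs for, e.g., 0 deg elevation, 30 deg azimuth.
hrir_left = np.random.randn(256) * 0.01
hrir_right = np.random.randn(256) * 0.01

left_ear = fftconvolve(mono, hrir_left, mode="full")
right_ear = fftconvolve(mono, hrir_right, mode="full")
binaural = np.stack([left_ear, right_ear], axis=1)  # 2-channel signal
print(binaural.shape)  # (fs + 255, 2)
```

Played back over earphones, such a two-channel signal is what creates the impression of a source at the measured direction, provided the HRIRs correspond to that direction.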
Table of Contents
INTRODUCTION
CHAPTER 1 ACOUSTICS AND PSYCHO-ACOUSTICS OF EARPHONES
1.1 Specificities of hearing sound emitted by a distant source versus with earphones
1.1.1 Overview of the hearing process
1.1.2 Hearing a distant source: Sound paths, room reflections and the Head-Related Transfer Function (HRTF)
1.1.2.1 Sound paths and room reflections
1.1.2.2 Head-Related Transfer Function
1.1.3 Hearing with earphones
1.1.4 Coupling of the earphone with the ear
1.2 Measuring the sound quality of earphones
1.2.1 Objective measurement of sound quality
1.2.1.1 Acoustic test fixture
1.2.1.2 Probe microphone
1.2.2 Subjective measurement of sound quality
1.2.3 Loudness and earphone sound quality
1.2.3.1 Just-Noticeable Sound Changes
1.2.3.2 Signal matching and loudness function
1.2.3.3 The missing 6 dB effect
1.2.3.4 Identifying the reference recording SPL to differentiate the proper loudness transfer function
1.2.4 Ear canal sealed by the earphone
1.2.4.1 Increase of the Signal-to-Noise Ratio
1.2.4.2 Effect of sealing the ear canal on low-frequency reproduction
1.3 On the definition of an earphone target frequency response
1.3.1 Issues related to earphone sound reproduction
1.3.2 Comparison of studied frequency response
1.4 Conclusions on the psycho-acoustics of earphones
CHAPTER 2 MEASUREMENT OF EARPHONES: METHODOLOGY AND DEFINITION
2.1 Collocated measurement of earphones
2.1.1 Application to moving-coil micro-loudspeaker
2.1.2 Application to balanced-armature receivers
2.2 Non-collocated measurement of earphones
2.2.1 Frequency Response Function
2.2.2 Harmonic Distortion
2.2.3 Intermodulation Distortion
2.2.4 Multi-tone Distortion
2.2.5 Triggered Distortion
2.2.6 Relationship between distortion and physical phenomena in a moving-coil micro-loudspeaker
2.3 Conclusions on the measurement of earphones
CHAPTER 3 EARPHONE MODELLING AND SIMULATION
3.1 Fundamentals of electro-acoustic modelling
3.1.1 Impedance of physical quantities
3.1.2 Physical analogy of components by domain
3.2 Modelling earphone-ear system coupling
3.2.1 Common methods used to describe physical systems
3.2.1.1 Control System Block Diagram modelling method
3.2.1.2 Port modelling method
3.2.2 Models of a human ear
3.2.3 Models of moving-coil micro-loudspeakers
3.2.3.1 Simple and complex lumped-element model
3.2.3.2 Two-Port model
3.2.3.3 Block-diagram model
3.2.4 Models of balanced-armature micro-loudspeakers
3.2.4.1 Lumped-elements models
3.2.4.2 Complex control system block-diagram model
3.2.5 Models of acoustical features found in earphones
3.2.5.1 Acoustic mass, resistance and compliance
3.2.5.2 Horn
3.2.5.3 Sudden section change
3.2.5.4 Synthetic material membrane
3.2.5.5 Perforated sheets, meshes and foams
3.2.6 Limitations of the modelling and simulation methods
3.2.7 Simulation and computation method using two-port models
3.3 Introduction to Design, Modelling and Simulation of Earphones with Simulinkยฎ
3.3.1 Abstract
3.3.2 Introduction
3.3.3 Description of modelling methods
3.3.3.1 Ordinary Differential Equation and Differential Algebraic Equation
3.3.3.2 Lumped-Element circuit abstraction
3.3.3.3 Control System Block Diagram
3.3.4 Models of acoustical components
3.3.4.1 Model of an Acoustic Mass (MA)
3.3.4.2 Model of an Acoustic Compliance (CA)
3.3.4.3 Model of Acoustic Resistance (RA)
3.3.4.4 Models of Moving Coil Micro-Loudspeakers
3.3.4.5 Balanced Armature Micro-Loudspeakers
3.3.4.6 Ear Simulator Model
3.3.5 Simulation with Simulinkยฎ
3.3.6 Selection of a Solver
3.3.6.1 Coupling Between Domain Sections of the Model
3.3.6.2 Simulating Frequency Response Function and Plotting Poles and Zeros
3.3.7 Case study: Lumped-elements of two micro-loudspeakers
3.3.7.1 Lumped-Element Moving-Coil Micro-Loudspeaker Simulated with Simscapeโข
3.3.7.2 Results of Balanced-Armature Micro-Loudspeaker Simulation
3.3.8 Discussion on simulating earphones with Simulinkยฎ
3.3.9 Conclusions
3.3.10 Acknowledgement
3.4 Conclusions on earphone modelling and simulation
CHAPTER 4 FUTURE WORKS
4.1 Improvement of the measurement apparatuses
4.2 Assessment of the psycho-acoustic impact of the coupling between the earphone and the ear
4.3 Enhancing user experience through customization
4.3.1 Post-setting the relative loudness of earphones
4.3.2 Using the impedance frequency relation for compensation
4.4 Conclusions on future works
CONCLUSION
APPENDIX I SMALL-SIGNAL APPROXIMATION OF A MOVING-COIL MICRO-LOUDSPEAKER
APPENDIX II MATLAB, SIMULINK AND SIMSCAPE LIBRARY
APPENDIX III EXPERIMENTAL SET-UP
APPENDIX IV JOURNAL OF THE AUDIO ENGINEERING SOCIETY SUBMISSION
REFERENCES
BIBLIOGRAPHY