Models of a human ear

ACOUSTICS AND PSYCHO-ACOUSTICS OF EARPHONES:

In the literature, earphones represent a relatively new field of study compared to loudspeakers, one that emerged with the rise of Portable Electronic Devices (PEDs). A Google® Ngram is a visual representation of the occurrence of a specific string within books indexed by Google over a given time frame. In Figure 1.1, it can be seen that the presence of the word loudspeaker in books peaked in the mid-1950s, when it was six times more prevalent than earphones or headphones. The presence of the word headphones increased after 1965 and surpassed loudspeaker in Google Books in the years following the introduction of the iPod® by Apple® in October 2001. The definitions of the words headphones and earphones have also shifted over time, since both were sometimes used to describe the same object. The word headphones is becoming more prevalent in the literature while loudspeaker is declining. Interestingly, loudspeaker design science is partially transferable to the earphone because of a certain similarity in the application. However, the psycho-acoustics of loudspeakers are not transferable, as will be explored later in section 1.1.2.2. The word earphone will be used in this thesis since it is defined by the International Electrotechnical Commission (2010a) as the generic term for a device closely coupled acoustically to the ear.

 Specificities of hearing sound emitted by a distant source versus with earphones:

This section is not intended to be a comprehensive explanation of the human hearing process. However, it is useful to give a general notion of how sound waves are converted into electrical impulses that can be interpreted by the brain, by describing the ear and the basic principles of hearing from a distant source versus when wearing earphones.

 Overview of the hearing process:

The human ear is divided into three sections: the outer ear, the middle ear and the inner ear. A sectional view of the ear is presented in Figure 1.3. What is commonly called the ear, the part protruding from the head, is scientifically known as the pinna (also known as the auricle). The pinna collects and transforms incoming sound waves and redirects them into the ear canal (or meatus), identified by (gg) in Figure 1.3. The sound waves then reach the eardrum, also called the tympanic membrane or tympanum (tf). The eardrum is the boundary between the outer ear and the middle ear. It is important to note that this boundary is the limit of non-intrusive physical measurement. As will be explained in section 1.2.1, a measurement apparatus known as a probe tube can be used to measure sound pressure close to the eardrum. The outer ear's function is to collect sound pressure and transfer this pressure to the eardrum. A sound source close to the pinna or inserted into the ear canal, such as an earphone, alters the transfer function of the pinna. This is explored in sections 1.1.2.2 and 1.1.3.
The middle ear, to a first approximation, is a mechanical system matching the impedance of the sound in the outer ear, a gaseous medium, to the impedance of the fluid in the inner ear. This impedance matching is possible due to the connection between the eardrum (tf) and the oval window created by three bones, the ossicles, called the malleus (h), incus (a) and stapes (s). The malleus (h) is connected to the eardrum while the stapes (s) is connected to the oval window. The incus (a) links the malleus (h) to the stapes (s). The oval window (not explicitly shown in Figure 1.3) is the interface between the middle ear and the cochlea (vh). Two major components contribute to the impedance matching performed by the middle ear between the outer ear and the inner ear. The first is the ratio of the surface area of the eardrum to that of the oval window, which increases pressure by concentrating the same force on a smaller surface. The second is the lever effect created by the ossicles: the displacement of the malleus is slightly larger than the displacement of the stapes, so the force is increased at the stapes. This impedance matching conditions the signal transmitted to the inner ear.
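As a rough illustrative calculation (a sketch using commonly cited textbook values, not figures measured in this thesis), the overall pressure gain of the middle ear can be approximated as the product of the area ratio and the ossicular lever ratio:

\[
\frac{p_{\text{oval}}}{p_{\text{eardrum}}} \approx \frac{A_{\text{eardrum}}}{A_{\text{oval}}} \cdot \ell \approx \frac{55~\text{mm}^2}{3.2~\text{mm}^2} \times 1.3 \approx 22
\]

which corresponds to a pressure gain of roughly \(20\log_{10}(22) \approx 27\) dB. The exact values vary between individuals and between sources in the literature.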
The inner ear can be seen as the spectrum analyser of the ear. It is a complex system performing the conversion of the fluid pressure variations transmitted by the ossicles (h, a, s) into an electrical signal. This conversion is performed by a snail-shaped organ called the cochlea (vh, vht, tht). Within this organ, a series of hair-like structures move with the fluid pressure variations and activate nerves, which ultimately transmit the electrical signal through the nervous system to the brain. Further explanation of this process can be found in anatomy [Gelfand (2009)] and otolaryngology manuals [Nadol Jr. and McKenna (2005)].

 Hearing a distant source: Sound paths, room reflections and the Head-Related Transfer Function (HRTF):

When someone hears a sound, a complex interaction between the sound source, the environment and the human body transforms the sound wave before it reaches the tympanic membrane. Section 1.1.2.1 describes the sound travelling toward a subject, and section 1.1.2.2 then explains how that sound is transformed by contact with the body until it reaches the tympanic membrane. These transformations are natural parts of the hearing process, and humans learn and adapt to them. As a result, they occur almost subconsciously and need to be taken into account in earphone design, as covered in section 1.1.3.

 Sound paths and room reflections:

When in a room where one or many sound sources are emitting sound, a subject experiences the effect of the sound wave travelling through space, as presented in Figure 1.4. This illustration is simplified to a single source, located in front of the subject, emitting sound toward the subject, with the center of the source at the same height as the center of the subject's ear canal entrances (this corresponds to 0° elevation and 0° azimuth). As a result, the sound takes different paths to reach the pinna of the subject. Three of all possible paths are shown in Figure 1.4 to demonstrate the combination of the direct and reverberant fields. The first path to reach the subject is the direct path (1). The second is the so-called early reflection (2). The last sound to reach the subject is the so-called late reflection (3), which travels a greater distance before reaching the subject. The sound travelling by paths 2 and 2+3 will not reach the subject at the same time as the direct sound, since these paths are longer and the speed of sound (≈ 343 m/s) is constant. The difference between an early reflection and a late reflection is the time it takes the sound to travel to the measurement point.
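As a simple worked example of these arrival-time differences (the numbers are illustrative, not taken from Figure 1.4), the extra delay of a reflected path is the extra travel distance divided by the speed of sound:

\[
\Delta t = \frac{\Delta d}{c} \approx \frac{1~\text{m}}{343~\text{m/s}} \approx 2.9~\text{ms}
\]

so a reflection travelling one metre farther than the direct sound arrives about 2.9 ms later at the subject.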
A room sound field can be quantified in relation to the reverberant field, that is, the quantity of reflections compared to the direct sound. Beranek (1993) states that all the reflections reaching a subject later than 80 ms after the direct sound are part of the reverberant field; conversely, the early sound encompasses all the sound reaching the subject within the first 80 ms, including the direct path. From the definitions established by Beranek (1993), it can be seen that the two extreme fields are the free field and the diffuse field, defined below (a brief numerical illustration of the 80 ms criterion follows these definitions):
Free field: The determination of the sound power level radiated in an anechoic or a hemi-anechoic environment is based on the premise that the reverberant field is negligible at the positions of measurement for the frequency range of interest.
Diffuse field: At any position in the room, energy is incident from all directions with equal intensities and random phases and the reverberant sound does not vary with the receiver’s position.
As noted by Hodgson (1994), several factors such as surface reflection, surface-absorption distribution and surface-absorption magnitude, as well as fitting density, are necessary to achieve a diffuse-field condition. Therefore, from the definition given above, a diffuse field is an ideal reverberant field; if the required conditions are not met, the field is not diffuse.
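Beranek's 80 ms criterion introduced above can be made concrete with a minimal sketch (MATLAB, with hypothetical path lengths; none of the values below come from this thesis) that converts path lengths into arrival delays relative to the direct sound and flags the reverberant contributions:

% Classify arrivals as early sound or reverberant field using
% Beranek's 80 ms criterion relative to the direct sound.
c = 343;                        % speed of sound (m/s)
d = [5.0 6.2 35.5];             % hypothetical path lengths: direct, early, late (m)
t = d / c;                      % arrival times (s)
delay = t - t(1);               % delays relative to the direct sound (s)
isReverberant = delay > 0.080;  % true for arrivals later than 80 ms
fprintf('delays (ms): %s\n', num2str(delay * 1e3, ' %.1f'));

With these numbers, the 6.2 m path arrives about 3.5 ms after the direct sound (early sound), while the 35.5 m path arrives about 88.9 ms later and therefore belongs to the reverberant field.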

 Head Related Transfer Function:

A Head-Related Transfer Function (HRTF) is a measurement of how a sound emitted from a source located at a specific position in three-dimensional space is modified, filtered and shaped by the human body before it reaches a specific point in the ear canal(s) of a subject. The most usual method to acquire an HRTF is to position a microphone or a probe tube at the ear canal entrance and play a sound from a loudspeaker positioned at a specific position relative to the subject in a defined acoustic field environment. The HRTF defines the filters required to process the signals sent to each ear in order to recreate the specified location of a virtual sound source in space, as explained later in this section. This binaural approach gives the subject the impression that a virtual sound source is located at the right position in space.
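As a minimal sketch of this binaural filtering idea (all variables below are hypothetical placeholders, not data from this thesis), the time-domain counterparts of the left and right HRTFs, called head-related impulse responses (HRIRs), are convolved with a mono signal to produce the two ear signals:

% Binaural rendering by convolution with a pair of HRIRs (MATLAB).
% hrirL and hrirR stand in for impulse responses measured for the
% desired source direction; here they are crude placeholders that
% only mimic an interaural time and level difference.
fs = 48000;                                  % sample rate (Hz), assumed
x = 0.1 * randn(fs, 1);                      % placeholder mono signal (1 s of noise)
hrirL = [zeros(10, 1); 1.0; zeros(117, 1)];  % left ear: 10-sample delay
hrirR = [zeros(14, 1); 0.8; zeros(113, 1)];  % right ear: later and weaker arrival
yL = conv(x, hrirL);                         % left-ear signal
yR = conv(x, hrirR);                         % right-ear signal
y = [yL yR];                                 % two-channel binaural signal
% soundsc(y, fs);                            % uncomment to listen over earphones

Because the right-ear placeholder is delayed and attenuated relative to the left, this sketch crudely places the virtual source toward the subject's left; measured HRIRs would additionally provide the direction-dependent spectral shaping performed by the pinna and the body.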


Table of Contents

INTRODUCTION
CHAPTER 1 ACOUSTICS AND PSYCHO-ACOUSTICS OF EARPHONES 
1.1 Specificities of hearing sound emitted by a distant source versus with earphones
1.1.1 Overview of the hearing process
1.1.2 Hearing a distant source: Sound paths, room reflections and the Head-Related Transfer Function (HRTF)
1.1.2.1 Sound paths and room reflections
1.1.2.2 Head Related Transfer Function
1.1.3 Hearing with earphones
1.1.4 Coupling of the earphone with the ear
1.2 Measuring the sound quality of earphones
1.2.1 Objective measurement of sound quality
1.2.1.1 Acoustic test fixture
1.2.1.2 Probe microphone
1.2.2 Subjective measurement of sound quality
1.2.3 Loudness and earphone sound quality
1.2.3.1 Just-Noticeable Sound Changes
1.2.3.2 Signal matching and loudness function
1.2.3.3 The missing 6 dB effect
1.2.3.4 Identifying the reference recording SPL to differentiate the proper loudness transfer function
1.2.4 Ear canal sealed by the earphone
1.2.4.1 Increase of the Signal-to-Noise Ratio
1.2.4.2 Effect of sealing the ear canal on the low frequencies reproduction
1.3 On the definition of an earphone target frequency response
1.3.1 Issues related to earphone sound reproduction
1.3.2 Comparison of studied frequency response
1.4 Conclusions on the psycho-acoustics of earphone
CHAPTER 2 MEASUREMENT OF EARPHONES: METHODOLOGY AND DEFINITION
2.1 Collocated measurement of earphones
2.1.1 Application to moving-coil micro-loudspeaker
2.1.2 Application to balanced-armature receivers
2.2 Non-collocated measurement of earphones
2.2.1 Frequency Response Function
2.2.2 Harmonic Distortion
2.2.3 Intermodulation Distortion
2.2.4 Multi-tone Distortion
2.2.5 Triggered Distortion
2.2.6 Relationship between distortion and physical phenomena in a moving-coil micro-loudspeaker
2.3 Conclusions on the measurement of earphones
CHAPTER 3 EARPHONE MODELLING AND SIMULATION 
3.1 Fundamentals of electro-acoustic modelling
3.1.1 Impedance of physical quantities
3.1.2 Physical analogy of components by domain
3.2 Modelling earphone-ear system coupling
3.2.1 Common methods used to describe physical systems
3.2.1.1 Control System Block Diagram modelling method
3.2.1.2 Port modelling method
3.2.2 Models of a human ear
3.2.3 Models of moving-coil micro-loudspeakers
3.2.3.1 Simple and complex lumped-element model
3.2.3.2 Two-Port model
3.2.3.3 Block-diagram model
3.2.4 Models of balanced-armature micro-loudspeakers
3.2.4.1 Lumped-elements models
3.2.4.2 Complex control system block-diagram model
3.2.5 Models of acoustical features found in earphones
3.2.5.1 Acoustic mass, resistance and compliance
3.2.5.2 Horn
3.2.5.3 Sudden section change
3.2.5.4 Synthetic material membrane
3.2.5.5 Perforated sheets, meshes and foams
3.2.6 Limitations of the modelling and simulation methods
3.2.7 Simulation and computation method using two-port models
3.3 Introduction to Design, Modelling and Simulation of Earphones with Simulink®
3.3.1 Abstract
3.3.2 Introduction
3.3.3 Description of modelling methods
3.3.3.1 Ordinary Differential Equation and Differential Algebraic Equation
3.3.3.2 Lumped-Element circuit abstraction
3.3.3.3 Control System Block Diagram
3.3.4 Models of acoustical components
3.3.4.1 Model of an Acoustic Mass (MA)
3.3.4.2 Model of an Acoustic Compliance (CA)
3.3.4.3 Model of Acoustic Resistance (RA)
3.3.4.4 Models of Moving Coil Micro-Loudspeakers
3.3.4.5 Balanced Armature Micro-Loudspeakers
3.3.4.6 Ear Simulator Model
3.3.5 Simulation with Simulink®
3.3.6 Selection of a Solver
3.3.6.1 Coupling Between Domain Sections of the Model
3.3.6.2 Simulating Frequency Response Function and Plotting Poles and Zeros
3.3.7 Case study: Lumped-elements of two micro-loudspeakers
3.3.7.1 Lumped-Element Moving-Coil Micro-Loudspeaker Simulated with Simscape™
3.3.7.2 Results of Balanced-Armature Micro-Loudspeaker Simulation
3.3.8 Discussion on simulating earphones with Simulink®
3.3.9 Conclusions
3.3.10 Acknowledgement
3.4 Conclusions on earphone modelling and simulation
CHAPTER 4 FUTURE WORKS
4.1 Improvement of the measurement apparatuses
4.2 Assessment of the psycho-acoustic impact of the coupling between the earphone and the ear
4.3 Enhancing user experience through customization
4.3.1 Post-setting the relative loudness of earphones
4.3.2 Using the impedance frequency relation for compensation
4.4 Conclusions on future works
CONCLUSION
APPENDIX I SMALL-SIGNAL APPROXIMATION OF A MOVING-COIL MICRO-LOUDSPEAKER
APPENDIX II MATLAB, SIMULINK AND SIMSCAPE LIBRARY
APPENDIX III EXPERIMENTAL SET-UP
APPENDIX IV JOURNAL OF THE AUDIO ENGINEERING SOCIETY SUBMISSION
REFERENCES
BIBLIOGRAPHY
