Segmentation, tracking and visualization of biological objects in 3D fluorescence microscopy

The human visual system has been the focus of extensive research over the past decades. Indeed, we are seamlessly able to recognize people, objects, shapes, colors, textures, motion, distances, etc., thanks to a powerful stereo-vision system backed by a visual memory and a mysterious black box: the visual cortex. Whereas the operating mechanisms of the human eye are nowadays relatively well understood, how information is stored and processed by the brain remains an open question.

Computer vision deals with similar recognition problems in the world of digital images. Whether knowledge about human vision should be used (or not) as a starting point for computer vision systems is still an open debate. While the formation process of digital images is globally identical to that of images in the human eye, their analysis and interpretation can rely on very little prior knowledge. This is probably the main reason why image processing, analysis and understanding has become a huge and growing research topic in computer science, joining efforts from very diverse domains.

There are two main motivations for developing computer vision systems. The first is to assist or replace individuals in repetitive tasks of everyday life involving visual control. Applications may relate either to professional tasks (e.g., quality control, fingerprint authentication, medical diagnosis, architecture restoration, military drones) or to personal comfort amenities such as face-recognition door locks, self-driving cars, and robot assistants/companions. This wide range of applications leads to an ever-growing market that continuously follows the leading edge of computer vision science. The second motivation is the exponentially growing amount of data coming from digital cameras and video systems. Managing such a colossal amount of data cannot be done manually, and thus requires automated systems to perform indexing, classification, recognition and information extraction.

With these considerations in mind, we focus on the recognition and information-extraction aspects of computer vision, and more particularly on their application to current challenges in biological microscopy, as we shall see below.

Over the past decade, biology has certainly become one of the fields that depends the most upon imaging, due to substantial progress in microscopy. In this context, fluorescence microscopy has become the major source of information, initiating the migration from subjective visual inspection to robust quantitative analysis. We first give below a brief historical overview of fluorescence imaging techniques, for which further details can be found in [Claxton et al., 2006] and [Amos and White, 2003].

The principle of fluorescence imaging is to exploit the capacity of organic and inorganic specimens to absorb and subsequently re-emit light. Fluorescence was discovered in 1852 by Sir George Stokes, who noticed that a fluorspar mineral emitted red light under ultraviolet excitation. Similar observations were then made on many other specimens, stimulating the development of fluorescence microscopes to produce digital images of this phenomenon.

The first fluorescence microscopes appeared in 1904, and were first applied to biology in the 1930s with the introduction of fluorescently labeled antibodies. The explosion of fluorescence imaging started in the late 1970s, after antibody labeling was extended to ordinary proteins, and thereby to cellular and subcellular structures.

With the increasing specificity of fluorescent probes, traditional fluorescence microscopes (also called widefield microscopes) started to show their limits: images are corrupted by out-of-focus light, mostly due to the thickness of the specimen. Although many researchers avoided the problem by focusing their studies exclusively on thin objects, others were interested in imaging a broader range of objects at higher detail. Attention then turned to confocal microscopes.

The principle of the confocal microscope was invented by Minsky in 1955. The idea is to illuminate a point of the specimen with a diffraction-limited spot, and to place an aperture in front of the detector that rejects out-of-focus light coming back from the medium. The operation is then repeated by sweeping the focal point over the whole domain to produce the final image, hence the name point-scanning microscope. This technique induces a slicing effect that eliminates almost completely the blur observed with traditional microscopes, making it well suited to efficient three-dimensional imaging via motorized Z-step devices. The drawback of point-scanning systems is that acquisition is much slower than in the widefield case, since image pixels are acquired one at a time. The need for faster acquisition led to several enhancements such as the spinning-disk confocal microscope [Egger and Petran, 1967] and the line-scanning confocal microscope [Sheppard and Mao, 1988]. The former illuminates the medium at several locations simultaneously, each spot being focused upon by a specific aperture in a Nipkow-type spinning disk, imitating multiple confocal microscopes working in parallel. The latter acquires a whole line of pixels at once using a line-shaped illumination and a slit detector.
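To make the optical sectioning argument concrete, the following toy computation (a minimal sketch in Python with numpy/scipy; the sizes, blur widths and Gaussian PSF model are illustrative assumptions, not calibrated optics) compares a widefield detector, which sums light from every depth, with an ideal point-pinhole confocal detector, whose effective PSF is the pointwise product of the excitation and detection PSFs:

    import numpy as np
    from scipy.ndimage import convolve1d

    def gaussian_psf(sigma, radius=30):
        # Unit-area 1D Gaussian used as both excitation and detection PSF.
        x = np.arange(-radius, radius + 1)
        k = np.exp(-0.5 * (x / sigma) ** 2)
        return k / k.sum()

    n_x, n_z = 256, 21                 # lateral samples, depth slices (toy values)
    focal_plane = n_z // 2

    # Synthetic specimen: one point in the focal plane, one defocused point.
    specimen = np.zeros((n_z, n_x))
    specimen[focal_plane, 64] = 1.0
    specimen[focal_plane + 8, 192] = 1.0

    widefield = np.zeros(n_x)
    confocal = np.zeros(n_x)
    for z in range(n_z):
        sigma = 1.0 + 0.8 * abs(z - focal_plane)   # blur grows with defocus
        psf = gaussian_psf(sigma)
        widefield += convolve1d(specimen[z], psf)  # no pinhole: every depth contributes
        # Point pinhole: effective PSF = excitation PSF * detection PSF
        # (pointwise product), whose integrated energy decays with defocus.
        confocal += convolve1d(specimen[z], psf * psf)

    # Ratio of out-of-focus to in-focus peak height in each image.
    print("widefield:", widefield[192] / widefield[64])
    print("confocal: ", confocal[192] / confocal[64])

Because the pointwise product of two unit-area Gaussians has an integral that shrinks as 1/sigma, the out-of-focus peak survives in the widefield profile but is strongly attenuated in the confocal one, which is precisely the slicing effect described above.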

Throughout the 1990s, confocal microscopes benefited from advances in optical and electronic components, improving their efficiency, reliability and robustness to noise. Meanwhile, fluorescent probes tailored to match laser excitation wavelengths were introduced. Coupled with the ever-increasing power and storage capabilities of modern computers, confocal microscopes thus gained tremendous interest in countless applications. Although they remain more expensive than conventional microscopes, the recent distribution of personal confocal microscopes has decreased the price of low-end systems and increased the number of individual users, making them a "must-have" tool in any biological research facility.

Joint efforts in biology and microscopy have made it possible to image a broad range of cellular and subcellular functions, both in vitro and in vivo, in two and three dimensions at different wavelengths (colors), possibly combined with time-lapse imaging to investigate cellular dynamics.

Yet, 2D imaging has some limitations, in particular for the study of objects that either move or exhibit a heterogeneous shape along the depth axis, e.g., cells and nuclei. In the following, we shall focus on two such applications:
• Nuclear morphology
Cell nuclear morphology is a topic of active research in biology. A large array of biological functions is accompanied by major changes in the geometry of the nucleus [Leman and Getzenberg, 2002]. Determining exactly how geometric characteristics relate to cellular function requires accurate morphological information; one therefore has to turn to 3D images in order to analyze the entire structure of cellular and sub-cellular compartments [Vonesch et al., 2006] (a minimal measurement sketch is given after this list).
• Cell motility
Cell migration is central to several fundamental biological processes, many of which have important medical implications. Understanding the mechanisms of cell migration and how they can be controlled to prevent or cure disease is thus also an important goal for biomedical research.
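As an illustration of the morphological measurements mentioned in the first item above, the following sketch extracts basic 3D shape descriptors (volume, surface area, sphericity) from a nucleus segmentation. It is a minimal sketch, assuming a binary 3D mask is already available and that scikit-image is installed; voxels are treated as isotropic, whereas real confocal stacks usually require a z-spacing correction. The function name is hypothetical, for illustration only.

    import numpy as np
    from skimage import measure

    def nucleus_descriptors(mask):
        # mask: 3D boolean array, True inside the nucleus (isotropic voxels assumed).
        volume = float(mask.sum())                 # volume as a voxel count
        # Triangulate the mask surface to estimate its area.
        verts, faces, _, _ = measure.marching_cubes(mask.astype(np.float32), level=0.5)
        area = measure.mesh_surface_area(verts, faces)
        # Sphericity: surface area of a sphere of equal volume divided by the
        # measured area; equals 1.0 for a perfect sphere, less for rougher shapes.
        sphere_area = np.pi ** (1.0 / 3.0) * (6.0 * volume) ** (2.0 / 3.0)
        return {"volume": volume, "surface_area": area,
                "sphericity": sphere_area / area}

    # Toy check on a synthetic spherical "nucleus": sphericity should be close to 1.
    zz, yy, xx = np.mgrid[:64, :64, :64]
    ball = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
    print(nucleus_descriptors(ball))

Descriptors of this kind are what make it possible to relate nuclear geometry to cellular function quantitatively rather than by visual inspection.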


Table of Contents

Chapter 1 Introduction
1.1 Preamble
1.1.1 Human vision vs. Computer vision
1.1.2 Computer vision: why and what for?
1.1.3 Computer vision in biology
1.2 Problems and challenges
1.3 Proposed solutions
Chapter 2 State-of-the-Art: deformable contours in computerized image analysis
2.1 Introduction
2.1.1 Energy minimization
2.1.2 Organization of the review
2.2 Parametric or explicit models
2.2.1 Contour representations
2.2.2 Edge-based models
2.2.3 Region-based models
2.2.4 3D discrete models
2.3 Level set or implicit models
2.3.1 From fluid dynamics to images
2.3.2 Optimizations
2.3.3 Edge-based models
2.3.4 Region-based models
Chapter 3 Conclusion
