Computed tomography image compressibility and limitations of compression ratio based guidelines

Compression with JPEG 2000

JPEG is probably the most widely used image compression standard. It is used in virtually all digital cameras and it is currently the preferred image format for transmission over the Internet. However, JPEG was published in 1992, and modern applications such as digital cinema, medical imaging and cultural archiving now expose some of its shortcomings. These deficiencies include poor lossless compression performance, inadequate scalability and significant blocking artifacts at low bit rates.

In the early 1990s, researchers began working on compression schemes based on the wavelet transforms pioneered by Daubechies (Daubechies, 1988) and Mallat (Mallat, 1989) with their work on orthogonal wavelets and multi-resolution analysis. These novel techniques were able to overcome most weaknesses of the original JPEG codec. Later, in the mid-1990s, the Joint Photographic Experts Group started standardization efforts based on wavelets that culminated in the publication of the JPEG 2000 image coding system by the International Organization for Standardization (ISO) as ISO/IEC 15444-1:2000 and by the International Telecommunication Union (ITU) as T.800 (Taubman and Marcellin, 2002). Major improvements came from the use of the Discrete Wavelet Transform (DWT), a departure from the Discrete Cosine Transform (DCT) used in JPEG, which enabled spatial localization, flexible quantization and entropy coding, as well as clever stream organization. These enhancements gave the JPEG 2000 codec its new features, including improved compression efficiency, multi-resolution scaling, lossy and lossless compression from a single code-stream, Region Of Interest (ROI) coding, random spatial access and progressive quality decoding.

Most compression algorithms can be broken down into four fundamental steps: preprocessing, transform, quantization and entropy coding. With JPEG 2000, a fifth step, code-stream organization, enables some of the most advanced features of the codec, such as random spatial access and progressive decoding.
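To make the lossless/lossy duality and the multi-resolution access concrete, the following is a minimal sketch using the glymur Python bindings to OpenJPEG; the synthetic 16-bit array and the file names are illustrative stand-ins, not part of the cited standard.

    import numpy as np
    import glymur

    # Stand-in for a 12-bit CT slice stored in a 16-bit array
    image = np.random.randint(0, 4096, (512, 512), dtype=np.uint16)

    # Lossless coding: reversible 5/3 wavelet, single code-stream
    glymur.Jp2k('slice_lossless.jp2', data=image)

    # Lossy coding: irreversible 9/7 wavelet with three quality layers
    # at compression ratios of 20:1, 10:1 and 5:1
    glymur.Jp2k('slice_lossy.jp2', data=image, cratios=[20, 10, 5], irreversible=True)

    # Multi-resolution scaling: decode a half-resolution version directly
    # from the code-stream without reading the full image
    jp2 = glymur.Jp2k('slice_lossy.jp2')
    half_resolution = jp2[::2, ::2]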

Storage and communication with DICOM

Digital Imaging and Communications in Medicine (DICOM) is the leading standard in medical imaging. Work started in 1983 (NEMA, 2016) as a joint effort between the National Electrical Manufacturers Association (NEMA) and the American College of Radiology (ACR) to provide interoperability across vendors when handling, printing, storing and transmitting medical images. The first version was published in 1985 and the first revision, version 2.0, quickly followed in 1988. Both versions only allowed raw pixel storage and transfer. In 1989, DICOM working group 4 (WG4), which was tasked with overseeing the adoption of image compression, published its recommendations in a document titled “Data compression standard” (NEMA, 1989). It concluded that compression did add value and defined a custom compression model with many optional prediction models and entropy coding techniques. Unfortunately, the fragmentation caused by these many implementation possibilities meant that while images were compressed internally when stored, transmission over networks was still performed with uncompressed raw pixels to preserve interoperability.
DICOM 3.0 was released in 1993 and included new compression schemes: the JPEG standard published the year before, and Run-Length Encoding (RLE) based on the PackBits algorithm found in the Tagged Image File Format (TIFF). In this revision, compression capabilities could also be negotiated before each transmission, allowing fully interoperable lossy and lossless compression. In the mid-1990s, significant advances were made in wavelet-based compression techniques. At the time, they offered flexible compression scalability and higher quality at low bit rates, but no open standard format was available, causing interoperability issues.
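In practice, the negotiated compression is reflected in an object's transfer syntax UID. A minimal sketch with pydicom, assuming a recent version together with a JPEG 2000 encoder plugin such as pylibjpeg-openjpeg, and an illustrative file name:

    from pydicom import dcmread
    from pydicom.uid import JPEG2000Lossless

    # Read an uncompressed dataset (illustrative file name)
    ds = dcmread('ct_slice.dcm')
    print(ds.file_meta.TransferSyntaxUID)  # e.g. Explicit VR Little Endian

    # Re-encode the pixel data losslessly; the transfer syntax UID is what
    # peers negotiate against when exchanging the object over the network
    ds.compress(JPEG2000Lossless)
    ds.save_as('ct_slice_j2k.dcm')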

Current state of lossy image compression in the medical domain

A small survey of radiologists’ opinions in 2006 (Seeram, 2006a) revealed that lossy compression was already being used for both primary readings and clinical reviews in the United States. Canadian institutions, on the other hand, were much more conservative with respect to irreversible compression. In this survey, five radiologists from the United States responded; two of them reported using lossy compression before primary reading, but all of them reported using lossy compression for clinical reviews. The compression ratios used ranged between 2.5:1 and 10:1 for computed tomography (CT) and up to 20:1 for computed radiography. Surprisingly, only three Canadian radiologists out of six reported using lossy compression, and of these three, two declared using compression ratios between 2.5:1 and 4:1, which are effectively lossless or very close to lossless levels. Almost all radiologists who answered claimed they were concerned about litigation that could result from an incorrect diagnosis based on lossy compressed images. All radiologists were aware that different image modalities require different compression ratios, that is, that some types of images are more “tolerant” to compression.
Because of the risks involved with lossy diagnostic image compression, a common compression target is the visually lossless threshold. The assumption is that if a trained radiologist cannot see any difference between the original and compressed images, compression cannot possibly impact diagnostic performance and liability issues would be minimal. Finding the visually lossless threshold usually means determining the compression ratio at which trained radiologists start to perceive a difference, typically in a two-alternative forced choice (2AFC) experiment where the observer can alternate between both images as many times as required.
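A minimal sketch of how such 2AFC results might be analyzed, with purely hypothetical trial counts: at each compression ratio the number of correct identifications is tested against chance (p = 0.5), and the visually lossless threshold can be taken as the largest ratio whose detection rate is still indistinguishable from chance.

    from scipy.stats import binomtest

    trials_per_ratio = 40
    correct_by_ratio = {4: 22, 8: 24, 12: 29, 16: 34}  # hypothetical observer responses

    for ratio, correct in correct_by_ratio.items():
        p = binomtest(correct, trials_per_ratio, p=0.5, alternative='greater').pvalue
        verdict = 'above chance' if p < 0.05 else 'indistinguishable from chance'
        print(f'{ratio}:1  {correct}/{trials_per_ratio} correct  p = {p:.3f}  ({verdict})')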

Image quality assessment techniques

Most JPEG 2000 coders allow compression levels to be configured by specifying either a target quality or a target rate. In the first case, the code-blocks are simply truncated when the target quality, usually expressed in terms of MSE, is reached. Similarly, in the latter case, a quality metric, also usually the MSE, is minimized under the constraint of the targeted rate. Unfortunately, the MSE (and its derivative, the Peak Signal-to-Noise Ratio [PSNR]) is a metric that, like the CR, is poorly correlated with the image fidelity perceived by human observers (Johnson et al., 2011; Kim et al., 2008c; Oh et al., 2007; Ponomarenko et al., 2010; Przelaskowski et al., 2008; Sheikh and Bovik, 2006; Sheikh et al., 2006; Zhou Wang and Bovik, 2009). Many alternative image quality metrics have been developed to address this issue. The goal is, of course, to find a quality metric that accurately and consistently predicts the human perception of image quality. There are three overarching categories of image quality metrics: full reference (FR), reduced reference (RR) and no reference (NR). However, since this project is about image compression, where the original images are always available, only full reference techniques are considered. Within this category, image quality metrics can be further separated into three types: mathematical, near-threshold psychophysics, and structural similarity / information extraction (Chandler and Hemami, 2007b).
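For reference, a minimal sketch of the MSE and PSNR computations for 12-bit CT data (peak value 4095); the two input arrays are assumed to hold the original and decompressed slices.

    import numpy as np

    def mse(original, compressed):
        # Mean squared error over all pixels
        diff = original.astype(np.float64) - compressed.astype(np.float64)
        return np.mean(diff ** 2)

    def psnr(original, compressed, peak=4095.0):
        # Peak signal-to-noise ratio in dB; peak is the maximum pixel value
        # of the modality (4095 for 12-bit CT), not 255
        err = mse(original, compressed)
        return np.inf if err == 0 else 10.0 * np.log10(peak ** 2 / err)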

Image quality assessment and compression in the medical domain

In (Jiang et al., 2007; Miao et al., 2008), the authors developed an HVS-based perceptual IQA metric, Case-PDM, which they used to evaluate MRI reconstruction algorithms. They concluded that Case-PDM performed better than SSIM and MSE at predicting perceived quality. However, their assessments were limited to 8-bit low dynamic range images with fixed VOI presets. Another medical image quality index was proposed in (Lin et al., 2011), but the authors only showed a correlation with CR that is in line with other metrics such as MSE.
Studies with trained radiologists have shown SSIM to be either on par with PSNR (Georgiev et al., 2013; Kim et al., 2010a) or slightly better (Kowalik-Urbaniak et al., 2014) at predicting perceived quality. In other studies (Aydin et al., 2008; Kim et al., 2009a, 2010a,b), HDR-VDP was found to perform better than MSE and MS-SSIM at predicting visually lossless thresholds for JPEG 2000 compressed CT of the abdomen. However, classifying visually identical pairs in a controlled setting may not translate into accurate diagnostically lossless threshold predictions. Furthermore, HDR-VDP has many parameters and requires careful calibration for each image modality (Kim et al., 2010b).
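As an illustration, SSIM for high bit-depth CT data can be computed with scikit-image as in the minimal sketch below (synthetic stand-in arrays); the key detail is passing a data_range that matches the modality's dynamic range rather than relying on 8-bit defaults.

    import numpy as np
    from skimage.metrics import structural_similarity

    rng = np.random.default_rng(0)
    original = rng.integers(0, 4096, (512, 512)).astype(np.uint16)   # stand-in CT slice
    distorted = np.clip(original + rng.normal(0, 20, original.shape), 0, 4095).astype(np.uint16)

    score = structural_similarity(original, distorted, data_range=4095)
    print(f'SSIM = {score:.4f}')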
There have also been attempts at creating diagnostically lossless compression schemes in the past. Region of interest based methods, such as (Ashraf et al., 2006), where a region is losslessly coded while other areas are heavily compressed, are common. However, these techniques require prior knowledge of the image content and are not the focus of this project. Pre- or post-filtering methods, where a filter is applied either before compression to remove small, hard-to-compress details, such as (Muñoz-Gómez et al., 2011), or after compression to remove the ringing artifacts it introduces, such as (Chen and Tai, 2005), are also common. These techniques require substantial modifications to the encoders and decoders and introduce new steps that require further validation. They are also not the focus of this work.
In (Prabhakar and Reddy, 2007), the authors adapted the set partitioning in hierarchical trees (SPIHT) algorithm, a wavelet compression scheme similar to JPEG 2000, to weight coefficients with HVS filters before the quantization process. These filters are designed to enable further quantization of wavelet coefficients based on contrast sensitivity, contrast adaptation and visual masking. However, being HVS-based, their method requires prior knowledge of the viewing conditions, and the implementation was only tested with highly compressed, low dynamic range 8-bit images.
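The general idea of weighting wavelet sub-bands with HVS-inspired factors before quantization can be sketched with PyWavelets as follows; the weights and quantization step are illustrative values, not the contrast sensitivity filters published by the authors.

    import numpy as np
    import pywt

    image = np.random.randint(0, 4096, (512, 512)).astype(np.float64)  # stand-in slice

    # Three-level biorthogonal 9/7-like decomposition:
    # [cA3, (cH3, cV3, cD3), (cH2, cV2, cD2), (cH1, cV1, cD1)]
    coeffs = pywt.wavedec2(image, 'bior4.4', level=3)

    # Coarser detail levels carry lower spatial frequencies, where the eye is more
    # sensitive, so they receive larger weights (fine detail is quantized more coarsely)
    hvs_weights = [1.0, 0.8, 0.5]   # coarse to fine detail levels (illustrative)
    base_step = 16.0

    quantized = [np.round(coeffs[0] / base_step)]         # approximation sub-band
    for (cH, cV, cD), w in zip(coeffs[1:], hvs_weights):
        step = base_step / w                              # smaller weight -> larger step
        quantized.append(tuple(np.round(c / step) for c in (cH, cV, cD)))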

Table of Contents

INTRODUCTION
CHAPTER 1 BACKGROUND ON MEDICAL IMAGING INFORMATICS 
1.1 Compression with JPEG 2000 
1.1.1 Preprocessing
1.1.2 Transform
1.1.3 Quantization
1.1.4 Entropy coding (Tier-1 coding)
1.1.5 Code-stream organization (Tier-2 coding)
1.2 Streaming with JPIP
1.3 Storage and communication with DICOM 
1.3.1 DICOM with JPEG 2000
1.3.2 DICOM with JPIP
1.4 Diagnostic imaging characteristics
CHAPTER 2 LITERATURE REVIEW 
2.1 Current state of lossy image compression in the medical domain 
2.2 Image quality assessment techniques 
2.2.1 Mathematical-based quality metrics
2.2.2 Near-threshold psychophysics quality metrics
2.2.2.1 Luminance perception and adaptation
2.2.2.2 Contrast sensitivity
2.2.2.3 Visual masking
2.2.3 Information extraction and structural similarity quality metrics
2.3 Image quality assessment metric evaluation
2.3.1 Evaluation axes
2.3.1.1 Prediction accuracy
2.3.1.2 Prediction monotonicity
2.3.1.3 Prediction consistency
2.3.2 Image quality assessment databases
2.4 Image quality assessment metric survey
2.4.1 MSE/PSNR 
2.4.2 SSIM
2.4.3 MS-SSIM
2.4.4 VIF
2.4.5 IW-SSIM
2.4.6 SR-SIM
2.4.7 Summary of performance
2.5 Image quality assessment and compression in the medical domain 
CHAPTER 3 COMPUTED TOMOGRAPHY IMAGE COMPRESSIBILITY AND LIMITATIONS OF COMPRESSION RATIO BASED GUIDELINES 
3.1 Introduction
3.2 Previous work 
3.3 Methodology
3.3.1 Data
3.3.2 Compression
3.3.3 Fidelity evaluation
3.3.4 Compressibility evaluation
3.3.5 Statistical analysis
3.4 Results 
3.4.1 Impacts of image content
3.4.2 Impacts of acquisition parameters
3.4.2.1 Impacts on prediction
3.4.2.2 Impacts on fidelity
3.4.2.3 Relative importance of each parameter
3.4.2.4 Impacts of noise
3.4.2.5 Impacts of window/level transform on image fidelity
3.5 Discussion 
3.6 Conclusion
CHAPTER 4 MORE EFFICIENT JPEG 2000 COMPRESSION FOR FASTER PROGRESSIVE MEDICAL IMAGE TRANSFER
4.1 Introduction 
4.2 Previous work 
4.3 VOI-based JPEG 2000 compression
4.4 Proposed coder
4.4.1 VOI-progressive quality-based compression
4.4.1.1 Out-of-VOI pruning
4.4.1.2 Approximation sub-band quantization based on VOI width
4.4.1.3 High frequency sub-band quantization based on display PV distortions
4.4.2 VOI-based near-lossless compression
4.5 Evaluation methodology
4.5.1 VOI-based near-lossless compression
4.5.1.1 Compression schemes
4.5.1.2 Dataset
4.5.1.3 VOI ordering
4.5.2 VOI-progressive quality-based streaming
4.6 Results 
4.6.1 VOI-based near-lossless compression
4.6.2 VOI-progressive quality-based streaming
4.7 Conclusion 
CHAPTER 5 A NOVEL KURTOSIS-BASED JPEG 2000 COMPRESSION CONSTRAINT FOR IMPROVED STRUCTURE FIDELITY
5.1 Introduction
5.2 Previous work 
5.3 WDEK-based JPEG 2000 coder 
5.4 Evaluation 
5.4.1 Structure distortions
5.4.1.1 X-Ray computed tomography
5.4.1.2 Breast digital radiography
5.4.2 Non-medical images
5.4.3 WDEK as a full reference IQA metric
5.5 Conclusion 
GENERAL CONCLUSION
LIST OF REFERENCES
