Challenges of MIL in Real-World Applications

Using MIL in real-world applications is challenging. First, the weak degree of supervision entails uncertainty about instance classes. Depending on the working assumption, this uncertainty can be asymmetric. For example, under the standard MIL assumption, only the labels of instances in positive bags are ambiguous. In other cases, the label space for instances is different from the label space for bags. In instance classification problems, the ambiguity of the true instance labels makes it difficult to constitute a noise-free training set. For the same reason, it is difficult to use instance classes directly in the cost function when training classifiers. Second, MIL deals with problems where data is structured in sets (i.e., bags). Aside from set membership, this structure can have implications for how instances relate to each other. For example, some instances may co-occur more often in bags of a given class, and discriminative information may lie in these co-occurrences. In that case, the distribution of instances in bags must be modeled. Sometimes, instances of the same bag share similarities that are not shared with instances from other bags. A successful MIL method must be able to discover which information is related to class membership rather than bag membership. Sometimes, there are very few positive instances in positive bags, which makes it difficult for the learner to identify them. These relations and their implications are discussed in detail in Chapter 1.
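The asymmetry introduced by the standard MIL assumption can be made concrete with a short sketch (illustrative only; the function name is mine, not from the thesis):

```python
# Under the standard MIL assumption, a bag is positive if and only if
# it contains at least one positive instance (a "witness").
def bag_label(instance_labels):
    """instance_labels: iterable of 0/1 instance classes."""
    return int(any(instance_labels))

# A negative bag label certifies that every instance is negative,
# but a positive bag label does not say which instances are positive:
assert bag_label([0, 0, 0]) == 0  # all instance labels are known
assert bag_label([0, 1, 0]) == 1  # a witness is present, but its identity is ambiguous
```

This is why, under this assumption, only instances from positive bags contribute label noise to an instance-level training set.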

Finally, MIL is often associated with class imbalance, especially in instance-level classification. Negative bags contain only negative instances, while positive bags contain both negative and positive instances. Even with an equal number of bags in each class, there are more negative instances in the training set. This problem is more severe when only a small proportion of the instances in positive bags are positive. Many MIL methods make implicit assumptions about the data that are often violated in practice, which leads to disappointing results in real-world applications. For example, methods like Expectation-Maximization Diverse Density (EM-DD) (Zhang & Goldman, 2001) and Sphere-Description-Based MIL (SDB-MIL) (Xiao et al., 2016) assume that positive instances form a single cluster in feature space. Other methods, such as Normalized Set Kernels (NSK) (Gärtner et al., 2002), assume that positive bags contain a majority of positive instances. Methods using distance measures, like Citation-kNN (CkNN) (Wang & Zucker, 2000) or Constructive Clustering-based Ensemble (CCE) (Zhou & Zhang, 2007), assume that every instance feature is relevant and that the location of an instance in the input space depends mainly on its class rather than its bag membership.
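The severity of this imbalance can be quantified with a back-of-the-envelope computation (an illustrative sketch; the function and parameter names are my own):

```python
# With an equal number of positive and negative bags, n instances per
# bag, and a witness rate wr (the fraction of positive instances in
# positive bags), the instance-level classes are imbalanced:
def instance_counts(n_bags_per_class, bag_size, witness_rate):
    positives = n_bags_per_class * bag_size * witness_rate
    negatives = n_bags_per_class * bag_size * (2 - witness_rate)
    return positives, negatives

# 50 bags per class, 10 instances per bag, 10% witness rate:
# positive bags contribute only 50 witnesses, while the other 950
# instances (450 from positive bags, 500 from negative bags) are negative.
pos, neg = instance_counts(50, 10, 0.1)
```

Even with perfectly balanced bag classes, the instance-level ratio here is 19 negatives for every positive, and it worsens as the witness rate drops.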

Thesis Organization

This is a thesis by articles; each chapter in the main body therefore corresponds to a publication. As a complement, the annexes contain other published articles that make additional related contributions. Figure 0.2 shows the relationship between each chapter and annex according to MIL assumptions and tasks. In Chapter 1, the literature review, the tasks, assumptions and challenges associated with MIL are surveyed and rigorously analyzed. It explains that instance-level and bag-level classification are different tasks and that specific methods are needed for each. Bag-level classification can be performed under different assumptions depending on the application. In the following chapters, we propose MIL classification methods for each of these cases, each posing its own specific challenges. The second chapter proposes a general-purpose method for bag-level classification under the standard MIL assumption. The method addresses several challenges such as noisy features, multimodal distributions and low witness rates. The next chapter proposes a method for bag classification under the collective assumption for personality assessment in speech signals.

This problem is challenging because the label space for instances is different from the label space for bags. Finally, in Chapter 4, we address instance-level classification problems in an active learning framework. Instance-level classification poses specific challenges because the misclassification cost of instances differs from that of bag-level classification and cannot be used directly in the optimization. Moreover, these problems are often associated with severe class imbalance. Next, a more detailed overview of each chapter is presented. The first chapter contains an overview of MIL from the point of view of the important characteristics that make MIL problems unique. The MIL assumptions and related tasks are discussed first. Then, we present a recapitulation of the general literature on MIL problems and methods. Next, we explain what makes MIL different from other types of learning. Among several other subjects, the distinction between instance-level and bag-level classification is thoroughly discussed, as well as the possible types of relations between instances, the effect of label ambiguity and data distributions. Relevant methods for each characteristic are surveyed. We then review MIL formulations for different applications and relate these applications to the problem characteristics. Finally, we conduct experiments comparing 16 reference methods under various conditions and draw several conclusions. The paper ends with a discussion containing recommendations on experimental protocols, computational complexity and future directions. This part of the thesis is in its second round of revision for publication in Elsevier's Pattern Recognition (Carbonneau et al., 2016a).

The second chapter extends a method presented in a previous conference publication (see Annex I), called Random Subspace for Witness Identification (RSWI). In the MIL literature, a positive instance is often called a witness. In (Garcia-Garcia & Williamson, 2011), a distinction is made between the inductive and transductive learning scenarios. In the inductive scenario, the goal is to train a learner to make inferences on new data. This is the classical classification scenario: a classifier learns a decision function using training data in the hope that it will generalize well to test data. In the transductive scenario, one aims to discover the structure of a finite data set. This corresponds to the classical clustering scenario: there is no test data, and the goal is to obtain an understanding of the data structure. RSWI is used in the transductive scenario: given a collection of labeled bags, the method classifies instances individually. In this chapter, a similar method, called Random Subspace for Instance Selection (RSIS), is used to build a bag-level classifier in an inductive learning scenario. In that case, the method determines the likelihood of each instance being a witness. These likelihoods are used to sample training sets, which in turn are used to train a pool of instance classifiers. To perform bag-level classification, the predictions for each instance of the bag are combined. The method exhibits high robustness to noisy features and performs well with various types of positive and negative distributions. Furthermore, the method is robust to low proportions of positive instances in positive bags, hereafter called low witness rates (WR). This chapter was published in Elsevier's Pattern Recognition (Carbonneau et al., 2016e).
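The sample-then-aggregate idea behind this family of methods can be sketched as follows (a simplified illustration under my own assumptions: the positivity scores are taken as given, and a fixed threshold stands in for a trained instance classifier; this is not the actual RSIS implementation):

```python
import random

def sample_training_set(bags, scores, rng):
    """bags: list of (instances, bag_label); scores: per-bag lists of
    positivity scores. Keeps all instances of negative bags and samples
    one likely witness per positive bag."""
    X, y = [], []
    for (instances, label), s in zip(bags, scores):
        if label == 1:
            # Sample an instance with probability proportional to its score.
            i = rng.choices(range(len(instances)), weights=s)[0]
            X.append(instances[i])
            y.append(1)
        else:
            X.extend(instances)
            y.extend([0] * len(instances))
    return X, y

def predict_bag(instance_classifier, instances):
    # Under the standard assumption, a bag is predicted positive
    # if any of its instances is predicted positive.
    return int(any(instance_classifier(x) for x in instances))

# Repeating the sampling yields different training sets, each used to
# train one member of the classifier pool.
rng = random.Random(0)
bags = [([0.9, 0.1], 1), ([0.1, 0.2], 0)]
scores = [[0.8, 0.2], [0.5, 0.5]]
X, y = sample_training_set(bags, scores, rng)
clf = lambda x: int(x > 0.5)  # placeholder instance classifier
```

Sampling by positivity score keeps likely witnesses in the positive class of each sampled training set, which is what reduces label noise compared to treating every instance of a positive bag as positive.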

The third chapter presents a MIL method proposed to infer speaker personality from speech segments. This application is challenging because it is not possible to pinpoint which part of the signal is responsible for class assignment. In fact, personality is a complex concept, and it is unlikely that a single instance defines the personality of a speaker over an entire speech segment. On the contrary, personality manifests itself in a series of interrelated cues. This means that the label space for instances is different from the label space for bags. Therefore, the collective MIL assumption must be employed instead of the standard MIL assumption. Moreover, the relations between instances must be considered because they convey important information. The method proposed in the paper is akin to a bag-of-words (BoW) approach, which embeds the content of a bag in a code vector and trains a classifier on these code vectors. While presenting a MIL method, the paper focuses on how to represent speech signals of various lengths in a meaningful way. First, the temporal signal is transformed into a spectrogram from which patches are extracted. The speech signal is then represented as a collection of spectrogram patches. In the MIL vocabulary, signals are bags and patches are instances.
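The bag-of-patches representation described above can be sketched as follows (a minimal illustration; the patch width, step and list-of-frames layout are my own assumptions, not the values used in the thesis):

```python
# A variable-length signal becomes a bag of fixed-size spectrogram
# patches; a longer signal simply yields a bag with more instances.
def extract_patches(spectrogram, patch_width, step):
    """spectrogram: list of time frames (each a list of frequency bins).
    Returns overlapping patches of patch_width consecutive frames."""
    n = len(spectrogram)
    return [spectrogram[i:i + patch_width]
            for i in range(0, n - patch_width + 1, step)]

spec = [[0.0] * 4 for _ in range(10)]  # 10 frames, 4 frequency bins each
bag = extract_patches(spec, patch_width=4, step=2)  # a bag of 4-frame patches
```

The key property for MIL is that the representation decouples signal length from feature dimensionality: every instance has a fixed size, and only the bag cardinality varies.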

A dictionary of concepts is learned from all training patches using a sparse coding formulation. Each patch is encoded as a composition of the learned concepts in the dictionary. These instance codes are sum-aggregated to obtain the code vector representing the whole bag. The method obtains state-of-the-art results on real-world data at a much lower complexity than commonly used approaches in the field. This chapter is in its second round of revision for publication in IEEE Transactions on Affective Computing.

In the fourth chapter, active learning methods are proposed in the context of MIL instance classification. The particular structure of MIL problems makes single-instance (SI) active learners suboptimal in this context. We propose to tackle the problem from two different perspectives, sometimes referred to as the two faces of active learning (Dasgupta, 2011). The first method, aggregated informativeness (AGIN), identifies the bags containing the most informative instances based on their proximity to the classifier decision boundary. The second method, cluster-based aggregative sampling (C-BAS), discovers the cluster structure of the data. It characterizes each cluster based on how much is known about its composition and on the level of conflict between bag and instance labels. Bags are then selected based on the membership of their instances to promising clusters. The performance of both methods is examined in inductive and transductive learning scenarios. This chapter was submitted to IEEE Transactions on Neural Networks and Learning Systems in October 2017.
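The aggregated-informativeness idea can be illustrated with a small sketch (the scoring convention and the mean aggregation are my own simplifying assumptions, not necessarily the criterion used in the chapter):

```python
def bag_informativeness(instance_scores):
    """instance_scores: classifier scores in [-1, 1], with 0 at the
    decision boundary; instances near 0 are the most informative."""
    return sum(1 - abs(s) for s in instance_scores) / len(instance_scores)

def select_bag(bags_scores):
    # Query the label of the bag whose instances are, on average,
    # closest to the decision boundary.
    return max(range(len(bags_scores)),
               key=lambda i: bag_informativeness(bags_scores[i]))

# Bags whose instances are all confidently classified are skipped in
# favor of the bag with scores near the boundary:
assert select_bag([[0.9, -0.8], [0.1, -0.05], [0.99, 0.95]]) == 1
```

Aggregating informativeness at the bag level matters because, in MIL active learning, labels are typically requested for whole bags rather than for individual instances.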


Table of Contents

INTRODUCTION
CHAPTER 1 LITERATURE REVIEW: MULTIPLE INSTANCE LEARNING: A SURVEY OF PROBLEM CHARACTERISTICS AND APPLICATIONS
1.1 Introduction
1.2 Multiple Instance Learning
1.2.1 Assumptions
1.2.2 Tasks
1.3 Studies on MIL
1.4 Characteristics of MIL Problems
1.4.1 Prediction: Instance-level vs. Bag-level
1.4.2 Bag Composition
1.4.3 Data Distributions
1.4.4 Label Ambiguity
1.5 Applications
1.5.1 Biology and Chemistry
1.5.2 Computer Vision
1.5.3 Document Classification and Web Mining
1.5.4 Other Applications
1.6 Experiments
1.6.1 Data Sets
1.6.2 Instance-Level Classification
1.6.3 Bag Composition: Witness Rate
1.6.4 Data Distribution: Non-Representative Negative Distribution
1.6.5 Label Ambiguity: Label Noise
1.7 Discussion
1.7.1 Benchmark Data Sets
1.7.2 Accuracy vs. AUC
1.7.3 Open Source Toolboxes
1.7.4 Computational Complexity
1.7.5 Future Directions
1.8 Conclusion
CHAPTER 2 ROBUST MULTIPLE-INSTANCE LEARNING ENSEMBLES USING RANDOM SUBSPACE INSTANCE SELECTION
2.1 Introduction
2.2 Multiple Instance Learning
2.3 Random Subspace Instance Selection for MIL Ensembles
2.3.1 Positivity Score Computation
2.3.2 Ensemble Design
2.3.3 Prediction of Bag Labels
2.3.4 Why it Works
2.4 Experimental Setup
2.4.1 Data sets
2.4.2 Protocol and Performance Metrics
2.4.3 Reference Methods
2.5 Results on Synthetic Data
2.5.1 Number of Concepts
2.5.2 Witness Rate
2.5.3 Proportion of Irrelevant Features
2.6 Results on Benchmark Data Sets
2.6.1 Musk Data Sets
2.6.2 Elephant, Fox and Tiger Data Sets
2.6.3 Newsgroups
2.7 Results on Parameter Sensitivity
2.8 Time Complexity
2.9 Conclusion
CHAPTER 3 FEATURE LEARNING FROM SPECTROGRAMS FOR ASSESSMENT OF PERSONALITY TRAITS
3.1 Introduction
3.2 Feature Learning for Speech Analysis
3.3 Proposed Feature Learning Method
3.3.1 Feature Extraction
3.3.2 Classification
3.3.3 Dictionary Learning
3.4 Experimental Methodology
3.5 Results
3.5.1 Accuracy
3.5.2 Complexity
3.6 Conclusion
CHAPTER 4 BAG-LEVEL AGGREGATION FOR MULTIPLE INSTANCE ACTIVE LEARNING IN INSTANCE CLASSIFICATION PROBLEMS
4.1 Introduction
4.2 Multiple Instance Active Learning
4.3 Proposed Methods
4.3.1 Aggregated Informativeness (AGIN)
4.3.2 Clustering-Based Aggregative Sampling (C-BAS)
4.4 Experiments
4.4.1 Data Sets
4.4.1.1 SIVAL
4.4.1.2 Birds
4.4.1.3 Newsgroups
4.4.2 Implementation Details for C-BAS
4.5 Results and Discussion
4.6 Conclusion
CONCLUSION AND RECOMMENDATIONS
ANNEX I WITNESS IDENTIFICATION IN MULTIPLE INSTANCE LEARNING USING RANDOM SUBSPACES
ANNEX II SCORE THRESHOLDING FOR ACCURATE INSTANCE CLASSIFICATION IN MULTIPLE INSTANCE LEARNING
ANNEX III REAL-TIME VISUAL PLAY-BREAK DETECTION IN SPORT EVENTS USING A CONTEXT DESCRIPTOR
BIBLIOGRAPHY
