Introduction to Active Learning


Brief Introduction to Active Learning

Active learning is used when we don't have the time or resources to annotate all the data, so we want to annotate the samples that will improve our model the most. Typically, samples are picked iteratively by a heuristic acquisition strategy, most commonly uncertainty-based or diversity-based, until the total annotation budget is reached. A minimal sketch of this loop follows.
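The sketch below shows the generic pool-based loop; the `acquire` function, the dataset splits, and the sklearn-style `fit` interface are illustrative assumptions, not any specific paper's API.

```python
import numpy as np

def active_learning_loop(model, labeled_X, labeled_y, pool_X, pool_y,
                         acquire, budget, batch_size):
    """Generic pool-based active learning loop (illustrative interface).

    `acquire` scores each pool sample; the top-scoring batch is "annotated"
    (here we simply reveal pool_y) and moved into the labeled set.
    """
    spent = 0
    while spent < budget and len(pool_X) > 0:
        model.fit(labeled_X, labeled_y)            # retrain on current labels
        scores = acquire(model, pool_X)            # higher = more informative
        picked = np.argsort(-scores)[:batch_size]  # select a batch to annotate
        labeled_X = np.concatenate([labeled_X, pool_X[picked]])
        labeled_y = np.concatenate([labeled_y, pool_y[picked]])
        keep = np.setdiff1d(np.arange(len(pool_X)), picked)
        pool_X, pool_y = pool_X[keep], pool_y[keep]
        spent += len(picked)
    return model, labeled_X, labeled_y
```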

Active learning faces two particular challenges in the deep learning setting: 1. multi-class classification networks with softmax outputs are usually overconfident in their predictions, which undermines uncertainty sampling; 2. a full pass over the whole dataset is time- and resource-consuming, so instead of picking one sample at a time we prefer to select a batch, and choosing a batch that is informative without being redundant is non-trivial.
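As a concrete example of the acquisition step, the simplest uncertainty strategy scores each sample by predictive entropy and takes the top k in one pass. This is a generic sketch (the function names are mine); note that a naive top-k batch can contain many near-duplicates, which is exactly why batch selection is not trivial.

```python
import numpy as np

def entropy_scores(probs):
    """Predictive-entropy acquisition. probs is an (N, C) softmax output.
    An overconfident network yields low entropy on almost every sample,
    which is the failure mode described above."""
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=1)

def top_k_batch(probs, k):
    """Select a batch of k samples in one pass over the pool."""
    return np.argsort(-entropy_scores(probs))[:k]
```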

Self/Semi-supervised Learning for Data Labeling and Quality Evaluation

This paper proposes to "use contrastive learning methods to obtain an unsupervised representation of unlabelled data and construct an NN graph over data samples based on the representation," where the NN graph is a nearest-neighbor graph built in the learned embedding space.
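A minimal sketch of that graph-construction step, assuming the contrastive embeddings are already available as a NumPy array and using scikit-learn's NearestNeighbors (the paper does not prescribe this library):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_knn_graph(embeddings, k=10):
    """Build a k-nearest-neighbor graph over unsupervised embeddings
    (e.g., from a contrastive encoder). Returns (indices, distances),
    where indices[i] lists the k neighbors of sample i."""
    nn = NearestNeighbors(n_neighbors=k + 1, metric="cosine").fit(embeddings)
    distances, indices = nn.kneighbors(embeddings)
    return indices[:, 1:], distances[:, 1:]  # column 0 is the sample itself
```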

Not All Labels are Equal

The second paper proposes an inconsistency-based acquisition function for ranking samples and a pseudo-labeling module that labels the easy images automatically. We will discuss these two techniques separately.
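To make the two ideas concrete before diving in, here is a generic sketch in which prediction disagreement across augmented views serves as the inconsistency score, and consistent high-confidence samples receive pseudo-labels. The paper's exact acquisition function differs; everything below is an illustrative assumption.

```python
import numpy as np

def inconsistency_scores(prob_views):
    """prob_views: (V, N, C) softmax outputs for the same N samples under
    V augmented views. Samples whose predictions disagree across views
    score high and are sent to the annotator. This is a generic
    disagreement measure, not the paper's exact acquisition function."""
    mean = prob_views.mean(axis=0)                              # (N, C) consensus
    return ((prob_views - mean) ** 2).sum(axis=2).mean(axis=0)  # variance across views

def pseudo_label_easy(prob_views, conf_threshold=0.95):
    """Pseudo-label the 'easy' samples: those whose consensus prediction
    across views is highly confident."""
    mean = prob_views.mean(axis=0)
    conf = mean.max(axis=1)
    easy = conf >= conf_threshold
    return np.where(easy)[0], mean.argmax(axis=1)[easy]
```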
