Talks and materials
Veronika Cheplygina
In this talk I will discuss several strategies for scenarios with limited labeled data, focusing on examples from medical imaging. These strategies use other data, labels, and/or assumptions to improve generalization. In particular, I will discuss transfer learning with related datasets and/or labels, and crowdsourcing additional labels from non-expert annotators. Finally, I will discuss some general considerations when choosing and working with datasets.
Related papers
- Zhou, S. K., Greenspan, H., Davatzikos, C., Duncan, J. S., Van Ginneken, B., Madabhushi, A., … & Summers, R. M. (2021). A review of deep learning in medical imaging: Imaging traits, technology trends, case studies with progress highlights, and future promises. Proceedings of the IEEE, 109(5): 820-838.
- Ørting, S. N., Doyle, A., van Hilten, A., Hirth, M., Inel, O., Madan, C. R., … & Cheplygina, V. (2020). A Survey of Crowdsourcing in Medical Image Analysis. Human Computation, 7, 1-26.
- Varoquaux, G., & Cheplygina, V. (2022). Machine learning for medical imaging: methodological failures and recommendations for the future. NPJ digital medicine, 5(1), 1-8.
Dima Damen
Title: From Holistic to Fine-Grained Video Understanding
Abstract: Following the success on images, the community has now moved to tackling the challenge of video – a challenge involving additional redundancy, a significant increase in input size, and ill-posed problems. In this talk, I’ll introduce the current state-of-the-art techniques and open problems in action recognition, detection, anticipation, and retrieval, showcasing current limitations and future directions.
Related papers
- Sun et al. VideoBERT: A Joint Model for Video and Language Representation Learning. ICCV 2019
- Price et al. UnweaveNet: Unweaving Activity Stories. CVPR 2022
- Damen et al. Scaling Egocentric Vision. ECCV 2018
- Wray et al. On semantic similarity in video retrieval. CVPR 2021
Gaël Varoquaux
Title: The forgotten practicalities of machine learning: dirty data and model evaluation
This course will cover important practical “details” of machine learning often overlooked, focusing here on dirty data and model evaluation.
Cleaning the data to analyze it is often reported as the number one hassle of data scientists. I will survey which kinds of “dirtiness” force time-consuming data cleaning or curation. I will then cover two specific aspects of dirty data: non-normalized entries and missing values. I will show how, for these two problems, machine-learning practice can be adapted to work directly on a data table without curation. The normalization problem can be tackled by adapting methods from natural language processing. The missing-values problem will lead us to revisit classic statistical results in the setting of supervised learning.
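As a small, hypothetical sketch of the second point (not from the course materials): one way to let a supervised learner work directly on a table with missing values is to impute with a constant and append a binary missingness indicator per feature, so the learner can exploit the missingness pattern itself:

```python
import numpy as np

def impute_with_indicator(X):
    """Constant-impute missing values (NaN) and append a binary
    missingness-indicator column per feature, so a supervised
    learner can use the missingness pattern as a signal."""
    mask = np.isnan(X)                      # which entries are missing
    X_imp = np.where(mask, 0.0, X)          # constant (zero) imputation
    return np.hstack([X_imp, mask.astype(float)])

# Toy table: 3 samples, 2 features, some entries missing.
X = np.array([[1.0, np.nan],
              [np.nan, 3.0],
              [2.0, 4.0]])
X_aug = impute_with_indicator(X)
print(X_aug.shape)  # (3, 4): 2 imputed features + 2 indicator columns
```

The function name and the zero-imputation constant are illustrative choices, not the specific methods covered in the course.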
Model evaluation is, in my opinion, the most overlooked step of the machine-learning pipeline. Reliably estimating a model’s performance for a given purpose is crucial and difficult. I will first discuss choosing a metric informative for the application, stressing the importance of class prevalence in classification settings. I will then discuss procedures to estimate generalization performance, drawing a distinction between evaluating a learning procedure and evaluating a prediction rule, and discussing how to give confidence intervals on the performance estimates.
Related papers
- Josse et al 2019, On the consistency of supervised learning with missing values
- Cerda & Varoquaux 2020, Encoding High-Cardinality String Categorical Variables
- Varoquaux & Colliot 2022, Evaluating machine learning models and their diagnostic value
Serge Belongie
Title: Representation Learning for Narratives in Social Media
Abstract: While advances in automated fact-checking are critical in the fight against the spread of misinformation in social media, we argue that more attention is needed in the domain of unfalsifiable claims. In this talk, we outline some promising directions for identifying the prevailing narratives in shared content (image & text) and explore how the associated learned representations can be used to identify misinformation campaigns and sources of polarization.
Related papers
- Wilber et al. (2015) https://arxiv.org/abs/1509.07479
- Augenstein (2021) https://arxiv.org/abs/2108.10274
- http://callison-burch.github.io/publications/discovering-diverse-perspectives.pdf
Jens Petersen
Title: Introduction to human-in-the-loop and active learning
Abstract: Machine learning, and in particular deep learning, methods often need enormous amounts of human-labeled data to learn to solve tasks acceptably. Providing such data can be costly and in some cases impossible. Interactive machine learning is a research field that deals with machine learning methods that interact with agents, such as humans and other machines. Human-in-the-loop learning can be seen as a subset of this field, in which the agents are human. Active learning, in addition to this, implies that the learner actively engages in the learning process, for example by selecting which input samples should be used for training. All three paradigms are often discussed in relation to improving learning behavior and reducing the need for human-labeled samples. In this presentation, I plan to introduce basic concepts from these fields with a particular focus on applications in image segmentation and ways of reducing the time spent labeling by providing fewer but more informative labels.
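As a minimal, hypothetical illustration of the active-learning idea of selecting informative samples (not taken from the talk): given a model's predicted class probabilities on an unlabeled pool, uncertainty sampling picks the samples with the least confident predictions for labeling next:

```python
import numpy as np

def uncertainty_sampling(probs, k):
    """Select the k pool samples with the highest predictive entropy,
    i.e. those the current model is least certain about."""
    eps = 1e-12  # avoid log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(entropy)[::-1][:k]  # indices, most uncertain first

# Hypothetical predicted probabilities for 4 unlabeled samples, 2 classes.
probs = np.array([[0.99, 0.01],   # very confident
                  [0.55, 0.45],   # uncertain
                  [0.90, 0.10],
                  [0.50, 0.50]])  # maximally uncertain
print(uncertainty_sampling(probs, 2))  # [3 1]
```

The function name is illustrative; entropy is only one of several common uncertainty measures (margin and least-confidence sampling are alternatives).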
Related papers
- A. Holzinger: Interactive machine learning for health informatics: when do we need the human-in-the-loop?
Oliver Hulme
Title: Reward learning via dopamine
Abstract: I will give a brief primer on the role of dopamine in the learning of rewards. I will chart the history of the connection between reinforcement learning and dopaminergic signaling. I will then fast forward to today’s frontier, where ideas from distributional reinforcement learning are offering insight into the long-observed heterogeneity of dopamine signals. Finally, I will outline a generative modelling approach that may help interrogate these ideas in the human brain.
Related papers
- Schultz W, Dayan P, Montague PR. A neural substrate of prediction and reward. Science. 1997 Mar 14;275(5306):1593-9. doi: 10.1126/science.275.5306.1593. PMID: 9054347.
- Dabney W, Kurth-Nelson Z, Uchida N, Starkweather CK, Hassabis D, Munos R, Botvinick M. A distributional code for value in dopamine-based reinforcement learning. Nature. 2020 Jan;577(7792):671-675. doi: 10.1038/s41586-019-1924-6. Epub 2020 Jan 15. PMID: 31942076; PMCID: PMC7476215.
Dimitrios Papadopoulos
Abstract: Training a visual recognition system, such as an object class detector or an instance segmentation model, requires a large set of training images with manually annotated objects. Obtaining such data requires human annotation, which is tedious and time-consuming. In this talk, I will first present alternative, efficient human annotation schemes for training such systems that reduce the annotation cost while still yielding high-quality models. I will also discuss useful human-in-the-loop and active learning strategies for crowdsourcing data for computer vision.
Related papers
- A. Kovashka, O. Russakovsky, L. Fei-Fei, K. Grauman. “Crowdsourcing in computer vision”. Foundations and Trends in Computer Graphics and Vision, 2016.
- D. P. Papadopoulos, J. R. R. Uijlings, F. Keller, V. Ferrari. “We don’t need no bounding-boxes: Training object class detectors using only human verification”, CVPR 2016
- D. P. Papadopoulos, J. R. R. Uijlings, F. Keller, V. Ferrari. ”Training object class detectors with click supervision”, CVPR 2017
- D. P. Papadopoulos, J. R. R. Uijlings, F. Keller, V. Ferrari. ”Extreme clicking for efficient object annotation”, ICCV 2017
- A. Kuznetsova et al, “The open images dataset v4”, IJCV 2020
- D. P. Papadopoulos, E. Weber, A. Torralba. “Scaling up instance segmentation via label propagation”, ICCV 2021
- B. Cheng, O. Parkhi, A. Kirillov. ”Pointly-supervised instance segmentation”, CVPR 2022
Exercises
The exercises will take the form of a team challenge, where each team will get access to a set of training, validation and test data and a simple classification network. The goal is to apply the topics of the summer school to achieve the best classification results using sparsely annotated and non-trivial data.
The challenge will be based on Python and PyTorch, and we will organize the teams so that each team has access to a computer with sufficient GPU resources.
Data, template code and further instructions will be available before the school.