Advanced topics part 2
- Most of AI and ML focuses on full automation
- Bigger and bigger datasets/models
- Good for benchmarking, less good for actual applications
- Datasets become saturated with highly specific models
- Models specialise so much that they cannot adapt
Need to keep humans in the loop
- Interactive sense-making: Visualisation + Analysis + Training
- Explainable AI: Explanation + Analysis + Training
- Adversarial training with humans in the loop: Generative Adversarial Networks + Training
- Active learning: Sampling + Training
- Meta Learning: Human-guided training
Interpretable Machine Learning
Interpretability brings advantages:
- A human expert can double-check a result
- A human user can therefore take liability for the decision
- An ML/AI developer can debug the inference process
Interpretability also brings drawbacks:
- Comes at a performance cost, among other drawbacks
Taxonomy of interpretability
- Intrinsic vs post hoc
- Model-specific vs model-agnostic
- Local vs global
Model-agnostic methods
....
Global Surrogate Model
Use an interpretable model that is trained to approximate the predictions of a black box model
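A minimal sketch of the idea, assuming scikit-learn is available: a shallow decision tree (the surrogate) is fit to the *predictions* of a random forest that stands in for the black box. The dataset, model choices, and hyperparameters are illustrative assumptions, not prescribed by the notes.

```python
# Global surrogate sketch: fit an interpretable model to a black box's predictions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)

# 1. Train the black-box model on the original labels.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 2. Train an interpretable surrogate on the black box's predictions, not the true labels.
y_bb = black_box.predict(X)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_bb)

# 3. Fidelity: how well does the surrogate approximate the black box?
fidelity = accuracy_score(y_bb, surrogate.predict(X))
print(f"Surrogate fidelity to black box: {fidelity:.3f}")

# 4. The surrogate itself is human-readable.
print(export_text(surrogate, max_depth=3))
```

Fidelity (agreement with the black box), not accuracy on the true labels, is the quantity that tells you whether the surrogate's explanation can be trusted.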
Local Surrogate Model
- Local surrogate models are used to explain individual predictions of black box machine learning models
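A hedged, LIME-style sketch (a hand-rolled local surrogate, not the LIME library itself): one prediction is explained by fitting a proximity-weighted linear model on perturbed copies of that instance. The perturbation scale, kernel width, sample count, and dataset are illustrative assumptions.

```python
# Local surrogate sketch: explain one prediction with a weighted linear model.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

data = load_breast_cancer()
X, y = data.data, data.target
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

x0 = X[0]                                  # instance to be explained
rng = np.random.default_rng(0)

# 1. Perturb the instance with Gaussian noise scaled per feature.
scale = X.std(axis=0)
Z = x0 + rng.normal(0.0, 1.0, size=(1000, X.shape[1])) * scale

# 2. Query the black box on the perturbed samples.
p = black_box.predict_proba(Z)[:, 1]

# 3. Weight samples by proximity to x0 (exponential kernel).
dist = np.linalg.norm((Z - x0) / scale, axis=1)
weights = np.exp(-(dist ** 2) / 2.0)

# 4. Fit an interpretable (linear) model locally; its coefficients are the explanation.
local_model = Ridge(alpha=1.0).fit(Z, p, sample_weight=weights)
top = np.argsort(np.abs(local_model.coef_))[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]:<25} weight = {local_model.coef_[i]:+.4f}")
```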
Explaining with examples
...
Counterfactual Explanation
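A counterfactual explanation describes a small change to an instance's feature values that flips the model's prediction. The following is a naive greedy search sketch, not any specific published method; the step size, search budget, and dataset are illustrative assumptions.

```python
# Naive greedy counterfactual search: repeatedly apply the single-feature change
# that most increases the probability of the other class until the prediction flips.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

x0 = X[0].copy()                       # instance to be explained
original = model.predict([x0])[0]      # original prediction
cf = x0.copy()
step = 0.25 * X.std(axis=0)            # per-feature step size (assumption)
n_features = X.shape[1]

for _ in range(200):                   # search budget (assumption)
    if model.predict([cf])[0] != original:
        break                          # prediction flipped: counterfactual found
    # Candidate moves: each feature nudged up or down by one step.
    cands = np.repeat(cf[None, :], 2 * n_features, axis=0)
    for j in range(n_features):
        cands[2 * j, j] += step[j]
        cands[2 * j + 1, j] -= step[j]
    # Greedily take the move that most increases the other class's probability.
    p_other = model.predict_proba(cands)[:, 1 - original]
    cf = cands[np.argmax(p_other)].copy()

changed = np.flatnonzero(~np.isclose(cf, x0))
print("Prediction flipped:", model.predict([cf])[0] != original)
print(f"Changed {changed.size} of {n_features} features:", changed)
```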
Prototypes
- Finding instances that are representative of a class
- A criticism is an instance that is not well covered by prototypes
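A hedged sketch of the idea (not the MMD-critic algorithm): prototypes are taken as the real instances closest to k-means cluster centres, and criticisms as the instances farthest from every prototype. The number of prototypes/criticisms and the dataset are illustrative choices.

```python
# Prototypes and criticisms via a simple k-means-based heuristic.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import pairwise_distances

X, y = load_iris(return_X_y=True)

k = 3
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# Prototypes: real instances nearest to each cluster centre.
proto_idx = pairwise_distances(km.cluster_centers_, X).argmin(axis=1)
print("Prototype indices:", proto_idx, "classes:", y[proto_idx])

# Criticisms: instances poorly covered by the prototypes
# (largest distance to their nearest prototype).
dist_to_nearest_proto = pairwise_distances(X, X[proto_idx]).min(axis=1)
crit_idx = dist_to_nearest_proto.argsort()[::-1][:3]
print("Criticism indices:", crit_idx, "classes:", y[crit_idx])
```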
Active Learning
Motivation
- Traditional supervised learning
  - Labelled training examples are randomly sampled from the training set and fed into the ML algorithm
- Active Learning
  - The ML algorithm starts with little or no labelled data and asks the user to label specific data points
Most Popular Approach: Uncertainty sampling .....
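A minimal sketch of pool-based active learning with uncertainty sampling, assuming scikit-learn: the model repeatedly queries the label of the pool instance about which it is least confident (lowest maximum class probability). The oracle is simulated with the known labels, and the seed size and query budget are illustrative assumptions.

```python
# Pool-based active learning with uncertainty sampling.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X_pool), size=20, replace=False))   # small seed set
unlabeled = [i for i in range(len(X_pool)) if i not in set(labeled)]

model = LogisticRegression(max_iter=5000)

for _ in range(50):                                   # query budget (assumption)
    model.fit(X_pool[labeled], y_pool[labeled])

    # Uncertainty sampling: query the unlabeled instance with the lowest
    # maximum predicted class probability (least confident prediction).
    probs = model.predict_proba(X_pool[unlabeled])
    query = unlabeled[int(np.argmin(probs.max(axis=1)))]

    labeled.append(query)                             # simulated oracle provides the label
    unlabeled.remove(query)

model.fit(X_pool[labeled], y_pool[labeled])           # final model on all queried labels
print(f"{len(labeled)} labels used, test accuracy: {model.score(X_test, y_test):.3f}")
```

Swapping the argmin criterion for margin or entropy-based scores gives the other common uncertainty-sampling variants; random sampling with the same budget is the usual baseline for comparison.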