ICML 2019

Fri, Jun 14, 2019 8:30 AM — 6:00 PM

Workshop Description

There has been growing interest in making deep neural networks robust for real-world applications. Challenges arise when models receive inputs drawn from outside the training distribution. For example, a neural network trained to classify handwritten digits may assign high-confidence predictions to cat images. Anomalies are frequently encountered when deploying ML models in the real world. Generalization to unseen and worst-case inputs is also essential for robustness to distributional shift. Well-calibrated predictive uncertainty estimates are indispensable for many machine learning applications, such as self-driving cars and medical diagnosis systems. In order for ML models to predict reliably in open environments, we must deepen technical understanding in the following areas: (1) learning algorithms that are robust to changes in the input data distribution (e.g., that detect out-of-distribution examples); (2) mechanisms to estimate and calibrate the confidence produced by neural networks; (3) methods to improve robustness to adversarial and non-adversarial corruptions; and (4) key applications of uncertainty (e.g., computer vision, robotics, self-driving cars, medical imaging) as well as broader machine learning tasks.
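To make area (1) concrete, the following is a minimal sketch of the common maximum-softmax-probability baseline for detecting out-of-distribution inputs (flag any input whose top-class confidence is low). It is not code from the workshop; the function names and the threshold value are illustrative assumptions.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Row-wise softmax with max-subtraction for numerical stability."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=1, keepdims=True)

def flag_out_of_distribution(logits: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Mark inputs whose highest predicted class probability falls below the threshold."""
    confidence = softmax(logits).max(axis=1)
    return confidence < threshold

if __name__ == "__main__":
    # Two confidently classified inputs (peaked logits) and one ambiguous input (flat logits).
    logits = np.array([
        [8.0, 0.5, 0.1],   # confident prediction for class 0
        [0.2, 7.5, 0.3],   # confident prediction for class 1
        [1.1, 1.0, 0.9],   # nearly uniform -> low confidence, flagged as out-of-distribution
    ])
    print(flag_out_of_distribution(logits))  # -> [False False  True]
```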

Authors

Subutai Ahmad • CEO
