Deep Learning and the Challenge of Out-of-Distribution Generalization
Fueled by advances in deep learning, AI has made unexpected and impressive progress in the last decade. However, a significant gap remains in comparison with human intelligence. In particular, humans can learn from fewer examples, generalize better, and adapt faster to new situations. In contrast, state-of-the-art AI systems trained on data from one distribution (e.g., medical images from a set of hospitals) often suffer a drastic loss of performance when applied to data from a related but different distribution (e.g., medical images of the same kind, but from other hospitals or studies). The talk will discuss hypothesized causes of the superiority of human beings in terms of out-of-distribution generalization - rooted in higher-level cognition - and how future deep learning approaches may bridge that gap by incorporating notions of causality, reasoning, and modular knowledge representation that are still missing in machine learning and that may bring about more human-like AI.