Consolidated learning – a new domain-specific strategy for hyperparameter optimization

Katarzyna Woźnica

supervisor: Przemysław Biecek



For many machine learning models, the choice of hyperparameters is a crucial step towards achieving high performance. Prevalent meta-learning approaches focus on finding good hyperparameter configurations for a completely new task, with a limited computational budget, based on results obtained on prior tasks. In this presentation, I will introduce a new formulation of the tuning problem, called consolidated learning, better suited to the practical challenges faced by ML developers who build models on similar datasets. In domain-specific ML applications, one does not solve a single prediction problem but a whole collection of them, and the underlying datasets share similar variables. In such settings, we are interested in the total optimization time across the collection rather than in the tuning time for a single task. Consolidated learning leverages these relations between tasks to support meta-learning approaches. Using the metaMIMIC benchmark, we show that consolidated learning enables effective hyperparameter transfer even with a model-free optimization strategy, as illustrated by the sketch below. In the talk, we will also argue that the potential of consolidated learning is considerably greater, owing to its compatibility with many machine learning application scenarios, and we investigate extending its applicability by integrating diverse datasets through ontology-based dataset similarity.
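
To make the model-free transfer strategy concrete, below is a minimal sketch in Python. It assumes a precomputed meta-train score matrix (configurations x prior tasks) and ranks candidate configurations by their average rank across those tasks; the function and variable names are illustrative and are not taken from the metaMIMIC codebase.

    import numpy as np

    def rank_configurations(meta_scores):
        """Order candidate configurations by mean rank across prior tasks.

        meta_scores[i, j] is the validation score (higher is better) of
        configuration i on prior task j.
        """
        # Double argsort turns per-task scores into per-task ranks (0 = best).
        per_task_ranks = (-meta_scores).argsort(axis=0).argsort(axis=0)
        mean_rank = per_task_ranks.mean(axis=1)
        # Configuration indices, best average rank first.
        return np.argsort(mean_rank)

    def transfer_tune(configs, meta_scores, evaluate, budget=10):
        """Sequentially evaluate the top `budget` transferred configurations
        on the new task and return the best (config, score) pair found.

        `evaluate` is a user-supplied callable scoring one configuration
        on the new task (higher is better).
        """
        order = rank_configurations(np.asarray(meta_scores))
        trials = [(configs[i], evaluate(configs[i])) for i in order[:budget]]
        return max(trials, key=lambda pair: pair[1])

Because the portfolio of configurations is fixed before the new task is seen, no surrogate model is needed; the premise of consolidated learning is that the similarity between datasets in the collection makes such a static transfer competitive.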