Stanisław Pawlak
Supervisor: Tomasz Trzciński
In this work, we focus on generative rehearsal of past examples in a continual learning scenario. While these strategies tend to partially mitigate the effects of catastrophic forgetting, they often suffer from low-quality reconstructions of past samples, which degrade further as the number of tasks grows. Inspired by boosting and curriculum learning, we address this limitation by introducing a weighting mechanism that assigns and dynamically adjusts the weights of data samples during the training of a generative rehearsal model. This modification increases the flexibility of the generative model by prioritizing the reconstruction quality of specific samples according to the proposed weighting function. Effectively, it reduces the interference introduced by samples that can be entirely dropped in later stages of continual training. More specifically, we implement our approach, dubbed Selective Generative Replay, by enforcing mini-batch composition based on the distribution of sample weights, selecting only a weighted subset of training samples instead of drawing them uniformly. The proposed mechanism can be coupled with different weight assignment functions and used together with any generative rehearsal method. We show that even a basic weighting scheme improves the performance of the resulting continually learned model.
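To make the weight-driven mini-batch composition concrete, the sketch below shows one way it could be realized in PyTorch. The `WeightedRandomSampler`-based loader and the boosting-style `update_weights` rule are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch of weight-driven mini-batch composition for generative
# rehearsal, assuming PyTorch. The weighting rule is one possible
# boosting-inspired choice, not necessarily the paper's.
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

def weighted_loader(dataset: TensorDataset, weights: torch.Tensor,
                    batch_size: int = 64) -> DataLoader:
    """Draw mini-batches according to the sample-weight distribution
    instead of uniformly; zero-weight samples are never drawn and are
    thereby effectively dropped from continual training."""
    sampler = WeightedRandomSampler(weights.double(),
                                    num_samples=len(dataset),
                                    replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)

def update_weights(weights: torch.Tensor,
                   per_sample_loss: torch.Tensor,
                   momentum: float = 0.9) -> torch.Tensor:
    """An assumed boosting-style weight assignment function: samples with
    high reconstruction error receive larger weights, so the generator
    prioritizes their reconstruction quality in subsequent epochs."""
    weights = momentum * weights + (1.0 - momentum) * per_sample_loss
    return weights / weights.sum()  # renormalize to a valid distribution

# Usage: start from uniform weights, then alternate training epochs with
# weight updates computed from the generator's per-sample losses.
data = torch.randn(1000, 32)            # stand-in for rehearsal data
dataset = TensorDataset(data)
weights = torch.full((len(dataset),), 1.0 / len(dataset))
loader = weighted_loader(dataset, weights)
for (batch,) in loader:
    pass  # train the generative rehearsal model on `batch` here
```

Because the sampler draws with replacement, high-weight samples may appear several times per epoch while low-weight ones become rare, which is one plausible reading of "enforcing mini-batch composition based on the distribution of sample weights".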