Stanisław Pawlak
Supervisor: Tomasz Trzciński
In Continual Learning scenarios, the model receives training data in subsequent batches. Accumulating knowledge across iterations is problematic because the model forgets previously learned information when learning new things, a phenomenon called catastrophic forgetting. Most Continual Learning methods prevent forgetting with some combination of regularization, architectural, and replay strategies.
We explore the Generative Rehearsal framework, in which past examples are generated for the rehearsal process in Continual Learning. While replay-based strategies are known to mitigate catastrophic forgetting, generative rehearsal methods are still limited by the low quality of the replay samples they generate. We address this limitation and propose a novel method called Selective Generative Replay (SGR), which assigns weights to samples during training to increase the precision of generations. SGR can easily be applied in conjunction with other generative rehearsal methods to bring further improvement. Experiments validate the effectiveness of our method, showing that it boosts the performance of other generative rehearsal methods regardless of whether they replay full-size generations or internal representations of previous samples.
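To make the sample-weighting idea concrete, the sketch below shows one plausible way per-sample weights could enter the training loss when rehearsing on generated data. It is a minimal illustration only: the function name, the combination of losses, and the weighting scheme itself are assumptions for exposition, not the exact formulation of SGR.

```python
import torch
import torch.nn.functional as F

def weighted_rehearsal_loss(model, cur_x, cur_y, replay_x, replay_y, replay_w):
    """Hypothetical training objective: standard cross-entropy on the
    current task's data plus a weighted cross-entropy on generated
    replay samples, where replay_w (one weight per sample) is assumed
    to down-weight low-quality generations."""
    loss_current = F.cross_entropy(model(cur_x), cur_y)
    # Per-sample loss on the replayed generations (reduction deferred
    # so each sample's contribution can be scaled individually).
    per_sample = F.cross_entropy(model(replay_x), replay_y, reduction="none")
    loss_replay = (replay_w * per_sample).sum() / replay_w.sum().clamp(min=1e-8)
    return loss_current + loss_replay
```

In this sketch, weights could come from any quality estimate of the generated samples (e.g., a confidence score); the criterion SGR actually uses is defined in the paper itself.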