Modern industrial recommendation systems often use deep learning (DL) models whose accuracy improves with more data and more model parameters. Current open-source DL frameworks such as TensorFlow and PyTorch, however, scale poorly when training recommendation models with terabytes of parameters. To efficiently learn large-scale recommendation models from data streams that generate hundreds of terabytes of training data daily, we introduce a continual learning system called Kraken. Kraken contains a special parameter server implementation that dynamically adapts to the rapidly changing set of sparse features for the continuous training and serving of recommendation models. Kraken also provides a sparsity-aware training system that uses different learning optimizers for dense and sparse parameters to reduce memory overhead. Extensive experiments on real-world datasets confirm the effectiveness and scalability of Kraken. Kraken can improve the accuracy of recommendation tasks under the same memory budget, or cut memory usage to roughly one third while preserving model performance.
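To make the sparsity-aware optimizer split concrete, the following is a minimal, self-contained sketch (not Kraken's actual API; all names here are illustrative). The dense parameters are updated with a stateful optimizer such as Adam, which keeps two auxiliary moment tensors per weight, while the huge sparse embedding table is updated with stateless SGD on only the touched rows, so no per-row optimizer state needs to be stored:

```python
import numpy as np

class Adam:
    """Stateful optimizer for dense parameters: stores two extra
    tensors (first and second moments) the same size as the weights."""
    def __init__(self, shape, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = np.zeros(shape)   # first moment  (extra memory)
        self.v = np.zeros(shape)   # second moment (extra memory)
        self.t = 0

    def step(self, w, grad):
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        m_hat = self.m / (1 - self.b1 ** self.t)
        v_hat = self.v / (1 - self.b2 ** self.t)
        return w - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

def sgd_sparse_step(table, row_ids, row_grads, lr=0.1):
    """Stateless update for sparse parameters: only the embedding rows
    touched by the current mini-batch change, and no optimizer state
    is kept for the table."""
    for i, g in zip(row_ids, row_grads):
        table[i] -= lr * g
    return table

# Dense part of the model: small, so Adam's extra memory is affordable.
dense_w = np.ones(4)
opt = Adam(dense_w.shape)
dense_w = opt.step(dense_w, grad=np.full(4, 0.5))

# Sparse part: a large embedding table where per-row Adam state
# would triple the memory footprint.
emb = np.zeros((1000, 4))
emb = sgd_sparse_step(emb, [3, 42], [np.ones(4), np.ones(4)])
```

Because Adam stores two moment tensors per parameter, applying it to the embedding table would roughly triple its memory footprint; restricting it to the (comparatively tiny) dense parameters is what enables the memory savings described above.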