Speaker: Laura Herrera Alarcón. Abstract: These papers present Knowledge Distillation, a method for compressing and accelerating large models that would otherwise incur high computational and storage costs. This makes such models usable for real-time applications or in…
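
As a point of reference for the technique the abstract names: the classic distillation objective (Hinton et al., 2015) trains a small student network to match a large teacher's temperature-softened output distribution alongside the true labels. The sketch below is a minimal, hedged illustration of that loss in PyTorch; the function name and the parameters `T` (temperature) and `alpha` (mixing weight) are illustrative choices, not anything specified by the talk itself.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Weighted sum of soft-target (teacher) and hard-target (label) losses."""
    # Soft-target term: KL divergence between the temperature-softened
    # teacher and student distributions. The T^2 factor (Hinton et al., 2015)
    # keeps gradient magnitudes comparable across temperatures.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    # Blend the two objectives; alpha controls how much the student
    # imitates the teacher versus fitting the labels directly.
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```

In practice the teacher's logits are computed once in evaluation mode (no gradients), and only the student is updated with this combined loss.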