Speaker: Beltrán Labrador Serrano.

Abstract: Based on https://arxiv.org/abs/2110.06500. In this talk we will discuss the paper "Differentially Private Fine-tuning of Language Models", in which the authors give simpler, sparser, and faster algorithms for differentially private fine-tuning of large-scale pre-trained language models, achieving state-of-the-art privacy-versus-utility trade-offs on many standard NLP tasks. They propose a meta-framework for this problem, inspired by the recent success of highly parameter-efficient fine-tuning methods. Their experiments show that differentially private adaptations of these approaches outperform previous private algorithms along three important dimensions: utility, privacy, and the computational and memory cost of private training.
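As a rough illustration of the kind of approach the paper builds on, the sketch below applies DP-SGD (per-example gradient clipping plus Gaussian noise) to a LoRA-style low-rank update of a frozen weight matrix, so that only a small number of parameters receive noisy gradients. The toy model, names, and hyperparameters here are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a frozen "pre-trained" weight matrix W plus
# trainable low-rank factors A, B (LoRA-style), so only d*r + r*k
# parameters are updated privately.
d, k, r = 8, 4, 2
W = rng.normal(size=(d, k))              # frozen pre-trained weights
A = np.zeros((d, r))                     # trainable low-rank factor
B = rng.normal(scale=0.1, size=(r, k))   # trainable low-rank factor

def loss_grads(x, y):
    """Per-example squared-error gradients w.r.t. A and B only."""
    err = x @ (W + A @ B) - y            # residual for this example
    gA = np.outer(x, err @ B.T)          # dL/dA, shape (d, r)
    gB = A.T @ np.outer(x, err)          # dL/dB, shape (r, k)
    return gA, gB

def dp_sgd_step(batch, lr=0.1, clip=1.0, noise_mult=1.0):
    """One DP-SGD step: clip each per-example gradient, add Gaussian noise."""
    global A, B
    sum_gA = np.zeros_like(A)
    sum_gB = np.zeros_like(B)
    for x, y in batch:
        gA, gB = loss_grads(x, y)
        # Clip the joint per-example gradient to L2 norm <= clip.
        norm = np.sqrt((gA ** 2).sum() + (gB ** 2).sum())
        scale = min(1.0, clip / (norm + 1e-12))
        sum_gA += gA * scale
        sum_gB += gB * scale
    n = len(batch)
    sigma = noise_mult * clip             # Gaussian noise calibrated to the clip norm
    A -= lr * (sum_gA + rng.normal(scale=sigma, size=A.shape)) / n
    B -= lr * (sum_gB + rng.normal(scale=sigma, size=B.shape)) / n

batch = [(rng.normal(size=d), rng.normal(size=k)) for _ in range(16)]
dp_sgd_step(batch)
```

Because only the low-rank factors are updated, the per-example clipping and noising touch far fewer parameters than full-model DP-SGD, which is one intuition behind the memory and compute savings the paper reports.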