Speaker: Miguel Ángel Martínez Pay. Abstract: This work presents a comprehensive overview of Large Language Models (LLMs), from their theoretical foundations to practical applications in text classification. It compares the effectiveness of two key approaches: fine-tuning smaller encoder models such as RoBERTa on task-specific embeddings versus prompting larger generative models such as Flan-T5, LLaMA, and GPT-3.5. The analysis highlights the benefits and limitations of each method, with particular attention to performance in zero- and few-shot settings.
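As a rough illustration of the contrast between the two approaches (a minimal sketch, not code from the talk; the models, the toy sentiment example, and the prompt wording are illustrative assumptions), the snippet below runs a forward pass through a RoBERTa classification head, which would normally sit inside a supervised training loop, and then obtains a zero-shot label from Flan-T5 purely by prompting, with no weight updates.

```python
# Minimal sketch: fine-tunable encoder vs. zero-shot prompting of a generative model.
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    AutoModelForSeq2SeqLM,
)

# --- Approach 1: encoder with a classification head (RoBERTa) ---
# The head's weights are learned from labelled data; only the forward pass is shown here.
enc_tok = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
inputs = enc_tok("I loved this movie!", return_tensors="pt")
logits = encoder(**inputs).logits  # shape: (1, num_labels)
print("encoder prediction:", logits.argmax(dim=-1).item())

# --- Approach 2: zero-shot prompting of a generative model (Flan-T5) ---
# No training at all: the label is produced as text in response to an instruction.
gen_tok = AutoTokenizer.from_pretrained("google/flan-t5-base")
generator = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
prompt = "Classify the sentiment of this review as positive or negative: I loved this movie!"
gen_inputs = gen_tok(prompt, return_tensors="pt")
output_ids = generator.generate(**gen_inputs, max_new_tokens=5)
print("zero-shot prediction:", gen_tok.decode(output_ids[0], skip_special_tokens=True))
```

A few-shot variant of the second approach would simply prepend a handful of labelled examples to the prompt, again without updating any parameters.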
Additionally, the talk discusses LoRA (Low-Rank Adaptation), a technique that enables efficient fine-tuning of large models by freezing the pretrained weights and training only small low-rank update matrices, which sharply reduces computational and memory costs. The study also includes insights from the EmoSpeech dataset, used for emotion classification, showcasing how LLMs can be adapted to complex NLP tasks. Overall, this review underscores the versatility and ongoing advancement of LLM applications for text classification.
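The sketch below shows how LoRA is typically wired into a classification model with the Hugging Face `peft` library; it is an assumption-laden illustration rather than the configuration used in the talk, and the rank, alpha, target modules, and six-way label set are placeholder choices.

```python
# Minimal LoRA sketch with the `peft` library; hyperparameters are illustrative only.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Base encoder with an (assumed) six-way emotion label set.
base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=6)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,         # keep the classification head trainable
    r=8,                                # rank of the low-rank update matrices
    lora_alpha=16,                      # scaling applied to the low-rank update
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projections to adapt
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full model's weights

# `model` can now be passed to a standard Trainer or training loop; gradients flow
# only through the injected low-rank matrices and the classifier head.
```

Because the frozen base weights are shared, the same pretrained model can serve many tasks, each with its own small set of LoRA adapters.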