Speaker
Gabriel Hernández Rodríguez
(Universidad de La Habana)
Description
The convergence of Large Language Models (LLMs) and Automated Machine Learning (AutoML) offers transformative potential for streamlining NLP pipeline design, traditionally reliant on manual algorithm selection and feature engineering. This research investigates how LLM inference capabilities—leveraged as dynamic, self-contained modules—can automate and optimize NLP task integration within AutoML systems. By framing LLMs as trainable inference engines, we explore their role in replacing rigid, handcrafted NLP components (e.g., tokenizers, parsers, or classifiers) with adaptable, prompt-driven solutions that reduce human intervention in pipeline configuration.
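The idea of replacing handcrafted NLP components with prompt-driven, interchangeable modules can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the authors' implementation: `PromptDrivenStep` stands in for an AutoML pipeline component whose behavior is defined by a prompt template (tunable by the AutoML search) plus a pluggable inference backend, and `toy_llm` is a trivial rule-based stand-in for a real LLM so the sketch runs without a model.

```python
from typing import Callable

# Hypothetical sketch: a pipeline step whose logic is delegated to an LLM
# via a prompt, instead of a handcrafted classifier. All names here are
# illustrative assumptions, not the authors' actual API.

class PromptDrivenStep:
    """Pipeline component that delegates its behavior to an LLM prompt."""

    def __init__(self, prompt_template: str, infer: Callable[[str], str]):
        # The template is a searchable hyperparameter of the AutoML system;
        # `infer` can be any backend (local model, hosted API, ...).
        self.prompt_template = prompt_template
        self.infer = infer

    def run(self, text: str) -> str:
        # Fill the template with the input and ask the backend for an answer.
        return self.infer(self.prompt_template.format(text=text))


def toy_llm(prompt: str) -> str:
    # Stand-in "LLM" so the sketch is runnable without a real model.
    return "positive" if "great" in prompt.lower() else "negative"


sentiment = PromptDrivenStep(
    prompt_template="Classify the sentiment of: {text}\nAnswer:",
    infer=toy_llm,
)

print(sentiment.run("This movie was great!"))  # prints "positive"
```

Because the component's behavior lives in the prompt and the backend rather than in task-specific code, an AutoML system could, in principle, swap prompts or models during its search the same way it swaps classical estimators.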
Primary authors
Alejandro Piad Morffis
(Universidad de La Habana)
Gabriel Hernández Rodríguez
(Universidad de La Habana)
Suilan Estevez Velarde
(Universidad de La Habana)
Yudivian Almeida Cruz
(Universidad de La Habana)