Speaker
Description
Artificial intelligence continues to advance and, with it, automated machine learning (AutoML) systems. These systems incorporate novel techniques to solve an increasing number of real-life problems with adequate performance. At the same time, the application spectrum keeps widening, and new AutoML tools come to light at an increasing pace. This makes it necessary to measure the performance of each new-generation system and to update the reference results of more established systems on new problems. In this paper, we present HAutoML-Bench, a benchmark that quantifies the performance of AutoML tools in heterogeneous scenarios. Existing benchmarks are studied in order to compile useful strategies and to avoid the errors made in their construction. The strategies followed to build the benchmark are presented, and their effectiveness is analyzed. In addition, experiments are performed that include qualitative and quantitative evaluations of state-of-the-art AutoML systems.
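
To make the evaluation setting concrete, the following is a minimal sketch of how such a benchmark can score several systems on heterogeneous datasets. It is not the actual HAutoML-Bench API: the entries in the systems dictionary are hypothetical stand-ins (simple scikit-learn models) for real AutoML tools, and the dataset selection and metric are illustrative assumptions.

    # Illustrative benchmark loop: score each system on each dataset.
    from sklearn.datasets import load_iris, load_wine, load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score
    from sklearn.dummy import DummyClassifier
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical stand-ins; a real benchmark would wrap AutoML tools
    # behind the same fit/predict interface.
    systems = {
        "baseline": DummyClassifier(strategy="most_frequent"),
        "random_forest": RandomForestClassifier(random_state=0),
    }

    # Heterogeneous tasks represented here by three toy datasets.
    datasets = {
        "iris": load_iris(return_X_y=True),
        "wine": load_wine(return_X_y=True),
        "breast_cancer": load_breast_cancer(return_X_y=True),
    }

    for ds_name, (X, y) in datasets.items():
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        for sys_name, model in systems.items():
            model.fit(X_tr, y_tr)
            score = accuracy_score(y_te, model.predict(X_te))
            print(f"{ds_name:>13s} | {sys_name:>13s} | accuracy = {score:.3f}")

The resulting table of per-dataset scores is the kind of quantitative evidence a benchmark of this sort aggregates when comparing systems across heterogeneous scenarios.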