Call for Papers for a Special Issue on "Recent Advances in Hyperparameter Tuning for Machine Learning Models"
Guest Editors:
Dr. Hai-Canh VU
Roberval Laboratory, Compiègne University of Technology, 60200 Compiègne, France
Email: (address protected; enable JavaScript in your browser to view it)
Interests: predictive maintenance; prognostics and health management; machine learning; Industry 4.0
Dr. Nassim Boudaoud
Roberval Laboratory, Compiègne University of Technology, 60200 Compiègne, France
Email: (address protected; enable JavaScript in your browser to view it)
Interests: statistical process control; prognostics and health management; machine learning; Industry 4.0
Background and Motivation:
In recent years, machine learning (ML) has demonstrated significant breakthroughs across a wide range of applications—from computer vision and natural language processing to biomedical engineering and industrial systems. However, the performance of ML models often hinges critically on the choice of hyperparameters, such as learning rates, regularization terms, and network architectures. Improper tuning can lead to suboptimal performance, overfitting, or excessive computational costs.
Hyperparameter tuning, therefore, remains a persistent and non-trivial challenge in both academic research and industrial practice. While traditional methods such as grid search and random search have been widely used, they are often inefficient or infeasible for large-scale models. Recently, advanced approaches—including Bayesian optimization, evolutionary strategies, gradient-based tuning, and meta-learning—have gained traction and promise more effective solutions.
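To make this contrast concrete, the sketch below is a minimal illustration (assuming scikit-learn and SciPy are installed; the toy dataset and search ranges are arbitrary choices for demonstration) comparing an exhaustive grid search with a random search of the same evaluation budget over two SVM hyperparameters:

from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

# Toy dataset standing in for a real tuning problem.
X, y = make_classification(n_samples=300, random_state=0)

# Grid search: exhaustive over a fixed lattice; the number of
# configurations grows exponentially with the number of
# hyperparameters (here 3 x 3 = 9).
grid = GridSearchCV(
    SVC(),
    {"C": [0.1, 1, 10], "gamma": [1e-3, 1e-2, 1e-1]},
    cv=3,
)
grid.fit(X, y)

# Random search: the same budget of nine configurations, but drawn
# from continuous log-uniform distributions, which often covers the
# space more effectively when only a few hyperparameters matter.
rand = RandomizedSearchCV(
    SVC(),
    {"C": loguniform(1e-2, 1e2), "gamma": loguniform(1e-4, 1e0)},
    n_iter=9,
    cv=3,
    random_state=0,
)
rand.fit(X, y)

print("grid:  ", grid.best_params_, grid.best_score_)
print("random:", rand.best_params_, rand.best_score_)

A Bayesian optimizer would keep the same objective (the cross-validated score) but choose each new configuration using a surrogate model fitted to past trials, rather than sampling independently of previous results.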
This special issue aims to bring together cutting-edge research that addresses the theoretical, computational, and practical challenges of hyperparameter optimization in machine learning. The goal is to foster novel contributions that advance the automation, efficiency, and robustness of model tuning in diverse domains.
Topics of Interest:
- Novel algorithms for hyperparameter optimization (HPO)
- Bayesian optimization, bandit methods, and surrogate models for HPO
- Population-based and evolutionary strategies
- Differentiable and gradient-based hyperparameter learning
- Multi-objective and cost-aware tuning strategies
- AutoML frameworks and systems for scalable tuning
- Hyperparameter transfer learning and warm-starting
- HPO in deep learning and reinforcement learning
- Domain-specific tuning (e.g., for healthcare, finance, robotics)
- Interpretability and explainability in HPO
- Benchmarks, datasets, and empirical comparisons of tuning methods
- Integration of HPO in federated and distributed learning
- Applications of HPO in real-world industrial settings
Target Audience:
The special issue will be of interest to:
- Researchers in machine learning, optimization, and artificial intelligence
- Practitioners developing or deploying ML models at scale
- Developers of AutoML platforms and hyperparameter tuning tools
- Applied scientists and engineers in fields such as healthcare, manufacturing, finance, and environmental science
Submission Deadline: November 15, 2025