Better Hyperparameter Tuning Using Bayesian Search & Optuna
Six Sigma Pro SMART
34.4K subscribers
67 views

 Published On Apr 26, 2024

In this video, we dive deep into hyperparameter tuning using Bayesian search from scikit-optimize and the Optuna framework for a classification problem. We start by revisiting our dataset generated using make_classification and the grid search and randomized search methods we previously used for hyperparameter tuning.
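As a starting point, here is a minimal sketch of the setup described above: a synthetic dataset from make_classification and a plain grid search baseline scored on recall. The exact dataset arguments and model from the video are not shown here, so the values below (1,000 samples, 20 features, a random forest) are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Hypothetical dataset settings -- the video's exact make_classification
# arguments may differ; these are placeholders for illustration.
X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Baseline: exhaustive grid search over a small grid, scored on recall.
param_grid = {"n_estimators": [50, 100], "max_depth": [3, 5, None]}
grid = GridSearchCV(RandomForestClassifier(random_state=42),
                    param_grid, scoring="recall", cv=3)
grid.fit(X_train, y_train)
print(grid.best_params_, round(grid.best_score_, 3))
```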

🔍 Merits of Bayesian Search: In our theory video, we discussed the advantages of Bayesian search over grid search and randomized search, highlighting its ability to explore the hyperparameter space more efficiently by leveraging past evaluations to make informed decisions about where to search next.

🔮 Probabilistic Approaches in Optuna: We explore the probabilistic approaches available in Optuna, such as Tree-structured Parzen Estimator (TPE), Gaussian Process (GP), and CMA-ES, discussing how each works and when to use them.

🔄 Optuna's Approach: Optuna explores even more hyperparameter configurations than scikit-optimize's Bayesian search by letting you swap in different sampling algorithms, aiming to find the best set of hyperparameters more effectively.

🕒 Time and Performance Comparison: Finally, we compare the time taken and the performance, measured in terms of recall, of the models tuned using Bayesian search from scikit-optimize and Optuna, showcasing the practical benefits of these advanced hyperparameter tuning techniques.

Join us in this hands-on session to learn how to fine-tune your models like a pro and improve their performance for classification tasks! Don't forget to like, subscribe, and hit the bell icon for more data science tutorials! 🚀🎯

