Session: Using Reproducible Experiments To Create Better Machine Learning Models
When you start exploring multiple model architectures with different hyperparameter values, you need a way to iterate quickly. There are many ways to handle this, but they all take time, and you may not be able to return to a particular point to resume or restart training.
In this talk, you will learn how to use the open source tool DVC to compare training metrics across two methods for tuning hyperparameters: grid search and random search. You'll also learn how to save and track changes in your data, code, and metrics without adding a lot of commits to your Git history. This approach scales with your data and projects and ensures that your team can reproduce results easily.
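As a rough illustration of the kind of workflow the session covers, the sketch below queues DVC experiments for a small grid search; the stage and parameter names (train.lr, train.epochs) and the params.yaml layout are assumptions for the example, not part of the session materials.

```python
"""Sketch: queue DVC experiments for a small hyperparameter grid search.

Assumes a DVC project with a `train` stage that reads its hyperparameters
(train.lr, train.epochs -- both hypothetical names) from params.yaml.
"""
import itertools
import subprocess

learning_rates = [0.001, 0.01, 0.1]
epoch_counts = [10, 20]

# Queue one experiment per grid point; each run is tracked by DVC
# rather than as a separate commit in Git history.
for lr, epochs in itertools.product(learning_rates, epoch_counts):
    subprocess.run(
        [
            "dvc", "exp", "run", "--queue",
            "--set-param", f"train.lr={lr}",
            "--set-param", f"train.epochs={epochs}",
        ],
        check=True,
    )

# Execute everything in the queue; metrics can then be compared
# across runs with `dvc exp show`.
subprocess.run(["dvc", "exp", "run", "--run-all"], check=True)
```

A random-search variant would sample parameter values (for example with random.uniform) for a fixed number of runs instead of iterating over the full grid.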