Hyperparameter tuning is one of the most crucial steps in a machine learning or deep learning workflow. Hyperparameters are configuration values for a machine learning model that are not learned from the data but are set before the training process begins. They are essential for controlling the overall behavior of the model.
While training a machine learning model, you may have to experiment with different hyperparameters such as learning rate, batch size, dropout rate, optimizer, etc. in order to achieve the best accuracy. Performing these experiments one by one can be a tedious and time-consuming process: you start a training run with one combination of hyperparameters, then repeat the procedure with a different set, and so on.
TensorFlow allows you to run experiments with different sets of hyperparameters in a single execution and visualize the metrics on the HParams dashboard in TensorBoard. This lets you efficiently determine the best combination of hyperparameters for training your model. Let’s see how it is done.
Here, because it offers a significant amount of data, I have chosen to use the prebuilt MNIST dataset rather than the animal-building dataset referenced in the earlier posts. It’s already available in the TensorFlow libraries.
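As a minimal sketch, loading the dataset might look like this (the normalization step is an assumption for illustration, not taken from the original post):

```python
import tensorflow as tf

# Load the prebuilt MNIST dataset shipped with TensorFlow/Keras
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Scale pixel values to [0, 1] (assumed preprocessing step)
x_train, x_test = x_train / 255.0, x_test / 255.0
```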
Setting up the experiments
You first need to identify which hyperparameters you want to experiment on. Here, I have used the hyperparameters below (optimizer and learning rate), trying different values and their possible combinations.
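A sketch of how these hyperparameters might be declared with the TensorBoard HParams API (the names HP_OPTIMIZER, HP_LEARNING_RATE and METRIC_ACCURACY are my own choices for illustration; the log directory is the one used later in this post):

```python
import tensorflow as tf
from tensorboard.plugins.hparams import api as hp

# Hyperparameters under experiment: optimizer and learning rate
HP_OPTIMIZER = hp.HParam('optimizer', hp.Discrete(['adam', 'sgd']))
HP_LEARNING_RATE = hp.HParam('learning_rate', hp.Discrete([0.001, 0.002]))
METRIC_ACCURACY = 'accuracy'

# Register the hyperparameters and the metric so the HParams dashboard can display them
with tf.summary.create_file_writer('C:/cnn-models/hparam_tuning').as_default():
    hp.hparams_config(
        hparams=[HP_OPTIMIZER, HP_LEARNING_RATE],
        metrics=[hp.Metric(METRIC_ACCURACY, display_name='Accuracy')],
    )
```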
Now, I have created a simple model with two dense layers and a dropout layer in between them. I am not hardcoding the optimizer and learning rate; instead, I take them from the hparams dictionary, so the values under experiment are used throughout training. I have also used only one epoch, for demonstration purposes. A sketch of this training function follows.
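A minimal sketch of such a training function, assuming the hyperparameter definitions and data from the earlier snippets (the exact layer sizes and dropout rate are assumptions for illustration):

```python
def train_test_model(hparams):
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation='relu'),    # first dense layer (size assumed)
        tf.keras.layers.Dropout(0.2),                      # dropout layer (rate assumed)
        tf.keras.layers.Dense(10, activation='softmax'),   # second (output) dense layer
    ])

    # Pick the optimizer and learning rate from the hparams dictionary
    if hparams[HP_OPTIMIZER] == 'adam':
        optimizer = tf.keras.optimizers.Adam(learning_rate=hparams[HP_LEARNING_RATE])
    else:
        optimizer = tf.keras.optimizers.SGD(learning_rate=hparams[HP_LEARNING_RATE])

    model.compile(optimizer=optimizer,
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    # Single epoch, for demonstration purposes only
    model.fit(x_train, y_train, epochs=1)
    _, accuracy = model.evaluate(x_test, y_test)
    return accuracy
```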
Next, log an hparams summary with the hyperparameter values for each run. This run function will be called for every combination of hyperparameters under experiment.
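A sketch of such a run function (the name `run` and the step value are assumptions; `hp.hparams` records the hyperparameter values for this run so the HParams dashboard can match them to the logged metric):

```python
def run(run_dir, hparams):
    with tf.summary.create_file_writer(run_dir).as_default():
        hp.hparams(hparams)                     # record the hyperparameter values used in this run
        accuracy = train_test_model(hparams)    # train and evaluate with this combination
        tf.summary.scalar(METRIC_ACCURACY, accuracy, step=1)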
Execute runs and log them all in a directory
You can perform multiple experiments, each with a different set of hyperparameters. Here, I have used all possible combinations of the hyperparameters (optimizer and learning rate); a sketch of the loop follows the list below.
- Adam & .001
- Adam & .002
- SGD & .001
- SGD & .002
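A sketch of the driver loop over all four combinations, assuming the definitions above (the run-naming scheme is my own choice):

```python
session_num = 0
for optimizer in HP_OPTIMIZER.domain.values:
    for learning_rate in HP_LEARNING_RATE.domain.values:
        hparams = {
            HP_OPTIMIZER: optimizer,
            HP_LEARNING_RATE: learning_rate,
        }
        run_name = f'run-{session_num}'
        print('--- Starting trial:', run_name)
        print({h.name: hparams[h] for h in hparams})
        run('C:/cnn-models/hparam_tuning/' + run_name, hparams)
        session_num += 1
```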
Visualize the metrics in TensorBoard HParams tab
You can visualize the outcomes in the TensorBoard HParams tab. You need to run TensorBoard against the directory where you recorded all the runs (tensorboard --logdir=C:/cnn-models/hparam_tuning).
You can see that the best accuracy was achieved with the ‘Adam’ optimizer and a learning rate of .002, making that the best combination among those tried for this model.
Try experimenting with different hyperparameters and comment your results below.