Transfer learning is a term that deep learning practitioners encounter often. For those who are not yet familiar with it, the following explanation may be useful.
As the name suggests, transfer learning means reusing what a model has already learned and applying it to a new domain or case. The benefit is that we do not have to train from scratch again. The simplest approach is to freeze the layers of a pretrained model and replace its last layer, which was sized for the old number of classes, with a new layer sized for the classes of the new task.
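As a rough illustration of this "replace the last layer" step, the sketch below follows the common Deep Learning Toolbox pattern rather than the exact code in the video. It assumes a pretrained GoogLeNet (any pretrained network would do), an imageDatastore called imdsTrain holding the new labelled images, and GoogLeNet's standard final layer names ('loss3-classifier' and 'output'); all of these are assumptions for the example.

```matlab
% Sketch: freeze a pretrained network and swap in new final layers.
% Requires Deep Learning Toolbox and the GoogLeNet support package.
net = googlenet;                       % pretrained network (example choice)
lgraph = layerGraph(net);
numClasses = numel(categories(imdsTrain.Labels));

% "Freeze" the original layers by zeroing their learn-rate factors,
% so their weights are not updated during retraining.
layers = lgraph.Layers;
for i = 1:numel(layers)
    if isprop(layers(i), 'WeightLearnRateFactor')
        layers(i).WeightLearnRateFactor = 0;
        layers(i).BiasLearnRateFactor   = 0;
        lgraph = replaceLayer(lgraph, layers(i).Name, layers(i));
    end
end

% Replace the final fully connected and classification layers with new,
% trainable layers sized for the number of classes in the new task.
newFc  = fullyConnectedLayer(numClasses, 'Name', 'new_fc');
lgraph = replaceLayer(lgraph, 'loss3-classifier', newFc);
lgraph = replaceLayer(lgraph, 'output', classificationLayer('Name', 'new_output'));
```

Because only the new layers keep a nonzero learn rate, training updates just the replacement layers while the pretrained features stay fixed.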
The following video uses Matlab 2021a to demonstrate how transfer learning works, including how to freeze layers and how to augment the data with rotation and translation. The benefit of augmentation is that it increases the effective amount of training data by applying slight variations to the original training images.
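To illustrate the augmentation idea, here is a minimal sketch (again, not the video's exact code) using imageDataAugmenter and augmentedImageDatastore from Deep Learning Toolbox. The rotation and translation ranges, the datastore name imdsTrain, and the training options are assumed values chosen for the example.

```matlab
% Sketch: random rotation and translation applied on the fly during training.
% inputSize is the pretrained network's input size, e.g. [224 224 3] for GoogLeNet.
inputSize = net.Layers(1).InputSize;

augmenter = imageDataAugmenter( ...
    'RandRotation',     [-20 20], ...   % rotate each image by up to +/-20 degrees
    'RandXTranslation', [-5 5], ...     % shift horizontally by up to 5 pixels
    'RandYTranslation', [-5 5]);        % shift vertically by up to 5 pixels

% The augmented datastore resizes images to the network input size and draws a
% fresh random transform each time an image is read, so the network rarely sees
% exactly the same image twice.
augimdsTrain = augmentedImageDatastore(inputSize(1:2), imdsTrain, ...
    'DataAugmentation', augmenter);

% Train on the augmented data; only the unfrozen (new) layers are updated.
options = trainingOptions('sgdm', ...
    'InitialLearnRate', 1e-4, ...
    'MaxEpochs', 6, ...
    'MiniBatchSize', 32, ...
    'Shuffle', 'every-epoch', ...
    'Plots', 'training-progress');
netTransfer = trainNetwork(augimdsTrain, lgraph, options);
```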