- The number of parameters of a convolutional layer follows the rule N = w * h * d * c + c (see the sketch after this list), where
- w is convolution window width
- h is convolution window height
- d is the number of input channels
- c is the number of output channels
- The number of parameters of a dense layer is simply the number of interconnections (weights) between its input and output units, plus one bias per output unit.
- The number of parameters of conv layers does not depend on image size.
- The convolutional pipeline can work for any image sizes, but the dense layers are specific to the training size.
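As a quick check of the parameter-count rule above, here is a minimal Keras sketch. The layer sizes (3x3 window, 3 input channels, 16 output channels) are my own illustrative choices, not from the lecture; the point is only that the counts reported by model.summary() match the formula.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# 3x3 window (w = h = 3), 3 input channels (d), 16 output channels (c):
# expected parameters = 3*3*3*16 + 16 = 448
conv = layers.Conv2D(16, (3, 3), activation="relu")

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    conv,
    layers.Flatten(),
    # Dense layer: parameters = inputs * outputs + outputs (one bias per output unit)
    layers.Dense(10, activation="softmax"),
])

model.summary()             # lists the parameter count per layer
print(conv.count_params())  # 3*3*3*16 + 16 = 448
```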
The next topic was the effect of pretraining. It is usually better to start training from a pretrained model than from scratch. Keras has the module "keras.applications", which provides a number of well-known ImageNet-pretrained models. As an example, we looked at training a network to classify pictures of dogs and cats. The pretrained model turned out to be superior, as illustrated by the figure below.
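Below is a minimal transfer-learning sketch of the idea, assuming TensorFlow/Keras and using MobileNetV2 from keras.applications as the ImageNet-pretrained base. The base model, input size, and training details are illustrative assumptions, and the cats-vs-dogs data pipeline from the lecture is omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

# Load an ImageNet-pretrained base without its classification head
# and freeze its weights so only the new head is trained at first.
base = MobileNetV2(input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False

# Add a small classification head for the binary cats-vs-dogs task.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# model.fit(train_ds, validation_data=val_ds, epochs=5)  # dataset loading not shown
```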
Remember the Monday 12.2. deadline for submitting your report on the assignment. Follow the instructions. The format is free. Concentrate on the facts.