Deep learning (DL) is a development of the neural network in which the number of layers is much larger. In the past, neural networks could not handle many layers; now, thanks to advances in methods and hardware, models with a large number of layers (even hundreds) are feasible.
To improve DL performance, it is sometimes necessary to analyze the model design. In addition to accuracy, models are sometimes required to perform well in terms of speed. One way to assess performance is to count floating-point operations (FLOPs), a measure of the computational complexity of a DL model. Check out the following video.
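As a rough sketch of how FLOPs are counted, the snippet below estimates the cost of a small fully connected network by hand. The layer sizes (784, 128, 10) are made-up examples, not from the text; a dense layer with n_in inputs and n_out outputs costs about n_in * n_out multiplications plus the same number of additions per forward pass.

```python
def dense_flops(n_in, n_out):
    """Approximate FLOPs of one dense layer's forward pass:
    n_in * n_out multiplications + n_in * n_out additions
    (bias and activation costs are ignored for simplicity)."""
    return 2 * n_in * n_out

# Hypothetical example network: 784 -> 128 -> 10
layers = [(784, 128), (128, 10)]
total_flops = sum(dense_flops(n_in, n_out) for n_in, n_out in layers)
print(total_flops)  # 203264
```

Convolutional layers are counted similarly, multiplying the per-position kernel cost by the number of output positions.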
Currently, FLOPs are less relevant for modern hardware, which works in parallel. Measuring processing time is an alternative; the result will of course differ from one machine to another, but for comparison this does not matter, as long as everything runs on the same machine. Another method is to count the number of parameters of a model. A large number of parameters certainly affects performance, so systems with few parameters and only slightly lower accuracy are a topic that is now being studied a lot, especially those that can run on small computers or mobile phones. Here is how to find out the number of parameters in Matlab. For Python, the TensorFlow library calculates the number automatically.
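To make the parameter-counting idea concrete, here is a minimal sketch in plain Python for the same hypothetical dense network (layer sizes are illustrative, not from the text). Each dense layer contributes a weight matrix plus one bias per output unit.

```python
def dense_params(n_in, n_out):
    """Parameter count of one dense layer:
    weight matrix (n_in * n_out) plus one bias per output unit."""
    return n_in * n_out + n_out

# Hypothetical example network: 784 -> 128 -> 10
layers = [(784, 128), (128, 10)]
total_params = sum(dense_params(n_in, n_out) for n_in, n_out in layers)
print(total_params)  # 101770
```

In TensorFlow/Keras, calling `model.summary()` on a built model reports the same per-layer and total parameter counts automatically.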