Does over-cropping images cause poor performance in a deep learning model?

I’m currently building a deep learning model to recognize images. From what I’ve read, data augmentation (such as randomly cropping images) reduces a model’s overfitting. However, I’m not sure whether overdoing it can result in a worse model. Of course, I can try cropping more and cropping less, but the question is: how can I tell whether a problem comes from the number of crops?
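For concreteness, something like the following is what I mean by random-crop augmentation. It is only a minimal sketch in plain NumPy (the function name and sizes are placeholders I chose for illustration, not part of any particular framework):

```python
import numpy as np

def random_crop(image: np.ndarray, crop_size: int) -> np.ndarray:
    """Return one randomly positioned crop_size x crop_size crop of an H x W x C image."""
    h, w = image.shape[:2]
    top = np.random.randint(0, h - crop_size + 1)   # random vertical offset
    left = np.random.randint(0, w - crop_size + 1)  # random horizontal offset
    return image[top:top + crop_size, left:left + crop_size]

# Example: one random 64x64 crop of a 72x72 RGB image.
image = np.random.rand(72, 72, 3)
crop = random_crop(image, 64)
print(crop.shape)  # (64, 64, 3)
```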

Is it possible that making all possible crops of size m×m from an image of size n×n gives better model performance?

I believe it will. My reasoning is: when we train a deep learning model, we watch the training loss and the validation loss and train the model until the loss is very low. Suppose we initially have a set of 1000 images and the model requires 100 epochs to train. Now we crop 10 times as many images from the original training set. Each epoch on the enlarged set can be regarded as equivalent to 10 epochs on the smaller set in the previous setup; however, compared with simply repeating the same data 10 times, the training data seen in each of those 10 passes is slightly different. Surely this should lead to less overfitting. Is my reasoning correct?
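As a quick sanity check on that bookkeeping (the numbers below are just the ones from the paragraph above, not measurements):

```python
# Same total number of training samples are seen either way,
# but with crops the data in each pass differs slightly.
original_images = 1000
epochs_original = 100
samples_seen_original = original_images * epochs_original     # 100,000

crop_factor = 10
augmented_images = original_images * crop_factor              # 10,000 crops
epochs_augmented = epochs_original // crop_factor             # 10 epochs
samples_seen_augmented = augmented_images * epochs_augmented  # 100,000

print(samples_seen_original == samples_seen_augmented)  # True
```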

In this case, assuming we have enough computing resources, is there any disadvantage of cropping all possible smaller images?

Currently I am looking to crop all possible 64×64 images from a 72×72 image, which gives me (72 − 64 + 1)² = 81 crops for each original image.
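A minimal sketch of enumerating every such crop (again plain NumPy, with a function name I made up for illustration), which also makes the count explicit:

```python
import numpy as np

def all_crops(image: np.ndarray, crop_size: int) -> list[np.ndarray]:
    """Enumerate every crop_size x crop_size window of an H x W x C image, stride 1."""
    h, w = image.shape[:2]
    crops = []
    for top in range(h - crop_size + 1):
        for left in range(w - crop_size + 1):
            crops.append(image[top:top + crop_size, left:left + crop_size])
    return crops

image = np.random.rand(72, 72, 3)
crops = all_crops(image, 64)
print(len(crops))  # (72 - 64 + 1) ** 2 == 81
```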

I haven’t yet come across any papers related to this. If anyone can point me to one, I would be very grateful. Thank you.

To answer your question: no, it will not hurt performance, but it will add a few milliseconds to the overall process. Perhaps the best answer you can get is to try the different approaches and compare for yourself.

