Datasets are growing bigger every day and GPUs are getting faster. As a result, there are ever more large-scale datasets available for deep learning researchers and engineers to train and validate their models on:
- Many datasets for research in still-image recognition, containing 10 million or more images each, are becoming available, including OpenImages and Places.
- 8 million YouTube videos (YouTube 8M) consume about 300 TB in 720p, and are used for research in object recognition, video analytics, and action recognition.
- The Tobacco Corpus consists of about 20 million scanned HD pages, useful for research in OCR and text analytics.
The full post is available at: https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus