  1. torch.utils.data — PyTorch 2.10 documentation

    Jun 13, 2025 · Data loader combines a dataset and a sampler, and provides an iterable over the given dataset. The DataLoader supports both map-style and iterable-style datasets with single- or multi …
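
    A hedged sketch of the two dataset styles this entry mentions, using made-up tensor shapes: a map-style dataset can be combined with an explicit sampler, while an iterable-style dataset defines its own iteration order.

    ```python
    import torch
    from torch.utils.data import DataLoader, Dataset, IterableDataset, RandomSampler

    class MapStyleDataset(Dataset):           # map-style: indexed access via __getitem__
        def __init__(self, n=100):
            self.data = torch.randn(n, 3)
        def __len__(self):
            return len(self.data)
        def __getitem__(self, idx):
            return self.data[idx]

    class StreamDataset(IterableDataset):     # iterable-style: yields samples in order
        def __iter__(self):
            for _ in range(100):
                yield torch.randn(3)

    ds = MapStyleDataset()
    loader = DataLoader(ds, batch_size=8, sampler=RandomSampler(ds))   # dataset + sampler
    stream_loader = DataLoader(StreamDataset(), batch_size=8)          # sampler not used here

    print(next(iter(loader)).shape)           # torch.Size([8, 3])
    ```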

  2. Datasets & DataLoaders — PyTorch Tutorials 2.10.0+cu128 …

    PyTorch provides two data primitives: torch.utils.data.DataLoader and torch.utils.data.Dataset that allow you to use pre-loaded datasets as well as your own data.
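
    A minimal sketch of the "your own data" side, assuming in-memory tensors; TensorDataset wraps them so a DataLoader can batch them like any pre-loaded dataset.

    ```python
    import torch
    from torch.utils.data import TensorDataset, DataLoader

    features = torch.randn(64, 10)            # illustrative data, not from the tutorial
    labels = torch.randint(0, 2, (64,))

    dataset = TensorDataset(features, labels)
    loader = DataLoader(dataset, batch_size=16, shuffle=True)

    x, y = next(iter(loader))
    print(x.shape, y.shape)                   # torch.Size([16, 10]) torch.Size([16])
    ```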

  3. Writing Custom Datasets, DataLoaders and Transforms - PyTorch

    PyTorch provides many tools to make data loading easy and, hopefully, to make your code more readable. In this tutorial, we will see how to load and preprocess/augment data from a non-trivial …
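
    A sketch of the pattern that tutorial covers: a custom map-style Dataset with __len__ and __getitem__ plus an optional transform callable. The sample list and the doubling transform below are placeholder choices.

    ```python
    import torch
    from torch.utils.data import Dataset

    class CustomDataset(Dataset):
        def __init__(self, samples, transform=None):
            self.samples = samples            # e.g. a list of (input, label) pairs
            self.transform = transform
        def __len__(self):
            return len(self.samples)
        def __getitem__(self, idx):
            x, y = self.samples[idx]
            if self.transform is not None:
                x = self.transform(x)         # preprocess/augment one sample at a time
            return x, y

    pairs = [(torch.randn(3), i % 2) for i in range(10)]
    ds = CustomDataset(pairs, transform=lambda t: t * 2.0)
    print(len(ds), ds[0][0].shape)            # 10 torch.Size([3])
    ```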

  4. A guide on good usage of - PyTorch

    PyTorch notoriously provides a DataLoader class whose constructor accepts a pin_memory argument. Considering our previous discussion on pin_memory, you might wonder how the DataLoader …
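
    A sketch of how pin_memory is typically paired with non_blocking host-to-device copies; it assumes a CUDA device is available (without one the copy is synchronous and pinning adds little).

    ```python
    import torch
    from torch.utils.data import TensorDataset, DataLoader

    dataset = TensorDataset(torch.randn(256, 32))
    loader = DataLoader(dataset, batch_size=32, pin_memory=True)   # page-locked host buffers

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    for (batch,) in loader:
        # With a pinned source buffer, this copy can overlap with host-side work.
        batch = batch.to(device, non_blocking=True)
    ```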

  5. Training with PyTorch — PyTorch Tutorials 2.10.0+cu128 documentation

    The DataLoader pulls instances of data from the Dataset (either automatically or with a sampler that you define), collects them in batches, and returns them for consumption by your training loop. The …
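
    A minimal sketch of a loop consuming DataLoader batches; the linear model, loss, and optimizer are placeholder choices, not taken from the tutorial.

    ```python
    import torch
    from torch import nn
    from torch.utils.data import TensorDataset, DataLoader

    dataset = TensorDataset(torch.randn(128, 10), torch.randint(0, 2, (128,)))
    loader = DataLoader(dataset, batch_size=16, shuffle=True)

    model = nn.Linear(10, 2)
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for inputs, targets in loader:            # one pass over the batches
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
    ```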

  6. Quickstart — PyTorch Tutorials 2.10.0+cu128 documentation

    PyTorch has two primitives to work with data: torch.utils.data.DataLoader and torch.utils.data.Dataset. Dataset stores the samples and their corresponding labels, and DataLoader wraps an iterable …

  7. How does DataLoader pin_memory=True help with data ... - PyTorch …

    Jun 2, 2023 · By this logic, the pin_memory=True option in DataLoader only adds some additional steps that are intrinsically sequential anyways, so how does it really help with data loading?

  8. How does prefetch factor really work? - data - PyTorch Forums

    Feb 3, 2023 · When the model requests the next batch, DataLoader immediately pops off the first batch from the buffer, regardless of whether the buffer is full or not. Then, it goes back and tries to fill up …
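
    A sketch of the buffer described above, with illustrative settings: each of the 2 workers keeps up to 4 batches ready, so at most num_workers * prefetch_factor = 8 batches sit in the buffer at once.

    ```python
    import torch
    from torch.utils.data import TensorDataset, DataLoader

    if __name__ == "__main__":
        dataset = TensorDataset(torch.randn(1024, 8))
        loader = DataLoader(
            dataset,
            batch_size=32,
            num_workers=2,            # prefetch_factor only applies when workers > 0
            prefetch_factor=4,
        )
        for (batch,) in loader:
            pass                      # each request pops the oldest prefetched batch
    ```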

  9. Performance Tuning Guide - PyTorch

    torch.utils.data.DataLoader supports asynchronous data loading and data augmentation in separate worker subprocesses. The default setting for DataLoader is num_workers=0, which means that the …
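
    A hedged sketch contrasting the default num_workers=0 (loading runs in the main process) with worker subprocesses; the dataset and sizes are arbitrary, and any speedup depends on how expensive __getitem__ actually is.

    ```python
    import time
    import torch
    from torch.utils.data import TensorDataset, DataLoader

    def drain(loader):
        start = time.perf_counter()
        for _ in loader:
            pass
        return time.perf_counter() - start

    if __name__ == "__main__":
        dataset = TensorDataset(torch.randn(10_000, 64))
        sync_loader = DataLoader(dataset, batch_size=64, num_workers=0)
        async_loader = DataLoader(dataset, batch_size=64, num_workers=4,
                                  persistent_workers=True)
        print("num_workers=0:", drain(sync_loader))
        print("num_workers=4:", drain(async_loader))
    ```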

  10. Datasets — Torchvision 0.25 documentation

    Built-in datasets: All datasets are subclasses of torch.utils.data.Dataset, i.e., they have __getitem__ and __len__ methods implemented. Hence, they can all be passed to a torch.utils.data.DataLoader which …
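
    A sketch of passing a built-in torchvision dataset straight to a DataLoader; FakeData is chosen only because it needs no download, and any other torchvision dataset is used the same way since they all implement __getitem__ and __len__.

    ```python
    import torch
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    dataset = datasets.FakeData(
        size=100,
        image_size=(3, 32, 32),
        num_classes=10,
        transform=transforms.ToTensor(),      # PIL image -> float tensor
    )
    loader = DataLoader(dataset, batch_size=10, shuffle=True)

    images, labels = next(iter(loader))
    print(images.shape, labels.shape)         # torch.Size([10, 3, 32, 32]) torch.Size([10])
    ```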