Low GPU utilization when training a PyTorch model

Updated: 2023-09-08 10:47:51 · Author: 两只蜡笔的小新

This article looks at the problem of low GPU utilization when training PyTorch models. It is offered as a practical reference; if anything here is wrong or incomplete, corrections are welcome.

Preface

My GPU setup is 2× RTX 2080Ti. While monitoring GPU usage during a recent training run, I noticed that although both GPUs were being used, each card's utilization was very unstable; the two seemed to take turns, and under those conditions training is slow. Below is the process I went through to diagnose and fix the problem.

1. CPU and memory usage

[Screenshot of CPU and memory usage omitted from the original post.]
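On Linux, CPU and memory usage can be checked with standard tools such as top, htop, or free -h.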

2. Checking GPU usage with a Linux command

watch -n 1 nvidia-smi
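This refreshes the nvidia-smi output every second, so you can watch GPU utilization fluctuate in real time.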

During the prediction (inference) stage, the model used GPU 0, but utilization was only 51%.

During training, both GPUs were in use at the same time, but utilization was still low; the highest reading I captured was only about 60%.
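Fluctuating, well-below-100% utilization like this is the classic signature of an input-pipeline bottleneck: the GPU finishes computing on one batch and then sits idle waiting for the next batch to be loaded and preprocessed on the CPU.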

3. The fix, found in the PyTorch documentation

data.DataLoader(dataset: Dataset[T_co], batch_size: Optional[int] = 1,
             shuffle: bool = False, sampler: Optional[Sampler[int]] = None,
             batch_sampler: Optional[Sampler[Sequence[int]]] = None,
             num_workers: int = 0, collate_fn: _collate_fn_t = None,
             pin_memory: bool = False, drop_last: bool = False,
             timeout: float = 0, worker_init_fn: _worker_init_fn_t = None,
             multiprocessing_context=None, generator=None,
             *, prefetch_factor: int = 2,
             persistent_workers: bool = False)

Above is the constructor's full parameter list. (The original post highlighted the commonly used parameters in red and the ones relevant to this article in purple; that highlighting is lost here, so the two relevant parameters, num_workers and pin_memory, are quoted again below.)

Below is the class's docstring:

class DataLoader(Generic[T_co]):
    r"""
    Data loader. Combines a dataset and a sampler, and provides an iterable over
    the given dataset.
    The :class:`~torch.utils.data.DataLoader` supports both map-style and
    iterable-style datasets with single- or multi-process loading, customizing
    loading order and optional automatic batching (collation) and memory pinning.
    See :py:mod:`torch.utils.data` documentation page for more details.
    Args:
        dataset (Dataset): dataset from which to load the data.
        batch_size (int, optional): how many samples per batch to load
            (default: ``1``).
        shuffle (bool, optional): set to ``True`` to have the data reshuffled
            at every epoch (default: ``False``).
        sampler (Sampler or Iterable, optional): defines the strategy to draw
            samples from the dataset. Can be any ``Iterable`` with ``__len__``
            implemented. If specified, :attr:`shuffle` must not be specified.
        batch_sampler (Sampler or Iterable, optional): like :attr:`sampler`, but
            returns a batch of indices at a time. Mutually exclusive with
            :attr:`batch_size`, :attr:`shuffle`, :attr:`sampler`,
            and :attr:`drop_last`.
        num_workers (int, optional): how many subprocesses to use for data
            loading. ``0`` means that the data will be loaded in the main process.
            (default: ``0``)
        collate_fn (callable, optional): merges a list of samples to form a
            mini-batch of Tensor(s).  Used when using batched loading from a
            map-style dataset.
        pin_memory (bool, optional): If ``True``, the data loader will copy Tensors
            into CUDA pinned memory before returning them.  If your data elements
            are a custom type, or your :attr:`collate_fn` returns a batch that is a custom type,
            see the example below.
        drop_last (bool, optional): set to ``True`` to drop the last incomplete batch,
            if the dataset size is not divisible by the batch size. If ``False`` and
            the size of dataset is not divisible by the batch size, then the last batch
            will be smaller. (default: ``False``)
        timeout (numeric, optional): if positive, the timeout value for collecting a batch
            from workers. Should always be non-negative. (default: ``0``)
        worker_init_fn (callable, optional): If not ``None``, this will be called on each
            worker subprocess with the worker id (an int in ``[0, num_workers - 1]``) as
            input, after seeding and before data loading. (default: ``None``)
        prefetch_factor (int, optional, keyword-only arg): Number of samples loaded
            in advance by each worker. ``2`` means there will be a total of
            2 * num_workers samples prefetched across all workers. (default: ``2``)
        persistent_workers (bool, optional): If ``True``, the data loader will not shutdown
            the worker processes after a dataset has been consumed once. This allows to
            maintain the workers `Dataset` instances alive. (default: ``False``)
    .. warning:: If the ``spawn`` start method is used, :attr:`worker_init_fn`
                 cannot be an unpicklable object, e.g., a lambda function. See
                 :ref:`multiprocessing-best-practices` on more details related
                 to multiprocessing in PyTorch.
    .. warning:: ``len(dataloader)`` heuristic is based on the length of the sampler used.
                 When :attr:`dataset` is an :class:`~torch.utils.data.IterableDataset`,
                 it instead returns an estimate based on ``len(dataset) / batch_size``, with proper
                 rounding depending on :attr:`drop_last`, regardless of multi-process loading
                 configurations. This represents the best guess PyTorch can make because PyTorch
                 trusts user :attr:`dataset` code in correctly handling multi-process
                 loading to avoid duplicate data.
                 However, if sharding results in multiple workers having incomplete last batches,
                 this estimate can still be inaccurate, because (1) an otherwise complete batch can
                 be broken into multiple ones and (2) more than one batch worth of samples can be
                 dropped when :attr:`drop_last` is set. Unfortunately, PyTorch can not detect such
                 cases in general.
                 See `Dataset Types`_ for more details on these two types of datasets and how
                 :class:`~torch.utils.data.IterableDataset` interacts with
                 `Multi-process data loading`_.
    .. warning:: See :ref:`reproducibility`, and :ref:`dataloader-workers-random-seed`, and
                 :ref:`data-loading-randomness` notes for random seed related questions.
    """

The following two parameters turn out to be key:

num_workers (int, optional): how many subprocesses to use for data
    loading. ``0`` means that the data will be loaded in the main process.
    (default: ``0``)
pin_memory (bool, optional): If ``True``, the data loader will copy Tensors
    into CUDA pinned memory before returning them.  If your data elements
    are a custom type, or your :attr:`collate_fn` returns a batch that is a custom type,
    see the example below.
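Why these two help (my reading of the docs above): with num_workers > 0, batches are loaded and preprocessed in separate worker subprocesses, so the next batch can be prepared while the GPU is still busy with the current one; with pin_memory=True, batches are staged in page-locked host memory, which makes host-to-GPU copies faster and allows them to run asynchronously.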

After setting num_workers = 4 and pin_memory = True, efficiency shot right up!
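Below is a minimal sketch of the resulting setup, for reference. The dataset, model, optimizer, and batch size are placeholders rather than the author's actual training code; the point is the two DataLoader arguments, plus non_blocking=True on the device copies, which is what pinned memory makes effective.

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset and model -- stand-ins for the real training code.
dataset = TensorDataset(torch.randn(10000, 128),
                        torch.randint(0, 10, (10000,)))

# On platforms that spawn workers (e.g. Windows), wrap the code below
# in an `if __name__ == "__main__":` guard.
loader = DataLoader(
    dataset,
    batch_size=64,
    shuffle=True,
    num_workers=4,    # load batches in 4 worker subprocesses
    pin_memory=True,  # stage batches in page-locked host memory
)

device = torch.device("cuda:0")
model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

for x, y in loader:
    # non_blocking=True lets the copy out of pinned memory overlap with
    # GPU compute instead of blocking the Python thread.
    x = x.to(device, non_blocking=True)
    y = y.to(device, non_blocking=True)
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()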

With only num_workers enabled: [screenshot omitted]

With both num_workers and pin_memory enabled: [screenshot omitted]
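One caveat from general experience, not from the original post: the best num_workers value is machine-dependent. Too many workers can oversubscribe the CPU and add interprocess overhead, so it is worth trying a few values (a common starting point is the number of physical CPU cores, or around 4 per GPU) and keeping whichever gives the highest sustained GPU utilization.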

Summary

The above is my personal experience; I hope it serves as a useful reference.
