How to Set Up an Environment Compatible with Both PyTorch and TensorFlow v1

Updated: 2022-11-03 14:48:20 | Author: daimashiren
This article explains how to build an environment in which PyTorch and TensorFlow v1 coexist. The configuration below is the result of repeated hands-on practice and is described step by step, so it should be a useful reference for study or work.

Many excellent repositories on GitHub are written in TensorFlow v1, while newer papers tend to use PyTorch. As a result, reproducing experiments or running comparisons means spending a lot of time setting up different environments. This post is the environment-configuration recipe I have settled on after repeated trial and error, and it works (personally tested)!

First, the basic Python environment:

conda create -n py37 python=3.7

Do not pick a Python version that is too new or too old; 3.6–3.7 is ideal and covers the vast majority of codebases. (3.7 is also the highest Python version TensorFlow v1 supports.)
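
To confirm the environment was created correctly, you can activate it and check the interpreter version (a quick optional sanity check; `py37` is the environment name created above):

conda activate py37
python --version   # should report Python 3.7.x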

Next, the PyTorch environment (because it is the easiest, haha):

# ROCM 5.1.1 (Linux only)
pip install torch==1.12.1+rocm5.1.1 torchvision==0.13.1+rocm5.1.1 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/rocm5.1.1
# CUDA 11.6
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu116
# CUDA 11.3
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
# CUDA 10.2
pip install torch==1.12.1+cu102 torchvision==0.13.1+cu102 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu102
# CPU only
pip install torch==1.12.1+cpu torchvision==0.13.1+cpu torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cpu

I recommend installing with pip; installs done through conda sometimes end up with torch unable to detect the GPU...

Link to the official guide:

Previous PyTorch Versions | PyTorch

Next come the GPU-related packages: cudatoolkit and cudnn. The former is required by the PyTorch environment; the latter is what TensorFlow needs in order to use the GPU. Installing PyTorch above already pulled in a cudatoolkit. My machine runs CUDA 10.2, so the environment already contains a cudatoolkit=10.2 package, but TensorFlow v1 supports CUDA 10 at most, so the cudatoolkit has to be downgraded to 10.0. (The CUDA environment is backward compatible: my system CUDA is 10.2, yet cudatoolkit=10.0 works just as well. Since 10.0 is the highest version TensorFlow v1 supports, we have to compromise...)
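
Before downgrading, you can check the driver-level CUDA version and the cudatoolkit that PyTorch pulled into the environment (optional, just to see what you are starting from):

nvidia-smi                 # the CUDA version shown here is the highest the driver supports
conda list cudatoolkit     # the cudatoolkit package currently in the active conda environment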

conda install cudatoolkit=10.0

Then install cudnn:

conda install cudnn=7.6.5=cuda10.0_0

You can also search for the cudnn build you need with:

conda search cudnn

If conda downloads are too slow, switch to a Chinese mirror:

https://www.jb51.net/article/199913.htm
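
For example, the commonly used Tsinghua mirror can be added like this (one possible choice; use whichever mirror you prefer, and note the channel URLs below are the mirror's standard anaconda channels):

conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --set show_channel_urls yes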

Finally, install TensorFlow v1:

pip install tensorflow-gpu==1.15.0 -i https://pypi.tuna.tsinghua.edu.cn/simple

The recommended TensorFlow v1 versions are 1.15.0 and 1.14.0; other versions have not been tested.
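
To quickly confirm which version actually got installed (an optional check):

python -c "import tensorflow as tf; print(tf.__version__)"   # should print 1.15.0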

Finally, test whether PyTorch and TensorFlow can each see the GPU:

import torch
print(torch.cuda.is_available())
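
If this prints True, you can additionally print the name of the detected device (optional):

print(torch.cuda.get_device_name(0))   # e.g. "NVIDIA GeForce GTX 1080 Ti" on my machine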

Everyone knows how to check for the GPU in PyTorch, so I will not repeat it here. The TensorFlow v1 check is:

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

If the output is:

TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

then downgrade protobuf:

pip install protobuf==3.19.6 -i  https://pypi.tuna.tsinghua.edu.cn/simple

A correct output looks like this:

2022-10-30 21:46:59.982971: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2022-10-30 21:47:00.006072: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3699850000 Hz
2022-10-30 21:47:00.006792: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55d1633f2750 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2022-10-30 21:47:00.006808: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2022-10-30 21:47:00.008473: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2022-10-30 21:47:00.105474: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-30 21:47:00.105762: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55d1635c3f60 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2022-10-30 21:47:00.105784: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): NVIDIA GeForce GTX 1080 Ti, Compute Capability 6.1
2022-10-30 21:47:00.105990: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-30 21:47:00.106166: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: 
name: NVIDIA GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:01:00.0
2022-10-30 21:47:00.106369: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2022-10-30 21:47:00.107666: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2022-10-30 21:47:00.108687: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2022-10-30 21:47:00.108929: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2022-10-30 21:47:00.111721: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2022-10-30 21:47:00.112861: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2022-10-30 21:47:00.116688: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2022-10-30 21:47:00.116826: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-30 21:47:00.117018: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-30 21:47:00.117127: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2022-10-30 21:47:00.117170: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2022-10-30 21:47:00.117421: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-10-30 21:47:00.117435: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0 
2022-10-30 21:47:00.117446: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N 
2022-10-30 21:47:00.117529: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-30 21:47:00.117678: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-10-30 21:47:00.117813: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/device:GPU:0 with 10361 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 10409023728072267246
, name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 7385902535139826165
physical_device_desc: "device: XLA_CPU device"
, name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 7109357658802926795
physical_device_desc: "device: XLA_GPU device"
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 10864479437
locality {
  bus_id: 1
  links {
  }
}
incarnation: 6537278509263123219
physical_device_desc: "device: 0, name: NVIDIA GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1"
]

The crucial thing is that you can see your GPU model in the output. Mine is a GTX 1080 Ti, so detection succeeded!
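
As one more quick check, TensorFlow 1.x also provides tf.test.is_gpu_available(), which simply returns a boolean (this API exists in TF 1.x and is deprecated in TF 2):

import tensorflow as tf
print(tf.test.is_gpu_available())   # True means TensorFlow can use the GPU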

The environment above works for the vast majority of deep learning models. I hope it helps you! If you found it useful, please give it a like!

That's a wrap!

