
Getting started with libtorch: autograd

libtorch download

libtorch-shared-with-deps-1.7.1+cu101.zip

Example download

https://github.com/pytorch/examples/tree/master/cpp/autograd

 

CMakeLists.txt

cmake_minimum_required(VERSION 2.8)
project(autograd)
set(CMAKE_CXX_STANDARD 14)

find_package(Torch REQUIRED)

add_executable(${PROJECT_NAME} "autograd.cpp")
target_link_libraries(${PROJECT_NAME} "${TORCH_LIBRARIES}")

# The following code block is suggested to be used on Windows.
# According to https://github.com/pytorch/pytorch/issues/25457,
# the DLLs need to be copied to avoid memory errors.
if (MSVC)
  file(GLOB TORCH_DLLS "${TORCH_INSTALL_PREFIX}/lib/*.dll")
  add_custom_command(TARGET ${PROJECT_NAME}
                     POST_BUILD
                     COMMAND ${CMAKE_COMMAND} -E copy_if_different
                     ${TORCH_DLLS}
                     $<TARGET_FILE_DIR:${PROJECT_NAME}>)
endif (MSVC)
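A note on this CMakeLists.txt: find_package(Torch REQUIRED) only succeeds if CMake can locate the unzipped libtorch distribution, which is why the configure step in the build transcript further down passes -DCMAKE_PREFIX_PATH=/home/hlx/libtorch. The MSVC block only matters on Windows, where the Torch DLLs have to sit next to the executable.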

 

autograd.cpp

#include <torch/torch.h>
#include <iostream>

using namespace torch::autograd;

void basic_autograd_operations_example() {
  std::cout << "====== Running: \"Basic autograd operations\" ======" << std::endl;

  // Create a tensor and set torch::requires_grad() to track computation with it
  auto x = torch::ones({2, 2}, torch::requires_grad());
  std::cout << x << std::endl;

  // Do a tensor operation:
  auto y = x + 2;
  std::cout << y << std::endl;

  // y was created as the result of an operation, so it has a grad_fn
  std::cout << y.grad_fn()->name() << std::endl;

  // Do more operations on y
  auto z = y * y * 3;
  auto out = z.mean();
  std::cout << z << std::endl;
  std::cout << z.grad_fn()->name() << std::endl;
  std::cout << out << std::endl;
  std::cout << out.grad_fn()->name() << std::endl;

  // .requires_grad_(...) changes an existing tensor's requires_grad flag in-place
  auto a = torch::randn({2, 2});
  a = ((a * 3) / (a - 1));
  std::cout << a.requires_grad() << std::endl;
  a.requires_grad_(true);
  std::cout << a.requires_grad() << std::endl;
  auto b = (a * a).sum();
  std::cout << b.grad_fn()->name() << std::endl;

  // Backprop: because out contains a single scalar, out.backward()
  // is equivalent to out.backward(torch::tensor(1.))
  out.backward();
  // Print gradients d(out)/dx
  std::cout << x.grad() << std::endl;

  // An example of vector-Jacobian product:
  x = torch::randn(3, torch::requires_grad());
  y = x * 2;
  while (y.norm().item<double>() < 1000) {
    y = y * 2;
  }
  std::cout << y << std::endl;
  std::cout << y.grad_fn()->name() << std::endl;

  // For the vector-Jacobian product, pass the vector to backward as an argument:
  auto v = torch::tensor({0.1, 1.0, 0.0001}, torch::kFloat);
  y.backward(v);
  std::cout << x.grad() << std::endl;

  // Stop autograd from tracking history either inside a torch::NoGradGuard block...
  std::cout << x.requires_grad() << std::endl;
  std::cout << x.pow(2).requires_grad() << std::endl;
  {
    torch::NoGradGuard no_grad;
    std::cout << x.pow(2).requires_grad() << std::endl;
  }

  // ...or by using .detach() to get a new tensor with the same content
  // that does not require gradients:
  std::cout << x.requires_grad() << std::endl;
  y = x.detach();
  std::cout << y.requires_grad() << std::endl;
  std::cout << x.eq(y).all().item<bool>() << std::endl;
}

void compute_higher_order_gradients_example() {
  std::cout << "====== Running \"Computing higher-order gradients in C++\" ======" << std::endl;

  // One application of higher-order gradients is calculating a gradient penalty.
  // torch::autograd::grad with create_graph=true keeps the graph alive so the
  // gradient itself can be differentiated:
  auto model = torch::nn::Linear(4, 3);
  auto input = torch::randn({3, 4}).requires_grad_(true);
  auto output = model(input);

  // Calculate loss
  auto target = torch::randn({3, 3});
  auto loss = torch::nn::MSELoss()(output, target);

  // Use the norm of the gradients as a penalty
  auto grad_output = torch::ones_like(output);
  auto gradient = torch::autograd::grad(
      {output}, {input}, /*grad_outputs=*/{grad_output},
      /*retain_graph=*/true, /*create_graph=*/true)[0];
  auto gradient_penalty = torch::pow((gradient.norm(2, /*dim=*/1) - 1), 2).mean();

  // Add the gradient penalty to the loss and backprop through both
  auto combined_loss = loss + gradient_penalty;
  combined_loss.backward();
  std::cout << input.grad() << std::endl;
}

// Inherit from Function<T>
class LinearFunction : public Function<LinearFunction> {
 public:
  // Note that both forward and backward are static functions

  // bias is an optional argument
  static torch::Tensor forward(
      AutogradContext *ctx,
      torch::Tensor input,
      torch::Tensor weight,
      torch::Tensor bias = torch::Tensor()) {
    ctx->save_for_backward({input, weight, bias});
    auto output = input.mm(weight.t());
    if (bias.defined()) {
      output += bias.unsqueeze(0).expand_as(output);
    }
    return output;
  }

  static tensor_list backward(AutogradContext *ctx, tensor_list grad_outputs) {
    auto saved = ctx->get_saved_variables();
    auto input = saved[0];
    auto weight = saved[1];
    auto bias = saved[2];

    auto grad_output = grad_outputs[0];
    auto grad_input = grad_output.mm(weight);
    auto grad_weight = grad_output.t().mm(input);
    auto grad_bias = torch::Tensor();
    if (bias.defined()) {
      grad_bias = grad_output.sum(0);
    }
    return {grad_input, grad_weight, grad_bias};
  }
};

class MulConstant : public Function<MulConstant> {
 public:
  static torch::Tensor forward(AutogradContext *ctx, torch::Tensor tensor, double constant) {
    // ctx is a context object that can be used to stash information
    // for backward computation
    ctx->saved_data["constant"] = constant;
    return tensor * constant;
  }

  static tensor_list backward(AutogradContext *ctx, tensor_list grad_outputs) {
    // We return as many input gradients as there were arguments.
    // Gradients of non-tensor arguments to forward must be `torch::Tensor()`.
    return {grad_outputs[0] * ctx->saved_data["constant"].toDouble(), torch::Tensor()};
  }
};

void custom_autograd_function_example() {
  std::cout << "====== Running \"Using custom autograd function in C++\" ======" << std::endl;
  {
    auto x = torch::randn({2, 3}).requires_grad_();
    auto weight = torch::randn({4, 3}).requires_grad_();
    auto y = LinearFunction::apply(x, weight);
    y.sum().backward();
    std::cout << x.grad() << std::endl;
    std::cout << weight.grad() << std::endl;
  }
  {
    auto x = torch::randn({2}).requires_grad_();
    auto y = MulConstant::apply(x, 5.5);
    y.sum().backward();
    std::cout << x.grad() << std::endl;
  }
}

int main() {
  std::cout << std::boolalpha;
  basic_autograd_operations_example();
  std::cout << "\n";
  compute_higher_order_gradients_example();
  std::cout << "\n";
  custom_autograd_function_example();
}
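One way to sanity-check a custom Function like LinearFunction is to compare its hand-written backward against the gradients autograd derives for the same math written with builtin ops. Below is a minimal sketch of that idea; the file name, seed, and shapes are my own, and it assumes LinearFunction from autograd.cpp above is in scope:

// check_linear_function.cpp -- hypothetical sanity check, not part of the upstream example.
#include <torch/torch.h>
#include <iostream>

int main() {
  torch::manual_seed(0);
  auto x1 = torch::randn({2, 3}).requires_grad_();
  auto w1 = torch::randn({4, 3}).requires_grad_();
  // Same values, separate graph, so the two backward passes stay independent.
  auto x2 = x1.detach().clone().requires_grad_();
  auto w2 = w1.detach().clone().requires_grad_();

  LinearFunction::apply(x1, w1).sum().backward();  // hand-written backward
  x2.mm(w2.t()).sum().backward();                  // autograd-derived backward

  std::cout << std::boolalpha
            << x1.grad().allclose(x2.grad()) << "\n"   // expected: true
            << w1.grad().allclose(w2.grad()) << "\n";  // expected: true
}

If the two gradients disagree, the bug is almost always in backward's index bookkeeping (which saved variable pairs with which grad).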

 

 


CUDA version


hlx@W240F1:~/下载$ cat /usr/local/cuda/version.txt
CUDA Version 10.0.130
CUDA Patch Version 10.0.130.1
hlx@W240F1:~/下载$
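Note the mismatch: the libtorch zip above is a cu101 (CUDA 10.1) build, while this machine reports CUDA 10.0, so it is worth probing at runtime whether libtorch can actually see the GPU. A tiny probe of my own (not part of the example):

// cuda_check.cpp -- small standalone probe, not from the upstream example.
#include <torch/torch.h>
#include <iostream>

int main() {
  std::cout << "CUDA available: " << std::boolalpha
            << torch::cuda::is_available() << "\n";
  std::cout << "CUDA devices:   " << torch::cuda::device_count() << "\n";
}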


 

 

 


Configure and build

hlx@W240F1:~/libtorch_tut$ mkdir build
hlx@W240F1:~/libtorch_tut$ cd build
hlx@W240F1:~/libtorch_tut/build$ cmake -DCMAKE_PREFIX_PATH=/home/hlx/libtorch ..
-- The C compiler identification is GNU 5.4.0
-- The CXX compiler identification is GNU 5.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found CUDA: /usr/local/cuda (found version "10.0")
-- Caffe2: CUDA detected: 10.0
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 10.0
-- Found CUDNN: /usr/local/cuda/lib64/libcudnn.so
-- Found cuDNN: v7.6.3  (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so)
-- Autodetected CUDA architecture(s):  6.1 6.1
-- Added CUDA NVCC flags for: -gencode;arch=compute_61,code=sm_61
-- Found Torch: /home/hlx/libtorch/lib/libtorch.so
-- Configuring done
-- Generating done
-- Build files have been written to: /home/hlx/libtorch_tut/build
hlx@W240F1:~/libtorch_tut/build$ make
Scanning dependencies of target autograd
[ 50%] Building CXX object CMakeFiles/autograd.dir/autograd.cpp.o
[100%] Linking CXX executable autograd
[100%] Built target autograd
hlx@W240F1:~/libtorch_tut/build$
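With the build finished, the demo is the autograd binary in the build directory; running ./autograd executes the three example functions (basic operations, higher-order gradients, custom autograd functions) in sequence. Output is omitted here since the tensors are randomly initialized.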