1. NVIDIA provides a C++ template library, Thrust, that simplifies CUDA programming; it is already included when the CUDA Toolkit is installed.
The library is header-only, so no additional library files need to be linked.
Test program:
    #include <thrust/host_vector.h>
    #include <thrust/device_vector.h>
    #include <thrust/sort.h>
    #include <thrust/copy.h>
    #include <algorithm>
    #include <cstdio>
    #include <cstdlib>
    #include <windows.h>

    template <typename T>
    void cpu_sort(T begin, T end)
    {
        std::sort(begin, end);
    }

    void gpu_sort(thrust::host_vector<int> &h_vec)
    {
        // transfer data to the device
        thrust::device_vector<int> d_vec = h_vec;
        // sort data on the device (846M keys per second on GeForce GTX 480)
        thrust::sort(d_vec.begin(), d_vec.end());
        // transfer data back to host
        thrust::copy(d_vec.begin(), d_vec.end(), h_vec.begin());
    }

    #define CHK_TIME(x) { int t1 = GetTickCount(); x; int t2 = GetTickCount(); printf(#x ": %d\n", t2 - t1); }

    int main(void)
    {
        // generate 32M random numbers serially
        thrust::host_vector<int> h_vec(32 << 20);
        std::generate(h_vec.begin(), h_vec.end(), rand);

        thrust::host_vector<int> h_vec_1(h_vec);
        CHK_TIME(cpu_sort(h_vec_1.begin(), h_vec_1.end()));

        thrust::host_vector<int> h_vec_2(h_vec);
        CHK_TIME(gpu_sort(h_vec_2));
        return 0;
    }
Notes:
a) The file must be saved with a .cu extension so that nvcc can compile it.
b) If you are unsure how to configure the vcproj, the simplest approach is to copy the code into one of the SDK examples and build it with that ready-made project.
c) Compilation takes a very long time.
d) The generated executable is large (15 MB).
Here are my test results (note that the CPU version is single-threaded; with all cores in use, CPU performance would be considerably better):
(debug version, times in ms)
cpu_sort(h_vec_1.begin(), h_vec_1.end()): 94609
gpu_sort(h_vec_2): 3312
(release version, times in ms)
cpu_sort(h_vec_1.begin(), h_vec_1.end()): 2828
gpu_sort(h_vec_2): 594
2. As for CUDA's sort algorithm, it uses radix sort.
http://stackoverflow.com/questions/6502151/parallel-sorting-on-cuda
Many GPU sorting implementations are variants of the bitonic sort, which is pretty well known and described in most reasonable texts on algorithms published in the last 25 or 30 years. The "reference" sorting implementation for CUDA done by Nadathur Satish from Berkeley and Mark Harris and Michael Garland from NVIDIA (paper here) is a radix sort, and forms the basis of what is in NPP and Thrust.
3. NPP is NVIDIA's performance-primitives library for signal and image processing, similar to Intel IPP; it contains many basic processing algorithms.
https://developer.nvidia.com/npp
- Eliminates unnecessary copying of data to/from CPU memory: process data that is already in GPU memory, and leave results in GPU memory so they are ready for subsequent processing
- Data Exchange and Initialization: Set, Convert, Copy, CopyConstBorder, Transpose, SwapChannels
- Arithmetic and Logical Operations: Add, Sub, Mul, Div, AbsDiff, Threshold, Compare
- Color Conversion: RGBToYCbCr, YCbCrToRGB, YCbCrToYCbCr, ColorTwist, LUT_Linear
- Filter Functions: FilterBox, Filter, FilterRow, FilterColumn, FilterMax, FilterMin, Dilate, Erode, SumWindowColumn, SumWindowRow
- JPEG: DCTQuantInv, DCTQuantFwd, QuantizationTableJPEG
- Geometry Transforms: Mirror, WarpAffine, WarpAffineBack, WarpAffineQuad, WarpPerspective, WarpPerspectiveBack, WarpPerspectiveQuad, Resize
- Statistics Functions: Mean_StdDev, NormDiff, Sum, MinMax, HistogramEven, RectStdDev
4. In addition, there are several other libraries, such as NVIDIA cuFFT, NVIDIA cuBLAS (6x to 17x faster performance than the latest MKL BLAS), EM Photonics CULA Tools (a linear algebra library), NVIDIA cuSPARSE, and the NVIDIA CUDA Math Library.
https://developer.nvidia.com/gpu-accelerated-libraries