
Learning peopledetect: notes from the OpenCV China forum

OpenCV 2.0 ships with a pedestrian-detection sample based on the method Navneet Dalal first presented at CVPR 2005.

I have been studying it recently; the notes below are my own understanding, posted in the hope of discussing and improving them together.


1. Installing OpenCV 2.0 under VC 2008 Express -- you can use 2.1 directly, which needs no CMake build and avoids build errors

This is the foundation for everything else. Thanks to the moderator for the reference: http://www.opencv.org.cn/index.php/VC_2008_Express下安装OpenCV2.0

2. Trying the program out

At a DOS prompt, change to C:\OpenCV2.0\samples\c and run: peopledetect.exe filename.jpg

where filename.jpg is the image to be scanned.


3. Building the program

Create a console project and add peopledetect.cpp from C:\OpenCV2.0\samples\c; configure it as in step 1. It builds, but oddly the EXE produced in DEBUG mode crashes at run time.

After switching to RELEASE mode and rebuilding, the EXE runs fine.


4. Brief walkthrough of the code

1) getDefaultPeopleDetector() returns the 3780-dimensional detector (105 blocks with 4 histograms each and 9 bins per histogram gives 3,780 values). Why 105 blocks? See the check below.
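Here is a quick check I did of where 105 and 3780 come from (my own arithmetic, not taken from the sample):

#include <cstdio>

int main()
{
    // a 64x128 window, 16x16 blocks, 8-pixel block stride
    int nBlocksX = (64 - 16) / 8 + 1;   // 7 block columns
    int nBlocksY = (128 - 16) / 8 + 1;  // 15 block rows
    int nBlocks  = nBlocksX * nBlocksY; // 7 * 15 = 105 blocks
    int perBlock = 2 * 2 * 9;           // 2x2 cells x 9 bins = 36 values per block
    printf("%d blocks, %d values\n", nBlocks, nBlocks * perBlock); // 105, 3780
    return 0;
}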


2) cv::HOGDescriptor hog; constructs the object and initialises its members:

winSize(64,128), blockSize(16,16), blockStride(8,8),
cellSize(8,8), nbins(9), derivAperture(1), winSigma(-1),
histogramNormType(L2Hys), L2HysThreshold(0.2), gammaCorrection(true)
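The same object can also be built explicitly; a sketch using the OpenCV 2.x HOGDescriptor constructor (argument order as documented, values copied from the list above):

cv::HOGDescriptor hog(
    cv::Size(64, 128),         // winSize
    cv::Size(16, 16),          // blockSize
    cv::Size(8, 8),            // blockStride
    cv::Size(8, 8),            // cellSize
    9,                         // nbins
    1,                         // derivAperture
    -1,                        // winSigma
    cv::HOGDescriptor::L2Hys,  // histogramNormType
    0.2,                       // L2HysThreshold
    true);                     // gammaCorrection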


3) The call: detectMultiScale(img, found, 0, cv::Size(8,8), cv::Size(24,16), 1.05, 2);

The parameters are: the image to scan, the list of returned detections, the threshold hitThreshold, the window stride winStride, the image padding margin, the scale factor, and the threshold groupThreshold. Experimenting on one particular image: changing the 0 to 0.01 loses the detection while 0.001 still works; changing 1.05 to 1.1 fails but 1.06 works; changing 2 to 1 works but anything below 0.8 fails; (24,16) can be replaced by (0,0) or (32,32) and still works.


Internally the function does the following:

(1) Compute the number of levels.

For a (530,402) image, for example, lg(402/128)/lg 1.05 ≈ 23.4, giving 24 levels. A sketch of this computation follows.
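My own paraphrase of that levels computation (the function name is mine; the real loop lives inside detectMultiScale):

#include <cmath>
#include <algorithm>

// how many pyramid levels until the 64x128 window no longer fits (sketch)
int countLevels(int width, int height, double scale0)
{
    double maxShrink = std::min(width / 64.0, height / 128.0);
    return (int)std::ceil(std::log(maxShrink) / std::log(scale0));
}
// countLevels(530, 402, 1.05) -> ceil(log(402/128)/log(1.05)) = 24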


(2) Loop levels times; each iteration does:

HOGThreadData& tdata = threadData[getThreadNum()];
Mat smallerImg(sz, img.type(), tdata.smallerImgBuf.data);

and then calls the core function

detect(smallerImg, tdata.locations, hitThreshold, winStride, padding);

whose parameters are the image at that scale, the list of returned detections, the threshold, the stride, and the margin.


Internally detect() does the following:

(a) Compute the padded image size paddedImgSize.

(b) Construct HOGCache cache(this, img, padding, padding, nwindows == 0, cacheStride); during construction HOGCache::init runs, which computes the gradients (descriptor->computeGradient), the number of blocks (105), and the number of values per block (36).

(c) Compute the number of windows nwindows. At the first level this is ((530+32*2-64)/8+1) * ((402+32*2-128)/8+1) = 67*43 = 2881, where (32,32) is the padding (the sample code uses Size(32,32); (24,16) also works) and 8 is the window stride.


(d) Loop over each window. For each of the 105 blocks, compute the block's HOG feature via getblock and normalise it, then multiply its 36 values with the corresponding 36 detector coefficients and accumulate; if the total over all 105 blocks satisfies s >= hitThreshold, the window is declared a detection.
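A minimal sketch of that per-window test (blockHist and detector are hypothetical names for the normalised block histograms and the 3780 coefficients from getDefaultPeopleDetector; OpenCV itself also folds in the SVM bias term):

bool windowHit(const float blockHist[105][36], const float* detector,
               double hitThreshold)
{
    double s = 0.;
    for (int b = 0; b < 105; b++)        // loop over the 105 blocks
        for (int k = 0; k < 36; k++)     // 36 normalised values per block
            s += blockHist[b][k] * detector[b * 36 + k];
    return s >= hitThreshold;            // detection if the score clears the threshold
}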


4) That, I think, is the main flow, though many details still need working out.


5. The algorithm flow as written in the original thesis

Figure 5.5 on page 78 of NavneetDalalThesis.pdf describes the complete object detection algorithm.

The first two steps are initialisation, essentially covered above. The last two steps are:


For each scale Si = [Ss, SsSr, . . . , Sn]
(a) Rescale the input image using bilinear interpolation
(b) Extract features (Fig. 4.12) and densely scan the scaled image with stride Ns for object/non-object detections
(c) Push all detections with t(wi) > c to a list

Non-maximum suppression
(a) Represent each detection in 3-D position and scale space yi
(b) Using (5.9), compute the uncertainty matrices Hi for each point
(c) Compute the mean shift vector (5.7) iteratively for each point in the list until it converges to a mode
(d) The list of all of the modes gives the final fused detections
(e) For each mode compute the bounding box from the final centre point and scale

The following excerpts are from NavneetDalalThesis.pdf, with the important parts picked out. The original section numbers are kept to make things easy to find.




4. Histogram of Oriented Gradients Based Encoding of Images

Default Detector.

As a yardstick for the purpose of comparison, throughout this section we compare results to our default detector which has the following properties: input image in RGB colour space (without any gamma correction); image gradient computed by applying [−1, 0, 1] filter along x- and y-axis with no smoothing; linear gradient voting into 9 orientation bins in 0°–180°; 16×16 pixel blocks containing 2×2 cells of 8×8 pixel; Gaussian block windowing with σ = 8 pixel; L2-Hys (Lowe-style clipped L2 norm) block normalisation; blocks spaced with a stride of 8 pixels (hence 4-fold coverage of each cell); 64×128 detection window; and linear SVM classifier. We often quote the performance at 10⁻⁴ false positives per window (FPPW) – the maximum false positive rate that we consider to be useful for a real detector given that 10³–10⁴ windows are tested for each image.


4.3.2 Gradient Computation

The simple [−1, 0, 1] masks give the best performance.


4.3.3 Spatial / Orientation Binning

Each pixel contributes a weighted vote for orientation based on the orientation of the gradient element centred on it. The votes are accumulated into orientation bins over local spatial regions that we call cells. To reduce aliasing, votes are interpolated trilinearly between the neighbouring bin centres in both orientation and position. Details of the trilinear interpolation voting procedure are presented in Appendix D. The vote is a function of the gradient magnitude at the pixel, either the magnitude itself, its square, its square root, or a clipped form of the magnitude representing soft presence/absence of an edge at the pixel. In practice, using the magnitude itself gives the best results.


4.3.4 Block Normalisation Schemes and Descriptor Overlap

Good normalisation is critical and including overlap significantly improves the performance. Figure 4.4(d) shows that L2-Hys, L2-norm and L1-sqrt all perform equally well for the person detector. For other classes, such as cars and motorbikes, L1-sqrt gives the best results.


4.3.5 Descriptor Blocks

R-HOG. For human detection, 3×3 cell blocks of 6×6 pixel cells perform best with 10.4% miss-rate at 10⁻⁴ FPPW. Our standard 2×2 blocks of 8×8 pixel cells are a close second. We find 2×2 and 3×3 cell blocks work best.


4.3.6 Detector Window and Context

Our 64×128 detection window includes about 16 pixels of margin around the person on all four sides.


4.3.7 Classifier

By default we use a soft (C=0.01) linear SVM trained with SVMLight [Joachims 1999]. We modified SVMLight to reduce memory usage for problems with large dense descriptor vectors.


---------------------------------


5. Multi-Scale Object Localisation

The detector scans the image with a detection window at all positions and scales, running the classifier in each window and fusing multiple overlapping detections to yield the final object detections.

We represent detections using kernel density estimation (KDE) in 3-D position and scale space. KDE is a data-driven process where continuous densities are evaluated by applying a smoothing kernel to observed data points. The bandwidth of the smoothing kernel defines the local neighbourhood. The detection scores are incorporated by weighting the observed detection points by their score values while computing the density estimate. Thus KDE naturally incorporates the first two criteria. The overlap criterion follows from the fact that detections at very different scales or positions are far off in 3-D position and scale space, and are thus not smoothed together. The modes (maxima) of the density estimate correspond to the positions and scales of final detections.
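In symbols, the weighted density estimate described above takes the form (my transcription of the prose, not the thesis's exact equation; K is the smoothing kernel and Hi the per-point bandwidth):

\hat{f}(\mathbf{y}) \;\propto\; \sum_{i=1}^{n} t(w_i)\, K_{H_i}\!\left(\mathbf{y} - \mathbf{y}_i\right)

where the yi are the detections in 3-D position and scale space and t(wi) their weights; the final detections are the modes of this density.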


Let xi = [xi, yi] and s′i denote the detection position and scale, respectively, for the i-th detection. The detections are represented in 3-D space as y = [x, y, s], where s = log(s′). The variable bandwidth mean shift vector is defined as in (5.7).




For each of the n points the mean shift based iterative procedure is guaranteed to converge to a mode.


Detection Uncertainty Matrix Hi.

One key input to the above mode detection algorithm is the amount of uncertainty Hi to be associated with each point. We assume isosymmetric covariances, i.e. the Hi's are diagonal matrices. Let diag[H] represent the 3 diagonal elements of H. We use scale dependent covariance matrices such that

diag[Hi] = [(exp(si) σx)², (exp(si) σy)², σs²]   (5.9)

where σx, σy and σs are user supplied smoothing values.




The term t(wi) provides the weight for each detection. For linear SVMs we usually use threshold = 0.

The smoothing parameters σx, σy and σs used in the non-maximum suppression stage can have a significant impact on performance, so proper evaluation is necessary. For all of the results here, unless otherwise noted, a scale ratio of 1.05, a stride of 8 pixels, and σx = 8, σy = 16, σs = log(1.3) are used as default values. A scale ratio of 1.01 gives the best performance, but significantly slows the overall process. Scale smoothing of log(1.3)–log(1.6) gives good performance for most object classes.


We group these mode candidates using a proximity measure. The final location is the mode corresponding to the highest density.


----------------------------------------------------


Appendix A. INRIA Static Person Data Set


The (centred and normalised) positive windows are supplied by the user, and the initial set of negatives is created once and for all by randomly sampling negative images. A preliminary classifier is thus trained using these. Second, the preliminary detector is used to exhaustively scan the negative training images for hard examples (false positives). The classifier is then re-trained using this augmented training set (user supplied positives, initial negatives and hard examples) to produce the final detector.


INRIA Static Person Data Set


As images of people are highly variable, to learn an effective classifier the positive training examples need to be properly normalized and centered to minimize the variance among them. For this we manually annotated all upright people in the original images.

The image regions belonging to the annotations were cropped and rescaled to 64×128 pixel image windows. On average the subjects' height is 96 pixels in these normalised windows, to allow for an approximately 16 pixel margin on each side. In practice we leave a further 16 pixel margin around each side of the image window to ensure that flow and gradients can be computed without boundary effects. The margins were added by appropriately expanding the annotations on each side before cropping the image regions.

// <---- The excerpts above are from Dalal's PhD thesis.

For more on the INRIA Person Dataset, see:

http://pascal.inrialpes.fr/data/human/

Original Images

Folders 'Train' and 'Test' correspond, respectively, to original training and test images. Both folders have three sub folders: (a) 'pos' (positive training or test images), (b) 'neg' (negative training or test images), and (c) 'annotations' (annotation files for positive images in Pascal Challenge format).

Normalized Images

Folders 'train_64x128_H96' and 'test_64x128_H96' correspond to the normalized dataset as used in the above referenced paper. Both folders have two sub folders: (a) 'pos' (normalized positive training or test images centered on the person with their left-right reflections), (b) 'neg' (containing original negative training or test images). Note images in folder 'train/pos' are of 96x160 pixels (a margin of 16 pixels around each side), and images in folder 'test/pos' are of 70x134 pixels (a margin of 3 pixels around each side). This has been done to avoid boundary conditions (and thus any particular bias in the classifier). In both folders, use the centered 64x128 pixels window for the original detection task.
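Recovering that centred 64×128 detection window from a 96×160 normalised training image is just a fixed crop; a sketch with the OpenCV 2.x C++ API (the file name is only an example from pos.lst):

#include "cv.h"
#include "highgui.h"

int main()
{
    cv::Mat img = cv::imread("train/pos/crop_000010a.png");
    // (96-64)/2 = 16 and (160-128)/2 = 16: a 16-pixel margin on each side
    cv::Rect centre((img.cols - 64) / 2, (img.rows - 128) / 2, 64, 128);
    cv::Mat window = img(centre).clone();
    cv::imwrite("window_64x128.png", window);
    return 0;
}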


Negative windows

To generate negative training windows from normalized images, a fixed set of 12180 windows (10 windows per negative image) are sampled randomly from 1218 negative training photos, providing the initial negative training set. For each detector and parameter combination, a preliminary detector is trained and all negative training images are searched exhaustively (over a scale-space pyramid) for false positives ('hard examples'). All examples with score greater than zero are considered hard examples. The method is then re-trained using this augmented set (initial 12180 + hard examples) to produce the final detector. The set of hard examples is subsampled if necessary, so that the descriptors of the final training set fit into 1.7 GB of RAM for SVM training.
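A sketch of that initial sampling step, 10 random 64×128 windows per negative image (the function name and RNG choice are mine):

#include "cv.h"
#include <vector>
#include <cstdlib>

// sample 10 random 64x128 windows from one negative training image (sketch)
void sampleNegatives(const cv::Mat& neg, std::vector<cv::Mat>& out)
{
    for (int k = 0; k < 10; k++)
    {
        int x = rand() % (neg.cols - 64 + 1);
        int y = rand() % (neg.rows - 128 + 1);
        out.push_back(neg(cv::Rect(x, y, 64, 128)).clone());
    }
}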

//------------------------------------------------------

The original author has updated the OpenCV 2.0 peopledetect sample twice:

https://code.ros.org/trac/opencv/changeset/2314/trunk

The most recent version is as follows:


---------------------


#include "cvaux.h"


#include "highgui.h"


#include


#include


#include


using namespace cv;


using namespace std;


int main(int argc, char** argv)


{


Mat img;


FILE* f = 0;


char _filename[1024];


if( argc == 1 )


{


printf("Usage: peopledetect ( | .txt)\n");


return 0;


}


img = imread(argv[1]);


if( img.data )


{


strcpy(_filename, argv[1]);


}


else


{


f = fopen(argv[1], "rt");


if(!f)


{


fprintf( stderr, "ERROR: the specified file could not be loaded\n");


return -1;


}


}


HOGDescriptor hog;


hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());


for(;;)


{


char* filename = _filename;


if(f)


{


if(!fgets(filename, (int)sizeof(_filename)-2, f))


break;


//while(*filename && isspace(*filename))


// ++filename;


if(filename[0] == '#')


continue;


int l = strlen(filename);


while(l > 0 && isspace(filename[l-1]))


--l;


filename[l] = '\0';


img = imread(filename);


}


printf("%s:\n", filename);


if(!img.data)


continue;


fflush(stdout);


vector found, found_filtered;


double t = (double)getTickCount();


// run the detector with default parameters. to get a higher hit-rate


// (and more false alarms, respectively), decrease the hitThreshold and


// groupThreshold (set groupThreshold to 0 to turn off the grouping completely).


int can = img.channels();


hog.detectMultiScale(img, found, 0, Size(8,8), Size(32,32), 1.05, 2);


t = (double)getTickCount() - t;


printf("tdetection time = %gms\n", t*1000./cv::getTickFrequency());


size_t i, j;


for( i = 0; i

{


Rect r = found[i];


for( j = 0; j

if( j != i && (r & found[j]) == r)


break;


if( j == found.size() )


found_filtered.push_back(r);


}


for( i = 0; i

{


Rect r = found_filtered[i];


// the HOG detector returns slightly larger rectangles than the real objects.


// so we slightly shrink the rectangles to get a nicer output.


r.x += cvRound(r.width*0.1);


r.width = cvRound(r.width*0.1);


r.y += cvRound(r.height*0.07);


r.height = cvRound(r.height*0.1);


rectangle(img, r.tl(), r.br(), cv::Scalar(0,255,0), 3);


}


imshow("people detector", img);


int c = waitKey(0) & 255;


if( c == 'q' || c == 'Q' || !f)


break;


}


if(f)


fclose(f);


return 0;


}

After this update the sample can detect images in batch!






To batch-detect a set of images, build a text file named filename.txt with contents like:

1.jpg
2.jpg
......

Then at a DOS prompt run peopledetect filename.txt and every image in the list is processed automatically.

//------------------------------ Navneet Dalal's OLT workflow

Navneet Dalal provides the INRIA Object Detection and Localization Toolkit at

http://pascal.inrialpes.fr/soft/olt/

Wilson Suryajaya Leoputra provides a Windows port:

http://www.computing.edu.au/~12482661/hog.html

You need to copy all the DLLs (boost_1.34.1*.dll, blitz_0.9.dll, opencv*.dll) into "/debug/".


Navneet Dalal also provides Linux executables. I borrowed someone's Linux machine and ran them first to get an overview of the whole pipeline.

The workflow below is pieced together from the OLTbinaries\readme and OLTbinaries\HOG\record files.

1. Download the INRIA person detection database and unpack it into OLTbinaries\; rename 'train_64x128_H96' to 'train' and 'test_64x128_H96' to 'test'.

2. Run the 'runall.sh' script under Linux.

Once the results are out, open MATLAB and run plotdet.m to draw the DET curve.

------ That is the all-in-one route --------------------------------------------------

------- The toolkit also lets you run the steps one by one: -------------------------------------


1. Compute R-HOG features for the positive samples listed in pos.lst, whose format is:

train/pos/crop_000010a.png
train/pos/crop_000010b.png
train/pos/crop_000011a.png

------ The lines below are the Linux commands to run (same for the following steps) ------

./bin//dump_rhog -W 64,128 -C 8,8 -N 2,2 -B 9 -G 8,8 -S 0 --wtscale 2 --maxvalue 0.2 --epsilon 1 --fullcirc 0 -v 3 --proc rgb_sqrt --norm l2hys -s 1 train/pos.lst HOG/train_pos.RHOG


2. Compute R-HOG features for the negative samples

./bin//dump_rhog -W 64,128 -C 8,8 -N 2,2 -B 9 -G 8,8 -S 0 --wtscale 2 --maxvalue 0.2 --epsilon 1 --fullcirc 0 -v 3 --proc rgb_sqrt --norm l2hys -s 10 train/neg.lst HOG/train_neg.RHOG


3. Dump the training data for the SVM

./bin//dump4svmlearn -p HOG/train_pos.RHOG -n HOG/train_neg.RHOG HOG/train_BiSVMLight.blt -v


4. Create the model file HOG/model_4BiSVMLight.alt

./bin//svm_learn -j 3 -B 1 -z c -v 1 -t 0 HOG/train_BiSVMLight.blt HOG/model_4BiSVMLight.alt


5. Create a folder

mkdir -p HOG/hard


6. Classify

./bin//classify_rhog train/neg.lst HOG/hard/list.txt HOG/model_4BiSVMLight.alt -d HOG/hard/hard_neg.txt -c HOG/hard/hist.txt -m 0 -t 0 --no_nonmax 1 --avsize 0 --margin 0 --scaleratio 1.2 -l N -W 64,128 -C 8,8 -N 2,2 -B 9 -G 8,8 -S 0 --wtscale 2 --maxvalue 0.2 --epsilon 1 --fullcirc 0 -v 3 --proc rgb_sqrt --norm l2hys

--------

The false positives are written to HOG/hard/hard_neg.txt


7. Add the hard examples to the negatives and recompute the R-HOG features

./bin//dump_rhog -W 64,128 -C 8,8 -N 2,2 -B 9 -G 8,8 -S 0 --wtscale 2 --maxvalue 0.2 --epsilon 1 --fullcirc 0 -v 3 --proc rgb_sqrt --norm l2hys -s 0 HOG/hard/hard_neg.txt HOG/train_hard_neg.RHOG --poscases 2416 --negcases 12180 --dumphard 1 --hardscore 0 --memorylimit 1700


8. Retrain

./bin//dump4svmlearn -p HOG/train_pos.RHOG -n HOG/train_neg.RHOG -n HOG/train_hard_neg.RHOG HOG/train_BiSVMLight.blt -v 4


9. Produce the final model

./bin//svm_learn -j 3 -B 1 -z c -v 1 -t 0 HOG/train_BiSVMLight.blt HOG/model_4BiSVMLight.alt

The 3780 values OpenCV uses should live inside this model file, model_4BiSVMLight.alt. Its format is undocumented, so it cannot be read directly, but one could study how svm_learn writes it; the model is also loaded by classify_rhog, so studying how that program parses it is another way in.


10. Create a folder

mkdir -p HOG/WindowTest_Negative


11. Detection results on the negative test set

./bin//classify_rhog -W 64,128 -C 8,8 -N 2,2 -B 9 -G 8,8 -S 0 --wtscale 2 --maxvalue 0.2 --epsilon 1 --fullcirc 0 -v 3 --proc rgb_sqrt --norm l2hys -p 1 --no_nonmax 1 --nopyramid 0 --scaleratio 1.2 -t 0 -m 0 --avsize 0 --margin 0 test/neg.lst HOG/WindowTest_Negative/list.txt HOG/model_4BiSVMLight.alt -c HOG/WindowTest_Negative/histogram.txt


12. Create a folder

mkdir -p HOG/WindowTest_Positive

13. Detection results on the positive test set

./bin//classify_rhog -W 64,128 -C 8,8 -N 2,2 -B 9 -G 8,8 -S 0 --wtscale 2 --maxvalue 0.2 --epsilon 1 --fullcirc 0 -v 3 --proc rgb_sqrt --norm l2hys -p 1 --no_nonmax 1 --nopyramid 1 -t 0 -m 0 --avsize 0 --margin 0 test/pos.lst HOG/WindowTest_Positive/list.txt HOG/model_4BiSVMLight.alt -c HOG/WindowTest_Positive/histogram.txt

////////////////////////////////////////////////////////////


How to make training samples

After analysing the original author's dataset and some material found online, here is how to produce training samples.

1. Generating samples from the original images

Comparing INRIAPerson\INRIAPerson\Train\pos (original images) with INRIAPerson\train_64x128_H96\pos (generated samples) shows that the author cropped standing, unoccluded people out of the original photos and then left-right reflected the crops. Take the first image, crop001001, as an example: two unoccluded people were cropped out, which together with the original photo gives 3 images; adding the left-right mirrors makes 6 in total.


2. Cropping

You can use imageclipper, a tool built on OpenCV 1.0, to crop and save; it generates file names automatically and saves into a newly created imageclipper folder under the same path.


3. Resizing

You can use ACDSee: Tools / Open in editor, then the Resize option; Tools / Rotate can also do the left-right reflection.

I also wrote a small program to batch-resize images; the code follows below.




4. Making the pos.lst list

Open a DOS prompt, cd into the image folder, and run dir /b > pos.lst to generate the file list.

/////////////////////////

#include "cv.h"


#include "highgui.h"


#include "cvaux.h"






int main(int argc,char * argv[])


{


IplImage* src ;


IplImage* dst = 0;




CvSize dst_size;




FILE* f = 0;


char _filename[1024];


int l;




f = fopen(argv[1], "rt");


if(!f)


{


fprintf( stderr, "ERROR: the specified file could not be loaded\n");


return -1;


}




for(;;)


{


char* filename = _filename;


if(f)


{


if(!fgets(filename, (int)sizeof(_filename)-2, f))


break;


if(filename[0] == '#')


continue;


l = strlen(filename);


while(l > 0 && isspace(filename[l-1]))


--l;


filename[l] = '\0';


src=cvLoadImage(filename,1);


}




dst_size.width = 96;


dst_size.height = 160;


dst=cvCreateImage(dst_size,src->depth,src->nChannels);


cvResize(src,dst,CV_INTER_LINEAR);//////////////////


char* filename2 = _filename;char* filename3 = _filename; filename3="_96x160.jpg";


strncat(filename2, filename,l-4);


strcat(filename2, filename3);




cvSaveImage(filename2, dst);




}


if(f)


fclose(f);




cvWaitKey(-1);


cvReleaseImage( &src );


cvReleaseImage( &dst );




return 0;


}



 





