
[Autonomous Driving] Training the SECOND Model


1. Data Organization

Generate the training and validation data:

python create_data.py nuscenes_data_prep --data_path=NUSCENES_TRAINVAL_DATASET_ROOT --version="v1.0-trainval" --max_sweeps=10
python create_data.py nuscenes_data_prep --data_path=NUSCENES_TEST_DATASET_ROOT --version="v1.0-test" --max_sweeps=10 --dataset_name="NuscenesDataset"

For a custom dataset, adapt the scripts to your own classes: convert the point clouds to .bin files, pack them together with the labels into .pkl info files, and then run the commands above.

For example: (1) my_common.py packs the labels and point clouds into a .pkl file (a sketch of such a script follows after step (2)).

(2) python my_create_data.py mydata --data_path=datapath
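
A minimal sketch of what such a packing script could look like. This is an assumption, not the author's actual my_common.py: the directory layout, label format, and info-dict fields below are placeholders and must match whatever your dataset class (step 5) reads back from the pkl.

# my_common.py -- hypothetical sketch: pack labels and point clouds into a pkl.
import pickle
from pathlib import Path

import numpy as np


def pack_infos(data_path, out_path):
    data_path = Path(data_path)
    infos = []
    for bin_file in sorted((data_path / "velodyne").glob("*.bin")):
        # point clouds stored as float32 rows of [x, y, z, intensity]
        points = np.fromfile(str(bin_file), dtype=np.float32).reshape(-1, 4)
        label_file = data_path / "label" / (bin_file.stem + ".txt")
        names, boxes = [], []
        with open(label_file, "r") as f:
            for line in f:
                vals = line.split()
                names.append(vals[0])
                # assumed label layout per line: class x y z l w h yaw
                boxes.append([float(v) for v in vals[1:8]])
        infos.append({
            "velodyne_path": str(bin_file),
            "num_points": points.shape[0],
            "annos": {
                "name": np.array(names),
                "boxes": np.array(boxes, dtype=np.float32),
            },
        })
    with open(out_path, "wb") as f:
        pickle.dump(infos, f)


if __name__ == "__main__":
    pack_infos("datapath", "datapath/my_infos_train.pkl")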

2. Edit the Config File

Open second.pytorch/second/configs/car.lite.config and edit the class names and the training-data paths:

train_input_reader: {
  ...
  database_sampler {
    database_info_path: "/path/to/dataset_dbinfos_train.pkl"
    ...
  }
  dataset: {
    dataset_class_name: "DATASET_NAME"
    kitti_info_path: "/path/to/dataset_infos_train.pkl"
    kitti_root_path: "DATASET_ROOT"
  }
}
...
eval_input_reader: {
  ...
  dataset: {
    dataset_class_name: "DATASET_NAME"
    kitti_info_path: "/path/to/dataset_infos_val.pkl"
    kitti_root_path: "DATASET_ROOT"
  }
}

3. Start Training

Single GPU:

python ./pytorch/train.py train --config_path=./configs/car.fhd.config --model_dir=/path/to/model_dir

Multi-GPU:

CUDA_VISIBLE_DEVICES=0,1,3 python ./pytorch/train.py train --config_path=./configs/car.fhd.config --model_dir=/path/to/model_dir --multi_gpu=True

Mixed-precision (FP16) training:

Edit the config file and set enable_mixed_precision to true.
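
If your config follows the stock layout, this flag sits inside the train_config block; roughly (surrounding fields elided):

train_config: {
  ...
  enable_mixed_precision: true
  ...
}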
(1) If you want to train a new model, make sure /path/to/model_dir does not exist. If model_dir does not exist, a new directory is created; otherwise the checkpoints inside it are loaded.
(2) Training uses batch_size=6 as the default for a 1080Ti; if your GPU has less memory, reduce the batch size.
(3) Currently only single-GPU training is well supported, but training a model takes only about 20 hours on a single 1080Ti, and with super convergence only about 50 epochs are needed to reach 78.3 AP on car moderate 3D on the KITTI validation set.

4. Evaluation

Detection results are saved to result.pkl by default; set --pickle_result=False to save them in KITTI label format instead.

python ./pytorch/train.py evaluate --config_path=./configs/car.fhd.config --model_dir=/path/to/model_dir --measure_time=True --batch_size=1
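
For a quick sanity check you can unpickle the saved detections directly. A sketch, assuming the default --pickle_result=True output; the path is a placeholder (the exact output directory depends on your version), and the per-frame keys follow the detection dicts shown in step 5 (box3d_lidar, scores, label_preds, metadata). Depending on the version, the values may be NumPy arrays or torch tensors:

import pickle

with open("/path/to/model_dir/eval_results/result.pkl", "rb") as f:  # placeholder path
    detections = pickle.load(f)

det = detections[0]  # one dict per frame
print(det["metadata"])
print(det["box3d_lidar"].shape)  # [N, 7] boxes in lidar coordinates
print(det["scores"][:5])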

5. Training on Custom Data

You need to modify or rewrite second.data.kitti_dataset and register the dataset class with the @register_dataset decorator.

For training you also need to modify eval.py, mainly to account for the classes of the custom data.
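
Before the full kitti_dataset.py listing below, here is a minimal skeleton of what a registered dataset must provide. This is a sketch, not the repo's code: MyDataset and the info-dict fields are placeholder names, and the query is assumed to be a plain index.

import pickle
from pathlib import Path

import numpy as np

from second.data.dataset import Dataset, register_dataset


@register_dataset
class MyDataset(Dataset):  # hypothetical class name
    NumPointFeatures = 4  # x, y, z, intensity

    def __init__(self, root_path, info_path, class_names=None,
                 prep_func=None, num_point_features=None):
        with open(info_path, "rb") as f:
            self._infos = pickle.load(f)
        self._root_path = Path(root_path)
        self._class_names = class_names
        self._prep_func = prep_func

    def __len__(self):
        return len(self._infos)

    def __getitem__(self, idx):
        # voxelization / anchor generation happens inside prep_func
        return self._prep_func(input_dict=self.get_sensor_data(idx))

    def get_sensor_data(self, query):
        info = self._infos[query]
        points = np.fromfile(
            info["velodyne_path"],
            dtype=np.float32).reshape(-1, self.NumPointFeatures)
        return {
            "lidar": {"type": "lidar", "points": points},
            "metadata": {"idx": query},
            "calib": None,
            "cam": {},
        }

    def evaluation(self, detections, output_dir):
        # implement your own metric here, or reuse second.utils.eval helpers
        return None

The full KITTI reference implementation follows: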

from pathlib import Path
import pickle
import time
from functools import partial

import fire  # needed by fire.Fire() at the bottom; missing in the original listing
import numpy as np

from second.core import box_np_ops
from second.core import preprocess as prep
from second.data import kitti_common as kitti
from second.utils.eval import get_coco_eval_result, get_official_eval_result
from second.data.dataset import Dataset, register_dataset
from second.utils.progress_bar import progress_bar_iter as prog_bar


@register_dataset
class KittiDataset(Dataset):
    NumPointFeatures = 4

    def __init__(self,
                 root_path,
                 info_path,
                 class_names=None,
                 prep_func=None,
                 num_point_features=None):
        assert info_path is not None
        with open(info_path, 'rb') as f:
            infos = pickle.load(f)
        self._root_path = Path(root_path)
        self._kitti_infos = infos
        print("remain number of infos:", len(self._kitti_infos))
        self._class_names = class_names
        self._prep_func = prep_func

    def __len__(self):
        return len(self._kitti_infos)

    def convert_detection_to_kitti_annos(self, detection):
        class_names = self._class_names
        det_image_idxes = [det["metadata"]["image_idx"] for det in detection]
        gt_image_idxes = [
            info["image"]["image_idx"] for info in self._kitti_infos
        ]
        annos = []
        for i in range(len(detection)):
            det_idx = det_image_idxes[i]
            det = detection[i]
            # info = self._kitti_infos[gt_image_idxes.index(det_idx)]
            info = self._kitti_infos[i]
            calib = info["calib"]
            rect = calib["R0_rect"]
            Trv2c = calib["Tr_velo_to_cam"]
            P2 = calib["P2"]
            final_box_preds = det["box3d_lidar"].detach().cpu().numpy()
            label_preds = det["label_preds"].detach().cpu().numpy()
            scores = det["scores"].detach().cpu().numpy()
            if final_box_preds.shape[0] != 0:
                final_box_preds[:, 2] -= final_box_preds[:, 5] / 2
                box3d_camera = box_np_ops.box_lidar_to_camera(
                    final_box_preds, rect, Trv2c)
                locs = box3d_camera[:, :3]
                dims = box3d_camera[:, 3:6]
                angles = box3d_camera[:, 6]
                camera_box_origin = [0.5, 1.0, 0.5]
                box_corners = box_np_ops.center_to_corner_box3d(
                    locs, dims, angles, camera_box_origin, axis=1)
                box_corners_in_image = box_np_ops.project_to_image(
                    box_corners, P2)
                # box_corners_in_image: [N, 8, 2]
                minxy = np.min(box_corners_in_image, axis=1)
                maxxy = np.max(box_corners_in_image, axis=1)
                bbox = np.concatenate([minxy, maxxy], axis=1)
            anno = kitti.get_start_result_anno()
            num_example = 0
            box3d_lidar = final_box_preds
            for j in range(box3d_lidar.shape[0]):
                image_shape = info["image"]["image_shape"]
                if bbox[j, 0] > image_shape[1] or bbox[j, 1] > image_shape[0]:
                    continue
                if bbox[j, 2] < 0 or bbox[j, 3] < 0:
                    continue
                bbox[j, 2:] = np.minimum(bbox[j, 2:], image_shape[::-1])
                bbox[j, :2] = np.maximum(bbox[j, :2], [0, 0])
                anno["bbox"].append(bbox[j])
                # convert center format to kitti format
                # box3d_lidar[j, 2] -= box3d_lidar[j, 5] / 2
                anno["alpha"].append(
                    -np.arctan2(-box3d_lidar[j, 1], box3d_lidar[j, 0]) +
                    box3d_camera[j, 6])
                anno["dimensions"].append(box3d_camera[j, 3:6])
                anno["location"].append(box3d_camera[j, :3])
                anno["rotation_y"].append(box3d_camera[j, 6])
                anno["name"].append(class_names[int(label_preds[j])])
                anno["truncated"].append(0.0)
                anno["occluded"].append(0)
                anno["score"].append(scores[j])
                num_example += 1
            if num_example != 0:
                anno = {n: np.stack(v) for n, v in anno.items()}
                annos.append(anno)
            else:
                annos.append(kitti.empty_result_anno())
            num_example = annos[-1]["name"].shape[0]
            annos[-1]["metadata"] = det["metadata"]
        return annos

    def evaluation(self, detections, output_dir):
        """
        detection
        When you want to eval your own dataset, you MUST set correct
        the z axis and box z center.
        If you want to eval by my KITTI eval function, you must provide
        the correct format annotations.
        ground_truth_annotations format:
        {
            bbox: [N, 4], if you fill fake data, MUST HAVE >25 HEIGHT!!!!!!
            alpha: [N], you can use -10 to ignore it.
            occluded: [N], you can use zero.
            truncated: [N], you can use zero.
            name: [N]
            location: [N, 3] center of 3d box.
            dimensions: [N, 3] dim of 3d box.
            rotation_y: [N] angle.
        }
        all fields must be filled, but some fields can fill zero.
        """
        if "annos" not in self._kitti_infos[0]:
            return None
        gt_annos = [info["annos"] for info in self._kitti_infos]
        dt_annos = self.convert_detection_to_kitti_annos(detections)
        # firstly convert standard detection to kitti-format dt annos
        z_axis = 1  # KITTI camera format use y as regular "z" axis.
        z_center = 1.0  # KITTI camera box's center is [0.5, 1, 0.5]
        # for regular raw lidar data, z_axis = 2, z_center = 0.5.
        result_official_dict = get_official_eval_result(
            gt_annos,
            dt_annos,
            self._class_names,
            z_axis=z_axis,
            z_center=z_center)
        result_coco = get_coco_eval_result(
            gt_annos,
            dt_annos,
            self._class_names,
            z_axis=z_axis,
            z_center=z_center)
        return {
            "results": {
                "official": result_official_dict["result"],
                "coco": result_coco["result"],
            },
            "detail": {
                "eval.kitti": {
                    "official": result_official_dict["detail"],
                    "coco": result_coco["detail"]
                }
            },
        }

    def __getitem__(self, idx):
        input_dict = self.get_sensor_data(idx)
        example = self._prep_func(input_dict=input_dict)
        example["metadata"] = {}
        if "image_idx" in input_dict["metadata"]:
            example["metadata"] = input_dict["metadata"]
        if "anchors_mask" in example:
            example["anchors_mask"] = example["anchors_mask"].astype(np.uint8)
        return example

    def get_sensor_data(self, query):
        read_image = False
        idx = query
        if isinstance(query, dict):
            read_image = "cam" in query
            assert "lidar" in query
            idx = query["lidar"]["idx"]
        info = self._kitti_infos[idx]
        res = {
            "lidar": {
                "type": "lidar",
                "points": None,
            },
            "metadata": {
                "image_idx": info["image"]["image_idx"],
                "image_shape": info["image"]["image_shape"],
            },
            "calib": None,
            "cam": {}
        }
        pc_info = info["point_cloud"]
        velo_path = Path(pc_info['velodyne_path'])
        if not velo_path.is_absolute():
            velo_path = Path(self._root_path) / pc_info['velodyne_path']
        velo_reduced_path = velo_path.parent.parent / (
            velo_path.parent.stem + '_reduced') / velo_path.name
        if velo_reduced_path.exists():
            velo_path = velo_reduced_path
        points = np.fromfile(
            str(velo_path), dtype=np.float32,
            count=-1).reshape([-1, self.NumPointFeatures])
        res["lidar"]["points"] = points
        image_info = info["image"]
        image_path = image_info['image_path']
        if read_image:
            image_path = self._root_path / image_path
            with open(str(image_path), 'rb') as f:
                image_str = f.read()
            res["cam"] = {
                "type": "camera",
                "data": image_str,
                "datatype": image_path.suffix[1:],
            }
        calib = info["calib"]
        calib_dict = {
            'rect': calib['R0_rect'],
            'Trv2c': calib['Tr_velo_to_cam'],
            'P2': calib['P2'],
        }
        res["calib"] = calib_dict
        if 'annos' in info:
            annos = info['annos']
            # we need other objects to avoid collision when sample
            annos = kitti.remove_dontcare(annos)
            locs = annos["location"]
            dims = annos["dimensions"]
            rots = annos["rotation_y"]
            gt_names = annos["name"]
            # rots = np.concatenate([np.zeros([locs.shape[0], 2], dtype=np.float32), rots], axis=1)
            gt_boxes = np.concatenate([locs, dims, rots[..., np.newaxis]],
                                      axis=1).astype(np.float32)
            calib = info["calib"]
            gt_boxes = box_np_ops.box_camera_to_lidar(
                gt_boxes, calib["R0_rect"], calib["Tr_velo_to_cam"])
            # only center format is allowed. so we need to convert
            # kitti [0.5, 0.5, 0] center to [0.5, 0.5, 0.5]
            box_np_ops.change_box3d_center_(gt_boxes, [0.5, 0.5, 0],
                                            [0.5, 0.5, 0.5])
            res["lidar"]["annotations"] = {
                'boxes': gt_boxes,
                'names': gt_names,
            }
            res["cam"]["annotations"] = {
                'boxes': annos["bbox"],
                'names': gt_names,
            }
        return res


def convert_to_kitti_info_version2(info):
    """convert kitti info v1 to v2 if possible."""
    if "image" not in info or "calib" not in info or "point_cloud" not in info:
        info["image"] = {
            'image_shape': info["img_shape"],
            'image_idx': info['image_idx'],
            'image_path': info['img_path'],
        }
        info["calib"] = {
            "R0_rect": info['calib/R0_rect'],
            "Tr_velo_to_cam": info['calib/Tr_velo_to_cam'],
            "P2": info['calib/P2'],
        }
        info["point_cloud"] = {
            "velodyne_path": info['velodyne_path'],
        }


def kitti_anno_to_label_file(annos, folder):
    folder = Path(folder)
    for anno in annos:
        image_idx = anno["metadata"]["image_idx"]
        label_lines = []
        for j in range(anno["bbox"].shape[0]):
            label_dict = {
                'name': anno["name"][j],
                'alpha': anno["alpha"][j],
                'bbox': anno["bbox"][j],
                'location': anno["location"][j],
                'dimensions': anno["dimensions"][j],
                'rotation_y': anno["rotation_y"][j],
                'score': anno["score"][j],
            }
            label_line = kitti.kitti_result_line(label_dict)
            label_lines.append(label_line)
        label_file = folder / f"{kitti.get_image_index_str(image_idx)}.txt"
        label_str = '\n'.join(label_lines)
        with open(label_file, 'w') as f:
            f.write(label_str)


def _read_imageset_file(path):
    with open(path, 'r') as f:
        lines = f.readlines()
    return [int(line) for line in lines]


def _calculate_num_points_in_gt(data_path,
                                infos,
                                relative_path,
                                remove_outside=True,
                                num_features=4):
    for info in infos:
        pc_info = info["point_cloud"]
        image_info = info["image"]
        calib = info["calib"]
        if relative_path:
            v_path = str(Path(data_path) / pc_info["velodyne_path"])
        else:
            v_path = pc_info["velodyne_path"]
        points_v = np.fromfile(
            v_path, dtype=np.float32, count=-1).reshape([-1, num_features])
        rect = calib['R0_rect']
        Trv2c = calib['Tr_velo_to_cam']
        P2 = calib['P2']
        if remove_outside:
            points_v = box_np_ops.remove_outside_points(
                points_v, rect, Trv2c, P2, image_info["image_shape"])
        # points_v = points_v[points_v[:, 0] > 0]
        annos = info['annos']
        num_obj = len([n for n in annos['name'] if n != 'DontCare'])
        # annos = kitti.filter_kitti_anno(annos, ['DontCare'])
        dims = annos['dimensions'][:num_obj]
        loc = annos['location'][:num_obj]
        rots = annos['rotation_y'][:num_obj]
        gt_boxes_camera = np.concatenate([loc, dims, rots[..., np.newaxis]],
                                         axis=1)
        gt_boxes_lidar = box_np_ops.box_camera_to_lidar(
            gt_boxes_camera, rect, Trv2c)
        indices = box_np_ops.points_in_rbbox(points_v[:, :3], gt_boxes_lidar)
        num_points_in_gt = indices.sum(0)
        num_ignored = len(annos['dimensions']) - num_obj
        num_points_in_gt = np.concatenate(
            [num_points_in_gt, -np.ones([num_ignored])])
        annos["num_points_in_gt"] = num_points_in_gt.astype(np.int32)


def create_kitti_info_file(data_path, save_path=None, relative_path=True):
    imageset_folder = Path(__file__).resolve().parent / "ImageSets"
    train_img_ids = _read_imageset_file(str(imageset_folder / "train.txt"))
    val_img_ids = _read_imageset_file(str(imageset_folder / "val.txt"))
    test_img_ids = _read_imageset_file(str(imageset_folder / "test.txt"))
    print("Generate info. this may take several minutes.")
    if save_path is None:
        save_path = Path(data_path)
    else:
        save_path = Path(save_path)
    kitti_infos_train = kitti.get_kitti_image_info(
        data_path,
        training=True,
        velodyne=True,
        calib=True,
        image_ids=train_img_ids,
        relative_path=relative_path)
    _calculate_num_points_in_gt(data_path, kitti_infos_train, relative_path)
    filename = save_path / 'kitti_infos_train.pkl'
    print(f"Kitti info train file is saved to {filename}")
    with open(filename, 'wb') as f:
        pickle.dump(kitti_infos_train, f)
    kitti_infos_val = kitti.get_kitti_image_info(
        data_path,
        training=True,
        velodyne=True,
        calib=True,
        image_ids=val_img_ids,
        relative_path=relative_path)
    _calculate_num_points_in_gt(data_path, kitti_infos_val, relative_path)
    filename = save_path / 'kitti_infos_val.pkl'
    print(f"Kitti info val file is saved to {filename}")
    with open(filename, 'wb') as f:
        pickle.dump(kitti_infos_val, f)
    filename = save_path / 'kitti_infos_trainval.pkl'
    print(f"Kitti info trainval file is saved to {filename}")
    with open(filename, 'wb') as f:
        pickle.dump(kitti_infos_train + kitti_infos_val, f)
    kitti_infos_test = kitti.get_kitti_image_info(
        data_path,
        training=False,
        label_info=False,
        velodyne=True,
        calib=True,
        image_ids=test_img_ids,
        relative_path=relative_path)
    filename = save_path / 'kitti_infos_test.pkl'
    print(f"Kitti info test file is saved to {filename}")
    with open(filename, 'wb') as f:
        pickle.dump(kitti_infos_test, f)


def _create_reduced_point_cloud(data_path,
                                info_path,
                                save_path=None,
                                back=False):
    with open(info_path, 'rb') as f:
        kitti_infos = pickle.load(f)
    for info in prog_bar(kitti_infos):
        pc_info = info["point_cloud"]
        image_info = info["image"]
        calib = info["calib"]
        v_path = pc_info['velodyne_path']
        v_path = Path(data_path) / v_path
        points_v = np.fromfile(
            str(v_path), dtype=np.float32, count=-1).reshape([-1, 4])
        rect = calib['R0_rect']
        P2 = calib['P2']
        Trv2c = calib['Tr_velo_to_cam']
        # first remove z < 0 points
        # keep = points_v[:, -1] > 0
        # points_v = points_v[keep]
        # then remove outside.
        if back:
            points_v[:, 0] = -points_v[:, 0]
        points_v = box_np_ops.remove_outside_points(
            points_v, rect, Trv2c, P2, image_info["image_shape"])
        if save_path is None:
            save_filename = v_path.parent.parent / (
                v_path.parent.stem + "_reduced") / v_path.name
            # save_filename = str(v_path) + '_reduced'
            if back:
                # note: the original used +=, which fails on a Path object
                save_filename = str(save_filename) + "_back"
        else:
            save_filename = str(Path(save_path) / v_path.name)
            if back:
                save_filename += "_back"
        with open(save_filename, 'w') as f:
            points_v.tofile(f)


def create_reduced_point_cloud(data_path,
                               train_info_path=None,
                               val_info_path=None,
                               test_info_path=None,
                               save_path=None,
                               with_back=False):
    if train_info_path is None:
        train_info_path = Path(data_path) / 'kitti_infos_train.pkl'
    if val_info_path is None:
        val_info_path = Path(data_path) / 'kitti_infos_val.pkl'
    if test_info_path is None:
        test_info_path = Path(data_path) / 'kitti_infos_test.pkl'
    _create_reduced_point_cloud(data_path, train_info_path, save_path)
    _create_reduced_point_cloud(data_path, val_info_path, save_path)
    _create_reduced_point_cloud(data_path, test_info_path, save_path)
    if with_back:
        _create_reduced_point_cloud(
            data_path, train_info_path, save_path, back=True)
        _create_reduced_point_cloud(
            data_path, val_info_path, save_path, back=True)
        _create_reduced_point_cloud(
            data_path, test_info_path, save_path, back=True)


if __name__ == "__main__":
    fire.Fire()
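
Because the module ends with fire.Fire(), every top-level function doubles as a command-line subcommand; for example (paths are placeholders):

python second/data/kitti_dataset.py create_kitti_info_file --data_path=KITTI_DATASET_ROOT
python second/data/kitti_dataset.py create_reduced_point_cloud --data_path=KITTI_DATASET_ROOT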

Modify class_to_name in second.utils.eval:

class_to_name = {
    0: 'Car',
    1: 'Pedestrian',
    2: 'Cyclist',
    3: 'Van',
    4: 'Person_sitting',
    5: 'car',
    6: 'tractor',
    7: 'trailer',
}

def get_official_eval_result(gt_annos,
                             dt_annos,
                             current_classes,
                             difficultys=[0, 1, 2],
                             z_axis=1,
                             z_center=1.0):
    """gt_annos and dt_annos must contain the following keys:
    [bbox, location, dimensions, rotation_y, score]
    """
    overlap_mod = np.array(
        [[0.7, 0.5, 0.5, 0.7, 0.5, 0.7, 0.7, 0.7],
         [0.7, 0.5, 0.5, 0.7, 0.5, 0.7, 0.7, 0.7],
         [0.7, 0.5, 0.5, 0.7, 0.5, 0.7, 0.7, 0.7]])
    overlap_easy = np.array(
        [[0.7, 0.5, 0.5, 0.7, 0.5, 0.5, 0.5, 0.5],
         [0.5, 0.25, 0.25, 0.5, 0.25, 0.5, 0.5, 0.5],
         [0.5, 0.25, 0.25, 0.5, 0.25, 0.5, 0.5, 0.5]])
    min_overlaps = np.stack([overlap_mod, overlap_easy], axis=0)  # [2, 3, 8]
    class_to_name = {
        0: 'Car',
        1: 'Pedestrian',
        2: 'Cyclist',
        3: 'Van',
        4: 'Person_sitting',
        5: 'car',
        6: 'tractor',
        7: 'trailer',
    }
    name_to_class = {v: n for n, v in class_to_name.items()}
    if not isinstance(current_classes, (list, tuple)):
        current_classes = [current_classes]
    current_classes_int = []
    for curcls in current_classes:
        if isinstance(curcls, str):
            current_classes_int.append(name_to_class[curcls])
        else:
            current_classes_int.append(curcls)
    current_classes = current_classes_int
    min_overlaps = min_overlaps[:, :, current_classes]
    result = ''
    # check whether alpha is valid
    compute_aos = False
    for anno in dt_annos:
        if anno['alpha'].shape[0] != 0:
            if anno['alpha'][0] != -10:
                compute_aos = True
            break
    metrics = do_eval_v3(
        gt_annos,
        dt_annos,
        current_classes,
        min_overlaps,
        compute_aos,
        difficultys,
        z_axis=z_axis,
        z_center=z_center)
    detail = {}
    for j, curcls in enumerate(current_classes):
        # mAP threshold array: [num_minoverlap, metric, class]
        # mAP result: [num_class, num_diff, num_minoverlap]
        class_name = class_to_name[curcls]
        detail[class_name] = {}
        for i in range(min_overlaps.shape[0]):
            mAPbbox = get_mAP(metrics["bbox"]["precision"][j, :, i])
            mAPbev = get_mAP(metrics["bev"]["precision"][j, :, i])
            mAP3d = get_mAP(metrics["3d"]["precision"][j, :, i])
            detail[class_name][f"bbox@{min_overlaps[i, 0, j]:.2f}"] = mAPbbox.tolist()
            detail[class_name][f"bev@{min_overlaps[i, 1, j]:.2f}"] = mAPbev.tolist()
            detail[class_name][f"3d@{min_overlaps[i, 2, j]:.2f}"] = mAP3d.tolist()
            result += print_str(
                (f"{class_to_name[curcls]} "
                 "AP(Average Precision)@{:.2f}, {:.2f}, {:.2f}:".format(
                     *min_overlaps[i, :, j])))
            mAPbbox = ", ".join(f"{v:.2f}" for v in mAPbbox)
            mAPbev = ", ".join(f"{v:.2f}" for v in mAPbev)
            mAP3d = ", ".join(f"{v:.2f}" for v in mAP3d)
            result += print_str(f"bbox AP:{mAPbbox}")
            result += print_str(f"bev AP:{mAPbev}")
            result += print_str(f"3d AP:{mAP3d}")
            if compute_aos:
                mAPaos = get_mAP(metrics["bbox"]["orientation"][j, :, i])
                detail[class_name]["aos"] = mAPaos.tolist()
                mAPaos = ", ".join(f"{v:.2f}" for v in mAPaos)
                result += print_str(f"aos AP:{mAPaos}")
    return {
        "result": result,
        "detail": detail,
    }
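
As the docstring in KittiDataset.evaluation above notes, boxes in raw lidar coordinates use z_axis=2 and z_center=0.5 instead of the KITTI camera defaults. A sketch of calling the modified eval for the custom classes, where gt_annos and dt_annos are placeholders in the required annotation format:

result = get_official_eval_result(
    gt_annos,
    dt_annos,
    current_classes=["car", "tractor", "trailer"],
    z_axis=2,      # raw lidar data: z is the vertical axis
    z_center=0.5)  # lidar box center sits at mid-height
print(result["result"])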

