
The convolutional neural network example from the TensorFlow tutorials: convolutional.py


The code comes from here:

https://github.com/tensorflow/tensorflow/tree/r0.12/tensorflow/models

An example of a convolutional neural network applied to stock analysis: https://github.com/keon/deepstock

 

import argparse
import gzip
import os
import sys
import time

import numpy
import tensorflow as tf

SOURCE_URL = 'http://yann.lecun.com/exdb/mnist/'
WORK_DIRECTORY = '/home/hzh/tf'
IMAGE_SIZE = 28
NUM_CHANNELS = 1
PIXEL_DEPTH = 255
NUM_LABELS = 10
VALIDATION_SIZE = 5000  # Size of the validation set.
SEED = 66478  # Set to None for random seed.
BATCH_SIZE = 64
NUM_EPOCHS = 10
EVAL_BATCH_SIZE = 64
EVAL_FREQUENCY = 100  # Number of steps between evaluations.

FLAGS = None


def data_type():
  """Return the type of the activations, weights, and placeholder variables."""
  if FLAGS.use_fp16:
    return tf.float16
  else:
    return tf.float32


def maybe_download(filename):
  """Download the data from Yann's website, unless it's already here."""
  if not tf.gfile.Exists(WORK_DIRECTORY):
    tf.gfile.MakeDirs(WORK_DIRECTORY)
  filepath = os.path.join(WORK_DIRECTORY, filename)
  return filepath


def extract_data(filename, num_images):
  """Extract the images into a 4D tensor [image index, y, x, channels].

  Values are rescaled from [0, 255] down to [-0.5, 0.5].
  """
  print('Extracting', filename)
  with gzip.open(filename) as bytestream:
    bytestream.read(16)
    buf = bytestream.read(IMAGE_SIZE * IMAGE_SIZE * num_images * NUM_CHANNELS)
    data = numpy.frombuffer(buf, dtype=numpy.uint8).astype(numpy.float32)
    data = (data - (PIXEL_DEPTH / 2.0)) / PIXEL_DEPTH
    data = data.reshape(num_images, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS)
    return data


def extract_labels(filename, num_images):
  """Extract the labels into a vector of int64 label IDs."""
  print('Extracting', filename)
  with gzip.open(filename) as bytestream:
    bytestream.read(8)
    buf = bytestream.read(1 * num_images)
    labels = numpy.frombuffer(buf, dtype=numpy.uint8).astype(numpy.int64)
  return labels


def fake_data(num_images):
  """Generate a fake dataset that matches the dimensions of MNIST."""
  data = numpy.ndarray(
      shape=(num_images, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS),
      dtype=numpy.float32)
  labels = numpy.zeros(shape=(num_images,), dtype=numpy.int64)
  for image in range(num_images):
    label = image % 2
    data[image, :, :, 0] = label - 0.5
    labels[image] = label
  return data, labels


def error_rate(predictions, labels):
  """Return the error rate based on dense predictions and sparse labels."""
  return 100.0 - (
      100.0 *
      numpy.sum(numpy.argmax(predictions, 1) == labels) /
      predictions.shape[0])


def main(_):
  if FLAGS.self_test:
    print('Running self-test.')
    train_data, train_labels = fake_data(256)
    validation_data, validation_labels = fake_data(EVAL_BATCH_SIZE)
    test_data, test_labels = fake_data(EVAL_BATCH_SIZE)
    num_epochs = 1
  else:
    # Get the data.
    train_data_filename = maybe_download('train-images-idx3-ubyte.gz')
    train_labels_filename = maybe_download('train-labels-idx1-ubyte.gz')
    test_data_filename = maybe_download('t10k-images-idx3-ubyte.gz')
    test_labels_filename = maybe_download('t10k-labels-idx1-ubyte.gz')

    # Extract it into numpy arrays.
    train_data = extract_data(train_data_filename, 60000)
    train_labels = extract_labels(train_labels_filename, 60000)
    test_data = extract_data(test_data_filename, 10000)
    test_labels = extract_labels(test_labels_filename, 10000)

    # Generate a validation set.
    validation_data = train_data[:VALIDATION_SIZE, ...]
    validation_labels = train_labels[:VALIDATION_SIZE]
    train_data = train_data[VALIDATION_SIZE:, ...]
    train_labels = train_labels[VALIDATION_SIZE:]
    num_epochs = NUM_EPOCHS
  train_size = train_labels.shape[0]

  # This is where training samples and labels are fed to the graph.
  # These placeholder nodes will be fed a batch of training data at each
  # training step using the {feed_dict} argument to the Run() call below.
  train_data_node = tf.placeholder(
      data_type(),
      shape=(BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS))
  train_labels_node = tf.placeholder(tf.int64, shape=(BATCH_SIZE,))
  eval_data = tf.placeholder(
      data_type(),
      shape=(EVAL_BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS))

  # The variables below hold all the trainable weights. They are passed an
  # initial value which will be assigned when we call:
  # {tf.global_variables_initializer().run()}
  conv1_weights = tf.Variable(
      tf.truncated_normal([5, 5, NUM_CHANNELS, 32],  # 5x5 filter, depth 32.
                          stddev=0.1,
                          seed=SEED, dtype=data_type()))
  conv1_biases = tf.Variable(tf.zeros([32], dtype=data_type()))
  conv2_weights = tf.Variable(
      tf.truncated_normal([5, 5, 32, 64], stddev=0.1,
                          seed=SEED, dtype=data_type()))
  conv2_biases = tf.Variable(tf.constant(0.1, shape=[64], dtype=data_type()))
  fc1_weights = tf.Variable(  # fully connected, depth 512.
      tf.truncated_normal([IMAGE_SIZE // 4 * IMAGE_SIZE // 4 * 64, 512],
                          stddev=0.1,
                          seed=SEED,
                          dtype=data_type()))
  fc1_biases = tf.Variable(tf.constant(0.1, shape=[512], dtype=data_type()))
  fc2_weights = tf.Variable(tf.truncated_normal([512, NUM_LABELS],
                                                stddev=0.1,
                                                seed=SEED,
                                                dtype=data_type()))
  fc2_biases = tf.Variable(tf.constant(
      0.1, shape=[NUM_LABELS], dtype=data_type()))

  # We will replicate the model structure for the training subgraph, as well
  # as the evaluation subgraphs, while sharing the trainable parameters.
  def model(data, train=False):
    """The Model definition."""
    # 2D convolution, with 'SAME' padding (i.e. the output feature map has
    # the same size as the input). Note that {strides} is a 4D array whose
    # shape matches the data layout: [image index, y, x, depth].
    conv = tf.nn.conv2d(data,
                        conv1_weights,
                        strides=[1, 1, 1, 1],
                        padding='SAME')
    # Bias and rectified linear non-linearity.
    relu = tf.nn.relu(tf.nn.bias_add(conv, conv1_biases))
    # Max pooling. The kernel size spec {ksize} also follows the layout of
    # the data. Here we have a pooling window of 2, and a stride of 2.
    pool = tf.nn.max_pool(relu,
                          ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1],
                          padding='SAME')
    conv = tf.nn.conv2d(pool,
                        conv2_weights,
                        strides=[1, 1, 1, 1],
                        padding='SAME')
    relu = tf.nn.relu(tf.nn.bias_add(conv, conv2_biases))
    pool = tf.nn.max_pool(relu,
                          ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1],
                          padding='SAME')
    # Reshape the feature map cuboid into a 2D matrix to feed it to the
    # fully connected layers.
    pool_shape = pool.get_shape().as_list()
    reshape = tf.reshape(
        pool,
        [pool_shape[0], pool_shape[1] * pool_shape[2] * pool_shape[3]])
    # Fully connected layer. Note that the '+' operation automatically
    # broadcasts the biases.
    hidden = tf.nn.relu(tf.matmul(reshape, fc1_weights) + fc1_biases)
    # Add a 50% dropout during training only. Dropout also scales
    # activations such that no rescaling is needed at evaluation time.
    if train:
      hidden = tf.nn.dropout(hidden, 0.5, seed=SEED)
    return tf.matmul(hidden, fc2_weights) + fc2_biases

  # Training computation: logits + cross-entropy loss.
  logits = model(train_data_node, True)
  loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
      logits=logits, labels=train_labels_node))

  # L2 regularization for the fully connected parameters.
  regularizers = (tf.nn.l2_loss(fc1_weights) + tf.nn.l2_loss(fc1_biases) +
                  tf.nn.l2_loss(fc2_weights) + tf.nn.l2_loss(fc2_biases))
  # Add the regularization term to the loss.
  loss += 5e-4 * regularizers

  # Optimizer: set up a variable that's incremented once per batch and
  # controls the learning rate decay.
  batch = tf.Variable(0, dtype=data_type())
  # Decay once per epoch, using an exponential schedule starting at 0.01.
  learning_rate = tf.train.exponential_decay(
      0.01,                # Base learning rate.
      batch * BATCH_SIZE,  # Current index into the dataset.
      train_size,          # Decay step.
      0.95,                # Decay rate.
      staircase=True)
  # Use simple momentum for the optimization.
  optimizer = tf.train.MomentumOptimizer(learning_rate,
                                         0.9).minimize(loss,
                                                       global_step=batch)

  # Predictions for the current training minibatch.
  train_prediction = tf.nn.softmax(logits)

  # Predictions for the test and validation, which we'll compute less often.
  eval_prediction = tf.nn.softmax(model(eval_data))

  # Small utility function to evaluate a dataset by feeding batches of data to
  # {eval_data} and pulling the results from {eval_predictions}.
  # Saves memory and enables this to run on smaller GPUs.
  def eval_in_batches(data, sess):
    """Get all predictions for a dataset by running it in small batches."""
    size = data.shape[0]
    if size < EVAL_BATCH_SIZE:
      raise ValueError("batch size for evals larger than dataset: %d" % size)
    predictions = numpy.ndarray(shape=(size, NUM_LABELS), dtype=numpy.float32)
    for begin in range(0, size, EVAL_BATCH_SIZE):
      end = begin + EVAL_BATCH_SIZE
      if end <= size:
        predictions[begin:end, :] = sess.run(
            eval_prediction,
            feed_dict={eval_data: data[begin:end, ...]})
      else:
        batch_predictions = sess.run(
            eval_prediction,
            feed_dict={eval_data: data[-EVAL_BATCH_SIZE:, ...]})
        predictions[begin:, :] = batch_predictions[begin - size:, :]
    return predictions

  # Create a local session to run the training.
  start_time = time.time()
  with tf.Session() as sess:
    # Run all the initializers to prepare the trainable parameters.
    tf.global_variables_initializer().run()
    print('Initialized!')
    # Loop through training steps.
    for step in range(int(num_epochs * train_size) // BATCH_SIZE):
      # Compute the offset of the current minibatch in the data.
      # Note that we could use better randomization across epochs.
      offset = (step * BATCH_SIZE) % (train_size - BATCH_SIZE)
      batch_data = train_data[offset:(offset + BATCH_SIZE), ...]
      batch_labels = train_labels[offset:(offset + BATCH_SIZE)]
      # This dictionary maps the batch data (as a numpy array) to the
      # node in the graph it should be fed to.
      feed_dict = {train_data_node: batch_data,
                   train_labels_node: batch_labels}
      # Run the optimizer to update weights.
      sess.run(optimizer, feed_dict=feed_dict)
      # print some extra information once reach the evaluation frequency
      if step % EVAL_FREQUENCY == 0:
        # fetch some extra nodes' data
        l, lr, predictions = sess.run([loss, learning_rate, train_prediction],
                                      feed_dict=feed_dict)
        elapsed_time = time.time() - start_time
        start_time = time.time()
        print('Step %d (epoch %.2f), %.1f ms' %
              (step, float(step) * BATCH_SIZE / train_size,
               1000 * elapsed_time / EVAL_FREQUENCY))
        print('Minibatch loss: %.3f, learning rate: %.6f' % (l, lr))
        print('Minibatch error: %.1f%%' %
              error_rate(predictions, batch_labels))
        print('Validation error: %.1f%%' % error_rate(
            eval_in_batches(validation_data, sess), validation_labels))
        sys.stdout.flush()
    # Finally print the result!
    test_error = error_rate(eval_in_batches(test_data, sess), test_labels)
    print('Test error: %.1f%%' % test_error)
    if FLAGS.self_test:
      print('test_error', test_error)
      assert test_error == 0.0, 'expected 0.0 test_error, got %.2f' % (
          test_error,)


if __name__ == '__main__':
  parser = argparse.ArgumentParser()
  parser.add_argument(
      '--use_fp16',
      default=False,
      help='Use half floats instead of full floats if True.',
      action='store_true')
  parser.add_argument(
      '--self_test',
      default=False,
      action='store_true',
      help='True if running a self test.')
  FLAGS, unparsed = parser.parse_known_args()
  tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
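Note that in this listing maybe_download only builds the local path and never actually fetches anything, so the four MNIST .gz files must already be sitting in WORK_DIRECTORY (/home/hzh/tf). If you want to prepare the data first, something along the following lines can be run once beforehand. This is only a minimal sketch: SOURCE_URL, WORK_DIRECTORY and the file names are taken from the script above, but the use of urllib.request is my own assumption and is not part of the original tutorial code.

# One-off helper to pull the MNIST archives into WORK_DIRECTORY before
# running convolutional.py. Assumes Python 3's urllib.request is available;
# this is not how the original tutorial fetched the data.
import os
import urllib.request

SOURCE_URL = 'http://yann.lecun.com/exdb/mnist/'
WORK_DIRECTORY = '/home/hzh/tf'   # same directory the training script reads from
FILES = ['train-images-idx3-ubyte.gz', 'train-labels-idx1-ubyte.gz',
         't10k-images-idx3-ubyte.gz', 't10k-labels-idx1-ubyte.gz']

os.makedirs(WORK_DIRECTORY, exist_ok=True)
for name in FILES:
    filepath = os.path.join(WORK_DIRECTORY, name)
    if not os.path.exists(filepath):   # skip archives that are already there
        urllib.request.urlretrieve(SOURCE_URL + name, filepath)
        print('Downloaded', filepath)

Once the files are in place, the example is started with python convolutional.py; adding --use_fp16 switches activations and weights to half precision, and --self_test trains a single epoch on the synthetic data produced by fake_data instead of MNIST.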

 

The repository at that URL also contains many other examples. This is the most complete set of example code: more complete than what is on Google's own site, and more complete and better than the examples in the latest TensorFlow repository on GitHub.

 


