
A Detailed Look at the Read Operations in TensorFlow's CIFAR-10 Tutorial

2020-02-15 21:18:25

Preface

The convolutional neural networks chapter of the official TensorFlow documentation contains an experiment built on the CIFAR-10 image dataset. Assembling the convolutional network itself is not hard, but the cifar10_input.py file took me real effort. Reading it alongside the official documentation I have understood most of it, though a few parts are still unclear; I will flag those as I go, and record here the parts I do understand now.

Analysis

The read operations in cifar10_input.py come down to the following code:

if not eval_data:
  filenames = [os.path.join(data_dir, 'data_batch_%d.bin' % i)
               for i in xrange(1, 6)]
  num_examples_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN
else:
  filenames = [os.path.join(data_dir, 'test_batch.bin')]
  num_examples_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_EVAL
...
filename_queue = tf.train.string_input_producer(filenames)
...
label_bytes = 1  # 2 for CIFAR-100
result.height = 32
result.width = 32
result.depth = 3
image_bytes = result.height * result.width * result.depth
# Every record consists of a label followed by the image, with a
# fixed number of bytes for each.
record_bytes = label_bytes + image_bytes

# Read a record, getting filenames from the filename_queue. No
# header or footer in the CIFAR-10 format, so we leave header_bytes
# and footer_bytes at their default of 0.
reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)
result.key, value = reader.read(filename_queue)
...
if shuffle:
  images, label_batch = tf.train.shuffle_batch(
      [image, label],
      batch_size=batch_size,
      num_threads=num_preprocess_threads,
      capacity=min_queue_examples + 3 * batch_size,
      min_after_dequeue=min_queue_examples)
else:
  images, label_batch = tf.train.batch(
      [image, label],
      batch_size=batch_size,
      num_threads=num_preprocess_threads,
      capacity=min_queue_examples + 3 * batch_size)
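For reference, record_bytes works out to 1 + 32 × 32 × 3 = 3073 bytes per record. reader.read() only returns that record as a raw byte string; in the actual cifar10_input.py the read is immediately followed by a decode step, which (paraphrased from the TF 1.x tutorial; older releases used tf.slice instead of tf.strided_slice) looks roughly like this:

# value is the raw 3073-byte string returned by reader.read().
# Convert it to a flat uint8 vector, split off the label byte, and
# reshape the remaining 3072 bytes from the on-disk [depth, height, width]
# layout to the conventional [height, width, depth].
record = tf.decode_raw(value, tf.uint8)
result.label = tf.cast(
    tf.strided_slice(record, [0], [label_bytes]), tf.int32)
depth_major = tf.reshape(
    tf.strided_slice(record, [label_bytes], [label_bytes + image_bytes]),
    [result.depth, result.height, result.width])
result.uint8image = tf.transpose(depth_major, [1, 2, 0])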

At first I had no idea what this code was for, and the more I read the more confused I got: until then my TensorFlow usage had gone no further than tf.placeholder(), and I had never fed data through TensorFlow's own reading machinery, so the code above was heavy going. Then I came across the following in the How-To section of the official documentation:

Batching

def read_my_file_format(filename_queue):
  reader = tf.SomeReader()
  key, record_string = reader.read(filename_queue)
  example, label = tf.some_decoder(record_string)
  processed_example = some_processing(example)
  return processed_example, label

def input_pipeline(filenames, batch_size, num_epochs=None):
  filename_queue = tf.train.string_input_producer(
      filenames, num_epochs=num_epochs, shuffle=True)
  example, label = read_my_file_format(filename_queue)
  # min_after_dequeue defines how big a buffer we will randomly sample
  #   from -- bigger means better shuffling but slower start up and more
  #   memory used.
  # capacity must be larger than min_after_dequeue and the amount larger
  #   determines the maximum we will prefetch. Recommendation:
  #   min_after_dequeue + (num_threads + a small safety margin) * batch_size
  min_after_dequeue = 10000
  capacity = min_after_dequeue + 3 * batch_size
  example_batch, label_batch = tf.train.shuffle_batch(
      [example, label], batch_size=batch_size, capacity=capacity,
      min_after_dequeue=min_after_dequeue)
  return example_batch, label_batch
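Note that neither snippet runs on its own: tf.train.string_input_producer and tf.train.shuffle_batch only add queue ops to the graph, and those queues sit empty until queue-runner threads are started inside a session. To see the whole mechanism end to end, here is a minimal self-contained sketch I put together for TensorFlow 1.x; the fake fake_batch.bin file and all variable names are mine, not from the tutorial:

import os
import tempfile

import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x

# Write two fake records in the CIFAR-10 binary format:
# 1 label byte followed by 32*32*3 = 3072 image bytes.
record_bytes = 1 + 32 * 32 * 3
path = os.path.join(tempfile.mkdtemp(), 'fake_batch.bin')
with open(path, 'wb') as f:
  f.write(np.random.randint(0, 256, 2 * record_bytes, dtype=np.uint8).tobytes())

filename_queue = tf.train.string_input_producer([path])
reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)
key, value = reader.read(filename_queue)
record = tf.decode_raw(value, tf.uint8)
label = tf.cast(record[0], tf.int32)
image = tf.transpose(tf.reshape(record[1:], [3, 32, 32]),  # depth-major on disk
                     [1, 2, 0])                            # -> [height, width, depth]

# capacity follows the documented recommendation: min_after_dequeue + 3 * batch_size.
images, labels = tf.train.shuffle_batch(
    [image, label], batch_size=2, capacity=4 + 3 * 2, min_after_dequeue=4)

with tf.Session() as sess:
  coord = tf.train.Coordinator()
  threads = tf.train.start_queue_runners(sess=sess, coord=coord)
  print(sess.run(labels))  # without start_queue_runners() this call would hang
  coord.request_stop()
  coord.join(threads)

The point the How-To snippet glosses over is the last block: the whole reading pipeline is driven by the queue-runner threads, which is why sess.run(labels) blocks forever if they are never started.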