Monday, May 15, 2017

Image Classification (Convolutional Neural Networks) Using TensorFlow in Python

dlnd_image_classification GitHub link: Click here

Image Classification

In this project, we'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. We'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. We'll build convolutional, max pooling, dropout, and fully connected layers. At the end, we'll get to see the neural network's predictions on sample images.

Get the Data

Run the following cell to download the CIFAR-10 dataset for Python.
In [2]:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile

cifar10_dataset_folder_path = 'cifar-10-batches-py'

# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
    tar_gz_path = floyd_cifar10_location
else:
    tar_gz_path = 'cifar-10-python.tar.gz'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(tar_gz_path):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            tar_gz_path,
            pbar.hook)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open(tar_gz_path) as tar:
        tar.extractall()


tests.test_folder_path(cifar10_dataset_folder_path)
All files found!

Explore the Data

The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains labels and images belonging to one of the following classes:
  • airplane
  • automobile
  • bird
  • cat
  • deer
  • dog
  • frog
  • horse
  • ship
  • truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
In [3]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper
import numpy as np

# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Stats of batch 1:
Samples: 10000
Label Counts: {0: 1005, 1: 974, 2: 1032, 3: 1016, 4: 999, 5: 937, 6: 1030, 7: 1001, 8: 1025, 9: 981}
First 20 Labels: [6, 9, 9, 4, 1, 1, 2, 7, 8, 3, 4, 7, 7, 2, 9, 9, 9, 3, 2, 6]

Example of Image 5:
Image - Min Value: 0 Max Value: 252
Image - Shape: (32, 32, 3)
Label - Label Id: 1 Name: automobile

Implement Preprocess Functions

Normalize

In the cell below, the normalize function takes in image data, x, and returns it as a normalized NumPy array. The values are in the range of 0 to 1, inclusive. The returned object is the same shape as x.
In [4]:
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    # TODO: Implement Function
    # Min-max scaling: shift by the global minimum, then divide by the range
    min_image, max_image = x.min(), x.max()
    return (x - min_image) / float(max_image - min_image)



tests.test_normalize(normalize)
Tests Passed
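As a quick sanity check (a minimal sketch, assuming the normalize function above and randomly generated data), we can confirm the output range and shape:
In [ ]:
# Hypothetical sanity check on random 8-bit style image data
sample_images = np.random.randint(0, 256, size=(5, 32, 32, 3))
normalized = normalize(sample_images)
print(normalized.min(), normalized.max())  # should print 0.0 and 1.0
print(normalized.shape)                    # should print (5, 32, 32, 3)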

One-hot encode

Just like the previous code cell, we'll be implementing a function for preprocessing. This time, we'll implement the one_hot_encode function. The input, x, is a list of labels. The function returns the labels as a one-hot encoded NumPy array. The possible label values are 0 to 9, and the encoding must be the same for each value across calls to one_hot_encode.
In [5]:
from sklearn import preprocessing

# Fit the binarizer once on all possible labels (0-9) so that every call
# to one_hot_encode returns the same encoding
lb = preprocessing.LabelBinarizer()
lb.fit(list(range(10)))

def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    return lb.transform(x)



tests.test_one_hot_encode(one_hot_encode)
Tests Passed
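A quick usage example (a sketch, assuming the one_hot_encode function above): encoding the labels 0, 3, and 9 should place a single 1 in columns 0, 3, and 9 of a 3x10 array:
In [ ]:
# Hypothetical example: each row is the one-hot vector for one label
print(one_hot_encode([0, 3, 9]))
# [[1 0 0 0 0 0 0 0 0 0]
#  [0 0 0 1 0 0 0 0 0 0]
#  [0 0 0 0 0 0 0 0 0 1]]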

Preprocess all the data and save it

Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
In [6]:
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)

Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
In [7]:
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))

Build the network

For the neural network, we'll build each layer into a function.
Let's begin!

Input

The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. We'll implement the following functions:
  • neural_net_image_input
    • Return a TF Placeholder
    • Set the shape using image_shape with batch size set to None.
    • Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
  • neural_net_label_input
    • Return a TF Placeholder
    • Set the shape using n_classes with batch size set to None.
    • Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
  • neural_net_keep_prob_input
    • Return a TF Placeholder for dropout keep probability.
    • Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allows for a dynamic size.
In [8]:
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    # TODO: Implement Function
    
    return tf.placeholder(tf.float32, shape=[None] + list(image_shape), name="x")


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    # TODO: Implement Function
   
    return tf.placeholder(tf.float32, shape=[None, n_classes], name="y")


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    # TODO: Implement Function
    # keep_prob is a scalar, so no shape needs to be specified
    return tf.placeholder(tf.float32, name="keep_prob")



tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Image Input Tests Passed.
Label Input Tests Passed.
Keep Prob Tests Passed.
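The names matter because, once the graph is saved, the placeholders are recovered by name rather than by Python variable (as done in the Test Model section below). A minimal sketch of the lookup, assuming a placeholder named "x" exists in the default graph after the tests above:
In [ ]:
# The ':0' suffix refers to the first output tensor of the named op
graph = tf.get_default_graph()
x_again = graph.get_tensor_by_name('x:0')
print(x_again)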

Convolution and Max Pooling Layer

Convolutional layers have had a lot of success with images. For this code cell, we'll implement the function conv2d_maxpool to apply convolution then max pooling:
  • Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
  • Apply a convolution to x_tensor using weight and conv_strides.
  • Add bias
  • Add a nonlinear activation to the convolution.
  • Apply Max Pooling using pool_ksize and pool_strides.
In [9]:
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # TODO: Implement Function
    # Weight shape: [filter_height, filter_width, in_channels, out_channels]
    in_channels = x_tensor.get_shape().as_list()[3]
    weights = tf.Variable(tf.truncated_normal(
        [conv_ksize[0], conv_ksize[1], in_channels, conv_num_outputs],
        mean=0.0, stddev=0.1))
    bias = tf.Variable(tf.zeros(conv_num_outputs))

    x = tf.nn.conv2d(x_tensor, weights, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
    x = tf.nn.bias_add(x, bias)
    x = tf.nn.relu(x)

    output = tf.nn.max_pool(
        x,
        ksize=[1, pool_ksize[0], pool_ksize[1], 1],
        strides=[1, pool_strides[0], pool_strides[1], 1],
        padding='SAME')
    return output



tests.test_con_pool(conv2d_maxpool)
Tests Passed
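With 'SAME' padding the output spatial size is ceil(input_size / stride), so a 32x32 input keeps its size through the stride-1 convolution and is halved to 16x16 by the stride-2 pooling. A minimal shape check (a sketch, assuming the conv2d_maxpool function above):
In [ ]:
# Hypothetical shape check: one conv + max-pool block on a CIFAR-sized input
tf.reset_default_graph()
check_x = tf.placeholder(tf.float32, [None, 32, 32, 3])
check_out = conv2d_maxpool(check_x, 16, (5, 5), (1, 1), (3, 3), (2, 2))
print(check_out.get_shape().as_list())  # expect [None, 16, 16, 16]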

Flatten Layer

Here we'll implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should have the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer.
In [10]:
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # TODO: Implement Function
    # Flattened size = height * width * channels
    shape = x_tensor.get_shape().as_list()
    return tf.reshape(x_tensor, [-1, shape[1] * shape[2] * shape[3]])



tests.test_flatten(flatten)
Tests Passed
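Following the shape check above, a (None, 16, 16, 16) activation flattens to 16 * 16 * 16 = 4096 features per image; a quick sketch:
In [ ]:
# Hypothetical check of the flattened size
flat = flatten(tf.placeholder(tf.float32, [None, 16, 16, 16]))
print(flat.get_shape().as_list())  # expect [None, 4096]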

Fully-Connected Layer

Here we'll implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer.
In [11]:
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    
    # TODO: Implement Function
    n_inputs = x_tensor.get_shape().as_list()[1]

    weights = tf.Variable(tf.truncated_normal([n_inputs, num_outputs], mean=0.0, stddev=0.1))
    biases = tf.Variable(tf.zeros(num_outputs))
    c1 = tf.add(tf.matmul(x_tensor, weights), biases)
    c1 = tf.nn.relu(c1)

    return c1



tests.test_fully_conn(fully_conn)
Tests Passed
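For the shortcut option mentioned above, the same layer can be written in one line with the contrib layers package; a minimal sketch, assuming a TF 1.x install where tf.contrib.layers is available (the output layer below would use activation_fn=None instead of ReLU):
In [ ]:
# Alternative shortcut, not what is used above
import tensorflow.contrib.layers as tf_layers

def fully_conn_shortcut(x_tensor, num_outputs):
    # Weights, biases, and the ReLU activation are created internally
    return tf_layers.fully_connected(x_tensor, num_outputs, activation_fn=tf.nn.relu)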

Output Layer

We'll implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). No activation is applied here, since the raw logits feed the softmax cross-entropy loss below. Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
In [12]:
def output(x_tensor, num_outputs):
    """
    Apply a output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    n_inputs = x_tensor.get_shape().as_list()[1]
    weights = tf.Variable(tf.truncated_normal([n_inputs, num_outputs], mean=0.0, stddev=0.1))
    biases = tf.Variable(tf.zeros(num_outputs))
    # No activation here: these are raw logits for the softmax loss
    return tf.add(tf.matmul(x_tensor, weights), biases)



tests.test_output(output)
Tests Passed

Create Convolutional Model

We'll implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. We'll use the layers created above to build this model:
  • Apply 1, 2, or 3 Convolution and Max Pool layers
  • Apply a Flatten Layer
  • Apply 1, 2, or 3 Fully Connected Layers
  • Apply an Output Layer
  • Return the output
  • Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
In [13]:
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that hold dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
    #    Play around with different number of outputs, kernel size and stride
    # Function Definition from Above:
    #    conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    conv = conv2d_maxpool(x,
                           conv_num_outputs=16,
                           conv_ksize=[5,5],
                           conv_strides=[1,1],
                           pool_ksize=[3,3],
                           pool_strides=[2,2])
    
    conv = conv2d_maxpool(conv,
                           conv_num_outputs=32,
                           conv_ksize=[5,5],
                           conv_strides=[1,1],
                           pool_ksize=[3,3],
                           pool_strides=[2,2])
    
#     conv = conv2d_maxpool(conv,
#                             conv_num_outputs=64,
#                             conv_ksize=[5,5],
#                             conv_strides=[1,1],
#                             pool_ksize=[3,3],
#                             pool_strides=[2,2])
    
    # TODO: Apply a Flatten Layer
    # Function Definition from Above:
    #   flatten(x_tensor)
    flattened=flatten(conv)

    # TODO: Apply 1, 2, or 3 Fully Connected Layers
    #    Play around with different number of outputs
    # Function Definition from Above:
    #   fully_conn(x_tensor, num_outputs)
    fully_c= fully_conn(flattened,512)
    fully_c= fully_conn(fully_c,250)
    #fully_c= fully_conn(fully_c,50)
    
    # TODO: Apply an Output Layer
    #    Set this to the number of classes
    # Function Definition from Above:
    #   output(x_tensor, num_outputs)
    fully_c=tf.nn.dropout(fully_c,keep_prob)
    output_cn=output(fully_c, 10)
    
    
    # TODO: return output
    return output_cn




##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)
Neural Network Built!
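Out of curiosity, we can count the trainable parameters in the graph we just built (a sketch, assuming the build cell above has run); most of them sit in the first fully connected layer:
In [ ]:
# Sum the element counts of all trainable variables in the default graph
total_params = sum(np.prod(v.get_shape().as_list()) for v in tf.trainable_variables())
print('Trainable parameters:', total_params)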

Train the Neural Network

Single Optimization

We'll implement the function train_neural_network to do a single optimization step. The optimization should use optimizer to optimize in session with a feed_dict of the following:
  • x for image input
  • y for labels
  • keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
In [14]:
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    # TODO: Implement Function
    session.run(optimizer, feed_dict={
        x: feature_batch,
        y: label_batch,
        keep_prob: keep_probability})


tests.test_train_nn(train_neural_network)
Tests Passed

Show Stats

We'll implement the function print_stats to print the loss and validation accuracy. We'll use the global variables valid_features and valid_labels to calculate validation accuracy, with a keep probability of 1.0 for both the loss and the validation accuracy.
In [15]:
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # TODO: Implement Function
    # Evaluate loss and accuracy on the validation set with keep_prob = 1.0
    loss, valid_acc = session.run(
        [cost, accuracy],
        feed_dict={x: valid_features,
                   y: valid_labels,
                   keep_prob: 1.0})

    print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))

Hyperparameters

We'll tune the following parameters:
  • Set epochs to the number of iterations until the network stops learning or starts overfitting
  • Set batch_size to the highest number that your machine has memory for. Most people use common memory-friendly sizes:
    • 64
    • 128
    • 256
    • ...
  • Set keep_probability to the probability of keeping a node using dropout
In [18]:
# TODO: Tune Parameters
epochs = 75
batch_size = 256
keep_probability = .75

Train on a Single CIFAR-10 Batch

Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
In [20]:
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        batch_i = 1
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)
Checking the Training on a Single Batch...
Epoch  1, CIFAR-10 Batch 1:  Loss:     1.9453 Validation Accuracy: 0.319400
Epoch  2, CIFAR-10 Batch 1:  Loss:     1.6291 Validation Accuracy: 0.410600
Epoch  3, CIFAR-10 Batch 1:  Loss:     1.5037 Validation Accuracy: 0.453200
Epoch  4, CIFAR-10 Batch 1:  Loss:     1.4132 Validation Accuracy: 0.493600
Epoch  5, CIFAR-10 Batch 1:  Loss:     1.3927 Validation Accuracy: 0.514200
Epoch  6, CIFAR-10 Batch 1:  Loss:     1.3337 Validation Accuracy: 0.527400
Epoch  7, CIFAR-10 Batch 1:  Loss:     1.3034 Validation Accuracy: 0.537200
Epoch  8, CIFAR-10 Batch 1:  Loss:     1.3498 Validation Accuracy: 0.525200
Epoch  9, CIFAR-10 Batch 1:  Loss:     1.3351 Validation Accuracy: 0.540600
Epoch 10, CIFAR-10 Batch 1:  Loss:     1.3110 Validation Accuracy: 0.549400
Epoch 11, CIFAR-10 Batch 1:  Loss:     1.3274 Validation Accuracy: 0.541800
Epoch 12, CIFAR-10 Batch 1:  Loss:     1.3446 Validation Accuracy: 0.546400
Epoch 13, CIFAR-10 Batch 1:  Loss:     1.3870 Validation Accuracy: 0.540400
Epoch 14, CIFAR-10 Batch 1:  Loss:     1.4308 Validation Accuracy: 0.541000
Epoch 15, CIFAR-10 Batch 1:  Loss:     1.4158 Validation Accuracy: 0.550000
Epoch 16, CIFAR-10 Batch 1:  Loss:     1.4125 Validation Accuracy: 0.565800
Epoch 17, CIFAR-10 Batch 1:  Loss:     1.4593 Validation Accuracy: 0.560000
Epoch 18, CIFAR-10 Batch 1:  Loss:     1.4545 Validation Accuracy: 0.555800
Epoch 19, CIFAR-10 Batch 1:  Loss:     1.6217 Validation Accuracy: 0.526800
Epoch 20, CIFAR-10 Batch 1:  Loss:     1.6108 Validation Accuracy: 0.543200
Epoch 21, CIFAR-10 Batch 1:  Loss:     1.5684 Validation Accuracy: 0.560000
Epoch 22, CIFAR-10 Batch 1:  Loss:     1.7514 Validation Accuracy: 0.534400
Epoch 23, CIFAR-10 Batch 1:  Loss:     1.7895 Validation Accuracy: 0.540800
Epoch 24, CIFAR-10 Batch 1:  Loss:     1.6858 Validation Accuracy: 0.556400
Epoch 25, CIFAR-10 Batch 1:  Loss:     1.7469 Validation Accuracy: 0.546000
Epoch 26, CIFAR-10 Batch 1:  Loss:     1.8842 Validation Accuracy: 0.552000
Epoch 27, CIFAR-10 Batch 1:  Loss:     1.9133 Validation Accuracy: 0.549800
Epoch 28, CIFAR-10 Batch 1:  Loss:     1.8369 Validation Accuracy: 0.578600
Epoch 29, CIFAR-10 Batch 1:  Loss:     1.9627 Validation Accuracy: 0.580600
Epoch 30, CIFAR-10 Batch 1:  Loss:     2.3553 Validation Accuracy: 0.547600
Epoch 31, CIFAR-10 Batch 1:  Loss:     2.1859 Validation Accuracy: 0.564600
Epoch 32, CIFAR-10 Batch 1:  Loss:     2.1143 Validation Accuracy: 0.574000
Epoch 33, CIFAR-10 Batch 1:  Loss:     2.3696 Validation Accuracy: 0.548600
Epoch 34, CIFAR-10 Batch 1:  Loss:     2.2448 Validation Accuracy: 0.568200
Epoch 35, CIFAR-10 Batch 1:  Loss:     2.3524 Validation Accuracy: 0.555800
Epoch 36, CIFAR-10 Batch 1:  Loss:     2.3916 Validation Accuracy: 0.562600
Epoch 37, CIFAR-10 Batch 1:  Loss:     2.3851 Validation Accuracy: 0.569200
Epoch 38, CIFAR-10 Batch 1:  Loss:     2.4926 Validation Accuracy: 0.555800
Epoch 39, CIFAR-10 Batch 1:  Loss:     2.5082 Validation Accuracy: 0.554000
Epoch 40, CIFAR-10 Batch 1:  Loss:     2.5586 Validation Accuracy: 0.553800
Epoch 41, CIFAR-10 Batch 1:  Loss:     2.5286 Validation Accuracy: 0.570800
Epoch 42, CIFAR-10 Batch 1:  Loss:     2.5486 Validation Accuracy: 0.566600
Epoch 43, CIFAR-10 Batch 1:  Loss:     2.4552 Validation Accuracy: 0.569000
Epoch 44, CIFAR-10 Batch 1:  Loss:     2.6058 Validation Accuracy: 0.552200
Epoch 45, CIFAR-10 Batch 1:  Loss:     2.5882 Validation Accuracy: 0.554000
Epoch 46, CIFAR-10 Batch 1:  Loss:     2.7108 Validation Accuracy: 0.542200
Epoch 47, CIFAR-10 Batch 1:  Loss:     2.5673 Validation Accuracy: 0.564800
Epoch 48, CIFAR-10 Batch 1:  Loss:     2.7328 Validation Accuracy: 0.554400
Epoch 49, CIFAR-10 Batch 1:  Loss:     2.7590 Validation Accuracy: 0.544400
Epoch 50, CIFAR-10 Batch 1:  Loss:     3.0196 Validation Accuracy: 0.534200
Epoch 51, CIFAR-10 Batch 1:  Loss:     2.6753 Validation Accuracy: 0.562600
Epoch 52, CIFAR-10 Batch 1:  Loss:     2.8396 Validation Accuracy: 0.542000
Epoch 53, CIFAR-10 Batch 1:  Loss:     2.9521 Validation Accuracy: 0.541800
Epoch 54, CIFAR-10 Batch 1:  Loss:     2.6771 Validation Accuracy: 0.575800
Epoch 55, CIFAR-10 Batch 1:  Loss:     2.7127 Validation Accuracy: 0.579600
Epoch 56, CIFAR-10 Batch 1:  Loss:     2.7345 Validation Accuracy: 0.585200
Epoch 57, CIFAR-10 Batch 1:  Loss:     2.8376 Validation Accuracy: 0.588800
Epoch 58, CIFAR-10 Batch 1:  Loss:     2.9677 Validation Accuracy: 0.581600
Epoch 59, CIFAR-10 Batch 1:  Loss:     2.9451 Validation Accuracy: 0.586400
Epoch 60, CIFAR-10 Batch 1:  Loss:     2.9684 Validation Accuracy: 0.584000
Epoch 61, CIFAR-10 Batch 1:  Loss:     3.0246 Validation Accuracy: 0.584600
Epoch 62, CIFAR-10 Batch 1:  Loss:     2.9269 Validation Accuracy: 0.584400
Epoch 63, CIFAR-10 Batch 1:  Loss:     3.0627 Validation Accuracy: 0.581600
Epoch 64, CIFAR-10 Batch 1:  Loss:     3.0016 Validation Accuracy: 0.587600
Epoch 65, CIFAR-10 Batch 1:  Loss:     3.0332 Validation Accuracy: 0.583200
Epoch 66, CIFAR-10 Batch 1:  Loss:     3.1667 Validation Accuracy: 0.585600
Epoch 67, CIFAR-10 Batch 1:  Loss:     3.0713 Validation Accuracy: 0.581000
Epoch 68, CIFAR-10 Batch 1:  Loss:     3.1346 Validation Accuracy: 0.580200
Epoch 69, CIFAR-10 Batch 1:  Loss:     3.1963 Validation Accuracy: 0.571200
Epoch 70, CIFAR-10 Batch 1:  Loss:     3.0929 Validation Accuracy: 0.585000
Epoch 71, CIFAR-10 Batch 1:  Loss:     3.2740 Validation Accuracy: 0.583800
Epoch 72, CIFAR-10 Batch 1:  Loss:     3.1692 Validation Accuracy: 0.585200
Epoch 73, CIFAR-10 Batch 1:  Loss:     3.2710 Validation Accuracy: 0.577600
Epoch 74, CIFAR-10 Batch 1:  Loss:     3.1355 Validation Accuracy: 0.584000
Epoch 75, CIFAR-10 Batch 1:  Loss:     3.1333 Validation Accuracy: 0.580400

Fully Train the Model

Now that we've got a good accuracy with a single CIFAR-10 batch, let's try it with all five batches.
In [21]:
save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)
            
    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)
Training...
Epoch  1, CIFAR-10 Batch 1:  Loss:     1.8815 Validation Accuracy: 0.334600
Epoch  1, CIFAR-10 Batch 2:  Loss:     1.6022 Validation Accuracy: 0.410600
Epoch  1, CIFAR-10 Batch 3:  Loss:     1.5636 Validation Accuracy: 0.427200
Epoch  1, CIFAR-10 Batch 4:  Loss:     1.4260 Validation Accuracy: 0.480200
Epoch  1, CIFAR-10 Batch 5:  Loss:     1.3706 Validation Accuracy: 0.504400
Epoch  2, CIFAR-10 Batch 1:  Loss:     1.3705 Validation Accuracy: 0.503200
Epoch  2, CIFAR-10 Batch 2:  Loss:     1.2763 Validation Accuracy: 0.545800
Epoch  2, CIFAR-10 Batch 3:  Loss:     1.3518 Validation Accuracy: 0.512800
Epoch  2, CIFAR-10 Batch 4:  Loss:     1.2208 Validation Accuracy: 0.570400
Epoch  2, CIFAR-10 Batch 5:  Loss:     1.2468 Validation Accuracy: 0.557000
Epoch  3, CIFAR-10 Batch 1:  Loss:     1.2572 Validation Accuracy: 0.555400
Epoch  3, CIFAR-10 Batch 2:  Loss:     1.2234 Validation Accuracy: 0.574400
Epoch  3, CIFAR-10 Batch 3:  Loss:     1.1662 Validation Accuracy: 0.590600
Epoch  3, CIFAR-10 Batch 4:  Loss:     1.1412 Validation Accuracy: 0.596600
Epoch  3, CIFAR-10 Batch 5:  Loss:     1.1484 Validation Accuracy: 0.599800
Epoch  4, CIFAR-10 Batch 1:  Loss:     1.1268 Validation Accuracy: 0.600600
Epoch  4, CIFAR-10 Batch 2:  Loss:     1.1021 Validation Accuracy: 0.618600
Epoch  4, CIFAR-10 Batch 3:  Loss:     1.0746 Validation Accuracy: 0.619000
Epoch  4, CIFAR-10 Batch 4:  Loss:     1.1201 Validation Accuracy: 0.616200
Epoch  4, CIFAR-10 Batch 5:  Loss:     1.0672 Validation Accuracy: 0.626800
Epoch  5, CIFAR-10 Batch 1:  Loss:     1.1006 Validation Accuracy: 0.617600
Epoch  5, CIFAR-10 Batch 2:  Loss:     1.0743 Validation Accuracy: 0.626200
Epoch  5, CIFAR-10 Batch 3:  Loss:     1.0350 Validation Accuracy: 0.638200
Epoch  5, CIFAR-10 Batch 4:  Loss:     1.1137 Validation Accuracy: 0.616800
Epoch  5, CIFAR-10 Batch 5:  Loss:     1.0008 Validation Accuracy: 0.652800
Epoch  6, CIFAR-10 Batch 1:  Loss:     1.0510 Validation Accuracy: 0.636200
Epoch  6, CIFAR-10 Batch 2:  Loss:     1.0332 Validation Accuracy: 0.645600
Epoch  6, CIFAR-10 Batch 3:  Loss:     1.0279 Validation Accuracy: 0.649400
Epoch  6, CIFAR-10 Batch 4:  Loss:     1.0645 Validation Accuracy: 0.637400
Epoch  6, CIFAR-10 Batch 5:  Loss:     1.0421 Validation Accuracy: 0.641600
Epoch  7, CIFAR-10 Batch 1:  Loss:     1.0115 Validation Accuracy: 0.654200
Epoch  7, CIFAR-10 Batch 2:  Loss:     1.0392 Validation Accuracy: 0.644800
Epoch  7, CIFAR-10 Batch 3:  Loss:     1.0306 Validation Accuracy: 0.651600
Epoch  7, CIFAR-10 Batch 4:  Loss:     1.0606 Validation Accuracy: 0.643400
Epoch  7, CIFAR-10 Batch 5:  Loss:     1.0284 Validation Accuracy: 0.657800
Epoch  8, CIFAR-10 Batch 1:  Loss:     1.0259 Validation Accuracy: 0.658600
Epoch  8, CIFAR-10 Batch 2:  Loss:     1.0513 Validation Accuracy: 0.651800
Epoch  8, CIFAR-10 Batch 3:  Loss:     1.0068 Validation Accuracy: 0.668800
Epoch  8, CIFAR-10 Batch 4:  Loss:     1.0267 Validation Accuracy: 0.653200
Epoch  8, CIFAR-10 Batch 5:  Loss:     1.0319 Validation Accuracy: 0.660400
Epoch  9, CIFAR-10 Batch 1:  Loss:     1.0865 Validation Accuracy: 0.642200
Epoch  9, CIFAR-10 Batch 2:  Loss:     1.0563 Validation Accuracy: 0.661400
Epoch  9, CIFAR-10 Batch 3:  Loss:     1.0288 Validation Accuracy: 0.656600
Epoch  9, CIFAR-10 Batch 4:  Loss:     1.0281 Validation Accuracy: 0.668400
Epoch  9, CIFAR-10 Batch 5:  Loss:     1.0338 Validation Accuracy: 0.669000
Epoch 10, CIFAR-10 Batch 1:  Loss:     1.1744 Validation Accuracy: 0.630400
Epoch 10, CIFAR-10 Batch 2:  Loss:     1.0790 Validation Accuracy: 0.659600
Epoch 10, CIFAR-10 Batch 3:  Loss:     1.2478 Validation Accuracy: 0.596400
Epoch 10, CIFAR-10 Batch 4:  Loss:     1.0905 Validation Accuracy: 0.656800
Epoch 10, CIFAR-10 Batch 5:  Loss:     1.0399 Validation Accuracy: 0.669600
Epoch 11, CIFAR-10 Batch 1:  Loss:     1.0691 Validation Accuracy: 0.658600
Epoch 11, CIFAR-10 Batch 2:  Loss:     1.0974 Validation Accuracy: 0.657800
Epoch 11, CIFAR-10 Batch 3:  Loss:     1.1919 Validation Accuracy: 0.633200
Epoch 11, CIFAR-10 Batch 4:  Loss:     1.1101 Validation Accuracy: 0.653000
Epoch 11, CIFAR-10 Batch 5:  Loss:     1.1639 Validation Accuracy: 0.649600
Epoch 12, CIFAR-10 Batch 1:  Loss:     1.0961 Validation Accuracy: 0.647600
Epoch 12, CIFAR-10 Batch 2:  Loss:     1.1924 Validation Accuracy: 0.645400
Epoch 12, CIFAR-10 Batch 3:  Loss:     1.1331 Validation Accuracy: 0.661200
Epoch 12, CIFAR-10 Batch 4:  Loss:     1.1520 Validation Accuracy: 0.648000
Epoch 12, CIFAR-10 Batch 5:  Loss:     1.2593 Validation Accuracy: 0.634400
Epoch 13, CIFAR-10 Batch 1:  Loss:     1.3286 Validation Accuracy: 0.619200
Epoch 13, CIFAR-10 Batch 2:  Loss:     1.2352 Validation Accuracy: 0.638400
Epoch 13, CIFAR-10 Batch 3:  Loss:     1.1805 Validation Accuracy: 0.656800
Epoch 13, CIFAR-10 Batch 4:  Loss:     1.2211 Validation Accuracy: 0.654600
Epoch 13, CIFAR-10 Batch 5:  Loss:     1.3316 Validation Accuracy: 0.643400
Epoch 14, CIFAR-10 Batch 1:  Loss:     1.2525 Validation Accuracy: 0.641000
Epoch 14, CIFAR-10 Batch 2:  Loss:     1.3460 Validation Accuracy: 0.631200
Epoch 14, CIFAR-10 Batch 3:  Loss:     1.3029 Validation Accuracy: 0.640000
Epoch 14, CIFAR-10 Batch 4:  Loss:     1.1861 Validation Accuracy: 0.660200
Epoch 14, CIFAR-10 Batch 5:  Loss:     1.3200 Validation Accuracy: 0.650200
Epoch 15, CIFAR-10 Batch 1:  Loss:     1.3201 Validation Accuracy: 0.638200
Epoch 15, CIFAR-10 Batch 2:  Loss:     1.3726 Validation Accuracy: 0.637400
Epoch 15, CIFAR-10 Batch 3:  Loss:     1.2654 Validation Accuracy: 0.649000
Epoch 15, CIFAR-10 Batch 4:  Loss:     1.3279 Validation Accuracy: 0.638200
Epoch 15, CIFAR-10 Batch 5:  Loss:     1.4150 Validation Accuracy: 0.646000
Epoch 16, CIFAR-10 Batch 1:  Loss:     1.3063 Validation Accuracy: 0.655200
Epoch 16, CIFAR-10 Batch 2:  Loss:     1.2876 Validation Accuracy: 0.650600
Epoch 16, CIFAR-10 Batch 3:  Loss:     1.2099 Validation Accuracy: 0.667400
Epoch 16, CIFAR-10 Batch 4:  Loss:     1.4229 Validation Accuracy: 0.638800
Epoch 16, CIFAR-10 Batch 5:  Loss:     1.4131 Validation Accuracy: 0.648400
Epoch 17, CIFAR-10 Batch 1:  Loss:     1.4170 Validation Accuracy: 0.660000
Epoch 17, CIFAR-10 Batch 2:  Loss:     1.3962 Validation Accuracy: 0.642800
Epoch 17, CIFAR-10 Batch 3:  Loss:     1.2833 Validation Accuracy: 0.654400
Epoch 17, CIFAR-10 Batch 4:  Loss:     1.3916 Validation Accuracy: 0.649600
Epoch 17, CIFAR-10 Batch 5:  Loss:     1.4361 Validation Accuracy: 0.648000
Epoch 18, CIFAR-10 Batch 1:  Loss:     1.4687 Validation Accuracy: 0.652200
Epoch 18, CIFAR-10 Batch 2:  Loss:     1.4597 Validation Accuracy: 0.640600
Epoch 18, CIFAR-10 Batch 3:  Loss:     1.3905 Validation Accuracy: 0.645600
Epoch 18, CIFAR-10 Batch 4:  Loss:     1.3993 Validation Accuracy: 0.657400
Epoch 18, CIFAR-10 Batch 5:  Loss:     1.4195 Validation Accuracy: 0.662400
Epoch 19, CIFAR-10 Batch 1:  Loss:     1.5199 Validation Accuracy: 0.656600
Epoch 19, CIFAR-10 Batch 2:  Loss:     1.5435 Validation Accuracy: 0.631200
Epoch 19, CIFAR-10 Batch 3:  Loss:     1.4847 Validation Accuracy: 0.641200
Epoch 19, CIFAR-10 Batch 4:  Loss:     1.4084 Validation Accuracy: 0.664200
Epoch 19, CIFAR-10 Batch 5:  Loss:     1.5331 Validation Accuracy: 0.661600
Epoch 20, CIFAR-10 Batch 1:  Loss:     1.5942 Validation Accuracy: 0.646600
Epoch 20, CIFAR-10 Batch 2:  Loss:     1.6731 Validation Accuracy: 0.625400
Epoch 20, CIFAR-10 Batch 3:  Loss:     1.5722 Validation Accuracy: 0.640200
Epoch 20, CIFAR-10 Batch 4:  Loss:     1.5142 Validation Accuracy: 0.659600
Epoch 20, CIFAR-10 Batch 5:  Loss:     1.6341 Validation Accuracy: 0.653600
Epoch 21, CIFAR-10 Batch 1:  Loss:     1.6401 Validation Accuracy: 0.640800
Epoch 21, CIFAR-10 Batch 2:  Loss:     1.7011 Validation Accuracy: 0.639400
Epoch 21, CIFAR-10 Batch 3:  Loss:     1.6529 Validation Accuracy: 0.645400
Epoch 21, CIFAR-10 Batch 4:  Loss:     1.6274 Validation Accuracy: 0.661200
Epoch 21, CIFAR-10 Batch 5:  Loss:     1.7441 Validation Accuracy: 0.653600
Epoch 22, CIFAR-10 Batch 1:  Loss:     1.6736 Validation Accuracy: 0.652200
Epoch 22, CIFAR-10 Batch 2:  Loss:     1.6464 Validation Accuracy: 0.657800
Epoch 22, CIFAR-10 Batch 3:  Loss:     1.7144 Validation Accuracy: 0.646400
Epoch 22, CIFAR-10 Batch 4:  Loss:     1.6894 Validation Accuracy: 0.658400
Epoch 22, CIFAR-10 Batch 5:  Loss:     1.8249 Validation Accuracy: 0.660600
Epoch 23, CIFAR-10 Batch 1:  Loss:     1.6878 Validation Accuracy: 0.653600
Epoch 23, CIFAR-10 Batch 2:  Loss:     1.6565 Validation Accuracy: 0.651000
Epoch 23, CIFAR-10 Batch 3:  Loss:     1.7832 Validation Accuracy: 0.638800
Epoch 23, CIFAR-10 Batch 4:  Loss:     1.7259 Validation Accuracy: 0.660600
Epoch 23, CIFAR-10 Batch 5:  Loss:     1.8615 Validation Accuracy: 0.660000
Epoch 24, CIFAR-10 Batch 1:  Loss:     1.9457 Validation Accuracy: 0.640400
Epoch 24, CIFAR-10 Batch 2:  Loss:     1.7092 Validation Accuracy: 0.653000
Epoch 24, CIFAR-10 Batch 3:  Loss:     1.7725 Validation Accuracy: 0.653400
Epoch 24, CIFAR-10 Batch 4:  Loss:     1.8744 Validation Accuracy: 0.646600
Epoch 24, CIFAR-10 Batch 5:  Loss:     1.8818 Validation Accuracy: 0.661600
Epoch 25, CIFAR-10 Batch 1:  Loss:     1.8937 Validation Accuracy: 0.655600
Epoch 25, CIFAR-10 Batch 2:  Loss:     1.6906 Validation Accuracy: 0.659800
Epoch 25, CIFAR-10 Batch 3:  Loss:     1.9218 Validation Accuracy: 0.640200
Epoch 25, CIFAR-10 Batch 4:  Loss:     1.8197 Validation Accuracy: 0.652400
Epoch 25, CIFAR-10 Batch 5:  Loss:     2.0058 Validation Accuracy: 0.649200
Epoch 26, CIFAR-10 Batch 1:  Loss:     1.9785 Validation Accuracy: 0.642400
Epoch 26, CIFAR-10 Batch 2:  Loss:     1.7688 Validation Accuracy: 0.659200
Epoch 26, CIFAR-10 Batch 3:  Loss:     1.8771 Validation Accuracy: 0.652800
Epoch 26, CIFAR-10 Batch 4:  Loss:     2.0424 Validation Accuracy: 0.637000
Epoch 26, CIFAR-10 Batch 5:  Loss:     2.1618 Validation Accuracy: 0.644400
Epoch 27, CIFAR-10 Batch 1:  Loss:     2.0781 Validation Accuracy: 0.646400
Epoch 27, CIFAR-10 Batch 2:  Loss:     1.9189 Validation Accuracy: 0.649400
Epoch 27, CIFAR-10 Batch 3:  Loss:     1.9000 Validation Accuracy: 0.663800
Epoch 27, CIFAR-10 Batch 4:  Loss:     2.0840 Validation Accuracy: 0.643400
Epoch 27, CIFAR-10 Batch 5:  Loss:     2.2043 Validation Accuracy: 0.652800
Epoch 28, CIFAR-10 Batch 1:  Loss:     1.9694 Validation Accuracy: 0.663000
Epoch 28, CIFAR-10 Batch 2:  Loss:     1.8827 Validation Accuracy: 0.665400
Epoch 28, CIFAR-10 Batch 3:  Loss:     1.9725 Validation Accuracy: 0.659200
Epoch 28, CIFAR-10 Batch 4:  Loss:     2.2781 Validation Accuracy: 0.630000
Epoch 28, CIFAR-10 Batch 5:  Loss:     2.1931 Validation Accuracy: 0.648600
Epoch 29, CIFAR-10 Batch 1:  Loss:     2.0282 Validation Accuracy: 0.655000
Epoch 29, CIFAR-10 Batch 2:  Loss:     1.9907 Validation Accuracy: 0.663400
Epoch 29, CIFAR-10 Batch 3:  Loss:     2.0202 Validation Accuracy: 0.656600
Epoch 29, CIFAR-10 Batch 4:  Loss:     2.3301 Validation Accuracy: 0.631600
Epoch 29, CIFAR-10 Batch 5:  Loss:     2.2136 Validation Accuracy: 0.646600
Epoch 30, CIFAR-10 Batch 1:  Loss:     2.0190 Validation Accuracy: 0.658800
Epoch 30, CIFAR-10 Batch 2:  Loss:     2.1072 Validation Accuracy: 0.656000
Epoch 30, CIFAR-10 Batch 3:  Loss:     2.0147 Validation Accuracy: 0.661600
Epoch 30, CIFAR-10 Batch 4:  Loss:     2.1953 Validation Accuracy: 0.653200
Epoch 30, CIFAR-10 Batch 5:  Loss:     2.1944 Validation Accuracy: 0.650800
Epoch 31, CIFAR-10 Batch 1:  Loss:     2.0276 Validation Accuracy: 0.653800
Epoch 31, CIFAR-10 Batch 2:  Loss:     2.0050 Validation Accuracy: 0.662000
Epoch 31, CIFAR-10 Batch 3:  Loss:     2.0805 Validation Accuracy: 0.654800
Epoch 31, CIFAR-10 Batch 4:  Loss:     2.1648 Validation Accuracy: 0.664200
Epoch 31, CIFAR-10 Batch 5:  Loss:     2.2018 Validation Accuracy: 0.653600
Epoch 32, CIFAR-10 Batch 1:  Loss:     2.1870 Validation Accuracy: 0.653600
Epoch 32, CIFAR-10 Batch 2:  Loss:     2.1730 Validation Accuracy: 0.654600
Epoch 32, CIFAR-10 Batch 3:  Loss:     2.1745 Validation Accuracy: 0.655400
Epoch 32, CIFAR-10 Batch 4:  Loss:     2.1335 Validation Accuracy: 0.663600
Epoch 32, CIFAR-10 Batch 5:  Loss:     2.2857 Validation Accuracy: 0.659400
Epoch 33, CIFAR-10 Batch 1:  Loss:     2.1886 Validation Accuracy: 0.664800
Epoch 33, CIFAR-10 Batch 2:  Loss:     2.3599 Validation Accuracy: 0.635800
Epoch 33, CIFAR-10 Batch 3:  Loss:     2.1913 Validation Accuracy: 0.662000
Epoch 33, CIFAR-10 Batch 4:  Loss:     2.3116 Validation Accuracy: 0.666200
Epoch 33, CIFAR-10 Batch 5:  Loss:     2.4042 Validation Accuracy: 0.650800
Epoch 34, CIFAR-10 Batch 1:  Loss:     2.3799 Validation Accuracy: 0.660000
Epoch 34, CIFAR-10 Batch 2:  Loss:     2.4974 Validation Accuracy: 0.635800
Epoch 34, CIFAR-10 Batch 3:  Loss:     2.2989 Validation Accuracy: 0.654000
Epoch 34, CIFAR-10 Batch 4:  Loss:     2.2947 Validation Accuracy: 0.665000
Epoch 34, CIFAR-10 Batch 5:  Loss:     2.5643 Validation Accuracy: 0.646600
Epoch 35, CIFAR-10 Batch 1:  Loss:     2.5984 Validation Accuracy: 0.639600
Epoch 35, CIFAR-10 Batch 2:  Loss:     2.3430 Validation Accuracy: 0.652400
Epoch 35, CIFAR-10 Batch 3:  Loss:     2.2577 Validation Accuracy: 0.662800
Epoch 35, CIFAR-10 Batch 4:  Loss:     2.2925 Validation Accuracy: 0.663000
Epoch 35, CIFAR-10 Batch 5:  Loss:     2.4469 Validation Accuracy: 0.661600
Epoch 36, CIFAR-10 Batch 1:  Loss:     2.3665 Validation Accuracy: 0.654600
Epoch 36, CIFAR-10 Batch 2:  Loss:     2.3603 Validation Accuracy: 0.643200
Epoch 36, CIFAR-10 Batch 3:  Loss:     2.3291 Validation Accuracy: 0.668400
Epoch 36, CIFAR-10 Batch 4:  Loss:     2.3486 Validation Accuracy: 0.659400
Epoch 36, CIFAR-10 Batch 5:  Loss:     2.4837 Validation Accuracy: 0.648800
Epoch 37, CIFAR-10 Batch 1:  Loss:     2.2490 Validation Accuracy: 0.655200
Epoch 37, CIFAR-10 Batch 2:  Loss:     2.2644 Validation Accuracy: 0.652800
Epoch 37, CIFAR-10 Batch 3:  Loss:     2.3491 Validation Accuracy: 0.668000
Epoch 37, CIFAR-10 Batch 4:  Loss:     2.3102 Validation Accuracy: 0.663600
Epoch 37, CIFAR-10 Batch 5:  Loss:     2.4621 Validation Accuracy: 0.659400
Epoch 38, CIFAR-10 Batch 1:  Loss:     2.4700 Validation Accuracy: 0.651200
Epoch 38, CIFAR-10 Batch 2:  Loss:     2.2931 Validation Accuracy: 0.656600
Epoch 38, CIFAR-10 Batch 3:  Loss:     2.2832 Validation Accuracy: 0.662000
Epoch 38, CIFAR-10 Batch 4:  Loss:     2.4437 Validation Accuracy: 0.652200
Epoch 38, CIFAR-10 Batch 5:  Loss:     2.3971 Validation Accuracy: 0.662800
Epoch 39, CIFAR-10 Batch 1:  Loss:     2.6420 Validation Accuracy: 0.642400
Epoch 39, CIFAR-10 Batch 2:  Loss:     2.5254 Validation Accuracy: 0.658000
Epoch 39, CIFAR-10 Batch 3:  Loss:     2.4795 Validation Accuracy: 0.661200
Epoch 39, CIFAR-10 Batch 4:  Loss:     2.3491 Validation Accuracy: 0.662400
Epoch 39, CIFAR-10 Batch 5:  Loss:     2.5571 Validation Accuracy: 0.658400
Epoch 40, CIFAR-10 Batch 1:  Loss:     2.4768 Validation Accuracy: 0.667600
Epoch 40, CIFAR-10 Batch 2:  Loss:     2.5479 Validation Accuracy: 0.659800
Epoch 40, CIFAR-10 Batch 3:  Loss:     2.3763 Validation Accuracy: 0.669200
Epoch 40, CIFAR-10 Batch 4:  Loss:     2.5028 Validation Accuracy: 0.664000
Epoch 40, CIFAR-10 Batch 5:  Loss:     2.5618 Validation Accuracy: 0.649800
Epoch 41, CIFAR-10 Batch 1:  Loss:     2.7137 Validation Accuracy: 0.646200
Epoch 41, CIFAR-10 Batch 2:  Loss:     2.4016 Validation Accuracy: 0.664000
Epoch 41, CIFAR-10 Batch 3:  Loss:     2.4315 Validation Accuracy: 0.670800
Epoch 41, CIFAR-10 Batch 4:  Loss:     2.4832 Validation Accuracy: 0.667400
Epoch 41, CIFAR-10 Batch 5:  Loss:     2.8092 Validation Accuracy: 0.641200
Epoch 42, CIFAR-10 Batch 1:  Loss:     2.6313 Validation Accuracy: 0.652000
Epoch 42, CIFAR-10 Batch 2:  Loss:     2.4120 Validation Accuracy: 0.664400
Epoch 42, CIFAR-10 Batch 3:  Loss:     2.5090 Validation Accuracy: 0.671200
Epoch 42, CIFAR-10 Batch 4:  Loss:     2.6039 Validation Accuracy: 0.650800
Epoch 42, CIFAR-10 Batch 5:  Loss:     2.5727 Validation Accuracy: 0.664200
Epoch 43, CIFAR-10 Batch 1:  Loss:     2.5751 Validation Accuracy: 0.662400
Epoch 43, CIFAR-10 Batch 2:  Loss:     2.5084 Validation Accuracy: 0.664800
Epoch 43, CIFAR-10 Batch 3:  Loss:     2.5819 Validation Accuracy: 0.655200
Epoch 43, CIFAR-10 Batch 4:  Loss:     2.5561 Validation Accuracy: 0.654800
Epoch 43, CIFAR-10 Batch 5:  Loss:     2.8008 Validation Accuracy: 0.653200
Epoch 44, CIFAR-10 Batch 1:  Loss:     2.6466 Validation Accuracy: 0.662800
Epoch 44, CIFAR-10 Batch 2:  Loss:     2.6579 Validation Accuracy: 0.660200
Epoch 44, CIFAR-10 Batch 3:  Loss:     2.5565 Validation Accuracy: 0.662800
Epoch 44, CIFAR-10 Batch 4:  Loss:     2.7364 Validation Accuracy: 0.658000
Epoch 44, CIFAR-10 Batch 5:  Loss:     2.6798 Validation Accuracy: 0.654800
Epoch 45, CIFAR-10 Batch 1:  Loss:     2.5869 Validation Accuracy: 0.663800
Epoch 45, CIFAR-10 Batch 2:  Loss:     2.6896 Validation Accuracy: 0.666400
Epoch 45, CIFAR-10 Batch 3:  Loss:     2.6745 Validation Accuracy: 0.661200
Epoch 45, CIFAR-10 Batch 4:  Loss:     2.5940 Validation Accuracy: 0.659000
Epoch 45, CIFAR-10 Batch 5:  Loss:     2.7350 Validation Accuracy: 0.656600
Epoch 46, CIFAR-10 Batch 1:  Loss:     2.7272 Validation Accuracy: 0.665000
Epoch 46, CIFAR-10 Batch 2:  Loss:     2.7539 Validation Accuracy: 0.666600
Epoch 46, CIFAR-10 Batch 3:  Loss:     2.6525 Validation Accuracy: 0.666200
Epoch 46, CIFAR-10 Batch 4:  Loss:     2.5551 Validation Accuracy: 0.664400
Epoch 46, CIFAR-10 Batch 5:  Loss:     2.6936 Validation Accuracy: 0.659000
Epoch 47, CIFAR-10 Batch 1:  Loss:     2.7330 Validation Accuracy: 0.663400
Epoch 47, CIFAR-10 Batch 2:  Loss:     2.6693 Validation Accuracy: 0.671200
Epoch 47, CIFAR-10 Batch 3:  Loss:     2.7992 Validation Accuracy: 0.652400
Epoch 47, CIFAR-10 Batch 4:  Loss:     2.6865 Validation Accuracy: 0.658600
Epoch 47, CIFAR-10 Batch 5:  Loss:     2.9376 Validation Accuracy: 0.653000
Epoch 48, CIFAR-10 Batch 1:  Loss:     2.6797 Validation Accuracy: 0.670400
Epoch 48, CIFAR-10 Batch 2:  Loss:     2.8669 Validation Accuracy: 0.660000
Epoch 48, CIFAR-10 Batch 3:  Loss:     2.7645 Validation Accuracy: 0.660200
Epoch 48, CIFAR-10 Batch 4:  Loss:     2.7294 Validation Accuracy: 0.664800
Epoch 48, CIFAR-10 Batch 5:  Loss:     2.8857 Validation Accuracy: 0.660000
Epoch 49, CIFAR-10 Batch 1:  Loss:     2.7303 Validation Accuracy: 0.672200
Epoch 49, CIFAR-10 Batch 2:  Loss:     2.9093 Validation Accuracy: 0.665600
Epoch 49, CIFAR-10 Batch 3:  Loss:     2.8915 Validation Accuracy: 0.654000
Epoch 49, CIFAR-10 Batch 4:  Loss:     2.8573 Validation Accuracy: 0.650600
Epoch 49, CIFAR-10 Batch 5:  Loss:     2.9880 Validation Accuracy: 0.647000
Epoch 50, CIFAR-10 Batch 1:  Loss:     2.9594 Validation Accuracy: 0.651200
Epoch 50, CIFAR-10 Batch 2:  Loss:     3.1068 Validation Accuracy: 0.660800
Epoch 50, CIFAR-10 Batch 3:  Loss:     2.8510 Validation Accuracy: 0.655000
Epoch 50, CIFAR-10 Batch 4:  Loss:     2.7723 Validation Accuracy: 0.659400
Epoch 50, CIFAR-10 Batch 5:  Loss:     2.9452 Validation Accuracy: 0.651800
Epoch 51, CIFAR-10 Batch 1:  Loss:     2.7302 Validation Accuracy: 0.671400
Epoch 51, CIFAR-10 Batch 2:  Loss:     2.9305 Validation Accuracy: 0.660200
Epoch 51, CIFAR-10 Batch 3:  Loss:     2.7941 Validation Accuracy: 0.656000
Epoch 51, CIFAR-10 Batch 4:  Loss:     2.9361 Validation Accuracy: 0.636400
Epoch 51, CIFAR-10 Batch 5:  Loss:     2.9910 Validation Accuracy: 0.650000
Epoch 52, CIFAR-10 Batch 1:  Loss:     2.7431 Validation Accuracy: 0.662400
Epoch 52, CIFAR-10 Batch 2:  Loss:     2.9393 Validation Accuracy: 0.663600
Epoch 52, CIFAR-10 Batch 3:  Loss:     2.6550 Validation Accuracy: 0.668000
Epoch 52, CIFAR-10 Batch 4:  Loss:     2.8660 Validation Accuracy: 0.666200
Epoch 52, CIFAR-10 Batch 5:  Loss:     2.8893 Validation Accuracy: 0.655400
Epoch 53, CIFAR-10 Batch 1:  Loss:     2.6200 Validation Accuracy: 0.674800
Epoch 53, CIFAR-10 Batch 2:  Loss:     2.9225 Validation Accuracy: 0.659400
Epoch 53, CIFAR-10 Batch 3:  Loss:     2.6860 Validation Accuracy: 0.672800
Epoch 53, CIFAR-10 Batch 4:  Loss:     2.9434 Validation Accuracy: 0.663400
Epoch 53, CIFAR-10 Batch 5:  Loss:     2.9732 Validation Accuracy: 0.656600
Epoch 54, CIFAR-10 Batch 1:  Loss:     2.8411 Validation Accuracy: 0.670600
Epoch 54, CIFAR-10 Batch 2:  Loss:     2.8522 Validation Accuracy: 0.657200
Epoch 54, CIFAR-10 Batch 3:  Loss:     2.8904 Validation Accuracy: 0.657200
Epoch 54, CIFAR-10 Batch 4:  Loss:     2.9270 Validation Accuracy: 0.657000
Epoch 54, CIFAR-10 Batch 5:  Loss:     2.9973 Validation Accuracy: 0.646800
Epoch 55, CIFAR-10 Batch 1:  Loss:     2.9224 Validation Accuracy: 0.672800
Epoch 55, CIFAR-10 Batch 2:  Loss:     2.9168 Validation Accuracy: 0.666200
Epoch 55, CIFAR-10 Batch 3:  Loss:     2.7214 Validation Accuracy: 0.668200
Epoch 55, CIFAR-10 Batch 4:  Loss:     2.8528 Validation Accuracy: 0.653400
Epoch 55, CIFAR-10 Batch 5:  Loss:     2.9433 Validation Accuracy: 0.660800
Epoch 56, CIFAR-10 Batch 1:  Loss:     3.0369 Validation Accuracy: 0.673800
Epoch 56, CIFAR-10 Batch 2:  Loss:     3.0486 Validation Accuracy: 0.657200
Epoch 56, CIFAR-10 Batch 3:  Loss:     2.8622 Validation Accuracy: 0.665800
Epoch 56, CIFAR-10 Batch 4:  Loss:     2.9411 Validation Accuracy: 0.657400
Epoch 56, CIFAR-10 Batch 5:  Loss:     2.8441 Validation Accuracy: 0.665600
Epoch 57, CIFAR-10 Batch 1:  Loss:     2.9858 Validation Accuracy: 0.667600
Epoch 57, CIFAR-10 Batch 2:  Loss:     2.8920 Validation Accuracy: 0.658400
Epoch 57, CIFAR-10 Batch 3:  Loss:     2.8757 Validation Accuracy: 0.652600
Epoch 57, CIFAR-10 Batch 4:  Loss:     3.0774 Validation Accuracy: 0.657400
Epoch 57, CIFAR-10 Batch 5:  Loss:     2.8773 Validation Accuracy: 0.667000
Epoch 58, CIFAR-10 Batch 1:  Loss:     2.9626 Validation Accuracy: 0.663600
Epoch 58, CIFAR-10 Batch 2:  Loss:     3.0485 Validation Accuracy: 0.662200
Epoch 58, CIFAR-10 Batch 3:  Loss:     3.0633 Validation Accuracy: 0.656600
Epoch 58, CIFAR-10 Batch 4:  Loss:     3.0329 Validation Accuracy: 0.654600
Epoch 58, CIFAR-10 Batch 5:  Loss:     2.9582 Validation Accuracy: 0.663000
Epoch 59, CIFAR-10 Batch 1:  Loss:     2.9781 Validation Accuracy: 0.663800
Epoch 59, CIFAR-10 Batch 2:  Loss:     3.0042 Validation Accuracy: 0.670200
Epoch 59, CIFAR-10 Batch 3:  Loss:     2.9790 Validation Accuracy: 0.651200
Epoch 59, CIFAR-10 Batch 4:  Loss:     2.9076 Validation Accuracy: 0.657400
Epoch 59, CIFAR-10 Batch 5:  Loss:     3.0568 Validation Accuracy: 0.655800
Epoch 60, CIFAR-10 Batch 1:  Loss:     2.9538 Validation Accuracy: 0.666600
Epoch 60, CIFAR-10 Batch 2:  Loss:     2.9939 Validation Accuracy: 0.659600
Epoch 60, CIFAR-10 Batch 3:  Loss:     3.2029 Validation Accuracy: 0.648800
Epoch 60, CIFAR-10 Batch 4:  Loss:     3.0693 Validation Accuracy: 0.653400
Epoch 60, CIFAR-10 Batch 5:  Loss:     3.0450 Validation Accuracy: 0.660800
Epoch 61, CIFAR-10 Batch 1:  Loss:     2.9735 Validation Accuracy: 0.672000
Epoch 61, CIFAR-10 Batch 2:  Loss:     2.9162 Validation Accuracy: 0.668800
Epoch 61, CIFAR-10 Batch 3:  Loss:     3.1523 Validation Accuracy: 0.656200
Epoch 61, CIFAR-10 Batch 4:  Loss:     3.0937 Validation Accuracy: 0.658800
Epoch 61, CIFAR-10 Batch 5:  Loss:     2.9874 Validation Accuracy: 0.666600
Epoch 62, CIFAR-10 Batch 1:  Loss:     2.9094 Validation Accuracy: 0.669000
Epoch 62, CIFAR-10 Batch 2:  Loss:     2.9971 Validation Accuracy: 0.659800
Epoch 62, CIFAR-10 Batch 3:  Loss:     2.9047 Validation Accuracy: 0.655800
Epoch 62, CIFAR-10 Batch 4:  Loss:     2.9947 Validation Accuracy: 0.657800
Epoch 62, CIFAR-10 Batch 5:  Loss:     3.0090 Validation Accuracy: 0.660400
Epoch 63, CIFAR-10 Batch 1:  Loss:     2.9240 Validation Accuracy: 0.668000
Epoch 63, CIFAR-10 Batch 2:  Loss:     2.8939 Validation Accuracy: 0.674600
Epoch 63, CIFAR-10 Batch 3:  Loss:     2.9322 Validation Accuracy: 0.660800
Epoch 63, CIFAR-10 Batch 4:  Loss:     3.0603 Validation Accuracy: 0.650600
Epoch 63, CIFAR-10 Batch 5:  Loss:     3.0008 Validation Accuracy: 0.660000
Epoch 64, CIFAR-10 Batch 1:  Loss:     2.9664 Validation Accuracy: 0.669600
Epoch 64, CIFAR-10 Batch 2:  Loss:     2.8117 Validation Accuracy: 0.665800
Epoch 64, CIFAR-10 Batch 3:  Loss:     3.0254 Validation Accuracy: 0.664600
Epoch 64, CIFAR-10 Batch 4:  Loss:     2.9981 Validation Accuracy: 0.662400
Epoch 64, CIFAR-10 Batch 5:  Loss:     3.0683 Validation Accuracy: 0.660000
Epoch 65, CIFAR-10 Batch 1:  Loss:     2.9741 Validation Accuracy: 0.666000
Epoch 65, CIFAR-10 Batch 2:  Loss:     3.0756 Validation Accuracy: 0.665600
Epoch 65, CIFAR-10 Batch 3:  Loss:     3.1262 Validation Accuracy: 0.654800
Epoch 65, CIFAR-10 Batch 4:  Loss:     3.1228 Validation Accuracy: 0.655200
Epoch 65, CIFAR-10 Batch 5:  Loss:     2.9411 Validation Accuracy: 0.662800
Epoch 66, CIFAR-10 Batch 1:  Loss:     3.0846 Validation Accuracy: 0.666600
Epoch 66, CIFAR-10 Batch 2:  Loss:     2.9064 Validation Accuracy: 0.671600
Epoch 66, CIFAR-10 Batch 3:  Loss:     3.1326 Validation Accuracy: 0.657000
Epoch 66, CIFAR-10 Batch 4:  Loss:     2.9543 Validation Accuracy: 0.666600
Epoch 66, CIFAR-10 Batch 5:  Loss:     3.1795 Validation Accuracy: 0.667400
Epoch 67, CIFAR-10 Batch 1:  Loss:     3.2163 Validation Accuracy: 0.665400
Epoch 67, CIFAR-10 Batch 2:  Loss:     3.0748 Validation Accuracy: 0.669400
Epoch 67, CIFAR-10 Batch 3:  Loss:     3.0063 Validation Accuracy: 0.668400
Epoch 67, CIFAR-10 Batch 4:  Loss:     3.0193 Validation Accuracy: 0.657600
Epoch 67, CIFAR-10 Batch 5:  Loss:     3.1468 Validation Accuracy: 0.661200
Epoch 68, CIFAR-10 Batch 1:  Loss:     3.1198 Validation Accuracy: 0.664200
Epoch 68, CIFAR-10 Batch 2:  Loss:     2.9216 Validation Accuracy: 0.667800
Epoch 68, CIFAR-10 Batch 3:  Loss:     3.0664 Validation Accuracy: 0.669200
Epoch 68, CIFAR-10 Batch 4:  Loss:     2.9743 Validation Accuracy: 0.659600
Epoch 68, CIFAR-10 Batch 5:  Loss:     3.0932 Validation Accuracy: 0.662800
Epoch 69, CIFAR-10 Batch 1:  Loss:     3.0283 Validation Accuracy: 0.665800
Epoch 69, CIFAR-10 Batch 2:  Loss:     3.1875 Validation Accuracy: 0.656600
Epoch 69, CIFAR-10 Batch 3:  Loss:     3.0620 Validation Accuracy: 0.666400
Epoch 69, CIFAR-10 Batch 4:  Loss:     2.9224 Validation Accuracy: 0.680000
Epoch 69, CIFAR-10 Batch 5:  Loss:     3.2041 Validation Accuracy: 0.660000
Epoch 70, CIFAR-10 Batch 1:  Loss:     3.0927 Validation Accuracy: 0.668200
Epoch 70, CIFAR-10 Batch 2:  Loss:     3.0795 Validation Accuracy: 0.667400
Epoch 70, CIFAR-10 Batch 3:  Loss:     3.0782 Validation Accuracy: 0.663200
Epoch 70, CIFAR-10 Batch 4:  Loss:     3.0387 Validation Accuracy: 0.660200
Epoch 70, CIFAR-10 Batch 5:  Loss:     3.2831 Validation Accuracy: 0.655400
Epoch 71, CIFAR-10 Batch 1:  Loss:     3.1593 Validation Accuracy: 0.664200
Epoch 71, CIFAR-10 Batch 2:  Loss:     3.1523 Validation Accuracy: 0.657600
Epoch 71, CIFAR-10 Batch 3:  Loss:     2.9708 Validation Accuracy: 0.666000
Epoch 71, CIFAR-10 Batch 4:  Loss:     2.9060 Validation Accuracy: 0.665000
Epoch 71, CIFAR-10 Batch 5:  Loss:     3.0284 Validation Accuracy: 0.662800
Epoch 72, CIFAR-10 Batch 1:  Loss:     3.0616 Validation Accuracy: 0.657800
Epoch 72, CIFAR-10 Batch 2:  Loss:     3.0813 Validation Accuracy: 0.656200
Epoch 72, CIFAR-10 Batch 3:  Loss:     3.0796 Validation Accuracy: 0.668000
Epoch 72, CIFAR-10 Batch 4:  Loss:     3.2913 Validation Accuracy: 0.659000
Epoch 72, CIFAR-10 Batch 5:  Loss:     3.1424 Validation Accuracy: 0.665400
Epoch 73, CIFAR-10 Batch 1:  Loss:     3.2347 Validation Accuracy: 0.664000
Epoch 73, CIFAR-10 Batch 2:  Loss:     3.3237 Validation Accuracy: 0.658000
Epoch 73, CIFAR-10 Batch 3:  Loss:     3.2233 Validation Accuracy: 0.654400
Epoch 73, CIFAR-10 Batch 4:  Loss:     3.2461 Validation Accuracy: 0.653800
Epoch 73, CIFAR-10 Batch 5:  Loss:     3.2182 Validation Accuracy: 0.663000
Epoch 74, CIFAR-10 Batch 1:  Loss:     3.2207 Validation Accuracy: 0.666200
Epoch 74, CIFAR-10 Batch 2:  Loss:     3.2190 Validation Accuracy: 0.675400
Epoch 74, CIFAR-10 Batch 3:  Loss:     3.2188 Validation Accuracy: 0.667600
Epoch 74, CIFAR-10 Batch 4:  Loss:     3.2113 Validation Accuracy: 0.669600
Epoch 74, CIFAR-10 Batch 5:  Loss:     3.5035 Validation Accuracy: 0.662000
Epoch 75, CIFAR-10 Batch 1:  Loss:     3.2802 Validation Accuracy: 0.663200
Epoch 75, CIFAR-10 Batch 2:  Loss:     3.2202 Validation Accuracy: 0.667800
Epoch 75, CIFAR-10 Batch 3:  Loss:     3.2104 Validation Accuracy: 0.671800
Epoch 75, CIFAR-10 Batch 4:  Loss:     3.2206 Validation Accuracy: 0.658800
Epoch 75, CIFAR-10 Batch 5:  Loss:     3.2077 Validation Accuracy: 0.664400

Checkpoint

The model has been saved to disk.

Test Model

Let's test the model against the test dataset. This will be our final accuracy. We should have an accuracy greater than 50%. If we don't, we'll keep tweaking the model architecture and parameters.
In [22]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import tensorflow as tf
import pickle
import helper
import random

# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64

save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3

def test_model():
    """
    Test the saved model against the test dataset
    """

    test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
        
        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0
        
        for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))

        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)


test_model()
Testing Accuracy: 0.6693359375

Why 67% Accuracy?

You might be wondering why we can't get an accuracy any higher. First things first, 67% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 80%. That's because we haven't learned all there is to know about neural networks. We still need to cover a few more techniques.
