probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])
predictions = probability_model.predict(test_images)
predictions[1]
array([7.1652730e-06, 5.0723611e-15, 9.9805683e-01, 7.4160178e-10, 6.1433221e-04, 2.4728529e-16, 1.3216519e-03, 4.8570981e-22, 2.7355423e-10, 1.1303254e-15], dtype=float32)
This array represents the model’s “confidence” that the image belongs to each of the ten classes, in order: t-shirt/top, trouser, pullover, dress, coat, sandal, shirt, sneaker, bag, and ankle boot. For example, the first value in the array is the model’s confidence that the image is a t-shirt/top, and the last value is its confidence that the image is an ankle boot.
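To make this mapping concrete, the class-name list from the Fashion-MNIST tutorial can be paired with the probability array printed above (a minimal sketch; the `class_names` list is assumed to follow the standard label order):

```python
import numpy as np

# Fashion-MNIST class names in label order (assumed, per the standard tutorial).
class_names = ["t-shirt/top", "trouser", "pullover", "dress", "coat",
               "sandal", "shirt", "sneaker", "bag", "ankle boot"]

# The softmax output for test image 1, copied from the run above.
probs = np.array([7.1652730e-06, 5.0723611e-15, 9.9805683e-01, 7.4160178e-10,
                  6.1433221e-04, 2.4728529e-16, 1.3216519e-03, 4.8570981e-22,
                  2.7355423e-10, 1.1303254e-15], dtype=np.float32)

# Pair each class name with its probability; the values sum to ~1.
for name, p in zip(class_names, probs):
    print(f"{name:>12}: {p:.6f}")
```

Reading the output this way shows at a glance that nearly all of the probability mass sits on "pullover".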
The Softmax() layer was used to convert the model’s raw outputs (logits) into probabilities, one per article of clothing. The np.argmax() function was then used to infer which article of clothing an image corresponds to by selecting the class with the highest probability.
Yes, the output from np.argmax() matches the label from my test_labels dataset: np.argmax(predictions[1]) and test_labels[1] both equal 2, so this image has been correctly classified as a pullover.
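The softmax-then-argmax step described above can be sketched in plain numpy (the logits here are made up for illustration, not taken from the actual model):

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability, then normalize to sum to 1.
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

# Hypothetical logits for one image (not from the trained model).
logits = np.array([1.2, -0.5, 6.8, 0.1, 2.0, -3.0, 2.4, -6.1, 0.3, -0.9])
probs = softmax(logits)

# argmax picks the class with the highest probability.
predicted_class = int(np.argmax(probs))
print(predicted_class, probs[predicted_class])
```

Because softmax is monotonic, argmax over the probabilities always agrees with argmax over the logits; the softmax layer only makes the outputs interpretable as probabilities.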
The predicted value also matches the test label, because both values are 7. Based on this, the image is correctly classified as a sneaker.
Also, I did not apply Softmax() a second time, because predictions_single is already a probability: it comes from probability_model, which ends in a Softmax() layer, so its outputs have already been converted from logits to probabilities.
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])
predictions_single = probability_model.predict(img)
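One detail worth noting about the call above: Keras models predict on batches, so a single 28×28 image must gain a batch dimension before being passed to predict(). A minimal numpy sketch of the shapes involved (the placeholder image and random probabilities are illustrative, not real model output):

```python
import numpy as np

# A single 28x28 image must be wrapped in a batch of size 1.
img = np.zeros((28, 28), dtype=np.float32)   # placeholder image
batched = np.expand_dims(img, axis=0)        # shape becomes (1, 28, 28)
print(batched.shape)

# predict() returns a batch of results, so the probabilities for this one
# image live at index 0; a hypothetical (1, 10) probability array:
predictions_single = np.random.default_rng(0).random((1, 10))
predictions_single /= predictions_single.sum()
print(int(np.argmax(predictions_single[0])))
```

This is why the tutorial indexes predictions_single[0] when reading off the predicted class for a single image.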
-Plot
My model for the MNIST dataset had a testing accuracy of 0.9790999889373779 and a training accuracy of 0.9973166584968567, a difference of about 0.0182. My model for the Fashion_MNIST dataset had a testing accuracy of 0.8863000273704529 and a training accuracy of 0.918666660785675, a difference of about 0.0324. The MNIST model is more accurate because it has the higher testing accuracy, meaning it predicts handwritten digits in new data more reliably than the Fashion_MNIST model predicts articles of clothing. Additionally, because its train-test accuracy gap is smaller, the MNIST model appears to be less overfit than the Fashion_MNIST model.
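The comparison above can be checked directly from the reported accuracies; this sketch just computes the two train-test gaps:

```python
# Accuracies reported in the text above.
mnist_train, mnist_test = 0.9973166584968567, 0.9790999889373779
fashion_train, fashion_test = 0.918666660785675, 0.8863000273704529

# Train-test gap: a rough proxy for how overfit each model is.
mnist_gap = mnist_train - mnist_test
fashion_gap = fashion_train - fashion_test

print(f"MNIST gap:         {mnist_gap:.4f}")
print(f"Fashion-MNIST gap: {fashion_gap:.4f}")
# The MNIST model has both the higher test accuracy and the smaller gap,
# consistent with it generalizing better and being less overfit.
```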