Keras model only predicts one class for all the test images
I am trying to build an image classification model with 2 classes: with (1) and without (0). I can train the model and it reaches an accuracy of 1.0, which is too good to be true (an issue in itself), but when I use predict_generator (my images are in folders) it only ever returns one class, 0 (the without class). There seems to be an issue, but I can't work it out; I have looked at a number of articles and still can't fix it.
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Dropout, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator

image_shape = (220, 525, 3)  # original frame size: height, width, channels
img_width = 96
img_height = 96
channels = 3
input_shape = (img_width, img_height, channels)  # model input shape
epochs = 10

no_train_images = 11957  #!ls ../data/train/* | wc -l
no_test_images = 652     #!ls ../data/test/* | wc -l
no_valid_images = 6156   #!ls ../data/valid/* | wc -l

train_dir = '../data/train/'
test_dir = '../data/test/'
valid_dir = '../data/valid/'
classification_model = Sequential()

# First layer: 2D convolution with 32 filters, 3x3 kernel, input_shape=(img_width, img_height, channels)
classification_model.add(Conv2D(32, (3, 3), input_shape=input_shape))
# Activation function: ReLU introduces non-linearity
classification_model.add(Activation('relu'))
# Max-pooling layer with a 2x2 grid
classification_model.add(MaxPooling2D(pool_size=(2, 2)))
# Randomly disconnects some nodes between this layer and the next
classification_model.add(Dropout(0.2))

# Second convolutional block
classification_model.add(Conv2D(32, (3, 3)))
classification_model.add(Activation('relu'))
classification_model.add(MaxPooling2D(pool_size=(2, 2)))
classification_model.add(Dropout(0.2))

# Third convolutional block (64 filters)
classification_model.add(Conv2D(64, (3, 3)))
classification_model.add(Activation('relu'))
classification_model.add(MaxPooling2D(pool_size=(2, 2)))
classification_model.add(Dropout(0.25))

# Fourth convolutional block (64 filters)
classification_model.add(Conv2D(64, (3, 3)))
classification_model.add(Activation('relu'))
classification_model.add(MaxPooling2D(pool_size=(2, 2)))
classification_model.add(Dropout(0.3))

# Fully connected classifier head with a single sigmoid output
classification_model.add(Flatten())
classification_model.add(Dense(64))
classification_model.add(Activation('relu'))
classification_model.add(Dropout(0.5))
classification_model.add(Dense(1))
classification_model.add(Activation('sigmoid'))

# Using binary_crossentropy as we only have 2 classes
classification_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
batch_size = 32

# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    zoom_range=0.2)

# this is the configuration we will use for validation: only rescaling
valid_datagen = ImageDataGenerator(rescale=1. / 255)
test_datagen = ImageDataGenerator()
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary',
    shuffle=True)

valid_generator = valid_datagen.flow_from_directory(
    valid_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary',
    shuffle=False)

test_generator = test_datagen.flow_from_directory(
    test_dir,
    target_size=(img_width, img_height),
    batch_size=1,
    class_mode=None,
    shuffle=False)
mpd = classification_model.fit_generator(
    train_generator,
    steps_per_epoch=no_train_images // batch_size,   # number of batches per epoch
    epochs=epochs,                                    # number of passes over the training data
    validation_data=valid_generator,
    validation_steps=no_valid_images // batch_size)
Epoch 1/10
373/373 [==============================] - 119s 320ms/step - loss: 0.5214 - acc: 0.7357 - val_loss: 0.2720 - val_acc: 0.8758
Epoch 2/10
373/373 [==============================] - 120s 322ms/step - loss: 0.2485 - acc: 0.8935 - val_loss: 0.0568 - val_acc: 0.9829
Epoch 3/10
373/373 [==============================] - 130s 350ms/step - loss: 0.1427 - acc: 0.9435 - val_loss: 0.0410 - val_acc: 0.9796
Epoch 4/10
373/373 [==============================] - 127s 341ms/step - loss: 0.1053 - acc: 0.9623 - val_loss: 0.0197 - val_acc: 0.9971
Epoch 5/10
373/373 [==============================] - 126s 337ms/step - loss: 0.0817 - acc: 0.9682 - val_loss: 0.0136 - val_acc: 0.9948
Epoch 6/10
373/373 [==============================] - 123s 329ms/step - loss: 0.0665 - acc: 0.9754 - val_loss: 0.0116 - val_acc: 0.9985
Epoch 7/10
373/373 [==============================] - 140s 376ms/step - loss: 0.0518 - acc: 0.9817 - val_loss: 0.0035 - val_acc: 0.9997
Epoch 8/10
373/373 [==============================] - 144s 386ms/step - loss: 0.0539 - acc: 0.9832 - val_loss: 8.9459e-04 - val_acc: 1.0000
Epoch 9/10
373/373 [==============================] - 122s 327ms/step - loss: 0.0434 - acc: 0.9850 - val_loss: 0.0023 - val_acc: 0.9997
Epoch 10/10
373/373 [==============================] - 125s 336ms/step - loss: 0.0513 - acc: 0.9844 - val_loss: 0.0014 - val_acc: 1.0000
valid_generator.batch_size = 1
score = classification_model.evaluate_generator(
    valid_generator, no_test_images / batch_size, pickle_safe=False)

test_generator.reset()
scores = classification_model.predict_generator(test_generator, len(test_generator))
print("Loss: ", score[0], "Accuracy: ", score[1])

predicted_class_indices = np.argmax(scores, axis=1)
print(predicted_class_indices)

labels = train_generator.class_indices
labelss = dict((v, k) for k, v in labels.items())
predictions = [labelss[k] for k in predicted_class_indices]

filenames = test_generator.filenames
results = pd.DataFrame({"Filename": filenames,
                        "Predictions": predictions})
print(results)
Loss: 5.404246180551993e-06 Accuracy: 1.0
Output of print(predicted_class_indices), all 0:
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
Filename Predictions
0 test_folder/video_3_frame10.jpg without
1 test_folder/video_3_frame1001.jpg without
2 test_folder/video_3_frame1006.jpg without
3 test_folder/video_3_frame1008.jpg without
4 test_folder/video_3_frame1009.jpg without
5 test_folder/video_3_frame1010.jpg without
6 test_folder/video_3_frame1013.jpg without
7 test_folder/video_3_frame1014.jpg without
8 test_folder/video_3_frame1022.jpg without
9 test_folder/video_3_frame1023.jpg without
10 test_folder/video_3_frame103.jpg without
11 test_folder/video_3_frame1036.jpg without
12 test_folder/video_3_frame1039.jpg without
13 test_folder/video_3_frame104.jpg without
14 test_folder/video_3_frame1042.jpg without
15 test_folder/video_3_frame1043.jpg without
16 test_folder/video_3_frame1048.jpg without
17 test_folder/video_3_frame105.jpg without
18 test_folder/video_3_frame1051.jpg without
19 test_folder/video_3_frame1052.jpg without
20 test_folder/video_3_frame1054.jpg without
21 test_folder/video_3_frame1055.jpg without
22 test_folder/video_3_frame1057.jpg without
23 test_folder/video_3_frame1059.jpg without
24 test_folder/video_3_frame1060.jpg without
These are just some of the outputs, but all 650+ test images are predicted as the without class; as you can see, every predicted class index is 0.
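A note on the argmax step above: for a model that ends in a single sigmoid unit, predict_generator returns an array of shape (num_samples, 1), and np.argmax along axis=1 of a one-column array is always 0 regardless of the probabilities. A minimal sketch of what thresholding the probabilities looks like instead (the values below are illustrative, not from the real model):

import numpy as np

scores = np.array([[0.03], [0.97], [0.42]])   # example sigmoid outputs, shape (3, 1)
print(np.argmax(scores, axis=1))              # [0 0 0]  (argmax of a single column is always 0)
predicted_class_indices = (scores.ravel() > 0.5).astype(int)
print(predicted_class_indices)                # [0 1 0]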
This is my first attempt at using Keras and CNN so any help would be really appreciated.
python keras cnn image-classification
You are overfitting badly (data is scarce, I assume; use augmentations or download similar images and expand the dataset).
– Aditya, yesterday
@Aditya I have augmented my images using cv2 in a separate script (flipping images, brightness changes); that was before I knew you could do the same with ImageDataGenerator, but thanks. Any idea why only one class is being predicted? I have more than 13,000 images for training and 6,000 for validation.
– vis7, yesterday
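For reference, the flipping and brightness changes mentioned above can be applied on the fly by the training ImageDataGenerator itself; a minimal sketch, with illustrative ranges:

from keras.preprocessing.image import ImageDataGenerator

# On-the-fly augmentation roughly equivalent to the cv2-based flipping and
# brightness preprocessing described in the comment (ranges are illustrative).
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    zoom_range=0.2,
    horizontal_flip=True,            # random left-right flips
    brightness_range=[0.8, 1.2])     # random brightness jitter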
What do your images represent? Are your images separated into folders for train and val splits?
– Antonio Jurić, 20 hours ago
@AntonioJurić The images show someone with and without a football. The directory structure for both train and valid is the following: train/ contains without/ (image1, image2) and with/ (image4, image5); valid/ contains without/ (image3, image6) and with/ (image0, image7).
– vis7, 18 hours ago
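For reference, flow_from_directory expects exactly that layout: one subdirectory per class under each split directory, with the unlabeled test images kept in a single dummy subfolder when class_mode=None. Under the paths used in the question that would look like (file names illustrative; test_folder taken from the filenames printed above):

../data/train/with/image4.jpg
../data/train/without/image1.jpg
../data/valid/with/image0.jpg
../data/valid/without/image3.jpg
../data/test/test_folder/video_3_frame10.jpg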