How to prepare the varied size input in CNN prediction












I want to build a CNN model in Keras that can be fed images of different sizes. From other questions I understand how to define such a model, e.g. with Input(shape=(None, None, 3)). However, I'm not sure how to prepare the input/output datasets. Concretely, I want to combine a dataset of (100, 100) images with one of (240, 360) images, but I don't know how to merge them into a single training set.
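One way to prepare such mixed-size data (a sketch, not something from the question itself) is to keep each image size in its own array and draw every batch from a single size bucket, since a batch tensor must be rectangular; a model defined with Input(shape=(None, None, 3)) can then accept batches of either size. A minimal NumPy generator, where `mixed_size_batches` and the two arrays are hypothetical names:

```python
import numpy as np

def mixed_size_batches(datasets, batch_size=8, seed=0):
    """Yield (images, labels) batches; each batch is drawn from one
    size bucket, so every batch is a rectangular tensor."""
    rng = np.random.default_rng(seed)
    while True:
        imgs, labels = datasets[rng.integers(len(datasets))]
        idx = rng.choice(len(imgs), size=batch_size, replace=False)
        yield imgs[idx], labels[idx]

# two hypothetical size buckets matching the question's datasets
imgs_small = np.zeros((32, 100, 100, 3), dtype=np.float32)
imgs_large = np.zeros((32, 240, 360, 3), dtype=np.float32)
labels_small = np.zeros(32, dtype=np.int64)
labels_large = np.ones(32, dtype=np.int64)

gen = mixed_size_batches([(imgs_small, labels_small),
                          (imgs_large, labels_large)], batch_size=4)
x, y = next(gen)
print(x.shape)  # (4, 100, 100, 3) or (4, 240, 360, 3)
```

Each yielded batch has a single spatial size, so it can be passed to a variable-input model without padding.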










      machine-learning neural-network deep-learning keras cnn






edited Oct 30 '18 at 17:27 by Media

asked Oct 30 '18 at 16:27 by kainamanama

5 Answers


















As far as I know, you can't, and the reason is straightforward. Training a neural network means finding appropriate values for a fixed, predefined set of weights that minimize a cost function. Once you specify an input shape, the shapes of the downstream weight tensors (in particular any dense layers after flattening) are determined by it, so you can't change the input size of the network afterwards. In other words, you can't feed a single convolutional network inputs of different sizes. The typical solution for such situations is to resize the input.

answered Oct 30 '18 at 17:25 by Media
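As an illustration of the resizing approach (a sketch only; in practice you would use a library such as Pillow or OpenCV), here is a minimal nearest-neighbour resize in plain NumPy that maps any input image to a fixed target size. `resize_nearest` is a hypothetical helper, not part of any library:

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of an (H, W, C) image array."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source col for each output col
    return img[rows[:, None], cols, :]

small = np.random.rand(100, 100, 3)
large = np.random.rand(240, 360, 3)

# both datasets mapped to a common (128, 128) network input size
a = resize_nearest(small, 128, 128)
b = resize_nearest(large, 128, 128)
print(a.shape, b.shape)  # (128, 128, 3) (128, 128, 3)
```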





















There are some ways to deal with this, but none of them solves the problem really well. You can pad with black pixels, use a special value such as NaN, resize, or add a separate mask layer that indicates where the real information in the picture is. Most likely none of these works very well; otherwise the standard image datasets would contain images of different sizes. A separate mask-style layer is used in what is currently one of the best image-recognition networks (SENet, Hu et al., winner of the ImageNet 2017 challenge), but there masking is used to zoom into the picture, not to handle different image sizes.

answered Oct 30 '18 at 23:59 by keiv.fly
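The black-pixel padding idea from this answer can be sketched in NumPy (`pad_to` is a hypothetical helper): every image is placed in the top-left corner of a canvas of the largest size, and the rest is filled with zeros:

```python
import numpy as np

def pad_to(img, out_h, out_w, fill=0.0):
    """Pad an (H, W, C) image with a constant value up to (out_h, out_w, C)."""
    h, w, c = img.shape
    canvas = np.full((out_h, out_w, c), fill, dtype=img.dtype)
    canvas[:h, :w, :] = img   # original pixels kept, rest stays 'black'
    return canvas

batch = [np.ones((100, 100, 3)), np.ones((240, 360, 3))]
padded = np.stack([pad_to(im, 240, 360) for im in batch])
print(padded.shape)  # (2, 240, 360, 3)
```

A mask marking the valid region could be built the same way, with ones where the original pixels are and zeros elsewhere.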





















Conventionally, when dealing with images of different sizes in a CNN (which happens very often in real-world problems), we resize the images to the size of the smallest image with the help of an image-manipulation library (OpenCV, PIL, etc.), or sometimes pad the images of unequal size up to the desired size. Resizing the images is simpler and is used most often.

As Media mentioned in another answer, it is not possible to directly use images of different sizes: when you define a CNN architecture, you plan how many layers to use depending on the input size, and without a fixed input shape you cannot define the architecture of your model. It is therefore necessary to convert all your images to the same size.

answered Oct 31 '18 at 14:04 by Amruth Lakkavaram
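Choosing the common size as the smallest size present in the dataset can be sketched as follows (`center_crop` is a hypothetical helper; cropping is shown here as a simple interpolation-free alternative, while in practice you would usually resize with OpenCV or PIL as the answer says):

```python
import numpy as np

def center_crop(img, out_h, out_w):
    """Crop the central (out_h, out_w) window from an (H, W, C) image."""
    h, w = img.shape[:2]
    top = (h - out_h) // 2
    left = (w - out_w) // 2
    return img[top:top + out_h, left:left + out_w, :]

images = [np.random.rand(100, 100, 3), np.random.rand(240, 360, 3)]

# common size = smallest height and width found in the dataset
target_h = min(im.shape[0] for im in images)
target_w = min(im.shape[1] for im in images)

uniform = np.stack([center_crop(im, target_h, target_w) for im in images])
print(uniform.shape)  # (2, 100, 100, 3)
```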













• Actually, we don't resize images to the smallest size, but to the size of the CNN's inputs! (Plus, you can change the sort order of the answers on this website, so there is no guarantee that Media's answer is always above yours!)
  – Jérémy Blain, Oct 31 '18 at 14:07

• Thanks @JérémyBlain. I think that when we build a CNN architecture based on our dataset, we resize the images to the smallest size among all images in the dataset; but when we already have a CNN architecture defined, then, as you said, we resize the images to the input size of the CNN. So the size depends on whether we already have a CNN or are building one for this particular dataset. Please correct me if I am wrong.
  – Amruth Lakkavaram, Nov 1 '18 at 3:57

• I think you're right :) I don't really know whether images are resized to the smallest size in practice, but I think it's the best way to do it (it is better to lose some information than to fabricate it, with possible conflicts or artifacts!)
  – Jérémy Blain, Nov 1 '18 at 13:36

• I don't agree with those statements: in theory you can define a CNN without taking the input size into account. The weights and biases are tied to the shape of the filter kernels, not to the image shape; indeed, you can use the same CNN for 255x255 and for 1024x1024 images, can't you? What we can't do with the majority of the APIs is use the same network for different image sizes at the same time. The thing is that, in practical implementations, handling variable-sized data is an arduous task (allocating memory on the GPU, transferring data between CPU and GPU).
  – ignatius, Nov 8 '18 at 12:10
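ignatius's point, that convolutional weights are tied to the kernel shape rather than the image shape, can be illustrated with a plain NumPy convolution (a sketch; in Keras this corresponds to a fully convolutional model with Input(shape=(None, None, 3)) followed by a global pooling layer). The same fixed 3x3 kernel applies to inputs of any size:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """'Valid' 2D convolution (cross-correlation, as in deep-learning
    convention) of an (H, W) image with a (kh, kw) kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

kernel = np.random.rand(3, 3)   # one fixed set of weights

small = conv2d_valid(np.random.rand(8, 8), kernel)     # -> (6, 6)
large = conv2d_valid(np.random.rand(16, 16), kernel)   # -> (14, 14)

# a global pooling step maps either feature map to a fixed-size output,
# which is what makes a size-agnostic classifier head possible
desc_small = small.max()
desc_large = large.max()
print(small.shape, large.shape)  # (6, 6) (14, 14)
```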



















There is a concatenate function in Keras: https://keras.io/layers/merge/#concatenate and https://keras.io/backend/#concatenate. See also this paper: https://arxiv.org/abs/1605.07333. Its application can be seen here: https://machinelearningmastery.com/develop-n-gram-multichannel-convolutional-neural-network-sentiment-analysis/ and https://machinelearningmastery.com/cnn-models-for-human-activity-recognition-time-series-classification/

This method can be used to have multiple input channels with different image sizes.

• This is not an answer, but a survey! :)
  – Jérémy Blain, Oct 31 '18 at 14:09
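A numerically minimal sketch of the multi-branch idea (hypothetical, not taken from the linked tutorials): each input size gets its own branch that reduces the image to a fixed-length feature vector, e.g. by global pooling, and the vectors are then concatenated, which is what keras.layers.Concatenate would do in a real multi-input model. `branch_features` here is a stand-in for a learned convolutional branch:

```python
import numpy as np

def branch_features(img):
    """Reduce an (H, W, C) image to a fixed-length vector regardless of
    its spatial size: global mean and max per channel stand in for a
    convolutional branch followed by global pooling."""
    means = img.mean(axis=(0, 1))   # (C,)
    maxes = img.max(axis=(0, 1))    # (C,)
    return np.concatenate([means, maxes])  # (2 * C,)

a = np.random.rand(100, 100, 3)   # branch 1 input
b = np.random.rand(240, 360, 3)   # branch 2 input

# fixed-length vectors from variable-size inputs can be concatenated
merged = np.concatenate([branch_features(a), branch_features(b)])
print(merged.shape)  # (12,)
```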



















There is a way to include both image sizes: you can preprocess your images so that they are all re-sized to the same dimensions.

Some freely available code that shows this (imports added for completeness; fill in the sample counts for your own data):

    from keras import backend as K
    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Activation, Dropout, Flatten, Dense
    from keras.preprocessing.image import ImageDataGenerator

    img_width, img_height = 150, 150

    train_data_dir = '/yourdir/train'
    validation_data_dir = '/yourdir/validation'
    nb_train_samples = None        # set to the number of training images
    nb_validation_samples = None   # set to the number of validation images
    epochs = 50
    batch_size = 16

    if K.image_data_format() == 'channels_first':
        input_shape = (3, img_width, img_height)
    else:
        input_shape = (img_width, img_height, 3)

    model = Sequential()
    model.add(Conv2D(32, (3, 3), input_shape=input_shape))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Conv2D(32, (3, 3)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Conv2D(64, (3, 3)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Flatten())
    model.add(Dense(64))
    model.add(Activation('relu'))
    model.add(Dense(64))
    model.add(Activation('relu'))
    model.add(Dropout(0.3))
    model.add(Dense(1))
    model.add(Activation('sigmoid'))

    model.compile(loss='binary_crossentropy',
                  optimizer='rmsprop',
                  metrics=['accuracy'])

    train_datagen = ImageDataGenerator(
        rescale=1. / 255,
        shear_range=0.1,
        zoom_range=0.1,
        horizontal_flip=True)

    test_datagen = ImageDataGenerator(rescale=1. / 255)

    train_generator = train_datagen.flow_from_directory(
        train_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode='binary')

    validation_generator = test_datagen.flow_from_directory(
        validation_data_dir,
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode='binary')

This uses the Keras image-flow API for data augmentation on the fly, and the data generators at the bottom of the code will resize your images to whatever dimensions you specify at the top.

answered by Anthony Bozzo (new contributor)













                Your Answer





                StackExchange.ifUsing("editor", function () {
                return StackExchange.using("mathjaxEditing", function () {
                StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
                StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
                });
                });
                }, "mathjax-editing");

                StackExchange.ready(function() {
                var channelOptions = {
                tags: "".split(" "),
                id: "557"
                };
                initTagRenderer("".split(" "), "".split(" "), channelOptions);

                StackExchange.using("externalEditor", function() {
                // Have to fire editor after snippets, if snippets enabled
                if (StackExchange.settings.snippets.snippetsEnabled) {
                StackExchange.using("snippets", function() {
                createEditor();
                });
                }
                else {
                createEditor();
                }
                });

                function createEditor() {
                StackExchange.prepareEditor({
                heartbeatType: 'answer',
                autoActivateHeartbeat: false,
                convertImagesToLinks: false,
                noModals: true,
                showLowRepImageUploadWarning: true,
                reputationToPostImages: null,
                bindNavPrevention: true,
                postfix: "",
                imageUploader: {
                brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
                contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
                allowUrls: true
                },
                onDemand: true,
                discardSelector: ".discard-answer"
                ,immediatelyShowMarkdownHelp:true
                });


                }
                });














                draft saved

                draft discarded


















                StackExchange.ready(
                function () {
                StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fdatascience.stackexchange.com%2fquestions%2f40462%2fhow-to-prepare-the-varied-size-input-in-cnn-prediction%23new-answer', 'question_page');
                }
                );

                Post as a guest















                Required, but never shown

























                5 Answers
                5






                active

                oldest

                votes








                5 Answers
                5






                active

                oldest

                votes









                active

                oldest

                votes






                active

                oldest

                votes









                0












                $begingroup$

                At least, as far as I know, you can't. The reason is clear. In neural networks, you attempt to find appropriate weights to diminish a typical cost function. You have to find appropriate weights for a specified number of predefined weights. When you specify an input shape, the rest of the network weights will depend on the weights of input. You can't change the input size of a network. In other words, you can't feed your network with different input sizes for convolutional networks. A typical solution for dealing with such situations is to resize the input.






                share|improve this answer









                $endgroup$


















                  0












                  $begingroup$

                  At least, as far as I know, you can't. The reason is clear. In neural networks, you attempt to find appropriate weights to diminish a typical cost function. You have to find appropriate weights for a specified number of predefined weights. When you specify an input shape, the rest of the network weights will depend on the weights of input. You can't change the input size of a network. In other words, you can't feed your network with different input sizes for convolutional networks. A typical solution for dealing with such situations is to resize the input.






                  share|improve this answer









                  $endgroup$
















                    0












                    0








                    0





                    $begingroup$

                    At least, as far as I know, you can't. The reason is clear. In neural networks, you attempt to find appropriate weights to diminish a typical cost function. You have to find appropriate weights for a specified number of predefined weights. When you specify an input shape, the rest of the network weights will depend on the weights of input. You can't change the input size of a network. In other words, you can't feed your network with different input sizes for convolutional networks. A typical solution for dealing with such situations is to resize the input.






                    share|improve this answer









                    $endgroup$



                    At least, as far as I know, you can't. The reason is clear. In neural networks, you attempt to find appropriate weights to diminish a typical cost function. You have to find appropriate weights for a specified number of predefined weights. When you specify an input shape, the rest of the network weights will depend on the weights of input. You can't change the input size of a network. In other words, you can't feed your network with different input sizes for convolutional networks. A typical solution for dealing with such situations is to resize the input.







                    share|improve this answer












                    share|improve this answer



                    share|improve this answer










                    answered Oct 30 '18 at 17:25









                    MediaMedia

                    6,98062060




                    6,98062060























                        0












                        $begingroup$

                        There are some ways to deal with it but they do not solve the problem well. You can use black pixels, special values for nan, resizing and a separate mask layer that says where the information on the picture is. But most likely they are working not so well. Otherwise the image datasets would have images of different sizes. Separate layers for masks is used in the currently best image recognition neural network (SENet. Hu et al. Winner of ImageNet in 2017). But they use masking for zooming into the picture and not for different image sizes.






                        share|improve this answer









                        $endgroup$


















                          0












                          $begingroup$

                          There are some ways to deal with it but they do not solve the problem well. You can use black pixels, special values for nan, resizing and a separate mask layer that says where the information on the picture is. But most likely they are working not so well. Otherwise the image datasets would have images of different sizes. Separate layers for masks is used in the currently best image recognition neural network (SENet. Hu et al. Winner of ImageNet in 2017). But they use masking for zooming into the picture and not for different image sizes.






                          share|improve this answer









                          $endgroup$
















                            0












                            0








                            0





                            $begingroup$

                            There are some ways to deal with it but they do not solve the problem well. You can use black pixels, special values for nan, resizing and a separate mask layer that says where the information on the picture is. But most likely they are working not so well. Otherwise the image datasets would have images of different sizes. Separate layers for masks is used in the currently best image recognition neural network (SENet. Hu et al. Winner of ImageNet in 2017). But they use masking for zooming into the picture and not for different image sizes.






                            share|improve this answer









                            $endgroup$



                            There are some ways to deal with it but they do not solve the problem well. You can use black pixels, special values for nan, resizing and a separate mask layer that says where the information on the picture is. But most likely they are working not so well. Otherwise the image datasets would have images of different sizes. Separate layers for masks is used in the currently best image recognition neural network (SENet. Hu et al. Winner of ImageNet in 2017). But they use masking for zooming into the picture and not for different image sizes.







                            share|improve this answer












                            share|improve this answer



                            share|improve this answer










                            answered Oct 30 '18 at 23:59









                            keiv.flykeiv.fly

                            5979




                            5979























                                0












                                $begingroup$

                                Conventionally, when dealing with images of different sizes in CNN(which happens very often in real world problems), we resize the images to the size of the smallest images with the help of any image manipulation library (OpenCV, PIL etc) or some times, pad the images of unequal size to desired size. Resizing the image is simpler and is used most often.



                                As mentioned by Media in the above answer, it is not possible to directly use images of different sizes. It is because when you define a CNN architecture, you plan as to how many layers you should have depending on the input size. Without having a fixed input shape, you cannot define architecture of your model. It is therefore necessary to convert all your images to same size.






                                share|improve this answer









                                $endgroup$













                                • $begingroup$
                                  Actually, we don't resize images to the the smallest one size, but to the same size of the Cnn's inputs ! (plus you can change the sort of the answers on this website, there is no evidence that Media answer always be above yours ! )
                                  $endgroup$
                                  – Jérémy Blain
                                  Oct 31 '18 at 14:07












                                • $begingroup$
                                  Thanks @JérémyBlain. I think when we build CNN architecture based on our dataset, we resize the images to the least size of all images in the dataset. But, when we already have a CNN architecture defined, then as you said, we resize the images to the input size of CNN. So, the size depends on whether we already have a CNN or we building a CNN for this particular dataset. Please correct me if I am wrong.
                                  $endgroup$
                                  – Amruth Lakkavaram
                                  Nov 1 '18 at 3:57










                                • $begingroup$
                                  I think you're right :) I don't really know if the image are resized to the smallest (in practice), but I think it's the best way to do it (it is better to lose some informations than recreate ones, with possibly conflict or artifact !)
                                  $endgroup$
                                  – Jérémy Blain
                                  Nov 1 '18 at 13:36










                                • $begingroup$
                                  I don't agree with those statements: in theory you can define a CNN without taking into account the size of the input... The weights and the biases are related to the shape of the filter kernels, not to the image shape. Indeed, you can use the same CNN for a 255x255 and for a 1024x1024 images, isn it? What we can't do with the majority of the API is to use the same network for different image sizes at the same time. The thing is that, in practical implementations, it is an arduous task handling variable sized data (allocating memory in the gpu, transfer data between cpu/gpu)
                                  $endgroup$
                                  – ignatius
                                  Nov 8 '18 at 12:10
















                                0












                                $begingroup$

                                Conventionally, when dealing with images of different sizes in CNN(which happens very often in real world problems), we resize the images to the size of the smallest images with the help of any image manipulation library (OpenCV, PIL etc) or some times, pad the images of unequal size to desired size. Resizing the image is simpler and is used most often.



                                As mentioned by Media in the above answer, it is not possible to directly use images of different sizes. It is because when you define a CNN architecture, you plan as to how many layers you should have depending on the input size. Without having a fixed input shape, you cannot define architecture of your model. It is therefore necessary to convert all your images to same size.






                                share|improve this answer









                                $endgroup$













                                • $begingroup$
                                  Actually, we don't resize images to the the smallest one size, but to the same size of the Cnn's inputs ! (plus you can change the sort of the answers on this website, there is no evidence that Media answer always be above yours ! )
                                  $endgroup$
                                  – Jérémy Blain
                                  Oct 31 '18 at 14:07












                                • $begingroup$
                                  Thanks @JérémyBlain. I think when we build CNN architecture based on our dataset, we resize the images to the least size of all images in the dataset. But, when we already have a CNN architecture defined, then as you said, we resize the images to the input size of CNN. So, the size depends on whether we already have a CNN or we building a CNN for this particular dataset. Please correct me if I am wrong.
                                  $endgroup$
                                  – Amruth Lakkavaram
                                  Nov 1 '18 at 3:57










                                • $begingroup$
                                  I think you're right :) I don't really know if the image are resized to the smallest (in practice), but I think it's the best way to do it (it is better to lose some informations than recreate ones, with possibly conflict or artifact !)
                                  $endgroup$
                                  – Jérémy Blain
                                  Nov 1 '18 at 13:36










                                • $begingroup$
                                  I don't agree with those statements: in theory you can define a CNN without taking into account the size of the input... The weights and the biases are related to the shape of the filter kernels, not to the image shape. Indeed, you can use the same CNN for a 255x255 and for a 1024x1024 images, isn it? What we can't do with the majority of the API is to use the same network for different image sizes at the same time. The thing is that, in practical implementations, it is an arduous task handling variable sized data (allocating memory in the gpu, transfer data between cpu/gpu)
                                  $endgroup$
                                  – ignatius
                                  Nov 8 '18 at 12:10














                                0












                                0








                                0





                                $begingroup$

                                Conventionally, when dealing with images of different sizes in CNN(which happens very often in real world problems), we resize the images to the size of the smallest images with the help of any image manipulation library (OpenCV, PIL etc) or some times, pad the images of unequal size to desired size. Resizing the image is simpler and is used most often.



                                As mentioned by Media in the above answer, it is not possible to directly use images of different sizes. It is because when you define a CNN architecture, you plan as to how many layers you should have depending on the input size. Without having a fixed input shape, you cannot define architecture of your model. It is therefore necessary to convert all your images to same size.






                                share|improve this answer









                                $endgroup$



                                Conventionally, when dealing with images of different sizes in CNN(which happens very often in real world problems), we resize the images to the size of the smallest images with the help of any image manipulation library (OpenCV, PIL etc) or some times, pad the images of unequal size to desired size. Resizing the image is simpler and is used most often.



                                As mentioned by Media in the above answer, it is not possible to directly use images of different sizes. It is because when you define a CNN architecture, you plan as to how many layers you should have depending on the input size. Without having a fixed input shape, you cannot define architecture of your model. It is therefore necessary to convert all your images to same size.







                                share|improve this answer












                                share|improve this answer



                                share|improve this answer










                                answered Oct 31 '18 at 14:04









                                Amruth LakkavaramAmruth Lakkavaram

                                414




                                414












                                • $begingroup$
                                  Actually, we don't resize images to the the smallest one size, but to the same size of the Cnn's inputs ! (plus you can change the sort of the answers on this website, there is no evidence that Media answer always be above yours ! )
                                  $endgroup$
                                  – Jérémy Blain
                                  Oct 31 '18 at 14:07












                                • $begingroup$
                                  Thanks @JérémyBlain. I think when we build CNN architecture based on our dataset, we resize the images to the least size of all images in the dataset. But, when we already have a CNN architecture defined, then as you said, we resize the images to the input size of CNN. So, the size depends on whether we already have a CNN or we building a CNN for this particular dataset. Please correct me if I am wrong.
                                  $endgroup$
                                  – Amruth Lakkavaram
                                  Nov 1 '18 at 3:57










                                • $begingroup$
                                  I think you're right :) I don't really know if the image are resized to the smallest (in practice), but I think it's the best way to do it (it is better to lose some informations than recreate ones, with possibly conflict or artifact !)
                                  $endgroup$
                                  – Jérémy Blain
                                  Nov 1 '18 at 13:36










                                • $begingroup$
                                  I don't agree with those statements: in theory you can define a CNN without taking into account the size of the input... The weights and the biases are related to the shape of the filter kernels, not to the image shape. Indeed, you can use the same CNN for a 255x255 and for a 1024x1024 images, isn it? What we can't do with the majority of the API is to use the same network for different image sizes at the same time. The thing is that, in practical implementations, it is an arduous task handling variable sized data (allocating memory in the gpu, transfer data between cpu/gpu)
                                  $endgroup$
                                  – ignatius
                                  Nov 8 '18 at 12:10
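A minimal sketch of the point made in the comment above: with the functional API you can declare `Input(shape=(None, None, 3))` and end the convolutional stack with a global pooling layer, so one set of weights serves both image sizes (tf.keras assumed; the layer sizes are illustrative, not from this thread):

```python
import numpy as np
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Conv2D, GlobalAveragePooling2D, Dense

# Height and width are left as None, so any image size is accepted.
inputs = Input(shape=(None, None, 3))
x = Conv2D(16, (3, 3), activation='relu', padding='same')(inputs)
x = GlobalAveragePooling2D()(x)  # collapses variable H x W to a fixed-length vector
outputs = Dense(1, activation='sigmoid')(x)
model = Model(inputs, outputs)

# The same network handles both dataset shapes, one batch per size.
pred_small = model.predict(np.zeros((2, 100, 100, 3)))
pred_large = model.predict(np.zeros((2, 240, 360, 3)))
print(pred_small.shape, pred_large.shape)  # (2, 1) (2, 1)
```

Each batch must still be a single size, which is why the (100,100) and (240,360) images have to be fed in separate batches (or with batch size 1).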


















                                $begingroup$

                                There is a concatenate function in Keras: https://keras.io/layers/merge/#concatenate and https://keras.io/backend/#concatenate . Also see this paper: https://arxiv.org/abs/1605.07333 . Its application can be seen here: https://machinelearningmastery.com/develop-n-gram-multichannel-convolutional-neural-network-sentiment-analysis/ and https://machinelearningmastery.com/cnn-models-for-human-activity-recognition-time-series-classification/



                                This method can be used to have multiple input channels with different image sizes.
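A sketch of that multi-input idea (tf.keras assumed; layer sizes are illustrative): one convolutional branch per image size, each pooled to a fixed-length vector, then merged with Concatenate:

```python
import numpy as np
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import (Conv2D, GlobalAveragePooling2D,
                                     Concatenate, Dense)

# One input branch per image size from the question.
in_a = Input(shape=(100, 100, 3))
in_b = Input(shape=(240, 360, 3))

def branch(t):
    # Global pooling makes each branch's output a fixed-length vector,
    # so the two branches can be concatenated despite different inputs.
    t = Conv2D(8, (3, 3), activation='relu')(t)
    return GlobalAveragePooling2D()(t)

merged = Concatenate()([branch(in_a), branch(in_b)])
out = Dense(1, activation='sigmoid')(merged)
model = Model([in_a, in_b], out)

pred = model.predict([np.zeros((4, 100, 100, 3)),
                      np.zeros((4, 240, 360, 3))])
print(pred.shape)  # (4, 1)
```

Note that this treats each training sample as a *pair* of images, one per size; it does not let a single branch accept both sizes.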

















                                $endgroup$













                                • $begingroup$
                                  this is not an answer, but a survey! :)
                                  $endgroup$
                                  – Jérémy Blain
                                  Oct 31 '18 at 14:09
















                                edited Nov 1 '18 at 1:24

























                                answered Oct 31 '18 at 1:15









                                rnso

                                $begingroup$

                                There is a way to include both image sizes: preprocess your images so that they are all resized to the same dimensions.



                                Some freely available code that shows this:



                                from keras import backend as K
                                from keras.models import Sequential
                                from keras.layers import (Activation, Conv2D, Dense, Dropout,
                                                          Flatten, MaxPooling2D)
                                from keras.preprocessing.image import ImageDataGenerator

                                img_width, img_height = 150, 150

                                train_data_dir = '/yourdir/train'
                                validation_data_dir = '/yourdir/validation'
                                nb_train_samples = None       # fill in: number of training images
                                nb_validation_samples = None  # fill in: number of validation images
                                epochs = 50
                                batch_size = 16

                                if K.image_data_format() == 'channels_first':
                                    input_shape = (3, img_width, img_height)
                                else:
                                    input_shape = (img_width, img_height, 3)

                                model = Sequential()
                                model.add(Conv2D(32, (3, 3), input_shape=input_shape))
                                model.add(Activation('relu'))
                                model.add(MaxPooling2D(pool_size=(2, 2)))

                                model.add(Conv2D(32, (3, 3)))
                                model.add(Activation('relu'))
                                model.add(MaxPooling2D(pool_size=(2, 2)))

                                model.add(Conv2D(64, (3, 3)))
                                model.add(Activation('relu'))
                                model.add(MaxPooling2D(pool_size=(2, 2)))

                                model.add(Flatten())
                                model.add(Dense(64))
                                model.add(Activation('relu'))
                                model.add(Dense(64))
                                model.add(Activation('relu'))
                                model.add(Dropout(0.3))
                                model.add(Dense(1))
                                model.add(Activation('sigmoid'))

                                model.compile(loss='binary_crossentropy',
                                              optimizer='rmsprop',
                                              metrics=['accuracy'])

                                train_datagen = ImageDataGenerator(
                                    rescale=1. / 255,
                                    shear_range=0.1,
                                    zoom_range=0.1,
                                    horizontal_flip=True)

                                test_datagen = ImageDataGenerator(rescale=1. / 255)

                                train_generator = train_datagen.flow_from_directory(
                                    train_data_dir,
                                    target_size=(img_width, img_height),
                                    batch_size=batch_size,
                                    class_mode='binary')

                                validation_generator = test_datagen.flow_from_directory(
                                    validation_data_dir,
                                    target_size=(img_width, img_height),
                                    batch_size=batch_size,
                                    class_mode='binary')



                                This uses the Keras image flow API for data augmentation on the fly, and the data generators at the bottom of the code will adjust your images to whatever dimensions you specify at the top.
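If instead the two datasets (the (100,100) and (240,360) images from the question) are already loaded as NumPy arrays, one option is to resize everything to a common shape and then stack the arrays; a sketch with a plain-NumPy nearest-neighbour resize (the 150x150 target size is illustrative):

```python
import numpy as np

def resize_nearest(images, out_h, out_w):
    """Nearest-neighbour resize of an image batch (N, H, W, C) with plain NumPy."""
    n, h, w, c = images.shape
    rows = np.arange(out_h) * h // out_h  # source row for each output row
    cols = np.arange(out_w) * w // out_w  # source column for each output column
    return images[:, rows[:, None], cols, :]

small = np.random.rand(10, 100, 100, 3)  # first dataset
large = np.random.rand(10, 240, 360, 3)  # second dataset

# Resize both to one target size, then concatenate along the batch axis.
target = (150, 150)
combined = np.concatenate(
    [resize_nearest(small, *target), resize_nearest(large, *target)], axis=0)
print(combined.shape)  # (20, 150, 150, 3)
```

For real training you would normally use a proper interpolating resize (e.g. from PIL or OpenCV); the point here is only that both datasets must share one shape before they can live in one array.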














                                New contributor




                                Anthony Bozzo is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                                Check out our Code of Conduct.






                                $endgroup$


















                                        Thanks for contributing an answer to Data Science Stack Exchange!