Difference between “reducing batch_size” and “increasing epochs” to decrease the loss?

In my experience, both reducing batch_size and increasing epochs can decrease the loss. But I would like to know: is there any difference between the two? Does a decreased loss mean the same thing regardless of how it was reached, or does the way you reach it affect the results?

For example, I got the same loss of 2.5e-4 in both of the following cases (a minimal sketch of the two runs follows the list):

1. batch_size = 1, epochs = 100
2. batch_size = 60, epochs = 1000

Are they the same result?
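For concreteness, here is what the two configurations might look like in Keras. This is only an illustrative sketch: the LSTM model, the dataset of 600 samples, and all shapes are assumptions; only the batch_size and epochs values come from the question.

    import numpy as np
    from tensorflow import keras

    # Hypothetical toy data: 600 sequences of 10 timesteps x 1 feature.
    x = np.random.rand(600, 10, 1).astype("float32")
    y = np.random.rand(600, 1).astype("float32")

    def build_model():
        # Build a fresh model per run so case 2 does not continue training case 1.
        model = keras.Sequential([
            keras.layers.LSTM(32, input_shape=(10, 1)),
            keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")
        return model

    # Case 1: 600/1 = 600 weight updates per epoch, times 100 epochs = 60,000 updates.
    build_model().fit(x, y, batch_size=1, epochs=100, verbose=0)

    # Case 2: 600/60 = 10 weight updates per epoch, times 1000 epochs = 10,000 updates.
    build_model().fit(x, y, batch_size=60, epochs=1000, verbose=0)

Even at the same final training loss, the two runs perform very different numbers of weight updates, each computed from a different amount of data, so the learned weights need not be the same.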

Tags: lstm, loss-function, epochs

asked 22 hours ago by user145959 (edited 22 hours ago)

1 Answer

Whether a given combination of batch size and epochs overfits does not generalize from one problem to another; it depends on your data and on the architecture of your model.

A friend of mine ran into these scenarios with a CPU-based image classifier:

1) Using more epochs could take a very long time to reach a desirable outcome.

2) Preferring small batch sizes with few epochs took less time to compute, but the model did not reach the desirable outcome within that epoch limit.

I used a GPU, and my results were different. With few epochs and a better convolutional architecture, I reached higher accuracy with batch sizes that were not so small.

When I increased epochs, my accuracy improved until the model began to overfit. When I increased batch sizes, my accuracy stopped improving at a reasonable rate.

I had to find a balance at which my model was acceptable.

It is a balance, yes, and one that, I am afraid, the designer has to manage. Batch size and epochs are not always inversely proportional; the dataset and the architecture make all the difference in this debate.
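One way to make this concrete: the two configurations from the question do not even perform comparable amounts of optimization work. A small sketch of the arithmetic, assuming a Keras-style training loop (one weight update per batch) and the hypothetical dataset of 600 samples from the sketch above:

    import math

    def total_updates(n_samples, batch_size, epochs):
        # One weight update per batch; the last batch of an epoch may be partial.
        return math.ceil(n_samples / batch_size) * epochs

    n = 600  # hypothetical dataset size
    print(total_updates(n, batch_size=1, epochs=100))    # 60000 updates, 1 sample each
    print(total_updates(n, batch_size=60, epochs=1000))  # 10000 updates, 60 samples each

Small batches yield many noisy updates, large batches fewer and smoother ones, so reaching the same training loss does not imply that the two runs found the same solution.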

answered 19 hours ago by Savinay_