Can we use decreasing step size to replace mini-batch in SGD?
As far as I know, mini-batching can be used to reduce the variance of the gradient estimate. But could we achieve the same result by using a decreasing step size and only a single sample in each iteration? And can we compare the convergence rates of the two approaches?
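For concreteness, here is a minimal sketch of the two variants I have in mind, on a toy least-squares problem (the quadratic objective, the batch size of 32, and the particular 1/t schedule are just illustrative assumptions, not a fixed setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: minimize (1/n) * sum_i (x_i @ w - y_i)^2.
n, d = 10_000, 20
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def grad(w, idx):
    """Stochastic gradient of the mean squared loss over the samples in idx."""
    Xb, yb = X[idx], y[idx]
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(idx)

def loss(w):
    return np.mean((X @ w - y) ** 2)

steps = 5_000

# Variant 1: mini-batch SGD with a constant step size.
w = np.zeros(d)
for t in range(steps):
    w -= 0.01 * grad(w, rng.integers(n, size=32))
print(f"mini-batch (B=32, constant step): loss = {loss(w):.6f}")

# Variant 2: single-sample SGD with a decreasing O(1/t) step size.
w = np.zeros(d)
for t in range(steps):
    w -= (0.05 / (1.0 + 0.01 * t)) * grad(w, rng.integers(n, size=1))
print(f"single sample, decaying step:     loss = {loss(w):.6f}")
```

For what it's worth, under i.i.d. sampling both estimators are unbiased, and the variance of the mini-batch gradient scales like 1/B, which is what makes the comparison tempting in the first place.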










machine-learning optimization gradient-descent mini-batch-gradient-descent






asked yesterday by coolcat






















1 Answer

Generally, the answer is "it's not known." The similarity between the effects of increasing the mini-batch size and decreasing the learning rate is mostly empirical; there is no known asymptotic formula relating the two. Note also that a small learning rate and a large mini-batch do not have the same effect: a batch-normalization layer, for example, behaves completely differently under the two regimes. The probability distribution of the gradients produced by mini-batches is also quite different from that of single samples (or of mini-batches of a significantly different size).
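The distributional point can be checked empirically. Below is a minimal sketch (the toy least-squares model and the particular batch sizes are assumptions for illustration) that samples many stochastic gradients at a fixed point and compares their spread around the full-batch gradient:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy least-squares setup; we only look at gradient statistics, not training.
n, d = 10_000, 20
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
w = np.zeros(d)  # fixed point at which the gradients are sampled

def grad(idx):
    """Stochastic gradient of the mean squared loss over the samples in idx."""
    Xb, yb = X[idx], y[idx]
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(idx)

full_grad = 2.0 * X.T @ (X @ w - y) / n

# For each batch size, draw 1000 stochastic gradients and measure how far
# they fall from the full-batch gradient.
for B in (1, 8, 64, 512):
    devs = [np.linalg.norm(grad(rng.integers(n, size=B)) - full_grad)
            for _ in range(1_000)]
    print(f"B = {B:3d}: mean deviation = {np.mean(devs):.3f} (std = {np.std(devs):.3f})")
```

The deviation shrinks roughly like 1/sqrt(B), but that only matches the first two moments; it says nothing about how the full gradient distribution interacts with components such as batch normalization, which is exactly where the two approaches diverge.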



















answered 22 hours ago by mirror2image
