Optimal combination of hyperparameters and model selection
This is a general question that often comes up when tuning machine learning and deep learning algorithms such as recurrent neural networks, multilayer perceptrons, or SVMs.

When we tune the hyperparameters of a deep learning model, every possible combination of hyperparameters results in a different model, and we select an optimal combination based on the loss curves. What exactly is an optimal combination of hyperparameters?

My question is exactly this: there is an infinite number of possible hyperparameter combinations, and we know that many different configurations can give similar generalization error. What should the model selection decision be based upon? And how do I know that I have hit the bottom, and that no other combination of hyperparameters will give me better results?
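To make the setup concrete, here is a minimal, hypothetical sketch (all data and hyperparameter values below are invented for illustration): each value of a regularization hyperparameter yields a different fitted model, and the candidates are ranked by their loss on a held-out validation split.

```python
# Hypothetical illustration: each hyperparameter value (here, the ridge
# penalty "lam") defines a different model; we fit each on the training
# split and compare the fitted models by validation loss.
train = [(0.0, 0.1), (1.0, 1.9), (2.0, 4.2), (3.0, 5.8)]  # (x, y) pairs
valid = [(1.5, 3.1), (2.5, 5.0)]

def fit_ridge_slope(data, lam):
    # Closed-form 1-D ridge regression through the origin: w = Sxy / (Sxx + lam)
    sxy = sum(x * y for x, y in data)
    sxx = sum(x * x for x, _ in data)
    return sxy / (sxx + lam)

def val_mse(w, data):
    # Mean squared error of the fitted slope on the held-out split
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

lambdas = [0.0, 0.1, 1.0, 10.0]  # four combinations -> four distinct models
results = {lam: val_mse(fit_ridge_slope(train, lam), valid) for lam in lambdas}
best_lam = min(results, key=results.get)
```

This only selects the best of the combinations actually tried; it says nothing about untried ones, which is exactly the question's difficulty.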







machine-learning deep-learning model-selection
asked Jun 10 '18 at 10:26









naive





bumped to the homepage by Community yesterday


This question has answers that may be good or bad; the system has marked it active so that they can be reviewed.
Well, the answer is pretty simple: we keep on exploring as much as we can, but we do keep a set of values that currently does the job well. That doesn't mean there aren't better parameters than the current ones, though.
– Aditya, Jun 10 '18 at 10:59
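The strategy in this comment can be sketched as a best-so-far loop; here a hypothetical toy objective stands in for an actual training run, and all ranges are invented:

```python
import random

# Sketch of the comment's strategy: keep exploring hyperparameter settings,
# always remembering the best configuration seen so far, while accepting
# that an untried setting might still be better.
random.seed(0)

def validation_error(lr, reg):
    # Stand-in for "train a model, measure held-out error"; this toy
    # surface has its minimum at lr=0.1, reg=0.01.
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

best = None  # incumbent: (error, lr, reg)
for _ in range(200):
    lr = 10 ** random.uniform(-4, 0)   # sample learning rate on a log scale
    reg = 10 ** random.uniform(-4, 0)  # sample regularization on a log scale
    err = validation_error(lr, reg)
    if best is None or err < best[0]:
        best = (err, lr, reg)          # keep the set that currently does best
```

The loop can stop at any budget and always has a usable incumbent, but it never certifies that the incumbent is globally optimal.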
The best combination is the one that optimizes our objective, which is typically to predict unseen data. It is in general impossible to know whether or not we have found this optimum, except in the most trivial cases. We just do the best we can and hope for a reasonable approximation.
– dsaxton, Jul 10 '18 at 22:42
2 Answers
This comes down to "how can we be sure we've found the global minimum; what if it's just a few steps away?"

Until we go there, it's unknown. However, there is a clever way to be very sure we've found a global minimum. I am too inexperienced to understand it fully, but here it is ("Tensor Methods: A new paradigm for training probabilistic models and for feature learning", Anima Anandkumar):
https://www.youtube.com/watch?v=B4YvhcGaafw

As I recall, they "un-bend" the search space so that it literally exposes the global minimum, which can then simply be selected.

If someone could comment on my understanding of the video, I would be thankful.
          edited Jun 10 '18 at 15:13
          answered Jun 10 '18 at 15:00
Kari
"What exactly is an optimal combination of hyperparameters?" The combination of hyperparameters that produces the lowest possible error on unseen data for that model architecture.

"What should the model selection decision be based upon?" The best way I know of to estimate error on unseen data is k-fold cross-validation. I examine the mean and standard deviation of the error across the k folds; in most cases, I select the model with the smallest standard deviation among the models with the best mean error.

"And how do I know I have hit the bottom and no other combination of hyperparameters will give me better results?" As far as I know, one can never know which combination of hyperparameters will produce the lowest possible error on unseen data. In my experience, a good search strategy will get you close enough to the optimal combination that further search is not worth the effort.
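The procedure described here can be sketched as follows. The data, the toy "model", and the shrinkage hyperparameter are all invented for illustration; the selection rule is the one from this answer: best mean cross-validation error first, ties broken by the smaller standard deviation across folds.

```python
import statistics

def kfold_indices(n, k):
    # Yield (train_idx, val_idx) pairs for k-fold cross-validation.
    fold = n // k
    idx = list(range(n))
    for i in range(k):
        val = idx[i * fold:(i + 1) * fold]
        yield idx[:i * fold] + idx[(i + 1) * fold:], val

def cv_mean_std(data, k, fit, error):
    # Mean and standard deviation of the validation error across the k folds.
    scores = []
    for tr, va in kfold_indices(len(data), k):
        model = fit([data[i] for i in tr])
        scores.append(error(model, [data[i] for i in va]))
    return statistics.mean(scores), statistics.stdev(scores)

# Toy "model": predict a constant, shrunk toward zero by a hyperparameter.
data = [2.0, 2.2, 1.9, 2.1, 5.0, 2.05, 1.95, 2.15]

def make_fit(shrink):
    return lambda train: (1 - shrink) * statistics.mean(train)

def sq_err(pred, val):
    return statistics.mean((pred - y) ** 2 for y in val)

results = {s: cv_mean_std(data, 4, make_fit(s), sq_err) for s in [0.0, 0.1, 0.3]}
# Selection rule: lowest mean error; ties broken by the smaller std dev
# (tuple comparison on (mean, std) implements exactly that ordering).
best_shrink = min(results, key=lambda s: results[s])
```

With real models, `make_fit` would wrap an actual training routine and `sq_err` the chosen loss; the mean/std bookkeeping and the selection rule stay the same.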
                  answered Oct 9 '18 at 2:23
from keras import michael
Thanks for contributing an answer to Data Science Stack Exchange!