What is the difference between (objective / error / criterion / cost / loss) function in the context of...












The title says it all: I have seen several terms for functions so far that seem to be the same, or at least similar:


  • error function

  • criterion function

  • cost function

  • objective function

  • loss function


I was working on classification problems with

$$E(W) = \frac{1}{2} \sum_{x \in E}(t_x - o(x))^2,$$

where $W$ are the weights, $E$ is the evaluation set, $t_x$ is the desired output (the class) of $x$, and $o(x)$ is the actual output. This function seems to be commonly called the "error function".

But while reading about this topic, I have also seen the terms "criterion function" and "objective function". Do they all mean the same thing for neural nets?


  • Geoffrey Hinton called both the cross-entropy for softmax neurons and $E(W) = \frac{1}{2} \sum_{x \in E}(t_x - o(x))^2$ a cost function.
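For concreteness, the error function above can be evaluated directly. A minimal sketch with toy numbers of my own, assuming a linear model $o(x) = W \cdot x$ (the question does not specify the model):

```python
import numpy as np

# Toy illustration of E(W) = 1/2 * sum over the evaluation set of (t_x - o(x))^2,
# assuming a linear model o(x) = W . x (an assumption, not from the question).

def E(W, xs, ts):
    """Sum of squared errors over the evaluation set, halved."""
    return 0.5 * sum((t - np.dot(W, x)) ** 2 for x, t in zip(xs, ts))

xs = [np.array([1.0, 2.0]), np.array([0.0, 1.0])]  # inputs in the evaluation set
ts = [1.0, 0.0]                                     # desired outputs (classes)
W = np.array([0.5, 0.25])                           # some candidate weights
print(E(W, xs, ts))
```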







terminology






asked Feb 15 '16 at 9:34, edited Feb 16 '16 at 15:01 – Martin Thoma








  • An answer provided on SE explains some differences between these similar terms. I believe criterion, error, & cost are synonyms, however. – cdeterman, Feb 15 '16 at 19:41
























3 Answers
























The error function is the function representing the difference between the values computed by your model and the real values. In the optimization field, people often speak of two phases: a training phase, in which the model is fitted, and a test phase, in which the model's behaviour is checked against the real output values. In the training phase the error is used to improve the model, while in the test phase it is used to check whether the model works properly.

The objective function is the function you want to maximize or minimize. When people call it a "cost function" (again, it is the objective function), it is because they only want to minimize it. I see the cost function and the objective function as the same thing seen from slightly different perspectives.

The "criterion" is usually the rule for stopping the algorithm you are using. Suppose you want your model to find the minimum of an objective function; in practice it is often hard to find the exact minimum, and the algorithm could keep running for a very long time. In that case you might accept stopping "near" the optimum with a particular stopping criterion.

I hope this gives you a correct idea of these terms.








answered Feb 16 '16 at 11:13 – Andrea Ianni ௫








  • Based on this definition, I guess "loss function" is a synonym for "cost function" (i.e. one that we want to minimize)? Or does it fall under a separate bucket? – Atlas7, May 11 '17 at 15:10






  • To my knowledge, 'loss function' is just another way to refer to the 'cost function', so... same bucket! :) – Andrea Ianni ௫, May 12 '17 at 9:00
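The interplay this answer describes between an objective being minimized and a stopping criterion can be sketched as follows. This is a toy example of my own, assuming a linear model and the question's squared-error objective:

```python
import numpy as np

# Toy sketch (my own, not from the answer): minimize the objective
# E(W) = 1/2 * sum_x (t_x - o(x))^2 for a linear model o(x) = W . x
# by gradient descent, halting via a stopping criterion.

def error(W, X, t):
    """Objective/cost/error function: half the sum of squared residuals."""
    return 0.5 * np.sum((t - X @ W) ** 2)

def grad(W, X, t):
    """Gradient of the error with respect to the weights W."""
    return -X.T @ (t - X @ W)

def fit(X, t, lr=0.01, tol=1e-8, max_iter=10_000):
    W = np.zeros(X.shape[1])
    prev = error(W, X, t)
    for _ in range(max_iter):
        W -= lr * grad(W, X, t)
        cur = error(W, X, t)
        # Stopping criterion: halt once the improvement is negligible.
        if prev - cur < tol:
            break
        prev = cur
    return W

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
t = np.array([1.0, 2.0, 3.0])
W = fit(X, t)  # stops "near" the optimum, not at the exact minimum
```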














objective function (aka criterion) - a function to be minimized or maximized


  • error function - an objective function to be minimized

    • aka cost, energy, loss, penalty, or regret function; in some scenarios the loss is with respect to a single example and the cost is with respect to a set of examples

  • utility function - an objective function to be maximized

    • aka fitness, profit, or reward function










answered 2 days ago – Michael (new contributor)
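The per-example "loss" versus per-set "cost" distinction mentioned in this answer can be sketched as follows (the function names and numbers are mine, not from the answer):

```python
# Hypothetical sketch: a "loss" evaluated on a single example versus a
# "cost" aggregated over a set of examples.

def loss(t, o):
    """Loss for a single example: squared error."""
    return (t - o) ** 2

def cost(ts, os):
    """Cost over a set of examples: mean of the per-example losses."""
    return sum(loss(t, o) for t, o in zip(ts, os)) / len(ts)

ts = [1.0, 0.0, 1.0]   # desired outputs
os = [0.9, 0.2, 0.7]   # model outputs
print(cost(ts, os))    # mean squared error over the whole set
```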




                  When applied to machine learning (ML), these terms could all mean the same thing or not, depending on the context.



                  From the optimization standpoint, one would always like to have them minimized (or maximized) in order to find the solution to ML problem.



                  Each term came from a different field (optimization, statistics, decision theory, information theory, etc.) and brought some overlapping to the mixture:



                  it is quite common to have a loss function, composed of the error + some other cost term, used as the objective function in some optimization algorithm :-)



                  When dealing with modern neural networks, almost any error function could eventually be called a cost/loss/objective and the criterion at the same time. Therefore, it is important to distinguish between their usages:




                  • functions optimized directly while training: usually referred to as loss functions,
                    but it is quite common to see the terms "cost", "objective" or simply "error" used
                    as well. These functions can be combinations of several other loss functions,
                    including different error terms and regularizers (e.g., mean-squared error + L1 norm of
                    the weights).


                  • functions optimized indirectly: usually referred to as metrics. These are used as
                    criteria for performance evaluation and for other heuristics (e.g., early
                    stopping, cross-validation). Almost any loss function can be used as a metric, which is
                    quite common. The opposite, however, may not work well, since
                    commonly used metric functions (such as F1, AUC, IoU and even binary accuracy) are not
                    suitable for direct optimization.
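To see why a metric like binary accuracy is unsuitable for direct optimization (a small illustration of my own, not from the answer): thresholding discards the magnitude of the model's outputs, so the metric is piecewise constant in the parameters and provides no gradient signal.

```python
import numpy as np

def binary_accuracy(logits, labels):
    """Metric: fraction of correct predictions after thresholding at zero."""
    preds = (logits > 0).astype(int)
    return np.mean(preds == labels)

labels = np.array([1, 0, 1, 1])

# Two very different sets of logits yield the exact same accuracy:
# small parameter changes usually don't change the metric at all,
# so a gradient-based optimizer sees a flat (zero-gradient) surface.
print(binary_accuracy(np.array([0.1, -0.1, 0.2, -0.3]), labels))   # 0.75
print(binary_accuracy(np.array([5.0, -9.0, 8.0, -0.01]), labels))  # 0.75
```

This is why training minimizes a smooth surrogate loss (e.g., cross-entropy) while accuracy, F1, AUC, etc. are tracked as metrics on the side.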








                  answered 2 days ago by m0nzderr



