Odd Loss Curves for Object Detection Task
I'm re-training a Single Shot Detector (specifically ssdlite_mobilenet_v2_coco from the TensorFlow model zoo) to detect objects in a new set of images. I have about 15k images in the training set and about 4k in the eval set. The mini-batch size is 24; otherwise the settings are the model zoo defaults.
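For reference, the relevant fragment of my training configuration looks roughly like this (checkpoint path elided; everything not shown is the stock pipeline.config that ships with the model):

    train_config {
      batch_size: 24
      fine_tune_checkpoint: ".../model.ckpt"  # pre-trained weights from the model zoo
      # all other fields left at the ssdlite_mobilenet_v2_coco defaults
    }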



Note that after training the model gives excellent performance on the test set (about another 3k images). My issue isn't the quality of the model, but understanding the loss curves I'm seeing.



As expected, for the first few epochs the bounding box predictions are all over the place and wildly inaccurate. Very quickly the net learns that it's better to predict nothing. At this point I'd expect both the training and evaluation loss to drop enormously, but I see only the training loss drop; performance on the evaluation set is virtually unchanged.



As training progresses, the bounding box predictions and the classifications for those boxes get more and more accurate. What I'd expect to see is both the training and evaluation losses dropping, with the training loss perhaps dropping faster. What I actually see is the training loss remaining nearly constant while the evaluation loss continues to drop. Somehow we're not improving on the training set, but our generalization performance is improving, which seems quite odd to me.



The model is regularized, so it's possible that the net is in fact learning more generalizable solutions that yield similar training set performance. However, the regularization loss continues to grow too, which would seem to indicate that the model isn't doing that.
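One thing worth noting when reading the curves: the total training loss that TensorBoard plots is the sum of the localization, classification, and regularization terms, so a flat total alongside a growing regularization term would mean the data terms are still shrinking. A toy decomposition with hypothetical numbers (not my actual values) makes this concrete:

    # Hypothetical per-epoch loss components -- illustration only, not real data
    loc_loss = [4.0, 2.5, 1.8, 1.4, 1.1]   # box-regression (localization) loss
    cls_loss = [6.0, 3.0, 2.2, 1.7, 1.4]   # classification loss
    reg_loss = [0.5, 4.0, 5.5, 6.4, 7.0]   # L2 regularization loss

    total = [l + c + r for l, c, r in zip(loc_loss, cls_loss, reg_loss)]
    data  = [l + c for l, c in zip(loc_loss, cls_loss)]
    print("total:", total)  # nearly constant: [10.5, 9.5, 9.5, 9.5, 9.5]
    print("data :", data)   # steadily falling: [10.0, 5.5, 4.0, 3.1, 2.5]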



Here are some images from TensorBoard showing what I'm seeing:



[TensorBoard screenshots: training loss, evaluation loss, and regularization loss curves]



Any insights?
Tags: neural-network, tensorflow, object-detection
asked 10 hours ago by Oliver