Keras stateful LSTM returns NaN for validation loss














I'm having some trouble interpreting what's going on in the training and validation loss, sensitivity, and specificity for my model. My validation sensitivity, specificity, and loss are all NaN, and I'm trying to diagnose why.



My training set has 50 examples of time series with 24 time steps each and 500 binary labels per time step (shape: (50, 24, 500)). My validation set has shape (12, 24, 500). Of course, I expect a neural network to overfit massively.



Because I was interested in a stateful LSTM, I followed philipperemy's advice and used model.train_on_batch with batch_size = 1. I have two inputs: one called seq_model_in, which is a time series, and one called feat_in, which is not a time series (and so is concatenated into the model after the LSTM but before the classification layer).



As my classes are highly imbalanced, I also used Keras's class_weight argument. To make this work in a multi-label setting, I concatenated two columns to the front of my responses (one of all 0s and one of all 1s), so that the final shape of the response is (50, 502); a sketch of this padding step is below.
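Roughly, the padding looks like this (a sketch; the array name y_train is illustrative and my real code differs in details):

    import numpy as np

    # y_train: binary response matrix with 500 labels per example (illustrative name).
    # Prepend one all-zeros column and one all-ones column so the class_weight
    # mapping sees both classes in every row, giving 502 columns in total.
    zeros = np.zeros((y_train.shape[0], 1))
    ones = np.ones((y_train.shape[0], 1))
    y_padded = np.concatenate([zeros, ones, y_train], axis=1)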



I have coded this model in Keras for time series prediction (multi-label prediction at the next time step); out_dim and stateful are set earlier in my script, and the compile call is omitted here:

    from keras.layers import Input, Dense, LSTM, Concatenate
    from keras.models import Model

    feat_in = Input(shape=(1,), batch_shape=(1, 500), name='feat_in')
    feat_dense = Dense(out_dim, name='feat_dense')(feat_in)
    seq_model_in = Input(shape=(1,), batch_shape=(1, 1, 500), name='seq_model_in')
    lstm_layer = LSTM(10, batch_input_shape=(1, 1, 500), stateful=stateful)(seq_model_in)
    merged_after_lstm = Concatenate(axis=-1)([lstm_layer, feat_dense])
    dense_merged = Dense(502, activation="sigmoid")(merged_after_lstm)
    model = Model(inputs=[seq_model_in, feat_in], outputs=dense_merged)
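Because everything runs through train_on_batch with batch_size = 1, the per-epoch loop follows the usual stateful pattern, roughly like this (a sketch rather than my exact code; the array names and class-weight values are illustrative):

    # X_seq: (50, 24, 500) series, X_feat: (50, 500) static features,
    # Y: padded responses with 502 columns per step -- the arrays described above.
    n_epochs = 50                  # illustrative
    cw = {0: 1.0, 1: 10.0}         # illustrative class weights

    for epoch in range(n_epochs):
        for i in range(X_seq.shape[0]):            # loop over sequences
            for t in range(X_seq.shape[1] - 1):    # predict the next time step
                x_t = X_seq[i, t].reshape(1, 1, 500)
                x_f = X_feat[i].reshape(1, 500)
                y_next = Y[i, t + 1].reshape(1, 502)
                model.train_on_batch([x_t, x_f], y_next, class_weight=cw)
            model.reset_states()                   # reset state between sequences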



The training and validation metrics and loss do not change from epoch to epoch, which is worrisome (and, I think, a symptom of overfitting), but I'm also having trouble understanding the graphs themselves.



Here are the TensorBoard graphs:



[TensorBoard graphs (images): training loss, validation loss, training sensitivity, validation sensitivity, training specificity, validation specificity]



The training loss should (roughly) be decreasing per epoch, as should the validation loss. The training sensitivity and specificity are 92% and 97.5%, respectively (another possible sign of overfitting).
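For context, custom sensitivity and specificity metrics in Keras are usually written against the backend like this (a sketch; my own implementation may differ in details):

    from keras import backend as K

    def sensitivity(y_true, y_pred):
        # true positives / actual positives; epsilon avoids 0/0 on all-negative batches
        true_pos = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        actual_pos = K.sum(K.round(K.clip(y_true, 0, 1)))
        return true_pos / (actual_pos + K.epsilon())

    def specificity(y_true, y_pred):
        # true negatives / actual negatives
        true_neg = K.sum(K.round(K.clip((1 - y_true) * (1 - y_pred), 0, 1)))
        actual_neg = K.sum(K.round(K.clip(1 - y_true, 0, 1)))
        return true_neg / (actual_neg + K.epsilon())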



My questions are:




  • Am I right in thinking the sensitivity and specificity graphs should all have the same general shape as the train_spec graph?

  • Why does the training sensitivity graph look like this?

  • Why does the validation loss return NaN?










keras lstm multilabel-classification

asked Feb 3 '18 at 20:32 by StatsSorceress
2 Answers

No idea about the TensorBoard graphs, but the NaN for validation loss could be caused by an unexpectedly large or small number. As your training dataset is very small, there is a high chance that the validation data has a very different underlying distribution.
If the latent space of your NN isn't continuous and validation samples fall too far from the distribution of the training samples, the NN will not handle them well. In a classification problem this would most likely result in misclassification. However, since you are generating a prediction, it could mean that the NN produces a very large or very small number, and your loss function then ends up dividing something by zero or by infinity.



I'd suggest looking at the loss function you are using: if it involves a division, can the denominator be zero? One solution would be to use the clip function to make sure that your predicted tensor stays between 0 + epsilon and 1:



    from keras import backend as K

    def clipped_squared_error_loss(y_true, y_pred):
        # clip both tensors to [epsilon, 1] so the loss never sees exact zeros
        y_true = K.clip(y_true, K.epsilon(), 1)
        y_pred = K.clip(y_pred, K.epsilon(), 1)
        return K.sum(K.square(y_pred - y_true))
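If you go this route, plug the custom loss in when compiling, for example (a sketch; the optimizer here is just a placeholder):

    model.compile(optimizer='rmsprop', loss=clipped_squared_error_loss)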


You may also want to look into the Kullback-Leibler divergence as a means to make the latent space of your NN continuous.






answered 22 hours ago by Eli






















I am sure you know the concept of vanishing gradients. You may also want to look into the concept of exploding gradients: a very large gradient can push the weights to values that produce infinities or NaNs in the forward pass, which then propagate into the loss.
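If exploding gradients turn out to be the problem, one common mitigation in Keras is to clip gradients in the optimizer, for example (a sketch; the optimizer, loss, and clip value are placeholders):

    from keras import optimizers

    # clipnorm rescales any gradient whose L2 norm exceeds 1.0 (value is arbitrary)
    opt = optimizers.RMSprop(clipnorm=1.0)
    model.compile(optimizer=opt, loss='binary_crossentropy')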






answered 19 hours ago by Savinay_