Is a good shuffle random state for training data really good for the model?

I'm using Keras to train a binary classifier neural network. To shuffle the training data I am using the shuffle function from scikit-learn.

I observe that for some values of shuffle_random_state (the seed passed to shuffle()), the network gives really good results (~86% accuracy), while for others it does not (~75% accuracy). So I run the model for shuffle_random_state values 1 to 20 and choose the random_state that gives the best accuracy for the production model.

Is this a good approach? With those good shuffle_random_state values, is the network actually learning better?

machine-learning neural-network keras scikit-learn

asked 21 hours ago by Chirag Gupta
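
For reference, the snippet below is a minimal sketch of the procedure described in the question: shuffle the training data with a range of seeds, train one model per seed, and keep the seed whose run gives the best validation accuracy. The synthetic data, network architecture, and hyperparameters are assumed placeholders, not details from the question. Note that Keras's validation_split takes the last fraction of the arrays as passed, so the shuffle seed also decides which rows end up in the validation set.

    # Sketch only: hypothetical data and model standing in for the setup above.
    import numpy as np
    from sklearn.utils import shuffle
    from tensorflow import keras

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20)).astype("float32")   # placeholder features
    y = (X[:, 0] > 0).astype("float32")                 # placeholder binary labels

    def build_model(n_features):
        model = keras.Sequential([
            keras.Input(shape=(n_features,)),
            keras.layers.Dense(32, activation="relu"),
            keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model

    val_acc_by_seed = {}
    for seed in range(1, 21):
        # Reorder the training data with this seed, as described in the question.
        X_s, y_s = shuffle(X, y, random_state=seed)
        model = build_model(X_s.shape[1])
        # validation_split takes the *last* 20% of X_s/y_s, so each seed is
        # effectively evaluated on a different validation split.
        hist = model.fit(X_s, y_s, epochs=10, batch_size=32,
                         validation_split=0.2, verbose=0)
        # The key is "val_acc" in some older Keras versions.
        val_acc_by_seed[seed] = hist.history["val_accuracy"][-1]

    best_seed = max(val_acc_by_seed, key=val_acc_by_seed.get)
    print(best_seed, val_acc_by_seed[best_seed])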







  • The accuracy you are mentioning, is it on the validation split? If so, what is the accuracy on the training split? – Antonio Jurić, 18 hours ago

  • The mentioned accuracy is on the validation split. – Chirag Gupta, 18 hours ago

  • What is the accuracy on the training split in those two cases? – Antonio Jurić, 18 hours ago

  • Training loss and accuracy are almost the same in both cases; training accuracy goes to 100% if I keep training, and the rate of increase is also almost the same in both cases (for the training data). – Chirag Gupta, 18 hours ago

1 Answer

If this split is a train/validation split (not a held-out test set), then you should be doing cross-validation. You are going to be overly optimistic about the performance of your model for this set of features and hyperparameters if you try to split the data "just right"; cross-validation will give you a more accurate portrayal regardless of the split. If this is a train/test split (the test set being a held-out set), then this is very bad practice, since you are basing your decision about how to make the split on the performance of the test set.

answered 12 hours ago by Wes
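
To make the cross-validation suggestion concrete, here is a minimal sketch of k-fold cross-validation around a small Keras classifier using scikit-learn's StratifiedKFold. The data, architecture, and hyperparameters are assumptions for illustration, not the answerer's code. The mean of the fold scores is a more honest estimate than any single, carefully chosen split, and the spread across folds shows how much of an 86% vs 75% gap could simply be split-to-split variance.

    # Sketch only: hypothetical data and model, illustrating k-fold cross-validation.
    import numpy as np
    from sklearn.model_selection import StratifiedKFold
    from tensorflow import keras

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20)).astype("float32")   # placeholder features
    y = (X[:, 0] > 0).astype("int64")                    # placeholder binary labels

    def build_model(n_features):
        model = keras.Sequential([
            keras.Input(shape=(n_features,)),
            keras.layers.Dense(32, activation="relu"),
            keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model

    fold_scores = []
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for train_idx, val_idx in cv.split(X, y):
        model = build_model(X.shape[1])                  # fresh model for every fold
        model.fit(X[train_idx], y[train_idx], epochs=10, batch_size=32, verbose=0)
        _, acc = model.evaluate(X[val_idx], y[val_idx], verbose=0)
        fold_scores.append(acc)

    # Report the mean and spread across folds instead of a single split.
    print(f"{np.mean(fold_scores):.3f} +/- {np.std(fold_scores):.3f}")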












