Inverse Binary Feature

I am feeding a binary value into my NN that represents whether the given example is a public holiday or not.

Is there a difference between assigning 0 to public holidays and 1 to all other days, or encoding it the other way around?

If I am not mistaken, it should make no difference, as the NN learns to assign the corresponding weights and bias anyway.
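
Put another way, the inverted feature is just x' = 1 - x, so any unit computing w*x + b can produce the identical pre-activation from x' as (-w)*x' + (b + w); only the sign of the weight and the bias offset change.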










Tags: neural-network, feature-selection, feature-extraction, feature-engineering






asked 2 days ago by 1b15 (183)
1 Answer






Yes, you are right: it does not matter how you encode the feature. Remember that the bias term is there for a reason. Consider a simple perceptron with weight w = -1, bias b = 1, and a ReLU activation function:

f(x) = ReLU(b + w*x) = ReLU(1 - x)

If x = 1, then f(1) = 0 and no signal passes. If x = 0, then f(0) = 1 and a signal flows through the network. Therefore, the network will learn the appropriate parameters to classify your data correctly for the given loss function.
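
A minimal numerical sketch of this equivalence (illustrative only; it assumes NumPy and a single ReLU unit, not anything specific to the setup above): inverting the encoding is absorbed by negating the weight and shifting the bias, so the unit's output is identical.

    import numpy as np

    def relu(z):
        return np.maximum(0.0, z)

    # Hypothetical encodings of the same five days:
    # x uses 1 = public holiday; x_inv uses the inverted convention.
    x = np.array([0.0, 1.0, 0.0, 0.0, 1.0])
    x_inv = 1.0 - x

    # Unit tuned for the original encoding: w = 1, b = 0.
    out_original = relu(1.0 * x + 0.0)

    # Unit tuned for the inverted encoding: w = -1, b = 1.
    out_inverted = relu(-1.0 * x_inv + 1.0)

    print(np.allclose(out_original, out_inverted))  # True: identical activations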



I hope this helps.






answered 2 days ago by Victor Oliveira (1114)