Handling a combined dataset of numerical and categorical features for Regression

I have a dataset with a large number of categorical features and a few numerical features, and I want to predict the probability that any given input belongs to one of the two classes of a binary output feature. I don't want to solve a classification problem, for reasons that are well outlined in this link.



I have seen many tutorials on how to handle each feature type independently, but am less sure how to handle them together.










python · regression · pandas

asked May 21 '18 at 12:49 · dward4

          3 Answers
          I don't want to solve a classifier problem for reasons that are well outlined in this link.




I doubt that the link is telling you to stop performing classification tasks - the problem you propose is a classic example of a classification problem. As I understand your source, it is warning against using scoring rules as a heuristic, not against classification itself.



For the problem you described I would propose a simple Naive Bayes approach. To make your numerical values discrete, you can simply use the mean of two adjacent numeric values as a threshold. E.g. for a list [1, 2] of numeric values, just split them at a threshold of 1.5 and check above and below.
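As a minimal sketch of this idea (the data and column names here are invented for illustration), one could discretize the numeric column into bins and feed everything to scikit-learn's CategoricalNB, which also returns the class probabilities the question asks for:

```python
import numpy as np
import pandas as pd
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import KBinsDiscretizer, OrdinalEncoder

# Invented data: one categorical and one numerical feature, binary target.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "color": rng.choice(["red", "green", "blue"], size=200),
    "size": rng.normal(5.0, 2.0, size=200),
})
y = (df["size"] + 2 * (df["color"] == "red") > 6).astype(int)

# Discretize the numeric column into ordinal bins, and ordinal-encode the
# categorical column, so CategoricalNB sees only discrete features.
size_bins = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile")
color_codes = OrdinalEncoder().fit_transform(df[["color"]])
X = np.column_stack([color_codes, size_bins.fit_transform(df[["size"]])])

clf = CategoricalNB().fit(X, y)
proba = clf.predict_proba(X)  # class probabilities, shape (200, 2)
```

Note this uses quantile bins rather than the adjacent-midpoint thresholds described above; the principle (discretize, then apply Naive Bayes) is the same.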






answered May 21 '18 at 13:51 · André


            One could approach this in two general ways:



            1) bottom up: thinking about unifying the data somehow to begin with



            2) top down: deciding how the data needs to look based on the final model you wish to use



            Do you already know which model you will use? If that is fixed (for whatever reason), you already know you need to get your data into the correct form, be it numerical or categorical.



As you tagged your question with regression, I can tell you that you need to make your data all numerical, so regression can work.





            An example of making numerical data categorical would be to put it into bins. Imagine we have values ranging from zero to ten: [0.173, 7.88, 3.91, ...]. You could simply say that values between 0.00 and 0.99 are category A, values between 1.00 and 1.99 are category B, and so on.
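As a quick illustration of that binning scheme (the values and labels are invented), pandas can do this directly with pd.cut:

```python
import pandas as pd

values = pd.Series([0.173, 7.88, 3.91, 9.2, 1.05])

# Ten equal-width bins over the range 0-10, labelled A through J.
categories = pd.cut(values, bins=range(0, 11), labels=list("ABCDEFGHIJ"))
```

Each value is replaced by the label of the interval it falls into, e.g. 0.173 becomes category A.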



            [Edit:]



A slightly more sophisticated way of defining the bins would be to base them on some characteristic statistics of your dataset. For example, have a look at the bin-estimation methods implemented in Python's NumPy (the bins argument of numpy.histogram). Of the available methods there, I have found the Doane method to work best - it will depend on your data though, so read the descriptions.
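For example (with invented data), numpy.histogram_bin_edges can compute the edges with the Doane estimator, and numpy.digitize then maps each value to a discrete bin index:

```python
import numpy as np

rng = np.random.default_rng(42)
values = rng.lognormal(mean=1.0, sigma=0.5, size=1000)  # a skewed numeric feature

# Let the Doane estimator choose the bin edges, then convert each value
# into a bin index in 0 .. number_of_bins - 1.
edges = np.histogram_bin_edges(values, bins="doane")
bin_index = np.digitize(values, edges[1:-1])
```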





Making categorical values numerical in a meaningful way depends a little more on your data. It is easy to make them numeric, but you should focus on doing it in such a way as to retain as much of the information each variable contains, as well as the relative relationships between the categories you started with. E.g. converting colours into integers would allow you to perform regression, but if yellow becomes 1 and purple 10, the model needs to learn that purple isn't necessarily 10 times bigger than yellow - and that is difficult in the context of regression!
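Following the colour example (data invented for illustration), one-hot encoding with pandas avoids imposing an artificial order on the categories:

```python
import pandas as pd

df = pd.DataFrame({
    "colour": ["yellow", "purple", "yellow", "green"],
    "price": [1.2, 9.8, 3.4, 5.5],
})

# Each colour becomes its own 0/1 column, so the model never has to
# unlearn a spurious ordering like purple > yellow; numeric columns
# pass through unchanged.
encoded = pd.get_dummies(df, columns=["colour"])
```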






answered May 21 '18 at 13:45, edited May 21 '18 at 14:59 · n1k31t4

• Great, currently I'm using one-hot encoding on my columns to create a larger data frame of binary variables. Thanks for the in-depth way of thinking through it. – dward4, May 21 '18 at 14:15










• @dward4 - you're welcome :) - have a look at the extra information I added regarding the use of histogram methods. – n1k31t4, May 21 '18 at 15:00




















Adding to the answers above, there is one more way to do this: Target Encoding, which means encoding your categorical features according to the target variable using some aggregate (it works out of the box).
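A minimal sketch of mean-target encoding with pandas (data invented; in practice, compute the aggregates on the training fold only, or with smoothing, to avoid target leakage):

```python
import pandas as pd

df = pd.DataFrame({
    "city": ["a", "a", "b", "b", "b", "c"],
    "target": [1, 0, 1, 1, 0, 0],
})

# Replace each category with the mean of the binary target observed for it.
means = df.groupby("city")["target"].mean()
df["city_encoded"] = df["city"].map(means)
```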






answered May 21 '18 at 16:06 · Aditya