Get insights from a random forest: variable importance analysis












I ran a variable importance analysis on my panel data (TV viewing over a specific period), which consists of the old panel (Panel 0) and the new panel (Panel 1).


I am interested in understanding the differences in viewing between the panels, as well as across demographics such as region. I used average daily total viewing in minutes as the target variable and selected age_group, region, and big cities as predictor variables in R (grouped/repeated by panel, 0 and 1).


I visualized the results in R; in each chart the Y-axis shows the prediction and the X-axis shows the variable's values. How can we interpret the outcome to turn it into actionable insight?


In other words, what is the difference between plotting mean(prediction) and plotting mean(actual), and how do we draw conclusions from it?
[chart image omitted]










Tags: machine-learning, r, random-forest, predictor-importance






edited yesterday by Andrew Mcghie






asked yesterday by DanielG, a new contributor




1 Answer







There are a number of different ways to compute variable importance for random forests.



Variable importance measures




1. The most basic way is to count how many times a variable is used across the trees of the random forest. The problem is that this ignores how large an effect the variable has in each tree.


2. An improvement is to calculate the decrease in impurity the variable causes when it is used to split a tree. This is the default measure of the importance() function in the randomForest R package. It has the problem of being biased towards continuous variables and variables with high cardinality.


3. Another approach is permutation importance: shuffle the values in one column of the data and compute the resulting decrease in accuracy. In R this is importance(type = 1). This is quite a good measure, although the features should be normalized to get the best results (shown in this paper). It is a little slower, but still quite fast.


4. The last approach is drop-column importance: drop a feature, retrain the random forest, and measure how much accuracy decreases relative to training on all the features. This is the slowest but most accurate approach.

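As a concrete sketch of measures 2 and 3, assuming the randomForest package is installed and using the built-in iris data purely as a stand-in for the actual viewing data:

```r
# Sketch: impurity-based vs. permutation importance with the randomForest
# package (assumed installed); iris stands in for the real viewing data.
library(randomForest)

set.seed(42)
fit <- randomForest(Sepal.Length ~ ., data = iris,
                    importance = TRUE,   # required for permutation importance
                    ntree = 500)

importance(fit, type = 1)  # permutation importance (%IncMSE)
importance(fit, type = 2)  # impurity-based importance (IncNodePurity)
varImpPlot(fit)            # quick visual comparison of both measures
```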


Interpreting the results



These methods all produce a list of features along with a measure of how important each one is. This gives you an idea of how the performance of the random forest depends on each feature.



E.g. you might find that age is the most important feature for predicting viewing time. You can then look at a graph of age vs. viewing time to see how those two variables interact.

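On the question of mean(prediction) vs. mean(actual): mean(actual) per group shows what was observed, while mean(prediction) shows what the model attributes to that group; a large gap between the two flags groups the model fits poorly. A minimal sketch, again with iris standing in for the panel data and Species for a grouping variable like region:

```r
# Sketch: mean(actual) vs. mean(prediction) per group (column names are
# illustrative; iris stands in for the viewing data).
library(randomForest)

set.seed(7)
fit  <- randomForest(Sepal.Length ~ ., data = iris, ntree = 300)
pred <- predict(fit, iris)

mean_actual <- tapply(iris$Sepal.Length, iris$Species, mean)
mean_pred   <- tapply(pred, iris$Species, mean)
round(cbind(mean_actual, mean_pred), 2)  # compare the two side by side
```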


With the drop-column approach it is even possible to find that removing a feature increases accuracy, i.e. the decrease in accuracy comes out as a negative value.

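The drop-column idea can be sketched as a loop that retrains without each feature and compares out-of-bag error to the full model; a positive difference means dropping the feature hurts, a negative one means the model does better without it. This assumes the randomForest package, whose regression fits expose the OOB error as the $mse component:

```r
# Sketch: drop-column importance via out-of-bag MSE (randomForest package
# assumed; iris again stands in for the real data).
library(randomForest)

set.seed(1)
target   <- "Sepal.Length"
full     <- randomForest(Sepal.Length ~ ., data = iris, ntree = 300)
full_mse <- tail(full$mse, 1)              # OOB error of the full model

features <- setdiff(names(iris), target)
drop_imp <- sapply(features, function(f) {
  reduced <- randomForest(Sepal.Length ~ .,
                          data = iris[, setdiff(names(iris), f)],
                          ntree = 300)
  tail(reduced$mse, 1) - full_mse          # > 0: dropping f hurts accuracy
})
sort(drop_imp, decreasing = TRUE)          # most important features first
```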


How to apply this to the domain of the problem is up to you and the real-world question you are trying to answer with this model.



Hope this helps :)



















answered yesterday by Andrew Mcghie





















