Get insights from Random forest::Variable Importance analysis
I ran variable importance on my panel data (TV viewing over a specific period), which consists of the old panel (Panel 0) and the new panel (Panel 1).
I am interested in understanding the differences in viewing based on the panel, as well as demographics such as region. I used average daily total viewing in minutes as the target variable and selected age_group, region and big cities as predictor variables in R (grouped/repeated by panel, 0 and 1).
I visualized the results in R; in each chart the Y-axis is the prediction probability and the X-axis shows each variable's values. How can I interpret the outcome to turn it into actionable insight?
In other words, what is the difference between plotting mean(prediction) vs. plotting mean(actual)? And how do I draw conclusions from it?
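To make the mean(prediction) vs. mean(actual) comparison concrete, here is a minimal Python sketch (hypothetical numbers, standard library only — not the asker's actual R code). The per-group mean of the actuals summarizes the data itself, while the per-group mean of the predictions summarizes what the model has learned; gaps between the two show where the model over- or under-predicts:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical rows: (age_group, actual_minutes, model_predicted_minutes).
rows = [
    ("18-24", 95, 101), ("18-24", 110, 104),
    ("25-34", 130, 125), ("25-34", 122, 128),
    ("35-54", 160, 150), ("35-54", 148, 154),
]

actual, predicted = defaultdict(list), defaultdict(list)
for group, act, pred in rows:
    actual[group].append(act)
    predicted[group].append(pred)

# mean(actual) describes the observed viewing; mean(prediction) describes
# the fitted model. Plotting both per group makes any systematic gap visible.
summary = {g: (mean(actual[g]), mean(predicted[g])) for g in actual}
for g in sorted(summary):
    print(g, summary[g])
```

If the two curves track each other closely, the model has captured the group-level pattern; a persistent gap in one group flags a segment the model handles poorly.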
machine-learning r random-forest predictor-importance
asked yesterday by DanielG
edited yesterday by Andrew Mcghie
1 Answer
There are a number of different ways to measure variable importance for random forests.
Variable importance measures
The most basic way is to count how many times a variable is used in the trees of the random forest. The problem with this is that it does not capture how large an effect the variable has in each tree.
An improvement on this is to calculate the decrease in impurity that the variable causes when it is added to a tree. This is the default measure for the importance() function in the randomForest R library. The problem here is that it is biased towards continuous variables and ones with high cardinality.
Another approach is permutation importance. This works by shuffling the values in one column of the data and computing the resulting decrease in accuracy. In R this is importance(type = 1). It is quite a good measure, although it needs the features to be normalized to get the best accuracy (shown in this paper). It is a little slower, but still quite fast.
The last approach is the drop-column approach. This works by dropping a feature, retraining the random forest, and calculating how much the accuracy decreases relative to training on all the features. This is the slowest but most accurate approach.
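The permutation idea can be sketched in a few lines of plain Python. This is a toy illustration, not the R importance(type = 1) internals: the true data-generating function stands in for a trained forest so the sketch needs no ML library, and the error metric is MSE rather than accuracy:

```python
import random

# Toy data: y depends strongly on column 0, weakly on column 1, not at all on column 2.
random.seed(0)
X = [[random.random() for _ in range(3)] for _ in range(200)]
y = [3 * row[0] + 0.3 * row[1] for row in X]

# Stand-in "model": the true function itself, so no ML library is needed.
def predict(row):
    return 3 * row[0] + 0.3 * row[1]

def mse(data, targets):
    return sum((predict(r) - t) ** 2 for r, t in zip(data, targets)) / len(targets)

def permutation_importance(data, targets, col, n_repeats=10, seed=1):
    """Average increase in MSE after shuffling one column of the data."""
    base = mse(data, targets)
    rng = random.Random(seed)
    increase = 0.0
    for _ in range(n_repeats):
        shuffled = [row[col] for row in data]
        rng.shuffle(shuffled)
        permuted = [row[:col] + [v] + row[col + 1:] for row, v in zip(data, shuffled)]
        increase += mse(permuted, targets) - base
    return increase / n_repeats

scores = [permutation_importance(X, y, c) for c in range(3)]
# Shuffling the strong column hurts most; shuffling the irrelevant one changes nothing.
```

The ranking of the three scores mirrors how much each column actually drives the target, which is exactly the signal permutation importance is after.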
Interpreting the results
These methods will all give a list of the features with a measure of how important each one is. This gives you an idea of how much the performance of the random forest depends on each feature.
For example, you might find that age is the most important feature for predicting viewing time. You can then look at a graph of age vs. viewing time to see how those two variables interact.
With the drop-column approach it is possible to find that removing a feature actually increases accuracy, which shows up as a negative decrease in accuracy.
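A minimal drop-column sketch (plain Python; a nearest-neighbour regressor stands in for the random forest so the example has no dependencies, and error is measured as MSE rather than accuracy). It also shows how dropping a pure-noise feature can improve the error, producing a negative importance:

```python
import random

random.seed(0)
# Toy data: y depends only on column 0; column 1 is pure noise.
train = [[random.random(), random.random()] for _ in range(150)]
train_y = [5 * row[0] for row in train]
test = [[random.random(), random.random()] for _ in range(50)]
test_y = [5 * row[0] for row in test]

def knn_mse(cols, k=5):
    """Test MSE of a k-nearest-neighbour regressor using only the given columns."""
    total = 0.0
    for row, target in zip(test, test_y):
        nearest = sorted(
            range(len(train)),
            key=lambda i: sum((train[i][c] - row[c]) ** 2 for c in cols),
        )[:k]
        pred = sum(train_y[i] for i in nearest) / k
        total += (pred - target) ** 2
    return total / len(test)

full = knn_mse([0, 1])
# Importance of a column = how much the error grows when it is dropped
# and the model is refit on the remaining columns.
drop_importance = {c: knn_mse([c2 for c2 in [0, 1] if c2 != c]) - full
                   for c in [0, 1]}
# Dropping the informative column 0 hurts a lot; dropping the noise
# column 1 can even reduce the error, i.e. a negative importance.
```

The retrain-per-feature loop is what makes drop-column the slowest of the methods listed above.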
How to apply this to the domain of the problem is up to you and the real-world problem that you are trying to solve with this model.
Hope this helps :)
answered yesterday by Andrew Mcghie