Estimating predictive uncertainty for unlabeled data
I am trying to estimate the predictive uncertainty of a deep neural network. While I do have a labeled training set, I'm trying to measure uncertainty on unlabeled production data.
This paper proposes using Deep Ensembles and Adversarial Training to compute a measure of uncertainty. However, it evaluates with the Brier score, which requires knowing the true labels of my production data.
Is there a similar method or metric that does not require labeled data?
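For concreteness, here is a minimal sketch (not taken from the paper; the arrays and values below are purely illustrative) of the difference: the Brier score compares predicted probabilities against the true one-hot labels, whereas a quantity such as the entropy of the predicted distribution can be computed from the probabilities alone, i.e. on unlabeled production data.

import numpy as np

# Illustrative predicted class probabilities for two examples (3 classes).
probs = np.array([[0.70, 0.20, 0.10],
                  [0.40, 0.35, 0.25]])

# One-hot true labels -- only available where the data is labeled.
labels = np.array([[1, 0, 0],
                   [0, 1, 0]])

# Brier score: mean squared difference between probabilities and one-hot labels.
brier = np.mean(np.sum((probs - labels) ** 2, axis=1))

# Predictive entropy: needs no labels; larger values mean a more uncertain prediction.
entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)

print(brier, entropy)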
Another approach, described by Yarin Gal, uses Monte Carlo Dropout; however, I can't get any useful results with that technique on my model.
I want to use my model in an online learning task. Since the data may vary over time, I need to detect examples on which my model is highly uncertain, so that I can classify those examples manually.
neural-network deep-learning unsupervised-learning prediction
asked Apr 4 '18 at 11:18 by L. Anders (63), edited Apr 4 '18 at 11:34
1 Answer
See the author's comment in the thread here.
By default, dropout is only applied at training time; at test time the dropout probability is effectively set to 0. However, there is an easy way to keep it 'on' during test time as well:

import keras

# Toy example: a Dense layer followed by a Dropout layer that stays
# stochastic even when the model is used for prediction.
inputs = keras.Input(shape=(10,))
x = keras.layers.Dense(3)(inputs)
outputs = keras.layers.Dropout(0.5)(x, training=True)  # dropout is not disabled at inference
model = keras.Model(inputs, outputs)

By setting training=True, the Dropout layer remains active during test time. This is known as Monte Carlo dropout or, more popularly, dropout at inference time. I hope it helps.
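As a rough usage sketch (the layer sizes, sample count, and the helper mc_dropout_uncertainty below are illustrative and not part of the original answer), one would run several stochastic forward passes through such a model, average the predicted probabilities, and use their spread (variance or predictive entropy) as a per-example uncertainty score that needs no labels:

import numpy as np
import keras

# Hypothetical classifier with dropout kept active at inference time,
# following the pattern from the answer above.
inputs = keras.Input(shape=(10,))
h = keras.layers.Dense(64, activation="relu")(inputs)
h = keras.layers.Dropout(0.5)(h, training=True)  # stays stochastic at prediction time
outputs = keras.layers.Dense(3, activation="softmax")(h)
model = keras.Model(inputs, outputs)

def mc_dropout_uncertainty(model, x_batch, n_samples=50):
    # Stack n_samples stochastic predictions: shape (n_samples, batch, n_classes).
    preds = np.stack([model.predict(x_batch, verbose=0) for _ in range(n_samples)])
    mean_probs = preds.mean(axis=0)  # averaged predictive distribution
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=-1)
    variance = preds.var(axis=0).mean(axis=-1)  # mean per-class variance
    return mean_probs, entropy, variance

# Score a batch of (placeholder) unlabeled production data and flag the
# ten most uncertain examples for manual labeling.
x_prod = np.random.rand(100, 10).astype("float32")
_, entropy, _ = mc_dropout_uncertainty(model, x_prod)
to_review = np.argsort(entropy)[-10:]

In an online setting, the examples with the highest scores are the natural candidates to send for manual labeling; the cutoff (a threshold or a fixed review budget) would typically be tuned on whatever labeled data is available.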
answered Dec 19 '18 at 10:26 by Haramoz (1407)