What loss function to use when labels are probabilities?
What loss function is most appropriate when training a model whose target values are probabilities? For example, I have a 3-output model with x = [some features] and y = [0.2, 0.3, 0.5].
It seems like cross-entropy doesn't make sense here, since it assumes that a single target is the correct label.
Would something like MSE (after applying softmax) make sense, or is there a better loss function?
neural-networks loss-functions probability-distribution
asked 5 hours ago by Thomas Johnson
1 Answer
Actually, the cross-entropy loss function would be appropriate here, since it measures the "distance" between a distribution $q$ and the "true" distribution $p$.

You are right, though, that using a loss function called "cross_entropy" in many APIs would be a mistake. This is because these functions, as you said, assume a one-hot label. You would need to use the general cross-entropy function,
$$H(p,q) = -\sum_{x \in X} p(x) \log q(x).$$

Note that one-hot labels would mean that
$$
p(x) =
\begin{cases}
1 & \text{if } x \text{ is the true label}\\
0 & \text{otherwise,}
\end{cases}
$$
which causes the cross-entropy $H(p,q)$ to reduce to the form you're familiar with:
$$H(p,q) = -\log q(x_{\text{label}}).$$
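For concreteness, here is a minimal sketch of this general (soft-label) cross-entropy as a training loss, assuming PyTorch; the logits and targets below are made-up illustrative values, not taken from the question.

```python
import torch
import torch.nn.functional as F

# Illustrative values (not from the question): a batch of two examples
# from a 3-output model, with probability labels summing to 1 per row.
logits = torch.tensor([[1.2, 0.4, -0.3],
                       [0.1, 0.1,  2.0]], requires_grad=True)  # raw model outputs
targets = torch.tensor([[0.2, 0.3, 0.5],
                        [0.1, 0.2, 0.7]])                      # soft labels p(x)

# General cross-entropy H(p, q) = -sum_x p(x) log q(x), averaged over the
# batch. log_softmax computes log q(x) in a numerically stable way.
log_q = F.log_softmax(logits, dim=1)
loss = -(targets * log_q).sum(dim=1).mean()

loss.backward()   # gradients flow back to the logits as with any other loss
print(loss.item())
```

Minimizing this loss is equivalent to minimizing the KL divergence $D_{KL}(p \,\|\, q)$, because the two differ only by the entropy of $p$, which does not depend on the model's parameters. (Some frameworks' built-in cross-entropy losses also accept probability targets in recent versions; check the documentation of the version you are using.)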
answered 5 hours ago by Philip Raeisghasem