If I can make up priors, why can't I make up posteriors?
My question is not meant to be a criticism of Bayesian methods; I am simply trying to understand the Bayesian view. Why is it reasonable to believe we know the distribution of our parameters, but not the distribution of our parameters given the data?

Tags: bayesian, mathematical-statistics
Asked 5 hours ago by purpleostrich
– BruceET (5 hours ago): Priors are determined "prior" to seeing the data, presumably according to a reasonable assessment of the situation. ("Made up" has an unfortunate feel of caprice, or snark, and doesn't seem justified.) The prior distribution and the information from the sample (presumably not a matter of opinion) are combined to get the posterior. If you believe your prior distribution is reasonable and that the data were collected honestly, then you logically should believe the posterior. The choice of prior indirectly affects the posterior, but you are not allowed to "make up" the posterior.
– Alexis (3 hours ago): "I am simply trying to understand the Bayesian view." Take (a) what you already believe about the world (prior) and (b) new experiences (data), and mush them together to make a new belief about the world (posterior). Wash, rinse, repeat.
– jbowman (3 hours ago): @Alexis - "mush them together in the optimal way", where the latter four words mark the difference between Bayesian updating and other updating. BTW, I'm going to steal your comment (+1) for future non-CV use!
– Alexis (3 hours ago): Be my guest, @jbowman! "Mush them together" was of course far too much poetic license to be a term of art. :)
4 Answers
Aksakal (answered 3 hours ago):

If you have a belief about the distribution of your data after seeing data, then why would you be estimating its parameters with data? You already have the parameters.

– purpleostrich (just now): Indeed, this is where my confusion lies.
Cliff AB (answered 4 hours ago, edited 41 mins ago):

Well, in Bayesian statistics, you don't just "make up" your priors. You should build a prior that best captures your knowledge before seeing the data. Otherwise, it is very hard to justify why anyone should care about the output of your Bayesian analysis.

So while it's true that the practitioner has some freedom in creating a prior, that prior should be tied to something meaningful for the analysis to be useful. That said, the prior isn't the only part of a Bayesian analysis that allows this freedom: the practitioner has the same freedom in constructing the likelihood function, which defines the relation between the data and the model. Just as a nonsense prior leads to a nonsense posterior, so does a nonsense likelihood. In practice, one should ideally choose a likelihood function that is flexible enough to capture one's uncertainty, yet constrained enough to make inference with limited data possible.

To demonstrate, consider two somewhat extreme examples. Suppose we are interested in determining the effect of a continuous-valued treatment on patients. In order to learn anything from the data, we must choose a model with that flexibility. If we simply left "treatment" out of our set of regression parameters, then no matter what the outcome was, we could report "given the data, our model estimates no effect of treatment". At the other extreme, suppose we use a model so flexible that we don't even constrain the treatment effect to have a finite number of discontinuities. Then (without strong priors, at least) we have almost no hope of our estimated treatment effect converging, no matter the sample size. Thus our inference can be completely butchered by a poor choice of likelihood function, just as it can be by a poor choice of prior.

Of course, in reality we wouldn't choose either of these extremes, but we still make these kinds of choices. How flexible a treatment effect are we going to allow: linear, splines, interactions with other variables? There is always a tradeoff between "sufficiently flexible" and "estimable given our sample size". If we're smart, our likelihood function will include reasonable constraints (e.g., a continuous treatment effect is probably relatively smooth and probably doesn't involve very high-order interaction effects). This is essentially the same art as picking a prior: you want to constrain your inference with prior knowledge and allow flexibility where there is uncertainty. The whole point of using data is to help constrain some of the flexibility that stems from our uncertainty.
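To make the first of the two extreme examples above concrete, here is a minimal numerical sketch (my illustration, not part of the original answer) contrasting a likelihood that includes a simple linear treatment effect with one that excludes treatment entirely. The simulated data, the grid, and the prior are all assumptions made for the example.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

rng = np.random.default_rng(42)
treatment = rng.uniform(0, 1, size=100)             # continuous-valued treatment
outcome = 2.0 * treatment + rng.normal(0, 1, 100)   # true effect of +2 per unit, noise sd 1

beta = np.linspace(-1, 5, 1201)                     # grid over the treatment effect
prior = stats.norm(0, 3).pdf(beta)                  # weakly informative prior on the effect

# Likelihood that INCLUDES treatment: outcome_i ~ Normal(beta * treatment_i, 1)
loglik = np.array([stats.norm(b * treatment, 1).logpdf(outcome).sum() for b in beta])
posterior = np.exp(loglik - loglik.max()) * prior
posterior /= trapezoid(posterior, beta)
print("posterior mean effect (treatment in the model):", trapezoid(beta * posterior, beta))

# Likelihood that EXCLUDES treatment: outcome_i ~ Normal(0, 1) for every value of beta.
# That likelihood is constant in beta, so the "posterior" for the effect is just the
# prior -- the analysis reports no learned treatment effect no matter what the data say.
```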
In summary, a practitioner has freedom in selecting both the prior and the likelihood function. For the analysis to be in any way meaningful, both choices should be reasonably good approximations of real phenomena.
Tim (answered 3 hours ago):

In many statistical problems you have some data, denote it $X$, and you want to learn about some "parameter" $\theta$ of the distribution of that data, i.e. to compute things like $\theta \mid X$ (the conditional distribution, the conditional expectation, etc.). There are several ways this can be achieved, including maximum likelihood, and without getting into a discussion of whether and which of them is better, you can consider Bayes theorem as one of them. One advantage of using Bayes theorem is that, given the conditional distribution of the data given the parameter (the likelihood) and the distribution of the parameter (the prior), you simply calculate
$$
\overbrace{p(\theta \mid X)}^\text{posterior} = \frac{\overbrace{p(X \mid \theta)}^\text{likelihood}\;\overbrace{p(\theta)}^\text{prior}}{p(X)}
$$
The likelihood is the conditional distribution of your data, so it is a matter of understanding your data and choosing some distribution that approximates it best; it is a rather uncontroversial concept. As for the prior, notice that for the above formula to work you need some prior. In a perfect world you would know the distribution of $\theta$ a priori and apply it to get the posterior. In the real world, the prior is something that you assume, given your best knowledge, and plug into Bayes theorem. You could choose an "uninformative" prior $p(\theta) \propto 1$, but there are many arguments that such priors are neither "uninformative" nor reasonable. What I'm trying to say is that there are many ways you could come up with some distribution for the prior. Some consider priors a blessing, since they make it possible to bring your out-of-data knowledge into the model, while others, for exactly the same reason, consider them problematic.
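As a small numerical illustration of the formula above (mine, not part of the original answer), here is the posterior for a coin-flip probability computed on a grid; the data, the Beta(2, 2) prior, and the grid are all assumptions made for the example.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

theta = np.linspace(0.001, 0.999, 999)        # grid of candidate parameter values
prior = stats.beta(2, 2).pdf(theta)           # p(theta): belief before seeing data
flips = np.array([1, 1, 0, 1, 0, 1, 1, 1])    # X: hypothetical observed coin flips
k, n = flips.sum(), len(flips)

likelihood = theta**k * (1 - theta)**(n - k)               # p(X | theta)
unnormalized = likelihood * prior                          # numerator of Bayes theorem
posterior = unnormalized / trapezoid(unnormalized, theta)  # divide by p(X)

# With a Beta(2, 2) prior and Bernoulli data, the exact posterior is
# Beta(2 + k, 2 + n - k); the grid answer agrees up to discretization error.
print(np.allclose(posterior, stats.beta(2 + k, 2 + n - k).pdf(theta), atol=1e-3))
```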
Answering your question: sure, you can assume that the distribution of the parameter given the data is something. On a day-to-day basis we make decisions all the time based on assumptions that are not always rigorously validated. However, the difference between the prior and the posterior is that the posterior is something you learned from the data (and the prior). If it isn't that, but rather your wild guess, then it's not a posterior any more. As for why we allow ourselves to "make up" priors, there are two answers depending on who you ask: either (a) for the machinery to work we need some prior, or (b) we know something in advance and want to include it in our model, and priors make this possible. In either case, we usually expect the data to have the "final word" rather than the prior.
guy (answered 3 mins ago):

Philosophically, there is nothing wrong with "eliciting a posterior." It's a bit more difficult to do in a coherent manner than with priors (because you need to respect the likelihood), but IMO you are asking a really good question.

To turn this into something practical, "making up" a posterior is a potentially useful way to elicit a prior. That is, I take all data realizations $X = x$ and ask myself what the posterior $\pi(\theta \mid x)$ would be. If I do this in a fashion that is consistent with the likelihood, then I will have equivalently specified $\pi(\theta)$. This is sometimes called "downdating." Once you realize this, you will see that "making up the prior" and "making up the posterior" are basically the same thing. As I said, it is tricky to do this in a manner which is consistent with the likelihood, but even if you do it for just a few values of $x$, it can be very illuminating about what a good prior will look like.
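For a sense of how such "downdating" can work, here is a hedged sketch (my illustration, not from the original answer) in a conjugate Beta-Binomial setting, where a stated posterior plus the observed counts force a particular prior; the function name and numbers are hypothetical.

```python
def implied_beta_prior(a_post, b_post, successes, failures):
    """Back out the Beta prior implied by a stated Beta posterior.

    With a Beta(a, b) prior and a binomial likelihood, observing `successes`
    and `failures` gives a Beta(a + successes, b + failures) posterior, so the
    prior is recovered by subtracting the data counts from the posterior.
    """
    a_prior = a_post - successes
    b_prior = b_post - failures
    if a_prior <= 0 or b_prior <= 0:
        raise ValueError("No Beta prior is consistent with that posterior and "
                         "those data: the elicited posterior is not compatible "
                         "with the likelihood.")
    return a_prior, b_prior

# "After seeing 7 successes in 10 trials, I would believe Beta(9, 5)":
print(implied_beta_prior(a_post=9, b_post=5, successes=7, failures=3))  # -> (2, 2)
```

Declaring such posteriors for even a couple of hypothetical data sets quickly reveals whether the prior you have in mind is coherent with the likelihood.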