If I can make up priors, why can't I make up posteriors?












My question is not meant to be a criticism of Bayesian methods; I am simply trying to understand the Bayesian view. Why is it reasonable to believe we know the distribution of our parameters, but not our parameters given data?










bayesian mathematical-statistics






asked Apr 14 at 18:12 by purpleostrich

  • 5
    Priors are determined 'prior' to seeing data, presumably according to a reasonable assessment of the situation. ('Made up' has an unfortunate feel of caprice -- or snark -- and doesn't seem justified.) The prior distribution and information from the sample (presumably not a matter of opinion) are combined to get the posterior. If you believe your prior distribution is reasonable and that the data were collected honestly, then you logically should believe the posterior. The choice of prior indirectly affects the posterior, but you are not allowed to 'make up' the posterior.
    – BruceET
    Apr 14 at 18:26


  • 6
    "I am simply trying to understand the Bayesian view." Take (a) what you already believe about the world (prior), and (b) new experiences (data), and mush them together, to make a new belief about the world (posterior). Wash, rinse, repeat.
    – Alexis
    Apr 14 at 20:41


  • 5
    @Alexis - "mush them together in the optimal way", where the latter four words mark the difference between Bayesian updating and other updating. BTW, I'm going to steal your comment (+1) for future non-CV use!
    – jbowman
    Apr 14 at 20:56


  • Be my guest, @jbowman! "Mush them together" was of course far too much poetic license to be a term of art. :)
    – Alexis
    Apr 14 at 20:58


  • 2
    @BruceET, why not adapt that into an official answer?
    – gung
    Apr 15 at 11:04














4 Answers


















7













Well, in Bayesian statistics, you don't just "make up" your priors. You should be building a prior that best captures your knowledge before seeing the data. Otherwise, it is very hard to justify why anyone should care about the output of your Bayesian analysis.



So while it's true that the practitioner has some freedom in creating a prior, that prior should be tied to something meaningful in order for the analysis to be useful. That said, the prior isn't the only part of a Bayesian analysis that allows this freedom. A practitioner is offered the same freedom in constructing the likelihood function, which defines the relation between the data and the model. Just as nonsense priors lead to a nonsense posterior, a nonsense likelihood will also lead to a nonsense posterior. So in practice, one should ideally choose a likelihood function that is flexible enough to handle one's uncertainty, yet constrained enough to make inference with limited data possible.



To demonstrate, consider two somewhat extreme examples. Suppose we are interested in determining the effect of a continuous-valued treatment on patients. In order to learn anything from the data, we must choose a model with that flexibility. If we were to simply leave "treatment" out of our set of regression parameters, then no matter what our data looked like, we could only report "given the data, our model estimates no effect of treatment". On the other extreme, suppose we have a model so flexible that we don't even constrain the treatment effect to have a finite number of discontinuities. Then (without strong priors, at least) we have almost no hope of any sort of convergence of our estimated treatment effect, no matter our sample size. Thus, our inference can be completely butchered by a poor choice of likelihood function, just as it can be by a poor choice of prior.



Of course, in reality we wouldn't choose either of these extremes, but we still make these types of choices. How flexible a treatment effect are we going to allow: linear, splines, interactions with other variables? There is always a tradeoff between "sufficiently flexible" and "estimable given our sample size". If we're smart, our likelihood functions should include reasonable constraints (e.g., the continuous treatment effect is probably relatively smooth and probably doesn't involve very high-order interaction effects). This is essentially the same art as picking a prior: you want to constrain your inference with prior knowledge, and allow flexibility where there is uncertainty. The whole point of using data is to help constrain some of the flexibility that stems from our uncertainty.
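
Here is a minimal numerical sketch of that tradeoff, assuming a toy conjugate Bayesian polynomial regression with known noise variance and independent standard normal priors on the coefficients; the data, degrees, and sample size are invented purely for illustration:

    # Toy comparison: the same data analysed with a constrained likelihood
    # (linear treatment effect) and an over-flexible one (degree-15 polynomial),
    # each with N(0, 1) priors on the coefficients and known noise sd = 1.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 40
    x = rng.uniform(-1, 1, n)                 # continuous treatment
    y = 0.8 * x + rng.normal(0, 1, n)         # the true effect is smooth (linear)

    def mean_posterior_sd(degree, x, y, noise_sd=1.0, prior_sd=1.0):
        """Average posterior sd of the fitted effect curve (conjugate Gaussian model)."""
        X = np.vander(x, degree + 1, increasing=True)            # 1, x, x^2, ...
        prec = X.T @ X / noise_sd**2 + np.eye(degree + 1) / prior_sd**2
        cov = np.linalg.inv(prec)                                # posterior covariance of coefficients
        G = np.vander(np.linspace(-1, 1, 50), degree + 1, increasing=True)
        return np.sqrt(np.einsum('ij,jk,ik->i', G, cov, G)).mean()

    for d in (1, 15):
        print(f"degree {d:2d}: mean posterior sd of the effect curve = "
              f"{mean_posterior_sd(d, x, y):.2f}")
    # The over-flexible likelihood leaves much more posterior uncertainty about
    # the treatment effect at the same sample size.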



In summary, a practitioner has freedom in the selection of both the prior and the likelihood function. For an analysis to be in any way meaningful, both choices should be a relatively good approximation of the real phenomena.



EDIT:



In the comments, @nanoman brings up an interesting take on the problem. One way to think about it is that the likelihood function is a generic, non-subjective object: all possible models can be included in its functional form before the prior is applied. Typically, though, the prior puts positive probability on only a finite set of functional forms of the likelihood. Thus, without the prior, inference is impossible, as the likelihood alone would be too flexible to support any form of inference.



While this isn't the universally accepted definition of the prior and the likelihood function, this view does have a few advantages. For one, it is very natural in Bayesian model selection: rather than just putting priors on the parameters of a single model, the prior puts probability over a set of competing models. Second, and I believe more to @nanoman's point, this view cleanly divides inference into a subjective part (the prior) and a non-subjective part (the likelihood function). This is nice because it clearly demonstrates that one cannot learn anything without some subjective constraints, as the likelihood alone would be too flexible. It also clearly demonstrates that once someone hands you a tractable likelihood function, some subjective information must have snuck in.
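
As a small sketch of that model-selection view (the coin-flip setup, observed counts, and 50/50 model weights below are assumed purely for illustration), the prior can put weight on whole competing models, and the data then reweights them through their marginal likelihoods:

    # Sketch: a prior spread over two competing models for k heads in n flips.
    # Model 0 ("null"): p = 0.5 exactly.   Model 1: p ~ Uniform(0, 1).
    from scipy.stats import binom
    from scipy.integrate import quad

    n, k = 20, 15                          # observed data (assumed numbers)
    prior_m = [0.5, 0.5]                   # prior weight on each model

    marg0 = binom.pmf(k, n, 0.5)                           # p(data | M0)
    marg1 = quad(lambda p: binom.pmf(k, n, p), 0, 1)[0]    # p(data | M1), p integrated out
    unnorm = [prior_m[0] * marg0, prior_m[1] * marg1]
    post_m = [u / sum(unnorm) for u in unnorm]

    print(f"P(M0 | data) = {post_m[0]:.3f}, P(M1 | data) = {post_m[1]:.3f}")
    # A prior that put zero weight on M1 would have made it impossible for the
    # data to ever favour a non-null effect: the prior decides which models are in play.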






answered Apr 14 at 19:02 (edited Apr 15 at 15:29) by Cliff AB









  • 1
    I disagree: The prior is the only part with this freedom, and this is sometimes considered a strength of Bayesian analysis (all the assumptions in one place). What you describe as choosing the likelihood is really just a version of choosing the prior. The prior is defined over the space of all possible models. Each model has a well-defined likelihood function. If we limit the analysis to a particular type of model, then we are choosing a prior restricted to this subspace. There is no fundamental distinction between model form uncertainty and model parameter uncertainty.
    – nanoman
    Apr 15 at 1:01


  • 1
    @nanoman: I'm not sure we really disagree. Whether you view the likelihood as infinitely flexible but constrained by the prior, or as just constrained, is a matter of philosophy, not mathematics; either way, the function $p(d|\theta)$ appearing in any formula written explicitly as $p(d|\theta)p(\theta)$ is subjective in nature.
    – Cliff AB
    Apr 15 at 1:14


  • I would put the argument this way: The other approaches are sneaking a prior in the back door rather than stating it up front. Bayesian statistics allows us to put all the subjectivity in one construct, the prior. "How flexible a treatment effect are we going to allow" is decided in the prior, so I disagree with considering it as separate from the prior. Think of Bayesian model selection where the prior puts some weight on a null model, some weight on a simple (say linear) model, and some weight on a complex model. Or, if we're confident, we take a prior with support on just one of these.
    – nanoman
    Apr 15 at 1:32


  • The point is that the prior is zero on all the models we aren't considering.
    – nanoman
    Apr 15 at 1:32


  • Saw your comment edit. My perspective is that $\theta$ includes all the information (model form and parameters) needed to uniquely define $p(d|\theta)$. That is, $\theta$ is really just an index to (potentially all possible) distributions on $d$ (likelihoods). However we index them, the key part is the prior $p(\theta)$ (mostly zero) which tells us which such models we contemplate and with what weight.
    – nanoman
    Apr 15 at 1:43



















6













If you have a belief about the distribution of your data after seeing data, then why would you be estimating its parameters with data? You already have the parameters.






answered Apr 14 at 20:44 by Aksakal









  • 1




    Indeed, this is where my confusion lies.
    – purpleostrich
    Apr 15 at 0:01



















5













In many statistical problems you have some data, let's denote it as $X$, and you want to learn about some "parameter" $\theta$ of the distribution of the data, i.e. to calculate $\theta \mid X$ kinds of quantities (the conditional distribution, conditional expectation, etc.). There are several ways this can be achieved, including maximum likelihood, and without getting into a discussion of which of them is better, you can consider using Bayes theorem as one of them. One of the advantages of using Bayes theorem is that it is direct: given the conditional distribution of the data given the parameter (the likelihood) and the distribution of the parameter (the prior), you simply calculate



$$
\overbrace{p(\theta|X)}^\text{posterior} = \frac{\overbrace{p(X|\theta)}^\text{likelihood}\;\overbrace{p(\theta)}^\text{prior}}{p(X)}
$$



The likelihood is the conditional distribution of your data, so it is a matter of understanding your data and choosing some distribution that approximates it best; it is a rather uncontroversial concept. As for the prior, notice that for the above formula to work you need some prior. In a perfect world, you would know the distribution of $\theta$ a priori and apply it to get the posterior. In the real world, it is something that you assume, given your best knowledge, and plug into Bayes theorem. You could choose an "uninformative" prior $p(\theta) \propto 1$, but there are many arguments that such priors are neither "uninformative" nor reasonable. What I'm trying to say is that there are many ways you could come up with some distribution for a prior. Some consider priors a blessing, since they make it possible to bring your out-of-data knowledge into the model, while others, for exactly the same reason, consider them problematic.
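
To make the formula concrete, here is a minimal sketch that evaluates it on a grid, assuming (purely for illustration) a binomial likelihood for 7 heads in 10 flips and a Beta(2, 2) prior on $\theta$:

    # Grid evaluation of posterior = likelihood * prior / normaliser for a coin-flip example.
    import numpy as np
    from scipy.stats import beta, binom

    theta = np.linspace(0.001, 0.999, 999)        # grid over the parameter
    d_theta = theta[1] - theta[0]

    prior = beta.pdf(theta, 2, 2)                 # assumed prior p(theta)
    likelihood = binom.pmf(7, 10, theta)          # p(X | theta) for 7 heads in 10 flips
    unnorm = likelihood * prior
    posterior = unnorm / (unnorm.sum() * d_theta)  # dividing by p(X), the normalising constant

    print("posterior mean of theta:", (theta * posterior).sum() * d_theta)
    # The conjugate answer is Beta(2 + 7, 2 + 3), whose mean is 9/14, about 0.643.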



Answering your question: sure, you can assume that the distribution of the parameter given the data is something in particular. On a day-to-day basis we constantly make decisions based on assumptions that are not always rigorously validated. However, the difference between the prior and the posterior is that the posterior is something you learned from the data (and the prior). If it isn't, and is instead just your wild guess, then it's not a posterior any more. As for why we allow ourselves to "make up" priors, there are two answers depending on who you ask: either (a) for the machinery to work we need some prior, or (b) we know something in advance and want to include it in our model, and priors make that possible. In either case, we usually expect the data to have the "final word" rather than the prior.
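
A quick sketch of that "final word" point, using an assumed Beta-Binomial setup (conjugate, so the posterior mean is a one-line update): with little data the prior matters, with lots of data it barely does.

    # Sketch: with enough data the posterior is driven by the data, not the prior.
    for n, k in [(10, 7), (1000, 700)]:          # 70% heads at two sample sizes
        for a, b in [(2, 2), (20, 2)]:           # a mild and a strongly skewed Beta prior
            post_mean = (a + k) / (a + b + n)    # Beta(a + k, b + n - k) posterior mean
            print(f"n = {n:4d}, prior Beta({a},{b}): posterior mean = {post_mean:.3f}")
    # At n = 10 the two priors give noticeably different answers (0.643 vs 0.844);
    # at n = 1000 both posteriors sit very close to the observed frequency of 0.7.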







    0













    Philosophically, there is nothing wrong with “eliciting a posterior.” It’s a bit more difficult to do in a coherent manner than with priors (because you need to respect the likelihood), but IMO you are asking a really good question.



    To turn this into something practical, “making up” a posterior is a potentially useful way to elicit a prior. That is, I take all data realizations $X = x$ and ask myself what the posterior $\pi(\theta \mid x)$ would be. If I do this in a fashion that is consistent with the likelihood, then I will have equivalently specified $\pi(\theta)$. This is sometimes called “downdating.” Once you realize this, you will see that “making up the prior” and “making up the posterior” are basically the same thing. As I said, it is tricky to do this in a manner which is consistent with the likelihood, but even if you do it for just a few values of $x$ it can be very illuminating about what a good prior will look like.
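
    As a rough sketch of what such a check can reveal (the single-observation normal model, the prior scales, and the value x = 8 below are all assumed for illustration), imagine the posterior you would report for a surprisingly large observation under a light-tailed versus a heavy-tailed prior:

        # Imagined posteriors for one "surprising" observation x ~ N(theta, 1)
        # under two priors that are both symmetric with median 0 and scale 1.
        import numpy as np
        from scipy.stats import norm, cauchy

        x = 8.0                                      # a hypothetical extreme realization
        theta = np.linspace(-20.0, 30.0, 20001)
        step = theta[1] - theta[0]
        lik = norm.pdf(x, loc=theta, scale=1.0)      # likelihood of x at each theta

        for name, prior in [("normal(0, 1)", norm.pdf(theta, 0.0, 1.0)),
                            ("cauchy(0, 1)", cauchy.pdf(theta, 0.0, 1.0))]:
            post = lik * prior
            post /= post.sum() * step                # normalise on the grid
            post_mean = (theta * post).sum() * step
            print(f"{name} prior: posterior mean given x = {x} is {post_mean:5.2f}")
        # The normal prior splits the difference (posterior mean = 4), which few people
        # would honestly believe after seeing x = 8; the heavy-tailed prior mostly
        # believes the data. "Making up" the posterior you would want here points you
        # toward a heavy-tailed prior.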







    • Can you motivate why you would want to do this? I would guess you could be thinking of something like how one would want to use a spike-and-slab prior. Of course, the irony here is that we are perverting Bayesian statistics in order to obtain estimators whose frequentist properties we prefer.
      – Cliff AB
      Apr 15 at 0:43


    • @Cliff this type of reasoning can suggest, for example, why we want heavy-tail priors in the normal means problem. Suppose I want a prior which is symmetric, median 0, and has some natural scale $s$. I can ask “what would I believe about $\theta$ if I observed data $x = Bs$ for some large $B$?” For most problems, an honest assessment of what I would believe about $\theta$ would preclude the use of, for example, a normal prior.
      – guy
      Apr 15 at 2:27












    Your Answer








    StackExchange.ready(function() {
    var channelOptions = {
    tags: "".split(" "),
    id: "65"
    };
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function() {
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled) {
    StackExchange.using("snippets", function() {
    createEditor();
    });
    }
    else {
    createEditor();
    }
    });

    function createEditor() {
    StackExchange.prepareEditor({
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: false,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    imageUploader: {
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    },
    onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    });


    }
    });














    draft saved

    draft discarded


















    StackExchange.ready(
    function () {
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstats.stackexchange.com%2fquestions%2f403013%2fif-i-can-make-up-priors-why-cant-i-make-up-posteriors%23new-answer', 'question_page');
    }
    );

    Post as a guest















    Required, but never shown

























    4 Answers
    4






    active

    oldest

    votes








    4 Answers
    4






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    7












    $begingroup$

    Well, in Bayesian statistics, you don't just "make up" your priors. You should be building a prior that best captures your knowledge before seeing the data. Otherwise, why anyone should care about the output of your Bayesian analysis is very hard to justify.



    So while it's true that the practitioner has some sense of freedom in creating a prior, it should be tied to something meaningful in order for an analysis to be useful. With that said, the prior isn't the only part of a Bayesian analysis that allows this freedom. A practitioner is offered the same freedom in constructing the likelihood function, which defines the relation between the data and the model. Just as using nonsense priors will lead to a nonsense posterior, using a nonsense likelihood will also lead to a nonsense posterior. So in practice, ideally one should chose a likelihood function such that it is flexible enough to handle one's uncertainty, yet constrained enough to make inference with limited data possible.



    To demonstrate, consider two somewhat extreme examples. Suppose we are interested in determining the effect of a continuous-valued treatment on patients. In order to learn anything from the data, we must choice a model with that flexibility. If we were to simply leave out "treatment" from our set of regression parameters, no matter what our outcome was, we could report "given the data, our model estimates no effect of treatment". On the other extreme, suppose we have a model so flexible that we don't constrain the treatment effect to have a finite number of discontinuities. Then, (without strong priors, at least), we have almost no hope of having any sort of convergence of our estimated treatment effect no matter our sample size. Thus, our inference can be completely butchered by poor choices of likelihood functions, just as it could be by poor choices of priors.



    Of course, in reality we wouldn't chose either of these extremes, but we still do make these types of choices. How flexible a treatment effect are we going to allow: linear, splines, interaction with other variables? There's always the tradeoff between "sufficiently flexible" and "estimatable given our sample size". If we're smart, our likelihood functions should include reasonable constraints (i.e., treatment continuous treatment effect probably relatively smooth, probably doesn't include very high order interaction effects). This is essentially the same art as picking a prior: you want to constrain your inference with prior knowledge, and allow flexibility where there is uncertainty. The whole point of using data is to help constrain some of that the flexibility that stems from our uncertainty.



    In summary, a practitioner has freedom in selection of both the prior and the likelihood function. In order for an analysis to be in anyway meaningful, both choices should be a relatively good approximation of real phenomena.



    EDIT:



    In the comments, @nanoman brings up an interesting take on the problem. One way we can think that the likelihood function is a generic, non-subjective function. As such, all possible models can be included in the functional form likelihood before the prior. But typically, the prior only puts positive probability on a finite set of functional forms of the likelihood. Thus, without the prior, inference is impossible as the likelihood would be too flexible to ever make any form of inference.



    While this isn't the universally accepted definition of prior and likelihood function, this view does have a few advantages. For one, this is very natural in Bayesian model selection. In this case, rather than just putting priors on parameters of a single model, the prior puts probability over a set of competing models. But second, and I believe more to @nanoman's point, is that this view cleanly divides inference into subjective (prior) and non-subjective (likelihood function). This is nice, because it clearly demonstrates one cannot learn anything without some subjective constraints as the likelihood would be too flexible. It also clearly demonstrates that once someone hands you a tractable likelihood function, some subjective information must have snuck in.






    share|cite|improve this answer











    $endgroup$









    • 1




      $begingroup$
      I disagree: The prior is the only part with this freedom, and this is sometimes considered a strength of Bayesian analysis (all the assumptions in one place). What you describe as choosing the likelihood is really just a version of choosing the prior. The prior is defined over the space of all possible models. Each model has a well-defined likelihood function. If we limit the analysis to a particular type of model, then we are choosing a prior restricted to this subspace. There is no fundamental distinction between model form uncertainty and model parameter uncertainty.
      $endgroup$
      – nanoman
      Apr 15 at 1:01






    • 1




      $begingroup$
      @nanoman: I'm not sure we really disagree. Whether you view the likelihood as infinitely flexible, but constrained by the prior, or just constrained is a matter of philosophy, but not mathematics; either way, the function defined as $p(d|theta)$ given by any formula written explicitly as $p(d|theta)p(theta)$ is subjective in nature.
      $endgroup$
      – Cliff AB
      Apr 15 at 1:14










    • $begingroup$
      I would put the argument this way: The other approaches are sneaking a prior in the back door rather than stating it up front. Bayesian statistics allows us to put all the subjectivity in one construct, the prior. "How flexible a treatment effect are we going to allow" is decided in the prior, so I disagree with considering it as separate from the prior. Think of Bayesian model selection where the prior puts some weight on a null model, some weight on a simple (say linear) model, and some weight on a complex model. Or, if we're confident, we take a prior with support on just one of these.
      $endgroup$
      – nanoman
      Apr 15 at 1:32










    • $begingroup$
      The point is that the prior is zero on all the models we aren't considering.
      $endgroup$
      – nanoman
      Apr 15 at 1:32










    • $begingroup$
      Saw your comment edit. My perspective is that $theta$ includes all the information (model form and parameters) needed to uniquely define $p(d|theta)$. That is, $theta$ is really just an index to (potentially all possible) distributions on $d$ (likelihoods). However we index them, the key part is the prior $p(theta)$ (mostly zero) which tells us which such models we contemplate and with what weight.
      $endgroup$
      – nanoman
      Apr 15 at 1:43
















    7












    $begingroup$

    Well, in Bayesian statistics, you don't just "make up" your priors. You should be building a prior that best captures your knowledge before seeing the data. Otherwise, why anyone should care about the output of your Bayesian analysis is very hard to justify.



    So while it's true that the practitioner has some sense of freedom in creating a prior, it should be tied to something meaningful in order for an analysis to be useful. With that said, the prior isn't the only part of a Bayesian analysis that allows this freedom. A practitioner is offered the same freedom in constructing the likelihood function, which defines the relation between the data and the model. Just as using nonsense priors will lead to a nonsense posterior, using a nonsense likelihood will also lead to a nonsense posterior. So in practice, ideally one should chose a likelihood function such that it is flexible enough to handle one's uncertainty, yet constrained enough to make inference with limited data possible.



    To demonstrate, consider two somewhat extreme examples. Suppose we are interested in determining the effect of a continuous-valued treatment on patients. In order to learn anything from the data, we must choice a model with that flexibility. If we were to simply leave out "treatment" from our set of regression parameters, no matter what our outcome was, we could report "given the data, our model estimates no effect of treatment". On the other extreme, suppose we have a model so flexible that we don't constrain the treatment effect to have a finite number of discontinuities. Then, (without strong priors, at least), we have almost no hope of having any sort of convergence of our estimated treatment effect no matter our sample size. Thus, our inference can be completely butchered by poor choices of likelihood functions, just as it could be by poor choices of priors.



    Of course, in reality we wouldn't chose either of these extremes, but we still do make these types of choices. How flexible a treatment effect are we going to allow: linear, splines, interaction with other variables? There's always the tradeoff between "sufficiently flexible" and "estimatable given our sample size". If we're smart, our likelihood functions should include reasonable constraints (i.e., treatment continuous treatment effect probably relatively smooth, probably doesn't include very high order interaction effects). This is essentially the same art as picking a prior: you want to constrain your inference with prior knowledge, and allow flexibility where there is uncertainty. The whole point of using data is to help constrain some of that the flexibility that stems from our uncertainty.



    In summary, a practitioner has freedom in selection of both the prior and the likelihood function. In order for an analysis to be in anyway meaningful, both choices should be a relatively good approximation of real phenomena.



    EDIT:



    In the comments, @nanoman brings up an interesting take on the problem. One way we can think that the likelihood function is a generic, non-subjective function. As such, all possible models can be included in the functional form likelihood before the prior. But typically, the prior only puts positive probability on a finite set of functional forms of the likelihood. Thus, without the prior, inference is impossible as the likelihood would be too flexible to ever make any form of inference.



    While this isn't the universally accepted definition of prior and likelihood function, this view does have a few advantages. For one, this is very natural in Bayesian model selection. In this case, rather than just putting priors on parameters of a single model, the prior puts probability over a set of competing models. But second, and I believe more to @nanoman's point, is that this view cleanly divides inference into subjective (prior) and non-subjective (likelihood function). This is nice, because it clearly demonstrates one cannot learn anything without some subjective constraints as the likelihood would be too flexible. It also clearly demonstrates that once someone hands you a tractable likelihood function, some subjective information must have snuck in.






    share|cite|improve this answer











    $endgroup$









    • 1




      $begingroup$
      I disagree: The prior is the only part with this freedom, and this is sometimes considered a strength of Bayesian analysis (all the assumptions in one place). What you describe as choosing the likelihood is really just a version of choosing the prior. The prior is defined over the space of all possible models. Each model has a well-defined likelihood function. If we limit the analysis to a particular type of model, then we are choosing a prior restricted to this subspace. There is no fundamental distinction between model form uncertainty and model parameter uncertainty.
      $endgroup$
      – nanoman
      Apr 15 at 1:01






    • 1




      $begingroup$
      @nanoman: I'm not sure we really disagree. Whether you view the likelihood as infinitely flexible, but constrained by the prior, or just constrained is a matter of philosophy, but not mathematics; either way, the function defined as $p(d|theta)$ given by any formula written explicitly as $p(d|theta)p(theta)$ is subjective in nature.
      $endgroup$
      – Cliff AB
      Apr 15 at 1:14










    • $begingroup$
      I would put the argument this way: The other approaches are sneaking a prior in the back door rather than stating it up front. Bayesian statistics allows us to put all the subjectivity in one construct, the prior. "How flexible a treatment effect are we going to allow" is decided in the prior, so I disagree with considering it as separate from the prior. Think of Bayesian model selection where the prior puts some weight on a null model, some weight on a simple (say linear) model, and some weight on a complex model. Or, if we're confident, we take a prior with support on just one of these.
      $endgroup$
      – nanoman
      Apr 15 at 1:32










    • $begingroup$
      The point is that the prior is zero on all the models we aren't considering.
      $endgroup$
      – nanoman
      Apr 15 at 1:32










    • $begingroup$
      Saw your comment edit. My perspective is that $theta$ includes all the information (model form and parameters) needed to uniquely define $p(d|theta)$. That is, $theta$ is really just an index to (potentially all possible) distributions on $d$ (likelihoods). However we index them, the key part is the prior $p(theta)$ (mostly zero) which tells us which such models we contemplate and with what weight.
      $endgroup$
      – nanoman
      Apr 15 at 1:43














    7












    7








    7





    $begingroup$

    Well, in Bayesian statistics, you don't just "make up" your priors. You should be building a prior that best captures your knowledge before seeing the data. Otherwise, why anyone should care about the output of your Bayesian analysis is very hard to justify.



    So while it's true that the practitioner has some sense of freedom in creating a prior, it should be tied to something meaningful in order for an analysis to be useful. With that said, the prior isn't the only part of a Bayesian analysis that allows this freedom. A practitioner is offered the same freedom in constructing the likelihood function, which defines the relation between the data and the model. Just as using nonsense priors will lead to a nonsense posterior, using a nonsense likelihood will also lead to a nonsense posterior. So in practice, ideally one should chose a likelihood function such that it is flexible enough to handle one's uncertainty, yet constrained enough to make inference with limited data possible.



    To demonstrate, consider two somewhat extreme examples. Suppose we are interested in determining the effect of a continuous-valued treatment on patients. In order to learn anything from the data, we must choice a model with that flexibility. If we were to simply leave out "treatment" from our set of regression parameters, no matter what our outcome was, we could report "given the data, our model estimates no effect of treatment". On the other extreme, suppose we have a model so flexible that we don't constrain the treatment effect to have a finite number of discontinuities. Then, (without strong priors, at least), we have almost no hope of having any sort of convergence of our estimated treatment effect no matter our sample size. Thus, our inference can be completely butchered by poor choices of likelihood functions, just as it could be by poor choices of priors.



    Of course, in reality we wouldn't chose either of these extremes, but we still do make these types of choices. How flexible a treatment effect are we going to allow: linear, splines, interaction with other variables? There's always the tradeoff between "sufficiently flexible" and "estimatable given our sample size". If we're smart, our likelihood functions should include reasonable constraints (i.e., treatment continuous treatment effect probably relatively smooth, probably doesn't include very high order interaction effects). This is essentially the same art as picking a prior: you want to constrain your inference with prior knowledge, and allow flexibility where there is uncertainty. The whole point of using data is to help constrain some of that the flexibility that stems from our uncertainty.



    In summary, a practitioner has freedom in selection of both the prior and the likelihood function. In order for an analysis to be in anyway meaningful, both choices should be a relatively good approximation of real phenomena.



    EDIT:



    In the comments, @nanoman brings up an interesting take on the problem. One way we can think that the likelihood function is a generic, non-subjective function. As such, all possible models can be included in the functional form likelihood before the prior. But typically, the prior only puts positive probability on a finite set of functional forms of the likelihood. Thus, without the prior, inference is impossible as the likelihood would be too flexible to ever make any form of inference.



    While this isn't the universally accepted definition of prior and likelihood function, this view does have a few advantages. For one, this is very natural in Bayesian model selection. In this case, rather than just putting priors on parameters of a single model, the prior puts probability over a set of competing models. But second, and I believe more to @nanoman's point, is that this view cleanly divides inference into subjective (prior) and non-subjective (likelihood function). This is nice, because it clearly demonstrates one cannot learn anything without some subjective constraints as the likelihood would be too flexible. It also clearly demonstrates that once someone hands you a tractable likelihood function, some subjective information must have snuck in.






    share|cite|improve this answer











    $endgroup$



    Well, in Bayesian statistics, you don't just "make up" your priors. You should be building a prior that best captures your knowledge before seeing the data. Otherwise, why anyone should care about the output of your Bayesian analysis is very hard to justify.



    So while it's true that the practitioner has some sense of freedom in creating a prior, it should be tied to something meaningful in order for an analysis to be useful. With that said, the prior isn't the only part of a Bayesian analysis that allows this freedom. A practitioner is offered the same freedom in constructing the likelihood function, which defines the relation between the data and the model. Just as using nonsense priors will lead to a nonsense posterior, using a nonsense likelihood will also lead to a nonsense posterior. So in practice, ideally one should chose a likelihood function such that it is flexible enough to handle one's uncertainty, yet constrained enough to make inference with limited data possible.



    To demonstrate, consider two somewhat extreme examples. Suppose we are interested in determining the effect of a continuous-valued treatment on patients. In order to learn anything from the data, we must choice a model with that flexibility. If we were to simply leave out "treatment" from our set of regression parameters, no matter what our outcome was, we could report "given the data, our model estimates no effect of treatment". On the other extreme, suppose we have a model so flexible that we don't constrain the treatment effect to have a finite number of discontinuities. Then, (without strong priors, at least), we have almost no hope of having any sort of convergence of our estimated treatment effect no matter our sample size. Thus, our inference can be completely butchered by poor choices of likelihood functions, just as it could be by poor choices of priors.



    Of course, in reality we wouldn't chose either of these extremes, but we still do make these types of choices. How flexible a treatment effect are we going to allow: linear, splines, interaction with other variables? There's always the tradeoff between "sufficiently flexible" and "estimatable given our sample size". If we're smart, our likelihood functions should include reasonable constraints (i.e., treatment continuous treatment effect probably relatively smooth, probably doesn't include very high order interaction effects). This is essentially the same art as picking a prior: you want to constrain your inference with prior knowledge, and allow flexibility where there is uncertainty. The whole point of using data is to help constrain some of that the flexibility that stems from our uncertainty.



    In summary, a practitioner has freedom in selection of both the prior and the likelihood function. In order for an analysis to be in anyway meaningful, both choices should be a relatively good approximation of real phenomena.



    EDIT:



    In the comments, @nanoman brings up an interesting take on the problem. One way we can think that the likelihood function is a generic, non-subjective function. As such, all possible models can be included in the functional form likelihood before the prior. But typically, the prior only puts positive probability on a finite set of functional forms of the likelihood. Thus, without the prior, inference is impossible as the likelihood would be too flexible to ever make any form of inference.



    While this isn't the universally accepted definition of prior and likelihood function, this view does have a few advantages. For one, this is very natural in Bayesian model selection. In this case, rather than just putting priors on parameters of a single model, the prior puts probability over a set of competing models. But second, and I believe more to @nanoman's point, is that this view cleanly divides inference into subjective (prior) and non-subjective (likelihood function). This is nice, because it clearly demonstrates one cannot learn anything without some subjective constraints as the likelihood would be too flexible. It also clearly demonstrates that once someone hands you a tractable likelihood function, some subjective information must have snuck in.







    share|cite|improve this answer














    share|cite|improve this answer



    share|cite|improve this answer








    edited Apr 15 at 15:29

























    answered Apr 14 at 19:02









    Cliff ABCliff AB

    13.9k12567




    13.9k12567








    • 1




      $begingroup$
      I disagree: The prior is the only part with this freedom, and this is sometimes considered a strength of Bayesian analysis (all the assumptions in one place). What you describe as choosing the likelihood is really just a version of choosing the prior. The prior is defined over the space of all possible models. Each model has a well-defined likelihood function. If we limit the analysis to a particular type of model, then we are choosing a prior restricted to this subspace. There is no fundamental distinction between model form uncertainty and model parameter uncertainty.
      $endgroup$
      – nanoman
      Apr 15 at 1:01






    • 1




      $begingroup$
      @nanoman: I'm not sure we really disagree. Whether you view the likelihood as infinitely flexible, but constrained by the prior, or just constrained is a matter of philosophy, but not mathematics; either way, the function defined as $p(d|theta)$ given by any formula written explicitly as $p(d|theta)p(theta)$ is subjective in nature.
      $endgroup$
      – Cliff AB
      Apr 15 at 1:14










    • $begingroup$
      I would put the argument this way: The other approaches are sneaking a prior in the back door rather than stating it up front. Bayesian statistics allows us to put all the subjectivity in one construct, the prior. "How flexible a treatment effect are we going to allow" is decided in the prior, so I disagree with considering it as separate from the prior. Think of Bayesian model selection where the prior puts some weight on a null model, some weight on a simple (say linear) model, and some weight on a complex model. Or, if we're confident, we take a prior with support on just one of these.
      $endgroup$
      – nanoman
      Apr 15 at 1:32










    • $begingroup$
      The point is that the prior is zero on all the models we aren't considering.
      $endgroup$
      – nanoman
      Apr 15 at 1:32










    • $begingroup$
      Saw your comment edit. My perspective is that $theta$ includes all the information (model form and parameters) needed to uniquely define $p(d|theta)$. That is, $theta$ is really just an index to (potentially all possible) distributions on $d$ (likelihoods). However we index them, the key part is the prior $p(theta)$ (mostly zero) which tells us which such models we contemplate and with what weight.
      $endgroup$
      – nanoman
      Apr 15 at 1:43














    • 1




      $begingroup$
      I disagree: The prior is the only part with this freedom, and this is sometimes considered a strength of Bayesian analysis (all the assumptions in one place). What you describe as choosing the likelihood is really just a version of choosing the prior. The prior is defined over the space of all possible models. Each model has a well-defined likelihood function. If we limit the analysis to a particular type of model, then we are choosing a prior restricted to this subspace. There is no fundamental distinction between model form uncertainty and model parameter uncertainty.
      $endgroup$
      – nanoman
      Apr 15 at 1:01






    • 1




      $begingroup$
      @nanoman: I'm not sure we really disagree. Whether you view the likelihood as infinitely flexible, but constrained by the prior, or just constrained is a matter of philosophy, but not mathematics; either way, the function defined as $p(d|theta)$ given by any formula written explicitly as $p(d|theta)p(theta)$ is subjective in nature.
      $endgroup$
      – Cliff AB
      Apr 15 at 1:14










    • $begingroup$
      I would put the argument this way: The other approaches are sneaking a prior in the back door rather than stating it up front. Bayesian statistics allows us to put all the subjectivity in one construct, the prior. "How flexible a treatment effect are we going to allow" is decided in the prior, so I disagree with considering it as separate from the prior. Think of Bayesian model selection where the prior puts some weight on a null model, some weight on a simple (say linear) model, and some weight on a complex model. Or, if we're confident, we take a prior with support on just one of these.
      $endgroup$
      – nanoman
      Apr 15 at 1:32










    • $begingroup$
      The point is that the prior is zero on all the models we aren't considering.
      $endgroup$
      – nanoman
      Apr 15 at 1:32










    • $begingroup$
      Saw your comment edit. My perspective is that $theta$ includes all the information (model form and parameters) needed to uniquely define $p(d|theta)$. That is, $theta$ is really just an index to (potentially all possible) distributions on $d$ (likelihoods). However we index them, the key part is the prior $p(theta)$ (mostly zero) which tells us which such models we contemplate and with what weight.
      $endgroup$
      – nanoman
      Apr 15 at 1:43








    1




    1




    $begingroup$
    I disagree: The prior is the only part with this freedom, and this is sometimes considered a strength of Bayesian analysis (all the assumptions in one place). What you describe as choosing the likelihood is really just a version of choosing the prior. The prior is defined over the space of all possible models. Each model has a well-defined likelihood function. If we limit the analysis to a particular type of model, then we are choosing a prior restricted to this subspace. There is no fundamental distinction between model form uncertainty and model parameter uncertainty.
    $endgroup$
    – nanoman
    Apr 15 at 1:01




    $begingroup$
    I disagree: The prior is the only part with this freedom, and this is sometimes considered a strength of Bayesian analysis (all the assumptions in one place). What you describe as choosing the likelihood is really just a version of choosing the prior. The prior is defined over the space of all possible models. Each model has a well-defined likelihood function. If we limit the analysis to a particular type of model, then we are choosing a prior restricted to this subspace. There is no fundamental distinction between model form uncertainty and model parameter uncertainty.
    $endgroup$
    – nanoman
    Apr 15 at 1:01




    1




    1




    $begingroup$
    @nanoman: I'm not sure we really disagree. Whether you view the likelihood as infinitely flexible, but constrained by the prior, or just constrained is a matter of philosophy, but not mathematics; either way, the function defined as $p(d|theta)$ given by any formula written explicitly as $p(d|theta)p(theta)$ is subjective in nature.
    $endgroup$
    – Cliff AB
    Apr 15 at 1:14




    $begingroup$
    @nanoman: I'm not sure we really disagree. Whether you view the likelihood as infinitely flexible, but constrained by the prior, or just constrained is a matter of philosophy, but not mathematics; either way, the function defined as $p(d|theta)$ given by any formula written explicitly as $p(d|theta)p(theta)$ is subjective in nature.
    $endgroup$
    – Cliff AB
    Apr 15 at 1:14












    $begingroup$
    I would put the argument this way: The other approaches are sneaking a prior in the back door rather than stating it up front. Bayesian statistics allows us to put all the subjectivity in one construct, the prior. "How flexible a treatment effect are we going to allow" is decided in the prior, so I disagree with considering it as separate from the prior. Think of Bayesian model selection where the prior puts some weight on a null model, some weight on a simple (say linear) model, and some weight on a complex model. Or, if we're confident, we take a prior with support on just one of these.
    $endgroup$
    – nanoman
    Apr 15 at 1:32




    $begingroup$
    I would put the argument this way: The other approaches are sneaking a prior in the back door rather than stating it up front. Bayesian statistics allows us to put all the subjectivity in one construct, the prior. "How flexible a treatment effect are we going to allow" is decided in the prior, so I disagree with considering it as separate from the prior. Think of Bayesian model selection where the prior puts some weight on a null model, some weight on a simple (say linear) model, and some weight on a complex model. Or, if we're confident, we take a prior with support on just one of these.
    $endgroup$
    – nanoman
    Apr 15 at 1:32












    $begingroup$
    The point is that the prior is zero on all the models we aren't considering.
    $endgroup$
    – nanoman
    Apr 15 at 1:32




    $begingroup$
    The point is that the prior is zero on all the models we aren't considering.
    $endgroup$
    – nanoman
    Apr 15 at 1:32












    $begingroup$
    Saw your comment edit. My perspective is that $theta$ includes all the information (model form and parameters) needed to uniquely define $p(d|theta)$. That is, $theta$ is really just an index to (potentially all possible) distributions on $d$ (likelihoods). However we index them, the key part is the prior $p(theta)$ (mostly zero) which tells us which such models we contemplate and with what weight.
    $endgroup$
    – nanoman
    Apr 15 at 1:43

















    6












    $begingroup$

    If you already have a belief about the distribution of your data after seeing the data, then why would you estimate its parameters from the data at all? You already have the parameters.






    share|cite|improve this answer









    $endgroup$









    answered Apr 14 at 20:44









    Aksakal









    • 1




      $begingroup$
      Indeed, this is where my confusion lies.
      $endgroup$
      – purpleostrich
      Apr 15 at 0:01

























    5












    $begingroup$

    In many statistical problems you have some data, let's denote it $X$, and you want to learn about some "parameter" $\theta$ of the distribution of the data, i.e. to calculate things of the form $\theta|X$ (a conditional distribution, a conditional expectation, etc.). There are several ways this can be achieved, maximum likelihood among them, and without getting into a discussion of which is better, you can consider using Bayes' theorem as one of them. One of its advantages is that it gives you the answer directly: if you know the conditional distribution of the data given the parameter (the likelihood) and the distribution of the parameter (the prior), then you simply calculate



    $$
    \overbrace{p(\theta|X)}^\text{posterior} = \frac{\overbrace{p(X|\theta)}^\text{likelihood}\;\overbrace{p(\theta)}^\text{prior}}{p(X)}
    $$



    The likelihood is the conditional distribution of your data, so choosing it is a matter of understanding your data and picking a distribution that approximates it well; it is a rather uncontroversial concept. As for the prior, notice that for the above formula to work you need one. In a perfect world you would know the distribution of $\theta$ a priori and apply it to get the posterior. In the real world, the prior is something you assume, given your best knowledge, and plug into Bayes' theorem. You could choose an "uninformative" prior $p(\theta) \propto 1$, but there are many arguments that such priors are neither "uninformative" nor reasonable. The point is that there are many ways to come up with a distribution to use as a prior. Some consider priors a blessing, since they make it possible to bring your out-of-data knowledge into the model, while others consider them problematic for exactly the same reason.



    To answer your question: sure, you can assume that the distribution of the parameter given the data is whatever you like. We make decisions every day based on assumptions that are not always rigorously validated. However, the difference between the prior and the posterior is that the posterior is something you learned from the data (and the prior). If it is instead just a guess, then it is not a posterior any more. As for why we allow ourselves to "make up" priors, there are two answers, depending on whom you ask: (a) for the machinery to work we need some prior, or (b) we know something in advance, want to include it in our model, and priors make that possible. In either case, we usually expect the data, rather than the prior, to have the "final word".
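    As a small numerical illustration of the formula above (a sketch with made-up numbers, using an assumed Beta(2, 2) prior and 7 heads observed in 10 coin flips), a grid approximation makes the prior-times-likelihood structure explicit:

        import numpy as np
        from scipy.stats import beta, binom

        theta = np.linspace(0.001, 0.999, 999)   # grid of parameter values
        prior = beta.pdf(theta, 2, 2)            # p(theta): an assumed Beta(2, 2) prior
        likelihood = binom.pmf(7, 10, theta)     # p(X | theta): 7 heads in 10 flips

        posterior = likelihood * prior           # numerator of Bayes' theorem
        posterior /= posterior.sum()             # normalising plays the role of p(X)

        print(np.sum(theta * posterior))         # ~ 9/14, the mean of the exact Beta(9, 5) posterior

    With a conjugate prior the grid is unnecessary (the exact posterior is Beta(2 + 7, 2 + 3)), but the grid version shows that once the likelihood and the prior are fixed, the posterior is determined; there is nothing left to "make up".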






    share|cite|improve this answer









    $endgroup$


















        answered Apr 14 at 20:29









    Tim
























            0












            $begingroup$

            Philosophically, there is nothing wrong with “eliciting a posterior.” It’s a bit more difficult to do in a coherent manner than with priors (because you need to respect the likelihood), but IMO you are asking a really good question.



            To turn this into something practical, “making up” a posterior is a potentially useful way to elicit a prior. That is, I take all data realizations $X = x$ and ask myself what the posterior $\pi(\theta \mid x)$ would be. If I do this in a fashion that is consistent with the likelihood, then I will have equivalently specified $\pi(\theta)$. This is sometimes called “downdating.” Once you realize this, you will see that “making up the prior” and “making up the posterior” are basically the same thing. As I said, it is tricky to do this in a manner consistent with the likelihood, but even if you do it for just a few values of $x$ it can be very illuminating about what a good prior will look like.
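            A rough grid sketch of this “downdating” idea, with entirely made-up choices: assume the likelihood is $x \sim N(\theta, 1)$, and suppose that after seeing $x = 2$ I decide my posterior for $\theta$ should be $N(1.5, 0.8^2)$. The prior consistent with that statement is proportional to the stated posterior divided by the likelihood at $x = 2$.

                import numpy as np
                from scipy.stats import norm

                theta = np.linspace(-10, 10, 2001)
                x_obs = 2.0

                stated_posterior = norm.pdf(theta, loc=1.5, scale=0.8)  # what I claim to believe after x = 2
                likelihood = norm.pdf(x_obs, loc=theta, scale=1.0)      # p(x | theta) at the observed x

                implied_prior = stated_posterior / likelihood           # Bayes' theorem run backwards
                implied_prior /= np.sum(implied_prior) * (theta[1] - theta[0])  # normalise on the grid

                print(np.sum(theta * implied_prior) * (theta[1] - theta[0]))    # mean of the implied prior

            Eliciting posteriors for several different values of $x$ and checking that they all imply (roughly) the same prior is one concrete way of doing this consistently with the likelihood.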






            share|cite|improve this answer









            $endgroup$













            answered Apr 14 at 23:57









            guy













            • $begingroup$
              Can you motivate why you would want to do this? I would guess you could be thinking of something like how one would want to use a spike and slab prior. Of course, the irony here is that we are perverting Bayesian statistics in order to obtain estimators whose frequentist properties we prefer.
              $endgroup$
              – Cliff AB
              Apr 15 at 0:43










            • $begingroup$
              @Cliff this type of reasoning can suggest, for example, why we want heavy-tailed priors in the normal means problem. Suppose I want a prior which is symmetric, has median 0, and has some natural scale $s$. I can ask “what would I believe about $\theta$ if I observed data $x = Bs$ for some large $B$?” For most problems, an honest assessment of what I would believe about $\theta$ would preclude the use of, for example, a normal prior.
              $endgroup$
              – guy
              Apr 15 at 2:27
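              A quick numerical check of that point, with made-up numbers: a normal-means likelihood $x \sim N(\theta, 1)$, prior scale $s = 1$, and a surprising observation $x = 6$, comparing an $N(0, 1)$ prior with a heavy-tailed Cauchy(0, 1) prior.

                  import numpy as np
                  from scipy.stats import cauchy, norm

                  theta = np.linspace(-20, 30, 5001)
                  x_obs = 6.0
                  likelihood = norm.pdf(x_obs, loc=theta, scale=1.0)

                  for name, prior in [("Normal(0, 1) prior", norm.pdf(theta, 0, 1)),
                                      ("Cauchy(0, 1) prior", cauchy.pdf(theta, 0, 1))]:
                      post = likelihood * prior
                      post /= post.sum()
                      print(name, np.sum(theta * post))  # posterior mean on the grid

              The normal prior pulls the posterior mean roughly halfway back to 0, while the heavy-tailed prior leaves it much closer to the observed 6, which is the kind of honest "what would I believe" assessment described above.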

















