[Figure: note the figure lurking in the background.*]
A common assertion (of which I was reminded in Leiden*) is that in scientific practice, by and large, the frequentist sampling theorist (error statistician) ends up in essentially the "same place" as the Bayesian, as if to downplay the importance of disagreements within the Bayesian family, let alone between Bayesians and frequentists. Such an utterance, in my experience, is indicative of a frequentist in exile (as described on this blog). [1] Perhaps the claim makes the frequentist feel less in exile; but it also renders any subsequent claim to prefer the frequentist philosophy just that---a matter of preference, without a pressing foundational imperative. Yet even if one were to grant an agreement in numbers, it is altogether crucial to ascertain who or what is really doing the work. If we do not understand what is really responsible for the success stories of statistical inference, we cannot hope to improve those methods, adjudicate rival assessments when they arise, or get ideas for extending and developing tools when entering brand-new arenas. Clearly, understanding the underlying foundations of one or another approach is crucial for a philosopher of statistics, but practitioners, too, should care, at least some of the time.
Some who downplay discordance, even those who hold in their heart of hearts that frequentist principles and guarantees are really behind statistical success stories, would just as soon keep the Bayesian talons from coming out. (Can’t we all just get along? as Rodney King asked.) Philosophical discussion and intellectual honesty should be free of talons of all sorts. Granting that practitioners have pressing applications to attend to, those with a bent toward foundations will hopefully share their insights, especially needful nowadays, as elements of current practice seem to be implicitly disinterring frequentist roots, even if only subliminally. As a first baby step, the deep (and fascinating) family feuds among Bayesians should be plainly acknowledged.
Notably, while one group of Bayesians insists we must introduce prior probability distributions (on an exhaustive set of hypotheses) if we are to properly take account of prior uncertainty, ignorance, or degree of belief (see Oct. 31 post), contemporary “reference” Bayesians---arguably the predominant Bayesian subgroup (yes?)---work assiduously to develop conventional priors that are not supposed to be considered expressions of uncertainty, ignorance, or degree of belief. While the first group of Bayesians lambastes frequentists for daring to evaluate a given set of data without the influence of a prior probability (perhaps attained through elicitations of degrees of belief), the latter group sees its research mission as finding conventional priors that have the least influence on resulting inferences, letting the data dominate. I am back to a question I asked long ago: if prior probabilities in hypotheses are intended to allow subjective background beliefs to influence statistical assessments of hypotheses, then why do we want them? And if the priors are designed to have minimal influence on any inferences, then why do we need them? As remarked in Cox and Mayo (2010, p. 301):
“Reference priors yield inferences with some good frequentist properties, at least in one-dimensional problems – a feature usually called matching. Although welcome, it falls short of showing their success as objective methods. First, as is generally true in science, the fact that a theory can be made to match known successes does not redound as strongly to that theory as did the successes that emanated from first principles or basic foundations. This must be especially so where achieving the matches seems to impose swallowing violations of its initial basic theories or principles.
Even if there are some cases where good frequentist solutions are more neatly generated through Bayesian machinery, it would show only their technical value for goals that differ fundamentally from their own. But producing identical numbers could only be taken as performing the tasks of frequentist inference by reinterpreting them to mean confidence levels and significance levels, not posteriors.”
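To make the quoted notion of “matching” concrete, here is a minimal simulation sketch in Python (using numpy and scipy); the binomial setup, the Jeffreys prior Beta(1/2, 1/2), and all numbers are my own stock illustration, not an example from Cox and Mayo:

```python
# Sketch: frequentist coverage of Bayesian credible intervals ("matching").
# Under the Jeffreys prior Beta(1/2, 1/2) for a binomial proportion, the
# posterior after observing x successes in n trials is Beta(x+1/2, n-x+1/2).
# We check how often the equal-tailed 95% credible interval covers the
# true proportion across repeated samples (a frequentist property).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p_true, reps = 50, 0.3, 10_000   # illustrative values, chosen arbitrarily

x = rng.binomial(n, p_true, size=reps)              # simulated data sets
lo = stats.beta.ppf(0.025, x + 0.5, n - x + 0.5)    # 2.5% posterior quantile
hi = stats.beta.ppf(0.975, x + 0.5, n - x + 0.5)    # 97.5% posterior quantile

coverage = np.mean((lo <= p_true) & (p_true <= hi))
print(f"Coverage of nominal 95% credible intervals: {coverage:.3f}")  # ~0.95
```

The coverage comes out close to the nominal 95%, which is the “welcome” matching property; but, as the quoted passage stresses, the agreement in numbers does not settle what the numbers mean: the interval is a posterior region by construction and a confidence interval only under a frequentist reinterpretation.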
In putting forth the “error statistical philosophy of science,” I have simply attempted to organize and weave together the main strands of what I take to be doing the work in frequentist statistical practice (strands also informally reflected outside statistical contexts). It claims to offer (at least one) clear and consistent interpretation of, as well as a rationale for, the right uses of frequentist methods. By contrast, I argue, Bayesian foundations today are in shambles. For specifics, see my contribution to the special RMM volume (2011). A key purpose of that contribution was, and is, to invite Bayesian philosophers (or practitioners) to supply their favorite Bayesian account with a new foundation. Even an admission that this is needed would be an important start. As it is now, we do not even get a clear interpretation of posterior probability inferences. If priors are not probabilities, what then is the interpretation of a posterior? One may stipulate, by definition, that posteriors based on a conventional prior are objective degrees of belief, but a stipulation is not an argument.
Andrew Gelman, in his RMM contribution, remarks:
“Nowadays ‘Bayesian’ is often taken to be a synonym for rationality, and I can see how this can irritate thoughtful philosophers and statisticians alike: To start with, lots of rational thinking—even lots of rational statistical inference— does not occur within the Bayesian formalism. And, to look at it from the other direction, lots of self-proclaimed Bayesian inference hardly seems rational at all. And in what way is ‘subjective probability’ a model for rational scientific inquiry? On the contrary, subjectivity and rationality are in many ways opposites!”
Having abandoned the traditional foundational justification for Bayesianism in terms of rationality, some defend their methods by pointing to their “usefulness.” Unfortunately, the question of what, if any, general methods, reasoning strategies, or underlying rationales are actually doing the work in getting the results they value is left glaringly open. Some seem to think that if a method is called Bayesian, or occurs under that umbrella, it follows that its results are to be credited to Bayesian, as opposed to frequentist, capabilities. Until and unless it is shown that a valuable result obtains because of a given method, the method gets no gold stars!
*The painting is by van Mieris from Leiden (is that Rev Bayes lurking?).
[1] Do Bayesians also say this at times? I would be interested to hear of cases.