Let me pick up where I left off in “Neyman’s Nursery,” [built to house Giere's statistical papers-in-exile]--please see the Oct. 22 post. The main goal of the discussion is to get us to exercise correctly our "will to understand power", if only little by little. One of the two surprising papers I came across the night our house was hit by lightning has the tantalizing title “The Problem of Inductive Inference” (Neyman 1955). It reveals a use of statistical tests strikingly different from the long-run behavior construal most associated with Neyman. Surprisingly, too, Neyman is talking to none other than the logical positivist philosopher of confirmation, Rudolf Carnap:
I am concerned with the term “degree of confirmation” introduced by Carnap. …We have seen that the application of the locally best one-sided test to the data … failed to reject the hypothesis [that the n observations come from a source in which the null hypothesis is true]. The question is: does this result “confirm” the hypothesis that H0 is true of the particular data set? (Neyman 1955, pp. 40-41)

Neyman continues:
The answer … depends very much on the exact meaning given to the words “confirmation,” “confidence,” etc. If one uses these words to describe one’s intuitive feeling of confidence in the hypothesis tested H0, then…. the attitude described is dangerous.… [T]he chance of detecting the presence [of discrepancy from the null], when only [n] observations are available, is extremely slim, even if [the discrepancy is present]. Therefore, the failure of the test to reject H0 cannot be reasonably considered as anything like a confirmation of H0. The situation would have been radically different if the power function [corresponding to a discrepancy of interest] were, for example, greater than 0.95.

The general conclusion is that it is a little rash to base one’s intuitive confidence in a given hypothesis on the fact that a test failed to reject this hypothesis. A more cautious attitude would be to form one’s intuitive opinion only after studying the power function of the test applied.
Neyman alludes to a one-sided test of the mean of a Normal distribution with n iid samples, and known standard deviation, call it test T+. (Whether Greek symbols will appear where they should, I cannot say; it's being worked on back at Elba).
H0: µ ≤ µ0 against H1: µ > µ0.
The test statistic d(X) is the standardized sample mean, d(X) = √n(X̄ − µ0)/σ.
The test rule: Infer a (positive) discrepancy from µ0 iff d(x0) > cα, where cα corresponds to a difference statistically significant at the α level.
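For concreteness, here is a minimal sketch of test T+ in Python. The observed sample mean, µ0 = 0, σ = 1, n = 10, and α = 0.025 are my own illustrative choices, not values from Neyman or Carnap:

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical inputs, purely for illustration (not Neyman's or Carnap's numbers):
mu0, sigma, n, alpha = 0.0, 1.0, 10, 0.025
xbar = 0.1                                 # hypothetical observed sample mean

d_obs = sqrt(n) * (xbar - mu0) / sigma     # d(x0), the standardized sample mean (about 0.32)
c_alpha = norm.ppf(1 - alpha)              # cutoff c_alpha (about 1.96 for alpha = 0.025)
print(d_obs > c_alpha)                     # False: the test fails to reject H0
```

With these toy numbers the test fails to reject, which sets up exactly the kind of negative result at issue in Carnap's example.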
In Carnap's example the test could not reject the null hypothesis, i.e., d(x0) ≤ cα, but (to paraphrase Neyman) the problem is that the chance of detecting the presence of discrepancy δ from the null, with so few observations, is extremely slim, even if [δ is present].
We are back to our old friend: interpreting negative results!
“One may be confident in the absence of that discrepancy only if the power to detect it were high.”
The power of the test T+ to detect discrepancy δ:
(1) P(d(X) > cα; µ = µ0 + δ)
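Since d(X) is Normal with mean √n·δ/σ and unit variance when µ = µ0 + δ, (1) works out to 1 − Φ(cα − √n·δ/σ). A minimal sketch, continuing the illustrative numbers above (again my assumptions, not Neyman's):

```python
from math import sqrt
from scipy.stats import norm

def power(delta, sigma=1.0, n=10, alpha=0.025):
    """(1): P(d(X) > c_alpha; mu = mu0 + delta) for test T+ with known sigma."""
    c_alpha = norm.ppf(1 - alpha)
    shift = sqrt(n) * delta / sigma        # mean of d(X) when mu = mu0 + delta
    return 1 - norm.cdf(c_alpha - shift)

print(round(power(0.2), 2))                # about 0.09: with only n = 10 observations,
                                           # the chance of detecting a 0.2*sigma discrepancy is slim
```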
It is interesting to hear Neyman talk this way since it is at odds with the more behavioristic construal he usually championed. He sounds like a Cohen-style power analyst! Still, power is calculated relative to an outcome just missing the cutoff cα. This is, in effect, the worst case of a negative (non-significant) result, and if the actual outcome corresponds to a larger p-value, that should be taken into account in interpreting the results. It is more informative, therefore, to look at the probability of getting a worse fit (with the null hypothesis) than you did:
(2) P(d(X) > d(x0); µ = µ0 + δ)
In this example, this gives a measure of the severity (or degree of corroboration) for the inference µ < µ0 + δ.
Although (1) may be low, (2) may be high (For numbers, see Mayo and Spanos 2006).
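Here is a minimal sketch of (2), with a purely illustrative comparison to (1). These are not the numbers in Mayo and Spanos (2006), just my own toy values, reusing d(x0) ≈ 0.32 from the sketch above and taking δ = 0.5:

```python
from math import sqrt
from scipy.stats import norm

def severity(delta, d_obs, sigma=1.0, n=10):
    """(2): P(d(X) > d(x0); mu = mu0 + delta), assessed for the inference
    mu < mu0 + delta after a negative result with observed d(x0) = d_obs."""
    shift = sqrt(n) * delta / sigma        # mean of d(X) when mu = mu0 + delta
    return 1 - norm.cdf(d_obs - shift)

c_alpha = norm.ppf(1 - 0.025)
delta, d_obs = 0.5, 0.316
print(round(1 - norm.cdf(c_alpha - sqrt(10) * delta), 2))   # (1) power: about 0.35 -- low
print(round(severity(delta, d_obs), 2))                     # (2) severity: about 0.90 -- high
```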
Spanos and I (Mayo and Spanos 2006) couldn't find a term in the literature defined precisely this way--the way I'd defined it in Mayo (1996) and before. We were thinking at first of calling it "attained power" but then came across what some have called “observed power" which is very different (and very strange). Those measures are just like ordinary power but calculated assuming the value of the mean equals the observed mean! (Why anyone would want to do this and then apply power analytic reasoning is unclear. I'll come back to this in my next post.) Anyway, we refer to it as the Severity Interpretation of "Acceptance" (SIA) in Mayo and Spanos 2006.
The claim in (2) could also be made out by viewing the p-value as a random variable and calculating its distribution under various alternatives (Cox 2006, 25). This reasoning yields a core frequentist principle of evidence (FEV) in Mayo and Cox (2010, p. 256):
FEV[1]: A moderate p-value is evidence of the absence of a discrepancy d from H0 only if there is a high probability the test would have given a worse fit with H0 (i.e., a smaller p-value) were a discrepancy d to exist.
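To see the p-value-as-random-variable formulation concretely: p(X) = 1 − Φ(d(X)) decreases as d(X) increases, so P(p(X) < p(x0); µ = µ0 + δ) is just (2) again. A small check with the same toy numbers (mine, not Cox's):

```python
from math import sqrt
from scipy.stats import norm

d_obs, delta, n, sigma = 0.316, 0.5, 10, 1.0
p_obs = 1 - norm.cdf(d_obs)                     # observed p-value, about 0.38 (a "moderate" p-value)
shift = sqrt(n) * delta / sigma
smaller_p = 1 - norm.cdf(d_obs - shift)         # P(p(X) < p_obs; mu = mu0 + delta), about 0.90
print(round(p_obs, 2), round(smaller_p, 2))
```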
It is important to see that it is only in the case of a negative result that severity for various inferences is in the same direction as power. In the case of significant results, d(x) in excess of the cutoff, the opposite concern arises: namely, that the test is too sensitive. So severity is always relative to the particular inference being entertained: speaking of the “severity of a test” simpliciter is an incomplete statement in this account. These assessments enable sidestepping classic fallacies of tests that are either too sensitive or not sensitive enough.[2]
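For contrast, here is how the same reasoning flips for a significant result. This companion calculation is not spelled out in this post (see Mayo and Spanos 2006 for the official treatment), so take it as a hedged sketch: for inferring µ > µ0 + δ after d(x0) > cα, one asks for the probability of a less discordant result under µ = µ0 + δ.

```python
from math import sqrt
from scipy.stats import norm

def severity_reject(delta, d_obs, sigma=1.0, n=10):
    """Sketch: severity for inferring mu > mu0 + delta after a significant result,
    P(d(X) <= d(x0); mu = mu0 + delta). It falls as delta grows, so a just-significant
    result from a very sensitive test warrants only small discrepancies -- the flip
    in direction noted above."""
    return norm.cdf(d_obs - sqrt(n) * delta / sigma)
```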
________________________________________
1. The full version of our frequentist principle of evidence FEV also covers the interpretation of a small p-value: x is evidence of a discrepancy d from H0 iff, were H0 a correct description of the mechanism generating x, then with high probability a less discordant result would have occurred.
2. By making a SEV assessment relevant to the inference under consideration, we obtain a measure where high (low) values always correspond to good (poor) evidential warrant.
It didn't have to be done this way, but I decided it was best, even though it means appropriately swapping out the claim H for which one wants to assess SEV.
Cohen, J. (1988), Statistical Power Analysis for the Behavioral Sciences, 2nd ed., Hillsdale, NJ: Erlbaum.
Cox, D. R. (2006), Principles of Statistical Inference, Cambridge: Cambridge University Press.
Mayo, D. and Spanos, A. (2006), “Severe Testing as a Basic Concept in a Neyman-Pearson Philosophy of Induction,” British Journal for the Philosophy of Science, 57: 323-357.
Mayo, D. and Cox, D. (2010), “Frequentist Statistics as a Theory of Inductive Inference,” in D. Mayo and A. Spanos (eds.) (2010), pp. 247-275.
Mayo, D. and Spanos, A. (eds.) (2010), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science, Cambridge: Cambridge University Press.
Neyman, J. (1955), “The Problem of Inductive Inference,” Communications on Pure and Applied Mathematics, VIII: 13-46.