Given some slight recuperation delays, interested readers might wish to poke around the multiple layers of goodies on the left-hand side of this web page, wherein all manner of foundational/statistical controversies are considered. In a recent attempt by Aris Spanos and me to address the age-old criticisms from the perspective of the "error statistical philosophy," we delineate 13 criticisms. Here they are:
(#1) Error statistical tools forbid using any background knowledge.
(#2) All statistically significant results are treated the same.
(#3) The p-value does not tell us how large a discrepancy is found.
(#4) With large enough sample size even a trivially small discrepancy from the null can be detected (see the sketch after this list).
(#5) Whether there is a statistically significant difference from the null depends on which is the null and which is the alternative.
(#6) Statistically insignificant results are taken as evidence that the null hypothesis is true.
(#7) Error probabilities are invariably misinterpreted as posterior probabilities.
(#8) Error statistical tests are justified only in cases where there is a very long (if not infinite) series of repetitions of the same experiment.
(#9) Specifying statistical tests is too arbitrary.
(#10) We should be doing confidence interval estimation rather than significance tests.
(#11) Error statistical methods take into account the intentions of the scientists analyzing the data.
(#12) All models are false anyway.
(#13) Testing assumptions involves illicit data-mining.
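As a quick illustration of #4 (and, relatedly, of #3's point that a small p-value by itself does not convey the size of the discrepancy), here is a minimal Python sketch. It assumes a one-sided, one-sample z-test of H0: mu = 0 with known sigma = 1; the discrepancy value and sample sizes are illustrative choices of mine, not taken from the paper.

```python
# Minimal sketch (illustrative, not from the post): a one-sided, one-sample
# z-test of H0: mu = 0 vs H1: mu > 0 with known sigma = 1. Holding a trivially
# small true discrepancy fixed, the z statistic grows like sqrt(n), so the
# p-value can be driven as low as you like simply by increasing n.
from math import erf, sqrt

def upper_tail_p(z):
    """One-sided p-value P(Z >= z) for a standard normal test statistic."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

discrepancy = 0.01   # hypothetical true mean: a "trivially small" departure from 0
sigma = 1.0

for n in (100, 10_000, 1_000_000, 100_000_000):
    # Expected z statistic when the observed sample mean equals the true mean
    z = discrepancy * sqrt(n) / sigma
    print(f"n = {n:>11,}   z = {z:7.2f}   p = {upper_tail_p(z):.3g}")
```

Running it shows the same 0.01-sigma discrepancy going from p of roughly 0.46 at n = 100 to p below 10^-23 at n = 1,000,000, which is exactly the large-n effect the criticism points to.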
HAVE WE LEFT ANY OUT?
…more soon.
(for problems accessing links, please write to: jemille6@vt.edu)