Saturday, December 31, 2011

Midnight With Birnbaum


You know how in that recent movie, “Midnight in Paris,” the main character (I forget who plays him, I saw it on a plane) is a writer finishing a novel, and he steps into a cab that mysteriously picks him up at midnight and transports him back in time, where he gets to run his work by such famous authors as Hemingway and Virginia Woolf?  He is impressed when his work earns their approval, and he comes back each night in the same mysterious cab… Well, imagine an error statistical philosopher is picked up in a mysterious taxi at midnight (New Year’s Eve 2011) and is taken back fifty years and, lo and behold, finds herself in the company of Allan Birnbaum.[i]

ERROR STATISTICIAN: It’s wonderful to meet you Professor Birnbaum; I’ve always been extremely impressed with the important impact your work has had on philosophical foundations of statistics.  I happen to be writing on your famous argument about the likelihood principle (LP).  (whispers: I can’t believe this!)


Thursday, December 29, 2011

JIM BERGER ON JIM BERGER!

Fortunately, we have Jim Berger interpreting himself this evening (see the December 11 post).
Jim Berger writes: 
A few comments:   
1. Objective Bayesian priors are often improper (i.e., have infinite total mass), but this is not a problem when they are developed correctly. But not every improper prior is satisfactory. For instance, the constant prior is known to be unsatisfactory in many situations. The 'solution' pseudo-Bayesians often use is to choose a constant prior over a large but bounded set (a 'weakly informative' prior), saying it is now proper and so all is well. This is not true; if the constant prior on the whole parameter space is bad, so will be the constant prior over the bounded set. The problem is, in part, that some people confuse proper priors with subjective priors and, having learned that true subjective priors are fine, incorrectly presume that weakly informative proper priors are fine.
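To see Berger's point concretely, here is a toy numerical sketch of my own (purely illustrative; it is not Berger's example). Truncating a constant prior to a large bounded set does make it proper, but the posterior it yields is essentially identical to the one given by the improper constant prior, so nothing of substance is improved:

import numpy as np

# Toy illustration (my own construction): one observation x ~ N(mu, 1),
# with a constant prior on mu truncated to [-B, B]. As B grows, the
# posterior mean converges to the improper-flat-prior answer (x itself),
# so "making the prior proper" by bounding it changes nothing of substance.

x = 2.0

def posterior_mean(B, grid_size=200_001):
    mu = np.linspace(-B, B, grid_size)    # uniform grid over the bounded set
    w = np.exp(-0.5 * (x - mu) ** 2)      # flat prior: posterior is proportional to the likelihood
    return (mu * w).sum() / w.sum()       # grid weights cancel on a uniform grid

for B in (3, 10, 100, 5000):
    print(f"B = {B:5d}: posterior mean = {posterior_mean(B):.4f}")

The output approaches x = 2.0, the improper-flat-prior answer. Whether that answer is bad depends on the problem, of course; the sketch only shows that truncation cannot rescue it if it is.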

Monday, December 26, 2011

Contributed Deconstructions: Irony & Bad Faith 3

My efficient Errorstat Blogpeople1 have put forward the following three reader-contributed interpretive efforts2, resulting from the December 11 “deconstruction” exercise (mine, from the earlier blogpost, is at the end), of what I consider:


“….an especially intriguing remark by Jim Berger that I think bears upon the current mindset (Jim is aware of my efforts):


Too often I see people pretending to be subjectivists, and then using “weakly informative” priors that the objective Bayesian community knows are terrible and will give ridiculous answers; subjectivism is then being used as a shield to hide ignorance. . . . In my own more provocative moments, I claim that the only true subjectivists are the objective Bayesians, because they refuse to use subjectivism as a shield against criticism of sloppy pseudo-Bayesian practice. (Berger 2006, 463)” (From blogpost, Dec. 11, 2011)
_________________________________________________
Andrew Gelman:
The statistics literature is big enough that I assume there really is some bad stuff out there that Berger is reacting to, but I think that when he's talking about weakly informative priors, Berger is not referring to the work in this area that I like, as I think of weakly informative priors as specifically being designed to give answers that are _not_ "ridiculous."

Sunday, December 25, 2011

Little Bit of Blog Log-ic

I have a logic license
My “Logic” chariot, crunched from behind before my travels as you might recall (blogpost Nov. 15, “Logic Takes a Bit of a Hit”), has been robustly repaired and beautifully corrected, all in my absence!1  So here’s a little bit of blog logic….

In a couple of the early posts (e.g., the Sept. 9 post), some logical terms were noted (e.g., the valid form of modus tollens); but it can’t hurt to review them with a mind toward the specific patterns of argument that arise in the Birnbaum case.
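For reference, here is the schema spelled out (my notation, with H a hypothesis and E a predicted effect), alongside the invalid form it is often confused with:

\[
\text{Modus tollens (valid):}\quad
\frac{H \rightarrow E \qquad \lnot E}{\lnot H}
\qquad\qquad
\text{Affirming the consequent (invalid):}\quad
\frac{H \rightarrow E \qquad E}{H}
\]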

Thursday, December 22, 2011

The 3 stages of the acceptance of novel truths


 There is an often-heard slogan about the stages of the acceptance of novel truths:

First people deny a thing. 
Then they belittle it. 
Then they say they knew it all along.

I don’t know who was first to state it in one form or another.  Here’s Schopenhauer with a slightly different variant:
"All truth passes through three stages: First, it is ridiculed; Second, it is violently opposed; and Third, it is accepted as self-evident."        - Arthur Schopenhauer

Having recently presented my paper criticizing the Birnbaum result on the likelihood principle (LP),[1] I find the reception of my analysis to be somewhere around stage two, in some cases moving into stage three (see my blogposts of December 6 and 7, 2011).

But it is time to make good on my promise to return to the concerns of those (at least in the blogosphere) who were or are still at the first stage of denial (or Schopenhauer's second stage of violent opposition).  Doing so will advance our goal of drilling deeply into some fundamental, puzzling misunderstandings of frequentist error statistical (or sampling) theory.

Wednesday, December 21, 2011

Censorship of science?

Does this really count as censorship of science? Is it warranted? Is it likely to be effective in avoiding "Contagion"?
The government advisory board that oversees biosecurity in the U.S. is asking the scientific journals Nature and Science to censor details of recent studies on bird flu due to concerns about biological terrorism. Researchers created mutations of the H5N1 virus, making it transferable between mammals through the air. In 60 percent of human cases, this strain of avian flu is fatal. At present, only 350 people worldwide have died because of the flu, but only because, so far, it can be contracted solely via direct contact with infected birds.
D. A. Henderson, Christine Gorman

Tuesday, December 20, 2011

Do chronic cell phone users have an exaggerated sense of self-importance?

Do chronic cell phone users have an exaggerated sense of self-importance?  What is it that’s so annoying about a person talking at length on their cell phone in public?  (I was reminded of this while waiting in airport lines of late.)  The same person could be talking to an embodied person and I wouldn’t notice; it certainly wouldn't irk me this way. But put me in line with this person on their cell phone—or put us on the bus or in an elevator together—and I can’t help feeling just the tiniest bit outraged.  But by what?  Their bloated sense of self-importance?  Why do I find it so obnoxious?  I’m guessing I’m not alone*, yet it seems somewhat irrational (assuming the issue isn’t the loudness).  It’s worst, I think, when the user is just shooting the breeze, or sounding as if they’re closing a deal, or describing (in “real time”) being in line.


Monday, December 19, 2011

Deconstructing and Deep-Drilling* 2

Constructing Thebes Library: 2002
Deconstructing: The deconstructionist idea, initially associated with French philosophers like Derrida and with literary theory, denies that a “text” has a single interpretation intended by the author, holding rather that the reader constructs its meaning, unearthing conscious or unconscious significations. While the general philosophy is linked with relativism, postmodernism, and social constructivism---positions to which I am highly allergic---one needn’t embrace them to accord validity to the activity of disinterring meanings: ironies, deceptions, and unintended assumptions and twists in an author’s writing. The passage I cited from Berger seems to offer an occasion for creative deconstruction of the statistical kind. I wouldn’t have proposed the exercise if I didn’t suspect we might learn something of relevance to our deep-sea drilling activity…. Please continue to send your ponderings….

*DO stock is nearly at a one-year low! (I surmise a fairly quick trip back up 10 points.)

Friday, December 16, 2011

Done!

The hole in Birnbaum's famous (alleged) proof of the Likelihood Principle has withstood a rather severe test before a knowledgeable (mostly Bayesian-leaning) audience at a conference on controversies in statistical hypothesis testing (Testing2011, Madrid).  Thanks for the various e-mail comments/corrections on the blogposted draft (Dec. 6-7), and to Santiago and Casa de Madrid*!

*and of course to the main conference organizer, David Teira.

Sunday, December 11, 2011

Irony and Bad Faith: Deconstructing Bayesians 1

Sometime in 2006 (shortly after my ERROR06 conference), the trickle of irony and occasional flood of family feuds issuing from Bayesian forums drew me back into the Bayesian-frequentist debates.1 2  Suddenly sparks were flying, mostly kept shrouded within Bayesian walls, but nothing can long be kept secret even there. Spontaneous combustion is looming. The true-blue subjectivists were accusing the increasingly popular “objective” and “reference” Bayesians of practicing in bad faith; the new O-Bayesians (and frequentist-Bayesian unificationists) were taking pains to show they were not subjective; and some were calling the new Bayesian kids on the block “pseudo-Bayesian.” Then there were the Bayesians somewhere in the middle (or perhaps out in left field) who, though they still use the Bayesian umbrella, were flatly denying the very idea that Bayesian updating fits anything they actually do in statistics.3 Obeisance to Bayesian reasoning remained, but on some kind of a priori philosophical grounds. Doesn’t the methodology used in practice really need a philosophy of its own? I say it does, and I want to provide it.

Thursday, December 8, 2011

Tapping through philosophical minefields


Well I finally did it!  After weeks of passing by tap classes at Pineapple Dance Studio in London (a main reason I arranged an apt in Covent Garden to begin with) I successfully made it through a Derek Hartley tap class (“elementary”).   First time since the bizarre knee injury of October (the doctors never did say just when I could go back).  It’s quite amazing to walk in off the street and see he is doing exactly what I have seen him do for years---teaching super-cool jazz-tap routines, telling the same tap jokes, and dancing just as great as ever!  Mind-altering to find myself tapping to a smooth and jazzy “Lullaby of Birdland” again (a favorite).  Limping just a little, but exhilarated.  Now it's back to tapping my way through philosophical minefields, or perhaps, pirouetting (as Christian Robert put it).

I don’t know why some of the pictures on this blog do not appear on some devices---need the Elba crew to get to the bottom of this.  I’ll be traveling back there soon enough….

Wednesday, December 7, 2011

Part II: Breaking Through the Breakthrough* (please start with the Dec. 6 post)

This is a first draft of part II of the presentation begun in the December 6 blogpost.  This completes the proposed presentation. I expect errors, and I will be grateful for feedback! (NOTE: I did not need to actually rip a cover of EGEK to obtain this effect!)

SEVEN: NOW FOR THE BREAKTHROUGH
You have observed y”, the .05-significant result from E”, the experiment with the optional stopping rule, which ended at n = 100.
Birnbaum claims he can show that you, as a frequentist error statistician, must grant that it is equivalent to having fixed n = 100 at the start (i.e., experiment E’).
Reminder:
The (strong) Likelihood Principle (LP) is a universal conditional claim:
If two data sets y’ and y”, from experiments E’ and E” respectively, have likelihood functions that are functions of the same parameter(s) µ and are proportional to each other, then y’ and y” should lead to identical inferential conclusions about µ.
As with conditional proofs generally, we assume the antecedent and try to derive the consequent, or, equivalently, show that a contradiction results whenever the antecedent holds and the consequent does not (a reductio proof).
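To make the stakes concrete, here is a minimal simulation sketch (my own illustration, with assumed normal data; it is not part of Birnbaum's argument). With X_i ~ N(µ, 1), the likelihood depends on the data only through the sample mean and n, so a result from the optional stopping experiment E” that happens to end at n = 100 has a likelihood function proportional to that of the same sample mean under the fixed-n experiment E’. Yet the error probabilities of the two experiments differ dramatically:

import numpy as np

rng = np.random.default_rng(0)

def optional_stopping_rejects(n_max=100, z_cut=1.96):
    """One run of E'': sample X_i ~ N(0, 1) (so H0: mu = 0 is true) and
    stop as soon as the running z-statistic crosses the .05 cutoff,
    or at n_max, whichever comes first."""
    total = 0.0
    for n in range(1, n_max + 1):
        total += rng.standard_normal()
        if abs(total) / np.sqrt(n) >= z_cut:
            return True   # nominally ".05-significant" result found
    return False

trials = 10_000
rate = sum(optional_stopping_rejects() for _ in range(trials)) / trials
print(f"P(reach nominal .05 significance by n = 100 | H0): {rate:.3f}")
# For the fixed-sample experiment E' this probability is .05 by construction;
# under the stopping rule it comes out several times larger.

The estimated rejection rate comes out several times the nominal .05 level, which is exactly why the error statistician refuses to treat y” as equivalent to a fixed-n result, the LP notwithstanding.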

Tuesday, December 6, 2011

Putting the Brakes on the Breakthrough Part I*




I am going to post a FIRST draft (for a brief presentation next week in Madrid).  [I thank David Cox for the idea!] I expect errors, and I will be very grateful for feedback!  This is part I; part II will be posted tomorrow.  These posts may disappear once I've replaced them with a corrected draft.  I'll then post the draft someplace.
If you wish to share queries/corrections, please post a comment or e-mail: error@vt.edu.  (Ignore Greek symbols that are not displaying correctly; I await fixes by the Elbians.) Thanks much!

ONE: A Conversation between Sir David Cox and D. Mayo (June, 2011)
Toward the end of this exchange, the issue of the Likelihood Principle (LP)[1] arose:
COX: It is sometimes claimed that there are logical inconsistencies in frequentist theory, in particular surrounding the strong Likelihood Principle (LP). I know you have written about this; what is your view at the moment?
MAYO: What contradiction?

Friday, December 2, 2011

Getting Credit (or blame) for Something You Don't Deserve (and first honorable mention)

Ruler at the Bottom of Ocean
It was three months ago that I began this blog with “overheard at the comedy hour at the Bayesian retreat”… and we’re near the end of the repertoire of jokes (unless I hear new ones).  This last one, in effect, accuses the frequentist error-statistical account of licensing the following (make-believe) argument after the oil spill in the Gulf of Mexico in 2010:


  Oil Exec: We had highly reliable evidence that H: the pressure was at normal levels on April 20, 2010!

Senator: But you conceded that whenever your measuring tool showed dangerous or ambiguous readings, you continually lowered the pressure, and that the stringent “cement bond log” test was entirely skipped. 

Tuesday, November 29, 2011

If you try sometime, you find you get what you need!

picking up the pieces

Thanks to Nancy Cartwright, a little ad hoc discussion group has formed: “PhilErrorStat: LSE: Three weeks in (Nov-Dec) 2011.”  I’ll be posting related items on this blog, in the column to your left, over its short lifetime. We’re taking a look at some articles and issues leading up to a paper I’m putting together to give in Madrid next month on the Birnbaum-likelihood principle business (“Breaking Through the Breakthrough”) at a conference (“The Controversy about Hypothesis Testing,” Madrid, December 15-16, 2011).  I hope also to get this group’s feedback as I follow through on responses I’ve been promising to some of the comments and queries I’ve received these past few months.  

Our very first meeting already reminded me of an issue Christian Robert raised in his blog about Error and Inference: Is the frequentist (error-statistical) interest in probing discrepancies, and the ways in which statistical hypotheses and models can be false, akin to a Bayesian call for setting out rival hypotheses with prior probability assignments?

Sunday, November 27, 2011

The UN Charter: double-counting and data snooping

John Worrall, 26 Nov. 2011
Last night we went to a 65th birthday party for John Worrall, philosopher of science and guitarist in his band Critique of Pure Rhythm. For 20 or more of those years, Worrall and I have been periodically debating one of the most contested principles in philosophy of science: whether evidence in support of a hypothesis or theory should in some sense be “novel.”

A novel fact for a hypothesis H may be: (1) one not already known, (2) one not already predicted (or counter-predicted) by available hypotheses, or (3) one not already used in arriving at or constructing H. The first corresponds to temporal novelty (Popper), the second, to theoretical novelty (Popper, Lakatos), the third, to heuristic or use-novelty. It is the third, use-novelty (UN), best articulated by John Worrall, that seems to be the most promising at capturing a common intuition against the “double use” of evidence:

If data x have been used to construct a hypothesis H(x), then x should not be used again as evidence in support of H(x).

(Note: Writing H(x) in this way emphasizes that, one way or another, the inferred hypothesis was selected or constructed to fit or agree with data x. The particular instantiation can be written as H(x0).)

The UN requirement, or, as Worrall playfully puts it, the “UN Charter,” is this:

Use-novelty requirement (UN Charter): for data x to support hypothesis H (or for x to be a good test of H), H should not only agree with or “fit” the evidence x, but x itself must not have been used in H's construction.

In practice, the requirement surfaces as a general prohibition against data mining, hunting for significance, tuning on the signal, ad hoc hypotheses, and data peeking, and as a preference for predesignated hypotheses and novel predictions.
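To illustrate why the UN Charter has teeth, here is a minimal simulation sketch (my own toy construction, not Worrall's): hunt through twenty pure-noise predictors for the one best correlated with the outcome, construct H(x) around it, and then “test” H(x) on the very same data. The nominal .05 test rejects far more often than 5 percent of the time, even though every effect is null:

import numpy as np

rng = np.random.default_rng(1)
n_sims, n_obs, n_vars = 2_000, 30, 20
t_cut = 2.048   # two-sided .05 cutoff for t with n_obs - 2 = 28 df

count_sig = 0
for _ in range(n_sims):
    X = rng.standard_normal((n_obs, n_vars))   # twenty candidate predictors
    y = rng.standard_normal(n_obs)             # outcome is pure noise: no real effect
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_vars)])
    best = np.abs(r).argmax()                  # construct H(x): pick the best-fitting predictor
    t = r[best] * np.sqrt((n_obs - 2) / (1 - r[best] ** 2))
    if abs(t) > t_cut:                         # "support" H(x) with the same data x
        count_sig += 1

print(f"Fraction of pure-noise runs where H(x) looks significant: {count_sig / n_sims:.2f}")
# With 20 candidates the hunt succeeds in well over half the runs,
# versus the .05 a predesignated hypothesis would face.

The same data that built H(x) cannot also be made to vouch for it at the advertised error rate; that is the intuition the UN Charter tries to codify.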