Wikipedia says:

“Trollope’s downfall in the eyes of the critics stemmed largely from this volume.[51][52] Even during his writing career, reviewers tended increasingly to shake their heads over his prodigious output, but when Trollope revealed that he strictly adhered to a daily writing quota, he confirmed his critics’ worst fears.[53] The Muse, in their view, might prove immensely prolific, but she would never ever follow a schedule.[54] Furthermore, Trollope admitted that he wrote for money; at the same time he called the disdain of money false and foolish. The Muse, claimed the critics[who?], should not be aware of money.”

It’s sad that this author’s legacy could be smeared by an attitude like this. There were other, much more famous, authors in this period who also wrote with a lot of discipline and needed (sometimes desperately) the income from their writing. Did they really think that true artists, like true aristocrats, do not dirty their hands with work?

I recently stumbled on another English author I had never heard of: Matthew Lewis. Around 1800 he wrote a Gothic masterpiece called The Monk. What makes it especially amazing is that it was his first and only novel, and he finished writing it by his early twenties. He was put on trial for blasphemy, and later editions were censored. If you’re already considered the most virtuous of the virtuous monks in Madrid, where can you go from there? Down. Way, way down. The Penguin Classics version is unabridged and, unlike the Oxford World’s Classics edition, does not capitalize all pronouns.

– On one reading, the probability of heads is either 1 or 0, so definitely different from 1/2 (this is the generating probability, or objective bias);

– on another reading it is 1/2 (subjective probability).

The answer 1/2 can be obtained by collapsing the ‘higher-order probabilities’ into an overall subjective probability: the subjective probability that the bias is 1 equals the subjective probability that the bias is 0 (each is 1/2), so we obtain 1/2 * 1 + 1/2 * 0 = 1/2. (And likewise for any symmetric distribution of probabilities over possible bias values.)
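The collapse is just a weighted average over possible biases. A minimal Python sketch: the two-point prior is the one from the example, and the uniform grid is an added illustration of the general symmetric case.

```python
# Two-point prior: the bias is either 1 (always heads) or 0 (always
# tails), each with subjective probability 1/2.
prior = {1.0: 0.5, 0.0: 0.5}
p_heads = sum(weight * bias for bias, weight in prior.items())
print(p_heads)  # 0.5

# The same collapse for any symmetric prior over biases, e.g. a
# uniform grid of 101 possible bias values on [0, 1]:
grid = [i / 100 for i in range(101)]
p_heads_sym = sum(bias * (1 / 101) for bias in grid)
print(round(p_heads_sym, 10))  # 0.5
```

Any prior that is symmetric under swapping bias b with 1 − b collapses to 1/2 in the same way.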

In the example, after the update on the new information, the new subjective probability is still 1/2. If someone finds this weird, I would clarify the different kinds of probability at play here (as above).

It may be helpful to consult Laplace on this (Essay, chapter 7).

He also explains that more interesting things may happen when you ask, e.g., about the probability of two consecutive heads (see bottom of p. 34 – top of p. 35).
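Laplace’s point can be checked numerically: the probability of two consecutive heads is E[p^2], which exceeds (E[p])^2 = 1/4 whenever the bias is uncertain. A sketch using the two-point prior from the example above and, for comparison, a uniform prior on [0, 1] (the uniform prior is my added illustration, not part of the original setup):

```python
import random

# Two-point prior from the example: bias 1 or 0, each with weight 1/2.
prior = {1.0: 0.5, 0.0: 0.5}
p_hh = sum(weight * bias**2 for bias, weight in prior.items())
print(p_hh)  # 0.5, not 1/4: the flips are not independent given the uncertainty

# Uniform prior on [0, 1]: E[p^2] = 1/3, again above 1/4.
random.seed(1)
n = 100_000
est = sum(random.random() ** 2 for _ in range(n)) / n
print(est)  # approximately 1/3
```

The first flip coming up heads is evidence that the bias is high, which raises the probability of a second head; that is the “more interesting” behavior.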

Guess I am with Laplace. I think the probability of heads for coin 1 is 1/2 in the flip you told us about and in any other flip. This is the only objective probability we’re given by the setup. Before the toss I would assign equal credence to coin 2 coming up heads or tails, since you stipulated that I didn’t know anything about p. Even if you had said “coin two falls heads with probability p^2” I would still give equal credence to heads or tails, because you said I truly did not know anything about p. It seems to me that with no information on p it doesn’t even matter that it is a coin you can flip more than once! And of course after learning the result I would still give equal credence to heads or tails for coin 2. No update necessary.

If coin 1 instead were a coin with a known probability 1/4 of coming up heads, then I would update: after learning that the two tosses came up the same, my credence in heads for coin 2 in that particular toss would have decreased.

In both cases, I am not making a mathematical statement; it is just something about how I think, and you should feel free to disagree with me. If you do disagree, I hope we can meet and have a betting game based on this sometime (although mysterious unknown real numbers p about which we know we don’t know anything are hard to come by in practice).

Sure, it seems very easy to misuse the POI (as shown by the example of the square in the thing you linked to; in fact, in the above example we would run into exactly this problem if we tried to partition the space [0, 1] that the p’s lie in), but maybe this is one of those cases where it works fine.
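The misuse is easy to reproduce: the POI gives different answers depending on which parametrization you declare “uniform”. A small illustration (the choice of p versus p^2 as the “unknown” is mine, analogous to the side-versus-area version of the square example):

```python
import random

random.seed(0)
n = 100_000

# Prior 1: p itself uniform on [0, 1]  ->  expected bias E[p] = 1/2.
est_p = sum(random.random() for _ in range(n)) / n

# Prior 2: p**2 uniform on [0, 1]  ->  p = sqrt(u), so E[p] = 2/3.
est_sqrt = sum(random.random() ** 0.5 for _ in range(n)) / n

print(est_p, est_sqrt)  # roughly 0.5 versus roughly 0.667
```

Being “indifferent” about p and being “indifferent” about p^2 are incompatible priors; only the heads/tails symmetry of the original example singles out 1/2.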

As Aaron says, there’s a frequentist version of the problem (repeat the game many times, keeping only those coin tosses where the two coins agree), which seems to make perfect sense. And the frequentist probability that, in those trials, coin 1 comes up heads is surely p.
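That frequentist reading is easy to simulate. A sketch under an assumed value p = 0.3 (any value works; p is exactly the quantity the original problem says is unknown):

```python
import random

random.seed(42)
p = 0.3        # assumed bias of coin 2, chosen only for this demo
trials = 200_000
agree = heads1_when_agree = 0
for _ in range(trials):
    c1 = random.random() < 0.5   # coin 1 is fair
    c2 = random.random() < p     # coin 2 comes up heads with probability p
    if c1 == c2:                 # keep only the tosses where the coins agree
        agree += 1
        heads1_when_agree += c1
print(heads1_when_agree / agree)  # close to p = 0.3
```

The conditioning works out analytically too: P(agree) = (1/2)p + (1/2)(1 − p) = 1/2, and P(coin 1 heads and agree) = (1/2)p, so the conditional frequency is p.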

Wouldn’t it be “weirder” if the Bayesian version of the problem came up with a DIFFERENT answer?

So was it similarly weird for people to have talked about the branching ratio of the Higgs (the relative probability for the Higgs to decay into various different final states) as a (somewhat complicated) function of the Higgs mass, BEFORE they knew what the Higgs mass was?

How is the unknown parameter (m_Higgs) which entered into THAT probability distribution different from the unknown parameter (p) which enters into THIS one?

Does NOT giving the parameter a name change how weird (or not weird) it is to talk about various probabilities being functions of that parameter?
