What does it mean to say that I’ve given someone a 1 in 1 million risk of death? Is he now just 99.9999% alive? Is he 0.0001% dead? Is either a "loss"? Must we await the verdict of the Fates to decide the reasonableness of my conduct? If so, why does some other agency decide whether an act in the past was good or bad?

What does it mean to say that I’ve given one million customers a 1 in 1 million risk of death? Is my conduct judged myopically on the basis of the one who died; or, shall we consider the 999,999 who wound up with a useful product and suffered no harm? And how is my conduct to be judged? Shall we add up all the good, subtract out the bad (the ledger of consequences having been audited by the trial court), and have the jury decide whether my decision-making was, it being Monday morning after all, "good" in light of the final tally?

If 1 in 1 million is too risky then so is getting out of bed in the morning. Yet 1 in 1 million, spread across some 300 million Americans, is 300 lives nonetheless. And if 1 in 1 million is a reasonable risk, why isn’t 1 in 100,000, or 1 in 1,000, or 1 in 10? Where do we draw the line, and why?
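The arithmetic underlying these figures is just expected value: a per-person risk multiplied by the number of people exposed. A minimal sketch (the 300 million population figure is the rough one used above, not a current census number):

```python
def expected_deaths(population: int, one_in: int) -> float:
    """Expected number of deaths when each of `population` people
    faces a 1-in-`one_in` risk of death."""
    return population / one_in

US_POPULATION = 300_000_000  # rough figure assumed in the text

# A 1-in-1-million risk imposed on every American:
print(expected_deaths(US_POPULATION, 1_000_000))   # 300.0

# The same risk imposed on one million customers:
print(expected_deaths(1_000_000, 1_000_000))       # 1.0

# The thresholds the questions above ask about:
for one_in in (1_000_000, 100_000, 1_000, 10):
    n = expected_deaths(US_POPULATION, one_in)
    print(f"1 in {one_in:>9,}: {n:>12,.0f} expected deaths")
```

The point of the sketch is only that the same per-person risk, judged "trivial" individually, scales linearly with exposure; it takes no position on where the reasonableness line should fall.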

Does it, or should it, make any difference whether I actually knew the person on whom I imposed a risk? Does the answer change if the risk materializes? Does it, or should it, make any difference whether I actually estimated the risk before imposing it? Why do jurors punish the diligently knowledgeable while being far less wrathful towards the consciously ignorant? What, if anything, should the law do about it?

Ultimately, how should courts deal with the risks we impose on each other in this world of inevitable risks? A very good discussion can be found in "Statistical Knowledge Deconstructed".

I have one quibble and a few takes. First, the effort seems sometimes to be aimed at reconciling a Bayesian decision-making approach ("degrees of belief" tending to sound rather appealingly deontological to me) with a consequentialist ex post assessment of the ultimate utility of an act. Consequentialism isn’t generally thought to be informative on the ex ante side of decision-making – thus my objection to "… I mean to endorse an epistemic and (thusly) Bayesian conception of risk, not a frequentist conception". Second, his comments about cost/benefit analyses are dead on. Companies are abandoning the process and adopting "Nobody gets hurt, ever!" policies instead. One has to wonder about a legal system that advantages willful ignorance. Third, his suggestion that we let risk more openly inform determinations about where an act falls along the intentional – reckless – negligent – non-negligent spectrum would be especially helpful in mass tort cases in which risks are widely distributed.

Finally, we are in the midst of a scientific revolution in which the products of biological systems are being discovered to be unpredictable and invariably greater than the sum of their parts. Emergent phenomena arising out of vastly complex systems mean that the balance sheets needed to make a consequentialist assessment of an act can never be closed, nor the credits and debits ever intelligently summed. Perhaps then, like the earth’s most ancient and successful organisms, we ought to have rules, or principles, as our guides rather than approaching every problem ad hoc. In that case, knowingly, or willfully ignorantly, imposing a significant risk on your fellow man that manifests might be such a rule for liability.