The Worst Possible Misery for Everyone is Bad (Reasoning)? – Part 1

[Image: Ninth Circle of Hell, Dante's Inferno. Artist: Paul Gustave Doré]

Sam Harris’s “worst possible misery for everyone” argument (WPME, for short) attempts to defend this conclusion:

Increases/decreases in the well-being of conscious creatures fully determine which states of the world are morally better/worse.

The argument is named for its key premise: “The worst possible misery for everyone is bad.”

In this post, I’ll lay out WPME. As a bonus, I’ll connect it to two more of Sam’s arguments. One is his argument that all value depends on consciousness. The other is his main argument for a scientific theory of objective morality.


Let’s begin with Sam’s own words, excerpted from The Moral Landscape (pages 39—40). You can also watch Sam make WPME.

[U]niversal morality can be defined with reference to the negative end of the spectrum of conscious experience … [Imagine] a state of the universe in which everyone suffers as much as he or she (or it) possibly can. If you think we cannot say this would be “bad,” then I don’t know what you could mean by the word “bad” (and I don’t think you know what you could mean by it either). Once we conceive of “the worst possible misery for everyone,” then we can talk about taking incremental steps toward this abyss … I am saying that a universe in which all conscious beings suffer the worst possible misery is worse than a universe in which they experience well-being. This is all we need to talk about “moral truth” in the context of science. Once we admit that the extremes of absolute misery and absolute flourishing … are different and dependent on facts about the universe, then we have admitted  that there are right and wrong answers to questions of morality.

This excerpt captures the core of WPME. What’s more, the last two sentences give us a sense of Sam’s main argument for his science of morality. Below is a plausible reconstruction of both arguments, plus one more about consciousness as the source of all value. You’ll see some claims not found in the above excerpt that talk about the relationship between values, conscious creatures, and the natural world. These claims—namely, (1), (2), and (6)—are found elsewhere in The Moral Landscape and in this video.

  1. No value/disvalue (good/bad) exists in a world permanently devoid of conscious creatures.
  2. Therefore, All value/disvalue that exists in the world is value/disvalue to (good/bad for) conscious creatures.
  3. A state S of the world in which every conscious creature is maximally miserable is bad.
  4. Therefore, A state T of the world that replaces at least some of the misery in S with the experience of well-being is better than S.
  5. Therefore, Increases/decreases in the well-being of conscious creatures fully determine which states of the world are morally better/worse.
  6. Facts about the natural world, science’s domain of inquiry, fully determine increases/decreases in the well-being of conscious creatures.
  7. Therefore, Facts about the natural world, science’s domain of inquiry, fully determine which states of the world are morally better/worse.

Sam might consider just (3) and (4) to be WPME, whereas I take it to be (3), (4), and (5), since (5) packs more conclusive punch than (4). In fact, (5) opens Sam’s main argument for his science of morality, stated in (5)—(7).

Sam, and readers of The Moral Landscape, might wonder why (5) and (6) don’t refer to the collective well-being of conscious creatures. After all, the goal Sam sets for his science of morality is to “maximize collective well-being.” I choose to leave out “collective” in (5) and, consequently, in (6), based on the principle of charity. In other words, I’m trying to present what strikes me as the best version of Sam’s argument, based on reasoning he appears to follow. Compare (1)—(7) to what Sam has identified as the central argument of The Moral Landscape (see Challenge FAQ, question #1). I’ve enumerated Sam’s claims for easy reference.

(i) Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe. (ii) Conscious minds and their states are natural phenomena, fully constrained by the laws of the universe (whatever these turn out to be in the end). Therefore, (iii) questions of morality and values must have right and wrong answers that fall within the purview of science (in principle, if not in practice). Consequently, (iv) some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life.

Sam makes no mention here of collective well-being. And as best I can tell from reading The Moral Landscape, although Sam endorses maximizing collective well-being, he offers no explicit reason to believe that producing the greatest overall amount of well-being trumps, say, achieving the most equitable distribution, were those two ends to clash.

Now, here’s how (i)—(iv) relate to (1)—(7).

  • (i) combines (2) and (5). I add (1) based on Sam’s intuition that evaluative concepts (good/bad, right/wrong) do not apply in a universe without conscious creatures, an intuition he pumps using an apparent thought experiment about a universe sans consciousness.
  • (ii) matches up with (6).
  • (iii) corresponds to (7).
  • (iv) takes us a step beyond (7), seemingly to emphasize that Sam’s argument, if successful, refutes moral relativism.

Based on this analysis, either (i)—(iv) simply omit WPME or else (i) can be understood to encapsulate (1)—(5). In either case, (1)—(7) strike me as a fair representation of Sam’s overall case for a scientific theory of objective morality. What’s more, (5)—(7) form a deductively valid argument, whereas (i)—(iii) do not. So, I actually may have helped Sam’s case a bit.
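To make that last claim about (5), (6), and (7) concrete, here is a minimal Lean sketch of my own (the names `Determines`, `NaturalFacts`, `WellBeingFacts`, and `MoralRanking` are mine, not Sam’s): if we read “X fully determines Y” as “Y is a function of the X-facts,” then (7) follows from (5) and (6) by nothing fancier than function composition.

```lean
-- A toy rendering, with invented names; read "X fully determines Y"
-- as "Y is a function of the X-facts".
abbrev Determines (X Y : Type) : Type := X → Y

-- Premises (5) and (6) give conclusion (7) by function composition.
example {NaturalFacts WellBeingFacts MoralRanking : Type}
    (p5 : Determines WellBeingFacts MoralRanking)  -- (5)
    (p6 : Determines NaturalFacts WellBeingFacts)  -- (6)
    : Determines NaturalFacts MoralRanking :=      -- (7)
  fun naturalFacts => p5 (p6 naturalFacts)
```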

In my next post, I’ll finish up this two-parter with a critique of WPME. That is, I’ll evaluate (3)—(5). I’ll likely also address (1)—(2), since (2) could be construed as support for (5).

 

Comments (15)

  • I’ll be curious to see what problem you think lurks in 3-5.

    You say: “…as best I can tell from reading The Moral Landscape, although Sam endorses maximizing collective well-being, he offers no explicit reason to believe that producing the greatest overall amount of well-being trumps, say, achieving the most equitable distribution, were those two ends to clash.”

    In The Moral Landscape, Sam claims that it’s an objective moral truth, certified by science, that we should seek “the heights of happiness for the greatest number of people” (p. 28). This is quite different from the uncontroversial claim that increasing well-being above WPME is a moral good, something we don’t need science to certify. No one wants everyone to be miserable since that would include them. The question is whether the fact that everyone desires to flourish in some sense (an “is”) somehow entails Harris’s normative claim that everyone, not just a privileged few, should flourish (an “ought”), and whether science is in a position to ground that entailment.

    • Thanks for commenting, Tom. I’ll be posting part 2 this week.

      About the quote you cite, here’s the full sentence:

      As we come to understand how human beings can best collaborate and thrive in this world, science can help us find a path leading away from the lowest depths of misery and toward the heights of happiness for the greatest number of people.

      Neither in this quote nor in the surrounding context do I see Sam give explicit reasons for accepting a moral obligation to maximize collective well-being. In the quote, he predicts science will eventually help us maximize collective well-being. But he does not say that science has proven or will prove that we should maximize collective well-being.

      The philosopher John Stuart Mill attempted to prove that we should promote the greatest happiness for the greatest number. His proof is generally considered problematic and, in any case, philosophical rather than scientific. For any readers who are unfamiliar with it, it’s available here: http://plato.stanford.edu/entries/mill-moral-political/#ProUti

      • Thanks Ryan. If Sam is only saying that science can help us find our way to a peak in the moral landscape, then there’s nothing to object to in that instrumental conception of the role of science vis a vis morality. But I was under the impression that he’s arguing something stronger: that science can determine what the peaks should be, for instance that science proves we should maximize collective well-being. Here science isn’t just instrumental, but prescriptive. If science really can bridge the is-ought gap, as I think he claims, and dictate moral truths to us (Sam is a moral realist), then we should be able to discern a basic moral norm such as equal rights as something like a natural law in the fabric of the cosmos. But of course we don’t; it’s a hard-won, potentially reversible cultural achievement that builds on our innate moral dispositions.

        In any case I look forward to your diagnosis of what mistake Sam actually makes, since it seems it isn’t the one I’m suggesting.

        • Sorry, one last comment. Science is never prescriptive. Science is diagnostic. It can tell us how something affects the metric we are looking at. It is always up to the people using science to decide if they want to use the data. Also, in reference to your ‘natural law in the fabric of the cosmos’, I have to play the “no such thing” card. We write the laws of physics to describe our observations of the universe. Some hold extremely accurately. Think about Newton’s laws of gravitation, then think about Einstein’s theory. We are not discovering natural laws; we are taking better and better notes and making more and more accurate models. The universe is what it is; it is never what we think it is. Science is a man-made tool. Language is a man-made tool. Math is a man-made tool. Morality is a man-made tool. The question becomes: can it be better informed by science or by intuition or, worse, by superstition?

  • I believe Sam is arguing that there is no is/ought gap: that an ‘ought’ is just a specific form of an ‘is’. He clearly discusses this repeatedly in his lectures, at least. Science itself isn’t self-justifying. You must hold certain values in order to use the scientific method. Morality isn’t self-justifying either; it requires accepting the premise that suffering is bad and well-being is good. Neither is self-justifying. Sam doesn’t argue that science will tell us we should value well-being, but once we accept that premise, science can illuminate how to maximize well-being. We can look at infant mortality, literacy rates, crime statistics, psychological tests, economic factors, opinion polls, brain scan data, on and on. Science has ways to look at the effects of our behaviors, our laws, our governments, our economic systems, etc. You must first accept the premise that suffering is bad and well-being is good; then science comes into play. Science doesn’t have to tell us we ought to value well-being; I thought this was made explicitly clear in his book.

    • It sounds like you’re replying to my Reddit post summarizing my ML challenge essay. You’re right that Sam claims the is/ought gap is illusory. I don’t agree with that claim, but I also don’t deny that he makes it. You’re also right that Sam does NOT make this claim: science–or else its normative foundation–is self-justifying. On the contrary, his view is this: science and all its branches are not foundationally self-justifying in the ways I suggest. I seem to have misread Sam’s use of “self-justifying” in the ML challenge FAQ (discussed in this post). In our exchange (which is nearly done), Sam corrects me on this point. However, the thrust of my critique remains unchanged. He suggests his moral theory (well-being-maximizing consequentialism) is on a par with the value placed on truth, logic, evidence, or even health in the sciences. But I don’t think his moral theory—or any other—has (or perhaps ever could have) the same uncontroversial axiomatic status in science. This is not to say we can’t have a scientifically-informed ethics. Such an ethics is usually not what critics of Sam’s book protest. Rather, critics (including myself) are typically worried that Sam has understated the depth of debate in moral philosophy and overstated the power of science to advance that debate.

      • Point well taken. I agree that Sam overstates science’s role in the debate. I think the best and most significant contribution of his work is in showing that morality can have a firm and solid foundation outside of religion and faith. Some will disagree, but I am sure millions of non-religious people already knew that. I look forward to reading your essay. I’m a Harris fan but really like to see the debate. Congrats on winning the contest!

  • Thanks for this, Mr. Born. I am a fan of Sam Harris and have been anxiously waiting for him to publish his exchange with you and his final assessment of the Moral Landscape Challenge.

    One minor critique: I think 1-4 is invalid; 4 should probably be stated simply as a premise, not a conclusion.

    Cheers,
    Brad

    • Ryan’s fine, Brad. And my exchange with Sam should be published soon.

      About the validity of 1-4, I think 1-2 may be invalid. If (1) is true, then it seems we must say conscious creatures are necessary for the existence of moral value. But contrary to Sam, I don’t think we must say that all moral value reduces to the well-being of conscious creatures.

      About whether (4) should be a conclusion from (3), watch this video shortly after the 12:00 mark. Sam basically says “the moment you admit” (3) “then you have to admit” (4). Because of this apparently inferential phrasing, I went with (3), therefore, (4).

      • Thanks for the video link. I think Harris should better clarify the connection between 3 & 4 (my discussions in the past with critics have usually tripped up on this point). But I also now think, Ryan, that you fudge the principle of charity a little at this point and that 1-4 in your deductive model of Sam’s argument is invalid because premise 3 is not Sam’s premise. Your premise 3 reads:

        A state S of the world in which every conscious creature is maximally miserable is bad.

        The predicate of this premise, “is bad,” is not quite right, though. Sam argues that such a state “is the worst”; the worst possible misery for everyone is “the worst possible outcome”, he says–it is the baddest of bad; so it is not just bad, but the MOST bad. And now, with this minor correction, premise 4 will logically follow, because we no longer have just a set of possible conscious experiences; rather, one admits (on some level) an ordered set(!) of objectively valued better and worse conscious experiences. So I think your argument could be improved as follows:

        A state S of the world in which every conscious creature is maximally miserable is [necessarily the worst of all possible states].
        [Hidden premise] If there exists a state S such that S is (W) the worst of all possible states because it is (M) maximally miserable, then for any state x and for any state y, if x entails (*) less well-being (i.e. more misery) than y, then x is (>) better than x.

        4′. (for some state S)[ W(S) & W(S) iff M(S)] → [(AxAy)(x * y → (x > x))]

        Therefore, a state T of the world that replaces at least some of the misery in S with the experience of well-being is better than S. [ I.e. if S * T, then T > S .]

        Obviously, this is more complicated and less pretty than your argument. (Perhaps you might be able to restate with more clarity than I have). But I think it is also more accurate and eliminates the apparent absurdity your argument attributes to Sam Harris.

        What do you think, Ryan? (Thank you for your time and effort!)

        • I copied and pasted your final symbolization into your original comment (the one I’m replying to) and deleted your other comments, Brad, since those other comments were simply meant to update your symbolization.

          I take your point that Sam says the “worst possible misery for everyone” isn’t just bad; it’s the worst world imaginable. However, I think we can still infer (4) from (3) as written. If (3) assumed that the “worst possible misery for everyone” is the best world imaginable, then we couldn’t infer (4). You can’t do better than the best. But you can, conceivably, do better than bad.

          Still, we might worry that even if (3) faithfully captures the absolute extreme misery Sam has in mind, it doesn’t capture the absolute extreme badness he thinks such misery signifies. To address this worry, I could rewrite (3) more or less as you’ve suggested. Perhaps something like this: “A state S of the world in which every conscious creature is maximally miserable is bad, and no state could be worse.” I suggest a conjunction just to keep a little more of Sam’s voice in there–namely, the “bad.” After all, I’ve already made things more jargon-laden than Sam would ever make them (despite any potential gains in precision). In any case, I think I may just leave (3) as is for simplicity’s sake. As I said, the “bad” leaves room for “better”; it also preserves more of Sam’s own words. Further, the “maximally miserable” gets at much of the extremeness Sam is after.

          About your hidden premise, shouldn’t the final consequent be “y is better than x”, since T will replace y, and S will replace x?

          • About your hidden premise, shouldn’t the final consequent be “y is better than x”, since T will replace y, and S will replace x?

            Yes, that post is still jumbled and not what I originally wrote. Please see HERE for what I originally wrote, which should make more sense.

            Also, I like your premise (3) rewrite, and I appreciate (and think it fair) that you are trying to keep the argument in Sam’s voice with language he uses, “despite any potential gains in precision”. (If there is one critique I think is warranted of Sam’s work, it’s of his refusal to state his arguments more formally–something he ought to do if he wants his work to be influential beyond the pop culture level.) But I agree that the original premise (3) is acceptable “for simplicity’s sake”.

            However, what I have felt compelled to point out is that this “simplicity” comes at a cost: prima facie, it makes Sam’s argument appear invalid. But if the argument is stated more fully (and more complexly), with the terminology unpacked, then it is valid after all.

  • Thanks for the Google Docs link.

    I think your suggestions may help make 3-4 more clearly valid. But I’m not sure it’s prima facie invalid as is. At the very least, I don’t think I’d be distorting or short-changing Sam’s argument if I didn’t unpack it as you suggest. Consider this argument:

    • S is bad.
    • If S is bad, then T (not-S) is better than S.
    • Therefore, T is better than S.

    Think of the conditional as a very rough go at an unstated premise in WPME. It carries us from the “bad” to “better” (though the consequent would be better stated “possibly, some T (not-S) is better than S”).

    Arguably, what’s still missing is something to link “bad” to misery and “better” to well-being. Sam thinks these relations are analytic. For instance, in his book (ch 1, endnote 22), he says it “seems analytically confused” to ask whether universal absolute misery is good. Maybe some analytic bridge principles are needed then? I’m not sure what they’d look like. Here are some simple ideas that seem to serve WPME.

    • S is bad iff S entails more misery than well-being.
    • T is better than S iff T replaces at least some of the misery in S with the experience of well-being.

    I think we’d also need a description of the natural states that constitute misery and well-being, since the latter terms already have strong valences that, presumably, Sam would say reduce to terms that are more neutral (e.g., neural terms). Still, working with what I’ve now proposed, we could construct this argument:

    1. A state S of the world in which every conscious creature is maximally miserable is bad.
    2. If S is bad, then state T (not-S) of the world is better than S.
    3. Therefore, state T (not-S) of the world is better than S.
    4. T is better than S iff T replaces at least some of the misery in S with the experience of well-being.
    5. Therefore, T replaces at least some of the misery in S with the experience of well-being.
    6. Therefore, T replaces at least some of the misery in S with the experience of well-being, and T is better than S.

    This argument is valid, and the main conclusion is nearly the same as (4) in my post. Here’s the form.

    1. P.
    2. If P, then Q.
    3. Therefore, Q.
    4. Q iff R.
    5. Therefore, R.
    6. Therefore, R and Q.
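
    If anyone wants that validity claim checked mechanically, here is a minimal sketch in Lean (my own; the hypothesis names h1, h2, and h4 are invented and track lines 1, 2, and 4 of the form above).

    ```lean
    -- A quick sanity check that the form is valid: from P, P → Q, and
    -- Q ↔ R, we get Q, then R, and finally R ∧ Q.
    example {P Q R : Prop} (h1 : P) (h2 : P → Q) (h4 : Q ↔ R) : R ∧ Q :=
      have hq : Q := h2 h1     -- 3. Therefore, Q.
      have hr : R := h4.mp hq  -- 5. Therefore, R.
      ⟨hr, hq⟩                 -- 6. Therefore, R and Q.
    ```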

    The key additions are the conditional and the biconditional. But Sam seems to think those claims are built right into the meanings of “bad” and “better,” such that the added premises don’t actually add much of anything—except perhaps, as you’ve been concerned, validity.

    Lastly, I still don’t think we have to specify that state S is the worst possible state. If we’re concerned about opening up an “ordered set(!) of objectively valued better and worse conscious experiences,” specifying that S is the most miserable state possible gives us nowhere to go but up regarding well-being. And saying that S is at least bad leaves us room to make things better.
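
    To put that “nowhere to go but up” point a bit more concretely, here is a toy Lean sketch of my own (the World structure and the bare number standing in for well-being are inventions purely for illustration): once S sits at the bottom of the well-being scale, any state whose well-being differs from S’s at all comes out strictly better than S.

    ```lean
    -- A crude model: a world is just an amount of well-being.
    structure World where
      wellBeing : Nat

    -- If S has the least well-being of any state ("nowhere to go but up"),
    -- then any state T whose well-being differs from S's is better than S.
    example (S T : World)
        (hWorst : ∀ w : World, S.wellBeing ≤ w.wellBeing)
        (hDiff  : T.wellBeing ≠ S.wellBeing) :
        S.wellBeing < T.wellBeing :=  -- i.e., T is better than S
      Nat.lt_of_le_of_ne (hWorst T) (Ne.symm hDiff)
    ```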

  • Ryan, this is good work. Thank you. I think I have, as a result, a better understanding of what you meant, and I think we are now on the same page—and largely in agreement—on the topic of your formal presentation of Sam’s argument.

    Thanks again! I will continue to think this over 🙂