In my essay for Sam Harris’s Moral Landscape Challenge, I assumed that serious competing theories of morality and value exist. Sam rejects this assumption. He contends that the only serious ethical theory is the one he accepts—consequentialism (explained after the jump).
Sam also rejects my assumption that ethics must be prescriptive. He says an ethical theory doesn’t have to impose any “shoulds” or “oughts” on us, such as “one ought to maximize collective well-being.” It only has to tell us what is morally good or bad. And Sam believes science can do so objectively (if we assume that “the worst possible misery for everyone is bad”).
In this final reply to Sam, I argue against his defense of consequentialism and his rejection of moral obligations.
- Consequentialism and its competitors (200 words)
- Sam Harris’s three defenses of consequentialism
- Sam Harris’s defense of maximizing collective well-being—and why it’s not obligatory
Consequentialism and its competitors
What is consequentialism? It’s an answer to this question: “What makes something morally right or morally wrong?” Consequentialism replies: “The consequences. Nothing more.” For instance, what makes giving to UNICEF morally right? A donation improves the lives of needy children (and likely doesn’t worsen yours).
Sam believes that consequences alone dictate what’s morally right or wrong. In his view, whatever maximizes collective well-being is morally right and whatever decreases it is morally wrong. In my last post, I called this idea welfare-maximizing consequentialism. It’s much like (and often labeled) utilitarianism, an ethical theory that’s hundreds of years old.
Sam doesn’t like to use either label. He rejects the term “consequentialism” in particular because it implies that an ethical theory might be guided by something other than consequences. Two standard proposals for such a theory are deontology and virtue (or aretaic) ethics. Deontological theories set strong or absolute duties, such as “never kill an innocent person.” Virtue ethics says that being moral requires developing moral skill through practice—emulating virtuous people just as you’d try to emulate virtuoso musicians.
Sam Harris’s three defenses of consequentialism
Sam gives at least three reasons to fold deontology and virtue ethics into consequentialism.
A moral theory must not make everyone miserable
Sam claims that if duties or virtues tended to make all of us miserable, we’d reject them as a basis for morality. Sam writes:
[I]f the categorical imperative [an example of deontology] reliably made everyone miserable, no one would defend it as an ethical principle. Similarly, if virtues such as generosity, wisdom, and honesty caused nothing but pain and chaos, no sane person could consider them good.
Here Sam proposes what I’ll call the misery rule for the correct moral theory.
- The Misery Rule: The correct moral theory must not reliably make everyone miserable.
At best, this rule sets up a single low hurdle for a moral theory to clear. It does not establish Sam’s core claims. For instance, the misery rule doesn’t show that the correct theory maximizes collective well-being. After all, a theory that leads to the persistent flourishing of at least some individuals (e.g., a socio-economic elite) doesn’t reliably make everyone miserable.
Or suppose a theory dictates that a better world awaits us in which everyone’s life is worse—indeed, barely worth living. If recommending such a world violates the misery rule, then the misery rule may invalidate welfare-maximizing consequentialism. (See below: Parfit’s Repugnant Conclusion is like Zeno’s Paradox of Motion.)
Moral principles are valued solely as a means to well-being
Sam believes that whatever’s morally good or bad about duties or virtues is found in their consequences for conscious creatures. He writes:
Either [duties and virtues] have consequences for the minds involved, or they have no consequences.
True. But this tautology doesn’t get us far. Actions that accord with duties or virtues will (like all actions) have consequences. Why does Sam believe duties and virtues get their value solely from their consequences? He says he doesn’t believe “any sane person” can value “abstract principles and virtues…independent of the ways they affect our lives.” But insinuating that it’s insane to oppose your view isn’t an argument. Arguments attempt to convince others, not diagnose them.
Sam does attempt to convince us that principles of justice get their value solely from increasing well-being. He targets the views of philosopher John Rawls. Rawls talked about justice as fairness in social and economic systems. He didn’t believe justice exists solely as a means of maximizing collective well-being. He believed justice requires us to protect the well-being of the least advantaged (e.g., persons born poor), even if by doing so we fail to promote the greatest total (or average) welfare.
Sam argues that concerns about justice pertain solely to its effects on conscious experience. He writes:
These concerns predate our humanity. Do you think that capuchin monkeys are worried about fairness as an abstract principle, or do you think they just don’t like the way it feels to be treated unfairly?
In the linked video, two monkeys (whom I’ll call “Pu” and “Chin”) receive different rewards for performing the same task for an experimenter (“Dr. B”). First, Pu gets a cucumber for giving Dr. B a rock. Then Chin gets a much tastier grape for doing the same. Pu sees Chin get the grape. Pu gets upset and rejects her next cucumber reward. Apparently, Pu has a sense of fairness. She demands equal pay for equal work!
Sam’s capuchin-inspired argument seems to be this: some creatures value justice but cannot understand it as a rational principle; thus, some creatures must value justice solely for its good effects. However, this argument doesn’t show that humans must value justice solely for its good effects. After all, humans can understand justice as a rational principle. Thus, Sam still hasn’t shown that we can’t value it as such. Sam has only asserted that we can’t or don’t (unless we’re insane).
Critics set “arbitrary limits” on which consequences to consider
Sam says that philosophical critics of consequentialism don’t consider all the relevant consequences.
According to Sam, the entire basis for distinguishing right from wrong is found in the conscious experiences that result (even just potentially) from what we do, what we mean to do, and what we tend to do (i.e., our actions, intentions, and dispositions).
Sam issues this challenge:
[C]ome up with an action that is obviously right or wrong for reasons that are not fully accounted for by its (actual or potential) consequences.
If there were an action that obviously met this challenge to the satisfaction of a committed consequentialist like Sam (who thinks earnest rejection of consequentialism is insane), I doubt we’d be debating consequentialism.
Moreover, Sam’s challenge sidesteps the criticisms consequentialism tends to face. For instance, thought experiments like “The Inhospitable Hospital” and “The Trolley Problem” suggest that consequentialism can violate our basic intuitions about right and wrong. In these scenarios, consequentialism appears to permit killing one innocent (e.g., by forcing her in front of a train) to save five others. Sam often uses thought experiments (“Imagine that…”) and puts great weight on intuitions (“the worst possible misery for everyone is bad”). So, by Sam’s own methods, the criticisms in question should carry some weight.
The trouble for consequentialism, I propose, is that one of its central strengths is also a weakness.
- Strength: Consequentialism fits with the intuitive idea that the right thing to do is produce the best overall outcome—for you, your children, and others.
- Weakness: The “best overall outcome” most easily translates into the greatest overall welfare—an aggregated mass of moral “goodness” that’s impartial and impersonal. An individual’s well-being matters only insofar as it swells this mass. If the collective good is maximized despite or even because of the suffering of innocents (you, your children, whomever), then consequentialism approves of their suffering.
Indeed, overall welfare can rise even as every single person’s well-being falls. This paradox, identified by philosopher Derek Parfit, is known as “The Repugnant Conclusion.”
Sam Harris’s defense of maximizing collective well-being—and why it’s not obligatory
Sam addresses The Repugnant Conclusion in his response to my essay. He maintains that maximizing collective well-being is good—indeed, best. What’s more, he insists that his moral theory does not obligate us to increase overall welfare.
Parfit’s Repugnant Conclusion is like Zeno’s Paradox of Motion
In The Moral Landscape, Sam captures the repugnance of The Repugnant Conclusion when he writes:
If we are concerned only about total welfare, we should prefer a world with hundreds of billions of people whose lives are just barely worth living to a world in which 7 billion of us live in perfect ecstasy.
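With illustrative numbers (these figures are my own, not Sam’s), the totals behind this comparison work out as follows:

```latex
% Hypothetical totals: many lives barely worth living can outweigh fewer ecstatic lives.
% 7 billion people, each at a well-being level of 100:
7 \times 10^{9} \times 100 = 7 \times 10^{11}
% versus a trillion people, each at a level of 1 (barely worth living):
10^{12} \times 1 = 10^{12} > 7 \times 10^{11}
```

If total welfare is all that counts, the second world comes out “better”—which is precisely the repugnance.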
The picture’s no prettier when we envision the potential peaks of average well-being. In fact, Sam concluded that “we cannot rely on a simple summation or averaging of welfare as our only metric.”
Sam now proposes that The Repugnant Conclusion is a mere puzzle, comparable to Zeno’s Paradox of Motion. In his response, he writes:
How do any of us get to the coffee pot in the morning if we must first travel half the distance to it, and then half again, ad infinitum? … Once mathematicians showed us how to sum an infinite series, [this] problem vanished. Whether or not we ever shake off Parfit’s paradoxes, there is no question that the limit cases exist: The worst possible misery for everyone really is worse than the greatest possible happiness.
Contrary to what Sam suggests, the “limit cases” for collective welfare aren’t as plain as the position of your coffee pot. The point of Parfit’s paradox is that at least the upper limit Sam proposes is in question.
In the motion paradox, you traverse an infinitely divisible path, yet you do so in a finite number of steps. In the maximization paradox, we achieve universal absolute flourishing, yet a “better” world (i.e., a higher peak of collective well-being) emerges in which everyone’s life turns out to be worse.
The motion paradox raises this question:
- How do we get from point A to point B (the coffee pot) if, by virtue of space’s divisibility, the path between never ends?
The maximization paradox raises this question:
- Why do we stop at absolute flourishing for all (Sam’s “point B”) if, by virtue of well-being’s summability, the path beyond leads to higher and higher peaks?
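The contrast between the two paradoxes can be sketched formally (my own illustration, not Parfit’s or Sam’s). Zeno’s infinite series of half-distances converges to a finite sum, but a welfare total need not converge: it can grow without bound even as each person’s share shrinks toward zero.

```latex
% Zeno: the half-distances sum to a finite total, so the walk ends.
\sum_{n=1}^{\infty} \frac{1}{2^{n}} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1
% Welfare: with N people each at level w(N) = 1/\sqrt{N}, the total
% W(N) = N \cdot w(N) = \sqrt{N} \to \infty  while  w(N) \to 0.
```

So summing the series dissolves Zeno’s paradox, but no analogous summation dissolves Parfit’s: the “path beyond” has no peak.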
Zeno’s paradox, introduced in the 5th century BC, has been resolved. But Parfit’s paradox, introduced in 1984, has not. Maybe with time someone will find an answer that preserves Sam’s two basic claims:
- The experience of well-being is the sole source of all moral value.
- The best possible world hosts the greatest amount of well-being.
For now, these premises appear to lead to this cringeworthy conclusion: a massive pile of people on the brink of misery morally surpasses a smaller population of people at the peak of well-being.
Still, I haven’t addressed Sam’s “point A”—that is, “the worst possible misery for everyone.” Sam assumes universal absolute misery is bad. From this basic assumption, he infers that more well-being must be better. He also seems to infer the two basic claims I just bulleted. But I don’t think inferring either of those claims is warranted.
Imagine all heat in the universe is replaced with absolute cold. Consequently, death replaces life. But it doesn’t follow that heat is the sole source of all life. Chemical building blocks, for instance, are also required.
In “the worst possible misery for everyone,” Sam imagines all well-being in the universe is replaced with absolute misery. In his view, “bad” thus replaces “good.” But it doesn’t follow that well-being is the sole source of all goodness. Other fundamental moral values may exist.
What other values might we find in morality’s foundation? Consider Sam’s own imaginings. Sam envisions that both:
- (i) Every person gets an equal share of well-being.
- (ii) Maximizing collective well-being requires maximizing personal well-being.
For instance, Sam imagines “every person” gains “a little” well-being at the press of a button. Similarly, the opposite of “the worst possible misery for everyone” is absolute flourishing for “all of us.” Yet Sam’s moral theory seems to contradict (i) and (ii). An even distribution of well-being isn’t required for collective maximization. After all, everyone’s well-being is lumped together. Moreover, because of the impersonal nature of well-being’s aggregation, we’re plagued by Parfit’s paradox.
If Sam assumes (i) and (ii), it seems he’ll value more than just well-being. He’ll value distributive justice, particularly some principle of equality of well-being. He’ll also value persons such that their individual well-being matters more than maximizing overall welfare.
No one’s obligated to be moral, just as no one’s obligated to be rational
Sam acknowledges that his theory must establish moral facts, but he doesn’t think that it must establish any moral obligations. He writes:
There need be no imperative to be good—just as there’s no imperative to be smart or even sane.
Here, I believe, is Sam’s basic idea. We make moral claims. For instance, we say “free markets are good” or “you should give to charity.” If these claims are objectively true or false, then objective features of the world make them so. In Sam’s view, scientifically discoverable features of conscious experience provide the objective basis for morality. And that basis is enough, he says, “to show moral truths exist.” We can deny or fail to understand these truths, just like any other that science might reveal. If we do, we’ll be bad, stupid, or irrational. But the facts will still be the facts.
Sam is at least half-right. If objective moral truths exist, they don’t depend on our attitudes or opinions, any more than the truth about Earth’s orbit does. Medieval Europeans affirmed geocentrism; they were wrong. Early Americans approved of slavery; they were wrong. Being objectively wrong in these two cases means being contradicted by the facts, physical and moral, respectively.
But there’s a difference between these two errors. Planetary motion is an impersonal phenomenon that we discover. When we determine that geocentrism is incorrect, we don’t rearrange the solar system. We don’t prescribe that the Earth orbit the Sun (given that heliocentrism is correct). Rather, we describe the solar system’s movements more accurately. In contrast, slavery is a personal phenomenon that we institute. When we determine that slavery is morally bad, Sam would say we don’t rearrange the moral landscape. We see its contours more clearly. Even if he’s right, we also prescribe that slavery cease. Moral facts issue a call to action. They entail that we should or should not rearrange our social system. Might we fail to do so? Yes. But the imperative to do so seems to remain.
I’d again like to thank Sam Harris and Russell Blackford for engaging me in The Moral Landscape Challenge. I hope our exchange has set many minds in motion on the topic of science, philosophy, and moral truth.