The Fight for Moral Truth

The Peak of Haplessness

In my essay for Sam Harris’s Moral Landscape Challenge, I assumed that serious competing theories of morality and value exist. Sam rejects this assumption. He contends that the only serious ethical theory is the one he accepts—consequentialism (explained after the jump).

Sam also rejects my assumption that ethics must be prescriptive. He says an ethical theory doesn’t have to impose any “shoulds” or “oughts” on us, such as “one ought to maximize collective well-being.” It only has to tell us what is morally good or bad. And Sam believes science can do so objectively (if we assume that “the worst possible misery for everyone is bad”).

In this final reply to Sam, I argue against his defense of consequentialism and his rejection of moral obligations.


Consequentialism and its competitors

What is consequentialism? It’s an answer to this question: “What makes something morally right or morally wrong?” Consequentialism replies: “The consequences. Nothing more.” For instance, what makes giving to UNICEF morally right? A donation improves the lives of needy children (and likely doesn’t worsen yours).

Sam believes that consequences alone dictate what’s morally right or wrong. In his view, whatever maximizes collective well-being is morally right and whatever decreases it is morally wrong. In my last post, I called this idea welfare maximizing consequentialism. It’s much like (and often labeled) utilitarianism, an ethical theory that’s hundreds of years old.

Sam doesn’t like to use either label. He rejects the term “consequentialism” in particular because it implies that an ethical theory might be guided by something other than consequences. Two standard proposals for such a theory are deontology and virtue (or aretaic) ethics. Deontological theories set strong or absolute duties, such as “never kill an innocent person.” Virtue ethics says that being moral requires developing moral skill through practice—emulating virtuous people just as you’d try to emulate virtuoso musicians.

Sam Harris’s three defenses of consequentialism

Sam gives at least three reasons to fold deontology and virtue ethics into consequentialism.

A moral theory must not make everyone miserable

Sam claims that if duties or virtues tended to make all of us miserable, we’d reject them as a basis for morality. Sam writes:

[I]f the categorical imperative [an example of deontology] reliably made everyone miserable, no one would defend it as an ethical principle. Similarly, if virtues such as generosity, wisdom, and honesty caused nothing but pain and chaos, no sane person could consider them good.

Here Sam proposes what I’ll call the misery rule for the correct moral theory.

  • The Misery Rule: The correct moral theory must not reliably make everyone miserable.

At best, this rule sets up a single low hurdle for a moral theory to clear. It does not establish Sam’s core claims. For instance, the misery rule doesn’t show that the correct theory maximizes collective well-being. After all, a theory that leads to the persistent flourishing of at least some individuals (e.g., a socio-economic elite) doesn’t make everyone routinely miserable.

Or suppose a theory dictates that a better world awaits us in which everyone’s life is worse—indeed, barely worth living. If recommending such a world violates the misery rule, then the misery rule may invalidate welfare maximizing consequentialism. (See below: Parfit’s Repugnant Conclusion is like Zeno’s Paradox of Motion).

Moral principles are valued solely as a means to well-being

Sam believes that whatever’s morally good or bad about duties or virtues is found in their consequences for conscious creatures. He writes:

Either [duties and virtues] have consequences for the minds involved, or they have no consequences.

True. But this tautology doesn’t get us far. Actions that accord with duties or virtues will (like all actions) have consequences. Why does Sam believe duties and virtues get their value solely from their consequences? He says he doesn’t believe “any sane person” can value “abstract principles and virtues…independent of the ways they affect our lives.” But insinuating that it’s insane to oppose your view isn’t an argument. Arguments attempt to convince others, not diagnose them.

Sam does attempt to convince us that principles of justice get their value solely from increasing well-being. He targets the views of philosopher John Rawls. Rawls talked about justice as fairness in social and economic systems. He didn’t believe justice exists solely as a means of maximizing collective well-being. He believed justice requires us to protect the well-being of the least advantaged (e.g., persons born poor), even if by doing so we fail to promote the greatest total (or average) welfare.

Sam argues that concerns about justice pertain solely to its effects on conscious experience. He writes:

These concerns predate our humanity. Do you think that capuchin monkeys are worried about fairness as an abstract principle, or do you think they just don’t like the way it feels to be treated unfairly?

In the linked video, two monkeys (whom I’ll call “Pu” and “Chin”) receive different rewards for performing the same task for an experimenter (“Dr. B”). First, Pu gets a cucumber for giving Dr. B a rock. Then Chin gets a much tastier grape for doing the same. Pu sees Chin get the grape. Pu gets upset and rejects her next cucumber reward. Apparently, Pu has a sense of fairness. She demands equal pay for equal work!

Sam’s capuchin-inspired argument seems to be this: some creatures value justice but cannot understand it as a rational principle; thus, some creatures must value justice solely for its good effects. However, this argument doesn’t show that humans must value justice solely for its good effects. After all, humans can understand justice as a rational principle. Thus, Sam still hasn’t shown that we can’t value it as such. Sam has only asserted that we can’t or don’t (unless we’re insane).

Critics set “arbitrary limits” on which consequences to consider

Sam says that philosophical critics of consequentialism don’t consider all the relevant consequences.

According to Sam, the entire basis for distinguishing right from wrong is found in the conscious experiences that result (even just potentially) from what we do, what we mean to do, and what we tend to do (i.e., our actions, intentions, and dispositions).

Sam issues this challenge:

[C]ome up with an action that is obviously right or wrong for reasons that are not fully accounted for by its (actual or potential) consequences.

If there were an action that obviously met this challenge to the satisfaction of a committed consequentialist like Sam (who thinks earnest rejection of consequentialism is insane), I doubt we’d be debating consequentialism.

Moreover, Sam’s challenge sidesteps the criticisms consequentialism tends to face. For instance, thought experiments like “The Inhospitable Hospital” and “The Trolley Problem” suggest that consequentialism can violate our basic intuitions about right and wrong. In these scenarios, consequentialism appears to permit killing one innocent (e.g., by forcing her in front of a train) to save five others. Sam often uses thought experiments (“Imagine that…”) and puts great weight on intuitions (“the worst possible misery for everyone is bad”). So it seems the criticisms in question should have some merit.

The trouble for consequentialism, I propose, is that one of its central strengths is also a weakness.

  • Strength: Consequentialism fits with the intuitive idea that the right thing to do is produce the best overall outcome—for you, your children, and others.
  • Weakness: The “best overall outcome” most easily translates into the greatest overall welfare—an aggregated mass of moral “goodness” that’s impartial and impersonal. An individual’s well-being matters only insofar as it swells this mass. If the collective good is maximized despite or even because of the suffering of innocents (you, your children, whomever), then their suffering is approved.

Indeed, overall welfare can rise even as every single person’s well-being falls. This paradox, identified by philosopher Derek Parfit, is known as “The Repugnant Conclusion.”
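The arithmetic driving Parfit’s result is easy to see in miniature. The sketch below uses made-up population figures and a bare “population times average” aggregation rule; these are my own illustrative assumptions, not Parfit’s exact construction or anything Harris specifies.

```python
# A toy illustration of the aggregation behind the Repugnant Conclusion.
# The numbers and the simple "population x average" rule are assumptions
# for this sketch only.

def total_welfare(population, avg_well_being):
    """Aggregate welfare as population times average well-being."""
    return population * avg_well_being

# World A: 7 billion people flourishing near the maximum level (1.0).
world_a = total_welfare(7e9, 1.0)

# World Z: 500 billion people with lives barely worth living (0.05).
world_z = total_welfare(500e9, 0.05)

# On a pure totals view, World Z beats World A, even though every
# individual life in Z is far worse.
assert world_z > world_a
```

The point of the sketch is only that once well-being is summed impersonally, quantity can always outbid quality.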

Sam Harris’s defense of maximizing collective well-being—and why it’s not obligatory

Sam addresses The Repugnant Conclusion in his response to my essay. He maintains that maximizing collective well-being is good—indeed, best. What’s more, he insists that his moral theory does not obligate us to increase overall welfare.

Parfit’s Repugnant Conclusion is like Zeno’s Paradox of Motion

In The Moral Landscape, Sam captures the repugnance of The Repugnant Conclusion when he writes:

If we are concerned only about total welfare, we should prefer a world with hundreds of billions of people whose lives are just barely worth living to a world in which 7 billion of us live in perfect ecstasy.

The picture’s no prettier when we envision the potential peaks of average well-being. In fact, Sam concluded that “we cannot rely on a simple summation or averaging of welfare as our only metric.”

Sam now proposes that The Repugnant Conclusion is a mere puzzle, comparable to Zeno’s Paradox of Motion. In his response, he writes:

How do any of us get to the coffee pot in the morning if we must first travel half the distance to it, and then half again, ad infinitum? … Once mathematicians showed us how to sum an infinite series, [this] problem vanished. Whether or not we ever shake off Parfit’s paradoxes, there is no question that the limit cases exist: The worst possible misery for everyone really is worse than the greatest possible happiness.

Contrary to what Sam suggests, the “limit cases” for collective welfare aren’t as plain as the position of your coffee pot. The point of Parfit’s paradox is that at least the upper limit Sam proposes is in question.

Comparison of Zeno’s Paradox of Motion and Parfit’s Paradox of Welfare Maximization (“The Repugnant Conclusion”)

In the motion paradox, you traverse an infinitely divisible path, yet you do so in a finite number of steps. In the maximization paradox, we achieve universal absolute flourishing, yet a “better” world (i.e., a higher peak of collective well-being) emerges in which everyone’s life turns out to be worse.

The motion paradox raises this question:

  • How do we get from point A to point B (the coffee pot) if, by virtue of space’s divisibility, the path between never ends?

The maximization paradox raises this question:

  • Why do we stop at absolute flourishing for all (Sam’s “point B”) if, by virtue of well-being’s summability, the path beyond leads to higher and higher peaks?
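The disanalogy between the two paradoxes can be made concrete with a quick numerical sketch (entirely my own illustration): Zeno’s half-steps form a convergent series with a built-in limit, while a simple welfare total has no comparable ceiling as we trade quality for quantity.

```python
# Zeno: the infinitely many half-steps sum to a finite distance.
half_steps = sum((1 / 2) ** n for n in range(1, 60))
assert abs(half_steps - 1.0) < 1e-9  # converges to the full distance

# Parfit: totals of "population x average well-being" keep climbing
# as population grows and average well-being falls (illustrative
# numbers only).
totals = [population * avg for population, avg in
          [(7e9, 1.0), (70e9, 0.2), (700e9, 0.1)]]
assert totals == sorted(totals)  # each bigger, worse-off world "wins"
```

The series behind the coffee pot converges; the series of ever-larger, ever-worse worlds does not.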

Zeno’s paradox, introduced in the 5th century BC, has been resolved. But Parfit’s paradox, introduced in 1984, has not. Maybe with time someone will find an answer that preserves Sam’s two basic claims:

  • The experience of well-being is the sole source of all moral value.
  • The best possible world hosts the greatest amount of well-being.

For now, these premises appear to lead to this cringeworthy conclusion: a massive pile of people on the brink of misery morally surpasses a smaller population of people at the peak of well-being.

Still, I haven’t addressed Sam’s “point A”—that is, “the worst possible misery for everyone.” Sam assumes universal absolute misery is bad. From this basic assumption, he infers that more well-being must be better. He also seems to infer the two basic claims I just bulleted. But I don’t think inferring either of those claims is warranted.

Imagine all heat in the universe is replaced with absolute cold. Consequently, death replaces life. But it doesn’t follow that heat is the sole source of all life. Chemical building blocks, for instance, are also required.

In “the worst possible misery for everyone,” Sam imagines all well-being in the universe is replaced with absolute misery. In his view, “bad” thus replaces “good.” But it doesn’t follow that well-being is the sole source of all goodness. Other fundamental moral values may exist.

What other values might we find in morality’s foundation? Consider Sam’s own imaginings. Sam envisions that both:

  1. Every person gets an equal share of well-being.
  2. Maximizing collective well-being requires maximizing personal well-being.

For instance, Sam imagines “every person” gains “a little” well-being at the press of a button. Similarly, the opposite of “the worst possible misery for everyone” is absolute flourishing for “all of us.” Yet Sam’s moral theory seems to contradict (1) and (2). An even distribution of well-being isn’t required for collective maximization. After all, everyone’s well-being is lumped together. Moreover, because of the impersonal nature of well-being’s aggregation, we’re plagued by Parfit’s paradox.

If Sam assumes (1) and (2), it seems he’ll value more than just well-being. He’ll value distributive justice, particularly some principle of equality of well-being. He’ll also value persons such that their individual well-being matters more than maximizing overall welfare.
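The distribution-blindness of lumped-together well-being can be shown with a trivial sketch (the numbers are mine, purely for illustration): a pure totals view cannot tell an equal world from a radically unequal one.

```python
# Toy sketch: simple aggregation ignores how well-being is distributed.

def total_welfare(levels):
    """Sum each person's well-being into one impersonal total."""
    return sum(levels)

equal = [5, 5, 5, 5]     # everyone holds an equal share
unequal = [20, 0, 0, 0]  # one person holds everything

# The totals match, so a maximizer alone has no grounds to prefer the
# equal world; that preference needs a further value, such as
# distributive justice.
assert total_welfare(equal) == total_welfare(unequal) == 20
```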

No one’s obligated to be moral, just as no one’s obligated to be rational

Sam acknowledges that his theory must establish moral facts, but he doesn’t think that it must establish any moral obligations. He writes:

There need be no imperative to be good—just as there’s no imperative to be smart or even sane.

Here, I believe, is Sam’s basic idea. We make moral claims. For instance, we say “free markets are good” or “you should give to charity.” If these claims are objectively true or false, then objective features of the world make them so. In Sam’s view, scientifically discoverable features of conscious experience provide the objective basis for morality. And that basis is enough, he says, “to show moral truths exist.” We can deny or fail to understand these truths, just like any other that science might reveal. If we do, we’ll be bad, stupid, or irrational. But the facts will still be the facts.

Sam is at least half-right. If objective moral truths exist, they don’t depend on our attitudes or opinions, any more than the truth about Earth’s orbit does. Medieval Europeans affirmed geocentrism; they were wrong. Early Americans approved of slavery; they were wrong. Being objectively wrong in these two cases means being contradicted by the facts, physical and moral, respectively.

But there’s a difference between these two errors. Planetary motion is an impersonal phenomenon that we discover. When we determine that geocentrism is incorrect, we don’t rearrange the solar system. We don’t prescribe that the Earth orbit the Sun (given that heliocentrism is correct). Rather, we describe the solar system’s movements more accurately. In contrast, slavery is a personal phenomenon that we institute. When we determine that slavery is morally bad, Sam would say we don’t rearrange the moral landscape. We see its contours more clearly. Even if he’s right, we also prescribe that slavery cease. Moral facts issue a call to action. They entail that we should or should not rearrange our social system. Might we fail to do so? Yes. But the imperative to do so seems to remain.

I’d again like to thank Sam Harris and Russell Blackford for engaging me in The Moral Landscape Challenge. I hope our exchange has set many minds in motion on the topic of science, philosophy, and moral truth.

37 replies on “The Fight for Moral Truth”

  1. At the risk of sounding sycophantic: brilliant! All the cards are on the table, well, most of them. Now I think it’s time for Sam to show his hand: what’s eating away at him that convinces him he’s right? As I’ve said of the religious: in the face of overwhelming refutation, it’s up to them to convince us they are right or we are wrong. Using the get-out-of-jail card, intuition, just isn’t good enough. Not only are the stakes high in the social sphere (i.e., what would we do without religion?), but they are even higher when it comes to the role of philosophy as a function in society (all to be explained in the future), and higher still when it comes to the responsibility of the philosopher in expounding rationality in these times of torrid ideas.

  2. Ryan,

    You’ve demonstrated to my satisfaction that Harris hasn’t rebutted your critique. He hasn’t shown that the is-ought distinction is merely a “trick of language”, hasn’t shown why the quest for flourishing should minimize disparities in opportunities for well-being, and hasn’t shown how science, a quintessentially descriptive enterprise, somehow becomes prescriptive in establishing objective moral truths. I’m sympathetic to consequentialism/utilitarianism but find Joshua Greene’s defense of it in Moral Tribes – what he calls deep pragmatism – considerably more illuminating than what Harris offers.

    I think the challenge remains of articulating Harris’s science-based moral realism and the non-realist alternatives in ordinary language such that non-philosophers can grasp what’s primarily at stake, and why any of this matters. I’ve tried my hand at this in my response to Harris at

    Thanks again for your patient and refreshingly dispassionate analysis, which I think should have changed Harris’s mind and maybe still might at some point.

    1. Thanks, Tom.

      I’ve heard good things about Greene’s Moral Tribes. I’ll have to check it out.

      And thanks for the link to your piece. Public communication of philosophy isn’t easy. But then public communication of science isn’t easy, yet Neil deGrasse Tyson, Bill Nye, et al. aren’t doing too bad. Perhaps we need a Phil Phi the Philosophy Guy! 🙂

  3. Hi, Ryan. This is a very nice response to Sam Harris. I’m glad you pointed out that “anyone who disagrees with me is insane” is a weak argument (not to mention terrifying, when we consider how his view might play out in terms of public policy). His response to this point would probably be something like this: It’s not that they are insane, but they are either confused about their own thinking or suffering from some neurological malformation. They should be treated or, if that is impossible, prevented from harming themselves and the rest of humanity.

    His question is: Why should we take somebody seriously who claims to value anything over well-being? To your credit, you’ve given a cogent argument for why well-being cannot be the only value needed to sustain our moral judgments. However, you have not shown that we must value anything over well-being. Harris might still claim that a science of well-being provides a foundation for morality, and that we could scientifically take up other values, if needs be.

    I think the more important point you make is at the end, that if we do away with moral imperatives, we do away with morality itself. I would like to see more on this—and it connects directly with my own criticisms of Harris, as you know.

    1. Thanks, Jason. You’re right that Harris can be read as asking this:

      Why should we take somebody seriously who claims to value anything over well-being?

      I think I give reason to believe that sometimes other values must take priority over well-being. For instance, one way we might block the Repugnant Conclusion is by valuing some justice-oriented notion of desert or respect. That is, in virtue of being persons (not just vessels for well-being), we are each owed a greater level of well-being than the Repugnant Conclusion entails. Further, the well-being of some (e.g., the least advantaged) might be due greater protection. We’re still talking about well-being, of course. But we’re checking or constraining its role in our moral judgments by appeal to other values.

        1. I don’t see how that would give justice precedence over well-being. Harris can still say that some basis for making moral judgments has been established, even if its application sometimes requires additional considerations. It might be interesting to try to formulate a variation on Parfit’s argument: The worst possible misery is not a definite limit, because we can always increase misery by adding one, two or millions of mostly happy people. Instead of a Repugnant Conclusion, this is more of a Pleasant Conclusion.

        The question Harris has to answer is: Why isn’t it better to have a world with 500 miserable people than 5,000,000 mostly happy ones?

        1. Sam’s view is that well-being is the only consideration. All other considerations reduce to a concern for the well-being of conscious creatures. Further, to quote Sam’s response to my essay, he says “the well-being of the whole group is the only global standard by which we can judge specific outcomes to be good.” If he’s right, it seems there shouldn’t come a point when justice (coherently) blocks us from maximizing collective welfare.

          But your point, I take it, is that if we admit universal absolute misery is bad, then we admit that well-being is the sole basis for moral judgment. I argue in my post that we need only admit that well-being is necessary (not sufficient) for moral judgment. I’ll add here that the intuition Sam taps into may also suggest that misery has rather weighty disvalue. But I’m unsure whether we could invert the Repugnant Conclusion based on the aggregation of misery.

          1. Hi Ryan,

            Harris has often said that he is not claiming science can answer every moral dilemma. So he does not have to change his position. You would have to show that well-being is never sufficient, or not sufficient in a significant enough number of cases. I don’t think you’ve done that.

            Also, I think you’re giving away too much by allowing that it is necessary.

          2. Indeed, challenges remain. Hypothetical moral dilemmas like The Trolley Problem and especially The Inhospitable Hospital don’t have much bite for Harris. He does take The Repugnant Conclusion pretty seriously in his book, though seemingly only as a practical (not, as I see it, deep conceptual) problem for his moral theory. But now it appears Parfit’s paradox also strikes him as toothless.

            Maybe you’re right that I’ve given away too much. But I’m okay with at least starting from a compromise position that says well-being has a necessary role to play in any complete moral theory. There will always be room for debate 🙂

          3. Nothing wrong with compromise, but this one doesn’t work. The problem, which you suggested at the end of your reply, is that Harris isn’t really talking about morality at all. How can you say (1) that Harris is not really talking about morality, and (2) that he has made a case for well-being as a necessary value in our morality? It is inconsistent. That’s why I prefer focusing on the fact that Harris is changing the subject and playing games with the language.

          4. How can you say (1) that Harris is not really talking about morality, and (2) that he has made a case for well-being as a necessary value in our morality?

            On (1), my view is that he’s talking about at least a portion of morality. He’s working on a theory of the Good, but he’s seemingly leaving off a theory of the Right—or at least a full-throated one. On (2), I say Harris’s theory of the Good is incomplete, whereas he would say it’s complete. So I think I can make room for well-being in a theory of the Good, but still consistently press the issue about developing a proper theory of the Right. Indeed, whether or not well-being is a necessary moral value, we might still deny that moral imperatives exist. For instance, Sam might say that the dictates of justice can be properly understood, even if this understanding entails no imperative to act. We could be like the infamous ‘amoralist’ who (purportedly) grasps the meaning of moral rules but remains indifferent to them.

  4. I hope that I don’t sound too naïve, but I’ve never really understood why utilitarianism takes well-being, or pleasure, or happiness to be something over which we could value nothing more. These are things that we experience and, abstracted from the beings that experience them, they are nothing – just ideas. The value of well-being is only intelligible in the context of those beings who experience it, so it is not well-being abstracted that we are expected to value but the well-being of other people. Surely it is perfectly intelligible to ask why we value the well-being of other people? The answer most of us would give is that it is because we value those other people to begin with – so it seems to me that utilitarianism has the relationship between the value of people and their well-being back to front.

  5. I very much appreciated your winning essay and thought it highlighted the apparent weaknesses of Sam’s theory. However, I am much less convinced by your two responses to his rebuttal. I think the core of Sam’s theory lies in a claim about human intuitions. This is the claim you did not grapple with in your responses but he highlighted in his rebuttal to your essay. He posits that ultimately, every human being (along with all conscious creatures) cares most about their well-being. In fact, it is the only thing they can care about. We all define well-being differently: some achieve a sense of well-being through sexual gratification, others culinary delights, some a thought-provoking philosophical debate, and some through violence and murder.

    It is this claim I think you need to respond to in order to fully refute Sam’s central thesis. That is because, given that intuition is present in us all, a science of morality is as possible as a science of health or a science of plumbing. They all rely on certain core intuitions, like causality and the veracity of empirical evidence. If we acknowledge that well-being is the chief determinant of good and bad, then science can easily be directed to help us discover what best leads us up a peak on the moral landscape. The same tools of the scientific method can be employed in this new search for scientific truth. You’ve presented many difficult questions for such a science when you discussed the merits of equally distributing well-being, the Repugnant Conclusion, etc., but none of those claims refute what Sam is fundamentally arguing. “Well-being” is left nebulous in the book for a reason. Sam does not want to presuppose what well-being actually means. That is up to science to determine. He merely wants to highlight that “well-being” is the intuitive base concept we all share, a guiding light towards a more moral world.

    1. Your statement of Harris’s position sounds fair, Michael. Granting that we care most about—indeed, care only about—well-being, we can proceed to investigate its causes and other characteristics, just as we do with health. We’ll have a science of morality just as we have a science of medicine. But I’m not convinced Sam has supported his claim about well-being.

      Setting aside his statement that it’s insane to question his account of well-being’s value, his defense of that account seems to come down to the intuition that “the worst possible misery for everyone is bad.” I argue in my post that this intuition shows that well-being has moral value, but not that well-being is the sole source of all moral value. Further, when we look at the other extreme Sam imagines (“absolute flourishing for all”), it appears we need to value persons and perhaps even some notion of equality as well to achieve what Sam sees as the morally best world.

  6. Hi, Ryan. Perhaps that is consistent. In any case, we still don’t quite agree on how best to respond to Harris, but I appreciate you taking the time to clarify your views. Thanks.

  7. I think that sometimes the resolutions to problems get lost when thought experiments are completely conceptualized, rather than grounded in a pragmatic realism.

    The Repugnant Conclusion relies on removing every other factor that it is possible to consider, for example temporal factors. What will happen to these two possible societies as time goes on? Society A can be the 200 million flourishing, Society B can be the 10 billion tolerating.

    Society A can reproduce as much as they want; if every couple had 4 children, they would have an equal number of people to Society B within 5 generations. So they would then trump Society B in terms of well-being.

    Society B live in extremely poor conditions. With generation after generation going by, they would have a long, hard struggle to get anywhere near Society A, especially because, in conditions as bad as theirs, the infant mortality rate would be high, so they couldn’t catch up by mere reproduction.

    If we situate the thought experiment on Earth, we find that there will be a maximum number of people that can possibly live on the planet at maximum flourishing levels. We can play around with the numbers, to add in extra people. Adding in these extra people could cause problems that we cannot foresee by imagining an imaginary society. Would there be enough food? What about land, and mating opportunities, and jobs to be done that allow them to connect to and contribute to society at large?

    This is why the practicalities of applying morality to the real world ought to lie with science; thinking rationally isn’t enough.

  8. Ryan, your work is clear and eloquent, and your critiques are fair and illuminating. Thank you! But with all due respect, I would like to push back, if I may, against your criticisms here, because I once again think Sam’s arguments can be redeemed with some philosophical reinforcement. 🙂

    Part I

    In response to “A Moral Theory Must Not Make Everyone Miserable”:
    Your “socio-economic elite” counterexample fails here because you misidentify Sam’s “core claim” in the passage you critique. The Misery Rule may not establish the good of “maximizing well-being”, but it does establish that any successful moral theory (or any one worth taking seriously) will be fundamentally consequentialistic—which is what Sam was actually arguing for here.

    1. I think consequentialism stands or falls with the principle of maximization. Consequentialism says that what’s morally right or wrong depends solely on consequences. For Sam, the relevant consequences are conscious experiences, and the best possible world hosts the most goodness (i.e., the conscious experience of well-being). If all that’s correct, then in principle there shouldn’t come a point when anything other than the production of more goodness bears on our moral evaluations. But Parfit’s paradox suggests otherwise. So do the conditions Sam sets in his thought experiments about a good or best world. (See my comments about “equal share” and “personal well-being”).

      I also don’t think the Misery Rule shows that the correct moral theory is fundamentally consequentialist. Rather, I think it (EDIT: at best) shows that consequences can matter, and that misery can have rather weighty disvalue that requires moral consideration (EDIT: which is also what I take away from “the worst possible misery for everyone”). However, it does not show that consequences—particularly conscious states—are all that matters morally. Essentially, I’m taking a stance that permits both consequentialist and non-consequentialist considerations. I’m not taking an anti-consequentialist stance that denies consequences play any role in a successful moral theory (in case that was unclear).

      1. “Consequentialism says that what’s morally right or wrong depends solely on consequences.”

        I don’t know about this. If we make the distinction (as I recommend below, in part III of my reply) between “intrinsic” and “instrumental” values, then it may be the case that a successful moral theory will depend on a complex framework of instrumentally valuable principles and methods, where what is morally right is not solely dependent on what will result in the greatest sum of well-being. But it will nonetheless be fundamentally grounded on “well-being”, and so its consequences will necessarily matter.

        Put another way, if we agree that well-being is a necessary condition of a successful moral theory, then I think a basic notion of consequentialism also becomes a necessary condition. For if any theory were to result in the violation of that agreed-upon necessary value, then it would necessarily fail as a moral theory. Or more intuitively, if a successful moral theory were truly inconsequential and unconcerned with how it affected the well-being of conscious agents, then it could possibly violate the Misery Rule. But a successful moral theory cannot violate the Misery Rule. Therefore, a successful moral theory cannot be inconsequential and unconcerned with how it affects the well-being of conscious agents. So I reject your following claim:

        In summary, a successful moral theory will surely entail a more complicated framework than the proposition, “Maximize the production of well-being.” But it will necessarily be concerned with, and fundamentally grounded upon, the value of well-being; and so we will necessarily be concerned about and judge a theory according to its consequences. And I think all this is coherent with what Sam has argued.

        1. It seems we’re not agreeing on the definition of consequentialism. The definition I employ is the one that prevails in the philosophical literature. You appear to define consequentialism in opposition to anti-consequentialism: what is morally right or wrong never depends on consequences. Again, I endorsed non-consequentialism: what is morally right or wrong does not depend solely on consequences; rather, it depends on other things as well. If you wish to reject maximizing consequentialism (perhaps in favor of satisficing consequentialism or the like), I understand that position. But I believe it pushes us toward non-consequentialism.

  9. Just one thing I noted, here:

    Harris: “[C]ome up with an action that is obviously right or wrong for reasons that are not fully accounted for by its (actual or potential) consequences.”

    Ryan: “If there were an action that obviously met this challenge to the satisfaction of a committed consequentialist like Sam (who thinks earnest rejection of consequentialism is insane), I doubt we’d be debating consequentialism.”

    Instead of actually trying to think of something, you seem to just say that would change the subject. Perhaps you were justified, but I wondered why you didn’t even try.

    1. I’m not saying an attempt to answer Harris’s challenge would change the subject. I see a few problems with Harris’s challenge.

      First, he asks for an action that “obviously” satisfies his demand. (The emphasis on “obviously” is Harris’s.) I’m not sure what he means. But given the protracted debate over consequentialism, it appears the issue is a complicated and even subtle one, not an obvious one.

      Second, the standard criticisms of consequentialism attempt to show that if we take consequences to be all that matters morally, then we arrive at moral judgments that don’t actually appear to be moral. Basically, the method of critique is reductio ad absurdum. Parfit’s Repugnant Conclusion is an example, as are the thought experiments I cite.

      Third, Harris sets the burden of proof as though there’s a presumption in favor of consequentialism. There’s not. Philosophical opinion is split. Only about 1 in 4 professional philosophers report being consequentialists. Given that Harris’s repeated defense of consequentialism is to say that rejecting it is insane, he appears to think roughly 3 in 4 philosophers are insane. But it seems more likely that there are arguments to consider (as I suggest in my second point in this comment).

      1. Ok, I agree he shouldn’t have made the “insane” comment; that was wrong, and the reason it was wrong is because of the consequences…it has bogged down the debate, as you now seem to be unwilling to proffer an example. I’m not saying there is no example, I just can’t think of one. I’m not calling anyone insane, so could you give me an example of an action that is obviously right or wrong for reasons that are not fully accounted for by its (actual or potential) consequences? I think the “obviously” qualifier is important so we don’t get tangled in odd examples where it is difficult to tell whether it is right or wrong.

        1. You haven’t responded to my first or second points. Why think that the debate can be settled with an example that would “obviously” satisfy Harris (especially when he appears cognitively closed to such an example, as he insists it is neither “psychologically credible [nor] conceptually coherent”)? Why is it unsatisfactory to attempt a reductio ad absurdum critique of consequentialism?

        2. The problem with this challenge is that what is “obviously morally right” or “obviously morally wrong” is not necessarily a matter of fact, and it may entirely depend on your point of view. It is extremely easy to find examples today and throughout history of people making judgments about what is obviously morally right or wrong without ultimately appealing to consequences. People value all sorts of things–fairness, honesty, loyalty, dignity, etc.–and appeal to them to justify their judgments about what is “obviously” right or wrong. You or Sam Harris might say that these things are not obviously right or wrong, or you might say that they are right or wrong because of their consequences. But that is you making a moral judgment according to your own sense of right and wrong. You favor justifications that ultimately appeal to consequences. That is obviously not the only way of going about it.

          You might try to argue that the values of fairness and honesty, for example, lead to desirable consequences. They might or they might not, but that does not mean that the outcomes justify the values. The fact that one values well-being might increase the probability of a desirable outcome, but that does not entail that one is justified in valuing well-being . . . unless you beg the question by assuming that a value is justified if it increases the probability of a desirable outcome.

          1. “The problem with this challenge is that what is “obviously morally right” or “obviously morally wrong” is not necessarily a matter of fact, and it may entirely depend on your point of view.”

            Give me one example of an obviously right or wrong action IN YOUR OPINION that doesn’t have any (actual or potential) consequences IN YOUR OPINION.

            “You might try to argue that the values of fairness and honesty, for example, lead to desirable consequences. They might or they might not, but that does not mean that the outcomes justify the values. The fact that one values well-being might increase the probability of a desirable outcome, but that does not entail that one is justified in valuing well-being . . . unless you beg the question by assuming that a value is justified if it increases the probability of a desirable outcome.”

            Values of fairness and honesty might or might not be valuable. A society would probably have to agree with that. I can think of reasons why it would be valuable, but that is beside the point.
            IF you think fairness and honesty are valuable, can you think of actions judged right or wrong that relate to fairness and honesty that do not have actual or potential consequences in terms of fairness or honesty? Do you think lying might have some negative actual or potential consequences in a society that values honesty?

            “You favor justifications that ultimately appeal to consequences.”

            I don’t know what I favor. But I can’t think of any action that I might define as morally right or wrong that doesn’t have actual or potential consequences.

            “That is obviously not the only way of going about it.”

            What is another way to go about it?

          2. A personal example: I am a generally honest person. I’ve known people who lied often and seemingly at random, but that seems wrong to me. Not because of any particular consequences, though I am sure that lying can have all sorts of consequences–some good, some bad. But it seems to me that gains won through lying are not deserved, and therefore morally unjust–even if nobody is worse off for it. I think it has to do with the sense of unfair manipulation. If you gain something from somebody through dishonest means, you are not respecting them as an equal, and you are not giving them an equal chance to understand the situation. Then again, it might just be because I’m born with an instinct to value honesty. It might not be for any deep reason.

            Now, you might say that, in fact, people generally are worse off if unfair manipulation is permitted. I really don’t know. Maybe you would be right, maybe not. But that supposition is not why I think it is better to tell the truth.

            By the way, obviously all actions have consequences. That’s not the point. The point is whether the actions can only be justified by appeal to those consequences.

  10. Part II

    I don’t think Parfit’s “Repugnant Conclusion” demonstrates a flaw in valuing well-being, in consequentialism, or in Harris’s Moral Landscape. Rather, I think it demonstrates that a reasoning error occurs when well-being is merely summed (or averaged). And what is required to properly reason in such a case is something like Nick Bostrom’s “Self-Sampling Assumption”, which he presents in his book Anthropic Bias, on the anthropic principle and observation selection effects:

    (SSA) One should reason as if one were a random sample from the set of all observers in one’s reference class.

    As Bostrom states, SSA is useful when objectively evaluating evidence with an indexical component (57). (And he offers the possibility of it being a “requirement of rationality” (58)). And he claims:
    “SSA purports to determine conditional probabilities of the form P(“I’m an observer with such and such properties” | “The world is such and such”), and it applies even when you were never ignorant of who you are and what properties you have.”

    And so SSA helps reveal that the desired consequence of a moral theory is not merely a maximal sum of well-being, but rather the maximal probability of a conscious agent in a given population experiencing a maximal amount of well-being–which is perfectly coherent with what Sam argues. I.e., the way to reason out of Parfit’s dilemma is to take a random sample of some given population of conscious agents and determine the probability of an agent leading a flourishing, happy, and fulfilled life. The rational choice for any person is the population for which SSA gives the highest probability.
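    To put the contrast concretely, here is a toy numerical sketch of my own (the well-being scores, population sizes, and “flourishing” threshold are invented parameters, not anything from Bostrom or Harris): mere summation prefers a vast, barely-contented population, while the SSA-style sampling metric prefers the small, flourishing one.

```python
# Toy illustration: summing well-being vs. an SSA-style sampling metric.
# All numbers here are invented for illustration only.

def total_wellbeing(population):
    """Sum of well-being scores -- the aggregation Parfit's paradox targets."""
    return sum(population)

def p_flourishing(population, threshold=8.0):
    """Probability that a randomly sampled member of the population
    has well-being at or above the (stipulated) flourishing threshold."""
    return sum(1 for w in population if w >= threshold) / len(population)

small_happy = [9.0] * 1_000         # 1,000 people, each flourishing
huge_barely = [0.1] * 1_000_000     # 1,000,000 lives barely worth living

# Mere summation prefers the huge, barely-contented world...
assert total_wellbeing(huge_barely) > total_wellbeing(small_happy)

# ...but a random sample from the small world is certain to flourish,
# while a sample from the huge world never is.
assert p_flourishing(small_happy) == 1.0
assert p_flourishing(huge_barely) == 0.0
```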

    1. I haven’t heard this sort of response to the Repugnant Conclusion. Is there something like it in the philosophical literature? Maybe I’m even overlooking it in the SEP entry? I’m just curious whether I can find further discussion of it somewhere.

      In any case, it sounds like your proposal fits the strategy of finding a new way to calculate overall welfare. My immediate impression is this: even if the probabilistic calculation you propose obviates the Repugnant Conclusion, it appears to do so by introducing a non-consequentialist principle. Indeed, it sounds much like we’re reasoning behind Rawls’s “Veil of Ignorance.” But then you suggest that SSA “applies even when you were never ignorant of who you are and what properties you have.” Perhaps you can elaborate here.

  11. Part III

    And this leads me to my final thought—which is that a distinction is needed between “intrinsic” goods and “instrumental” goods. And I would argue that no value (including justice) answers Moore’s “open question” except well-being, because all other values are instrumental values of well-being, which is intrinsically good. (I still don’t know what you mean when you say that a principle of justice is good in and of itself. What makes any one principle more intrinsically valuable than an infinite number of others?)

    What do you think of this, Ryan? I tried to make some hard points without writing a novel. Namely, 1) that your counter-example(s) attack summing well-being, but don’t refute Sam’s argument about consequentialism–a theory’s consequences are indeed vitally important; 2) that the problem of the “Repugnant Conclusion”–which is the most serious problem offered–is very likely a reasoning problem, not a defeater of utilitarianism (as Sam argues); and 3) that we must distinguish between “intrinsic” and “instrumental” values, where well-being is necessary (as you agree) and intrinsically good; but justice is merely instrumentally valuable, because it does not bottom out Moore’s “open question” as Sam argues well-being does.

    Peace and thanks for your time!

    1. I accept a distinction between “intrinsic” and “instrumental” goods. But I don’t follow your claim about justice and Moore’s open question argument. As I understand that argument, his conclusion is that goodness is a fundamental, unanalyzable property. So, in his view, to say that justice is intrinsically good is not to say that goodness = justice. Rather, it is to say that justice bears the property of goodness independent of other entities or states of affairs (such as pleasure and beauty, both of which Moore says also bear the property of goodness).

      My view about the intrinsic value of justice comes back to my response to your initial comment (“Part I”). If justice lacks intrinsic value (i.e., value that is not instrumental to the production of well-being), then justice places no constraints on welfare maximization (constraints that, as I note in my post, Sam appears to include in some of his own thought experiments). But justice does appear to set such constraints. So justice appears to have intrinsic value.

      1. I am here referencing pages 10-12 of the Moral Landscape, and I am using Moore’s open question argument as a condition (or test) for a value being “intrinsic” or not. The wiki on the Open Question Argument is pretty good; it reduces the argument to the following:

        If X is analytically equivalent to “good”, then the question “Is it true that X is good?” is a meaningless one. The question “Is it true that X is good?” is not meaningless (i.e. it is an open question). Therefore, X is not (analytically equivalent to) “good”.
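        Formally, the reduced argument is an instance of modus tollens; here is a minimal sketch in Lean (with placeholder proposition names of my own standing in for the English sentences) showing that the form is valid:

```lean
-- Placeholders (my naming, not Moore's):
-- `AnalyticallyGood` : "X is analytically equivalent to 'good'"
-- `Meaningless`      : "the question 'Is it true that X is good?' is meaningless"
variable (AnalyticallyGood Meaningless : Prop)

example (h1 : AnalyticallyGood → Meaningless)  -- premise 1
        (h2 : ¬Meaningless)                    -- premise 2 (the question is open)
        : ¬AnalyticallyGood :=                 -- conclusion
  fun hx => h2 (h1 hx)
```

        So any dispute is over the premises, not the inference.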

        Sam argues that the conclusion does not follow in the case of well-being. “…[T]he regress initiated by Moore’s ‘open question argument’ really does stop…it makes no sense to ask whether maximizing well-being is ‘good’” (12). Thus, well-being is not good for some other reason, but is necessary and intrinsically good, in and of itself. But the same cannot be said for other moral values; it will indeed be meaningful to ask whether they are good, and the answer will depend on a line of reasoning connecting the value to well-being and the promotion thereof–thus demonstrating its “instrumental” nature.

        (Furthermore, I think this context might make sense of Sam’s use of such words as “obvious” and “insane”; I don’t think he is trying to insult anybody in such instances, but rather he is trying to maintain a response to Moore’s influential argument.)

        1. I see where you’re going. However, I think we’re understanding Moore’s open question argument differently.

          Moore’s argument intends to show that “good” resists definition in terms of something else. For instance, he claimed that pleasure is not analytically equivalent to “good.” But he appeared to allow that things can be good intrinsically, even if they’re not good analytically. For instance, he appeared to allow that pleasure, justice, and beauty are all intrinsically good. However, he denied that any of these are analytically equivalent to goodness (in the way Harris believes well-being = goodness). Taking the open question argument a step further (that is, a step beyond what’s stated in the wiki), Moore held that goodness is a fundamental, unanalyzable property.

          Regardless of whether we take that step, we can draw a distinction between “X is analytically Y” and “X is intrinsically Y.” To offer a non-moral example, I’d say that water is intrinsically fluid/liquid, but it is analytically H2O. Likewise, both well-being and justice can be intrinsically good, regardless of what we say about Moore’s open question argument (which, again, is about things being analytically good).

  12. As a neophyte lay philosopher, I hope my points aren’t overly juvenile. But the idea of “maximizing the general well-being” terrifies me. It seems to just paraphrase some very misguided notions from the past: “the needs of the many outweigh the needs of the few” or “the ends justify the means”. The six million deaths in the Holocaust were no problem because the consequences were increased well-being for the eighty million Germans who had the good sense to not be Jewish. I haven’t read Mr. Harris’s book yet, but I already have my doubts. I agree with your assessment that his theory presupposes its own conclusion.

    I do, however, agree with Mr. Harris that there is no obligation to be moral. While I may agree it is immoral to kick a beggar and moral to give a beggar a dollar I cannot agree that I am morally obligated to give a beggar a dollar. I can just as easily choose the amoral route and walk indifferently past the beggar. The problem with moral obligations is that they cannot be universal. There are always loopholes for people who can’t keep the obligation for one reason or another. A conditional morality is not a morality.

    1. Well, I did read his book, and he isn’t really trying to establish what an ideal society is although he does make some effort to say what he thinks it might be. He certainly recognizes that determining the best society isn’t captured by single slogans. And maximizing the general well-being was just his way of describing a society that is fair to all, maximizing everyone’s well being to the extent possible. But the real thrust of the book was to say that IF members of a society can agree on what their goals are for a good society, that allows them to judge, objectively, whether actions generally move us toward that goal or away from it.

      The thing he said that I was struck by is that being moral is not always easy. Being moral, just like being healthy, takes work. And just because everyone doesn’t agree on exactly what “being healthy” is, they all have a pretty good idea of whether a given decision/action is generally going to move one toward improved health or away from it. The same would go for having a good financial basis. Everyone might not agree on that, but we generally can tell when decisions folks make seem to be moving them toward a better financial basis vs away from it.

      You are right, he makes the point that nothing is going to force folks to be moral…just like with Christianity…or even God. Folks are free. But for those who do want to live better lives in a way that is in the best interest of all, there are ways to help them understand how to achieve that goal, and science can help.

      At least that is what I took away from the book.
