In part 1 of this two-part series, I gave Sam Harris’s “worst possible misery for everyone” argument (WPME). I also situated WPME within Sam’s overall defense of a scientific theory of objective morality.
Now, I’ll finish up by evaluating WPME. I’ll contend that WPME fails to secure its conclusion that morality depends entirely on increases and decreases in well-being. Specifically, WPME fails to rule out justice—particularly in well-being’s distribution—as something that matters along with the amount of well-being in the world.
Recall from part 1 that WPME comes in the middle of a larger seven-step argument. Two steps precede it, and two steps follow it. Below I give just WPME.
- (3) A state S of the world in which every conscious creature is maximally miserable is bad.
- (4) Therefore, a state T of the world that replaces at least some of the misery in S with the experience of well-being is better than S.
- (5) Therefore, increases/decreases in the well-being of conscious creatures fully determine which states of the world are morally better/worse.
The “worst possible misery for everyone” is the subject of (3). Every human and non-human being in the world that can suffer is suffering as much as it can for as long as it can. Well-being is at its lowest level for every single individual.
Sam believes (3) is an uncontroversial moral judgment. In fact, he considers it a foundational moral truth from which we can infer other moral truths, starting with (4), which says that the world in (3) is made morally better by raising the level of well-being, however small the increase.
The contrast Sam seems to emphasize most, however, is not between the conditions in (3) and (4). It’s between “the extremes of absolute misery and absolute flourishing.” Here’s the full quote from The Moral Landscape (page 39).
Once we admit that the extremes of absolute misery and absolute flourishing—whatever these states amount to for each particular being in the end—are different and dependent on facts about the universe, then we have admitted that there are right and wrong answers to questions of morality.
The extreme of “absolute misery” refers to the “worst possible misery for everyone.” So, the extreme of “absolute flourishing” refers, presumably, to the polar opposite: an (according to Sam) uncontroversially morally good state of the world in which every being that can flourish, i.e., enjoy well-being, flourishes as much as it can for as long as it can. Well-being is at its highest level for every single individual. Indeed, as I understand the latter extreme, no individual has to flourish less so that some other individual can flourish more. On the contrary, no individual could possibly be better off than he, she, or it is. Uncompromising utopian perfection prevails.
As you may have figured out, Sam assumes well-being can be quantified (“in principle, if not in practice,” he says). Quantifying well-being, if possible, likely wouldn’t be as straightforward as quantifying, say, height. But Sam thinks it would be similar to quantifying physical health.
Supposing well-being can be measured and compared across individuals, it can also be summed or averaged across individuals, yielding the collective well-being. In my last post, I pointed out that Sam believes we should maximize collective well-being, yet he gives no explicit argument to show he’s right. I think we can fashion one from the second step of his full argument (also in my last post) for his science of morality. Here’s that second step, plus the one that precedes it.
- (1) No value/disvalue (good/bad) exists in a world permanently devoid of conscious creatures.
- (2) Therefore, all value/disvalue that exists in the world is value/disvalue to (good/bad for) conscious creatures.
Here, saying “good for” is another way of saying “promotes well-being.” So, (2) is saying that nothing can be morally good except as a means to well-being.[1] Well-being, in other words, is the only thing of intrinsic moral value. Let’s assume that intrinsic moral value is not subject to the law of diminishing returns: more intrinsic moral value is always better. Thus, maximizing the intrinsic moral value found in the world is best. On Sam’s view, doing so means maximizing collective well-being.
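To make the aggregation idea concrete, here is a minimal sketch, assuming (as Harris does, “in principle, if not in practice”) that individual well-being really could be scored numerically. The names and numbers are entirely hypothetical:

```python
# Hypothetical well-being scores for three individuals -- a toy
# illustration of aggregation, not Harris's own method.
wellbeing = {"Alice": 7.0, "Bob": 4.5, "Carol": 6.0}

total = sum(wellbeing.values())   # collective well-being as a sum
average = total / len(wellbeing)  # collective well-being as an average

print(total, round(average, 2))   # 17.5 5.83
```

Which aggregate (total or average) is the one to maximize turns out to matter, as discussed later in the post.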
[1] Frankly, I think we can infer (5) from (2). But Sam seems to give (3) and (4) as support for (5), so I’ll evaluate them as such. In the course of evaluating (3) and (4), doubts will arise about (2).
(1) This argument falls under the 1st strategic option of the four to win the Moral Landscape Challenge, correct?
(2) While not identical, your argument seems equivalent to W. L. Craig’s “Knock Down Argument” he presents in his debate with Harris. Would you agree with this?
Craig summarizes this argument HERE as follows:
“…there is a possible world which we can conceive in which the continuum of human well-being is not a moral landscape. The peaks of well-being could be occupied by evil people. But that entails that in the actual world the continuum of well-being and the moral landscape are not identical either. For identity is a necessary relation. There is no possible world in which some entity A is not identical to A. So if there is any possible world in which A is not identical to B, it follows that A is not in fact identical to B.
Since it’s possible that human well-being and moral goodness are not identical, it follows necessarily that human well-being and moral goodness are not the same, as Harris has asserted.”
Craig makes the point that if a scenario can be presented which turns well-being and goodness against each other, then it follows that they are not identical concepts–and he stops there. You seem, Ryan, to be making a more finely grained point that such a possible scenario implies that well-being may be a NECESSARY condition to moral goodness, but it is NOT sufficient. This critique would be that Sam’s theory is too simplistic, and that morality is a more complicated subject.
Is this right? I find this plausible, but:
(3) Like Sam, I find it hard to be persuaded by the highly abstract scenarios given (e.g., by Craig and by you, in this essay) as defeaters of utilitarianism, because they always seem to work by being fallacious, over-simplified intuition pumps. For example, you offer an “enslavement” scenario as one where well-being is maximized; but it seems obvious to me that your argument is a false dichotomy: a possibility where, say, the slaves are freed and replaced with technology–say, unconscious robots–is one that would in fact maximize well-being and is really good. Your argument does not present all possibilities, and so it also does not present well-being maximized; therefore, it does not present a defeater to Sam.
While I have a minute, I just wanted to further flesh out my thinking in point (2) above for clarification:
To my knowledge, the truth conditions of identity and the biconditional (if and only if; i.e., “iff”) are equivalent; therefore, they are logically equivalent. And Craig has Sam claiming that “well-being” and “morally good” are identical; here we might attribute to Sam what is logically equivalent (though perhaps a more accurate and interesting phrasing), the biconditional claim (BC):
(BC) [If a state S of the world is good, then S entails collective conscious well-being.] AND [If a state S entails collective conscious well-being, then S is good.]
And what I hear you maybe arguing, Ryan, is that the latter conditional of the conjunctive statement (BC) is false, but the former may be true. Well-being may be a necessary, but not a sufficient condition of moral goodness. Moral goodness entails other necessary conditions, like perhaps, e.g., “Justice”–as you argue.
Does this seem right to you, Ryan?
You ask whether I affirm this proposal:
Yes, I’m inclined to say that well-being is necessary. But it’s not sufficient, no. Sometimes a proper moral evaluation will need to account for more than just increases/decreases in aggregate well-being. My intention in this post was to suggest that distributive justice plausibly requires consideration as well.
Brad, I edited your comment to include the numbering you described in your other (now deleted) comment. I’ll check the comment formatting controls to see whether there’s anything I can do to prevent future difficulties.
To (1), yes, this post is a response to Sam’s invitation, in the ML Challenge, to show that his “worst possible misery for everyone” argument fails.
To (2), no, I don’t think my argument is equivalent to Craig’s. Craig seems to be arguing this: Sam’s moral theory assumes that the identity “well-being = good” is a priori necessary; it’s true in all possible worlds. But, by Sam’s own admission, “well-being = good” is not true in all possible worlds. Therefore, Sam’s moral theory fails.
I’m not sure what Sam thinks about identity relations. On the one hand, he seems to claim it’s incoherent to suggest moral goodness amounts to anything other than well-being. On the other hand, he considers it an open empirical question whether “saints and sinners would occupy equivalent peaks [on the moral landscape].” Even if the answer is “no,” that answer would remain a contingent fact. So, perhaps Sam thinks “well-being = good” is an a posteriori necessary identity like “water = H2O”, and so “well-being = good” is likely to be confirmed at least in part by empirical investigation.
In any case, I’m putting forward a value pluralist proposal: well-being has intrinsic moral value but so does justice (and maybe even other things). Like Sam, I offer a thought experiment to pump our intuitions about moral value. The slavery hypothetical is a standard move against utilitarianism (act or rule-based), which Sam’s theory strongly resembles. Another standard hypothetical that pits intuitions about justice against intuitions about maximizing collective welfare is one in which an innocent person is executed to placate the masses. (This second hypothetical seems to work better against act utilitarianism than rule utilitarianism.)
To (3), Sam does tend to reject these hypotheticals as implausible or oversimplified. I worry that he’s rejecting them as silly in practice, even if they suggest something serious in principle. (See Ch 1, note 11 about slavery. For my “silly…serious” point, see especially Ch 2, note 45.) If that’s what he’s doing, it seems at odds with his insistence that, for his thesis to stand, science needn’t be able to answer moral questions in practice–just in principle. He needs to explain when possibility and principle matter and when they don’t. Otherwise, it appears they matter only when they favor his theory.
Back to my hypothetical, you’re right that we could imagine making robots slaves instead of people. But my omitting that possibility from my thought experiment doesn’t mean I’ve put forward a false dilemma. Consider: is the Trolley Problem a false dilemma because it omits the possibility that the train operator might have a radio that could be used to contact a dispatch operator, who could then contact the workers on the track? No, I don’t think so. Like an actual lab experiment, a thought experiment seeks to isolate and investigate only certain variables. My variables did not include robots. Working with just the variables I’ve provided, what result do we obtain? Put another way, given the choices presented, what would be the morally right thing to do if all we ultimately value is maximizing collective well-being? The answer is seemingly a matter of intuition that cuts either for or, as appears to be the case with slavery, against the principle being tested.
Could we still question the construction of my—indeed, any—thought experiment? Yes. For instance, as with an actual lab experiment, we could worry that “lurking variables” are affecting our results—i.e., the intuitions being pumped. If my slavery scenario includes such variables, I would see that as a problem.
I think you are right: my objection should not be that your argument commits a false dilemma.
What I don’t like, and was critiquing initially, was that your argument rests basically on an assertion (A): that for some population of persons, if they instantiate slavery over a minority subset of the population, then well-being will be maximized. I find the conditional of (A) to be unrealistic and entirely implausible. But what is important, I now see, is that you merely claimed (at the bottom of page 3) that, on Sam’s moral landscape (utilitarian/consequentialist ethics), “…the enslavement scenario [is] morally permissible in principle (however unlikely its occurrence).” And this, you maintain, is a problem–a “quandary”, you say, presumably because an ideal theory of morality should not entail this possibility; from which you conclude that a theory which entails other intrinsic values would solve this and defeat Sam’s ML. Your argument seems (I think) to reduce to the following:
(1) If Sam’s ML, then the biconditional (BC) above.
(2) If the latter conjunct of (BC), then it is possible (in principle, though unlikely in practice) that a “slavery scenario”–where a person’s rights and liberties are sacrificed for another’s greater well-being–is moral.
(3) It is not possible for a “slavery scenario” to be moral.
(4) Therefore, Sam’s ML is false, and there exist intrinsic moral values other than well-being (such as justice).
If this is all accurate, then my response is that (3) doesn’t immediately seem true to me (aren’t prisons a good example of premise (2) and the denial of (3)?). And I have a hard time seeing how justice could be an intrinsic value in its own right, one that cannot be reduced to something valued for the sake of well-being. Perhaps you can help me with this.
Let’s come back to the biconditional you propose as my target:
Here’s what I consider a more accurate statement of my target: (Maximize) An action A is morally right iff it maximizes collective well-being.
I think a complete picture of intrinsic moral goodness must include well-being. But I don’t think maximizing collective well-being is sufficient for moral rightness, nor do I think it’s necessary. As (sort of) captured by your (1)-(4), I assume that Sam affirms the Maximize principle, and I argue that the slavery scenario undermines that principle by showing that maximizing collective well-being is neither necessary nor sufficient for morally right action. Given the conditions in the slavery scenario, the Maximize principle appears to sanction morally wrong action. Thus, Sam’s moral theory—well-being maximizing consequentialism—fails as a guide to morally right action.
How does justice fit in? What appears to be morally wrong about the slavery scenario is that slavery is unjust. It appears the morally right action would be to seek an arrangement of rights and liberties that prohibits slavery and redistributes some well-being from the top two-thirds to the bottom third. This distribution of well-being meets the demands of justice, but it’s conceivable that it doesn’t maximize collective well-being. In fact, the Maximize principle appears to be false basically because it violates the demands of justice. If this refutation of the Maximize principle is sound, then justice must be more than a mere means to adding well-being to the world. In the slavery scenario, the net effect of meeting the demands of justice was to subtract well-being.
Given that you say you “have a hard time seeing how justice…cannot be reduced to something valued for the sake of well-being,” you might also say that justice for the bottom third added to their well-being, even if it subtracted from the aggregate. That’s a fair point: somebody’s well-being improved because justice was done. So, you might reasonably maintain that I’ve yet to refute the conclusion of WPME, which I’ve written as
(5) Increases/decreases in the well-being of conscious creatures fully determine which states of the world are morally better/worse.
Prohibiting the enslavement of the bottom third both (a) is morally better than permitting it and (b) increases well-being for the bottom third. Here’s a potentially critical question for me to answer: does the justice that makes (a) true reduce to (b)? If the answer is yes, then parity of reasoning applies to the top two-thirds: their decrease in well-being, which is conceivably greater than the bottom third’s increase, would make prohibiting slavery morally worse than permitting it (not-a). But then prohibiting slavery would be both (a) and (not-a). Saying that (a) reduces to (b) thus yields a contradiction. So, by reductio, (a) does not reduce to (b).
Recall that I offer an argument for the Maximize principle that assumes that more intrinsic moral value is always better. If that assumption is correct, and if well-being is the only thing of intrinsic moral value, then the slavery scenario shouldn’t be a problem for the Maximize principle. But it is a problem. Other problems arise when we start talking about what it means to maximize collective well-being. Say we seek to maximize the overall total. Then we run into Derek Parfit’s Repugnant Conclusion:
“For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better, even though its members have lives that are barely worth living.”
If we seek instead to maximize the average, then we appear justified in preferring a world with one really well-off person to a world with many modestly well-off people. Yet the latter seems better than the former.
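A toy calculation, with invented numbers, shows how the two aggregation rules pull apart:

```python
# Made-up populations: World A has many modestly well-off people,
# World B has a single extremely well-off person.
world_a = [3.0] * 1000  # 1000 people at well-being 3.0
world_b = [100.0]       # one person at well-being 100.0

total_a, total_b = sum(world_a), sum(world_b)
avg_a = total_a / len(world_a)
avg_b = total_b / len(world_b)

# Maximizing the TOTAL prefers World A (3000 > 100) -- and, pushed far
# enough, Parfit-style populations of lives barely worth living.
# Maximizing the AVERAGE prefers World B (100 > 3), against intuition.
print(total_a > total_b, avg_b > avg_a)  # True True
```

Either rule, taken alone, endorses a world that intuition rejects, which is the dilemma set out just below.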
Here’s where I think we’re at. Either (i) more moral intrinsic value is NOT always better; or (ii) well-being isn’t the only thing of intrinsic moral value. If we accept (i), then we seem to reject an imperative to maximize the good, whatever the good is. If we accept (ii), then moral goodness consists in more than well-being; some sort of distributive principles of justice seem to be required, too.
Now, to your prison example. You suggest the slavery scenario might not be bad after all, given that it bears some similarities to imprisonment, which is morally permissible. Granted, both slaves and prisoners have rights and liberties taken away at least partly for the sake of promoting others’ well-being. For instance, prison is intended to protect the populace from crime by containing criminals and deterring crime. But the moral permissibility of prison also seems to depend on whether the prisoners deserve to have their rights and liberties taken away. For instance, even if achieving the greatest deterrent effect required occasionally punishing the innocent, we might prefer to protect the innocent and settle for a weaker deterrent effect. Also, prisoners maintain their status as persons along with some basic rights. In contrast, slaves are reduced to mere property and thus stripped of all rights.
I’ve seen the debate in which Craig gives this “knock-down argument”. Sadly, this is rather typical of his debating style: say enough wrong things that an opponent would require significant time to unravel them all; if he gets enough of them in, his opponent will not have time to address them all. He can then claim that at least some of his mis-statements must be true because they were not refuted, and simply ignore the fact that some of them were effectively refuted. Clever, but it just proves that debating is not a good forum in which to make progress.
In this instance, the graphic used in Sam’s book requires some elaboration: what is plotted on his 3D curve? His book suggests that well-being is plotted against other parameters, and this well-being is a summation of the well-being of all the individuals in a group/society/culture, not the well-being of an individual. Therefore, Craig’s “knock-down argument” that a peak can be occupied by evil people is just wrong; a peak is occupied by the entire group, not a subset, something akin to a performance index. Craig’s argument is that if evil people can occupy a peak on the well-being graph, then it can’t be a moral landscape, therefore A is not equal to B, etc.
I think the enslavement scenario and similar ones are what Sam is talking about when he says we can apply science. Whether well-being is equivalent to anti-suffering is open to some debate, but if we take that as being the case, he suggests that the subjective suffering of an individual may become objective through advances in neurology, i.e., it may become possible to literally feel another person’s pain. So the suffering due to enslavement vs. the well-being (anti-suffering) of having slaves could be quantified, and a performance index obtained for the group of slaves vs. slave owners, or any other such social structure, with all the caveats covering different personalities and their responses to different circumstances. Even justice itself could (or perhaps must) be defined in these terms. It would be in the grey areas of minimal distinctions where the method would most likely find itself most used.
Your argument, while it focuses on a possible incompleteness in Sam Harris’s theory, seems confused about the definition of well-being. The “WPME”, to use your term, is general and all-encompassing; therefore, injustice is covered. If Smith the NON-rapist gets more well-being, the collective well-being increases. But if Smith the rapist gets more well-being, it is called “adding insult to injury”. Therefore what you thought of as WPME is in fact not “worst possible” yet. Having bad people rewarded for bad behavior is one scenario of added suffering. I assume you don’t want to see that, and neither do I. That makes me angry, an emotion I don’t like, and therefore misery to me and other fair-minded people. Now you have a new WPME state. Whatever argument you can come up with, including the unjust distribution of well-being, just updates the describable state of “the” WPME. It is like trying to find the value of the biggest number. I may suggest your analysis is too technical and misses the forest. I hope you don’t find my comments offensive.
I take no offense to your comments, Wai. I understand that Sam thinks well-being encompasses all moral goodness, justice included. Regarding Smith the rapist, I believe I see your point. When Smith the Rapist became less miserable than everyone else, everyone else then became more miserable than they had been. In other words, everyone else was not yet as miserable as possible prior to Smith the Rapist’s becoming less miserable. Still, the Smith the Rapist example is a peripheral point (made in a footnote). What’s most central to my argument in this post is the slavery scenario, and I’m not yet clear on why you think that scenario doesn’t support my claim that, contrary to Sam, well-being does not encompass all moral goodness; justice carries some intrinsic moral goodness of its own.
Ryan, thanks for the cordial conversation. For the slavery scenario you portrayed, I think the justice element is not necessary. I don’t believe we can just use the number of people to calculate the gain/loss of well-being. We also have to use the degree; e.g., one person (slave) may lose 2 units of well-being for 2 other people to gain 1 unit of well-being each (we can leave out efficiency considerations for now). So with 1/3 being enslaved for the other 2/3, at best the gains and losses even out (most likely the loss is much more than the gain, if you consider the anguish involved). You can also see it this way: instead of 1/3 being “enslaved”, this 1/3 simply takes turns supporting the other 2/3. If your example is valid, then this will have exactly the same gain/loss, without involving slavery. So either way the slavery scenario is definitely not producing more well-being, regardless of the justice consideration.
However, I have a more fundamental argument. Having a slavery system cannot possibly lead to well-being even for the “non-slaves”, in general and in the long run, for a simple reason: everyone will have the fear that they may become slaves, and worse, that their children will become slaves to others. Remember we are talking about maximized well-being for the long term. Therefore “no slavery” is the only sensible and logical answer. We can definitely draw this conclusion scientifically.
I have a more complete argument covering more angles, but I think this is sufficient to bring in “reasonable doubt”…
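Wai’s gain/loss bookkeeping above can be made explicit with a toy calculation (the group sizes and unit values are made up purely for illustration):

```python
# Toy bookkeeping for Wai's point: each slave loses 2 units of
# well-being so that two masters each gain 1 unit. The aggregate
# change is then zero even before any "anguish" is subtracted.
slaves, masters = 100, 200              # a hypothetical 1/3 : 2/3 split
loss_per_slave, gain_per_master = 2.0, 1.0

net = masters * gain_per_master - slaves * loss_per_slave
print(net)  # 0.0 -- at best the gains and losses even out
```

On these stipulated numbers slavery adds nothing to the aggregate, which is exactly the shape of Wai’s claim; whether the real exchange rates look like this is, of course, the empirical question Ryan raises below.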
I agree that individual gains and losses must be considered. We can’t calculate the net collective welfare without aggregating them. Further, as you suggest, we could consider it an empirical question whether collective welfare would, in fact, be greater in the slavery scenario I’ve imagined. However, I’m proposing a thought experiment; the relevant findings are conceptual, not empirical. And my point is that it doesn’t seem to be a conceptual truth that a focus on collective welfare prohibits slavery. If science is able to quantify individual and collective well-being, and if science discovers that the slavery scenario I imagine does produce greater overall welfare, then the moral principle being tested — namely, morally right acts maximize collective welfare — permits slavery and, thus, that moral principle seems to be very much mistaken. In other words, no matter what we find in practice, Harris’s moral theory appears, at least in principle, to violate our basic intuitions about what’s morally right.
So the key is in “our basic intuitions about what’s morally right”.
Where is the basis for this intuition? For slavery, we feel it is wrong when one’s well-being is taken involuntarily and claimed by another. Even if it is not feasible to quantify precisely, it is nevertheless fundamentally about well-being, and nothing else. We wouldn’t feel it is wrong if we didn’t care about well-being; there would be no moral concern at all. So as far as individual action is being analyzed, I see well-being and morality as completely equivalent: necessary and sufficient.
However we still have the question, does this apply to collective well-being? Which I need a bit more time to elaborate on.
Considering well-being does seem to be a necessary part of telling moral right from wrong, Wai. But when we say it’s also sufficient, we appear to run into conceptual troubles with collective well-being, as the slavery scenario is intended to suggest. So, yes, I think more will need to be said about those troubles.
I hope this is rigorous enough for the collective well-being and slavery analysis:
Let’s call the slavery system S. As I reasoned before, strictly from the producer/consumer aspect, it can easily be shown that there is an alternative system, call it S’, in which everyone takes turns playing the roles of “slave” and “master”, and which creates exactly the same type and level of well-being as S. However, S will always create misery from the bitterness of “injustice”, as you put it, which decreases well-being, quite substantially in fact, and which must be taken into the calculation of total collective well-being. Therefore S is always less than S’.
I think that shows the sufficiency, at least in the slavery scenario. I suspect it can be easily generalized to include all systems containing elements of injustice. What do you think?
I see at least one difficulty here, Wai. About S’, I’m not sure we can so easily stipulate that having “everyone taking turns to play the roles of “slave” and “master”…will create the same exact type and level of well-being as S.” Recall that in S (that is, the slavery scenario I imagine) slaves are only those who fall in the bottom third of the population for what I’ll call ‘WQ’—that is, ‘well-being quotient’, on analogy with IQ. I’m not sure they’d perform quite the same in the “master” role as the top two-thirds do, in which case the “slave” role might be more or less miserable when taken by the top two-thirds. Simply put, I’m not sure individual gains and losses would balance out if individuals had equal time as “slave” and “master.” In any case, I think we’re back to a dispute that requires empirical resolution. If so, then your conclusion (“Therefore S is always less than S’”) isn’t the conceptual truth we’re after.
Here’s the reasoning I propose. Suppose that, empirically speaking, S maximizes collective well-being. Then, conceptually speaking, Harris’s moral theory permits S. But S seems morally wrong—namely, unjust. Therefore, Harris’s moral theory appears flawed. I worry we’re still hung up on the empirical supposition I start from. I think what’s needed (to defend Harris) is an argument that justice reduces conceptually to well-being. What Harris has appealed to is the fact that justice/injustice correlate with certain positive/negative emotions. But that correlation isn’t enough to support conceptual reduction. And he’s not entitled to the claim that justice is inconceivable or has no rational force within moral discourse except as a felt experience.
Ryan, I need to understand exactly what you meant by the “bottom” or “top” one-third, and the ‘WQ’ implied. A population of humans may comprise people with different levels of competence and different degrees of longanimity, either in specific areas or in general. But slavery is not based on those. The only motivation for slavery is forced labor: group A subjugating group B to increase its own well-being (Wa) at the expense of the other’s (Wb). The collective well-being, Wc, is the sum of Wa and Wb. It is quite obvious that, by precisely trading the roles, with the exchange rate considered, Wc is always greater in a fair exchange system (S’) than in the slavery system (S). If your concern is the tyranny of the majority, then that is a trivial case under my theory. I also left out other trivial cases like sadism, which even more obviously contributes negatively to Wc.
Here’s a point where we diverge. You write: “But slavery is not based on [differences in WQ]. The only motivation for slavery is the forced labor.” I’ve stipulated that the slave population is the bottom third for WQ and that the master population is the top two-thirds for WQ. You’ve rejected this stipulation. Your reason for rejecting it seems to be this claim: the only reason one group would enslave another is that the latter can perform labor. But in the US, black slavery was justified not just on the basis of labor output or other economic concerns; blacks were believed to be morally and intellectually inferior to their masters. Black slavery aside, as I say in my post, the slavery scenario I imagine is roughly analogous to using non-human animals for food and labor. Non-human animals’ WQ is lower than humans’. When calculating the collective welfare, their loss is more than offset by humans’ gain. In the slavery scenario, I make stipulations regarding (conceivable, if empirically questionable) differences in WQ that factor into the amount of collective well-being possible with and without slavery.