Sunday, January 8, 2023

Moral certainties versus moral tradeoffs

An article and a commentary in PNAS raise the possibility that economists, psychologists, and moral philosophers concerned with morally contested transactions may be able to engage in more useful discussions. One obstacle is that economists mostly think in terms of tradeoffs, while many moral philosophers (or at least those who write about medical ethics) often think of morality as involving absolutes. (This is clearly illustrated in discussions of repugnant transactions, such as those involving compensation of donors of blood plasma or kidneys, for example.)

The PNAS article is   

Guzmán, Ricardo Andrés, María Teresa Barbato, Daniel Sznycer, and Leda Cosmides. "A moral trade-off system produces intuitive judgments that are rational and coherent and strike a balance between conflicting moral values." Proceedings of the National Academy of Sciences 119, no. 42 (2022): e2214005119. https://doi.org/10.1073/pnas.2214005119

"Significance: Intuitions about right and wrong clash in moral dilemmas. We report evidence that dilemmas activate a moral trade-off system: a cognitive system that is well designed for making trade-offs between conflicting moral values. When asked which option for resolving a dilemma is morally right, many people made compromise judgments, which strike a balance between conflicting moral values by partially satisfying both. Furthermore, their moral judgments satisfied a demanding standard of rational choice: the Generalized Axiom of Revealed Preferences. Deliberative reasoning cannot explain these results, nor can a tug-of-war between emotion and reason. The results are the signature of a cognitive system that weighs competing moral considerations and chooses the solution that maximizes rightness.

"Abstract: How does the mind make moral judgments when the only way to satisfy one moral value is to neglect another? Moral dilemmas posed a recurrent adaptive problem for ancestral hominins, whose cooperative social life created multiple responsibilities to others. For many dilemmas, striking a balance between two conflicting values (a compromise judgment) would have promoted fitness better than neglecting one value to fully satisfy the other (an extreme judgment). We propose that natural selection favored the evolution of a cognitive system designed for making trade-offs between conflicting moral values. Its nonconscious computations respond to dilemmas by constructing “rightness functions”: temporary representations specific to the situation at hand. A rightness function represents, in compact form, an ordering of all the solutions that the mind can conceive of (whether feasible or not) in terms of moral rightness. An optimizing algorithm selects, among the feasible solutions, one with the highest level of rightness. The moral trade-off system hypothesis makes various novel predictions: People make compromise judgments, judgments respond to incentives, judgments respect the axioms of rational choice, and judgments respond coherently to morally relevant variables (such as willingness, fairness, and reciprocity). We successfully tested these predictions using a new trolley-like dilemma. This dilemma has two original features: It admits both extreme and compromise judgments, and it allows incentives—in this case, the human cost of saving lives—to be varied systematically. No other existing model predicts the experimental results, which contradict an influential dual-process model."

Here is their first example:

"Two countries, A and B, have been at war for years (you are not a citizen of either country). The war was initiated by the rulers of B, against the will of the civilian population. Recently, the military equilibrium has broken, and it is certain that A will win. The question is how, when, and at what cost.

"Country A has two strategies available: attacking the opposing army with conventional weapons and bombing the civilian population. They could use one, the other, or a combination of both. Bombing would demoralize country B: The more civilians are killed, the sooner B will surrender, and the fewer soldiers will die—about half from both sides, all forcibly drafted. Conventional fighting will minimize civilian casualties but maximize lives lost (all soldiers).

"More precisely: If country A chooses not to bomb country B, then 6 million soldiers will die, but almost no civilians. If 4 million civilians are sacrificed in the bombings, B will surrender immediately, and almost no soldiers will die. And, if A chooses an intermediate solution, for every four civilians sacrificed, approximately six fewer soldiers will die.

"How should country A end the war? What do you feel is morally right?"

**********

Here is the follow-up commentary:

Lieberman, Debra, and Steven Shenouda. "The superior explanatory power of models that admit trade-offs in moral judgment and decision-making." Proceedings of the National Academy of Sciences 119, no. 51 (2022): e2216447119. https://doi.org/10.1073/pnas.2216447119

"We make “moral” decisions each day (should I stay and help my graduate student with her thesis thereby delaying dinner for my children? And if I do stay, how long is acceptable until the trade-off tips in favor of my children—30 min? An hour? Longer?). There are costs associated with every act, and part of the human condition is that we seek to balance our duties to everyone in our social network.

"Moral judgments, as the above example illustrates, lead to intermediate, compromise solutions. For this reason, the value of moral dilemmas like the trolley problem that yield only binary outcomes is limited to the superficial exploration of normative theories within philosophy—not the underlying mental software driving moral cognition

...

"As a philosophical tool, the trolley problem playfully probes certain (limited) contours of moral decision-making. But, as a methodology imported from philosophy into cognitive science to illuminate moral cognition, the translation is impoverished because it yields only binary, extreme solutions and prevents moral trade-offs or compromise judgments. "
