There is a tendency in philosophy to think in linear, one-dimensional terms that I believe is mistaken. I am probably not breaking new ground here, and I can imagine my arguments have strong defeaters. However, this is my thinking.
Consider that there is a way in which one could conceive of a number of small painful things adding up to one very painful thing. Commonly in the literature this is done by thinking of things like pinches or headaches.
Obviously, a single, painful pinch is not equivalent to a death. But neither, I think, is 1,000 or even 1 million painful pinches. This looks to me like a fallacy of composition: the comparison is false because the two harms exist in different realms.
To the layman it is generally obvious that X number of [very small discomfort] compared to something very terrible (e.g., death) is a silly comparison; only a philosopher would be so confused. To be fair, the philosophical process here is sound, and the layman should broaden his mind to consider such contemplation. Still, the layman's intuition is on to something.
I think this has implications for animal welfare and other difficult thought experiments and moral questions. In fact, I think this may be the road to follow for finding the fallacy in the repugnant conclusion.
Basically, I am arguing for the intransitivity objection to the repugnant conclusion and related problems/paradoxes. It can be a category error to think that enough of a small thing eventually becomes equivalent to a big thing.
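The contrast between the linear view and the view I am arguing for can be sketched in a few lines of code. This is purely illustrative, not a serious moral model: the badness values, the saturation cap, and the exponential form are all arbitrary assumptions chosen to show how a bounded aggregation rule keeps many small harms in a different "realm" than one great harm, while a linear rule does not.

```python
import math

# All constants here are invented for illustration only.
PINCH_BADNESS = 1.0
DEATH_BADNESS = 1_000_000.0
SATURATION_CAP = 1_000.0  # pinches can never aggregate past this level

def linear_badness(n_pinches):
    """Naive one-dimensional view: small harms add without limit."""
    return n_pinches * PINCH_BADNESS

def bounded_badness(n_pinches):
    """Bounded view: repeated small harms asymptote below a cap,
    so they never reach the category of a catastrophic harm."""
    return SATURATION_CAP * (1 - math.exp(-n_pinches * PINCH_BADNESS / SATURATION_CAP))

# Under linear aggregation, enough pinches eventually "equal" a death.
assert linear_badness(2_000_000) > DEATH_BADNESS

# Under bounded aggregation, no number of pinches ever does.
assert bounded_badness(10**9) < DEATH_BADNESS
```

The point of the sketch is only that the layman's intuition corresponds to a perfectly coherent formal structure: aggregation need not be linear, and once it is bounded, no quantity of the small thing crosses over into the big thing.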
Along with this, we have to consider what is within reasonable affordability.
In the same stylized graphs I have now imposed some arbitrary budget constraints for illustrative purposes. At any given point in time/history, we can only afford so much of anything.
There are always tradeoffs. We certainly want more/less of that which we desire/desire to avoid, but we are subject to reality, which imposes limits.
These thresholds mean that achieving a better possible outcome or combination requires growth in capabilities (aka wealth). For example, in the graphs, at a certain point in time the only achievable outcomes are within the light-gray zone; at a later point, we can achieve outcomes within the darker-gray zone.
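The budget-constraint idea in the graphs can be made concrete with a toy example. Everything here is hypothetical: the outcome names and their costs are invented, and "wealth" is a single number standing in for a society's overall capabilities.

```python
# Hypothetical outcomes with invented costs, standing in for the
# zones in the stylized graphs. Higher-cost outcomes only become
# achievable as wealth (capability) grows.
outcomes = {
    "basic subsistence": 10,
    "humane animal practices": 50,
    "humane practices + broad leisure": 200,
}

def achievable(budget):
    """Return the set of outcomes affordable at a given wealth level."""
    return {name for name, cost in outcomes.items() if cost <= budget}

# The light-gray zone: an earlier, poorer era can afford only subsistence.
assert achievable(30) == {"basic subsistence"}

# The darker-gray zone: growth expands the feasible set.
assert achievable(100) == {"basic subsistence", "humane animal practices"}
```

The design point is simply that the feasible set grows monotonically with the budget: nothing achievable at a lower wealth level becomes unachievable at a higher one, which is what the nested gray zones in the graphs depict.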
Obviously, this has some dangerous potential misinterpretations. We walk along cliffs' edges from which we could easily fall into morally incorrect conclusions. Yet this is a tension between moral relativism and moral absolutism that needs to be reconciled.
Today we live in a world that can afford certain luxuries and considerations that were beyond our grasp in eras past. We also have more knowledge than was available before with which to understand the implications of our actions. Along these lines, knowing how animals experience pain and discomfort, and how sentient they are, may give us new moral standards to adhere to.1
Similarly, now that we can afford better practices from the position of the animals' well-being, we have likely created a world where previous practices are no longer morally tolerable. This reasoning is not limited to animal welfare. And I cannot stress enough the need for moral absolutes in some respect or another, lest someone mistakenly think this means that, at some point in human societal evolution and economic progress, inflicting pure suffering on animals, human slavery, etc. were ever morally tolerable.
Applying this line of argument in a way that justifies past actions of that sort would be a category error in reverse. We would be arguing for tolerating the violation of a moral absolute on the grounds that absolute rules fail when they conflict with opposing desires. To wit: the slaveholder would suffer if he stopped being a slaveholder; therefore, slavery in that case would be justified. This is obviously wrong.
The ultimate determination to make is in which cases there is a moral imperative or absolute, and in which cases it is truly, to some degree, relative. I would argue that examples of animal cruelty tend strongly to fall within the realm of moral absolutes, while the eating of animals, et al. tends typically to fall within the realm of moral relativity.
On this and countless other considerations, one need not agree with my categorization to still agree that the concept of moral absolutes is of a different nature than that of moral relatives. Further, my contention remains that we too often fall into the trap of thinking one-dimensionally in these thought experiments, a trap that has problems in both directions.
Imagine how this potentially shifts dramatically once we can "talk" with, or at least better understand, animals via AI. This seems to be coming very soon.
I've pondered/analyzed this very same thing. Humans continue to fail in thinking multidimensionally.