Daniel Story

Trophy Hunting Is Wrong Only If Hunting for Meat Is Wrong

7/15/2019

 
[Image: (Top Left) Ernest Hemingway with Lion; (Top Right) Teddy Roosevelt with Elephant; (Bottom Left) American Dentist Walter Palmer with Lion; (Bottom Right) American Accountant Sabrina Corgatelli with Impala]
On July 2nd, 2015, American dentist Walter Palmer (legally) killed a lion named Cecil, a favorite of visitors to Hwange National Park in Zimbabwe. The news of Cecil’s death and several unsavory pictures of Palmer went viral, prompting a vicious backlash against Palmer and an international discussion about the morality of trophy hunting. People all over the world condemned the practice, and many became convinced that trophy hunting is immoral.

My topic in this post is the morality of trophy hunting. Instead of denouncing or defending the practice, I argue that a distinction often drawn by opponents of the practice cannot be maintained. Some people find trophy hunting inherently reprehensible yet believe that hunting for meat is not. In my view, there is no inherent morally significant difference between trophy hunting and hunting for meat. So, trophy hunting ought to be universally condemned only if meat hunting ought to be universally condemned.

First, let’s get clear on some terms and the scope of my claims. By ‘trophy hunting,’ I mean hunting (or fishing) purely for sport, trophies, or prestige, without the intention of keeping some of the meat for consumption. By ‘meat hunting,’ I mean hunting (or fishing) with the intention of keeping some meat for consumption.[1] Importantly, I limit my discussion to hunting as it is practiced by middle-class and upper-class Westerners who do not need to hunt to sustain themselves.

Now, to the argument.

Most people believe that animals matter, morally speaking. Although people disagree about how much and in what ways animals matter, there are zones of clear consensus. For instance, almost everyone would agree that it would be wrong to vivisect a stray dog in order to amuse guests at a cocktail party, mainly because the great harm done to the dog would not be outweighed by other sufficiently important moral considerations. Likewise, almost everyone would agree that hunting is permissible only if the harm or setback to the hunted animal is outweighed by other morally important considerations. If hunting were, in general, perfectly analogous to frivolous vivisection, it would be universally condemned.

As it stands, hunting is not perfectly analogous to frivolous vivisection. While both activities involve animal suffering and death, the former but not the latter is associated with morally important goods. For one, hunting can have beneficial environmental and social effects. Hunting can be used to control invasive species, raise money for conservation, and so forth. Then there are the benefits to the hunter. I’m told hunting can be deeply pleasurable. It can be exhilarating, relaxing, challenging, satisfying, even transcendent. Consider the philosopher José Ortega y Gasset’s description of the experience:

When one is hunting, the air has another, more exquisite feel as it glides over the skin or enters the lungs, the rocks acquire a more expressive physiognomy, and the vegetation becomes loaded with meaning. But all this is due to the fact that the hunter, while he advances or waits crouching, feels tied through the earth to the animal he pursues, whether the animal is in view, hidden, or absent.[2]

Unlike the experience of a few drunken partygoers swilling Negronis while gawking in morbid fascination at the innards of a dying dog, the experience Ortega y Gasset describes seems significantly valuable and worth promoting. Apart from the experience of hunting, the projects, skills, activities, and communities connected with the practice are part of what makes life meaningful and interesting to many hunters. And finally, there are the spoils. Trophy hunters obtain war stories, heads, antlers—that sort of thing. Meat hunters obtain meat. Hunters desire these spoils and are pleased when they obtain them, and since we have moral reason to care about whether a person is pleased and gets what they want, the spoils are morally important too.

Now you might think that the goods associated with hunting can never outweigh its morally objectionable features. If so, then you already believe that there is no morally important distinction between trophy and meat hunting, since both are always wrong. Most people, however, believe that hunting is permissible so long as it yields some combination of the goods just enumerated. In other words, the overall benefits of the practice can outweigh the harm to the hunted animal. For instance, you might think that deer hunting is permissible so long as the practice benefits the ecosystem and the hunter eats the meat.

I believe that consequentialist ideas of this sort are what usually lead people to conclude that there is some inherent moral difference between meat hunting and trophy hunting. Somehow, the fact that someone consumes parts of the hunted animal is supposed to justify the harm done to the animal in a way that nothing else, except perhaps direct environmental or social benefits, can.

The problem with this line of reasoning is that the value gained by eating hunted meat is not relevantly different from the value associated with the hunting experience itself or with the procurement of trophies. Eating hunted meat may be especially pleasurable, but it does not provide a well-off Westerner with any more sustenance than could be obtained by eating beans and a B12 supplement. Thus, when trying to determine if the suffering and death of a hunted animal is compensated for by the good that comes of it, we shouldn’t count the fact that the hunter will obtain sustenance by hunting, since the hunter will have sustenance either way. All the value gained by eating a hunted animal as opposed to letting the animal be and eating beans comes from the special pleasure obtained by eating the hunted animal.[3]
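
To put the point schematically (the symbols here are just a gloss to fix ideas, not anything essential to the argument): let S be the sustenance value of a dinner, let P_hunt and P_beans be the pleasures of eating hunted meat and of eating beans respectively, and let H be the harm done to the hunted animal. Comparing the two options,

\[ V_{\text{hunt}} = S + P_{\text{hunt}} - H, \qquad V_{\text{beans}} = S + P_{\text{beans}}, \]

\[ V_{\text{hunt}} - V_{\text{beans}} = \left(P_{\text{hunt}} - P_{\text{beans}}\right) - H. \]

The sustenance term S cancels out of the comparison, so it can do no justificatory work. Only the special pleasure of eating hunted meat (together with any environmental or social benefits, which I have set aside here) is available to outweigh H.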

And here’s the thing. In principle, a trophy hunter can get the same amount of pleasure out of admiring a stuffed lion’s head or telling a great story as the meat hunter can get from eating hunted meat. In fact, the trophy hunter’s pleasure is likely to be longer lasting, since trophies, unlike meat, needn’t be consumed to be enjoyed. So, if trophy hunting is universally morally problematic because the suffering and death of the animal can never be outweighed by the benefits of the practice, then meat hunting is universally problematic, too, since both produce basically the same types of benefits. There is simply no inherent morally important difference between meat hunting and trophy hunting.

Let me consider two objections.

An objector might point out that trophy hunting is more likely than meat hunting to have negative environmental and social effects. If so, then trophy hunters need to be more careful in selecting their targets than meat hunters. But at most this is a contingent feature of trophy hunting and doesn’t tell us anything about the nature of the practice itself.

An objector might argue that eating a hunted animal’s meat is the only way to properly respect its dignity. But I find this hard to accept. First, it’s all the same to the dead animal; unlike humans, animals do not have wishes or customs concerning the handling of their corpses. Second, a carcass left in the field by a hunter undergoes the same fate as a carcass of an animal that died naturally. How, then, can this fate constitute an indignity?

My argument, if successful, shows that from a moral perspective there is nothing special about trophy hunting. When an incident like the one involving Palmer and Cecil next captures the world’s attention, I think it would be a mistake for us to focus on the trophy hunting aspect. The relevant questions concern the morality of hunting the type of animal killed and of hunting (by well-to-do Westerners) generally.
 
[1] Notice that according to these definitions someone who is hunting both for meat and for trophies counts as a meat hunter, not a trophy hunter. I am interested in recreational hunting, so I ignore cases where the hunter hunts primarily in order to produce some environmental or social benefit (e.g. killing a rabid bear that threatens a populace). But I leave open the question of whether hunting is permissible only if it produces environmental or social benefits. Since both trophy and meat hunting can, in principle, produce such benefits, it is not necessary for me to settle this question here.
[2] José Ortega y Gasset, Meditations on Hunting, trans. Howard B. Wescott, Wilderness Adventures Press, 1995, p. 131.
[3] An analogy might make this point clearer. Suppose you are trying to decide between eating dinner at two equally healthy but differently priced restaurants. The fact that you will eat something healthy if you go to the more expensive restaurant cannot play a part in justifying the extra money you would spend going there, because you will eat something healthy in either case. Spending the extra money is worth it only if the more expensive restaurant will provide you with a sufficiently more pleasurable gustatory experience.

Life and Death without the Present

4/1/2019

 
‘It was’—that is the name of the will’s gnashing of teeth and most secret melancholy.
 –Friedrich Nietzsche[1]
[Image: "Vanitas with Violin and Glass Ball" by Pieter Claesz (c. 1628)]

Contemporary philosophers who study time disagree about its fundamental nature. Philosophical debate about this topic has in the last century been dominated by two competing views: the A-theory and the B-theory of time.[2]

According to the A-theory, times (e.g. the year 1908, the day you were born) objectively have tense properties like presentness, pastness, and futurity (these are called ‘A-properties’).[3] For instance, 1908 objectively has the property of being in the past and the moment at which the sun dies objectively has the property of being in the future.

In contrast, B-theorists do not think that A-properties objectively apply to times. Rather, times only have A-properties relative to perspectives. For instance, 1908 is present relative to people in 1908, although of course it is not present relative to our perspective. The most we can objectively say, according to B-theory, is that some times are earlier than, later than, or simultaneous with others (these relations are called ‘B-relations’). For instance, 1908 objectively has the property of being earlier than 2019. This is not a perspectival fact; it was just as true for people living in 1908 as it is for us today.

We are creatures that are oriented towards the future and away from the past. Yet B-theorists argue that we should not project this idiosyncratic feature of human experience onto the objective world. The most powerful consideration against A-theory is that it looks to be inconsistent with physics. According to special relativity, there is no such thing as absolute simultaneity, which means that for any two spatially separated events, there’s no perspective-independent answer as to whether those events occurred at the same time. It follows that there is no perspective-independent present and that A-theory is false. B-theory, however, is consistent with the relativity of simultaneity. For this reason, many (including me) believe that B-theory is the true theory of time. You probably should too.
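
To make the relativity point concrete, here is the standard textbook illustration (an illustration only; the argument does not hang on the particular formula). Under the Lorentz transformation, an event’s time coordinate depends on the reference frame:

\[ t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}. \]

Two spatially separated events that are simultaneous in one frame (\( \Delta t = 0 \), \( \Delta x \neq 0 \)) are separated by \( \Delta t' = -\gamma v \Delta x / c^2 \neq 0 \) in a frame moving at velocity \( v \). Since no frame is privileged, there is no frame-independent fact about which distant events share the present.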

As innocuous as this might sound, B-theory is actually quite weird. It forces us to rethink many of our beliefs about ourselves. For instance, it is natural to think that we are wholly present at each moment that we exist. But because they hold that there is no objective present moment for someone to be wholly located at, B-theorists are compelled to say that persons (and other ordinary objects) are stretched through time in much the same way that roads are stretched through space.[4] You are a kind of spacetime worm that has both spatial parts and temporal parts (e.g. a part that is turning five, a part that is experiencing your first kiss). All your temporal parts are just as real from a tenseless, objective perspective as the one that is currently reading this blog. It’s just that they aren’t located here and now.

This idea is counterintuitive. Arthur Prior highlighted one aspect of its counterintuitiveness in an argument against B-theory.[5] It goes something like this. Think of the most acutely painful experience you have ever had. When that experience ended, you probably felt a great deal of relief. It would have been reasonable for you to think, “Thank goodness that’s over!” Yet such a response is reasonable only if A-theory is true. For while it is reasonable to be relieved if the pain is objectively in the past, it’s not clear why you should be relieved by the mere fact that the pain occurred earlier than your thought, which is all it means to say that the pain is “over” according to B-theorists. Plus, if B-theory is correct, then your earlier temporal part is stuck with that pain, since it is tenselessly experiencing it. Rather than relief, it seems that you should feel horror and pity for your earlier temporal part.[6]

Prior’s argument brings out in an especially forceful way how our thinking about our own lives occurs in an A-theoretic framework. Part of what it is to be a person with plans, projects, hopes, desires, regrets, reliefs, and so on is to think of oneself as moving towards the future and away from the past. For this reason, it is probably impossible to fully integrate B-theoretic thinking into one’s practical outlook.

Still, it seems to me that accepting B-theory should affect how one thinks about what it means to be a mortal with a finite amount of time left on this earth. B-theory entails that the passage of time is an illusion. Recognizing this casts in a softer, more equivocal light the feeling that one’s time is growing short, that one is, with each passing second, careening towards a moment wherein one will suddenly cease to be part of the furniture of the world. True, your current temporal part has considerably fewer temporal parts ahead of it than your five-year-old temporal part does. True, you will not exist at any times later than your death. But it is unclear why this should provoke any special regret or dread. For it’s not as if when you die, you suddenly become a fixture of the objective past while time and the present flow on without you. There is a very real sense in which your entire life—a short, vibrant streak across spacetime—constitutes an utterly indelible mark on the world. Your birth and death merely convey information about where in the universe you are located. It is sometimes intelligible to regret that you are in one place and not another (e.g. you might be sad you missed your cousin’s wedding). But none of this is worth getting too worked up about.
 
This outlook is not entirely stable for creatures like us, of course, and adopting it wholeheartedly, even if that were possible, would obscure a great deal of what is important in human life, like the bads of deprivation and decay and the goods of gain and progress. But I’ve found that in reflective moments I can adopt this mindset for a short time. And I’ve found that the results percolate into my less reflective moments. The thought of death and grief stings just a tiny bit less. And it seems to me that my choices somehow become imbued with a greater significance and permanence. For my actions and experiences do not pass away. They are not slowly enveloped by the fog of the past. They are, for those parts of me that live them, in a sense eternally recurrent. And this fact has the smell of something that matters.


[1] Thus Spoke Zarathustra, trans. Walter Kaufmann, Random House, p. 139 (1995).
[2] This distinction was introduced by J. M. E. McTaggart in “The Unreality of Time” Mind, vol. 17, no. 68, pp. 457-474 (1908).
[3] Not all A-theorists believe that the future and the past exist. For simplicity of explication I am going to restrict my discussion to a version of A-theory according to which the past and the future do exist (this version of A-theory is often called ‘the moving spotlight theory’).
[4] Strictly speaking, this view (called ‘four-dimensionalism’) is not entailed by B-theory. But a B-theorist can deny it only by denying that ordinary objects persist through time.
[5] Prior, Arthur. “Thank Goodness That’s Over.” Philosophy, vol. 34, iss. 128, pp. 12-17 (1959)
[6] There is another side to Prior’s coin. If B-theory is correct, then every pleasurable experience you have ever had and will ever have is tenselessly being experienced by one of your temporal parts. Yet it might not make sense for you to look forward to those experiences, since the temporal part that is looking forward to those experiences will never experience them.

This New Year’s, Don’t Make Resolutions. Make Promises.

1/5/2019

 
Note: I wrote this piece several years ago and have shared it with a few friends since then. I'm posting it here after the 1st of January, which of course temporarily decreases its relevance for readers, but I hope it will be of some interest nonetheless.

If you’re anything like me, you associate New Year’s with ambitious resolutions to do things like drink less, eat healthier, and exercise every day. And if you’re anything like me, the majority of these resolutions are soon abandoned, casualties of indolence, overwork, or forgetfulness. Yet many of us who are bad at sticking to resolutions keep making them, year after year. Why?

New Year’s affords us a valuable opportunity to reflect on our lives and commit to improving them. We could do this at any time, of course, but the new year imparts to our commitments a communal significance that makes them feel more substantial. We make resolutions because we want to utilize this opportunity. This is surely a worthy motive, but I’m convinced our method is mistaken. We shouldn’t make resolutions. Instead, we should make promises. We’re more likely to keep them.

To see why, note that there are important structural differences between the type of commitment involved in a resolution and the type involved in a promise, which make it more likely that promises will be kept. Commitments, in general, are made to someone. Resolutions are commitments made to oneself, but promises are commitments made to someone else. This difference has important consequences for how others can hold you accountable for your commitments.

If you make a commitment to yourself, no one else is entitled to demand of you that you keep it. This might not deter a busybody from doing so, but the point is that in making a resolution, you are really only accountable to yourself. On the other hand, if you make a promise to someone, you become accountable to them. They are entitled to demand of you that you keep your promise because you owe it to them to do so. The very fact that a promise, as opposed to a resolution, is not a wholly private affair may provide you with extra motivation to stick with it, but even if it doesn’t, the demands of a solicitous promisee probably will.

My contention here shouldn’t be overstated. The details of why any particular New Year’s commitment ends in failure will depend on the idiosyncrasies of both the person making the commitment and their situation, and promises are no panacea. But the interpersonal nature of promises affords them extra significance relative to self-directed resolutions, and it’s commonsensical to think that this extra significance is likely to be advantageous. Indeed, studies have shown that interpersonal support is a predictor of success in the long term.[1]

To whom should you make your promise? A promise to a stranger is likely to be worse than a resolution. For your promise to be effective as a mechanism for change, it’s important that you select someone who cares about you and sees you often enough to monitor your progress. In my opinion, you should try to find someone who is willing to exchange New Year’s promises with you, because this introduces an element of reliance and camaraderie that can strengthen your resolve, just like having a gym buddy.

At their best, New Year’s resolutions are steps toward self-actualization that reflect the best life we can imagine for ourselves. But no matter how strong one’s resolve, resolutions involve a relatively cheap form of commitment and, as a result, have relatively tepid motivational power. One only risks letting oneself down. The ancient Romans rang in the new year, not with resolutions, but with solemn promises to Janus, the god of beginnings and endings. This year, if you’re serious about committing to change, you too should make a promise.

Here’s to endings and, Janus willing, new beginnings.

[1] Norcross, J. C., and D. J. Vangarelli. “The Resolution Solution: Longitudinal Examination of New Year’s Change Attempts.” Journal of Substance Abuse, vol. 1, pp. 127-134 (1989).

