My spouse and I are at an age when it’s not usually much of a surprise when one of our friends calls us to exclaim, “We’re having a baby!” We know exactly what to say. We offer enthusiastic congratulations, gab about baby names, and so on. Probably you’ve heard an exclamation like this too; probably you responded in roughly the same way.
As commonplace as this type of declaration is, there’s something interesting about it. On the face of it, having a baby is something that exactly one person—the person from whom the baby emerges—does. Yet much of the time “We’re having a baby!” and related statements are not taken to be infelicitous. So what gives?
I think ‘having a baby’ is ambiguous. Sometimes that expression refers to one type of event, a biological process, and other times to another, an intentional action. This ambiguity explains why “We’re having a baby!” sometimes sounds weird but other times sounds perfectly natural.
On the one hand, ‘having a baby’ sometimes denotes the biological process (or family of processes) of childbirth, whereby a baby leaves a person’s uterus and emerges alive into the world. It might be more accurate to say that childbirth is something that happens to a person rather than something that someone does. In any case, childbirth is a process of which exactly one person can be a subject. When ‘having a baby’ denotes childbirth, it doesn’t make sense to say “We’re having a baby!”.
On the other hand, ‘having a baby’ sometimes denotes something broader than childbirth. It denotes something like the process of responsibly bringing a baby into the world. As everyone knows, responsibly bringing a baby into the world requires a significant amount of work. The details of what this looks like vary across time and space. In our society, money must be saved; information must be gathered; time must be set aside; doctors must be seen and paid for. Childbirth is normally part of this process, but the process is not mainly a biological one. Rather, it is a complex and protracted intentional action, something people do rather than something that happens to them.
When ‘having a baby’ denotes an intentional action, it makes perfect sense to say, “We’re having a baby!”. After all, most actions can be performed jointly by several individuals, and participants in a joint action can perfectly well use the first-person plural pronoun in describing what they are doing (e.g. “We’re painting a house.”). The action of having a baby is no different. Two or more individuals who harmonize their plans and activities so that a baby can be responsibly brought into the world are jointly having a baby. And those individuals can legitimately say, “We’re having a baby!”.
So we now have an explanation as to why “We’re having a baby!” sometimes sounds weird but usually doesn’t: the expression is ambiguous, sometimes denoting a biological process of which exactly one person can be a subject and sometimes denoting an intentional action of which multiple people can be subjects. Occasionally people exploit this ambiguity in witty ways (“We’re having a baby? I don’t see you doing any pushing!”). But most of the time, we navigate the expression’s ambiguity so effortlessly that we hardly think about it.
A further upshot of this discussion is that we now have available a charitable interpretation of similar expressions that on the surface look rather patriarchal and problematic. Sometimes you hear the partners of people who are giving birth say things like “Our cervix is dilating!” and “Our contractions are five minutes apart.”. While these statements might be used by a speaker to assert ownership over another person’s body, in contexts involving joint action, possessive plural pronouns are frequently used in a way that does not imply ownership.
For instance, suppose that a group of students is putting on a school play. Someone asks the dramaturge how the first dress rehearsal went, and the dramaturge responds dejectedly by saying, “Well, our costumes were great, and our lead was inspired, but everything else was a mess.” Responding to this statement by pointing out that neither the speaker nor the group owns the costumes or the lead would be infelicitous, because the dramaturge’s uses of the possessive merely indicate that the target objects were being used by the group or occupied an integral role in the group’s activities.
Similarly, when the partner of a person giving birth says something like “Our cervix is dilating!”, the partner may be using the plural possessive simply to indicate that the cervix in question is playing an integral role in the joint action of having a baby in which the partner is participating. The cervix normally does play an integral part in the joint action of having a baby (childbirth is normally a part of having a baby, and the dilation of the cervix is normally a part of childbirth), so this use is perhaps only natural.
 To simplify things, I assume that the baby is not, at the time of birth, a person. It’s true that medical professionals often assist and monitor childbirth. But it doesn’t follow that those medical professionals are subjects of the childbirth or that the person pushing out the baby and the medical professionals are together birthing a child. Analogously, my spouse sometimes rubs our pet rabbit’s tummy in order to help the rabbit digest hay. But it doesn’t follow that the rabbit and my spouse are jointly digesting hay.
 Actually, there is a way of reading this according to which it could make sense for someone to say it. To see this, consider the statement “We lifted a table.”. On a collective reading of this statement, the people referred to by the 'we' jointly lifted a table. But on a distributive reading of this statement, the people referred to by the 'we' separately lifted a table (and perhaps they lifted different tables). Similarly, “We’re having a baby!” could mean that we are jointly having a baby, or it could mean that each of us is separately having a baby. My claim is that this statement is infelicitous when read in the collective way. All the statements I’m interested in here should be read in the collective way.
On July 2nd, 2015, American dentist Walter Palmer (legally) killed a lion named Cecil, a favorite of visitors to Hwange National Park in Zimbabwe. The news of Cecil’s death and several unsavory pictures of Palmer went viral, prompting a vicious backlash against Palmer and an international discussion about the morality of trophy hunting. People all over the world condemned the practice, and many people became convinced that trophy hunting is immoral.
My topic in this post is the morality of trophy hunting. Instead of denouncing or defending the practice, I argue that a distinction often drawn by opponents of the practice cannot be maintained. Some people find trophy hunting inherently reprehensible yet believe that hunting for meat is not. In my view, there is no inherent morally significant difference between trophy hunting and hunting for meat. So, trophy hunting ought to be universally condemned only if meat hunting ought to be universally condemned.
First, let’s get clear on some terms and the scope of my claims. By ‘trophy hunting,’ I mean hunting (or fishing) purely for sport, trophies, or prestige, without the intention of keeping some of the meat for consumption. By ‘meat hunting,’ I mean hunting (or fishing) with the intention of keeping some meat for consumption. Importantly, I limit my discussion to hunting as it is practiced by middle-class and upper-class westerners who do not need to hunt to sustain themselves.
Now, to the argument.
Most people believe that animals matter, morally speaking. Although people disagree about how much and in what ways animals matter, there are zones of clear consensus. For instance, almost everyone would agree that it would be wrong to vivisect a stray dog in order to amuse guests at a cocktail party, mainly because the great harm that would be done to the dog by such an action would not be outweighed by other sufficiently important moral considerations. Likewise, almost everyone would agree that hunting is permissible only if the harm or setback to the hunted animal is outweighed by other morally important considerations. If hunting were, in general, perfectly analogous to frivolous vivisection, it would be universally condemned.
As it stands, hunting is not perfectly analogous to frivolous vivisection. While both activities involve animal suffering and death, the former but not the latter is associated with morally important goods. For one, hunting can have beneficial environmental and social effects. Hunting can be used to control invasive species, raise money for conservation, and so forth. Then there are the benefits to the hunter. I’m told hunting can be deeply pleasurable. It can be exhilarating, relaxing, challenging, satisfying, even transcendent. Consider the philosopher José Ortega y Gasset’s description of the experience:
When one is hunting, the air has another, more exquisite feel as it glides over the skin or enters the lungs, the rocks acquire a more expressive physiognomy, and the vegetation becomes loaded with meaning. But all this is due to the fact that the hunter, while he advances or waits crouching, feels tied through the earth to the animal he pursues, whether the animal is in view, hidden, or absent.
Unlike the experience of a few drunken partygoers swilling Negronis while gawking in morbid fascination at the innards of a dying dog, the experience Ortega y Gasset describes seems significantly valuable and worth promoting. Apart from the experience of hunting, the projects, skills, activities, and communities connected with the practice are part of what makes life meaningful and interesting to many hunters. And finally, there are the spoils. Trophy hunters obtain war stories, heads, antlers—that sort of thing. Meat hunters obtain meat. Hunters desire these spoils and are pleased when they obtain them, and since we have moral reason to care about whether a person is pleased and gets what they want, the spoils are morally important too.
Now you might think that the goods associated with hunting can never outweigh its morally objectionable features. If so, then you already believe that there is no morally important distinction between trophy and meat hunting, since both are always wrong. Most people, however, believe that hunting is permissible so long as it yields some combination of the goods just enumerated. In other words, the overall benefits of the practice can outweigh the harm to the hunted animal. For instance, you might think that deer hunting is permissible so long as the practice benefits the ecosystem and the hunter eats the meat.
I believe that consequentialist ideas of this sort are what usually lead people to conclude that there is some inherent moral difference between meat hunting and trophy hunting. Somehow, the fact that someone consumes parts of the hunted animal is supposed to justify the harm done to the animal in a way that nothing else, except perhaps direct environmental or social benefits, can.
The problem with this line of reasoning is that the value gained by eating hunted meat is not relevantly different from the value associated with the hunting experience itself or with the procurement of trophies. Eating hunted meat may be especially pleasurable, but it does not provide a well-off westerner with any more sustenance than could be obtained by eating beans and a B12 supplement. Thus, when trying to determine if the suffering and death of a hunted animal is compensated for by the good that comes of it, we shouldn’t count the fact that the hunter will obtain sustenance by hunting, since the hunter will have sustenance either way. All the value gained by eating a hunted animal as opposed to letting the animal be and eating beans comes from the special pleasure obtained by eating the hunted animal.
And here’s the thing. In principle, a trophy hunter can get the same amount of pleasure out of admiring a stuffed lion’s head or telling a great story as the meat hunter can get from eating hunted meat. In fact, the trophy hunter’s pleasure is likely to be longer lasting, since trophies, unlike meat, needn’t be consumed to be enjoyed. So, if trophy hunting is universally morally problematic because the suffering and death of the animal can never be outweighed by the benefits of the practice, then meat hunting is universally problematic, too, since both produce basically the same types of benefits. There is simply no inherent morally important difference between meat hunting and trophy hunting.
Let me consider two objections.
An objector might point out that trophy hunting is more likely than meat hunting to have negative environmental and social effects. If so, then trophy hunters need to be more careful in selecting their targets than meat hunters. But at most this is a contingent feature of trophy hunting and doesn’t tell us anything about the nature of the practice itself.
An objector might argue that eating a hunted animal’s meat is the only way to properly respect its dignity. But I find this hard to accept. First, it’s all the same to the dead animal; unlike humans, animals do not have wishes or customs concerning the handling of their corpses. Second, a carcass left in the field by a hunter undergoes the same fate as a carcass of an animal that died naturally. How, then, can this fate constitute an indignity?
My argument, if successful, shows that from a moral perspective there is nothing special about trophy hunting. When an incident like the one involving Palmer and Cecil next captures the world’s attention, I think it would be a mistake for us to focus on the trophy hunting aspect. The relevant questions concern the morality of hunting the type of animal killed and of hunting (by well-to-do westerners) generally.
 Notice that according to these definitions someone who is hunting both for meat and for trophies counts as a meat hunter, not a trophy hunter. I am interested in recreational hunting, so I ignore cases where the hunter hunts primarily in order to produce some environmental or social benefit (e.g. killing a rabid bear that threatens a populace). But I leave it open as to whether hunting is permissible only if it produces environmental or social benefits. Since both trophy and meat hunting can, in principle, produce such benefits, it is not necessary for me to settle this question here.
 Trans. by Howard B. Wescott. Meditations on Hunting, Wilderness Adventures Press, Inc., 1995, p. 131.
 An analogy might make this point clearer. Suppose you are trying to decide between eating dinner at two equally healthy but differently priced restaurants. The fact that you will eat something healthy if you go to the more expensive restaurant cannot play a part in justifying the extra money you would spend going there, because you will eat something healthy in either case. Spending the extra money is worth it only if the more expensive restaurant will provide you with a sufficiently more pleasurable gustatory experience.
‘It was’—that is the name of the will’s gnashing of teeth and most secret melancholy.
Contemporary philosophers who study time disagree about its fundamental nature. Philosophical debate about this topic has in the last century been dominated by two competing views. These views are the A-theory and the B-theory of time.
According to the A-theory, times (e.g. the year 1908, the day you were born) objectively have tense properties like presentness, pastness, and futurity (these are called ‘A-properties’). For instance, 1908 objectively has the property of being in the past and the moment at which the sun dies objectively has the property of being in the future.
In contrast, B-theorists do not think that A-properties objectively apply to times. Rather, times only have A-properties relative to perspectives. For instance, 1908 is present relative to people in 1908, although of course it is not present relative to our perspective. The most we can objectively say, according to B-theory, is that some times are earlier than, later than, or simultaneous with others (these relations are called ‘B-relations’). For instance, 1908 objectively has the property of being earlier than 2019. This is not a perspectival fact; it was just as true for people living in 1908 as it is for us today.
We are creatures that are oriented towards the future and away from the past. Yet B-theorists argue that we should not project this idiosyncratic feature of human experience onto the objective world. The most powerful consideration against A-theory is that it looks to be inconsistent with physics. According to special relativity, there is no such thing as absolute simultaneity, which means that for any two spatially separated events, there’s no perspective-independent answer as to whether those events occurred at the same time. It follows that there is no perspective-independent present and that A-theory is false. B-theory, however, is consistent with the relativity of simultaneity. For this reason, many (including me) believe that B-theory is the true theory of time. You probably should too.
As innocuous as this might sound, B-theory is actually quite weird. It forces us to rethink many of our beliefs about ourselves. For instance, it is natural to think that we are wholly present at each moment that we exist. But because they hold that there is no objective present moment for someone to be wholly located at, B-theorists are compelled to say that persons (and other ordinary objects) are stretched through time in much the same way that roads are stretched through space. You are a kind of spacetime worm that has both spatial parts and temporal parts (e.g. a part that is turning five, a part that is experiencing your first kiss). All your temporal parts are just as real from a tenseless, objective perspective as the one that is currently reading this blog. It’s just that they aren’t located here and now.
This idea is counterintuitive. Arthur Prior highlighted one aspect of its counterintuitiveness in an argument against B-theory. It goes something like this. Think of the most acutely painful experience you have ever had. When that experience ended, you probably felt a great deal of relief. It would have been reasonable for you to think, “Thank goodness that’s over!” Yet such a response is reasonable only if A-theory is true. For while it is reasonable to be relieved if the pain is objectively in the past, it’s not clear why you should be relieved by the mere fact that the pain occurred earlier than your thought, which is all it means to say that the pain is “over” according to B-theorists. Plus, if B-theory is correct, then your earlier temporal part is stuck with that pain, since it is tenselessly experiencing it. Rather than relief, it seems that you should feel horror and pity for your earlier temporal part.
Prior’s argument brings out in an especially forceful way how our thinking about our own lives occurs in an A-theoretic framework. Part of what it is to be a person with plans, projects, hopes, desires, regrets, reliefs, and so on is to think of oneself as moving towards the future and away from the past. For this reason, it is probably impossible to fully integrate B-theoretic thinking into one’s practical outlook.
Still, it seems to me that accepting B-theory should affect how one thinks about what it means to be a mortal with a finite amount of time left on this earth. B-theory entails that the passage of time is an illusion. Recognizing this casts in a softer, more equivocal light the feeling that one’s time is growing short, that one is, with each passing second, careening towards a moment wherein one will suddenly cease to be part of the furniture of the world. True, your current temporal part has considerably fewer temporal parts ahead of it than your five-year-old temporal part does. True, you will not exist at any times later than your death. But it is unclear why this should provoke any special regret or dread. For it’s not as if when you die, you suddenly become a fixture of the objective past while time and the present flow on without you. There is a very real sense in which your entire life—a short, vibrant streak across spacetime—constitutes an utterly indelible mark on the world. Your birth and death merely convey information about where in the universe you are located. It is sometimes intelligible to regret that you are in one place and not another (e.g. you might be sad you missed your cousin’s wedding). But none of this is worth getting too worked up about.
This outlook is not entirely stable for creatures like us, of course, and adopting it wholeheartedly, even if that were possible, would obscure a great deal of what is important in human life, like the bads of deprivation and decay and the goods of gain and progress. But I’ve found that in reflective moments I can adopt this mindset for a short time. And I’ve found that the results percolate into my less reflective moments. The thought of death and grief stings just a tiny bit less. And it seems to me that my choices somehow become imbued with a greater significance and permanence. For my actions and experiences do not pass away. They are not slowly enveloped by the fog of the past. They are, for those parts of me that live them, in a sense eternally recurrent. And this fact has the smell of something that matters.
 Thus Spoke Zarathustra, trans. Walter Kaufmann, Random House, p. 139 (1995).
 This distinction was introduced by J. M. E. McTaggart in “The Unreality of Time” Mind, vol. 17, no. 68, pp. 457-474 (1908).
 Not all A-theorists believe that the future and the past exist. For simplicity of explication I am going to restrict my discussion to a version of A-theory according to which the past and the future do exist (this version of A-theory is often called ‘the moving spotlight theory’).
 Strictly speaking, this view (called ‘four-dimensionalism’) is not entailed by B-theory. But a B-theorist can deny it only by denying that ordinary objects persist through time.
 Prior, Arthur. “Thank Goodness That’s Over.” Philosophy, vol. 34, no. 128, pp. 12-17 (1959).
 There is another side to Prior’s coin. If B-theory is correct, then every pleasurable experience you have ever had and will ever have is tenselessly being experienced by one of your temporal parts. Yet it might not make sense for you to look forward to those experiences, since the temporal part that is looking forward to those experiences will never experience them.
I recently spent a good amount of time in the office of my local Department of Motor Vehicles (DMV). In my experience, the DMV is an unhappy place. Typically one must go there to pay a tax, take an exam, or do something else unpleasant. One can count on the fact that one’s task will require one to populate a surfeit of detailed forms with the most banal autobiographical information possible, and, on top of that, one will likely have to queue for a long time in a dreary, crowded room with uncomfortable seats only to be told by an official that one has filled out the wrong forms and that one must start the entire process again.
It seems likely that in every society there are certain commonly shared experiences that typify in important ways characteristic features of life in that society and for this reason occupy a special place in the society’s collective consciousness. I think that visiting the DMV is one such experience for those of us who live in the United States. If you live in the United States, then you’ve probably been to the DMV at least once, and you’ve probably experienced a familiar sort of nebulous dissatisfaction with what goes on there. Try telling someone you’re going to the DMV, and you’ll likely get a sympathetic look and a horror story of some sort (similar things happen when you tell someone you’re getting your wisdom teeth removed). The DMV is one of the regular points of physical contact most of us have with the sprawling, depersonalized bureaucracies that form the foundation of our society. The rigidity and impersonality of the process, combined with the prima facie inconsequence of the tasks one is required to perform, tend to grate on creatures like us.
I’d like to suggest that the experience of going to the DMV is a microcosm of the experience of living in a society with a highly complex bureaucracy. In particular, I suggest that the emotional position one finds oneself in at the DMV represents in miniature the emotional position that our society tends to put us in. Let me explain.
The nebulous dissatisfaction one is prone to experience while visiting the DMV is, I think, a natural reaction to a rather bizarre, unpleasant, and perhaps sometimes unreasonable social situation. Normally in social situations dissatisfaction manifests itself in the form of attitudes that are directed at persons. Some of these, such as resentment, indignation, and disapprobation, are rooted in a perceived lack of good will or regard on the part of the attitude’s object towards oneself or others. Other attitudes, such as anger, exasperation, and disdain, are not inherently interpersonal but nevertheless often are directed at persons. Person-directed attitudes such as these play a role in regulating and ameliorating the interactions of individuals who inevitably have different points of view. They partly constitute the affective lens through which we perceive the world and our place in it, and without them human life and relationships would be unrecognizable. The problem is that these person-directed attitudes are often naturally elicited by, and yet inappropriate in, the DMV.
To make my point entirely concrete, let’s consider a specific case. In California, new residents are required to register their vehicles within sixty days, and to be registered one’s vehicle must first pass a rigorous smog test. A certain first-year philosophy graduate student and new California resident discovered that his car could not pass a smog test without expensive repairs, which he could not afford. Consequently, this graduate student delayed registering his car for some time. Eventually, however, this student was compelled to register his vehicle. His visit to the DMV for that purpose was, by most metrics, a disaster. Our hero had to queue for over an hour, bungled the paperwork at least once, and—murderer at the door!—was honest about how overdue his registration was, which resulted in his paying an exorbitant penalty.
I hope it will not take too great a feat of imagination to envision this student’s mental state. He was indignant about having to pay such a large penalty for failing to register his vehicle sooner even though he could not afford to do so. He was resentful that he had to queue so long in order to pay this unreasonable penalty. No doubt these attitudes were less than ideally rational. They were also entirely human and entirely understandable.
The difficulty for our student was that there was no person at which these natural attitudes could be appropriately directed. After all, he would have been mistaken if he had directed these attitudes towards anyone he actually encountered at the DMV. It’s true that DMV clerks were a cause of the unpleasantness. They took his money; they refused to hear his pleas. But, like cogs in a machine, they were not responsible for it. The same goes for the DMV managers and even, I think, individual California legislators, all of whom have little to no effective control over the policies that the clerks are required to enact. The requirements imposed upon the student were social requirements, and yet there was no particular person who was responsible for them. The requirements were a product of the rigid and impersonal machine of state. Our student, then, found himself in an unsettled position. He experienced person-directed attitudes that were in one way apt, and yet there was no appropriate target of those attitudes. There was a mismatch between the student’s emotional repertoire and his social environment, which required him to suppress rather than express his emotions.
All this may strike you as a tad melodramatic. It is. But I think the student’s conundrum at the DMV typifies a much broader phenomenon. Our social lives are structured by bureaucracies that are to a large extent self-standing. Without them our form of life would not be possible. And yet, as is well known, bureaucracies can reify and propagate all manner of injustice, unreasonableness, and idiocy. This is one of the most salient moral problems of our age. Unfortunately, many of our most important moral emotions—indignation, resentment, disapprobation, and so on—are in the first instance person-directed. They are (at least partly) the product of millions of years of evolution in social environments characterized almost exclusively by face-to-face interactions with familiar conspecifics. Our evolutionary history did not equip us with emotional capacities entirely fit for life in our world, and this, I speculate, causes a great deal of discomfort and misplaced emotions. Take a minute to recall some instances when you felt frustration while interacting with a representative of some inept organization, and you’ll get an idea of what I mean.
What is the upshot of all this? If I am on to something here, then we should recognize that it is part of the (post)modern condition that we do not have emotional capacities which fit entirely well with many of the social and moral problems we face. There will be great temptation to direct apt emotions at inapt targets, and a great dissatisfaction when we cannot. I suspect that one of the reasons it is so collectively cathartic to skewer a public figure who has done something wrong, especially when that wrong embodies broader social trends, is that our moral emotions finally have full purchase. But in frustrating situations like our student’s DMV visit where they do not, we must somehow come to grips with the fact that catharsis is unavailable. And we must find a way to navigate these situations without relying unrestrainedly on emotions which, in other social contexts, serve us well.
 These are examples of what the philosopher Peter Strawson called ‘reactive attitudes’. Strawson, “Freedom and Resentment” in Freedom and Resentment and Other Essays, pp. 1-28 (2008).
Note: I wrote this piece several years ago and have shared it with a few friends since then. I'm posting it here after the 1st of January, which of course temporarily decreases its relevance for readers, but I hope it will be of some interest nonetheless.
If you’re anything like me, you associate New Year’s with ambitious resolutions to do things like drink less, eat healthier, and exercise every day. And if you’re anything like me, the majority of these resolutions are soon abandoned, casualties of indolence, overwork, or forgetfulness. Yet many of us who are bad at sticking to resolutions keep making them, year after year. Why?
New Year’s affords us a valuable opportunity to reflect on our lives and commit to improving them. We could do this at any time, of course, but the new year imparts to our commitments a communal significance that makes them feel more substantial. We make resolutions because we want to utilize this opportunity. This is surely a worthy motive, but I’m convinced our method is mistaken. We shouldn’t make resolutions. Instead, we should make promises. We’re more likely to keep them.
To see why, note that there are important structural differences between the type of commitment involved in a resolution and the type involved in a promise, which make it more likely that promises will be kept. Commitments, in general, are made to someone. Resolutions are commitments made to oneself, but promises are commitments made to someone else. This difference has important consequences for how others can hold you accountable for your commitments.
If you make a commitment to yourself, no one else is entitled to demand of you that you keep it. This might not deter a busybody from doing so, but the point is that in making a resolution, you are really only accountable to yourself. On the other hand, if you make a promise to someone, you become accountable to them. They are entitled to demand of you that you keep your promise because you owe it to them to do so. The very fact that a promise, as opposed to a resolution, is not a wholly private affair may provide you with extra motivation to stick with it, but even if it doesn’t, the demands of a solicitous promisee probably will.
My contention here shouldn’t be overstated. The details of why any particular New Year’s commitment ends in failure will depend on the idiosyncrasies of both the person making the commitment and their situation, and promises are no panacea. But the interpersonal nature of promises affords them extra significance relative to self-directed resolutions, and it’s commonsensical to think that this extra significance is likely to be advantageous. Indeed, studies have shown that interpersonal support is a predictor of success in the long term.
To whom should you make your promise? A promise to a stranger is likely to be worse than a resolution. For your promise to be effective as a mechanism for change, it’s important that you select someone who cares about you and sees you often enough to monitor your progress. In my opinion, you should try to find someone who is willing to exchange New Year’s promises with you, because this introduces an element of reliance and camaraderie that can strengthen your resolve, just like having a gym buddy.
At their best, New Year’s resolutions are steps toward self-actualization that reflect the best life we can imagine for ourselves. But no matter how strong one’s resolve, resolutions involve a relatively cheap form of commitment and, as a result, have relatively tepid motivational power. One only risks letting oneself down. The ancient Romans rang in the new year, not with resolutions, but with solemn promises to Janus, the god of beginnings and endings. This year, if you’re serious about committing to change, you too should make a promise.
Here’s to endings and, Janus willing, new beginnings.
 Norcross and Vangarelli. “The Resolution Solution: Longitudinal Examination of New Year’s Change Attempts.” Journal of Substance Abuse, vol. 1, pp. 127-134 (1989).
This is a blog about general philosophical topics that serves as a creative outlet. Browse around, and hopefully you'll find something interesting (but no promises!).