Jun 14

Success of Philosophy Students: Selection Bias or Skill Development?

In an earlier post, I claimed that one reason to study philosophy is that it teaches valuable skills. I supplied as a piece of evidence for this view the fact that philosophy majors do better on post-graduate exams than majors in almost every other discipline. Several departments have raised this point on their web pages as a reason to choose a philosophy major as well. However, one concern with claims of this nature is the possibility of selection bias. It could turn out that the reason philosophy majors do better isn't because they learn skills in college, but because they have skills entering college which encourage them to choose philosophy as a major, and these skills include doing well on these sorts of exams. Jason Brennan recently complained about the practice of using this as a basis for encouraging students to study philosophy on department web pages, claiming that without evidence against the possibility of selection bias, it is dishonest to suggest that philosophy teaches students these skills. The concern here is legitimate, and it is also important to know whether or not philosophy actually teaches students skills. So, is there any evidence showing that philosophy teaches skills instead of merely attracting the already skilled?

I think there is at least one way to test this. GRE and LSAT scores measure a specific set of skills, and those skills are very similar to the skills one needs to perform well on the SAT. So, one way to test for genuine improvement with respect to these skills is to see whether, as a group, people who study philosophy improve in their ranking relative to other majors between average SAT scores and average scores on post-graduate exams. When we look at this data, it suggests that the success philosophy majors have on these exams is causal. The mean SAT score for a student planning to major in philosophy or religious studies in 2013 was 1603. This was 10th overall among various groups of prospective majors (excluding those pursuing multiple majors). Despite a finer-grained division of majors in the GRE data, this ranking improves to 1st or 2nd annually among GRE scores. This suggests that philosophy doesn't attract incoming students who are already ideally skilled to do well on these exams, but instead teaches students the skills they need to perform well on them.
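The rank-shift test described above can be sketched as a short computation. Everything in this snippet except the 1603 figure is hypothetical placeholder data; the point is only the shape of the test: compute a major's rank at entrance and at exit, and check whether it improved.

```python
def rank_of(group, scores):
    """1-based rank of `group` when scores are sorted from highest to lowest."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(group) + 1

# Hypothetical entrance-exam (SAT) means by intended major; only the
# philosophy/religious-studies figure (1603) comes from the post above.
sat_means = {
    "philosophy": 1603,
    "economics": 1680,
    "physics": 1700,
    "english": 1620,
    "history": 1610,
}

# Hypothetical exit-exam (GRE) means for the same majors.
gre_means = {
    "philosophy": 320,
    "economics": 318,
    "physics": 319,
    "english": 310,
    "history": 309,
}

entry_rank = rank_of("philosophy", sat_means)  # last of 5 in this toy data
exit_rank = rank_of("philosophy", gre_means)   # first of 5 in this toy data
improved = exit_rank < entry_rank              # evidence against pure selection
```

If philosophy entered low and exits high, pure selection of already-skilled students cannot be the whole story; that is all the real SAT-to-GRE comparison is claimed to show.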

As with all results, this data is open to alternative explanations. Perhaps the students who enter with a plan to study philosophy are very different from those who actually finish with a philosophy degree. Perhaps elite philosophy students are more likely to take these exams than elite students in other fields. But the most natural and initially plausible explanation for the data is that philosophy students learn the skills necessary to succeed on these exams. While this isn't the only relevant issue in deciding what major to pursue, it should offset some concerns students may have that philosophy is an impractical discipline to study. It appears that philosophy really can prepare you to think and write effectively.

Apr 29

Self-interest and Affirmative Action: Why Employers Should Voluntarily Adopt Affirmative Action Policies

I am uncertain how the ethical considerations surrounding affirmative action balance out as an issue of public policy. However, there appears to be an argument for private companies to adopt a policy favoring minorities and other under-represented groups in their hiring decisions that bypasses the tricky ethical issues raised in the debate. This argument is one from pure economic self-interest. As a general rule, the rational choice when deciding between multiple, comparably qualified candidates is to hire the one who is under-represented in your field. The reason for this is that, whether you know it or not, there is a significant chance that in forming your initial evaluation of the candidates, you misjudged their qualifications as a result of unconscious biases.

There is a growing body of data suggesting that unconscious biases affect large portions of the population, and that they negatively affect hiring decisions. Research on implicit or unconscious biases is expanding rapidly among cognitive psychologists. People are far more susceptible to subtle environmental cues and social prejudices than they realize, and they appear to be subject to them regardless of their conscious declarations of a respect for equality. If you are interested in checking your own biases, you can google "project implicit," which is being run by Harvard. In terms of jobs or applications, studies have shown that having an African American sounding name will make it harder to find a job or a place to live, that being a woman will cause psychology professors to think you are worse in every category than a man with an identical CV, and that, in general, not resembling your potential employer in many ways is a good way to get your resume quickly chucked in the trash bin.

In terms of self-evaluation, people rarely recognize their own biases when they have them. Despite the fact that studies have repeatedly shown that the vast majority of people suffer from a wide variety of implicit biases, people routinely exempt themselves from these statistical facts. This is an instance of what is known as the "bias blind spot": we readily spot biases in others while failing to detect them in ourselves. Related examples of overconfident self-assessment include the fact that almost everyone thinks they are an above average driver and that 93% of psychology professors believe their work is superior to that of their peers. So, there is both a good chance that you are biased and a good chance that you don't realize it.

Given the fact that a large portion of the population is influenced by bias without realizing it, and that this is relevant in decisions about who to hire, the rational thing to do if you are in this position is to take these facts into account in your decision-making. If you have two candidates for a job whom you believe to be roughly equal in qualifications, one of whom belongs to a group often biased against in your profession, there are two possibilities. First, you may not be subject to a bias. In that case, you are no worse off hiring the minority candidate, or the candidate you may be biased against; since they initially appeared to be just as qualified, it won't hurt you to go with the under-represented individual. Second, you may be subject to a bias without realizing it. In that case, the candidate you thought to be equally qualified is actually more qualified, and so you are better off hiring the minority candidate. Either way, the best decision is to hire the minority candidate. Everyone who is not currently considering me as a job candidate should take this to heart.
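The two-case reasoning above is a dominance argument, and it can be made explicit in a small sketch. The quality numbers here are hypothetical; the point is only that, whether or not you are biased, hiring the candidate your profession may be biased against never does worse and sometimes does better.

```python
# Sketch of the dominance argument: two candidates look equally qualified,
# but unconscious bias may have depressed your rating of one of them.

def true_quality(apparent_quality, bias_against_candidate, you_are_biased):
    # If you are biased against this candidate, your apparent rating
    # understates their true quality by some positive amount (1 here,
    # an arbitrary illustrative choice).
    correction = 1 if (you_are_biased and bias_against_candidate) else 0
    return apparent_quality + correction

APPARENT = 10  # both candidates look equally qualified to you

for you_are_biased in (False, True):
    favored = true_quality(APPARENT, bias_against_candidate=False,
                           you_are_biased=you_are_biased)
    disfavored = true_quality(APPARENT, bias_against_candidate=True,
                              you_are_biased=you_are_biased)
    # Hiring the possibly-disfavored candidate is never worse (case 1)
    # and strictly better when you turn out to be biased (case 2).
    assert disfavored >= favored
```

This is why the conclusion does not depend on knowing whether you are biased: the recommended choice weakly dominates in both cases.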

Mar 20

What is Plagiarism?

After tiring of the excuse that "I didn't know that counted as plagiarism," I wrote up a lengthy explanation of the different types of plagiarism out there. It hasn't stopped plagiarism from occurring, but it has diminished it, and it does negate the most common excuses. If anyone finds it valuable, you should feel free to use it.

What is Plagiarism?

Plagiarism occurs whenever you use another person’s words or ideas and represent them as your own. Since philosophy deals almost exclusively with the development and evaluation of ideas, plagiarism is particularly troublesome in this field. My rule for papers is as follows: if there is anything in your paper that is not clearly cited, I take that as a statement from you that the words and ideas expressed in the paper are your own. If I find out that they are not, the paper is plagiarized. Here are types of plagiarism that are encountered often, in diminishing degrees of severity. Note, however, that all of the following are forms of plagiarism, and any of them could result in a 0 for your work:

-Cutting and pasting: When a student takes work directly from another source without alteration. This is plagiarism unless all the material is in quotation marks, and a citation clearly indicates that the content is from that source. Since philosophical works are supposed to represent your own understanding of an issue or idea, large blocks of quotations should not constitute much of your paper.

-Cutting and Thesaurus-ing: When a student takes work directly from another source but changes a few words. Using the thesaurus function does not make the work your own. This sort of content should be left unchanged, put in quotation marks, and clearly cited. Even with a citation, this can still count as plagiarism in the absence of quotation marks since it is, by and large, using another person’s words (or their equivalent) while representing the phrasing as your own.

-Re-wording or re-arranging Content: When a student looks at a source, and writes down the exact same ideas, but changes around some of the phrasing or ordering of the sentences. This sort of work does not require any actual understanding of the issues, only a grasp of the English language, and so does not represent your own ideas about the issue. As a general rule, if you could not have written the paragraph without looking at the source, then you didn’t really understand it, and so you must cite the source. If the content of the ideas is sufficiently similar, this can count as plagiarism even with a citation, and certainly counts as plagiarism without one. Again, you would be better off leaving the material in quotes and then explaining what it means in your own words after the quote.

-Paraphrasing without Citation: One of the main differences between a paraphrase of an idea and a re-wording of an idea is that paraphrasing requires understanding the idea well enough to express it on your own, while re-wording an idea does not. A paraphrase of an idea you got from another source should still cite that source. A properly cited paraphrase does not count as plagiarism. An uncited or improperly cited paraphrase does.

-Ambiguous Citations: Sometimes students will have citations either at the end of a paragraph or in the bibliography, but it will not be clear from reading the text which ideas in the relevant portion of the paper are the student's, and which ideas are from the source. In this case, students are still failing to properly identify which ideas are their own, and so it still counts as plagiarism. All sources must be cited in the body of the paper itself in a way that makes it clear which ideas in the paper are the student's and which ideas come from somewhere else.

Jun 18

The Crime of Immigration Restrictions

Social contracts have long been held by political philosophers to be a just foundation for civil societies. Such contracts are supposed to take into account the interests of citizens by ensuring that the state is governed by rules that society has accepted. The fundamental problem with social contracts throughout history, however, has been that actual contracts are not reached in a way that takes into account the interests of everyone. Instead, they are formed by, and only take into account the interests of, a smaller group of people who seek mutual benefit for themselves by uniting their power. These people then use the contract as a basis for oppressing others: for dictating to them what they can and cannot do, and for using the power of the group the contract is designed to serve to control the behavior of the rest. Such acts prevent the oppressed groups from choosing any actions that could improve their state in the world or free them from their suffering. We have seen this with rulers and fiefdoms controlling subjects and serfs, with men uniting to control and limit the opportunities of women, with whites coming together to enslave other races to work for them; in many respects, the history of the world is the history of one group of people obtaining power and collectively using their privileged position in a way that needlessly and unjustifiably perpetuates the suffering of others.

What is so shocking about these societies is that even though the arrangement is clearly little more than a mask for oppression, at the time such societies exist they are broadly accepted, both by those in power and by those who suffer. Even good and decent people find rationalizations or excuses for the perpetuation of unjust suffering. These range from the idea that it is good for those who suffer to be under the control of those in power, to the idea that society would greatly suffer from any change to the existing structure, to the idea that if those in power sought to include others in their group it might harm some in that group, and so, unfortunately, it is necessary to maintain the oppression and suffering of the rest. John Stuart Mill did a brilliant job exposing the mendacity of such assertions in his work "The Subjection of Women," showing how the pretenses of defenders of such subjection were shallow and hollow, and amounted to little more than the rule of the mighty without any moral justification.

We like to think that we have moved past this as a society, that we are now inclusive and see the worth of others. We think we have moved past arbitrary distinctions and seen that our fellow brothers and sisters have a place with us, and have the right to control their own lives. We see now that they, too, should be party to our contract, and be permitted to share in its benefits. Of course, not everyone feels this way. Some still think that those of another sex or another race aren't "one of us," and so aren't really entitled to the benefits of our society. But the rest of us have come to see such expressions for the mindless bigotry that they are, and words like 'racist' and 'sexist' have come to be filled with the disgust and condemnation they deserve. However, patterns of human behavior that persist for millennia should never be confidently felt to be behind us. And, sadly, our own society is no different from those of the past. We include more people, but our contract is not all-inclusive. To see the pattern, we need merely look at who we include in the title "one of us," at which groups of innocent people suffer from a lack of inclusion in that group, and at where we find a sense in those who have prestige that they can use their power to keep that innocent group in a position of needless suffering. Today's label for "those it is okay to keep suffering needlessly because they aren't part of our group" isn't one of race or sex; it is one of nationality. 'Foreigner' is the label people use to justify the same attitudes and behaviors in today's society that those in power have always used to justify the unjust treatment of others who aren't "one of us." The consequences are at least as horrific, and the blind sense of justification by the perpetrators of the needless suffering of innocents is all the more vociferous and self-satisfied because they think themselves above such acts.

Controlling where another human being can or cannot live, despite the fact that they are innocent, that they have the means to relocate, that there are available places to relocate to and persons willing to rent or sell to them, and that their relocation would do more than anything else could to ease their suffering and improve their lives and the lives of their families, is something so horrific that if we saw it done to an American citizen we would be outraged. When people hear of eminent domain cases where someone has been thrust out of their own home, they are outraged. Who is the government to tell someone where they can and cannot live, to prevent them from owning a spot of land someone was once willing to sell them, merely because some powerful company or group wants to use it for a cause the people in power prefer? Our sense of outrage is just, but also baffling, because that is precisely what we do when we prohibit immigrants from moving to our shores. The only real difference is that the consequences are far, far worse for the potential immigrant.

We see ourselves as party to the contract; the foreigner as "not one of us." We have nothing but luck to account for the fact that we occupy a position of prestige and power. They have nothing but bad luck to account for the fact that the country of their birth was full of suffering and poverty. Yet, because they aren't a part of our group, we think it right to control their movements and their behavior, knowing full well this will cause them great hardship, because we are powerful and we can. We are no different from the oppressors of the past, and no less blind to our own unjust actions. We also use the same hollow excuses: pretending that minor harms to those in our group justify great suffering in those we choose to exclude from it; pretending that society would falter if the group were expanded; pretending that somehow they are different from us in a way that makes us better. The perpetuation of the suffering of innocents throughout the world through the establishment of immigration restrictions by those in more prosperous lands is a moral crime. It is as great a moral crime as the historical oppression of women, the enslavement of others to serve the needs of those in power, or any other act of unjust control over the life of another that those with power have always engaged in to serve their own interests. Such a crime is unconscionable, no matter how comfortable we feel with it. Hopefully one day people will look back on us and see us with the same contempt with which we now see those who abused their power in the past. Hopefully one day 'nationalist' will join terms like 'racist' and 'sexist' in expressing great disgust and condemnation. And hopefully it will happen soon, so that the needless suffering of millions can begin to be eased, and the basic rights of all people to control their own lives and to pursue their own happiness can be recognized throughout the world.

May 15

An Argument for Limited Redistribution of Wealth

For a while now I have thought that redistribution of wealth was unjustified, since it involved a form of taxation that I thought violated people's rights. However, there is what appears to be a sound argument for the view that some redistribution of wealth through taxation is morally justified, even though taxation is a violation of people's rights. The argument goes as follows:

(1) It is permissible for someone in a sufficiently dire state of need through no fault of their own to steal in order to provide for that need, assuming that there are no other viable options for securing what they need.
(2) This entails that some actions that constitute a violation of someone’s rights can be overridden by considerations of extreme circumstance.
(3) If it is permissible for someone in these circumstances to visit a rights violation on a random individual, then it would be better if we could ensure that the rights violation was visited on someone for whom the harm of the violation would be smaller.
(4) Taxation, though typically a rights violation, would be a means of controlling the distribution of harm in violating people’s rights as a means of providing for the needs people face in these situations.
(5) These harms would be lessened if they were disproportionately levied against people who were better off to begin with, since losing a comparable amount of a good is less damaging to those who have more of that good.
(6) So, if there are no viable alternative means for a society to ensure that people are not facing, through no fault of their own, conditions that would justify visiting a rights violation on a random individual, then taxing the rich in order to help these individuals would be morally justified.
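Premise (5) is essentially the familiar point about diminishing marginal utility, and it can be illustrated with a toy calculation. The log-utility model and the wealth figures below are illustrative assumptions, not part of the argument itself.

```python
import math

def utility(wealth):
    # Log utility: a standard toy model of diminishing marginal utility.
    # Each additional dollar adds less well-being the more you already have.
    return math.log(wealth)

loss = 1_000  # the same absolute loss, levied on two different people
harm_to_rich = utility(1_000_000) - utility(1_000_000 - loss)
harm_to_poor = utility(10_000) - utility(10_000 - loss)
# The identical loss costs the poorer person far more well-being,
# which is what premise (5) claims.
assert harm_to_poor > harm_to_rich
```

Under any utility function with this diminishing shape, the total harm of a fixed rights violation is minimized by levying it on those who have more, which is what steps (5) and (6) together rely on.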

This situation of redistributing wealth does still involve a rights violation on my view, and it would be better if individual citizens would take it upon themselves to create alternatives to taxation that prevented the possibility of a situation arising where the need for such a rights violation existed. Until people create such alternatives, however, the justification for redistribution seems to hold up for any situation where we would say that individuals facing it would be justified in visiting a comparable rights violation on a random individual in order to get out of that situation. People who advocate smaller governments are therefore morally obligated to establish and fund alternative private means of providing for the basic needs of those who are suffering greatly through no fault of their own. I think this is a sound argument, and it has implications for a number of views on which I am now beginning to think I had taken too strong a position. For example, suffering from a horrible disease for which there is an available cure that you couldn't possibly afford would seem to meet the requirements for (1), so long as you aren't to blame for your ailment. Therefore, the state may be justified in taxing people to pay for medical treatments in extreme cases, at least until charities arise to provide such care when the need arises. This account does have the nice feature that most redistributive taxation is unjustified, since typically people have other means of providing for their needs, or are largely to blame for their condition. It also preserves the, in my view correct, intuition that taxation, particularly when it isn't used to pay for services you would willingly pay for, is an instance of a rights violation.

May 12

Is the War on Terror Over?

I’ve been reading a very interesting book over the weekend called The Gift of Fear. It was written by Gavin de Becker, who is an expert at analyzing the causes of violence and has received numerous awards for his work in the field. The book is fascinating on its own, primarily because it clearly shows that one of the most surprising and often seemingly incomprehensible aspects of our society is actually predictable, understandable, and, to some extent, preventable. The account of violent activity is detailed and valuable. One aspect of it, however, piqued my interest in terms of its relevance to international terrorism. According to de Becker, there are four elements that are almost always present in the mind of someone who commits a violent act. He labels these elements JACA: Justification, Alternatives, Consequences, and Ability. The basic idea is that a person who threatens violence will usually only act on it if they can justify to themselves or rationalize their behavior, if they see very few alternatives to violence as practical or desirable, if they are willing to accept the consequences of the act, and if they believe they have the ability to engage in the violence in an effective manner. Often these beliefs are delusional to a certain extent, but in the absence of sincere beliefs about these issues, people will almost invariably choose a non-violent alternative.

In terms of the war on terror, the death of bin Laden seems to have gone a long way toward stripping terrorists of each of these elements. First, bin Laden was a charismatic leader; he convinced his followers of the justice and necessity of their actions, and he served as a symbol of powerful defiance and leadership in what he was able to convince people was a just cause. This may be hard for us to accept, but many people in certain parts of the world respected bin Laden and acted to follow him. Without him, there is far less of a voice to rally people to the strong feeling of justification that accompanies decisions to join or fund such a group. In addition, President Obama has handled this killing brilliantly. Bin Laden was killed in a way that makes it almost impossible for people to view him as a martyr for the cause. He was treated with respect and dignity in a burial that followed his religious customs; he was reported to have acted in a cowardly manner by using human shields; and gruesome pictures of his corpse have been kept from the public, so they cannot be used to stir a fervor of sympathy for him merely in virtue of the horrors of his death. There is good reason to think that the death of bin Laden will greatly diminish the justification most people were able to create for supporting or joining his cause.

In addition, recent events have made it clear that there are viable alternatives in the Muslim world to terrorism as a means of improving the lives of the citizens of those nations. Recent revolutions, and the inactivity or support of America and the rest of the world in those revolutions, have given people suffering under these regimes legitimate cause to believe there is an alternative to terrorism as a means of improving their lives. In addition, since these movements are home-grown, there is more of a feeling of control over one's own life here, the loss of which is usually central to a feeling of having no better alternatives. In the presence of this alternative, if it can be maintained, there is far less reason to support drastic measures like terrorism in order to improve one's state, and recruitment and funding are almost certain to drop. Relatedly, when one has hope for a better future, one automatically has more to lose and is therefore less willing to accept the consequences of violence. These elements go hand in hand in predicting a diminished availability of resources for future terrorist acts.

Finally, these points together greatly diminish the ability of terrorists to carry out their actions. With fewer recruits and less funding, terrorists will have a much more difficult time carrying out their plans. Terrorism will, of course, never end, but in all likelihood its importance and the severity of its effects will shrink, because terrorists' ability to create and deploy successful large-scale actions will be greatly diminished. Terrorism will likely return to its former state of diminished relative international significance rather than maintaining a central stage of world importance and a corresponding power over world events. There is no need for a war against something like that.

I do not know who deserves credit for all of this, or how long it will take for these effects to occur. I'm guessing it's sooner than people think, though. It is nearly impossible to tell whether or not people would have felt empowered to rebel if Bush's wars hadn't made it clear that there was external support for regime change. It is probably true that Obama's ability to divert or limit anti-American sentiment encouraged people to pursue other options and lessened the strength of a scapegoat to keep people feeling hopeless. It is also certainly true that Obama's handling of the killing of bin Laden will greatly help in preventing people from finding as many grounds for maintaining the fervor of their commitment. But so long as we don't let a dangerously shallow and obscene figure like Donald Trump or, to a lesser extent, Sarah Palin reinvigorate these attitudes by foolishly choosing them to make important decisions in these matters, there is good reason to believe that the fervor will die down, the lives of people throughout the Muslim world will blessedly improve, and the war on terror and its justification will end not just because we have decided to stop giving it that name, but because it no longer has a basis for existing.

Apr 16

The Wire

It is often hard to sell The Wire to people. When people hear that it is about the drug trade in Baltimore, they are likely to immediately misunderstand it. There are a few ways such a show could traditionally go. It could glorify the drug trade and gangsters, like a classic Mafia movie parading around violent criminals as if they are all charismatic heroes. It could be a heavy-handed anti-drug show, depicting how the cops are nobly fighting a group of evil thugs. Or, perhaps, it could be a self-important leftist effort to show how drug dealers are misunderstood but good people who don’t deserve to be so maligned. But The Wire is none of those things.

Another reason people are often turned away from The Wire is the assumption going in that they won't be able to relate to these people. Maybe it's good for what it is, but what do the struggles of inner-city gangsters and the cops trying to stop them from selling drugs have to do with me? How can I connect to such people, or care what happens to them? This problem was so drastic that the show was nearly cancelled because its focus on mostly black characters kept foreign audiences from giving it a try. But this concern is grossly misplaced. I have never cared half as much about any character on any other show as I do about even some of the most minor characters on The Wire.

Finally, many people who give The Wire a try give up after the first few episodes. It doesn't feel like any show you've ever seen. There's no soundtrack, the development seems astonishingly slow, and, for a show about gangs, the violence and drama aren't filling every minute like they would in a movie. It isn't until you get through about half of the first season that you begin to see that, as Lester Freamon puts it, "all the pieces matter." The show doesn't just unfold with careful character development and important stage setting like a novel does; the entire series is crafted to deliberately take the form of a nineteenth-century serialized novel, and the writing is better than virtually any such novel ever written. At some point you start to see that what looked like unusual television is actually an art form never tried before and likely never to be executed with such skill again.

When you read a great novel, you sometimes imagine what it would be like to experience it as a great film, perfectly acted and directed to capture all the realism and all the detail, yet somehow maintaining the depth and meaning that is so hard to transfer from the written word to the screen. Every episode of The Wire is like experiencing a chapter of your favorite novel perfectly brought to life, not only capturing what you thought would inevitably be lost, but enhancing it. With episodes lasting a full hour and the story stretching across Baltimore and embracing every level of the city, The Wire manages to be better written, more complex, more engaging, and far more satisfying than any literary work written in my lifetime. Watching it unfold feels like reading one of the great serialized novels of the 19th century. It is an amazing work of art; indeed, a better one than I thought we still had it in us to make. To say that it is a great show, or even the best show, or my favorite show, would be to drastically undersell it. The Wire is a phenomenal work of art. It is an unparalleled accomplishment in any artistic medium for at least the last three decades, and probably longer. Nothing else is even close. Don't pass up the opportunity to experience it.

Sep 06

Why Study Philosophy?

A college education is usually thought to provide three main things to students:

1. Valuable knowledge about various subjects.
2. The acquisition of useful skills.
3. A signal to future employers of intelligence and a willingness to work hard.

Students should study philosophy because it does a very good job at providing value relative to these standards.

Valuable Knowledge:

Philosophical knowledge has a reputation for being impractical. It is. What is weird is that so few other disciplines have the same reputation. Here is a simple fact about education: almost everything you learn in a college course consists of information you will never need and never use at any point in the future. With the exception of a few majors that teach very specific information in areas many of their students will actually spend their lives working in, such as degrees in engineering or marketing, almost no information you obtain in college will be of any value to you at any point in your career. This might be disheartening to some; I prefer to think of it as freeing. Since you won’t actually need the information in the future, it doesn’t matter what information you obtain. Given this, you might as well learn the coolest and most interesting information you can find. On this score, philosophy does very well. Philosophy addresses issues that have fascinated people forever, and deals with questions most people wonder about when they have free time. Given the fun and interesting nature of the subject matter, if you have to obtain a bunch of useless information to get the degree you want, shouldn’t it at least be cool useless information?

Acquiring Useful Skills:

While philosophy is no worse than most other majors in terms of useful information, philosophy’s reputation as an impractical subject isn’t deserved, because unlike most majors, philosophy actually teaches you a useful skill. Specifically, it teaches you how to think. Many people don’t realize that thinking is a skill, or that it takes a lot of training to do it well, but it is and it does. Here are two compelling pieces of evidence that philosophy teaches you how to do it well. First, philosophy majors do better than majors in virtually every other discipline when it comes to scores on post-graduate (grad school and law school) entrance exams.* This suggests that philosophy teaches you the skills you will need to become an expert in whatever field you plan to go into. Second, mid-career salaries for philosophy majors are higher than those for any other humanities major, despite the fact that entry-level salaries for philosophy majors are often lower.** This shows that philosophers are getting promoted while others lag behind. This isn’t surprising. When you learn how to think well, you will understand, anticipate, and solve problems better than others. You will also be able to communicate your ideas more effectively. This will get your bosses to trust you, and will lead to promotions and success. So, unlike many other disciplines, learning all the useless information in philosophy will have a wonderful side effect: it teaches you a useful skill that will actually help you in life.

Signaling Employers:

Because philosophy is so poorly understood, it hasn’t always provided as strong a signal to employers as it ought to. Recently, however, this seems to be changing. In his book The Undercover Economist, Tim Harford praises philosophy degrees as a great way to signal to employers that you are smart and hardworking, since obtaining a degree in philosophy is usually harder than obtaining one in most other disciplines.*** In addition, publications ranging from the New York Times**** to Bloomberg Businessweek***** have praised the value of a degree in philosophy for succeeding in life. This suggests that more and more employers are recognizing that training in philosophy is a valuable sign that the person they are hiring will be smart and capable. Given this, there is reason to hope that by the time a prospective student graduates, a philosophy degree will carry more initial value on the job market than it has previously.

So, philosophy will introduce you to a bunch of cool ideas, teach you valuable skills, possibly help you get a job, and probably help you succeed and earn promotions once you find one. Prospective students who are interested in thinking about cool ideas while setting themselves up for successful lives should seriously consider majoring in philosophy. In particular, they should do so if they are smart but lack interest in majors that teach information and skills suited to particular career paths.

*- See http://www.uic.edu/cba/cba-depts/economics/undergrad/table.htm and http://www.ncsu.edu/chass/philo/GRE%20Scores%20by%20Intended%20Graduate%20Major.htm

**- http://www.payscale.com/2008-best-colleges/degrees.asp

***- See p. 111 of the paperback edition, ©2007, Random House: New York

****- http://query.nytimes.com/gst/fullpage.html?res=9902E3DD1E3EF935A15751C1A961958260

*****- http://www.businessweek.com/managing/content/jan2010/ca20100110_896657.htm

Aug 16

Why Health Care isn’t a Right

Positive and Negative Rights:

When someone has a right to something, this always creates obligations in others. My right not to be killed creates an obligation in others not to kill me. My right to be compensated for my labor in accordance with the terms of a contract creates an obligation in my employer to pay me for my work. It is common in discussing rights to differentiate between positive rights and negative rights. A negative right is a right that creates an obligation in others not to interfere with us. A positive right is a right that creates an obligation in others to provide us with something. To fulfill the first sort of obligation, all we have to do is refrain from interfering in the lives of others against their wishes. To fulfill the second, we have to accept a burden of providing something to someone and carry through on an action.

Natural vs. Contingent Rights:

Natural rights are rights that we possess merely in virtue of existing. Contingent rights are rights that we acquire in virtue of the specific circumstances we are in. The rights to life and liberty are generally thought to be natural rights. We don’t have to do anything in order to acquire them, nor does anyone else have to do anything to us in order to be obligated not to violate them. A right to be paid by your employer, however, is contingent. If I walk into a building and start working, I don’t have a right to be paid. In order to create that obligation in another, there has to be an agreement between the parties creating the obligation. Similarly, we don’t have a right to charity from others. The difference between a charitable contribution and the payment of a debt is that the former is voluntary while the latter is morally obligatory. This is why charity is praiseworthy, while paying one’s debts at least ought to be expected.

Against Natural, Positive Rights:

While people certainly obtain positive rights in a large number of circumstances, there is debate over whether any natural, positive rights exist. There is good reason to believe that they don’t: any such rights would seem to violate our general right to liberty. If, merely in virtue of the fact that you exist, I am required to act in such a way as to provide you with some benefit, then I am not free to live as I see fit, because I must put forth whatever effort is necessary to fulfill my obligation to provide you with the good or service in question. Negative rights merely require us to leave one another alone. Positive rights place requirements on us even when we aren’t interfering with anyone else. Such interference in how we choose to live our lives, despite our having done nothing to create any need in those to whom we are presumed to be beholden, would take a great deal of effort to justify. I find it unlikely that such justification can be found, or that we possess any rights to be given goods or services merely because we exist.

What about Children?:

One case that seems like it may be an exception is the case of children. It seems at first glance that a child has a right to be provided with what she needs to survive by her parents. I think, however, that this right is actually contingent. Specifically, I think it is grounded in the fact that the parents are responsible for the fact that the child has the needs in question. By bringing a child into the world while knowing that it would need to be cared for in order to survive, you create an obligation to provide for that need, in the same way you would be obligated to provide for the medical needs of someone you injured. This explains why we are obligated to provide for the needs of our own children, but not for the needs of others’.

Health Care:

If health care were a right, it would seem to have to be a natural, positive right. We haven’t done anything to one another that would seem to justify a requirement that we pay for one another’s health care. Since we aren’t responsible for the bad health of others, it seems quite implausible that we are nonetheless obligated to provide for fixing or improving their health. We may, of course, choose to do so, and we could, if we wanted to, all agree to pay for one another’s health care, but I see no reason to believe that we are all naturally morally obligated to do so.

Does that Mean we shouldn’t Provide Health Care?:

Not necessarily. Rights aren’t the only reasons to act. There are at least three sorts of reasons to help others that could justify deciding to pay for the health care of some people. First, there is the basic fact that it is good to alleviate suffering and to help one another. This provides a strong moral reason (although not an obligation) to do what we can to help one another obtain the health care we need. Second, it is good for us to express our concern for one another and to demonstrate compassion and charity. Creating a morally decent citizenry should include encouraging us to help those in need, including those in medical need, when it is within our means to do so.

The problem with both of these reasons to act is that while they certainly justify a great deal of charitable action on the part of individuals, typically we don’t recognize such moral considerations as justifications for state action. Since state action is inherently coercive, using it to require people to be charitable in the way that you would prefer them to be rather than allowing them to decide for themselves how to use their own money is a violation of their rights. Therefore, while I strongly encourage the development of charitable actions to help those in great medical need, justifying state intervention in health care requires a different approach.

Here, we come to the third reason we may choose to help one another pay for health care: because we believe it is in our best interests to do so. One reason we form social contracts is for mutual benefit. Therefore, if we believe that we would all be better off if we chose to pay for health care through state action, we could create an obligation in one another to pay for it, in the same way we create an obligation in one another to pay for the security of the nation or the building of highways. This seems to be the best avenue for arguing in favor of government health care. From what I understand about the issue, I tend to find the arguments that government health care would be a better option than private health care unpersuasive, but since many, if not most, economists disagree with me on this point, I am far from willing to be dogmatic about it. If good practical grounds for government involvement in health care can be found, that would provide reason to adopt it. The arguments that rest on rights, however, seem to have little basis, and the arguments that appeal to other moral considerations, while they provide good reason for charitable action, aren’t the sort that justify government action.

In the meantime, here’s a practical suggestion for those who are interested in helping others pay for insurance. When I get my electric bill, there is an option to donate money to help pay for the cost of heating for poor people during the winter months. The electric company will take donations and use them to lower the heating bills for low-income families. This procedure doesn’t require forcing others to pay more for their service, but it gives people an opportunity to be charitable if this issue concerns them. I see no good reason not to offer a similar service to those who otherwise couldn’t afford health care. If you want to contribute to your health insurance company to offset costs so they can provide for those who are in financial need, then there is no reason not to do so. Encouraging insurance companies to set up charitable funds for this purpose would be a non-coercive and convenient way for those who care deeply about providing health care to those in need to choose to help provide it.

Jun 23

Judicial Confirmation

The threat of the “tyranny of the majority” has been recognized as a serious problem for democratic societies since the formation of American democracy. The solution the founding fathers envisioned was to establish a set of rights as a foundation for American democracy, and to set up a branch of government whose job it was to ensure and protect those rights no matter what the people wanted. This branch, the Supreme Court, was supposed to be independent and completely free of the interference or even the input of the electorate. We don’t get to vote for Supreme Court justices precisely because they are supposed to be a branch of government whose actions are beyond the say of the public, and free from the influence of the tyranny of the majority. This makes the current process for confirming justices seriously problematic. Holding proceedings to determine the fitness of judges in a public forum virtually requires politicians to act as if their constituents’ political views should be reflected by nominees to the court, when this is precisely what establishing the court was intended to bypass.

So long as justices are competent and willing to use their judgment in protecting the rights of the people, no matter what the popular views of the citizens are about such protections, it is wrong for members of congress to stand in the way of judicial nominees. It is certainly wrong to do so out of concerns over “judicial activism,” whatever one takes that to be. It was recognized at the time of its creation that the bill of rights was artificially limited, and that laws would be created that violated our rights without violating the bill of rights. Part of the proper role of any judge is to ensure our rights, even in the absence of precedent from the founding fathers. The real test of the proper conduct of a justice is whether or not they are willing to subjugate the rights of individuals to the interests of groups or the interests of government. Actions of that nature would be clear violations of the proper role of the judiciary in our society. Actions that seek to protect our rights, on the other hand, even when those rights go beyond the ones guaranteed in the founding fathers’ initial formulation, are part of the proper role of a judge. We should want wise people (Latinas or otherwise), guided by an underlying aim of protecting individual rights, to have the authority to do so no matter what the public wants.

There are any number of issues where there is room for legitimate debate over whether or not something is a right that ought to be protected on behalf of the people. Such debate can, of course, lead some to believe that the court is legislating contrary to our rights even while the court views itself as protecting them. However, on issues where great legal minds strongly disagree, deferring to a less informed public hardly seems a superior alternative. In the long run, it is likely better to defer to the informed, if debatable, judgments of those whose function is the preservation of our rights against the threat of democracy than to subjugate all of us to the whim of the people at any given time. This long view of the preservation of our rights is what made the founding fathers’ decision to create an independent judiciary sound, and it has led to the long-standing preservation of our rights in this country. There are any number of issues on which I would likely strongly disagree with the views of any of the recent appointments to the court by either party in power. Despite this, I think it is a good thing that the people don’t have a direct say in who gets chosen for those seats. I also think it is a problem for the long-term preservation of our freedoms that people of vision and independent thought are so often kept from prominent positions on the court due to the inappropriate influence of the public through the televised and lengthy confirmation processes that judges are now subject to.
