Friday, February 29, 2008

How not to write: review of Fahrenheit 451

Some time last year, I began to suspect Fahrenheit 451 was a terrible book. I had read it sometime in high school, maybe in middle school, so I wasn't quite sure. I was fairly confident in a few memories: that it was about a fireman whose job was to burn books, and that he ended up with a bunch of book-memorizing hobos. And actually, I don't remember it being an especially bad book, beyond the unpleasantness associated with anything one is forced to read for an English class. Yet I had suspicions all the same.

I decided to re-read it over winter break, and it was more or less as I feared. The main thing Bradbury does in it is parade out a string of characters--an odd young girl, a retired English teacher, a chief of book-burners--to spout Bradbury's half-baked speculation about the role of books in society. There isn't even much concern for decent characterization. Why does the villain of the story turn out to make such a good spokesperson for Bradbury's views? The reader isn't given the slightest clue. Orwell, at least, put some effort into making his representative villain of 1984 (O'Brien) plausibly intoxicated with power, so he could say some of the things Orwell would have said about totalitarianism while still thinking totalitarianism a good thing. And while the preachy sections of 1984 are impossible to ignore, Orwell invests time in simply describing his dystopia in a way Bradbury doesn't. These observations have a high degree of credibility, as they were based on Orwell's own first-hand encounter with the Soviet-backed faction on the Republican side of the Spanish Civil War.

Fahrenheit 451 is often described as a novel about censorship, but it is plainly nothing of the sort. The book burners aren't out to crush books they dislike; rather, they're just out burning books. "Books are good, so let's make some paper villains who go around burning them," seems to have been Bradbury's logic. I suspect few people--perhaps save some fans of classic science fiction drawn in by Bradbury's better stuff--would even be aware of such a book if not for the fact that it flatters English teachers. For this reason, a generation of high school students have been forced to read it.

There we come to the great irony of Fahrenheit 451: it was intended to paint books in a good light, but if it is presented to a young student as a paradigm example of literature, he is likely to conclude that books have nothing to offer but flattery for authority figures and propagandizing on their own behalf. This is what philosophers call having a biased sample, but the student would not be at fault for this biasing, and could not entirely be blamed for going off to spend his free time watching Fox News and TV crime dramas.

SC 81

The 81st edition of the Skeptic's Circle is up at Conspiracy Factory. It includes some very good posts on communicating skepticism, as well as a Skepchick takedown of a silly Oprah quiz.

Colin McGinn on "there must be something"

At Secular Philosophy, philosopher Colin McGinn comments on the claim that "there must be something".

Evangelicalism on campus

For a while now, I've been hearing hints from professors that religion, especially Evangelical Christianity, is a much bigger force on campuses now than it used to be. Hemant serves up another example of this: students who try to attribute conventional Christianity to a well-known atheist like Shelley. Yikes!

How not to write a book review

Right now, I'm in the middle of reading the book The White Man's Burden by NYU economist William Easterly, about what's going wrong with current efforts to aid the poor in foreign countries. I was curious about what other people were saying about the book, so I googled for reviews. One by Amartya Sen is nice for its thoroughness, but it's quite misguided: the main criticism seems to be that Easterly is basically right that we need to do a better job of managing foreign aid, but he gives the impression he's against aid. Yet who does he give this impression to? Surely not the book reviewer. I also had no trouble understanding Easterly's thesis. There seems to be a rather careless assumption that everyone else who reads the book will lack the reviewer's basic reading comprehension. Now, reading comprehension is not always what it should be, but surely this is an exaggeration.

I, for one, strongly recommend Easterly's book to anyone interested in how to better help the poor.

Wednesday, February 27, 2008

Mere Christianity

I think I've said this before, but I'll say it again: can we dispense with this talk of problematic religious beliefs as "extremism"? What's set me off this time around is a post at Atheist Revolution titled Christian Extremism on the Playground. The activity complained about in the post simply amounts to holding a doctrine that has historically been dominant in Christianity and telling people about it. Granted, the doctrine is a crazy and vile one, namely that all non-Christians spend eternity in Hell, but it isn't obvious that the behavior is "extreme" from the point of view of historical Christianity. That a child who had been taught this doctrine would tell friends about it is not, at first glance, surprising. Friends don't let friends burn in Hell for eternity.

Granted, perhaps it is abnormal to actually act as if you hold the religious beliefs you claim to. ("Hear the verbal protestations of all men," Hume says. "Nothing so certain as their religious tenets. Examine their lives: you will scarcely think that they repose the smallest confidence in them.") Still, we shouldn't be encouraging that behavior, letting people think that those who actually act in accord with their beliefs are somehow distorting them, letting people hold onto crazy beliefs by not thinking about them too hard. Even when beliefs aren't normally acted on, they can suddenly start influencing action in nasty ways. Shine the light on the nature of those beliefs before that happens.

Consciousness and understanding

Last fall, I wrote about being disappointed with John Searle's Chinese room thought experiment. In that post, however, I left out something else about Searle's ideas that has been bugging me: he talks about understanding (say, of a story told in Chinese) but seems to assume that there's some deep relationship between understanding and consciousness. Searle's last resort in dealing with reductionist approaches to the mind (and this is something I'm entirely sympathetic to) is to claim that they are simply ignoring the issue of consciousness. Searle shifts casually between the two issues without ever distinguishing between them, but it's quite clear that they're different.

Consciousness, in the sense that has everyone excited, is subjective experience. Understanding, on the other hand, is I know not what. When I hear something in English vs. a language I don't understand, there's a certain feeling that I understand it that isn't present with other languages. I'm not sure that is very deep, though. Certainly, I can call to mind an internal monologue providing a sort of commentary on the meaning, but such a monologue does not occur in the very moment I feel understanding, and indeed mentally thinking sentences to oneself takes some time. I wonder whether any single experience alone can indicate understanding. This might explain how people can think such ludicrously incoherent thoughts as "there is no such thing as truth." In such cases, perhaps, they have the sentence in their internal monologue, but they don't really understand it.

The sort of considerations I raised in the Chinese room case, especially the intuitions about variant cases, suggest to me this possibility: understanding has to do with the ability to integrate the thing with the rest of what's in your brain. The difference is not mainly in the initial experience, but in what your mind is able to do thereafter.

Thought of the Time Being

Trust yourself? Would *you* trust someone who's lied to you daily for as long as you can remember, talked you into doing all kinds of stupid stuff, and generally worked his hardest to ruin your life?

Quote of the Time Being

[Philosophy] is one of any number of blanket terms used by deans and librarians in their necessary task of grouping myriad topics and problems of science and scholarship under a manageable number of headings.
-W. V. O. Quine. HT: Footnotes on Epicycles

Tuesday, February 26, 2008

Ignorance and indifference

Richard Chapelle asks whether ignorance is really such a bad thing. The discussion that unfolds in the comments pretty much lines up with my own gut reaction. No one can be thoroughly informed on every subject--there are large swaths of political and military history, higher-level social science, and cultural studies which I will never make the faintest effort to learn about. I've got too much else to do. However, many people give the impression of never making a serious effort to better themselves by learning of any sort. It is part of an indifference to serious living that I find very disturbing, though I admit I have trouble understanding exactly what is going on.

Monday, February 25, 2008

Voting and stupidity

Today on campus, there was a little tempest in a teapot over a lecture by Wendy McElroy, who gave a lecture titled "Don't Vote! It's immoral and it wastes your time!" I didn't see the talk (choosing to go to swing dancing class instead), but I managed to learn a lot from it: not only am I surrounded by stupid, but I've found the stupid doesn't have much specificity to religion, as I had previously been inclined to think from viewing the upwelling of stupid seen in response to recent religion critics.

I say this based on a post by Lester Hunt, one of our resident political philosophers who helped publicize the talk, who has published the angry e-mails he's gotten. In the comments threads of both a local news story and a post by another local philosopher, Harry Brighthouse, people have been accusing McElroy of being in league with racists. Yikes people, what is wrong with you?

Let's get something straight: your vote doesn't count. If Florida 2000 taught us anything, it is that the margin of error for our vote-counting techniques is greater than one. The idea that by voting you play some monumental role in democracy is nonsense. I'm at a loss to understand why people think a self-selected yet entirely representative 10% of the population doing all the voting is in any way worse than having 100% turnout. Our civic religion, which is so touched by that completely bogus "one vote changed history" list, is completely ungrounded in reality.

That's not to say that it isn't sometimes, often, or usually a good idea to vote and encourage others to do so. Elections really are a useful tool for good government. It only means that the knee-jerk response, which will not even consider such criticisms as McElroy has voiced, is indefensible. Yes, that's a link to the essay version of what she probably said tonight, for all those tempted to spout off.

Incidentally, though I admire this part of Hunt's post:
What the Hell is going on here? Is democracy itself the last surviving state religion? Is voting some kind of sacrament? Is hostility to politics the last remaining sacrilege, the only one you can still hate?
I take serious issue with this one:
Maybe the title of the talk is just a tad inflammatory. She seems to be telling me (yes, I do vote!) that I am immoral. The title of one of Wendy's essays on the subject -- Act Reponsibly: Don't Vote -- makes the same point in a less offensive way. I guess it would have been better. Oh, well, this is our title and we're stuck with it.
There is something that strikes me as lazy and cowardly about being unwilling to tell people that they are immoral. Last I checked, the idea that some things are morally wrong was at least an idea accorded serious consideration in philosophy. The sort of things that are morally wrong are things people do, so if you come down on the "pro" side of the question "are some things morally wrong?", you have to say that some things people do are morally wrong.

It is amazing how some people cannot even entertain ideas carefully enough to give a sophistical rebuttal. At least religion critics aren't alone in getting this kind of treatment.

Viva la glutamate!

Okay, that title is my best shot at hyping a Mind Hacks post titled Psychosis and the coming glutamate revolution. Looks like one of those science stories that will surprise laypeople but isn't all that surprising if you know a little science. Glutamate is actually one of the most common neurotransmitters, more common than dopamine or serotonin, so it isn't surprising that understanding glutamate systems would be important to understanding mental illness.

Oh, snap!

Thom Brooks does a takedown of postmodernist ignorance of actual philosophy. Potentially mineable for a quote of the time being, but really, read the whole thing. Philosophers need to do this more often.

Notebook: Beyond Inanity

While doing some random browsing, I discovered that Peter Unger is writing a book called Beyond Inanity, and the first four draft chapters are up on his website. (Yes, you read that right: I consider looking at the boring webpages of academics an appropriate part of random browsing.) Anyway, here are my notes on the first chapter. Notes on the other three will follow when I have time to read them.

The book's central claim is that inane claims dominate contemporary philosophy. They are not quite as bad as they used to be in, say, the 60's, but they are still at most "a little bit better." Attentive readers of this blog will know that this lines up pretty well with some of my worries about contemporary philosophy. What's interesting, though, is Unger's exact diagnosis of the problem: he proposes defining "inane" as meaning "insubstantial," having nothing to do with concrete reality. Determinism and the doctrine of free will are substantial claims, while the claim that determinism is incompatible with free will is an inane one. Though in the first chapter he is not so clear about examples outside of metaphysics, it seems this distinction would be replicated in areas such as ethics and epistemology, with both disciplines giving rise to inane and substantial claims. Unger emphasizes that he does not think there is anything wrong with inane claims in and of themselves, only that philosophy should not be dominated by them.

In the first chapter, there are not well-developed arguments for avoiding inane claims, but here's one point in favor of Unger's distinction, and avoiding the inane side. We want to know how the world actually is. We want to know if we, personally, should give a lot of money to Oxfam, or are entitled to our beliefs, or have free will. Abstract claims that say nothing about the concrete world may be useful tools for figuring out the above sort of claims about the concrete world, but they're not plausible candidates for an end unto themselves. Argue about them for the sake of answering concrete questions, but don't lose sight of the concrete questions. There also seems to be a problem with certain sorts of mistakes creeping in when we forget what we're supposed to be talking about, when we move from concrete cases we can point to, to abstractions that can be misunderstood a dozen different ways.

It will be interesting to see how this develops. Unger may well turn out to have other things in mind; indeed, I hope he has more things in mind which I never would have thought of on my own.

White People

I, along with half of the internet, have discovered a blog called Stuff White People Like. Each post describes a different thing white people like, such as irony, i.e. being a white person who links to a site making fun of white people.

In other news, I was recently in the college library studying, in the coffee-shop section where flashy books on ethnic and gender studies are put on display. I noticed the following title: Critical White Studies. The back cover began by declaring "No longer content with accepting whiteness as the norm, critical scholars have turned their attention to whiteness itself." Though I didn't have time to read it, I suspect it would have turned out to be just as funny as the above-linked website, only unintentionally.

Thought of the Time Being

Drop the whole "devil may care" attitude. He is never going to care, no matter what you do.

Wednesday, February 20, 2008

What is irrationality?

In the blog circles I run in, few doubt that irrationality is a major problem in the world. Forget religion: people's beliefs about how governments make life-and-death decisions are also a mess. Yet some people seem maddeningly oblivious to this: for example, over winter break I read an essay by philosopher Peter van Inwagen claiming it was just obvious that politics is full of rational disagreement, and even claiming a consensus for the idea that it isn't irrational to hold political beliefs on insufficient evidence.

Someday I will write a good-sized essay about the irrationality of politics. Now, though, I think it's just worth stepping back and trying to say what irrationality is. But I know enough philosophy not to try to give a final, once-and-for-all definition, one which will give the correct answer in all situations we might devise (if you haven't learned why this doesn't work yet, read this). No, instead I'm just going to throw out some rough ideas, some kind of groundwork to build on. It's not much, but it's worth getting these things clear.

First of all, irrationality, applied to beliefs, means something is wrong with your beliefs. But it's not that they're false. People in ancient Babylonia who believed that the Earth was flat were just as wrong about the facts as modern people who believe the same. However, modern flat-Earthers are irrational while the Babylonians weren't, or at the very least the Babylonians weren't irrational to the same extent. It is not entirely implausible to think that the Babylonians would have been irrational to believe the Earth round. A true belief can be irrational.

On the other hand, what's wrong with an irrational belief isn't totally disconnected from truth. It isn't about pragmatic rationality--what will make you feel good, or make people like you, or stop the evil AI program from torturing you (or the evil God, for that matter). I realize some will dispute the claim that it is always irrational to believe things on pragmatic grounds, but at the very least letting pragmatic reasons guide us too much risks leading us into irrational beliefs.

Rationality is about getting at the truth. When we don't have direct access to the truth, we can at least adopt methods more likely to give it to us. Failure to do so is irrationality, or at least a component of it.

The flat-Earth example suggests irrationality is related to being clearly wrong. Modern flat Earthers are clearly wrong, the ancient Babylonians were wrong, but not clearly so. This gets some interesting support from attempts to define what a psychiatric delusion is (from Wikipedia):
Although non-specific concepts of madness have been around for several thousand years, the psychiatrist and philosopher Karl Jaspers was the first to define the three main criteria for a belief to be considered delusional in his book General Psychopathology. These criteria are:

* certainty (held with absolute conviction)
* incorrigibility (not changeable by compelling counterargument or proof to the contrary)
* impossibility or falsity of content (implausible, bizarre or patently untrue)

These criteria still live on in modern psychiatric diagnosis. In the most recent Diagnostic and Statistical Manual of Mental Disorders, a delusion is defined as:

"A false belief based on incorrect inference about external reality that is firmly sustained despite what almost everybody else believes and despite what constitutes incontrovertible and obvious proof or evidence to the contrary. The belief is not one ordinarily accepted by other members of the person's culture or subculture (e.g., it is not an article of religious faith)."
I have doubts about whether there is a clear difference between psychiatric delusion and normal irrationality, but even if I'm wrong about that, there's some relation. A common-sense way of looking at things is that delusions are a form of extreme irrationality. The common sense view would say that even if religious beliefs are irrational, they are not on the same level as a delusion. The DSM would suggest that the presence of the community makes them qualitatively different. Actually, I'm not sure I disagree with that: perhaps the difference between ordinary irrationality and much of what's classified as delusional is that ordinary people need to have their irrational beliefs socially reinforced.

In spite of these suggestions, a belief can be irrational even if it's not clearly false. A belief can be irrational simply because you really have no idea whether it's true. It would be irrational for me to take a passionate stand on whether the number of stars in the universe is even or odd.

If I had to take a stab at what it means to be irrational, here's what I'd do: I'd try to combine two strands of thought about knowledge in modern epistemology: internalism and externalism. Internalism says the rationality part of knowledge is entirely a matter of things the subject has access to. Externalism says it's a matter of things the subject doesn't have access to: something in the ballpark of "the reliability of your methods." I'd suggest a compromise, an inxternalism if you will, where it's a matter of the methods that you have access to. Someone who believes their eyes, having no way of knowing their eyes deceive them, isn't irrational. But someone who forgoes available methods of rational inquiry in favor of sophistry is irrational. That's why it's important to do your best to be rational, at least if you want to get at the truth.

Quote of the Time Being

The fact that Christianity is less believable than a talking goat is exactly why we're atheists.
-Richard Carrier

Now go read the whole thing.

Gary Habermas' curious evasion

The following message by Gary Habermas re: the Flew Scandal:
In our case, when asked if he knew me, Tony said something like, "yes, I think we met at a debate." What appeared to question his memory of me was actually the very opposite: a very accurate comment from more than 20 years ago! The first time we met (in 1985) was at a debate, and it was not one of our dialogues, either, so Tony was entirely accurate.
The quote from the article is not quite accurate. It differs in an ever so slight but important way:
"Have you ever run across the philosopher Paul Davies?” In his book, Flew calls Paul Davies "arguably the most influential contemporary expositor of modern science."

"I’m afraid this is a spectacle of my not remembering!"

He said this with a laugh. When we began the interview, he warned me, with merry self-deprecation, that he suffers from "nominal aphasia," or the inability to reproduce names. But he forgot more than names. He didn’t remember talking with Paul Kurtz about his introduction to "God and Philosophy" just two years ago. There were words in his book, like "abiogenesis," that now he could not define. When I asked about Gary Habermas, who told me that he and Flew had been friends for 22 years and exchanged "dozens" of letters, Flew said, "He and I met at a debate, I think."
Notice the absence of the word "yes." In the context of the section, it's clear that Flew couldn't give a definite answer to the question, contrary to what Habermas indicates.

Flew's response on that point could have been a relatively isolated glitch, involving failure to match a name to anything rather than completely forgetting about Habermas. Still, it's one of a number of pieces of evidence of the mental decline Flew has gone through.

My blog in a nutshell

Source: xkcd

Some day it will be "someone is wrong in a philosophy journal" for me.

Dating & plagiarism

Apparently, profile plagiarism is rife on online dating sites. This shouldn't surprise me, but it does. Fodder for fun-poking if I ever join one of these sites. HT: Mindhacks

Comic Book politics

Brilliant piece on politics in recent comic books (via a Sully guest blogger):
Perhaps the most interesting thing about these stories is why they fail... there is often a strong (if unintended) neoconservative subtext even in stories by left-leaning authors.

The "Civil War" storyline may provide the clearest illustration of this. The Superhero Registration Act is a straightforward analogue of the USA PATRIOT Act; the rhetoric of its opponents could have been cribbed from an ACLU brief. But under scrutiny, their civil libertarian arguments turn out to hold very little water in the fictional context. The "liberty" the act infringes is the right of well-meaning masked vigilantes, many wielding incredible destructive power, to operate unaccountably, outside the law -- a right no sane society recognizes. In one uneasy scene, an anti-registration hero points out that the law would subject heroes to lawsuits filed by those they apprehend. In another, registered hero Wonder Man is forced to wait several whole minutes for approval before barging into a warehouse full of armed spies from Atlantis. Protests about the law's threat to privacy ring a bit hollow coming from heroes accustomed to breaking into buildings, reading minds, or peering through walls without bothering to obtain search warrants. Captain America bristles at the thought of "Washington … telling us who the supervillains are," but his insistence that heroes must be "above" politics amounts to the claim that messy democratic deliberation can only hamper the good guys' efforts to protect America. The putative dissident suddenly sounds suspiciously like Director of National Intelligence Mitch McConnell defending warrantless spying.
I used to think Alan Moore had given superheroes all the deconstructing they needed, but perhaps I was wrong. It will give the next generation of literary talent something to do.

Humanist Symposium up!

The 15th edition of the Humanist Symposium is up at Café Philos. My top pick is Alonzo Fyfe's Lying, the kind of thing I wish I had written myself. Except for this last part:
These are not good people. Yet, they have done such a good job of taking over our society that they have blinded us into realizing them for what they really are.
Taken over society? He talks as if the sort of people he talks about are in the minority...

Monday, February 18, 2008


This will be good to find my way back to when the U.S. is hit by a coup d'etat. Just kidding. I think.

HT: Sullivan

Solved Philosophy

Richard has a post on Solved Philosophical issues. I propose one: just because a grad student thinks a philosophical issue is solved doesn't mean it's solved, based on looking at Richard's list. For example:
1. Knowledge does not require certainty. But nor does justified true belief suffice.
The idea that justified true belief doesn't suffice has come under assault lately. For one, people in Hong Kong don't seem to have the relevant intuitions. Even working within more traditional considerations, I just read a paper suggesting maybe we should hold onto the JTB theory, in spite of contrary intuitions, because the theory explains so much ("What are good counterexamples?" in 2003 Philosophical Studies).

Of course, we could consider a revised sense of settling a philosophical issue, where it's just like letting the dust settle: it could get kicked up again 30 seconds from now. This is a pretty good model for what's happened with the JTB theory: Gettier's attack sidelined it for decades, but in the last decade or so people have begun to be skeptical. Something similar has happened with non-reductive physicalism, I understand.

Another useful example:
8. It's not analytic (true by definition) that cats are animals. But it is metaphysically necessary: there is no possible world containing a cat that is not an animal.
This is based on some philosophy of language claims which I've complained a good deal about on this blog. This raises a new question: if 95% of philosophers take a view, is the issue settled? But if that's what settling means, how much should a young philosopher like myself care? Not very much, I'm inclined to think.

The most interesting category is the last: "more controversial" examples of settled philosophy: WTF? How is something controversial settled? This suggests an equivocation at the heart of the post: between the "well supported" and the things that are, in a more natural sense, settled (say, mostly uncontroversial).


I'm following Richard's lead and switching my comment system to OpenID. It's a compromise between having pesky, unsigned anonymous comments and only letting blogger people comment. OpenID lets you comment as long as you have an account with one of several different systems.

PC 63

The 63rd edition of the Philosopher's Carnival is up at Noah Greenstein's place.

Against whining (or: when will Rev. Nedd stop beating his wife?)

(Cross posted at God is for Suckers!)

Somewhere in the synoptic gospels, Jesus tells a parable of a widow who goes daily to beg a less-than-principled judge for justice, and eventually gets it just because the judge gets tired of listening to her. This may be my favorite parable in the entire Bible: on the one hand, the stated moral is dubious (as this is supposed to be a model for how prayer works), but on the other hand, it does provide a memorable statement of the fact that sometimes, whining will get you what you want.

The fact that whining sometimes works, however, doesn't mean it's a good strategy (that's what makes the moral of the parable so dubious). First of all, you're working from a position of weakness. The fact that your target sees it lowers the chances of success vs. other methods, and means you'll look contemptible even if the target gives in. Furthermore, whining is a technique which has no inherent tendency to favor the person in the right. True, it's a useful tool for the little guy, but sometimes the little guy is wrong. In our modern society, full of special interest groups struggling to manipulate the media and public opinion to their advantage, too many people avoid speaking freely for fear that they will be hit by an attack of concentrated whining.

For example, I once heard a story involving one of Bill Richardson's political campaigns, in which his opponent attacked him as not being a real Hispanic. This was obviously a silly idea, and it happened that Richardson went on a talk show where the host thought to set up a joke where Richardson would poke fun at the idea by insulting his opponent in Spanish. Richardson went with it without missing a beat, and it was funny. The catch? The host decided to use the word "maricón," which happens to be a derogatory term for homosexual.

Gay rights groups, naturally, were furious, and demanded an apology. Now, granted, it might have been nice if some other insult had been used for the joke--say, if Richardson had called the guy's mother a puta or some such. But really, so what? Obviously, Richardson's comment had nothing to do with denigrating homosexuals; it had to do with poking fun at something stupid someone else had said. When the gay rights groups got upset about it, it wasn't because any great harm had been done. On some level, they knew that they were just trying to show that while a minority, they were numerous enough to whine a politician into submission.

Keeping on the gay rights track, another example: when I was in high school, high school students started using "that's so gay" to mean "that's so stupid," and I recently read an article in a local magazine celebrating attempts by high school students to stop it. Here, I'm somewhat more sympathetic to the protests, because I've seen firsthand that high school can be a pretty crappy time for students who differ from the norm. However, the problem of weakness remains, insofar as just insisting people not use the phrase is whiny. I seriously doubt loudly insisting on the point has actually done much good. At best it would turn casual disrespect into focused, if muffled, resentment.

You know what I have seen work, though? Ridicule. When students started saying "homophobia is so gay," it poked fun. It got laughs. It deflated blowhards. It was a hell of a lot more effective than whining.

I was prompted to write this post when I saw this at the blog of Hemant Mehta, of eBay atheist fame. You see, someone paid to put up a billboard with the words "Why do atheists hate America?" and someone got the bright idea to send a whiny letter. Hemant thinks it's a "fantastic blueprint if you wanted to write one of your own." I think it's one of the most ridiculous things I've ever seen: five paragraphs full of stock, emotionally charged phrases with little substance, ranging from a knee-jerk appeal to patriotic sentiment (the billboard is said to be "un-American") to a dubiously sincere closing ("Wishing only the best for you").

I've written before about how atheism is becoming more like a movement. I really do find that thrilling. However, I hope we don't become another whiny special interest group. We want to win because we're smarter and funnier than the opposition, not because we can out-whine them. And honestly, I'm not sure we can out-whine them: Bill Donohue of the Catholic League has proved a masterful whiner, to the point that his Wikipedia page is dominated by a list of twenty-one things he's whined about. Of course, Donohue doesn't always get the effect he wants, but that's because of his deficit in the smart & funny department, not any lack of ability to whine.

Back to what I said about ridicule. Ridicule is really a wonderful tool because, like whining, it lets the little guy level the playing field; but unlike with whining, you have to have a point if you're going to avoid looking like a jackass yourself. The fact that a well-aimed bit of ridicule has a point means it can earn you some real respect. With this in mind, I decided to send the following message to the people behind the billboard Hemant was talking about:
Dear Rev. Nedd and In God We Trust,

I recently heard about the billboard you are sponsoring, which asks, "Why do atheists hate America?" This made me wonder: when is Rev. Nedd going to stop beating his wife?

Sunday, February 17, 2008

CotG 85

The 85th edition of the Carnival of the Godless is up at Greta Christina's blog, and comes to us "lovingly and painstakingly illustrated" with raunchy pulp fiction cover art. It's interesting to see how much consciously irreverent cover art of that sort Greta could find.

There are, of course, other carnivals out there. At the moment, I'm feeling lazy, so I'll just direct the interested to PZ's most recent carnival round-up.

Total uncertainty = 50% chance?

There's a brief thread at Philosophy etc. on whether being totally uncertain about something ("you haven't the faintest clue") is equivalent to giving it 50% odds of being true. Richard has an argument pro; I have an argument con that I've sort of taken for granted for as long as I've thought about it, though I now realize it's potentially controversial.
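For what it's worth, here is a sketch of one standard worry about the 50% view (my own reconstruction for illustration, not necessarily the argument made in the thread): assigning 50% to every claim you're totally clueless about breaks down as soon as the possibilities are carved into more than two mutually exclusive options.

```python
# Partition worry (illustrative reconstruction, not necessarily the thread's
# argument): suppose I haven't the faintest clue what color a marble is.
# If total uncertainty meant 50% odds, then "it's red," "it's green," and
# "it's blue" would each get probability 0.5 -- yet they are mutually
# exclusive, so their probabilities should sum to at most 1.
hypotheses = ["red", "green", "blue"]
total = sum(0.5 for _ in hypotheses)
print(total)  # 1.5 -- exceeds 1, so 50% can't apply to every partition
```

So at most one way of carving up the possibilities can get the 50% treatment, and total uncertainty gives no way to privilege one carving over another.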

Friday, February 15, 2008

The Inoculated Mind is back!

Some of you may be familiar with The Inoculated Mind, a podcast which I vaguely knew had gotten some attention, but never really listened to. It was produced by an undergraduate at UC Davis, who then went off to grad school and let the thing lapse. By luck, though, he went to grad school in Madison; I've met the guy, he's cool, and now his show is back! Meeting him prompted me to actually listen to it, and it's a good show. Check it out.

What we say and what we mean

Last week Tuesday in class we were discussing reference and possible worlds. I asked what a possible world is--superficially, I felt comfortable with the concept, but I knew that there was debate. I knew about Lewis' crazy view on which every possible world is actual. I knew of the view that they are propositions providing a complete description of a world. The professor had other suggestions, such as that they are collections of properties, but at the end of the day we were discussing the relationship between sentences and propositions. At the end of class, I began to have a new worry which I didn't quite feel like raising with the professor: I wasn't sure what a proposition was!

For a philosophy major, this constituted a minor existential crisis. Debating the status of propositions is what philosophers do, so if you're a philosopher, you'd better know what a proposition is. I had gotten comfortable with the word some time ago; it wasn't like going into a science class and being confused by a bit of terminology you were supposed to have learned the day you missed. Suddenly, though, I realized I didn't know what the word meant.

The Stanford Encyclopedia of Philosophy ended up coming to my rescue: among the proposed accounts of what a proposition is was "the meanings of sentences." Suddenly, the last philosophy class made sense: we had been discussing the relationship between what we say and what we mean. This simple fact had been obscured by a mess of jargon, and in retrospect I think some more substantial points were obscured by it as well.

The great irony turns out to be this: arguably, my professor's use of that jargon was a failure by him (and the philosophical community he takes his cues from) to say what he meant.

McCain's character on display

Today/yesterday (I write this from post-midnight date limbo) John McCain voted against a bill that would apply military interrogation rules to the CIA. Here's McCain's defense of his vote, provided in full by Andrew Sullivan, to Sullivan's credit:
Mr. President, I oppose passage of the Intelligence Authorization Conference Report in its current form.

During conference proceedings, conferees voted by a narrow margin to include a provision that would apply the Army Field Manual to the interrogation activities of the Central Intelligence Agency. The sponsors of that provision have stated that their goal is to ensure that detainees under American control are not subject to torture. I strongly share this goal, and believe that only by ensuring that the United States adheres to our international obligations and our deepest values can we maintain the moral credibility that is our greatest asset in the war on terror.

That is why I fought for passage of the Detainee Treatment Act (DTA), which applied the Army Field Manual on interrogation to all military detainees and barred cruel, inhumane and degrading treatment of any detainee held by any agency. In 2006, I insisted that the Military Commissions Act (MCA) preserve the undiluted protections of Common Article 3 of the Geneva Conventions for our personnel in the field. And I have expressed repeatedly my view that the controversial technique known as "waterboarding" constitutes nothing less than illegal torture.

Throughout these debates, I have said that it was not my intent to eliminate the CIA interrogation program, but rather to ensure that the techniques it employs are humane and do not include such extreme techniques as waterboarding. I said on the Senate floor during the debate over the Military Commissions Act, "Let me state this flatly: it was never our purpose to prevent the CIA from detaining and interrogating terrorists. On the contrary, it is important to the war on terror that the CIA have the ability to do so. At the same time, the CIA’s interrogation program has to abide by the rules, including the standards of the Detainee Treatment Act." This remains my view today.

When, in 2005, the Congress voted to apply the Field Manual to the Department of Defense, it deliberately excluded the CIA. The Field Manual, a public document written for military use, is not always directly translatable to use by intelligence officers. In view of this, the legislation allowed the CIA to retain the capacity to employ alternative interrogation techniques. I’d emphasize that the DTA permits the CIA to use different techniques than the military employs, but that it is not intended to permit the CIA to use unduly coercive techniques – indeed, the same act prohibits the use of any cruel, inhumane, or degrading treatment.

Similarly, as I stated after passage of the Military Commissions Act in 2006, nothing contained in that bill would require the closure of the CIA’s detainee program; the only requirement was that any such program be in accordance with law and our treaty obligations, including Geneva Common Article 3.

The conference report would go beyond any of the recent laws that I just mentioned – laws that were extensively debated and considered – by bringing the CIA under the Army Field Manual, extinguishing thereby the ability of that agency to employ any interrogation technique beyond those publicly listed and formulated for military use. I cannot support such a step because I have not been convinced that the Congress erred by deliberately excluding the CIA. I believe that our energies are better directed at ensuring that all techniques, whether used by the military or the CIA, are in full compliance with our international obligations and in accordance with our deepest values. What we need is not to tie the CIA to the Army Field Manual, but rather to have a good faith interpretation of the statutes that guide what is permissible in the CIA program.

This necessarily brings us to the question of waterboarding. Administration officials have stated in recent days that this technique is no longer in use, but they have declined to say that it is illegal under current law. I believe that it is clearly illegal and that we should publicly recognize this fact.

In assessing the legality of waterboarding, the Administration has chosen to apply a "shocks the conscience" analysis to its interpretation of the DTA. I stated during the passage of that law that a fair reading of the prohibition on cruel, inhumane, and degrading treatment outlaws waterboarding and other extreme techniques. It is, or should be, beyond dispute that waterboarding "shocks the conscience."

It is also incontestable that waterboarding is outlawed by the Military Commissions Act, and it was the clear intent of Congress to prohibit the practice. The MCA enumerates grave breaches of Common Article 3 of the Geneva Conventions that constitute offenses under the War Crimes Act. Among these is an explicit prohibition on acts that inflict "serious and non-transitory mental harm," which the MCA states "need not be prolonged." Staging a mock execution by inducing the misperception of drowning is a clear violation of this standard. Indeed, during the negotiations, we were personally assured by Administration officials that this language, which applies to all agencies of the U.S. Government, prohibited waterboarding.

It is unfortunate that the reluctance of officials to stand by this straightforward conclusion has produced in the Congress such frustration that we are today debating whether to apply a military field manual to non-military intelligence activities. It would be far better, I believe, for the Administration to state forthrightly what is clear in current law – that anyone who engages in waterboarding, on behalf of any U.S. government agency, puts himself at risk of criminal prosecution and civil liability.

We have come a long way in the fight against violent extremists, and the road to victory will be longer still. I support a robust offensive to wage and prevail in this struggle. But as we confront those committed to our destruction, it is vital that we never forget that we are, first and foremost, Americans. The laws and values that have built our nation are a source of strength, not weakness, and we will win the war on terror not in spite of devotion to our cherished values, but because we have held fast to them.
I do not dispute for a moment that it's worthwhile to debate the particulars of McCain's position, and ask what specific restrictions he thinks should be put on the CIA beyond a ban on waterboarding.

However, I see little evidence to support the line Sullivan is taking, that this is a heartbreaking betrayal of principle by McCain, made to appease pro-torture Republicans. He's got no reason to appease them. He trounced the two main pro-torture Republicans (Giuliani and Romney) in the primaries, forcing them out of the race and getting endorsements from them (Romney's came today). There's not the faintest chance that the torture issue will cost him the nomination anymore, and his solid majorities in the primaries indicate few Republicans are going to stay home in November over it.

On the other hand, the response from Sullivan et al. is entirely predictable. This makes the vote, from a political perspective, moronic. As a sign of the political stupidity of the vote, let me point out that I can safely bet, in advance, that most comments on this post are going to be negative. The vote wasn't politically motivated; McCain's instincts don't work that way.

This kind of behavior from McCain is something we should expect by now. Do a YouTube search for Obama's name: you'll find videos of speeches full of empty phrases eloquently delivered. Do a YouTube search for McCain's name: you'll find attack videos over politically inept comments which, nonetheless, often contain worthwhile points.

The treatment McCain has gotten reminds me too much of the media treatment of recent atheist writers. "Look at this shocking thing he said! Forget his arguments, look at what he said!" And the way Obama supporters defend Obama's platitudes reminds me too much of how Evangelical Christians deal with requests for serious intellectual reflection. Earlier in the week, I talked to an Obama-supporting friend, and was told that if I worried about substance there was an Obama speech online I should listen to, but that the people asking for substance aren't really sincere. I later realized the speech was one I had already heard and not been impressed by. "If you want evidence, read Lee Strobel! And if you aren't impressed by Lee Strobel, you must not really care about evidence!"

McCain's instincts about Obama are right in touch with my own. And Obama's supporters, well, they make it hard to tell whether something like this is a parody.

I don't claim any absolute insight into which candidate would have a better presidency, when all the consequences of their actions are tallied. From where I stand, though, Obama has too many red flags attached, things that are conspicuously absent with McCain.

If nothing else, a President McCain is more likely to get the kind of scrutiny that politicians need. We don't want the kind of guy who can try to invade a foreign country based on an idiotic plan involving a handful of refugees, and then somehow convince America he's such a wonderful guy his face belongs on the 50-cent piece. (Though my angle on this point may seem a little strange to some, I wouldn't be the first to suggest that Obama might be that kind of guy.) If McCain screwed up that badly, he wouldn't even make a decent attempt at talking his way out of it. More likely, he'd stand up and say whatever happened to come to mind, invariably pissing people off. I feel safer with that than with Obama's sweet talking.

What we need is not a candidate we can believe in, but one who deserves to be believed.

Thursday, February 14, 2008

Happy Valentine's Day!

I had meant to post this earlier today, but better late than never: Post Secret has a special Valentine's Day edition. If you're feeling down this V-day, reading Post Secret will let you know that others have it worse than you.

We're All Going To Hell

Check it out:

Now check out their website for other cool, largely free, stuff. I just added a new entry to my "favorite music" on Facebook. Strongly recommended for anyone wondering about the lack of atheist music--what makes a song like "We're All Going To Hell" work is precisely that it doesn't try too hard, and just has some gentle fun with religion.

Now I have to hope that they don't become *too* popular. Not that I buy all that nonsense about "selling out," but yes, if major radio stations started playing them every half-hour, it would get annoying, as with any other group, and it would become embarrassing to admit to liking them. But that wouldn't make me take them off my Facebook page.

SC 80

The 80th edition of the Skeptic's Circle is up at Bug Girl's Blog.

Wednesday, February 13, 2008

Medicalizing sexual problems

Mindhacks spots some dubious medicalization of sexual problems:
Apparently "About 43% of women and 31% of men in the U.S. between ages 18 and 60 meet criteria for sexual dysfunctions, according to a 1999 report on the sexual behavior of more than 3,000 U.S. adults".

This report was a research study published in the Journal of the American Medical Association that classified sexual dysfunction as reporting any one of the following during the last 12 months:
(1) lacking desire for sex; (2) arousal difficulties (ie, erection problems in men, lubrication difficulties in women); (3) inability achieving climax or ejaculation; (4) anxiety about sexual performance; (5) climaxing or ejaculating too rapidly; (6) physical pain during intercourse; and (7) not finding sex pleasurable
Almost all of which fall within the normal range of a year's worth of regular sexual experiences, which probably explains why a third to almost half of people surveyed experienced at least one - but hardly a marker of a serious medical problem in itself.
To plop "anxiety about sexual performance" in the same conceptual space as cancer is ludicrous. Now, I think it would be interesting to have an open debate about whether it would be ethical to let people use biotechnology to try to enhance aspects of their sex lives they find unsatisfactory, for no greater reason than their not being satisfied. But please let's not pretend that doing so would simply be a matter of treating recently-discovered diseases. To pretend that is simply to willfully obscure the argument by twisting language, the sort of thing George Orwell so rightly railed against.

This is, by the way, part of the reason I was skeptical of the porn nation guy and his alleged "sex addiction."

Notebook: SEP on hedonism

From the SEP article on consequentialism:
These points against hedonism are often supplemented with the story of the experience machine found in Nozick (1974, 42-45; cf. the movie, The Matrix). People on this machine believe they are surrounded by friends, winning Olympic gold medals and Nobel prizes, having sex with their favorite lovers, or doing whatever gives them the greatest balance of pleasure over pain. Although they have no real friends or lovers and actually accomplish nothing, people on the experience machine get just as much pleasure as if their beliefs were true. Moreover, they feel no (or little) pain. Assuming that the machine is reliable, it would seem irrational not to hook oneself up to this machine if pleasure and pain were all that mattered, as hedonists claim. Since it does not seem irrational to refuse to hook oneself up to this machine, hedonism seems inadequate. The reason is that hedonism overlooks the value of real friendship, knowledge, freedom, and achievements, all of which are lacking for deluded people on the experience machine.
This seems a rather odd statement of what Nozick's experiment is supposed to prove: on serious hedonism, the ideal drug-high (or perhaps electrode-high) trumps the more subtle experiences of an experience machine. The real purpose of Nozick's machine, I thought, was to show that it matters whether our experiences correspond to reality.

Tuesday, February 12, 2008

January reviews in short

This is a bit late, but here it goes:

Sensation and Perception by E. Bruce Goldstein: Interesting information on the psychology of perception here. Three Stars.

The Sandman Vol. 4: Season of Mists by Neil Gaiman: Gaiman has a unique talent for making fantasy truly magical. The story of Lucifer giving up the keys to hell is the best Sandman thus far. Five Stars.

The Theory of Morality by Alan Donagan: Decent exposition of what the author calls the Hebrew-Christian moral tradition. Three Stars.

God and the Philosophers by Thomas V. Morris (ed.): Some nice material on the personal side of philosophy, and the actual philosophy is provocative for all its flaws. Four Stars.

Tragic Sense of Life by Miguel de Unamuno: I read this as my first full book in a second language. Beautiful prose, though the content left me with the suspicion that I'd have enjoyed it less if I understood it better. Three Stars.

The Conquest of Happiness by Bertrand Russell: A book on an important subject by a great philosopher and masterful writer. At times paradoxical, but also with real insight. Four Stars.

Homage to Catalonia by George Orwell: Records the experiences that led Orwell to reject communism and provided the basis for 1984. Also a darn good story. Five Stars.

The Sandman Vol. 5: A Game of You by Neil Gaiman: Usual good work from Gaiman. Four Stars.

The Sandman Vol. 6: Fables and Reflections by Neil Gaiman: Brilliant mixing of myth and history. The story of America's first and only emperor is especially beautiful. Also features chibi Death and Dream (they're so cute!). Five Stars.

Naming and Necessity by Saul Kripke: Had to read this for the Metaphysics class I'm taking right now. I feel like the only person in the world who's not impressed by Kripke--towards the end he deals with apparently clear counter-examples to his claims with some very brief hand-waving. Two Stars.

BB on thought experiments

The Barefoot Bum has a nice follow up to my previous thoughts on thought experiments, hitting some points I missed.

Monday, February 11, 2008

The time-traveling kidnappers.

Another metaphysics post, this time getting away from Kripke and language:

One example introduced on the first day of the metaphysics class I'm taking was of a 17-year-old slacker who goes off, spends three years in the military, and as a result changes significantly in character: he is more disciplined, and so on. When he gets back, he reports, "I'm a different person now." Philosophers, being philosophers, may be inclined to ask whether this might literally be true.

It sounds silly, but I've thought of another thought experiment that suggests something significant may be going on here. Imagine a person is kidnapped by time travelers--whether from another planet or a future Earth matters not. In addition to time travel, their technology includes great medical and genetic technologies, which allow the subject to be kept alive indefinitely. Over the course of ten thousand years, he goes on many adventures, entirely forgetting his original life for all practical purposes, and his genetic code is slowly altered, one tiny, insignificant bit at a time, until it is unrecognizable. Maybe he has some vague ideas about what it was like to live in his home milieu, and maybe his genetic code could still be recognized as originally human, but both memories and genetics have been altered (slowly, over the course of ten thousand years, remember!) to the point where he could no longer be identified with his original self.

To complete the thought experiment, imagine that somehow the time traveling technology finally places him back in his home setting, an hour or two after the abduction. Would it make any sense whatever for the subject's previous friends and family to treat him as in any way the same person? I think not. I dare say he would not be the same person. This is in spite of the fact that the example is different only in extent from the military case, and I have stipulated that the change is completely gradual. It is interesting to think that the ex-slacker will likely not interact with many of his old friends in quite the same way, especially if they themselves have remained slackers. The time traveler is only a more extreme form of that.

Vjack on science and religion

I'm always ticked off by attempts to insist science and religion are compatible when they're clearly motivated by politics rather than reason. Fortunately, Vjack has a smackdown of a particularly silly example: a refusal by supposed defenders of evolution to say science conflicts with young-earth creationism. Yikes!

Notebook: The SEP on thought experiments

I'm currently taking a thought-experiment heavy ethics class, which has gotten me worrying a lot about the value of thought experiments. The prof suggested I look into what the Stanford Encyclopedia of Philosophy had to say about them. It didn't cover all the issues I was hoping for, but it did have interesting material, especially this proposal on the nature of thought experiments:
Recent years have seen a sudden growth of interest in thought experiments. The views of Brown (1991) and Norton (1991, 1996) represent the extremes of platonic rationalism and classic empiricism, respectively. Norton claims that any thought experiment is really a (possibly disguised) argument; it starts with premisses grounded in experience and follows deductive or inductive rules of inference in arriving at its conclusion. The picturesque features of any thought experiment which give it an experimental flavour might be psychologically helpful, but are strictly redundant. Thus, says Norton, we never go beyond the empirical premisses in a way to which any empiricist would object. (For criticisms see Bishop 1999; Brown 1991, 2004a, 2004b; Haggqvist 1996; Gendler 1998, 2004; Nersessian 1993; and Sorenson 1992; and for a defense see Norton 1991, 1996, 2004a, and 2004b.)
This seems an accurate account of the thought experiments described in the article that actually advanced science. The fact that many philosophical thought experiments don't fit this model is perhaps reason to be suspicious of them, and especially suspicious of the suggestion, made by the author of the encyclopedia article, that philosophical and scientific thought experiments are just the same.

Linguistic intuitions and memory

Many students of philosophy, myself included, are convinced that at the end of the day philosophical arguments just have to appeal to intuition. Arguments have to stop somewhere, and it's hard to formulate a totalistic theory of where they can stop, so let's call our arbitrary stopping points intuitions. Maybe a given intuition will be undermined, but appeals to them are hard to rule out in advance.

The standard inclination is to think that intuitions must be a priori, but I think I have a counter-example: linguistic intuitions, intuitions about the meanings of words--things like our intuition about whether the sentence "Hesperus might not have been Phosphorus" is a legitimate, meaningful sentence. Though it's hard to further justify an intuition about this case, it plainly isn't purely a priori: the only way we can know what an English sentence means at all is by experience with English. Such intuitions appear to be a form of memory about how we've heard words used, just a very vague form: we are unlikely to be able to cite specific instances, much less quote them verbatim. This isn't terribly surprising, as research on the psychology of memory has made perfectly clear that memory isn't a video camera and is often quite vague. However, it's interesting to see how far the vagueness can be taken--to the point of giving us something that might at first glance be mistaken for an a priori intuition.

Incidentally, this may explain how Galileo was able to use a thought experiment to figure out that Aristotle's ideas about gravity were wrong (discussion here). It wasn't an a priori deduction, but rather a combination of realizing the logical consequences of Aristotle's ideas and having a vague sense that he (Galileo) hadn't seen things fall that way.

Kripke on speaking loosely

If you asked someone immersed in contemporary Anglophone philosophy who the greatest living philosopher is, an awful lot of them would say Saul Kripke. Surprisingly, Kripke has published very little, concentrating on giving lectures. In one case, a series of lectures was deemed important enough to be transcribed and published, thus giving us one of Kripke's main publications, Naming and Necessity.

One of Kripke's central theses is that names, and many other terms, act as "rigid designators," referring to the same thing in all possible situations. It's hard to see the significance of this until you see some examples. One example: Kripke claims that Hesperus and Phosphorus--two names given to Venus as it appears in the evening and morning sky, from before people knew the two were the same object--refer to Venus in all possible situations. There is no possible situation in which the words refer to a separate object (there are possible situations in which the words are used differently, but when we use the words in our situation, we refer only to Venus, not to some other possible object).

Now, my gut reaction to this is that "Hesperus might not have been Phosphorus" is a perfectly sensible, meaningful thing to say, and Kripke actually uttered sentences along those lines in Naming and Necessity. To his credit, Kripke notices this problem, and at the end of the lectures imagines a hypothetical critic rattling off a string of such counter-examples. To this he responds (on p. 142 of my edition) that "when I speak of the possibility of the table turning out to be made of various things [another of Kripke's examples--CH], I am speaking loosely." If not for Kripke's ideas about speaking loosely, the counter-examples would go through, and his project would fail. Yet it strikes me at first glance as a hand-wave, and at second glance as an instance of the sort of misguided approach to language I've criticized several times before on this blog, such as in "Can water taste funny?" The idea--and its proponents would surely try to find something more nuanced--is that words cannot mean multiple things. Applied to Kripke, it is the idea that "Hesperus" cannot mean "Venus" sometimes and "the object that appears in the sky at such and such times and locations with such and such brightness" at other times, with frequent equivocation between the two senses. More precisely, the assumption is that the two senses cannot be equally legitimate. One has to be dismissed as mere loose speech, but the grounds for this aren't clear.

Brief Google scholar searches yield nothing on this idea of loosely speaking. I wonder if anyone has ever commented on it, and if so, what has been said in defense of it. It certainly needs more exposition than Kripke has given it.

Sunday, February 10, 2008

Red green blue

Enigmania has been posting on the sky's being blue, including a post titled The sky's blue, therefore it's an object. Oddly, that title may be an important insight in and of itself. "Colored things are objects" seems like a paradigm case of a necessary truth, knowable a priori, yet the sky seems not to fit many of our common-sense ideas of what objects are. It certainly doesn't fit the common-sense assumptions that used to be made of it, namely that it's a solid dome.

This catches my interest, partly because I've been reading Laurence Bonjour's In Defense of Pure Reason, a modern defense of the idea that there are substantive a priori claims. That in and of itself seems reasonable enough, as I don't see any alternative except Humean skepticism, and I'm pretty firmly convinced that skepticism is self-defeating. However, one of Bonjour's main examples bothers me: the idea that no object can be both red and green all over. Scientifically-minded person that I am, I immediately reflected on the fact that the property of the object is to emit light in a certain way, and what we think of as the redness or greenness of the object is a product of how our minds interact with the object. It is possible for the light-waves to be overlaid, though our minds wouldn't give us simultaneously red and green experiences in that case--I think it would be more of a yellowish orange (I'm trying to remember here what it's like to see green and red diodes placed very close together). Now, might we have an experience of something that seems to be simultaneously red and green? I honestly don't know. It's inconceivable insofar as I cannot imagine it, but that doesn't mean it's impossible.

This isn't a huge blow to Bonjour's specific ideas--he emphasizes that apparently a priori knowledge can be fallible. Still, it's interesting to think about.

The creepiness of Obama

Over the weekend, I found a number of stories of people who saw Obama speak and changed their loyalty. The first two are from Vastleft of Corrente, and both involve also seeing another Democrat speak (Edwards in the first case, Hillary in the second case) and being won over by the greater substance of the non-Obama candidates. The other story comes via Andrew Sullivan, about someone who switched to Obama: in this case, all that could be said for Obama is that he had the crowd going crazy.

We are not impressed.

The most disturbing part of what I read over the weekend is a quote attributed to Obama by the person who switched to Hillary: "At some point in the evening, a light is going to shine down and you will have an epiphany and you’ll say, ‘I have to vote for Barack.’" This is the most extreme example yet of Obama's messiah complex. He thinks he's the best thing that's ever happened to America. What gets a great many of his supporters going is that they like the idea of supporting a messiah. It boosts their egos to conceive of themselves as those who will give the messiah to the world.

Corrente also links to a WaPo story (which in turn contains a fuller video) in which Obama accuses people who don't support him of being cynical, as if this were a bad thing. To have watched politics for as long as one has been aware such a thing exists, and to have read in history about what came before, is to be cynical; it's impossible not to be. Obama is of course right that we can do better. But what we need to do better are people of simple competence and integrity, not egomaniacs who try to storm their way into power by smoothly spouting platitudes. Obama's smooth-talking inspires no confidence in me. Instead, I find it profoundly chilling.

Scientology protests

Today was the day for anti-Scientology protesters to take to the streets. Via Roy N., I've found that there's a ton of good photos of the protests on Flickr. The LA Times story (first link) is the best source for the story, but a detailed, city-by-city, if somewhat disorganized account is available at Wikinews.

My main thought on this is that these protests were organized via the internet, and I think we can say that thanks to the internet (and thus, by extension, Al Gore--okay, that was a horrible joke) Scientology is done for. I have to wonder, though, what would happen if there were a similar organization made up of people who actually understand the internet, able to competently hack enemies, etc.

Friday, February 08, 2008

PZ debates a creationist!

Recently, PZ debated creationist Geoffrey Simmons. It went just about perfectly. This is rather encouraging, given PZ's history on the issue of debating creationists. Here's me back when PZ was denouncing the idea of these debates:
Eddie Tabash handles the technical aspects of debating tolerably well, but there are times when watching him debate that I've wished he had an expert's grasp of the issues...

PZ's opposition to public debating is really a shame, because he strikes me as an ideal debater. He knows the issues well, and given his blog output, I assume he's good at coming up with things to say on the fly and saying them in a compact manner. That's everything he needs to be a good debater.

So please, PZ, reconsider. Next time a creationist comes to town, do that research, but also watch some videos to learn the dos and don'ts of debating, and maybe call Eddie Tabash for some coaching--he's been fairly generous in volunteering his time for that sort of thing. It would mean a chance for you to shred creationism in front of people who would never read your blog.
I think I've been vindicated in my prediction that PZ would make a good debater.

Questionable work in philosophy of religion

(Cross posted at God is for Suckers!)

I've found out via Prosblogion that there's a philosophy of religion conference happening this weekend, and the papers are posted online. The first one that caught my eye was "Is Atheism Reasonable?" by Ted Poston, and well... um... let me start by just outlining the paper.

The paper could be divided up into two sections, one on "sympathetic atheism" and another on "unsympathetic atheism." Sympathetic atheism is defined as follows:
(i) the concept of God is coherent, (ii) there's no God because there's gratuitous evil, and (iii) were there a God the world might not be all that different than it actually is.
Unsympathetic atheism is more vaguely described as involving a rejection of (iii). Sympathetic atheism is alleged to be incoherent, while unsympathetic atheism is alleged to have been refuted by Plantinga.

There is something puzzling about the section on sympathetic atheism: it's treated as something worth discussing, even though there's no evidence anyone has actually held it as defined. There's a citation of William L. Rowe's advocacy of "friendly atheism," but Rowe characterized his position in terms of thinking religious belief isn't always irrational--a somewhat weak thesis, given that he was thinking of people who aren't aware of the reasons for atheism, or aren't aware of the problems with the reasons for their religious beliefs (I read Rowe's essay over winter break, and again in preparation for writing this post). Poston obviously thinks there's some connection between the two positions, but his actual arguments for the point are very sketchy. He may well be right about "sympathetic atheism" as he defines it, but that doesn't mean he's refuted a position anyone has actually held.

With "unsympathetic atheism," the careless shift from the stated definition to thinking in terms of rationality is particularly pronounced. The suggestion is that if atheists claim to have strong justification for thinking they've gotten something right that believers get wrong, they have to think believers are irrational. Might they establish this by appeal to the arguments for atheism? No. Why? Because Alvin Plantinga has shown otherwise.

The claim that "for any proposition P, if S has shown that P, where S is Alvin Plantinga, then P is true" is certainly widely held in philosophy of religion. But what arguments are there for it? Of course, "show" here might take an analysis paralleling popular analyses of "know," in that the truth of the claim is supposed to be an integral part of what it means to say it is known (or, perhaps, shown). This, though, only shifts the issue to how one can so effortlessly gain justification for thinking Plantinga has shown something. On the proposed reading of the word "show," one has to conclude that many philosophers of religion hold to something along the following lines: the fact that Plantinga has argued something, together with one's ability to find the appropriate citation, constitutes at least strong evidence that Plantinga has shown the thing. Yet I don't know of any arguments for this position either. If anyone could point me to a relevant journal article, it would be much appreciated.

*Sigh* The one upshot of all the bad philosophy papers out there is that if I become a professional philosopher and am tempted to advance my tenure case by publishing a paper with dubious claims, I can resist the urge by going out and finding some really lousy papers, writing critiques, and publishing those critiques in place of the dubious paper I was tempted to submit. Then I can sleep easy at night knowing that even if I really have no idea what's going on with that philosophical issue, I have reason to be confident I was right to say the particular approaches I critiqued don't work.

Greta Christina interviews Mistakes Were Made author

How cool is this? Greta Christina managed to arrange an interview with the author of Mistakes Were Made (But Not By Me). She's slowly becoming one of those bloggers you're afraid to link to because all of her stuff's so good...

The morality of astronomy

e.g. looks at the question of whether things can be outside moral judgement--like the motion of Mars.

Can McCain be stopped?

Jon Swift asks.

Thursday, February 07, 2008

More Kristof on religion

A while back--December, I think--Nicholas Kristof said some silly things in the New York Times about religion. He's done it again. Austin Cline already has a pretty good response, but silly claims about religion are the sort of thing I like to keep track of.

On Porn Nation

The local chapter of Campus Crusade for Christ is putting on a lecture tonight titled "Porn Nation"; the speaker is a guy named Michael Leahy. I toyed with the idea of going--Crusade's activities are worth keeping tabs on--but after some searching online, which ultimately led to finding preview chapters of Leahy's book, I'm convinced it would be a waste of my time.

I recommend reading at least the introduction to the book. Here's what I got out of it: Leahy was a shallow, emotionally immature, selfish jerk. He let himself get strung along by infatuation, betrayed people close to him, and suddenly found out he had gotten tied up with someone also willing to betray people close to her. From all the talk in the publicity about the book, it seems like he's desperate to blame some outside force: our "hypersexualized culture," a mythological "sex addiction," masturbation, whatever, rather than simply admit to being a major jerk. When you betray people close to you, the main problem is not sex addiction, it's that you betrayed people close to you. Leahy seems like he doesn't want to recognize this, which makes him all the more shallow.

Wednesday, February 06, 2008

McCain, Iraq, and withdrawal

On the heels of Super Tuesday, it looks pretty clear that McCain will be the Republican nominee. I'm fairly confident I'm going to end up voting for him, unless he picks a nutcase for a running mate. The big shadow hanging over his campaign seems to be his support for continuing the war in Iraq, and Democratic critics are making hay over his comments that we may well be in Iraq for 100 years.

Here's what I think when I hear that: We invaded a country. Our initial rationale for being there slowly faded, and was replaced by a second one. Then that second rationale collapsed. We've still got troops there. Tens of thousands, in fact. Few are seriously entertaining removing them. This is over sixty years after the initial invasion, sixteen years after the collapse of our secondary rationale for being there. So: when are we going to withdraw from Germany?

When I look at the troops we have stationed there, the whole thing doesn't make a lot of sense to me. On the other hand, it hasn't been an unmitigated disaster. Stationing troops in South Korea and Japan for decades hasn't been an unmitigated disaster either, and it makes a fair bit more sense, as we still have enemies in that part of the world. I just looked up the video of McCain's initial comment on YouTube, and it's clear he had exactly these considerations in mind. Frankly, I wouldn't expect anything less from a mentally alert individual.

On top of this, the Democratic candidates haven't been particularly encouraging on this issue. Clinton offers vague talk about making a plan to leave, but no guarantees that it won't take a while. Obama is a little more specific: he promises sharp decreases in troop levels, but on the other hand promises to keep enough troops to counter al Qaeda, pretty much guaranteeing we'll be there a while.

The question is not so much how we'll get out of Iraq as how we'll stay in Iraq. On this issue I'm throwing in with McCain. He was one of the first to figure out Bush was botching the job of running the war, and has made clear he's more worried about sensible policy than what plays well in opinion polls. The Democratic proposals strike me as a crude compromise between poll-following and a recognition that we can't actually abandon the country.

Tuesday, February 05, 2008

Mill's semantics

De crapulas endormiendo has a nice post on one potential problem with Mill's theory of meaning, which was revived in a form by Kripke.

New Philosophy of Religion Stuff

Two things, both via Prosblogion.

First, the latest issue of the International Journal for Philosophy of Religion is a special issue dedicated to the ethics of belief. I doubt it has arrived in Madison's current periodicals room, but when it does, it will be something to check out.

Second, Internet Infidels' Great Debate page has been updated. Apparently, they were having issues with participants not getting their stuff in on time, but now everything in the main section is there, except the third round of section three. They have yet to post any responses to questions, though, so read, enjoy, and send in any good questions you think up.

Monday, February 04, 2008

PC 62

The 62nd edition of the Philosopher's Carnival is up. Enjoy.

A new self-referential paradox

Conundrum has a post on a self-referential paradox which is, at least, new to me, and I suspect will be new to almost all readers of this blog. It does a nice job of countering a tempting response to such paradoxes. Check it out.

A Dangerous Cult

(Cross posted at God is for Suckers!)

I recently found out that a certain Dangerous Cult has been Google bombed:
...Yet the church has appeared powerless to stop the online sabotage. Guerrilla action has so far included the temporary disabling of its international website and "Google bombing", a manipulation of the search engine which has resulted in the website being the first result returned by Google when users type "dangerous cult". [The dangerous cult]'s UK website has been unavailable and in the US the FBI were investigating what they said was the hoax dispatch of white powder in envelopes to 19 churches in the Los Angeles area....
The linked article also mentions that Facebook groups have been set up to mock the dangerous cult, and protests are being planned to fight the dangerous cult.

All of this, of course, comes on the heels of a video featuring a prominent member of the dangerous cult, a video which the dangerous cult tried to suppress.

So if you're in college, join the Facebook group. If you have a blog, keep the Google bomb going. Oh, and one little thing: I notice the Facebook group is loath to link to material the dangerous cult is trying to keep secret with copyright laws. That is not a winning policy. To start a trend against it, I will start off by dropping a link to, the leading clearing house for information on the dangerous cult that is Scientology. Contribute what you will in the comments.

Sunday, February 03, 2008

CotG 83

The 83rd edition (I think) of the Carnival of the Godless is up at Mind on Fire. Lots of great stuff here. A two part book review by Greta Christina which I had been meaning to link but didn't get around to. A video mashup which I had thought of posting but never did. And Isla has a nice post on living without religion which managed to work in a John Lennon music video.

Abraham Lincoln, founding father

This continues my musings on metaphysics, though I suppose it's really more philosophy of language. Those trying to figure out what's going on here may want to look at the comments on the previous post in the series (see above link), and maybe even check out the SEP article on rigid designators. Rigid designators in a nutshell: many names refer to the same thing in all possible worlds; "gold," for example, always refers to the element with atomic number 79, never to something else sharing gold's superficial properties.

It seems on the face of it that the person who believes "Abraham Lincoln was the third president of the United States" has a false belief about Abraham Lincoln. But consider the person who believes the following:
Abraham Lincoln was a Virginian statesman born in the 18th century. He drafted the Declaration of Independence in 1776, and went on to become a U.S. ambassador to France and the third president of the United States. He had a strong break with some of his fellow founders on a number of issues, including the French Revolution. Though he had doubts about the morality of slavery, he owned slaves all his life.
Now, does such a person have a false belief about Abraham Lincoln, or a false belief about Thomas Jefferson?

The general puzzle here is that we think we have both linguistic knowledge and knowledge about the way the world is, but we tend to learn the two at the same time, and our teachers may not take pains to explain the difference. The situation is particularly bad in jargon-heavy fields of knowledge. This allows for even stranger examples, such as the student who mistakenly thinks "hept" is the Greek root for "six," and therefore returns a wealth of information about hexane when asked about heptane.
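The heptane mix-up can be made concrete. The Greek-derived prefixes in alkane names encode carbon counts ("hex" is six, "hept" is seven); a student who files "hept" under six will systematically retrieve facts about the wrong compound. A toy sketch of the lookup (the prefix-to-number mappings are standard chemistry; the function itself is just for illustration):

```python
# Standard Greek-derived prefixes for alkane carbon counts
PREFIXES = {"pent": 5, "hex": 6, "hept": 7, "oct": 8}

def carbons_in(alkane):
    """Return the number of carbon atoms in a simple alkane name."""
    for prefix, n in PREFIXES.items():
        if alkane.startswith(prefix):
            return n
    raise ValueError(f"unknown alkane: {alkane}")

# With the correct table, heptane is C7H16 and hexane is C6H14. The confused
# student's table maps "hept" to 6, so a question about heptane gets answered
# with the facts that belong to hexane.
print(carbons_in("heptane"))  # 7
print(carbons_in("hexane"))   # 6
```

The philosophical point survives the toy example: the student's error is as much about the word "heptane" as about the chemical itself.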

Friday, February 01, 2008

Quote of the Time Being

Is $60 part of $50? According to the agent "My computer says it is." Eventually, I gave up in disgust.
-Mark Chu-Carroll

Though I wonder if this is just an issue of weak-willed employees who do what they're told no matter how stupid it is.

Santa Claus and Jonah Goldberg: a bit more musing on metaphysics

In Saul Kripke's Naming and Necessity, Kripke proposes the Old Testament prophet Jonah may have existed even though nothing the Old Testament says about him is true and he wasn't named "Jonah" (apparently, the sound made by the letter "J" in English does not exist in Hebrew). This to my mind raises two questions:

(1) Might we say that Santa Claus existed, on the grounds that he may be very vaguely based on a historical Saint Nicholas?

(2) There's that lingering puzzle of why "Jonah" refers to an ancient Hebrew prophet, rather than an American journalist who operated circa the year 2000 and was widely mocked for attempting to connect Fascism with the U.S. liberals of his time.

I'm inclined to think these are the sort of messy little real-world details that theorists like to ignore.


The most recent editions of the Carnival of the Liberals and the Skeptic's Circle are up. I don't have much time here, so let me just channel you through Greta Christina and her own recommendations for the best of the carnivals.

Notebook: Oliver Sacks

Right now I'm reading Oliver Sacks' The Man Who Mistook His Wife for a Hat for Neuro 524. It's a series of case accounts of patients with neurological disorders, including one of a man who could see the features of objects but not identify the objects themselves, thus on one occasion mistaking his wife for a hat. One interesting feature of the book is the way Sacks manages to relate philosophical ideas to his cases in novel ways. One patient had totally lost the ability to form new memories, preventing him from completing even simple tasks that took any amount of time. Sacks described him as fitting Hume's model of the human mind: a series of impressions utterly disconnected from each other.

This made me wonder whether the man who mistook his wife for a hat was really having memory problems too. As I've previously noted, our peripheral vision is surprisingly limited. Might the man have lacked sufficient visual memory, or a certain type of visual memory, needed to reconstruct an object as his eyes scanned across it, leaving him able to comprehend only the small part of an object that can occupy the center of his visual field at any one time?

God, gold, and electrons: musings on metaphysics

Here's another post of rambling musings on metaphysics. There's a popular view in metaphysics that things have essential properties--properties without which they would not be the things they are. I have thought of two ways to undermine this view:

I. Consider two scientists. One declares that positrons are positively charged electrons, and that whenever two particles of the same type but opposite charge collide, they annihilate each other. A second scientist declares the first is wrong: positrons are not electrons at all, but the antimatter counterparts of electrons, thus having the opposite charge and annihilating electrons when they collide with them. Do they have a substantive disagreement? It seems to me, at least, that they do not, casting doubt on whether it makes sense to say that negative charge is an essential property of electrons.
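The sense in which the two scientists agree can be sketched by representing a particle type by its measurable properties. Built up under either description, "positron" picks out the same mass-and-charge pair, so nothing observable distinguishes the two theories. (The electron mass of 0.511 MeV is the real value, rounded; the modeling choice of reducing a particle to these two properties is my own simplification.)

```python
from collections import namedtuple

# A particle type, reduced to two measurable properties for illustration
Particle = namedtuple("Particle", ["mass_mev", "charge"])

# Scientist 1: a positron is a positively charged electron
positron_as_charged_electron = Particle(mass_mev=0.511, charge=+1)

# Scientist 2: a positron is the antimatter counterpart of the electron,
# i.e. same mass, opposite charge
electron = Particle(mass_mev=0.511, charge=-1)
positron_as_antiparticle = Particle(electron.mass_mev, -electron.charge)

# Both descriptions fix exactly the same observable properties
print(positron_as_charged_electron == positron_as_antiparticle)  # True
```

Whatever extra content the "essential property" claim has, it doesn't show up in this representation, which is the worry.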

II. Consider this question: might it have turned out that not all matter is atomic--that, say, gold is a substance which can be divided indefinitely while remaining gold? Though Kripke and his followers, as I understand them, would disagree, it seems the answer is "yes." Now consider this: might it turn out that some tiny blue particles, obviously not having the superficial properties of gold as we're used to thinking of it, are actually gold and nothing but gold (no blue coating)? The intuitions here are a little trickier, but it turns out that sufficiently small nanoparticles of gold are in fact blue in color--a scientific discovery that presupposes gold is defined by its atomic nature. Considered alongside each other, these two examples suggest that neither atomic structure nor superficial properties can be essential to gold.

The Importance of Disagreement

Richard Chapell, organizer of the Philosopher's Carnival, has passed along a message from the host of the next carnival:
The mission of this edition of the carnival, should you choose to accept it, is to prove that philosophy is in a better state than it was in 1997... So get out there in that big blog world and write something provocative or find me something provocative someone else has written.
This is a reference to a recent issue of The Philosophers' Magazine, in which major philosophers were asked whether the state of philosophy today is better or worse than in 1997.

Now, I hate to disappoint my host, but I'm not totally convinced the state of philosophy today is all that great. I suspect--and suspect is all I can do as an undergrad--that a lot of what Leiter quotes in the link is right. However, the call was made, so I decided I would write something saying something good about the philosophy that's going on right now. It didn't take long to think of something: the epistemology of disagreement.

"Epistemology of disagreement" is one of those big, fancy phrases philosophers like to use, in this case referring to the question of what implications disagreement has for human rationality, how we should let others influence our beliefs, and so on. The topic seems to be getting serious, formal discussion unlike anything it's gotten in the history of philosophy. It got mentioned on Leiter's blog as one of the hot issues in epistemology right now, and I hear Richard Feldman is coming out with an anthology on the subject.

All of this is at least three and a half centuries overdue, if not two millennia overdue. Consider the following quote from Rene Descartes' Discourse on Method:
Regarding philosophy, I shall say only this: seeing that it has been cultivated for many centuries by the most excellent minds and yet there is still no point in it which is not disputed and hence doubtful, I was not so presumptuous as to hope to achieve any more in it than others had done. And, considering how many diverse opinions learned men may maintain on a single question--even though it is impossible for more than one to be true--I held as well-nigh false everything that was merely probable.

As for the other sciences, in so far as they borrow their principles from philosophy I decided that nothing solid could have been built upon such shaky foundations...
Talk of foundations, of course, recalls the opening of the Meditations and Descartes' foundationalism. This passage suggests much of Descartes' motivation in epistemology was to find a way to conclusively resolve disagreements. However, he wrote nothing memorable on disagreement itself, and instead embarked on the grand foundationalist project now widely held up as an interesting failure. I don't know the passage, but I've heard similar things about Locke--that his empiricism was formulated in part as a tool for resolving disagreements.

As we move back in time, I find myself on shakier footing historically. I have some vague idea that the ancient skeptics wanted to hold themselves above ancient philosophical disputes, but it's only a vague idea. At any rate, I'm fairly sure disagreement as a driving force in epistemology extends forward in time, beyond Descartes. Consider, for example, Karl Popper's attempt to provide an epistemology of science that could tell us the difference between real science and pseudoscience. Or a lot of the epistemology of religion. The problem is there; it's just not attacked directly, even as it motivates.

Careful philosophical attention to disagreement is exciting. You can still find philosophers talking about disagreement in a way that doesn't recognize it as an issue demanding close scrutiny, remarkably unaware of what other people think on the subject ("what do you mean, politics is highly irrational?"). Of course, the philosophy of disagreement could get bogged down in the sort of problems that shadow currently well-established areas of philosophy. Indeed, I think we can be confident it will end up looking pretty much like other areas of philosophy. We still have the chance to learn something, though, and to teach people that they can no longer deal carelessly with this issue.