Monday, September 22, 2008

Short, Obvious Point

Let me advise against watching too much cable news. Logical loops like the following will get stuck in your head, you will feel compelled to say something about them, and in the end you'll feel stupid for having brought them up even to yourself.
For example, it grates every time I hear an American politician preface every remark, from the profound to the banal, with 'America is the greatest and strongest nation on earth.' 
This is clearly the sort of value judgment that even the hardest-nosed moral realist is going to realize does not have any obvious truth-conditions. Not having truth-conditions, it is not asserted as a claim or belief. It is rather a meat-headed sort of performative, intended (whether consciously or not) to signal tribal identification and standing.

Now, this wouldn't necessarily be a problem, were it treated as the mere formality that it is, on a par with a handshake or a salute. However, it commonly happens that if one goes on to make an actually contentful statement critical of some aspect or other of American society, this statement is regarded as inconsistent with the previous utterance. The charge is then voiced: 'So you don't believe that America is the greatest and strongest country on Earth?' But this is clearly ridiculous. It is like arguing that I could contradict your policy proposal by voicing a loud belch. Ironically, this is precisely what usually happens: since one can't respond to the charge with further assertions or claims, one can only respond by re-uttering the initial statement more loudly and more often. And thus we descend to Walrus politics, and the project of deliberative democracy is all the worse for it. I would add, finally, that if this analysis applied only to the utterance 'America is the greatest and strongest nation,' it might not be such a big deal after all. But I feel I am making an obvious point when I say that most of our political discourse--especially as found in the most-consumed media formats like cable news--is much closer to Walrus politics than to deliberative politics.



Wednesday, September 17, 2008

Am I Missing Something?

There's been quite a lot of flap over Nagel's recent article defending the teaching of intelligent design in public classrooms. Most commentators have passed over the constitutional question--which is the real focus of the article--and concentrated instead on the defense he offers of the scientific respectability of intelligent design. I'm going to follow that lead.

Is Intelligent Design a scientific hypothesis, even if a very unlikely one? That depends on what we mean by 'scientific hypothesis.' But I think that the most plausible definition of science entails that it soooooo obviously isn't. That this point has not been commonly made makes me suspect I'm missing something.

On the one hand, any claim is 'scientific' insofar as it is supported by evidence, and any reasoning process is scientific insofar as it allows itself to be guided by that evidence. The next question is: what counts as evidence? On a broad definition, ANYTHING can count as evidence. If I'm wondering whether or not to believe in God, for example, I might turn to St. Anselm, and to Alvin Plantinga, and to Michael Martin, and then to Richard Dawkins, and then to the authority of my grandmother, who says that there's a God and whose opinion I respect, to the fact that the Church has been around for a long time and this seems to be some evidence of divine protection, to the fact that the universe seems largely explicable in purely physical terms and I favor Ockham's razor, and so on and on. When I am forming my own personal beliefs, if I am rational about it, I will weigh all this evidence (never mind how I compare the pieces for strength and weakness) and form a subjective probability of belief. Let's call this the Bayesian Theory of Science.
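
To make the weighing picture a bit more concrete, here is a minimal sketch of how a strict Bayesian would fold such heterogeneous evidence into a single subjective probability. The notation and the independence assumption are my own illustrative choices, not anything drawn from Nagel or his commentators.

```latex
% Toy Bayesian updating on heterogeneous evidence (an illustrative sketch only).
% H = the hypothesis under consideration (e.g., "God exists")
% E_1, ..., E_n = the assorted pieces of evidence (Anselm's argument,
%                 grandmother's testimony, the Church's longevity, etc.)
P(H \mid E_1, \ldots, E_n) \;=\; \frac{P(E_1, \ldots, E_n \mid H)\, P(H)}{P(E_1, \ldots, E_n)}

% If the pieces of evidence are treated as conditionally independent given H and
% given \neg H (a strong simplification), each contributes its own likelihood ratio:
\frac{P(H \mid E_1, \ldots, E_n)}{P(\neg H \mid E_1, \ldots, E_n)}
  \;=\; \frac{P(H)}{P(\neg H)} \prod_{i=1}^{n} \frac{P(E_i \mid H)}{P(E_i \mid \neg H)}
```

On this picture nothing is ruled out of court in advance: Anselm and grandmother alike simply enter with whatever likelihood ratio the believer assigns them.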

The Bayesian Theory of Science is not a very good one--or at least, it's not a sufficient one--because on a more plausible definition of science, some types of evidence are going to be ruled out of court, and for good reasons. In addition to a commitment to evidence, and a commitment to allowing one's reasoning to be guided by evidence, the upshot of the scientific enlightenment was the stipulation that every state of the universe is fully explicable from the facts of an earlier state (actually, there's no need to temporalize this: any state of the universe U is explicable in terms of another state of the universe U'). Maybe it wasn't a mere stipulation; Kant made a good case in arguing that we simply have to think this way. But in any case, that is the idea.

By this definition, ID obviously is not scientific, even if there is some non-scientific evidence for a strictly Bayesian thinker to consider in its favor. The major premise behind ID theory is that the state of the universe LIFE is inexplicable in terms of the state of the universe NO LIFE. Some non-universe actor intervened at some point, and an intelligent one at that. But what sort of claim is this? Clearly, it's a claim that a miracle happened. ID is a miracle-theory. Miracles don't belong in science, even if there is some non-scientific evidence to consider for their presence. I don't get why this point hasn't been made (that I've seen, anyway). When Spinoza and Hume and Hobbes were writing their long tracts against miracles, it wasn't just to prove that there were no miracles. More importantly, it was to convince their compatriots that the very notion of a miracle was incompatible with the emerging scientific world-view. We don't need to prove that no miracle has ever occurred, only that miracles and scientific theorizing are inconsistent.

Consider a situation in which the scientific community actually adopted the ID position: what would be left to do? There would be consensus that it was pointless to go on researching how something like a living organism could arise out of amino acids under the conditions likely to have obtained on a primordial earth. In other words, scientists would stop trying to explain the origin of life. I would submit that any hypothesis that, if true, would halt scientific inquiry is, by definition, nonscientific.

Consider finally the argument Nagel is most famous for: the irreducible nature of subjective consciousness. There's good evidence in favor of this hypothesis. Anyone who is conscious has access to this evidence. The presence of this evidence is pretty overwhelming. Many smart people are convinced that this evidence is strong enough to ground the conclusion that consciousness is not explicable in physical terms. But Nagel's own conclusion was not that consciousness has some queer scientific status. His conclusion was that there could not be a science of consciousness, at least not until our conceptual frameworks radically and unforeseeably changed.

UPDATE: You might notice that two of the responses linked to above are moderately complimentary of Nagel's argument. I would like to point out that both are Bayesians (or very nearly so).


Tuesday, September 16, 2008

The Drinking Fallacy

I would say that I don't mean to quibble, but that would be false, because I do precisely mean to quibble. Will Wilkinson has weighed in against there being a drinking age at all.

Here's an example of his argumentation:

"UCLA professor of public policy Mark Kleiman, an ex-advocate of age restrictions, told PBS that he came around to the no-limits position when he saw a billboard that said, 'If you're not 21, it's not Miller Time--yet.' Age limits make drinking a badge of adulthood and build in the minds of teens a romantic sense of the transgressive danger of alcohol. That's what so often leads to the abuse of alcohol as a ritual of release from the authority of parents. And that's what has the college presidents worried. They see it."

Smells like a fallacy of false cause. It might be true that restricting legal drinking to those 21 and over lends a weird romanticism to the activity (really, though, who knows), and the abuse of alcohol is indeed a problem, but the idea that kids abuse alcohol because it is romantic is a specious inference based on some pretty sketchy folk sociology. What is probably true is that some aura of romanticism encourages some extra amount of drinking, but alcohol abuse is undoubtedly caused by many other factors, very few of which have anything to do with an aura, and which together dwarf this supposed romanticism effect. Kids drink because it's fun. In part it's fun because it's rebellious, but it's fun for a whole lot of other reasons as well (inebriation feels good, individuals feel more sociable, you're more likely to get laid, worries are easy to forget, it's a social activity with all the benefits of group membership, for some people the stuff just tastes good, etc.).
Will goes on:
"There's certainly evidence that if we got rid of age limits, teens would drink more. But drinking more is a drinking problem only in the minds of neoprohibitionists. In a 2003 survey 22% of American tenth graders said they'd had five or more consecutive drinks in the last 30 days. But in Denmark, where there's no legal minimum to drink (though you have to be 18 to buy), 60% of 15- and 16-year-olds said they'd thrown back five or more in a row within the last couple of fortnights. Maybe you think that's too much. But the European champion of heavy teen drinking ranks as the world's happiest country and scores third in the United Nation's 2007 ranking of child welfare. In the UN listing the U.S. came in 20th out of 21 wealthy countries."

Um, maybe Danes are so happy because they drink so much. But regardless, it's not unimportant that Denmark is a wealthy, relatively homogeneous, and very well-educated nation. I've spent a fair amount of time in Denmark. There is a lot of conspicuous drunkenness. Drunkenness is a problem in Denmark, as most Danes would admit on those occasions when they're not drunk. But being wealthy, well educated, and committed to a generous social welfare state, they can afford a level of alcoholism that there is very little reason to think the United States could afford. We have here a fallacy of false analogy. In any case, I don't think I'm going out too far on a limb to assert again that, even if alcohol policy has some effect on metrics like happiness and child welfare, the effect is going to be very, very small, to the point where overall social happiness and child welfare are effectively unrelated to the drinking age and so can't support any inference either way.

Will also suggests that the drinking age and drunk-driving traffic accidents may not be positively correlated. I don't know any of the research, and so won't comment on that angle.

But he continues:
"Salt makes things taste better. If you eat too much, it can kill you. But we don't need laws regulating salt."

Again, a false analogy, in this case so obvious that there's hardly need for comment. Crack makes you feel better too, but if you smoke too much of it, it can kill you. A-bombs give you a sense of security, but if you set one off, it can kill lots of people. The point is: just about everything has some sort of benefit, and just about anything can be dangerous. We need to decide which things are too dangerous to remain legal. He concludes:
"In an America without a minimum drinking age, we would shift our focus from demon rum and car crash statistics to creating an environment where parents are expected to supervise their children and alcohol would become for teens just another thing, like bicycles or swimming pools, that can either make your day or take your life."

I'm pretty sure that this is perfectly fine already in most states. If parents want to ease their kid into responsible drinking habits starting at an early age, I'm rather certain that there's no legal obstacle to this, and that even if there were, no one bothers to enforce it. I've at least never heard of a 15-year-old getting into trouble with the law for enjoying a glass of red wine with his parents. Final point: kids drink and party too much for the same reason many have sex too early and too often: it's fun, and there's not much that legislation either way is going to do to change that fact.

Personally, I'm agnostic on the issue. I do remember what a bummer it was not being able to drink legally as a 19-year-old in college. But I drank anyway, and if it had been legal, that wouldn't have been any different. Point being: the two are not all that related. I'm sure that arguments for Will's position are out there, but they need to rest on principled grounds, not on utility effects. The argument ought to be of this sort: 18-year-olds should be able to drink legally, period, and if that entails some net costs, such is the price of freedom. We allow them to join the military. We allow them to vote. We allow them to have children and to marry. It seems a little arbitrary to prohibit them from drinking. If we are not going to argue the issue on these grounds, then anyone who wants to persuade me that lowering the drinking age would be better will have to convince me that there is, after all, nothing wrong with the following inference: we will lower alcohol abuse among kids by making it easier for them to get alcohol. That said, the arguments I've made suggest that there would be little effect either way if the drinking age were lowered. This is why I remain agnostic on the issue--I don't think it matters all that much.



Sunday, September 14, 2008

Torture and Americans

Andrew Sullivan has linked to several new polls demonstrating American support for torture as a national-security policy. Nearly six in ten white Southern Christian evangelicals believe that torture is an acceptable policy. Among countries that support a general ban on all torture, the United States is toward the bottom of the nineteen surveyed, in the same group as Russia, Iran, Azerbaijan, and Egypt. The number of Americans who support the torture of terror suspects is forty-four percent. That deserves to be repeated: nearly half of Americans believe that torture is legitimate against individuals who have been accused--not proven guilty--of terrorism. I think that's crazy, but I also think that it's (partly) explainable.

Here's Andrew's diagnosis:
"The idea that torture is immoral in itself seems alien to a majority of the millions who lined up to see Mel Gibson's The Passion Of The Christ."
And again:
"This is what America now is: a country with the moral values of countries that routinely torture and abuse prisoners, like Egypt and Iran."
Now take a look at the groups listed with the United States: Egypt, Iran, Russia, and Azerbaijan. All four are corrupt autocracies that are far more likely to be torturing their own citizens than foreign nationals who happen to get swept up in a drag-net half-way around the world. In each of these countries an elite coalition unrepresentative of the nation as a whole rules through an exclusionary and often precarious power-sharing agreement, one that each member would happily game to its total advantage if it could. This makes for a suspicious citizenry, wary of the state but, perhaps more importantly, of other groups of citizens and of non-state actors. In each case the state positively encourages this paranoia, knowing that the best way to deflect attention from itself is to play up fears of non-state groups.

Andrew's theory is that Americans have given up on a moral principle against torture. Many Americans no longer believe that torture is an absolute moral wrong. Torture becomes a conditional evil--'When a comparable moral evil is not at stake, torture is wrong'--where the conditional is read, in effect, as a biconditional, so that negating the antecedent licenses negating the consequent. Andrew believes that this is a morally culpable error in moral judgment: confusing a categorical injunction for a hypothetical one.
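
For the sake of explicitness, here is one way to render the contrast between the two injunctions; the propositional formalization is my own sketch, not Sullivan's or Nagel's.

```latex
% Let W = "this act of torture is wrong" and C = "a comparable moral evil is at stake."
% Categorical injunction: torture is wrong, full stop.
W

% Hypothetical injunction: torture is wrong when no comparable evil is at stake.
\neg C \rightarrow W

% From (\neg C \rightarrow W) and C alone, \neg W does not follow
% (that would be denying the antecedent). The permissive reading requires
% the stronger, biconditional principle:
\neg C \leftrightarrow W, \quad\text{which, together with } C, \text{ yields } \neg W.
```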

The trouble with Andrew's analysis (and he has been one of the most forceful and effective critics of America's current torture policy) is that he has never given a solid argument for why torture is an absolute moral evil. As Nagel and Bernard Williams have often pointed out, we have intuitions against moral absolutism just as strong as those in its favor, and there are certain moral dilemmas in which, no matter what we do, we will understand ourselves to have violated one fundamental moral principle or another. Let me propose, perhaps with much charity, that those Americans in favor of torture understand it to be a true moral dilemma, as defined by Nagel, in which, however one acts,
"it is possible to feel that one has acted for reasons insufficient to justify violation of the opposing principle...Given the limitations on human action, it is naive to suppose that there is a solution to every moral problem with which the world can face us. We have always known that the world is a bad place. It appears that it may be an evil place as well."*
Those Americans in favor of torture may recognize that it is--to use another Nagelian phrase--a 'moral blind alley.'

Now, also take a look at those countries most opposed to torture as a means of state policy. They are Spain, France, Britain and Mexico. All have had bad and recent histories on the subject of torture, as both victims (Spain, Mexico) and perpetrators (France, Britain). They are acutely aware of the moral, political and cultural corruption that a torturing regime can effect. They are strongly against the policy because they are very sensitive to the dangers. The net effect of this history is a wary and distrustful view of the governmental security apparatus and policy.

It seems to me that this is what many Americans lack, not a moral principle. Americans in favor of torture as an official policy have not necessarily abandoned a moral absolute (if Nagel is right, it may not be an absolute in any case); rather, they believe that this absolute has, after all, some conditions in extremis, and that the government can be trusted to respect those conditions. In other words, Americans are too ready to believe that the accused are actually guilty, that the accused actually have actionable information they are withholding out of dogmatic hatred and an evil ideology, and that this information may save millions of lives. They have been persuaded of the falsehoods that ticking-time-bomb scenarios actually occur and that other means of interrogation are less effective than torture. They believe that their government would only use torture in cases of imminent, deadly threats against real bad guys, rather than for political or strategic reasons. All of this, as I say, is false and/or confused, but IF you believe all these things, then you have not necessarily abandoned a fundamental moral principle in supporting torture.

In other words, contra Andrew's interpretation, our values may after all be the same as those of Spain, France, Britain, and Mexico, while quite different from those of Iran, Egypt, Russia, and Azerbaijan; the relevant variable here might not be moral value, but political judgment and trust in governmental authority. If so, then it's not that Americans have lost sight of a fundamental moral principle; they have lost sight of a political one.

*Thomas Nagel, "War and Massacre," in Mortal Questions, p. 73.


Constellations

Kenny Easwaran at Thoughts, Arguments and Rants has a fun, off-the-cuff post about the nature of star constellations. Just what are we referring to with the term 'constellation'? An initial response might be: a collection of stars. But Easwaran correctly makes the point that current stars within a constellation could disappear (go supernova, get sucked into a black hole), or additional stars could show up, and yet in neither case would we conclude that a new constellation had emerged or that the old one had been destroyed. It'd be the same constellation, just different.

Perhaps constellations are just heaps, then? This isn't quite right either, however. Heaps may not have any internal organization or principle, but they are, after all, heaps, regardless of whether I happen to be observing one or not. Heaps are not observer-relative or observer-dependent in the way that constellations, we should admit, are. If the earth were in a different location in the galaxy, our night sky would look different, and there would be different constellations for us.

Easwaran concludes that
"rather than being composed of stars (as in the actual glowing balls of gas), a constellation is composed of beams of light reaching Earth."
I doubt that this is right. If it were, we could equally say that a traffic light isn't really composed of metal and circuitry, but of photons. The difference between what something really is and the medium by which information about it is transmitted does not line up with the distinction between real unities, heaps, and observer-dependent heaps. This observation is one of the motivations behind the causal theory of perception: the content of a perception is whatever object is responsible for eliciting that perception, regardless of how it did so (through light-beams, through wireless transmission to the chip in my brain, etc.). (There are problems with the causal theory of perception, obviously, but making this point is one of its merits.) Certainly the stars in a constellation are partly responsible for my perception of the constellation, and so must be partially included in the content of that perception. Of course, there's nothing special about stars for constellations: if galaxies were bright enough, they could be parts of constellations, and I think I'm right that some constellations include nebulae as members. The point is not that stars must be part of the definition of a constellation, but that a constellation must contain some reference to the objects responsible for the light that reaches me, regardless of what sorts of objects those are (they could even be disco balls, for that matter).

Anyway, my point is not to critique Easwaran's account, but to echo his initial point, namely, that there is something a bit strange about objects like constellations. So, while I don't think that he's right to say that 'constellation' has light-beams as its reference, he is correct to note that angles of sight are not incidental to the meaning of 'constellation': constellations are observer-dependent objects; they do not exist without observers, and the proper concept 'constellation' must include that somehow. However, it is ALSO not the case, I'd say, that the term 'constellation' refers to a mere appearance (even an 'objective' one--in the sense that the appearance of a stick being broken in the water is an objective fact about the way that stick will appear to an observer, even though it is not a property of the stick or the water or any other such object), any more than it is the case that, when I think 'unicorn,' I'm referring to my idea of a unicorn.

So, Easwaran's right, constellations are queer sorts of objects, and it doesn't take a lot of reflection to convince yourself that lots of regular objects (maybe all middle-sized dry goods) are queer in this sort of way. But putting that to the side, here are some other examples of objects that, given what I've described, are queer in the same way that constellations are queer: they are observer-dependent objects, but are not mere appearances.

Horizons, rainbows, colors, mirages, the 'man in the moon,' maybe all paintings and images....

Anyone have further examples to add to the list?

A further point: Daniel Dennett is a fan of the 'grand illusion' theory of conscious experience. We take in limited information, and then our brains construct the filling material that makes it seem as if we have rich, robust experiences reflecting a rich, robust external world. That's an interesting response to an interesting theory, but it hardly exhausts the interest in these matters. I mean, presumably, when light refracts through water droplets and then reaches my eyes, my brain sometimes runs the rainbow function, producing the experience of a rainbow; but even if my brain does create these illusions, that doesn't answer the questions above, because those illusions are still 'objective,' in the same way a constellation is.


Machiavellianism

An interesting if obvious observation from this week's New Yorker book review:

"There is today an entire school of political philosophers who see Machiavelli as an intellectual freedom fighter, a transmitter of models of liberty from the ancient to the modern world. Yet what is most astonishing about our age is not the experts’ desire to correct our view of a maligned historical figure but what we have made of that figure in his most titillatingly debased form. “The Mafia Manager: A Guide to the Corporate Machiavelli”; “The Princessa: Machiavelli for Women”; and the deliciously titled “What Would Machiavelli Do? The Ends Justify the Meanness” represent just a fraction of a contemporary, best-selling literary genre. Machiavelli may not have been, in fact, a Machiavellian. But in American business and social circles he has come to stand for the principle that winning—no matter how—is all. And for this alone, for the first time in history, he is a cultural hero."


Monday, September 8, 2008

Beyond Belief

I never know what to make of philosophical historiography. Charles Taylor's new Templeton book, A Secular Age, is such a work. In a recent post at Immanent Frame, Taylor picks up on a distinction made in the book between 'porous' and 'buffered' selves. A buffered self is what you and I are: a self for which there is a discrete frontier between itself and the world, between the mental and everything else. A porous self is one for which this discrete border does not exist. (I wonder what Taylor would make of the extended mind thesis?)

Does it make sense to ask, Is this accurate? I don't think so, for a reason I'll provide in a moment. But even if I do not think that this sort of work is something that can be accurate or inaccurate, it is still possible to disagree with certain claims made in it. I have one claim in particular in mind that I'll address below, which is this: that the difference between ourselves and our ancestors has less to do with different beliefs, and more to do with different 'experiences.'

First let me register my reservations about philosophical historiography. Taylor tells a story about how, roughly five hundred years ago, a porous experience of self was supplanted by a buffered experience of self. As Taylor acknowledges, this account has similarities to Weber's theory of Entzauberung. For us, purposes, meanings, intentions, and values are intrinsically mental predicates, whereas for those who experienced a porous self, such things were parts of the environment as much as parts of the soul; and a world that itself embodies meaning, purpose, and value is an enchanted, zauberische world. And while I detect a hint of nostalgia in Taylor's piece (he is a practicing Catholic, after all), his work succeeds as admirably as any at being a fair-minded work of descriptive philosophical historiography.

Taylor's story is consistent with itself. We might even be able to say that it is consistent with the facts, were it not the case that in history, more often than not, the facts are decided by the story we are trying to tell. Danto made this point classically: we can say, in a rather uninteresting but unassailable way, that at 7 p.m., just after sunset, in January of 49 BC, Julius Caesar rode his horse across the river Rubicon--but this hardly makes for a historical fact. There is no history here at all. History requires tying earlier events to later events within a narrative framework, and that narrative framework requires ascribing psychological predicates like desires and intentions. Thus, to make the above fact interesting, we could say that Caesar crossed the Rubicon and thereby ended the Roman republic--but this fact is inaccessible, or even meaningless, outside of the narrative about the fall of the republic and the rise of the empire. If that's the case, then I'm not sure what it would mean to call a work of philosophical historiography 'accurate.'

Nonetheless, there is one claim made in Taylor's post that is questionable regardless of whether it is accurate. He asserts that the difference between ourselves and the selves of our forebears is not a matter of belief, but of 'experience.' He doesn't define experience, but I suspect he means something like existential mood, horizon, attunement, or some such.

So, Taylor claims that beliefs are not at stake here. I wonder. Here is an example of a belief common both to our forebears and to many people today: Heaven is a place beyond time, and in heaven we will meet and enjoy the company of our relatives and loved ones. 'Meet' and 'enjoy' are temporally extended predicates. It is not at all clear what it would mean to meet, or to enjoy oneself, divested of extension in time. This conjunctive belief cannot be coherently maintained. It would not be right, in the end, to say that the belief is false; it would be better to say that it is confused. Nonetheless, it is a belief, at least in the sense that it is a proposition that, when uttered or written, a large number of people throughout history have assented to and would assent to.

So, I want to say that while the difference between us and our ancestors is not a question of truth or falsity (ignorance vs. knowledge), it is still a question of belief: a question of whether a belief embodies a coherent concept. While it would be wrong to call our forebears ignorant, it would be okay, I'd argue, to call them confused. Following this argument, while it is not fair to assert that our ancestors were wrong and we are right, it does make sense--I might claim--to say that we are less confused than they were: that we do not have here a mere difference in worldviews, but a normatively constrained difference wherein confusion and consistency are criterial.

As I said, I do not say that Taylor is wrong, only that his claim is questionable. I suspect that there really is something to the notion of 'experience' as Taylor uses it, but it is something that has to be thought through, not just asserted.

