Tuesday, July 25, 2006

Making the mind: Why are our brains so big?

The author of The Origin of Mind describes human brain development and a meta-theory of evolution

By Brint Montgomery

The Origin of Mind: Evolution of Brain, Cognition, and General Intelligence
David C. Geary
Washington, D.C.: American Psychological Association, 2005.
459 pages. $59.95 hardcover

Seven million years ago, there was a small group of lock-kneed primates walking around with brains that measured 350 cubic centimeters. Seven seconds ago, there was a large group of lock-kneed primates walking around with brains that measured 1,350 cubic centimeters. How did we get from there to here?

In The Origin of Mind, David C. Geary offers his answer in the meta-theory of evolution. The development of the brain, cognition and general intelligence of the human mind differ from that of other mammalian species only as a matter of degree, not kind, he says. A professor of psychological sciences at the University of Missouri-Columbia, Geary offers the working assumption that motivational, feeling-emotive, behavioral and cognitive systems have evolved to process social and ecological information patterns — such as facial expressions — that correlated with survival or reproductive options during human evolution. Specifically, he proposes that all of these are ultimately and proximately focused on supporting individuals’ attempts to gain access to and control of the social (mates), biological (food), and physical (demarcation of territory) resources that supported survival and improved reproductive prospects during human evolutionary history.

If that last paragraph seemed a bit ponderous, get used to it. Geary’s prose is flamethrower-on, full-bore, scorched-earth academic writing — which is to say it’s dry. Nevertheless, it’s all about us because the whole thrust of the work is to explain the rise and success of the human race as we know it. The story being about us, then, lends a continuing and ultimately satisfying intrigue to the overall book as the explanation of our origin and cognitive skills unfolds.

Brain takes center stage

Although the book’s nine chapter divisions move roughly from hominid history to brain modularity and function to general intelligence, the real story of the biological arms race that fed human brain development actually has only two acts.

In the first act, humans achieve “superpredator” status, probably around 800,000 years ago at the latest, with Homo erectus’ mastery of fire. That, along with the ever-developing use of tools, allows humans to become masters of their ecological domain. This has enormous survival and reproductive consequences. Once ecological mastery was achieved, “an evolutionary Rubicon was crossed.” After that point, the effects of extrinsic forces of natural selection diminished, and within-species competition became the principal hostile force of nature, guiding the long-term evolution of behavior capacities, traits and tendencies.

In the second act, the natural, cyclical contractions of ecological resources force small human bands into social competition for diminishing resources. In such a situation, where social competition intensifies, the stage is set for a form of runaway selection, whereby the more cognitively, socially and behaviorally sophisticated individuals are able to outmaneuver and manipulate other individuals to gain control of resources in the local ecology and to dominate the behavior of other people.

The 17th-century British philosopher Thomas Hobbes would be proud of himself, no doubt, for guessing something not too different from this. To the extent that access to these resources correlates with survival and reproductive outcomes — and trust me, Geary cites all the studies that do indeed show such correlation — the associated socio-cognitive competencies and supporting brain systems will necessarily evolve. To boil it all down: Ecological pressures were salient earlier in hominid evolution, and social pressures were salient later in hominid evolution.

ID: Not just today’s debate

Upon considering brain expansion during the social pressure phase of human evolution, Geary moves to explicate which particular modules support survival and reproductive success. As a setup to this explanation, Geary recounts an important but rancorous squabble — as eerily familiar as this morning’s news — between Charles Darwin and natural selection codiscoverer Alfred Russel Wallace.

In 1871, Darwin argued that the mental faculties of the human brain, such as language, had evolved by means of natural and sexual selection. And although qualitatively different in some ways, the human brain showed much continuity with the faculties of mind and brain in other species. Wallace just couldn’t buy it and was unable to countenance that the mental faculties of the human brain — with its sensitivity to moral values and its rational powers — could be the result of mindless, purely organic evolution. As Geary duly notes, one of the first places in which Wallace’s argument was articulated was at the conclusion of a review of geologist Charles Lyell’s 1867 edition of Principles of Geology. It is worth recounting outright:

“But let us not shut our eyes to the evidence that an Overruling Intelligence has watched over the action of those laws [natural selection] so directing variations and so determining the accumulation [of favored traits], as finally to produce an organization sufficiently perfect to admit of, and even to aid in, the indefinite advancement of our mental and moral nature.”

Sound familiar? It should, for much of the haggling over intelligent design issues is associated with people who seem to have the same intuitions about other biological structures that Wallace had over the astounding powers of the human brain. But Geary is explicit in his ultimate goal: To develop a theory that not only is consistent with Darwin’s idea of evolved faculties but also integrates contemporary theory and research on modularity with the competencies that define general intelligence. On the Darwin-Wallace squabble, Geary concludes that “contemporary debates have a less theological flavor.”

Hunt directors and lie detectors

Rejecting Wallace’s position that the brain/mind of humans is fundamentally different from those of other species, Geary shows how the pattern emerging from current research suggests that the basic architecture and some of the specialized functions of the mammalian neocortex and subcortical regions are conserved across many species, including humans. Geary argues that in both stages of human brain evolution — the ecological and the social — specialized modules within the brain form a means of processing information. As a coordinated grouping, I will dub them “hunt directors” and “lie detectors.” I extend my apologies to Geary for abridging his multiple and more subtle module classifications.

Hunt directors are concerned with what Geary and others have called “folk biology” and “folk physics.” Specialized brain processing areas for identifying flora and fauna, as well as other kinds of natural features in the world, fall under the rubric of folk biology. Specialized brain areas for identifying human faces, hands and other structural patterns in the environment fall under the rubric of folk physics. Finally, the ability to isolate and/or discount various combinations of modules for specialized attention and learning falls to the master command and control center of consciousness, in particular the “executive module.”

Lie detectors are specialized processing areas for understanding beliefs and intentions, predicting behaviors and recognizing minded entities. These all fall under the rubric of “folk psychology.” Naturally, lie detectors play into survival and reproduction rates for mastery of social competition, cooperation and control.

With the growing evolutionary efficiency of hunt director module groupings and lie detector module groupings, growth in the overall volume and increased connectivity of the human brain is a natural consequence. In the end you have the “big brain” — or, more accurately, a brain with a more favorable ratio of size to body weight. Elephants’ brains, for example, are much bigger than ours, but when one factors in their body weight, they do not make it into the big-brain club.

Consciousness and self-awareness

With Geary’s position being so carefully nuanced and documented, it might appear to present few opportunities for counterargument. But let me suggest at least two ways to place the king in check, even if checkmate is hardly in the offing.

Geary holds that even though there is a vigorous debate, one finds no definitive evidence that great apes have a sense of self or can make inferences about inferences — the latter sometimes called “second-order thought.” There are certainly issues to worry over here, and some cut to the heart — or perhaps to the head — of Geary’s theories on the origin of the human mind.

First, as Geary well knows, there’s little truly definitive evidence on anything pertaining to that most difficult of difficult issues concerning mind — to wit, “consciousness.” Even determining what counts as evidence is controversial. The 17th-century philosopher René Descartes, for example, argued that because animals don’t use language, they don’t have minds. In contrast, advocates for the idea that animals are indeed minded — often defended as prelude to establishing some ethical argument for our accountability to them — think that language use is neither helpful nor decisive concerning their mental status.

Second, the central issues of self and second-order thinking are pivotal for evaluating when consciousness arises in the evolutionary chain of human development. One might make the argument that only humans are conscious because all normally functioning humans are self-aware, while no other mammals outside of our seven-million-year common primate heritage definitively exhibit this trait. However, both of these premises are open to debate.

After all, not all humans have an “awareness of self.” Children under the age of 2, in fact, seem not to be self-aware. Self-awareness is commonly tested by surreptitiously marking an animal’s skin or fur, then placing the animal in front of a mirror. If the animal recognizes the mirror image as itself, it will usually examine or try to clean the marking on its body, prompting scientists to dub it “self-aware.”

Nevertheless, infants are certainly perceived as conscious by people who shuffle that term around. Indeed, there are some powerful empirical studies to this effect. So why think that children under age 2 lack a concept of self? Because one can have a concept of the self only if one has a concept of what a mind is. But experiments with young children indicate that they do not recognize the very concept of other minds. Hence, they have no recognition of the concept of a self. Eventually, of course, virtually all do come to such recognition of other minds, though some children, such as acute autistics, never come to gain this power.

And, to debunk the notion that no animals outside of our primate heritage are self-aware, there are intriguing results from studies with another group of mammals, ones that were alive and well 43 million to 63 million years before our primate heritage saw a positive trend in brain size, and ones that have developed along a different evolutionary trajectory — namely, bottlenose dolphins. Bottlenose dolphins have brains about 250 cubic centimeters larger than those of human adults. Moreover, they have a favorable brain-size-to-body-weight ratio. And, most importantly, they can recognize artificially imposed body markings in the mirror-exposure studies, as chimps and humans do. But whatever developmental pressures evolved self-aware brains of such great size and abilities in bottlenose dolphins must be different from the kinds of ecologically and socially induced developmental pressures posited by Geary for the self-aware big brains in us.

Recall that Geary holds that evolutionary pressures first resulted in our superpredator status, and subsequently in our social competitor status. But bottlenose dolphins are neither superpredators nor cutthroat social competitors. Thus, the development of self-aware creatures need not have followed the trends identified by Geary. Granted, Geary never claims this need be so, but it significantly weakens what is interesting about his thesis — that evolutionary pressures for predation and social competition ultimately are the basis for the development of conscious creatures. Maybe in a few million years the octopus, another highly complex creature with an up-and-coming favorable brain-size ratio, will offer a more objective opinion on this matter.

Even so — and with the earlier cautions about writing style still on the table — I recommend this book as a thought-provoking, tour-de-force overview of recent findings in cognitive science and neuroscience as well as in primatology, anthropology and sociology.

Brint Montgomery is professor of philosophy at Southern Nazarene University in Oklahoma City, Okla. A version of this review appeared in Science and Theology News.

Saturday, July 22, 2006

Evaluating UML (Unified Modeling Language)

UML is a way of modeling the design of software applications before coding. It uses specialized diagrams and flowcharts to manage large software projects. [1]

I studied it at length last summer for the purpose of embarking on a large data mining project, and because I wanted to be able to track what was going on between work sessions. When one has thousands of lines of code that are sometimes left untouched for weeks at a time, it becomes nearly impossible to review and understand what one was thinking during the last session. In essence, it's like receiving the code from somebody else, since what was being done earlier is so quickly forgotten. Even copious commentary within the source code itself has limited value, since often what seems obvious at one time becomes mysterious at a later time.

Now that I've had a few months to work with UML, I've formed some opinions about it, which I'll eventually share. But first, let me review what some professional programmers wrote on Slashdot [2] about UML. I noted therein that many were not impressed with UML. The reasons were varied, but the essential positions were as follows:

  1. UML is merely a bunch of pretty diagrams that give PowerPoint-addicted managers lots of excitement but give overworked programmers yet one more layer of interference in the project development cycle.
  2. UML effort is usually wasted in the end, since by the time the project gets to the coding stage, the after-the-fact discoveries about design and performance negate the earlier UML design work.
  3. UML is an extraneous activity, since good coding practices pretty much carry along project development anyway.
  4. UML is a fine way of avoiding the hard work of actual coding.
There were a few other minor issues, but these four seem to capture the bulk of the complaints.
It is no accident I've placed the complaints in the order that I have, since they roughly reflect the chronology of the development cycle. I'm not so much interested in giving rebuttals to 1-4 as I am in remarking about why programmers might hold the negative views that they do on UML. For the record, I do not agree with the comments, but that doesn't mean such comments are necessarily misguided either.

First, it hardly takes a sociology study to see that many programmers have a professional psychology that exists somewhere between libertarian and anarchist. Accountability to managers who may not know the first thing about coding structures is highly irritating to programmers who regularly rule the kingdom of algorithm/data structure abstraction from within the fortress of their own heads. (Sometimes this goes under the moniker of "taking ownership of the code".) There is something like an old-school social contract theory operating here: The manager agrees to leave fingers out of the programmer's structures, and the programmer agrees to meet design specs. Whether it meets explicit specs--that's the manager's problem. Whether it has bugs--that's the coder's problem.

UML violates this social contract. Because some of the diagrams are so intuitive, even nonprogramming-trained managers can get some idea of how a project gets designed, and since, as a discipline, businesses (and business training schools) typically use organizational diagrams and flow diagrams, such managers can quickly pick up on the aesthetics of efficient flow and balanced breakdown of algorithm/data structures. This allows nonprogramming-trained managers to ask questions that are much more direct and difficult, and which (per the above old-school social contract view) seem to violate the professional psychology of programmers. Think of it this way: If a programmer liked using UML for his own purposes, it would merely be a question of style, much like the selection of a text editor or IDE, and nothing important would ride on complaints against UML. What is really at issue, I think, is a breach in the walls of programmer privacy. (As a side note, if one finds, as I have, that UML offers distinct advantages to exclusively personal coding development, then the use of UML is unquestionably advised.)

Second, UML effort need not be wasted in the end by after-the-fact discoveries about design and performance issues. This is the advantage of automated code generators. With such generators, one does not change the code but the CASE-tool designs that produce the code; hence, just as newly discovered design issues would otherwise be re-coded where needed, likewise new issues can be re-generated by the CASE tool where needed. The issue is merely one of deciding up front to stick to the development method (in this case a UML code-generation tool). If the UML tool produces bad or incomplete code, that's not a UML problem! It's merely a tool problem. (In my own case, I was mostly interested in program structure, and not in code generation; thus I sought ease of printing from a no-cost tool. I found UML Pad [3] to work best for me, since it was simple to use and was not bogged down by a Java virtual machine and/or by crippleware.)
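The model-driven workflow described above can be sketched in miniature. In the toy below, a plain dictionary stands in for a UML class diagram: the model is the single source of truth, and a design change is made to the model and the code regenerated wholesale, rather than hand-patched into the output. All the names here (`emit_class`, the `account` model) are hypothetical illustrations; no real CASE tool works exactly this way.

```python
# Toy sketch of model-driven code generation: change the model, regenerate
# the code. A dict stands in for a UML class-diagram entry.

def emit_class(model):
    """Render a Python class skeleton from a minimal class-diagram entry."""
    lines = ["class %s:" % model["name"]]
    for attr in model.get("attributes", []):
        lines.append("    %s = None" % attr)
    for op in model.get("operations", []):
        lines.append("    def %s(self):" % op)
        lines.append("        raise NotImplementedError")
    return "\n".join(lines)

# A design change (say, adding a 'transfer' operation) touches only the
# model; the skeleton is then regenerated rather than edited by hand.
account = {"name": "Account",
           "attributes": ["balance"],
           "operations": ["deposit", "withdraw", "transfer"]}
print(emit_class(account))
```

The point is not the generator itself but the discipline it enforces: after-the-fact design discoveries land in the model, so the earlier modeling effort is revised rather than thrown away.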

Third, it is claimed that good coding practices pretty much carry along project development, so UML design is extraneous. An awful lot rides on the particular circumstances of this complaint. a) Is it true that most programmers use good coding practices? For those who don't, UML would indeed improve their coding practices; thus it is advisable, not extraneous, on that basis. b) Is it true that all projects are manageable by good coding practices? The answer is clearly No, since there would have been no drive to invent, or reasons to use, UML if good coding practices were sufficient for every project. Projects with hundreds of thousands of lines of code need their own cartography for explanation and for task assignments. UML suffices for this task. c) Is software project development merely about good coding? Of course not, for there are also the matters of robust documentation and client explanation as to what the project is and does. UML tools can manage and generate these other components of project development, ones which are otherwise acknowledged as very labor intensive and burdensome. (As I have found, and if for nothing else, automated software documentation is very important for reference in maintaining and modifying a software project. Many UML tools do this and do it well. Also, when I'm presenting what I'm working on at a particular time, class diagrams are very intuitive for tracking and reporting coding progress; thus, they can be sent to nonprogrammers who have some connection to the greater implementation of the project.)
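The automated-documentation point can also be shown in miniature: much of what a UML tool reports about a class can be recovered mechanically from the source, which is why such reports are cheap to keep current after every change. The sketch below uses only Python's standard `inspect` module; the `Account` class and the `summarize` helper are made-up examples, not any tool's actual output format.

```python
import inspect

# A made-up example class; the summary below is derived mechanically from
# the code, so it can be regenerated whenever the class changes.
class Account:
    """Tracks a customer's balance."""
    def deposit(self, amount):
        """Add funds to the balance."""
    def withdraw(self, amount):
        """Remove funds from the balance."""

def summarize(cls):
    """One line per member: a miniature of the kind of report a UML or
    documentation tool keeps in sync with the source."""
    rows = ["%s: %s" % (cls.__name__, inspect.getdoc(cls))]
    for name, member in inspect.getmembers(cls, inspect.isfunction):
        rows.append("  .%s() -- %s" % (name, inspect.getdoc(member)))
    return "\n".join(rows)

print(summarize(Account))
```

A report like this is the textual cousin of a class diagram: it tells a nonprogrammer what the project's pieces are and do, without anyone maintaining the document by hand.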

Finally, UML modeling is perceived as a fine way of avoiding the hard work of actual coding. This is probably a perception of those who have not used UML at length or well (or both). Again, if one has committed to a tool that offers code generation and document generation, then much of the time spent modeling just is time spent coding and documenting; hence, UML contributes to the work of a software project. Moreover, the work of actual coding is often "hard" because there has not been sufficient modeling either to conceptualize the project to begin with or to avoid unexpected programming bugs. Thus, and in my own experience, UML indeed avoids the hard work of actual coding -- yes, by merit of making coding easier to manage overall.


[1] A quick history and overview can be found under "Unified Modeling Language" Wikipedia (Accessed July 21, 2006)

[2] "How Do You Use UML?" Slashdot (Accessed July 21, 2006)

[3] "UML Pad" (Accessed July 21, 2006)
http://web.tiscali.it/ggbhome/umlpad/umlpad.htm


Thursday, July 13, 2006

Genesis Inflatable Space Craft

Yesterday a US tycoon, by means of a Russian space venture, launched the Genesis inflatable spacecraft into high orbit. Apparently it uses a very flexible outer shell padded with layers of tough materials, including Kevlar (used in police vests). This allows the zeppelin-like ship to withstand cosmic debris, such as micrometeors that zip around at thousands of miles per hour.

I find it encouraging that while the US government put $100 billion into the silly science project of the International Space Station, a Vegas hotel-chain millionaire pledges only $500 million and things move along fine. This is great news for the commercialization of space. Eventually Robert Bigelow wants to have a space hotel by joining several modules together. Well, I think he's the man to do it if anybody can, and he certainly seems motivated.

Back in 1967 there was a short science fiction story by Larry Niven, titled Flatlander, that made mention of an inflatable expansion bubble, a similar type of craft that provides temporary space for cramped space travelers:
  • "...The bubble had inflatable seats and an inflatable table and was there for exercise and killing time but it also provided a fine view; the surface was perfectly transparent.... "

Too bad Niven wasn't bent on patenting every idea he ever had, as modern corporate executives do. He could have taken up the newest hobby of profit by technology lawsuit.

Book Review: A Mind of Its Own by Cordelia Fine

Your Brain Is Continually Messing With Your Mind

A Mind of Its Own: How Your Brain Distorts and Deceives
Cordelia Fine
New York: W.W. Norton & Co., 2006
232 pages. $24.95 hardcover

Do a quick search on the Internet for Cordelia Fine's picture, and you'll find a single shot. Situated among stately grey-bearded professors is a too-grainy, blurred image of a smiling, elf-like young woman, posed before a tacky, tartan-striped background. Given the low quality of the picture, as compared against the color-shaded, well-focused web images of her academic peers, the university must not overly value her presence as a faculty member.

If you too quickly took the bait of my analysis in the previous paragraph without suspicion, then you should likewise take a fast trip to your local bookstore and immediately purchase one of the most engaging books on social psychology and brain function in recent memory. Unfortunately, Fine has convinced me that my memory is probably about as objective as my mother-in-law's child rearing advice. And my memory is not the only worry.

Here is a confession: I giggle when I’m nervous. And it has cost me dearly. Break down and reassemble an M16A2 in some number of seconds while being yelled at. I focused. I failed. Then I giggled. That one cost me professionalized psychic trauma in the mid 1980s. Listen agog to my siblings’ first-person accounts of grandmother’s stroke. I paused. I glanced up. Then I giggled. That one cost me two years of July frost at family reunions in the early 1990s. Just what is it with this Benedict Arnold brain of mine? Fortunately, Fine’s latest work gives a candid and lucid analysis of just when such “inborn mind-bugs” haunt our interactions with others.

The book's chapters unfold according to its thesis, which is brutally stated in the last paragraph of its pithy, two-page introduction: "Your brain is vainglorious. It's emotional and immoral. It deludes you. It is pigheaded, secretive, and weak-willed. Oh, and it's also a bigot. This is more than a minor inconvenience." Yes, the truth hurts.

It's moments like this when I miss Descartes' innocent view of a mind which can transparently run reconnaissance on its own tactical operations. In an odd way, however, Fine's project is a redux of Cartesian doubt, yet without presuming an optimistic, transparent a priori understanding of the self. That's what makes her use of the most recent empirical studies in social and behavioral psychology so accursedly effective.

In some ways, reading Fine’s work reminds me of another recently noteworthy book in psychology, The Man Who Mistook His Wife for a Hat (by Oliver Sacks). Therein, one is exposed to all manner of psychological mis-functions which, in their own peculiar way, reveal much about the operations of our own minds. What’s different about Fine’s book is that many of the mis-functions of mind are not about damaged brains, but about brains which are perfectly functional. Thus, where in most investigations of psychology the reader is safely removed from the object of study, Fine’s book pulls one in, indeed continually placing the reader into an emotive state which I can only describe as “uncanny.” The whole book is a carefully researched -- and I assure you -- unsparing look into what our biological brain is doing behind our conscious mind’s back.

Even Fine herself confesses that the hammer of the social psychologist lands squarely upon her own family relationships. Part of the clever delivery of the chapter-by-chapter reporting and analysis is how Fine opens each new issue by using a sub-text drawn from her own life. As an example of our brain’s tendency to evade, twist, discount and misinterpret, yea even to make up evidence to retain that ever-satisfying sense of being right, Fine recounts her own four-year battle with her husband on the sublime culinary question of whether spaghetti should be strained with a colander or a sieve. Despite astonishingly lengthy (and apparently heated) discussions over this choice, a reasoned resolution to the matter has not been forthcoming. (I myself have been married nearly twenty years, and have not completely reconciled the issue in my own mind, though it was helpful to have reviewed and subsequently incorporated a new battery of pigheaded justifications for my pet choice of straining implement.) Such an example drawn from common life easily allows Fine to transition into the more academic investigations of such topics as belief polarization, initial impression bias, non-negotiable political commitments, and other related issues of pigheadedness.

While each of the book’s eight chapters nuances some particular distortion and deception of the brain, I was especially impressed by the last--appropriately titled, ‘The Bigoted Brain.’ I knew it was good, since (per my above confession) I giggled regularly even from the first paragraph, which was, true to her writing form, an opening miniature from Fine’s vacation with her husband in Scotland.

With a name like ‘Montgomery’, I’m sure I’m Scottish. Not that I have much, or even good evidence for this, but I like Scottish philosophers a bit too much, and I have, to cite Fine’s words, “a propensity toward thrift.” That is conclusive enough evidence for me. I’ve always desperately wanted to believe it anyway. There are other things I don’t want to believe, however, but I do; and as Fine recounts in case after case of research, the desperation and want thereby falls upon others. It’s that bigoted brain of mine. Don’t misunderstand me. When my crusty, WWII Navy pappy died a few years ago, gathered around his bed were his six Caucasian children: two who married African Americans; one who married a Jew; one who married an Asian; and, two who married Caucasians.

In such a family context, I’ve experienced the kind of frank and direct talk about race relations that would make a United Nations ambassador blush. Even so, Fine’s review of studies concerning bias, prejudice, and race have raised serious concerns even about my own “schemas”, as she calls them, those patterns of thought which provide an efficient means of extracting and interpreting information from a complicated world.

Take just one example: Ply American subjects with racist jokes, recording their responses using a “Ha!Ha!-ometer,” and they are suitably grudging with their humor. But give them some sort of distracting counting/memory task while they rate the jokes, and they will find such humor “much funnier.” In the length of job interviews, in assumptions drawn from first impressions, in social evaluations of women’s competency -- over and over, Fine slams the hammer squarely upon our brain’s “inability to reflect reality truthfully” under the disadvantages of time, stress, or previous social conditioning. Even if my mind has become egalitarian by long family exposure and honest interchange about prejudice, I am stung by the worry that my brain probably isn’t so virtuous in its evaluations of the other.

As one can see, I’ve been deeply affected by Fine’s work, so there really is no higher compliment I could pay her. In a word, the overall quality of this book is best stated by reference to its author's last name: it is indeed Fine.

Tuesday, July 11, 2006

Halting electronic gambling (further) subverts American political liberty

Today I happened to read an article describing how online betting is now under attack in Congress. [1] Essentially, Congress wants to introduce a bill that would prohibit credit cards and other forms of electronic payment for settling online wagers. Online wagering, according to Christiansen Capital Advisors, has been growing very quickly; it runs about $15 billion today and will advance to almost $25 billion by 2010, at least if current trends are allowed to continue. [2] Half of the capital for online gambling comes from the US, which makes sense, because Americans have a good portion of extra income to waste. However, the actual gambling companies are located outside the US. I think this last fact is the most important one, though there are several reasons why I think this move by Congress should be resisted.

First, it appears to me that Congress is trying to halt what most members recognize cannot be regulated, and for bad reasons. The problem, no doubt, is that we live in an era where responsible budgets are no longer generated by the two power-entrenched parties (Republican/Democrat). Internet gambling is being politicized in order to drain moneys from an industry that many consider "sinful."

There are several problems here, but the one I find most amusing, having been raised outside of Louisville, KY, is that horse racing is the singular exception to this clamp-down. Internet betting on dog racing would be illegal. Internet gambling on camel racing would be illegal. But horses? Oh, that's just fine. This shows that Congress is not interested in establishing and enforcing cultural mores best suited to society’s operation, but is arbitrarily passing laws to grab whatever easy money is available.

Second, in this case of limiting freedom of economic movement, there seem to be direct analogies to limiting the liberty of citizens in other areas. This has been a growing trend by the US government, and it really should concern anyone who believes that the right to use one’s own fairly gained economic resources is a right worth protecting.

Again, since horse racing would be excepted, this maneuver is clearly *not* an example of Congress protecting its citizens from their own moral or rational ineptitude. Consider the following: there is no federal law penalizing the failure to wear a motorcycle helmet, though riding without one is irrational (on any risk assessment). There is no federal law outlawing cigarette smoking, though smoking is irrational (for health reasons). Nor is there a federal law outlawing pornography, imported B-movies, or velvet Elvis posters, though these are all examples of commercialized bad taste. There are things that are bad for some people; however, the tyranny of minority stupidity should never legislate over the liberty of majority freedoms. Yet this is exactly what has happened in recent years, where every seeming whim of a power-expansive US government can be justified on the basis of safety or terrorism.

Again, this is where John Kindt, a business professor at the University of Illinois at Urbana-Champaign who has studied internet gambling, gets it wrong. He calls the Internet "the crack cocaine" of gambling, dramatically claiming that "There are no needle marks. There's no alcohol on the breath. You just click the mouse and lose your house." This is also known as overstatement for dramatic effect. No doubt it made a nice text-bite for the journalist, and for those scaring the pious mothers of college students.

Finally, the philosopher John Locke (the main influence on Thomas Jefferson) wrote that under the law of nature, every man has “a power not only to preserve his property—that is, his life, liberty and estate, against the injuries and attempts of other men, but to judge of and punish the breaches of that law in others” [3]. Our lives, our freedom of movement, and whatever land we own were all considered property in Locke’s view, and his writings spell out quite clearly the concept of natural property rights entitled to all persons. The idea of a constitutional government is to encode those natural rights into civil law. Through the accumulation of special interests and arbitrary tampering, the civil laws can subvert those rights, and with them the overall quality of life of the majority who live under that civil law. Here, we see Congress attempting to arbitrarily take property (money, in the abstract) and thereby subvert liberty.

I can’t wait to tell my kids what it was like in the old days, when the US Congress cared enough to guarantee the rights of individuals and the finances of the prosperous.


[1] "Online Wagering Under Attack in Congress," Yahoo News (Accessed July 11, 2006)

[2] "Internet Gambling Estimates," Christiansen Capital Advisors (Accessed July 11, 2006)

[3] John Locke, "The Second Treatise of Government," Chapter 7: Of Political or Civil Society, 1690. Founder's Library (Accessed July 11, 2006)

Sunday, July 02, 2006

I'm a Fan of Mr. Mustard; It's the Best

I have spent my life searching both the US and Europe for the ultimate mustard. And of all of them I've tried -- and I've had more mustards than is reasonable -- the singularly best among them is the humble Mr. Mustard. The look on this man's face expresses it all, and someday in mustard Valhalla, we shall spread the Mr. Mustard upon our Schnitzels, and together cry the chemically induced tears of joy. Even the stylized blurriness of the picture is no doubt due to the residual wet-weep in his friend's eyes after he himself partook of a pleasing round of that special nectar slathered upon some now long-digested virgin frank.

Of course, those who eat the plastic-putty of mustards, French's Yellow Mustard, end up in mustard hell, as has been the destiny of this man. Choose wisely, oh mustard lover, while you still can. Perhaps someday I shall seek out the immortal masters of the Mustard Museum to see if they are as wise in this matter as is rumored.

Recent Ant Study and Genetic Algorithm Selection Techniques

Recent studies on non-pheromone-navigating ants have shown that by manipulating an ant's stride length, one can determine whether such insects use a pedometer-like mechanism to measure distance, that is, whether they count off their steps with an "internal pedometer." It turns out that ants do appear to have such an internal mechanism [1].

Such a study would have direct application in determining fitness values for chromosome selection in genetic algorithms. There are several different selection criteria for mating chromosomes. A simple geographic mating scheme might be to select a mate within a one-unit radius. Consider a matrix of chromosomes:

10101 10101 11111
11101 11100 00001
11001 00101 00101

The center chromosome, 11100, could mate with any of the eight chromosomes surrounding it. However, after a few iterations, such a limited selection range could bring about homogeneity among the chromosomes. Thus, it would be desirable to extend the mating range. Consider another matrix of chromosomes:

00110 00001 00010 00100 00011
00110 10101 10101 11111 00010
00111 11101 11100 00001 00011
00111 11001 00101 00101 00011
01000 01000 01001 01010 00101

In this case, chromosome 11100 could mate with any of the original eight, but also with an additional 16 chromosomes. In fact, counting from right to left, bit five of 11100 is a 1; thus, this could indicate a mating range of two units. If bit five were a 0, it could indicate a mating range of one unit. Naturally, adding more bits would allow a wider range of values for establishing the mating range (e.g., 00 = range of 1, 01 = range of 2, 10 = range of 3, 11 = range of 4).
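As a minimal sketch of this bit-controlled mating scheme in Python: the grid contents, the function names, and the choice of toroidal (wrap-around) edges are all my own illustrative assumptions, not part of the original idea beyond the bit-five convention described above.

```python
# Sketch: choosing mates on a toroidal grid of bit-string chromosomes,
# where one designated bit of each chromosome sets its own mating range.

GRID = [
    ["00110", "00001", "00010", "00100", "00011"],
    ["00110", "10101", "10101", "11111", "00010"],
    ["00111", "11101", "11100", "00001", "00011"],
    ["00111", "11001", "00101", "00101", "00011"],
    ["01000", "01000", "01001", "01010", "00101"],
]

def mating_range(chromosome, bit_index=4):
    # Counting from right to left, bit five (index 4) selects the range:
    # a 1 means a range of two units, a 0 means a range of one unit.
    return 2 if chromosome[-(bit_index + 1)] == "1" else 1

def candidate_mates(grid, row, col):
    # Collect every chromosome within the mating range of (row, col),
    # wrapping around the grid edges toroidally.
    r = mating_range(grid[row][col])
    n_rows, n_cols = len(grid), len(grid[0])
    mates = []
    for dr in range(-r, r + 1):
        for dc in range(-r, r + 1):
            if dr == 0 and dc == 0:
                continue  # a chromosome does not mate with itself
            mates.append(grid[(row + dr) % n_rows][(col + dc) % n_cols])
    return mates
```

For the center chromosome 11100, bit five is a 1, so its range is two units and `candidate_mates` returns the original 8 neighbors plus the additional 16, for 24 candidates in all.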

Selection might also vary over time, where large differences in chromosome morphology give faster-climbing fitness in early generations, while smaller differences in chromosome morphology give faster-climbing fitness in later generations. This is not a new idea, since local mating of chromosomes has been shown to be better than merely random breeding within a chromosome population (i.e., better than "panmictic" selection) [2].
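One simple way to realize that time-varying selection, sketched here as a hypothetical linear schedule (not taken from the cited paper), is to shrink the mating range as the generations pass:

```python
def scheduled_mating_range(generation, max_generations,
                           max_range=4, min_range=1):
    """Linearly shrink the mating range over a run: a wide range early
    encourages mating between morphologically different chromosomes,
    while a narrow range late favors local refinement."""
    frac = generation / max(1, max_generations - 1)
    return round(max_range - frac * (max_range - min_range))
```

On a ten-generation run this starts at a range of four units and ends at one; any monotonically decreasing schedule would serve the same purpose.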


[1] Bjorn Carey, "When Ants Go Marching, They Count Their Steps," Yahoo News (Accessed July 1, 2006)

[2] Robert J. Collins and David R. Jefferson, "Selection in Massively Parallel Genetic Algorithms" (1991), CiteSeer.IST: Scientific Literature Digital Library (Accessed July 2, 2006) http://citeseer.ist.psu.edu/cs