Connected Brains: BrainView

Brainview: a collection of Blogs and Columns about brains, networks, complexity and science

In the essay "Unpacking my library" Walter Benjamin writes: "Of all the ways of acquiring books, writing them oneself is regarded as the most praiseworthy method." He may have been right, but nowadays not everyone has the time or inclination to write a book, or, for that matter, to read a book or any text that does not fit on the screen of an iPhone or BlackBerry. Benjamin was a master of the essay, a short, well-written piece of opinion with no limits to the topics that can be addressed. Would he have loved Blogging and Twittering? Perhaps he would have been the first to see the boundless new possibilities opened up by publishing texts on the World Wide Web. Brainview is a collection of Blogs on a variety of topics, all related in some sense to brains, networks, complexity and science. In contrast to the rest of Connected Brains this page is about opinions, not hard scientific facts. If you want, you can send a comment on any of the Blogs. Comments of sufficient quality and interest for readers of Brainview will be published below the texts they refer to.

Table of contents:

Plato in the MRI scanner (3-5-2014)
The shopping bag (26-4-2014)
The Library at the End of the World (29-3-2014)
Inside out (14-12-2013)
Closing the Circle (14-12-2013)
Kandel's Vienna (31-8-2013)
On the Edge (1-12-2012)
Phi in the sky (17-11-2012)
Never mind, it's mere matter (31-3-2012)
How can evolution be constructive? (24-3-2012)
Do you love my connectome? (21-2-2012)
Apple's bite (29-1-2012)
The Small Brain Project (7-1-2012)
Are you out of your mind? (26-11-2011)

Plato in the MRI scanner

By: Cornelis Jan Stam

Date: 3-5-2014

In her book “Plato at the Googleplex” Rebecca Goldstein imagines a Plato who has been transported to our modern time, and who is doing an extensive tour to promote his best seller “The Republic”. He starts at the Googleplex, where he hardly makes it to the lecture hall since he gets absorbed in a discussion with his media escort and a Google programmer. In the rest of the book he has many interesting meetings and discussions, until in the final chapter he volunteers to participate in an fMRI experiment in the laboratory of the famous doctor Shoket, a dedicated and ambitious neuroscientist who is convinced that neuroscience has just about solved all the problems of philosophy. Plato, who is apparently very interested in the enormous progress of science, and of neuroscience in particular, has done a lot of homework on his new iPad, but – being a philosopher – he still has many questions. He asks what the famous neuroscientist thinks about the philosophical implications of his work. “Shoket: With the neuroscientists explaining consciousness, free will, and morality, what’s left for the philosopher to ponder? Plato: Perhaps self-deception?”

This short exchange nicely summarizes Goldstein’s central question: does modern neuroscience make philosophy superfluous? According to Shoket, who represents a point of view held by many scientists, it clearly does. Many deep questions that have puzzled philosophers for more than two millennia can now be rephrased in scientific terms, and often turn out to have exact, and sometimes surprising, solutions. We have an understanding of our universe, matter, living organisms and their development that would probably have been awe-inspiring for the ancient Greeks; Goldstein’s Plato is duly impressed. In fact, this state of affairs does not come as a surprise: modern science and modern logic are the legitimate children of philosophy. Do these brilliant, highly successful children still need their mother? They can do their homework perfectly well without her help, and no longer need her good advice in all sorts of worldly affairs. It is symptomatic that serious philosophers like Patricia Churchland, together with her husband Paul one of the founders of the modern philosophy of mind, went back to college to study the nuts and bolts of neuroscience. Has mother lost her self-confidence?

However, it may be that Shoket’s self-confidence is slightly premature. The reply of the reincarnated Plato was quite interesting: “perhaps self-deception?”. Where it reigns, science seems supreme; but it does not reign everywhere. Sometimes it behaves like an overly enthusiastic adolescent: it is blatantly unaware of its own boundaries. It is certainly possible, perhaps even quite likely, that neuroscience will come up with a scientific description of the brain processes underlying phenomena such as consciousness, decision making and free will. Perhaps morality will be reduced to dopamine levels going up or down, and the amygdala doing overtime. However, the irony is that this will not drive out the philosophical problems; it will just blow them up. How should we deal with humans (ourselves) if they are deterministic neuronal machines? Can we punish them if they behave badly? Can a “machine” behave badly? How do you educate machines? Do smart machines have more right to essential resources? Do we need laws to protect computers, robots, smart software? What should you do with machines that keep asking questions? Fortunately, at least the last question can be addressed; we just have to read Plato’s dialogues to find out how the Greeks solved this problem in 399 B.C.E.

The shopping bag

By: Cornelis Jan Stam

Date: 26-4-2014

This is the story of a shopping bag. It was completely black except for a short text in bold white letters. The text was: “God is dead. Nietzsche is dead. And I am not feeling very well myself either. Woody Allen”. Under the assumption that the aim of human life is to sell other people things, this text was both amusing and surprising. One might expect that a shopping bag, in the order of things sometimes referred to as a “free market”, is a device that contains stuff you have bought on the inside, and a text on the outside that induces a strong desire to buy more stuff. From this perspective it does not seem Woody Allen will be of much help. The gloomy perspective of a universe where Nietzsche has killed God, God has killed Nietzsche, and even Woody Allen is starting to lose faith is not one to create proper incentives for consumers. Furthermore, at least in the Netherlands, only a few bookshops are left, so there is very little to put into shopping bags anyway.

Perhaps an explanation for this remarkable shopping bag text is the fact that April, in the Netherlands, is the month of philosophy. The Netherlands has days, weeks and months for almost everything, from fathers, mothers, secretaries and animals to trees and kings, so the fact that there is also a month of philosophy is not inappropriate. What is the goal of this month? Is it to promote books, lectures, courses, meetings? It seems we even have a philosophers’ G8. Clearly, there is hope. At the same time, things are getting slightly out of hand at the Ukrainian borders. It is not quite clear what the king’s Greek neighbour is up to. This mixture of hope and worry, supplemented with sufficient spare time, is an ideal condition for philosophy. As Bertolt Brecht said, in slightly different words: “First we have dinner, then there is time for thought.”

So is this black shopping bag intended to induce people to buy philosophy books? I cannot help fantasizing about the person who designed this bag. I guess some company got an assignment to design a bag appropriate for the theme “month of philosophy”, and ordered a junior, perhaps promising person (a smart young girl?) to come up with a new idea. When this person came up with the Woody Allen quote the board was probably pleased: really funny, this Allen guy. This is something people can relate to. White on black; let’s do it. It will sell. At the same time I cannot help imagining that this person (my smart girl) has smuggled a little piece of true philosophy through the maze of the commercial system, a small message “in code”, so that nobody would take offence (but those who wanted to could see). Does this text refer to modernity as the moral vacuum created by the death of God and the birth of science? I have a beautiful solution to this problem, a complete philosophy that solves everything, including the meaning of life, but unfortunately it is too long to fit into the limited space available here. Therefore, I will just end with another quote from Woody Allen: “I do not want to become immortal by my work. I want to become immortal by not dying.”

The Library at the End of the World

By: Cornelis Jan Stam

Date: 29-3-2014

Of all the strange places I have known, the strangest place is the Library at the End of the World. To begin with, the end of the world is a rather unusual location. If you are pulled over for speeding and you inform the police officer on duty that your residence is the end of the world, you will probably stretch his imagination beyond breaking point and suffer the consequences; but then again: if you are concerned about places like the end of the world you are not likely to be a police officer. But the strangeness of the Library goes beyond its admittedly slightly exotic location. It is a library for sure. It is located in a nice neoclassical building, the kind of impressive architecture you expect to find in the center of large cities with a rich history and culture. Something like Het Rijksmuseum in Amsterdam. But if you enter it (assuming you were able to find it) you are in for yet another surprise.

One might expect that a good library has a lot of books. A better library would probably have even more books. This kind of thinking makes perfect sense; if it didn’t, why would decent people evaluate scientists along similar lines? But what about the Library at the End of the World? It is a huge building, with a lot of floors, corridors, rooms, staircases and yet more rooms and staircases, seemingly without end. But there are no books. Perhaps you are tempted – briefly – to think this is a Polare bookshop, or a modern university, but that is not the case. In fact, you are witnessing the library with the largest collection of books any library could ever have. This is the Library of all books that have never been written. Welcome.

Suddenly you realize you are not alone. An old man, very distinguished and with grey hair, is standing right next to you. He reminds you of someone, but you can’t remember who. Where did he come from? He asks politely, but with an overtone of suspicion, who you are, and why you are here. Perhaps he wants to be sure you are not a policeman, or a businessman who wants to buy old libraries. You try to explain that you were told about this magnificent place, with the largest collection of books of any library, and that you wanted to find out for yourself. Talking to the old man you suddenly realize he can hardly see you; he must have very poor eyesight. “Indeed,” the old man says. “You have come to the right place. We have the complete collection of all books that have never been written. Please take a look around if you want.” “Could you show me around? I am new here,” you ask, realizing too late that “show” might be slightly inappropriate. Fortunately, the old man – the librarian? – doesn’t seem to be offended. Without hesitation he leads the way, and guides you through this building, which seems to grow bigger and bigger, becoming almost endless, with each turn you take, each staircase you ascend, each room you enter. The man must know this place by heart. Empty bookcases are everywhere, each of them marked with a sign describing the books that are not there: “philosophy”, “mathematics”, “biographies”, “South American Literature”, “Russian foreign policy”, “Rise and decline of the university in the civilized world”. “Why is this place so big? It seems almost endless.” The librarian smiles. “The collection is rather large. We need a lot of space.”

After a tour that must have lasted hours you are exhausted and decide you have seen enough. But there are still some urgent questions. “This is a library, isn’t it?” The librarian nods. “But if it is a library, do you actually lend books to people?” The librarian smiles, politely, amused by so much ignorance. “Well, I guess it all depends upon how you define “lending”. But books do disappear from this library, all the time, in fact: as we are speaking.” Shocked, you look around, but you see nothing, nobody. “But there is nobody here, except you and me? Who is taking the books away?” After a short pause, apparently struggling to find the right words to explain the obvious to a person who simply doesn’t see, the librarian says: “This is the complete collection of all books that have never been written. It is complete; it will always be. Elementary logic. At the same time, books are disappearing, continuously, all the time.” He pauses again, and suddenly he seems to look you straight in the eyes: “Don’t you see?”

Inside out

By: Cornelis Jan Stam

Date: 14-12-2013

Overstating his case, Socrates famously claimed he knew nothing. Although he knew nothing, he learned quite a lot, and for this credit should be given to his methodology. According to Paul van Tongeren, Socrates’ method can be summarized by two questions: “What do you mean?” and “Is this correct?”. In essence one only has to repeat these questions, over and over again, until one reaches some true understanding. Young children have a natural appetite for this approach, especially the first part, sometimes annoying their parents to breaking point. No wonder the rulers of Athens wanted to get rid of Socrates. Socrates’ approach is essential for something one might call “clarifying one’s thoughts”. It is one key element of rationality. However, when we extend the domain of questioning from human affairs to nature at large, and try to describe, classify and understand what can be found “out there”, we enter Aristotle’s world of scientific, empirical research. This is also an aspect of rationality. Apparently we can learn either by “going inside” or by “going outside”.

A fascinating description of this dilemma is given by Daniel Kehlmann in his novel “Het meten van de wereld” (published in English as “Measuring the World”). This historical novel describes the lives of two giants of nineteenth-century German science: Alexander von Humboldt and Carl Friedrich Gauss. For the aristocratic Humboldt science meant going out. Together with Bonpland he went to South America, traveling to the most exotic and dangerous places, descending into the depths and climbing high mountains and volcanoes. He measured everything: temperature, humidity, height, magnetic fields, and recorded all his observations in meticulous reports, as a real Prussian officer should do. In many respects he is the ideal type of the natural scientist. Perhaps not coincidentally, his travels are not unlike Charles Darwin’s voyage on the Beagle. Both collected large numbers of specimens of minerals, plants and living organisms. Humboldt made great discoveries, for instance about the temperature in deeper layers of the earth, the relation between altitude and vegetation, and the origin of mountains and volcanoes, and became famous for these discoveries, even before his return. Unfortunately, lacking a sense of humor and essential writing skills, his reports of the discoveries were largely a summary of facts and close to unreadable.

In contrast, Gauss did not like to travel at all. As a schoolboy he annoyed his teacher by producing, within three minutes, a solution to the following problem: what is the sum of all integers up to 100? (You can test your own math skills on this one.) Gauss was always amazed by the extreme slowness of thinking in other humans, until he realized he was rather fast himself. Proceeding in the same way, he rapidly acquired what could be learned from mathematics and astronomy, and developed into the greatest mathematician of all time. For a true mathematician, work is everything; he even interrupted his matrimonial first-night obligations to write down a new discovery he had just made, resuming his more mundane carnal duties where he had left off. Fortunately, Johanna did have a keen understanding of the priority of things in the broader scheme.
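
For readers who take up the challenge: the trick usually attributed to the young Gauss (the standard account; the novel leaves the derivation to the reader) is to pair the numbers from opposite ends, so that every pair sums to 101:

\[
1 + 2 + \dots + 100 = (1+100) + (2+99) + \dots + (50+51) = 50 \times 101 = 5050,
\qquad \text{and in general } \sum_{k=1}^{n} k = \frac{n(n+1)}{2}.
\]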

The lives and discoveries of Humboldt and Gauss are intertwined in a beautiful way. Both know about each other from hearsay and newspapers. The crucial part of the novel is when these two celebrities meet at last in Berlin. Here the contrast between their personalities and philosophies of science becomes very clear. To the annoyance of Humboldt, his guest succeeds in making a mess of the social obligations in Berlin. At the same time it is clear both have a deep respect for each other’s achievements, although it seems that Gauss’s achievements will ultimately be more enduring.

Kehlmann ends his novel with two ironic twists. Humboldt is finally granted permission to go on a state-supported tour of discovery in Russia, accompanied by a large and rapidly growing entourage of fellow scientists, local authorities and military support. The whole Russian campaign becomes an ironic mirror image of the South American voyage. Humboldt is toured around the country as an old museum piece, traveling from reception to reception, unable to do any new research of his own. In the rare moments he succeeds in doing some measurements himself, the results are hopelessly off, and his young and more able colleagues have to do their utmost to save Humboldt from utter humiliation. Ageing and going out do not seem to go well together.

In contrast it might seem that Gauss was better off, but this is only partly true. Even though in old age he was spared the disgrace that was Humboldt’s fate during his Russian adventure, he was faced with declining skills, and increasingly took to practical work related to magnetism. Discoveries in mathematics are made by the young, and this must be frustrating for a mathematical genius growing very old. Nor did Gauss get much consolation from his family. He could hardly stand his second wife, and despised his son Eugen, who was no good at mathematics, got himself imprisoned and was subsequently forced to leave the country. Sailing to America, Eugen gets stuck temporarily on Tenerife. Here he comes across a small memorial dedicated to the great Alexander von Humboldt, who visited the same place many years before on his voyage to South America. Eugen, the son of the king of the inner world, crosses the path of the champion of the outside world. Perhaps the two roads to wisdom are hopelessly intertwined?

Closing the Circle

By: Cornelis Jan Stam

Date: 14-12-2013

In the final scene of Dave Eggers’s novel “The Circle” the main character Mae Holland visits her friend and colleague Annie, who has suffered a mysterious breakdown and is now in a comatose state. Fortunately The Circle, a mix of Google, Facebook and Apple, has excellent medical care for all its employees, and Annie is in a room where all her bodily functions are constantly monitored, including the activity of all parts of her brain. Mae wonders about the state her friend is in, and what thoughts she might be having. Then she realizes that it should be possible, with the technology of The Circle, to read Annie’s innermost thoughts. Isn’t it a pity, perhaps even a disgrace, that nobody can know these thoughts? Mae’s last words are “Why shouldn’t they know them? The world deserved nothing less and would not wait.”

In many respects The Circle is a modern sequel to George Orwell’s 1984. Both novels sketch a dystopia where “being seen” is the central element. The fact that you are constantly “being watched”, either by Big Brother or by a multitude of cameras, often hidden and with a direct connection to the Internet, allowing the whole world to see what is happening, has unprecedented consequences for behaviour and morality. Interestingly, Jeremy Bentham, champion of utilitarianism, already realized that being watched would automatically induce decent behaviour. He even designed a prison according to these principles, with a central watchtower from which all cells and their inmates could be seen all the time. Such prisons have actually been built, for instance the notorious “Koepel gevangenis” in Breda. What has changed since the time of Bentham and Orwell is not the moral principle, but the techniques available to implement it. The “Eye of God” has transformed into omnipresent cameras. However, the notorious television series “Big Brother” and its later copies have shown that constantly being watched certainly affects behaviour, but does not necessarily bring out the best in men. The “Golden Cage” is a challenge to moral philosophy.

Is it possible to escape? In 1984 the main characters find a place where they think they cannot be seen and have some valuable “private space”. Of course they are betrayed and find out to their horror that Big Brother knows everything, even their innermost fears, for instance of rats. Orwell had a prophetic view of “personalized torture”. In The Circle Mae sometimes escapes from observation with her canoe. However, this doesn’t last: she is caught when she takes a canoe without permission, all her actions captured on camera, and the police waiting for her on her return. Instead of being punished, or tortured, Mae is made to realize the moral deficits of her behaviour. Subsequently she becomes a convert to “total openness”, dragging her parents along in the culture of openness and even bringing about the death of her former boyfriend, who tried to flee from a horde of drones equipped with cameras. According to her updated morality he has only himself to blame, since he didn’t comply with complete “openness”, and refused to be helped / watched by other people.

This raises the question what Annie, former star of The Circle, is doing now that she has slipped into a coma. Is she hiding in her thoughts? To what extent are our thoughts private? How can anyone know what we think, as long as we do not show it in our facial expression, in what we say, do or write? It would seem that modern brain imaging techniques may present a challenge to this ultimum refugium of our private thoughts. According to modern neuroscience, mental events are correlated with, or perhaps identical to, physical changes in the brain, and these can be observed with appropriate techniques, notably fMRI and event-related potentials. Such techniques are being used to detect remnants of mental activity in comatose subjects, with the noble objective of identifying patients who may have some chance of recovery from coma. In a very constrained experimental context it has been shown that fMRI can distinguish between different thoughts of the experimental subject. If our thoughts are physical processes in our brain, if we “are our brain”, mindreading is just a technical problem. If surveillance cameras are no longer hindered by our bony skull, one can only look forward to the next episode of “Big Brother”.

Kandel's Vienna

By: Cornelis Jan Stam

Date: 31-8-2013

Modern science in general, and neuroscience in particular, display an amazing and rapidly increasing hunger to explain everything around us. With the birth of cognitive neuroscience and the advent and rapid development of new brain imaging techniques, the scientific explanation of all we dream of and care about seems to have become a matter of time. The ambitions increasingly go beyond the explanation of more basic functions related to perception, motor function, memory and attention, and now involve consciousness, free will, feelings and emotions. Beauty is in the brain of the beholder. To answer the question why the Mona Lisa is an exceptional piece of art, or why subject X is in love with subject Y, we mainly need proper funding and a state of the art scanning device. The main question is: what brain areas are activated? Is it the amygdala, or do we need to include the insula? And, of course, what is the default mode network doing?

In view of these concerns one might be worried that any new book addressing the relation between science and art would be another episode of the “scanner knows it all” saga. However, Eric Kandel’s book “The Age of Insight” miraculously succeeds in investigating the interface between art and science without falling into the trap of explaining away all that is interesting. Several factors may explain why Kandel succeeds where others have not. First of all, Eric Kandel, author of one of the most influential textbooks of neuroscience, and Nobel prize winner for his work on the molecular basis of memory, is one of the most brilliant neuroscientists of the twentieth century. Secondly, as far as one can judge from his book, he is a devout lover of art and has an enormous knowledge of the subject, in particular painting and Austrian modernism. In addition, he is an excellent writer. This was already evident in his autobiography “In Search of Memory”, one of the finest illustrations of a life in science. In fact, the autobiography holds the key to understanding how Kandel succeeds in dealing with art and science in “The Age of Insight”.

When – like Sigmund Freud – Erich Kandel had to flee Vienna from the Nazis, he changed his name from “Erich” to “Eric”. Later, in the United States, his father-in-law, a psychiatrist, suggested that he should become a psychoanalyst as well. He may have been tempted, but decided to go his own way, and took up the study of the fundamental mechanisms of memory using the humble Aplysia as a model. In retrospect one could say he continued where Freud, with his “Project for a Scientific Psychology”, had given up. Kandel never lost his interest in the Vienna of his youth, and in The Age of Insight he uses it as a kind of focal point to organize his discussion of science and art. More specifically, he asks what we can learn about the relation between science and art by looking at the emerging interest in the subconscious in fin de siècle Vienna.

Kandel gives a fascinating description of the life and work of Sigmund Freud, and places him in the context of medical science in the Austria of the late nineteenth and early twentieth century. He shows how the study of the unconscious did not start with Freud, but emerged naturally from an increasing interest within medicine in “looking underneath”, and discovering the underlying mechanisms of disease. But fin de siècle Vienna was not only Freud. This relatively small capital of a declining empire was a melting pot of new developments in philosophy and art as well. A group of Viennese philosophers, forming the “Wiener Kreis” (Vienna Circle), developed a new approach to philosophy known as “logical positivism”. Ludwig Wittgenstein and Karl Popper, although only indirectly related to this group, were strongly influenced by it.

Kandel’s main interest however lies in Austrian modernism: the writer Arthur Schnitzler and, in particular, the three most important Austrian painters of the time: Gustav Klimt, Oskar Kokoschka and Egon Schiele. He shows how these painters were all influenced by the intellectual climate of Vienna, including developments in medicine and psychoanalysis. In contrast to Freud, however, these artists developed new ways to “visualize” hidden layers of the mind in their portraits. Ironically, it seems that Freud, despite his many sessions with Viennese ladies on his sofa, did not understand women very well, while the painters were the real experts in the field. Interestingly, Freud’s own grandson Lucian would later become a famous painter. Clearly, there are more avenues to the depths of the human soul.

In the last part of the book Kandel attempts to cross the bridge between neuroscience and art by discussing recent developments. In particular he gives a very readable overview of some of the most important discoveries that have been made with respect to the “emotional brain”. In contrast to Marvin Minsky, who suggests in his book “The Emotion Machine” that emotions are just another way of thinking, Kandel is more careful with proposing psychological theories or explanations of our emotions and appreciation of art. However, he does believe that modern neuroscience is slowly catching up with Freud’s intuitions. Our deepest emotions and feelings are rooted in the brain. While art and science cannot be “reduced” to each other, they are both aspects of a single reality. Kandel concludes: “The unconscious never lies”. Freud was right after all.

On the Edge

By: Cornelis Jan Stam

Date: 1-12-2012

When everybody starts to agree it is time to move on. This is especially true in science. A field of science with a clear topic, a well-defined set of problems to work on and textbook procedures for handling and solving these problems is usually characterized by a high level of agreement between all participants. Kuhn referred to this enterprise of “puzzle solving” within the boundaries of generally agreed upon paradigms as “normal science”; when most of the work has become normal, the field can be called “mature”. Most of the work is done within the safe boundaries of mature science, but this is not where the fun is. Think of knowledge as a large, continually growing area or island on a plane. It is bounded on all sides by the huge unknown, an ocean of things that are waiting to be discovered. The existence of some parts of this ocean can be guessed (the known unknowns); of other parts we do not even know we don’t know them: Rumsfeld’s “unknown unknowns”. Normal or mature science resides at the center of the island, far away from the coast. Madness is far out on the ocean, with land no longer visible. It is at the beach – the edge between what we know and what we do not know – where the party is.

Usually autobiographies are not as good as biographies. The inside perspective may distort the true story. However, Benoit Mandelbrot’s posthumously published memoir, “The Fractalist: Memoir of a Scientific Maverick”, is an exception to this rule. Mandelbrot invented fractal geometry and discovered the Mandelbrot set, considered the most complex object in mathematics. Interestingly, Mandelbrot’s life and career reflect to a large extent the nature of his scientific work: at various times he lived in Warsaw, France (Paris, Tulle), and at different places on the east and west coasts of the USA. Mandelbrot came from a highly gifted family, and his uncle Szolem was a distinguished professor of mathematics in Paris. His Jewish family succeeded in moving away to the west just in time to escape the forces of twentieth-century horror, notably Nazi occupation and the Holocaust. This erratic life may have influenced Mandelbrot deeply. Although his uncle Szolem desperately wanted Benoit to settle down to a decent academic career in mathematics, he did not follow his advice. In fact, almost obsessively, Mandelbrot always succeeded in choosing highly eccentric topics for study.

Mandelbrot did his PhD in Paris on Zipf’s law of word statistics (the power law describing the relation between the rank of a word and its frequency in languages). Later he ventured into economics, studying the price fluctuations of cotton, and discovering their similarity to Pareto’s law on income distribution. In fact he traveled along the rough edges of a large variety of different, seemingly unrelated fields of science. One of the few constant factors in his life was his long involvement with IBM, at a time when this company still had a very open mind with respect to basic research. Mandelbrot’s most famous discovery was the Mandelbrot set, which arises from the study of iterated quadratic equations. At low magnification the Mandelbrot set looks like a black insect, with a head, two wings, and several antennae and small globules sticking out. The real surprise comes when zooming in on one of the edges of the set: an effectively infinite world of geometric complexity is revealed here, with new copies of the original Mandelbrot set appearing at unexpected places and magnification levels. Clearly, the edge is where the fun is, and the Mandelbrot set is a beautiful illustration of this.
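
For the curious, the iteration behind all this complexity fits in a few lines. The sketch below is a minimal membership test, assuming the standard formulation (iterate z → z² + c from z = 0; the escape radius of 2 and the iteration cap are the usual conventions, not anything specific to Mandelbrot's memoir):

```python
def in_mandelbrot(c: complex, max_iter: int = 200) -> bool:
    """Test whether c stays bounded under the iteration z -> z**2 + c."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:       # once |z| exceeds 2 the orbit escapes to infinity
            return False
    return True              # still bounded after max_iter steps: treat as inside

print(in_mandelbrot(-0.5 + 0j))   # inside the main "body" of the insect
print(in_mandelbrot(1 + 1j))      # escapes after a couple of steps
```

Scanning such points over a grid in the complex plane, and zooming in near the boundary, reveals the endless structure described above.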

During his life Mandelbrot was never at home in any particular field of science. He was a controversial person, operating like a tourist bumping into islands of normal science, trying to show its inhabitants the beauty of the beach. Mandelbrot’s erratic path through science is unique, and impossible to copy. It does show, however, that it is important to go to the beach, and not to be afraid of getting one’s feet wet. The key message of Mandelbrot’s work can be summarized by the conclusion that the border between the island and the ocean may look infinitely complex, but also harbours deep geometric laws. The Edge between the known and the unknown is fractal.

Phi in the sky

By: Cornelis Jan Stam

Date: 17-11-2012

Despite gloomy reports on the future of the book and the publishing industry, there seems to be at least one field that blossoms as seldom before: books about brains. The stream of books explaining how our enchanted looms operate, and in particular how our neural networks do or do not relate to consciousness, seems endless. One might expect that at some point saturation must be reached: everything that can be said has been said and even all variations on the theme have been explored to exhaustion. The topic could fill Borges’ library. In view of this Giulio Tononi has achieved something close to a miracle with his new book Phi. It is a book about the brain, and in particular about Tononi’s theory of how consciousness emerges from integrated information in brain networks, but it is not like any other book on the topic. How is this possible?

The composition of the book is a mixture of Platonic dialogue, Dante’s Divina Commedia and references to numerous classic and modern novels, illustrated with colour reproductions, mostly of famous paintings and sculptures, on almost every page. The chief character is Galileo, father of modern science. In a dream he is acquainted with the major achievements of modern science, notably neuroscience, and the integrated information theory of consciousness. Three guides accompany him on his dream-journey: Francis Crick, Alan Turing and Charles Darwin. Together with his guides Galileo visits numerous people, often suffering from pleasantly illustrative neurological or psychiatric conditions, such as locked-in syndrome and cortical blindness, and is instructed by his guides to appreciate the lessons that can be learned about the brain and the mind from all this. No effort is spared to make this a real educational experience for Galileo: at one point, there is a conversation involving Marcel Proust (communicating via loudspeakers since he doesn’t leave his bed), Freud, Darwin and Emily Dickinson. In other scenes Galileo witnesses cruel experiments intended to illustrate the nature of epilepsy and pain; one wonders whether anyone would be willing to undergo awake craniotomy after reading Tononi’s Kafkaesque fantasies about the procedure. Perhaps Borges, one of the other protagonists, can help to make sense out of these fantastic stories.

Clearly Tononi’s hardest struggle is with qualia: the “what it really feels like” part of experience. The problem is that any theory that explains the working of the brain in terms of sophisticated information processing leaves open the question why subjective experience should exist in addition to all this information processing. After all, we are just our brain. Apparently Tononi wants to argue that qualia are a kind of Platonic “shapes” in the integrated core of brain networks. Something like circles or pyramids, but much more complex, and irreducible to anything simpler. While this line of reasoning has a nice seductive philosophical flavour, it fails to convince, if only because it is unclear how this theory could be tested, or, even better, falsified. Not surprisingly, Karl Popper is one of the very few celebrities missing in Galileo’s dream.

Phi (note that the Greek capital Phi consists of a central I for information and a circle symbolizing integration; the world is full of meaning if you know where to look) is a deeply mysterious book; adding to this, the extensive comments at the end of each chapter seem deliberately incomplete and sometimes misleading. The biggest mystery of all is the question why Tononi wrote this book. If his intention was to write a book to explain his theory to a broad general audience, it seems the composition of the book gets in the way of the message. Instead of one theory that is hard to comprehend anyway, we end up with a complete labyrinth of real and imagined references to history, art, philosophy and literature. If his goal was to produce a piece of art, there might have been more direct ways to do so, without the theory of consciousness getting in the way. Perhaps he hoped to do what Douglas Hofstadter did in Gödel, Escher, Bach: to clarify his message by illustrating it with art. The problem is that Hofstadter succeeded in writing a brilliant and unique book, where the illustrations from art, mathematics and music really are the message, while Tononi, like a tragic Greek hero, seems to get lost in his own Garden of Forking Paths.

Never mind, it's mere matter

By: Cornelis Jan Stam

Date: 31-3-2012

When the voyage of the Space Shuttle Challenger came to a premature end on January 28, 1986, the seven astronauts “touched the face of God”. Clearly, these words of consolation by Ronald Reagan make little sense in a deterministic universe where the proper diagnosis would be that some collections of atoms were suddenly re-arranged in a random way. This point of view clearly lacks the poetic inspiration of Reagan, but it is in close agreement with the way many neuroscientists view the mind and the brain. Our brain is made up of cells, in particular neurons and glia, and cells are made up of molecules and atoms, all abiding by the deterministic laws of nature. It is just matter. If you blow them up you may get a mess, but no soul is lost. Strict neurodeterminism has a long history, but has only recently gained great momentum due to the rapid advances in neuroscience, and in particular neuroimaging. The notion that consciousness is a process in the brain, and that thoughts are produced by sets of neurons just like urine is produced by kidneys, has been championed amongst others by Francis Crick and Dick Swaab. The philosopher Verplaetse has shown that if we take the logical implications of neuroreductionism seriously, the consequences are far-reaching; in fact concepts like free will, ethics and justice will suffer the same fate as the astronauts of the Challenger.

In “Who’s in charge?” Michael Gazzaniga takes up the challenge of defending free will in the context of neuroscience. Gazzaniga, an influential neuroscientist and one of the founders of modern cognitive neuroscience, certainly has the right credentials to be taken seriously. Much of his research was done on so-called “split brain” patients. These patients, suffering from intractable epileptic seizures, underwent surgery which disconnected their two cerebral hemispheres. Amazingly, especially in view of the animal experiments of Gazzaniga’s mentor Roger Sperry, split-brain patients were not only cured to some extent of their epilepsy, but also displayed little or no obvious behavioural or cognitive deficits. However, a long series of careful experiments by Gazzaniga and others showed that in fact communication between both hemispheres was disrupted, revealing the distinct character of the “left brain” and “right brain”.

In his book, based upon the prestigious Gifford lectures, Gazzaniga gives a fascinating overview of these experiments, as well as the history of neuroscientific thought, ranging from Gall to Lashley, Hebb and Sperry. In contrast to many other neuroscientists, Gazzaniga takes the discrepancy between our subjective sense of self and free will on the one hand, and the fact that our brains are ultimately “mere matter” on the other hand, very seriously. Instead of explaining the problem away, or invoking mysterious ghosts in the machine, he tries to defend an alternative view of the relation between mind and brain. In Gazzaniga’s view, our brains are complex networks that have evolved to adapt to our changing environment. To put it bluntly: “Our brain’s job description is to get its genes into the next generation”. With increasing size, connectivity has become increasingly sparse, and a hierarchical organization with interconnected specialized modules has arisen. Gazzaniga assumes the existence of an Interpreter in the dominant hemisphere that attempts to make sense out of the multitude of messages that are produced by the specialized, unconscious modules. The Interpreter, one might say, is a more polite version of Victor Lamme’s “kwebbeldoos” (chatterbox).

Gazzaniga thinks that consciousness and free will can be compatible with such a hierarchical organization of the brain. First, following the ideas of his mentor Roger Sperry, he suggests that consciousness is an emergent property of functional brain networks. The notion of emergence is crucial, but not very easy to grasp. The essence is that a system of interconnected components can have properties that none of the individual components possesses. For instance, a dot does not have the property “triangularity”; however, three dots arranged in a particular way can have the property triangularity. Examples of emergent properties can be found in many complex systems, ranging from clouds to civilizations. If consciousness is an emergent property of neural networks, it is clear that the neuroreductionist notion that the brain is a mere collection of neurons misses the point. The point is organization. The real challenge is to find out the nature of this organization: what is the neural equivalent of “triangularity”?

In the final chapters Gazzaniga takes the idea of emergence to an even higher level by considering interactions between people in communities. He argues that dealing with other people is one of the key tasks of our brain: we have to develop models of what other people may perceive, think and do in order to survive. Law is an example of the elaborate structures that can arise when the interactions between large numbers of people have to be organized such that a society as a whole can function and survive. Gazzaniga says: “My contention is that ultimately responsibility is a contract between two people rather than a property of a brain, and determinism has no meaning in this context.” Consciousness may be an emergent property of our brain, but our mind in its turn can influence our brain. In a system with circular causality it may be difficult to see where the action starts, and who is influencing whom. The mind may be a mystery, but it is not magic. Mind matters.

How can evolution be constructive?

By: Cornelis Jan Stam

Date: 24-3-2012

At the end of the KNAW symposium “Understanding and managing complex systems”, which was held on Monday March 5th at the Trippenhuis in Amsterdam, all four speakers were asked what, in their opinion, was the major question that should be addressed by complexity studies. Three speakers suggested various interesting topics that would be worthwhile studying, mostly related to the modeling and characterization of complex neural, commercial and geographic networks. Only one speaker, the mathematical biologist Martin Nowak, came up with a very short question that did not refer explicitly to complexity at all. He asked: “How can evolution be constructive?”. This deceptively simple question was probably better than any of the detailed suggestions made by the other speakers. It reminds us of the simple truth that in science questions are more important than answers. To give proper credit I would like to call “How can evolution be constructive?” “Nowak’s question”. It is worthwhile to give it some further thought.

Evolution is the principal theoretical framework of biology. It deals with the development of populations over time, and their ability to adapt with increasing efficiency to their environment. A remarkable feature of evolution is that things get more complex over time. This is true for individual agents, populations and whole ecosystems. Evolution toward higher levels of complexity, sometimes referred to as the propensity for self-organization, may also be observed in non-biological systems, ranging from patterns of clouds to cities and civilizations. So, an obvious question is: where does all this organization come from? The ubiquitous evolution toward higher levels of organization seems to be difficult to reconcile with the laws of thermodynamics: in a closed system energy is conserved (the first law) and the degree of disorder (entropy) can only increase or stay constant (the second law). So how can we have birds and bees in a universe that is heading toward its heat death? This is Nowak’s question.

It may help to give some thought to the nature of evolutionary processes. The three indispensable components of any evolutionary process are: random innovation, selective deletion, and multiplication of non-deleted innovations. In biological terms these three principles translate to random mutations, a lower chance of survival of some mutants, and reproduction of other mutants. If this process is repeated over and over again in a population, the population will get better adapted to its “environment”, whatever that may be. However, biological evolution may be a specific example of a more general process of problem solving. In mathematics, only relatively simple, mostly linear equations can be solved exactly. To handle large systems of (nonlinear) equations, heuristic methods are used. A famous example is simulated annealing: starting from random initial values for all the parameters involved, small random changes are tested and kept if they bring the system closer to the desired solution, and rejected if they do not (with a tolerance for accepting worse moves that decreases over time). This is an extremely robust and powerful algorithm for finding approximate solutions to multiconstraint problems. It works very nicely in the detection of modules in brain networks, for example. Biology, like simulated annealing, does not produce perfect solutions. It produces solutions that are good enough, a concept referred to as satisficing by Herbert Simon. Perhaps all evolutionary processes, biological as well as physical and mathematical, are a bit like simulated annealing, evolving by trial and error toward solutions that are “good enough” to deal with a large set of often conflicting constraints of all sorts.
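
A minimal sketch of such a simulated annealing loop, here minimizing a toy one-dimensional function rather than detecting modules in a brain network; the geometric cooling schedule and the exp(-Δ/T) acceptance rule are the textbook choices, not anything taken from the work mentioned above:

```python
import math
import random

def simulated_annealing(cost, initial, step, t_start=1.0, t_end=1e-3, cooling=0.995):
    """Keep improvements, occasionally accept worse moves, and shrink that
    tolerance as the temperature T decreases."""
    x, t = initial, t_start
    best, best_cost = x, cost(x)
    while t > t_end:
        candidate = step(x)
        delta = cost(candidate) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
            if cost(x) < best_cost:
                best, best_cost = x, cost(x)
        t *= cooling   # lower the temperature: fewer bad moves are tolerated
    return best, best_cost

# Toy usage: a bumpy function with many local minima.
f = lambda x: x * x + 3 * math.sin(5 * x)
solution, value = simulated_annealing(f, initial=random.uniform(-5, 5),
                                      step=lambda x: x + random.gauss(0, 0.3))
print(solution, value)
```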

An interesting implication of this line of reasoning is the following conjecture: what we observe at any specific time, whether we are dealing with physical, biological or social systems, are evolving solutions. Our world is crowded with solutions in progress, preliminary attempts to deal with sets of constraints. The problems that have simple solutions will disappear, or hardly be visible; the ones that do not have simple solutions constitute the evolving complex systems that surround us. In fact, the organization and dynamics of our own brain may be the ultimate example of an ongoing attempt to handle a large set of conflicting constraints. So, one way to think about Nowak’s question would be to assume that evolutionary processes tend to accumulate evolving solutions that go by the name of “complex systems”. The longer you wait, the more beautiful they get. Perhaps Zeno’s paradox can reconcile evolution with the second law of thermodynamics. If you keep slowing down, Achilles will never overtake the tortoise.

Do you love my connectome?

By: Cornelis Jan Stam

Date: 21-2-2012

If Sebastian Seung is right, we are our Connectome. The notion of a connectome, obviously inspired by that of a genome, was introduced in 2005 by Olaf Sporns. It refers to a full description of the network structure of the brain: all structural connections between all brain areas. In the USA, The Human Connectome Project, supported by the National Institutes of Health, was launched in 2010 to stimulate and coordinate research to obtain a full description of human brain networks. But for Seung this is not the real thing. He refers to this type of work as “regional connectomics”. In his view the real objective should be neural connectomics: a full description of all connections between all neurons in the human brain. Obviously, there is no lack of ambition here. According to Seung our neural connectome is what we are, what distinguishes you from me, and what determines our character and memories. Just like the full description of the human genome changed molecular biology, so will the elucidation of the complete human connectome be indispensable for understanding normal brain function and brain disease, and point the way to effective new treatments.

It seems Seung’s great hero is Antonie van Leeuwenhoek, who observed living sperm (Seung’s favourite cell) with his microscope in 1677. One of the themes in Seung’s book is “seeing is believing”. In his view major breakthroughs in biology and neuroscience were brought about by technological innovations such as the light microscope, silver staining and the electron microscope that allowed scientists such as Golgi and Cajal to see parts of the nervous system more clearly. The most recent advances involve electron microscopes linked up to ultramicrotomes and powerful computer systems to generate three-dimensional images of small networks of neurons and all their connections. Seung devotes many pages to fantasizing about the possible knowledge and insights that will be obtained when the full connectome of the human brain will have been revealed this way. Still he admits that “Even for rodent brains, however, finding an entire neuronal connectome is a long way off”.

C. elegans is a worm of one millimeter length. The full connectome of the nervous system of this worm, consisting of about 300 neurons, has been described after more than a decade of hard work, started by Sydney Brenner, the colleague of Francis Crick at the MRC unit in Cambridge. One might think that C. elegans would be the ideal showcase to demonstrate how knowledge of a full connectome leads to an explanation of behavior, but unfortunately things are not nearly so easy. For instance, the properties of the hundred different types of neurons in this simple nervous system are insufficiently known to build a powerful and comprehensive model. This makes one wonder whether Henry Markram, who is building a dynamical model of the whole brain, will be more successful. According to Seung this is very unlikely, since Markram lacks a crucial piece of information: the human neuronal connectome.

Connectome is a very entertaining and well written book, full of humor, original explanations and metaphors, perhaps one of the best introductions to neuroscience for a general public presently available. But there is also something strange about this book. Seung discusses brain networks for 276 pages but does not refer to any work done in the field of modern network theory at all. Graph theory is not in the book, nor in the index. Apparently, it is not part of Seung’s connectome. He mentions Olaf Sporns, but only to criticize the idea of a regional connectome. There is nothing about Sporns’s book “Networks of the Brain”, nor is there any reference to his work on complex brain networks. The only modern neuroimaging technique discussed is diffusion MRI. Clearly, this technique falls short of Seung’s high standards of resolution: anything less than electron microscopy simply won’t do.

In the final part of the book the gloves really come off. First he discusses the option of cryonics: freezing your body, or – if you can only spend 80,000 dollars – just your head, so that a future generation with far more advanced science can restore a better version of you for everlasting life. But even this may not be enough. The ultimate test of the idea that you are your connectome would be to upload your full connectome into a computer, so that you can be run as a simulation on future powerful computers. After all: “Heaven is really a powerful computer”. One really wonders about Seung’s own connectome.

Apple's bite

By: Cornelis Jan Stam

Date: 29-1-2012

Where does the bite in Apple’s famous logo come from? The first time I came upon a possible explanation was in a biography of the enigmatic computer pioneer Alan Turing written by David Leavitt, “The man who knew too much”. In the concluding chapter Leavitt describes how Turing commits suicide by eating from an apple he has injected with cyanide. On the last page of this book Leavitt remarks that “A rumour circulates on the Internet that the Apple that is the logo of Apple computers is meant as a nod to Turing. The company denies any connection; on the contrary, it insists, its apple alludes to Newton. But then why has a bite been taken out?” David Leavitt comes up with an original theory of his own: he suggests that Turing’s suicide refers to the fairy tale of Snow White, who falls asleep after eating a poisoned apple. Indeed, Turing had always had a childlike addiction to the movie Snow White. That might explain the bite, but it does not solve the mystery of Apple’s logo. Why not ask Steve Jobs?

This is exactly what Walter Isaacson, author of “Steve Jobs. The biography”, did. Confronted with the Alan Turing hypothesis of Apple’s bite, “He [Steve Jobs] replied that he wished he would have thought about it, but it wasn’t true.” [page 12] But perhaps we should not accept this explanation too soon. Where did Apple’s logo come from in the first place? According to Jobs: “I was on one of my fruit diets, he explained, and it sounds fun, easy, and not at all intimidating. “Apple” took away the sharp edge of the word computer. Also, it would imply that they would be above Atari in the telephone book.” [page 89] This explains the Apple, but does not explain the bite. It seems the bite was originally the idea of art director Rob Janoff. Considering a modern version of Apple’s logo, “Janoff came up with a simple apple in two formats: one complete, and one with a bite taken out. The first one looked too much like a cherry and therefore Jobs chose the one with a bite.” [page 109] So, the bite was just a way to distinguish the apple from a cherry. Do you believe this?

Steve Jobs was a perfectionist who was obsessed by the idea of merging design and technology in high quality products. His father, Paul Jobs, taught him that even the inside of an apparatus should look perfect, even if no one would ever see it. One of the most imaginative of Jobs’s products was the first Macintosh, named after the McIntosh, the favorite apple variety of Jef Raskin. Did you know that the signatures of the whole Macintosh development team, including that of Jobs, are on the inside of each computer? Jobs must have loved this way of leaving behind a hidden message. As CEO of the highly creative company Pixar, Jobs collaborated closely with Disney. Snow White was one of the early successes of Disney. This brings us back again to Turing. Turing loved Snow White, but he also loved codes. His breaking of the code of Enigma, the German secret coding machine, was possibly the largest single-person contribution to the allied victory in the Second World War. Is it likely he left behind a secret message with his apple? And what about Jobs’s Apple? Perhaps true creativity always comes with a bite.

The Small Brain Project

By: Cornelis Jan Stam

Date: 7-1-2012

The weight of the human brain is about two percent of total body weight. Yet its energy consumption, about 20 Watt, is 20 percent of the total energy consumption of the body. One might conclude that, from the point of view of the body, “losing your head” would be an excellent way of saving energy. Apparently, some people – notably politicians, but one might think of other categories as well – are already performing experiments on themselves according to this principle. Unfortunately, as might be deduced from observations of chickens running around without heads, the three pounds of jelly-like mass inside our skull do seem to be necessary to bring some minimal organization to our behaviour. The title of Donald O. Hebb’s classic 1949 book “The Organization of Behavior” was very well chosen: we need our brains to organize our actions. If this is the case it might be worthwhile to learn a bit more about how our brains work, and how they fail in the case of disease. We can do this in the Big Way or the Small Way.

If you like the Big Way (big is beautiful) you should consider the Blue Brain Project. The ambition behind this project is to put all the information about neurons and brains that has been accumulated in the last decades into one all-embracing computer model. This ambition seems to be the equivalent in computational neuroscience of the search for a TOE (theory of everything) in particle physics. Once we have all the details in the model, the rest – explaining consciousness, curing neurological and psychiatric disease – should not present too many difficulties, if only enough funding is made available. A nice illustration of this kind of ambitious modeling of the brain is the PNAS paper by Izhikevich and Edelman. The major observation made with this model was the fact that the firing of a single neuron makes a difference for the whole brain, in agreement with what one might expect in a deterministic chaotic system. The ambitions of Markram go beyond this: the hope is to fulfill the ultimate dream of reductionism.
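
That sensitivity to a single spike is exactly what deterministic chaos predicts. A stand-in illustration (the logistic map, a classic toy system, not the Izhikevich–Edelman model itself): two trajectories that start a billionth apart end up completely different.

```python
# Sensitive dependence on initial conditions in a deterministic chaotic system:
# two logistic-map trajectories differing by one part in a billion diverge
# completely within a few dozen iterations.
def logistic(x, r=4.0):
    return r * x * (1 - x)

x, y = 0.400000000, 0.400000001
for step in range(1, 51):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(step, round(x, 4), round(y, 4), round(abs(x - y), 4))
```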

Reductionism is a scientific program that consists of two parts. The first part involves taking complex objects like the brain apart to discover the fundamental building blocks and their interactions; the second part consists of putting all the building blocks together again to reconstruct a working system. Reductionism has been exceptionally successful in the first part, but seems to have postponed the second part forever. At the very least, attempts to reconstruct whole brain models incorporating all we know about the fundamental building blocks constitute an audacious program to start work on the second part. Unfortunately, this approach may be an expensive version of Zizek’s “right step in the wrong direction”. To discover the laws of gravity you do not have to model each rock on the moon; perhaps to understand the brain we do not have to model each individual neuron.

The problem in neuroscience is not a lack of data, but a lack of ideas. The problem with computational neuroscience in particular is that it sometimes confuses number crunching with concepts. In physics mathematical modeling reflects the maturity of a scientific field. In neuroscience mathematics more often seems to reflect the availability of cheap computing power rather than the depth of understanding. As Gabriel Silva suggested, there is “The need for the emergence of mathematical neuroscience: beyond computation and simulation”. A major symptom of the immaturity of mathematical modeling in neuroscience is that we do not know what we can ignore. There is no question about the need for powerful models in neuroscience. As Bartlett W. Mel said, “In the brain, the model is the goal”. The question is what kind of models we should be looking for. We could think of the value of a model as the ratio of its explanatory power to its complexity. According to this standard Hodgkin and Huxley’s model of the action potential ranks as first class science. In modern network theory a breakthrough was achieved by the deceptively simple models of Watts and Strogatz and of Barabási and Albert. Perhaps we need a Small Brain Project to achieve something similar for the whole brain. Big is not always beautiful; sometimes less is more.
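
To get a feeling for how deceptively simple the Watts and Strogatz model actually is, here is a minimal sketch in Python (the networkx library and the parameter values are my own illustrative choices, not taken from the original paper): rewire a ring lattice with a small probability and compare its clustering and path length with those of a random graph of the same size.

    # Minimal sketch of the Watts-Strogatz small-world model; parameters are illustrative.
    import networkx as nx

    n, k, p = 1000, 10, 0.1  # nodes, neighbours per node, rewiring probability

    ws = nx.connected_watts_strogatz_graph(n, k, p)      # rewired ring lattice
    rnd = nx.gnm_random_graph(n, ws.number_of_edges())   # random graph of the same size

    for name, g in [("Watts-Strogatz", ws), ("random", rnd)]:
        # Guard against the (rare) case that the random graph is disconnected.
        giant = g.subgraph(max(nx.connected_components(g), key=len))
        print(name,
              "clustering:", round(nx.average_clustering(g), 3),
              "path length:", round(nx.average_shortest_path_length(giant), 2))

The typical outcome is that the rewired lattice combines a short average path length, comparable to that of the random graph, with a much higher clustering coefficient: the small-world signature that made such a parsimonious model so appealing.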

Are you out of your mind?

By: Cornelis Jan Stam

Date: 26-11-2011

According to Sir Ken Robinson we are all creative. This is the good news. The bad news is that, after some proper education, virtually nothing is left of it. Creativity, in serious and law-abiding adults, is the exception rather than the rule. It is the privilege of a handful of great minds, artists, scientists and the like. Most of us simply have a hard time earning a decent living, and prefer to spend our spare time playing golf, drinking beer and doing other nice things. How can our educational system be held responsible for this regression from creative babies to boring adults? According to Robinson our educational system, and many of the organizations in our society as well, are inspired by the philosophy of the Enlightenment and the requirements of the Industrial Revolution. This is reflected in the factory-like processing of batches of children, labeled by year of birth, to produce well-behaved citizens with proper academic qualifications. We are trained in logical thinking and factual knowledge, in a linear way; our left hemisphere just loves this. Unfortunately, at a time when we cannot predict what the world will look like next week, this nineteenth-century approach is a recipe for disappointment rather than solutions.

What should we do? We should go out of our minds. Robinson stresses three important ideas: imagination, creativity and innovation. Imagination is the ability to see before the mind what is not present before the senses. We all have this ability; when we dream we are even experts in this field. The problem is that we do not value this ability enough. Creativity is the process of coming up with new ideas that have value. Imagination is the well from which creativity springs: our imagination comes to life when we play around with new ideas, think of unlikely scenarios, crazy solutions. Real creativity requires that we learn to detect the few precious gems amidst the multitude of fantasies. We need to learn to detect the valuable amongst the possible. This takes courage since errors are unavoidable, even essential. Don’t forget that an expert is someone who has made all the possible errors within a particular field (a process that is supposed to take about 10,000 hours of hard work). Finally, even the best creative idea is of little use unless it is put into practice. With innovation, imaginative creativity comes to life. No matter what the world looks like next week, if we are able to cherish our creative talents (and undo some of the educational damage) we will be able to face the challenge.

Teaching in the exact sciences often proceeds by examples and exercises. We are taught how the world works, and are tested with prefabricated exercises. The curious thing about these exercises is that they have exact answers that are known. If you can solve them you have learned a trick, and may proceed to the next level. If you can solve all the puzzles you get a degree. Unfortunately, the world is not like this. In real life the problems do not have known or exact solutions. That is where creativity comes in. It is the difference between solving a linear equation in algebra, and thinking of all the possible ways you can use a paperclip. Interestingly, there are mathematical tools to deal with problems that are too complex to be solved by simple tricks. These so-called heuristic methods have fancy names like Monte Carlo methods and Simulated Annealing. These methods often proceed by simply guessing an initial solution (that is bound to be dead wrong) and then iteratively trying to improve it by small, random changes, preserving the successes and dismissing the failures. Such methods are surprisingly effective, especially if the tolerance for wrong solutions is initially high and lowered gradually, as in simulated annealing. This fascinating approach to problem solving is taught mainly to engineers. What would happen if school children were told that solving complex problems by fooling around with ideas and trying out what works is actually a good, and perhaps even the best, way to go? Prezi instead of PowerPoint; patterns instead of sequences.
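
To make this concrete, here is a minimal simulated annealing sketch in Python (standard library only; the toy function, the step size and the cooling schedule are arbitrary choices of mine): start with a guess that is bound to be wrong, propose small random changes, always keep improvements, occasionally accept deteriorations, and slowly lower that tolerance as the “temperature” drops.

    # Toy simulated annealing: minimize f(x) = x**4 - 3*x**2 + x on the real line.
    # The function, step size and cooling schedule are illustrative choices only.
    import math
    import random

    def f(x):
        return x**4 - 3*x**2 + x

    x = random.uniform(-3, 3)          # initial guess, almost certainly a poor one
    temperature = 1.0
    while temperature > 1e-4:
        candidate = x + random.gauss(0, 0.2)           # a small random change
        delta = f(candidate) - f(x)
        # Improvements are always kept; deteriorations are accepted with a
        # probability that shrinks as the temperature is lowered.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            x = candidate
        temperature *= 0.999                           # gradual cooling

    print("final x:", round(x, 3), "with f(x) =", round(f(x), 3))

For this particular function the global minimum lies near x ≈ -1.3, and with a slow enough cooling schedule the procedure tends to end up there rather than in the shallower local minimum on the positive side.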

Pieter van Strien devotes more than three hundred pages to the enigma of scientific creativity in his book “Psychologie van de wetenschap” (The psychology of science). Ultimately he comes to the conclusion that the solution is to be found in Freudian mishaps in the youth of the scientist. Clearly, van Strien is a psychologist; if only he had been a biologist. Evolution discovered random optimization long before we started to process children in teaching factories. It is possible to view living organisms as evolving attempts to solve problems; as Popper said: “Alles Leben ist Problemlösen” (all life is problem solving). Since the problems usually do not allow simple solutions, and the rules of the game are changing unpredictably, the emerging solutions display a wealth of patterns that challenges our wildest imagination. Evolution, as Henri Bergson suggested, is really creative. Humans have the unique ability to become aware of this creative ability, and to use it to deal with the problems they face. If we dare to go out of our minds, we can recover this source of inspiration.

Waiting for doctor Watson

By: Cornelis Jan Stam

Date: 29-10-2011

For any serious chess player, and probably for other people as well, one of the most humiliating events in recent history was the defeat of chess world champion Garry Kasparov by IBM’s supercomputer “Deep Blue” on May 11, 1997. This falsified the conjecture of the late chess grandmaster Jan Hein Donner that women and computers cannot play chess. Computers do not just play chess; they are better at it than we are. Chess, with its roughly 10^120 possible games (remember there are only about 10^80 particles in the whole universe), used to be considered the ultimate challenge to any entity that would claim it could “think”. But things would get worse.

On February 16, 2011, IBM’s latest supercomputer “Watson” beat Ken Jennings and Brad Rutter at the game called “Jeopardy!”, a kind of general knowledge quiz. The fact that it took 14 years after the defeat of Kasparov to win at Jeopardy! is really adding insult to injury for chess players. Of course, winning Jeopardy! is not the ultimate goal of the creators of Watson. They want to improve Watson so that it can be used to solve really difficult and interesting problems. One of the next goals is to use doctor Watson in 2012 to solve medical problems. The challenge is to teach the supercomputer how to make sense of the history of the patient so that it can come up with the correct diagnosis and the best possible treatment.

Are we approaching the Singularity? According to futurologist Raymond Kurzweil the Singularity is the moment when computing power and technological development will take over from biological evolution. Apart from solving all our problems, such a development would mean we no longer have to worry about getting our pizza on time as long as our battery is not down. The problem however is that prediction is difficult, especially of the future. If this weren’t the case, futurology wouldn’t be such a profitable business. Apparently there is a deep human urge to fantasize about the future, and to imagine all kinds of scenarios that send shivers down your spine. That’s why horror and science fiction are bestselling categories.

In 1957 Herbert Simon, ever the optimist, said that within 10 years a digital computer would be the world's chess champion. It took 30 years more than Simon predicted. If you had asked IBM engineers in 1997 how long it would take to beat a human at something like Jeopardy!, they probably would not have said 10 to 15 years; otherwise IBM might have reconsidered how to allocate its resources. The point is that, while progress in artificial intelligence seems inevitable, the rate of progress is constantly falling behind expectations. This may tell us something about the complexity of “thinking” itself.

The human brain is probably the best multi-purpose problem-solving device around. The question is whether implementing the essence of what the brain does in silico (or any other suitable material) is ultimately only a technical problem. Here a distinction between “dry” and “wet” complexity might help. The notion that computers can think is based upon the idea that thinking is ultimately reducible to information processing. In this view, exemplified by cybernetics, information theory and much of modern cognitive neuroscience, brains are essentially automata that exchange information with their surroundings, and use feedback to adjust their state and output to desired values. Such systems are thermodynamically closed. We could call this the “dry” version of complexity.

In contrast, as stated many years ago by Ludwig von Bertalanffy in his General System Theory, biological organisms are thermodynamically open systems. They exchange matter and energy, in addition to information, with their surroundings, and achieve a steady state (“Fliessgleichgewicht”) where the average levels of system components are kept constant even though there is an intense metabolism going on. Biological organisms are typical examples of “wet” complexity. While dry complexity is the result of design, wet complexity can only evolve, and that, unfortunately, can take a lot of (evolutionary) time. Interestingly, the wet complexity of open systems involved in multi-constraint optimization often results in a hierarchical modular organization; an observation Simon might have appreciated.

The real problem is not whether we can design computers or other devices that can perform a well-defined task better and faster than we or our brains can; the real point is that we always seem to be able to do more than we can specify. This notion is what makes the Turing test so powerful. If, in 2012, doctor Watson can solve all your medical problems, we can easily drive IBM’s engineers crazy by requiring that Watson should also be able to go out and buy a pizza. How long will it take to design a computer that can do that as well? Can wet complexity beat dry complexity? Just pour some coffee over your computer and see what happens.

Life in a hub

By: Linda Douw

Date: 24-10-2011

It is a beautiful September day in Rockport, Massachusetts, about two hours by commuter rail from Boston. A varied company of people saunters from the rail station to a theatre building by the sea, the place where the annual retreat of their workplace is held this year. It’s my third week in Boston, so I’m still only starting to grasp all the research that is done at the Athinoula A. Martinos Center for Biomedical Imaging, which is part of Massachusetts General Hospital, Harvard Medical School, and the Massachusetts Institute of Technology (MIT). But I’m about to be enlightened.

The day starts with an overview of past work by the center’s director, Bruce Rosen. The work of Kenneth Kwong is described. Kwong and his colleague Seiji Ogawa were investigating blood flow and oxygenation of the brain in the early 1990s, since it had become clear that these were related to neural activity. In 1991, Belliveau and colleagues injected a contrast agent (gadolinium) into the brain to help visualize the brain’s oxygenation with magnetic resonance imaging. Last author: Bruce Rosen. Kwong and Ogawa, then working at Harvard Medical School, built on these results and reached an even bigger scientific milestone. They managed to visualize intrinsic changes in deoxyhaemoglobin in the visual cortex of a person looking at a flickering light. The technique was called ‘Blood Oxygenation Level Dependent’ imaging, now commonly abbreviated as BOLD fMRI. In the years to come, neuroscience and neuropsychology would largely focus on this neuroimaging technique, the subject of 1000 papers per year in 2000, and almost 2500 per year in 2007.

A week later, I’m invited to participate in an introductory course on magnetoencephalography (MEG). MEG was the main method used in my PhD thesis, but I decide to attend the course anyway, because of the teacher. David Cohen was working on measuring biomagnetism, the magnetic fields emitted by the body, in the 1960s. Scientists had already succeeded in picking up magnetic fields originating from the heart, but the brain’s magnetic fields were still beyond the sensitivity of the equipment of those days. In 1968, Cohen succeeded in measuring biomagnetic brain oscillations for the first time, using a copper coil. He then went to MIT and improved the technique by building the first magnetically shielded room, which greatly reduces noise coming from the environment, and by using superconducting measuring devices (SQUIDs) instead of common copper coils. The room he built then is still highly popular among MEG centers around the world, Cohen tells us during our first session, and nowadays every MEG system is equipped with SQUIDs.

So, Boston was in some ways the start of modern neuroimaging and neuroscience, and the tradition is being carried on. Rosen, Kwong, and Cohen are still part of the faculty at the Martinos center, while many innovative techniques are being developed by new generations of scientists working at this center that is tied to both Harvard University and MIT. And although according to the Times the California Institute of Technology has taken over Harvard’s top ranking among the world’s best universities this year and MIT is ‘only’ number seven, much scientific magic still exists in the city of Boston. Oliver Wendell Holmes stated in 1858 that “Boston (…) is the hub of the solar system”. Although the sun is of course the hub of the solar system, it is clear that Boston is a giant hub in science!

Faster than light

By: Cornelis Jan Stam

Date: 24-9-2011

Some scientific theories are very likely to be true, others are almost certainly false. But there is a third category, introduced by the brilliant twentieth century physicist Wolfgang Pauli: “This is not even wrong.” When an idea is not even wrong there is clearly no hope. We should admit however that in Pauli’s appreciation, scientific ideas proposed by simple minds (anyone lacking Pauli’s genius) would almost automatically fall into this third category. Even so, the perpetuum mobile, theories of time-travel, and objects moving faster than the speed of light would probably be awarded the “not even wrong” status even by less gifted minds. What about quantum mechanical explanations of consciousness?

There is no lack of popular science books that attempt to explain the mystery of consciousness in terms of quantum mechanics. In general the reasoning is as follows: (i) consciousness is a big mystery; (ii) quantum mechanics is a big mystery; (iii) therefore, consciousness can be explained by quantum mechanics. To stay in tune with scientific progress “quantum mechanics” can be replaced by quantum electrodynamics, string theory, M-theory, or any other theory of physics that aims to describe the ultimate nature of reality and is too difficult for simple souls to understand. Some attempts to link consciousness to quantum physics are a bit more serious however, and deserve attention if only because of the scientific credentials of the authors.

Karl Popper and John Eccles jointly wrote a book, “The Self and Its Brain”. In this book, Popper repeats his famous “Three World Theory”, arguing that at least three different levels of reality exist: material (World 1), psychological (World 2) and the realm of ideas (World 3). Eccles uses this pluralistic metaphysics to propose that quantum effects may interfere with neurotransmitter release in some selected parts of our brain, a cortical quantum version of Descartes’ pineal gland as “antenna for the mind”. Physicist and mathematician Roger Penrose has suggested that a theory of quantum gravity may be needed to explain consciousness. Together with anesthesiologist Stuart Hameroff he has proposed the idea that quantum effects might interfere with brain processes and produce consciousness at the level of microtubules in neurons.

In his new book “Brain, Mind, and the Structure of Reality” the distinguished neurophysicist Paul Nunez summarizes his life-long career in neuroscience and in particular the neurophysics of the EEG. His aim is to address (in David Chalmers’s terminology) the “hard problem” of the relation between brain and mind, without discarding any possible scenario beforehand. According to Nunez, we need to understand the nature of reality itself to come to grips with the hard problem, which is not really touched upon by the numerous correlations between brain activity and mental processes accumulated by modern neuroscientists.

The first chapters contain a wealth of information on neuroscience, in particular neurophysiology and EEG, often with original and provocative excursions into philosophy, politics, complexity science, economics and so on. Nunez stresses that the brain is a complex adaptive system, multi-layered, with a hierarchical, nested structure, both anatomically and physiologically. He illustrates some of these ideas with his own model of standing waves of EEG rhythms, and addresses connections with modern network theory and the notion of small-world networks. Many of these ideas are nicely summarized in the quote from Vernon Mountcastle (1950): “…The brain is a complex of widely and reciprocally interconnected systems and the dynamic interplay between these systems is the very essence of brain function.”

In the final chapters Nunez introduces some of the basic ideas of relativity, quantum mechanics and thermodynamics, and tries to relate them to consciousness. He refers to the connection between these spheres of reality as the RQTC conjecture. His explanation of the physical concepts is very didactic and can be followed by any educated reader. However, the introduction of new concepts such as Ultra-Information (“That which distinguishes one entity from another”), Ultra-Computers, and dark information might easily stretch the cerebral antenna of some readers to breaking point. Nunez seems to agree with Max Planck who said: “I regard consciousness as fundamental. I regard matter as derivative from consciousness.” An explanation of consciousness in terms of classical theory, as an emergent property of interacting neurons in the brain, is unlikely to be sufficient. According to Nunez we must consider “…the possible existence of some undiscovered physical field or even a nonphysical entity, a subcategory of Ultra-Information, which interacts with brains in some unknown manner.” Mind as a subcategory of Ultra-Information, and the brain a sufficiently complex antenna that can interact with it? Would Pauli have categorized this theory as “not even wrong”? But then: did Pauli predict that his own neutrinos can travel at superluminal speed?

Comment by: Prejaas Tewarie

Date: 23-1-2012

I certainly agree that relating quantum mechanics and consciousness nowadays is often based on the fact that both are mysterious and that therefore one should be explained in terms of the other. One of the problems that arises here is that in this way the subject falls prey to unscientific new age ideas. Another problem with the attempt to relate the two subjects is that quantum mechanics and its modern descendants (quantum electrodynamics, quantum chromodynamics, quantum field theory) are subjects within theoretical physics, whereas consciousness is a subject within neuroscience. These two fields are entirely different. One of the differences is that the first often uses highly abstract mathematical concepts which can be very counterintuitive, whereas the other is based more on experimental findings. Of course both fields rest on a combination of a theoretical framework and experimental findings, but the emphasis on one or the other differs between neuroscience and theoretical physics. Could transgression be a problem here? Theoretical physicists often lack knowledge about current neurobiological views and research on consciousness and yet make bold statements about consciousness, while neuroscientists often have no idea what quantum mechanics is about and what its implications are. A common misunderstanding within the neuroscience community is that quantum mechanics is only a theory which describes the world at the micro-level and is therefore irrelevant for them, since quantum mechanical effects are not visible at their level. Would transgression still be a problem if both groups respected each other's fields more and tried to cooperate more?

The link between quantum mechanics and consciousness was initially somewhat different from the relation put forward by some physicists nowadays, who are trying to find quantum mechanical effects in the brain. Initially the question was: what causes the collapse of the wave function? Is the collapse caused by a measurement or observation, by an observer, or by a conscious observer? Von Neumann’s idea was that it should be a conscious observer, partly because of the delayed choice experiment. Other experiments, such as double-slit experiments with robots, also led him to this idea. On the other hand, there are plenty of experiments in favour of other interpretations as well. Most physicists still hold to the original Copenhagen interpretation. However, there is increasing interest in alternative interpretations such as quantum decoherence, parallel universes and many others. The von Neumann interpretation is also being explored. But even if it turns out that a conscious observer causes the collapse of the quantum mechanical wave function, does that mean that quantum mechanical effects are measurable in the brain? I don’t know.

Free will, or what is left of it

By: Cornelis Jan Stam

Date: 3-9-2011

In the movie Amelie a depressed Canadian tourist jumps from the Notre Dame and lands on top of Amelie’s neurotic mother, killing her instantly. Perhaps the ancient Greeks would have called this a typical example of fate; modern Americans would probably say: “Shit happens.” But Amelie is neither an American sitcom nor a Greek drama. It is French, and the French love to eat, drink and think a bit. So the proper question is: why did the Canadian tourist jump? Could she have done otherwise? Did she possess that precious but ephemeral quality called free will? That is a really big question. Thinking about big problems that cannot be solved is a highly challenging and dedicated enterprise that goes by the name of philosophy. The French love philosophers; they produced lots of them. Philosophers have been thinking about free will for more than two millennia now. Unfortunately a final verdict has not yet been reached. Some think it exists, some think it does not exist. Yet others have proven that the question does not exist. In fact, we cannot exclude the possibility that we ourselves do not exist. In philosophy it is wise to keep all options open, and not to jump to conclusions.

In the meantime, working conditions for philosophers have worsened considerably with the advent and progress of modern science. Science has earned a rather notorious reputation by solving the favorite problems of philosophers, often in unexpected ways. Philosophers have been doomed to retreat to the safe houses of logic, the art of living well, and a few leftovers as yet unsolved by science. Until recently the problem of free will was such a leftover; it was a relatively safe topic for armchair solutions and creative thought experiments, without serious danger of scientists barging in. This situation has now changed significantly. In a series of experiments Benjamin Libet has shown that a slow build-up of neuronal activity lasting a few hundred milliseconds precedes awareness of conscious decisions. More recently, fMRI experiments have confirmed and extended Libet’s findings: our brains are (unconsciously) making our decisions many seconds before we become aware of them. Our conscious will is an illusion. In Victor Lamme’s words: it is a “kwebbeldoos” (a chatterbox), confabulating ownership of our own thoughts and decisions, which does not exist in the causal deterministic reality of our brain. Similar ideas have now been expressed by several neuroscientists such as Ab Dijksterhuis. Perhaps the most catchy summary is Dick Swaab’s “We are our brain”.

With the neuroscience team holding the winning hand it might be interesting to know whether the philosophers have given up or are trying to fight back. A nice but slightly superficial overview of the current situation is given in the essay “Taking aim at free will” by Kerri Smith (Nature 2011; 477: 23-25). An overview of the battlefield is rather depressing. Philosophers have either attempted to provide some half-hearted criticisms of the details of the experiments, or have avoided physical contact with scientists and have fled to such an abstract level of discourse that no one can or wants to follow them anymore. Such philosophical cowardice should be frowned upon. Didn’t the knight in Monty Python’s Holy Grail keep on fighting even after he had lost both arms and legs?

One of the few exceptions is Jan Verplaetse, a philosopher who has written a serious philosophical response to the challenge of the abolition of free will by modern neuroscience. In “Zonder vrije wil” (Without free will) he starts by reproaching neuroscientists, in particular Victor Lamme, for treading upon philosophical land without proper credentials. Clearly, Verplaetse believes transgression is a sin. Next he firmly states his philosophical position: there are no credible or relevant alternatives to causal determinism, causal determinism is incompatible with “verwijtbaarheid” (blameworthiness, Verplaetse’s euphemism for free will), therefore “verwijtbaarheid” does not exist. This philosophical position, called “hard incompatibilism”, has extreme consequences if it is taken seriously. Verplaetse should at least be credited for thinking through these consequences, in particular with respect to law and human relations, although his arguments are not convincing. He needs to assume the existence of a general morality (the topic of his previous book) and a willingness to live according to regulations to save a more or less decent human society. But where does morality, or a distinction between good and evil, come from if we are all just causal deterministic systems? Why should you punish molecules or blame the laws of physics? How can you fall in love with a jelly mass of matter?

If even serious philosophers give in, there seems to be little to guard us from the claim of neuroscientists that consciousness and free will are simply epiphenomenal steam escaping from the busy causal deterministic networks in our brain. No doubt our brains are physical objects, subjected to physical laws, in a physical universe. But even physical objects or systems of objects can behave in very different ways. If Amelie’s mother had been hit by a piece of stone detached from the Notre Dame, we would have consulted Newton, not James. On the other hand, some physical systems are simply complex; that is why Erwin Kroll always fails in predicting tomorrow’s weather. If we ask for a full causal deterministic explanation of the Canadian tourist’s jump, we need to put her in a scanner on top of the Notre Dame, record all her neurons, perhaps from birth, and perhaps also those of her parents, and so on. The causal chain leading up to the brain processes that decide to get rid of their owner spreads out in all directions and is ultimately intractable. Even a fully causal deterministic reality may hold some Gödelian surprises in store. Amelie needs a whole movie to find out what she wants and take a decision; could a proper brain scan have provided a short cut?

Is Transgression a sin?

By: Cornelis Jan Stam

Date: 4-8-2011

For a long time Ronald Reagan must have held the record of proposing the shortest possible sentence in the English language containing the largest number of horrible words. If I remember correctly, it goes something like this: “I am from the government and I am here to help you.”. Twelve words, a world of horror: what a record. But records are there to be broken. Douwe Draaisma seems to have beaten Reagan with the title of his essay in “De Academische Boekengids”. It goes like this: “It’s alright, I’m a doctor.”. Seven words, that is all you need. Despite the fact that Draaisma is not Reagan there are some remarkable resemblances between their statements. In both cases an authority is trying to comfort us, sending shivers down our spines. “There is no reason to worry; everything is in good hands.” It’s like a zombie who offers to shake hands before starting dinner. In the case of Reagan there is a whole political philosophy behind the zombie sentence: government is bad, let’s keep it small. A distinctly modern thought. In the case of Draaisma it’s about transgression. But what is transgression?

Let me start by confessing that, ever since I read “De metaforenmachine”, I am an admirer of Draaisma. I guess I have read most of his books, and I would rank them with the best science books for a broad audience. Secondly, I share Draaisma’s critical opinion of such books as “Wij zijn ons brein” by Dick Swaab and “Eindeloos bewustzijn” by Pim van Lommel. However, although I admire his writing, and share his criticism, I am a bit worried about his argumentation. This argumentation centers on the concept of “transgression”. By transgression Draaisma refers to a situation where someone who is an indisputable authority in field A, and not in field B, makes far-reaching statements dealing with field B, and uses his expertise and reputation in A to give his argument concerning B authority. “You may think that I’m talking nonsense, but don’t forget I’m a doctor.”

This notion of transgression is quite interesting and deserves to be taken seriously, although I think it results in a paradox. Before I deal with the paradox I first want to deal with some of the rhetorical elements in Draaisma’s essay. These rhetorical elements make the essay a brilliant read – like a text by Karel van het Reve (about the greatest compliment I am willing to grant) – but they also get in the way of the real arguments. For instance, Draaisma starts with a brilliant critique of the self-image of medicine. The fact that in medicine it is apparently necessary to put the claim “evidence based” in front of every assertion is sufficient to become the laughing stock of serious science. Probably Willem Frederik Hermans would have loved this. According to Draaisma, doctors compensate for this scientific insecurity by having a life-long authoritarian relationship with their patients. Blind admiration can only be bad for your character, and certainly does not help to sharpen your critical sense. Finally, retirement adds the last ingredient to this recipe for disaster: at the zenith of your career and prestige you suddenly have all the time in the world to share a bird’s eye view with anyone who wants to listen. Time to write a book, and better do it quickly; otherwise people may have forgotten who you were.

While this recipe for disaster, culminating in transgression bestsellers, is a lot of fun to read, it is not very accurate, it is incomplete, and most of all it distracts from the main argument. Dick Swaab is a neurobiologist, not a neurologist (although he was trained as a medical doctor). Neurobiologists are serious scientists but they do not treat patients (unless they are dead and parts of them can be put under a microscope). So if transgression is a typical sin of doctors, Swaab is off the hook. Furthermore, van Lommel is a drs., not a professor, so it is difficult to see how the emeritus syndrome could affect him. A very interesting observation that is not really analyzed seriously in the essay is the fact that transgression from field A to B seems asymmetric in terms of prestige; A is a field with a high prestige (at least in the eyes of the general public) and B has a distinctly lower prestige, perhaps even below the level of serious science. Although Draaisma notices this, he does not generalize: it may be applicable to any two fields A and B that have prestige asymmetry. An obvious example is physics or mathematics compared to neurobiology or psychology. If you are an expert in physics or mathematics, and close to retirement, you are allowed to speculate about the hidden workings of the brain and in particular the secrets of consciousness. The books by Roger Penrose, a world famous and brilliant mathematician, could serve as an example. To summarize Penrose’s explanation of consciousness: (i) quantum gravity is a big mystery; (ii) consciousness is a big mystery; (iii) therefore, consciousness is quantum gravity. Q.E.D. Whoever said transgression is the privilege of doctors?

Now that we have cleared Draaisma’s discourse of superfluous rhetoric, we are left with a really interesting question: is transgression a sin? Let me rephrase the transgression theorem: the fact that a person is an expert in field A does not give any extra weight to statements of this person about B. In this form the proposal sounds quite reasonable: any statements about B should be judged on their intrinsic merits; you cannot use your bonus from A. If we agree with this it is not clear why there should be an objection in principle against transgression. As long as B statements are judged according to proper B criteria it is not clear what the problem is. It seems “having a professional background in B” has slipped into Draaisma’s argument as an extra condition. You cannot make statements about history if you are not a historian. This is a weak argument, rightly countered by Swaab: “Dan moet hij aantonen waar ik fout zit – niet alleen: daar zit een hekje omheen” (Then he has to show where I am wrong – not just: there is a fence around that) [NRC Handelsblad, Tuesday 2 August 2011, p. 15]. The evaluation of any statement in science should be based upon its content, not upon the background of the person who is making the statement. While it may be rare that a person who is an expert in one field can also have a big impact on other fields, there are numerous examples: Norbert Wiener, Ludwig von Bertalanffy, Herbert Simon, to mention a few. With them transgression is not a sin, but a blessing. The point is credibility, not credentials.

Who is to judge whether an attempt at transgression is a sin or a blessing? Draaisma argues in favour of a panel of experts to evaluate dangerous and potentially illegal border crossings. The “Academische boekengids” has decided to follow up on this suggestion. The potential problem with this solution is that specialists can probably judge adequately only a small part of the transgression, and the sum of a series of piecemeal comments is not necessarily a good critique. So to be able to judge transgression you have to have a kind of overview that, by the same reasoning, nobody is supposed to have; that is the paradox. But perhaps Draaisma underestimates the critical abilities of intelligent readers. You do not have to be an expert on everything to be able to judge the merits of a text that deals with a variety of fields in a general and perhaps provocative way. Good transgressions may not always become bestsellers, but they do tend to become classics, like Schrödinger’s “What is life?”. Failed attempts – no doubt the majority – may experience their 15 minutes of fame, but ultimately they will end up in “Het Vergeetboek”.

Lady Lovelace

By: Cornelis Jan Stam

Date: 9-7-2011

In the beginning of the last century things used to be much simpler. According to Viennese philosophers and scientists the world could be captured in statements or propositions. They were inspired by Ludwig Wittgenstein, who did not return the favor. Such statements would come in two flavors: they were either analytical, and therefore necessarily true, or they were synthetic, meaning that their truth depended on the actual state of affairs in the world. From a philosophical point of view this presents a small nuisance, since it implies you have to get out of your armchair, leave your comfortable home and start observing things. To complicate things further, such observations are a bit messy. We make errors in our observations, nature misbehaves, things are more complicated than expected. To make sense out of this mess we need to have some organizing principles. Let’s assume mother nature is a trustworthy sort of person: if the sun rises each day, and has been doing so for as long as we know, perhaps we can infer it will always rise in the morning. Technically, this is called induction. Relying on induction may be seductive, but can also be dangerous. The chicken that gets fed every morning might see its inductive concept of the world unexpectedly falsified when Christmas arrives. It is the same with women and mathematics.

Mathematics is full of myths, and one of them is that women cannot be good at it. Even though the German chancellor started her career as a physicist and seems to be doing a reasonable job of keeping Sarkozy under control, one could argue that a physicist is only a physicist and not yet a mathematician; obviously, we need a stronger argument to falsify the “women cannot do maths” conjecture. In particular if we add the extra constraint that the lady in question also has to have fair looks. Enter Lady Lovelace. Born Augusta Ada Byron, she was the daughter of the famous British poet, who left her and her mother to do better things in Greece – something one just cannot imagine any more these days. Byron never saw his daughter grow up, which should be considered a pity, since she was beautiful, imaginative and extremely smart. Although she never had access to a formal university education (times have changed, but government is working on that) she found ways to educate herself, and with considerable success. In James Gleick’s description: “…she knew more mathematics than most men graduating from university.” Lady Lovelace was something of a mathematical genius, but life as a countess was hardly suitable to satisfy her fascination with mathematics. She was a bit obsessed, writing: “Before ten years are over, the Devil’s in it if I have not sucked out some of the life-blood from the mysteries of this universe, in a way that no purely mortal lips or brains could do”. Who said women do not have ambitions? Then she met Charles Babbage.

James Gleick, author of “Chaos”, gives a vivid picture of Babbage in his latest book “The Information”. Babbage was too many things at the same time; he must have had an unbelievable desire to start new projects on topics related to technology, science, transportation and mathematics. He is generally known as the inventor of the modern computer. Indeed, in terms of grant applications, he set new standards for generations to follow. For many years he was funded by the British government for a machine that was never built: the Analytical Engine. Fortunately we do not have to be afraid that something like that will ever happen again (I mean the funding); since 15 September 2008 there is simply no money left.

Lady Lovelace fell in love with the machine, at least with the idea of the machine, since it was never built. She became the intellectual muse of the 51-year-old Babbage, haunting him with letters and questions, trying to figure out what the machine could do, and how it could be turned into a universal problem solver. According to James Gleick: “Though he was the eminence, fifty-one years old to her twenty-seven, she took charge, mixing stern command with banter.” In fact, Ada Byron invented the idea of programming the computer. She and Babbage were way ahead of their time. It would take almost a century before something even remotely resembling the Analytical Engine they had in mind would be built. John von Neumann and Alan Turing more or less re-invented many of the ideas Babbage and Ada Byron had come up with. Lady Lovelace falsified the “women cannot do maths” conjecture; fortunately she is no longer the only one. Q.E.D.

Lost in translation

By: Willem de Haan

Date: 3-7-2011

“Frank de Boer, congratulations on leading Ajax to the Dutch soccer championship once again. Please tell us, what is your secret?” “Well…it’s not one secret of course; we’re talking about a complex system here” “Eh, yes, but can you give any clues?” “Let’s see… a lot has changed since the seminal papers by Watts & Strogatz... first of all, we always put way too much emphasis on short path lengths! Now that we use eigenvector centrality to guide ball circulation, we can put much more pressure on the opponents. And what about the ‘targeted attack’ strategy…” “What about it?” “Preventing enemy connectors from getting into the game, ruining modular team play.” “Mmm…do younger players have a hard time grasping this rather abstract material?” “Yes, well sometimes they come up to me and say “sorry coach, I shouldn’t have normalized that clustering coefficient” or “what do you mean, I’m playing too assortative?” but our youth staff is now spending a lot of time on math skills.” “Is it true what they’re saying; that this is all, once again, the influence of Cruijff?” “So it seems…he managed to keep graph theory to himself all this time…”

True, this dialog may seem somewhat improbable. And undesirable to most soccer fans as well, as they would probably prefer a more straightforward interviewing style like “A miserable 4-0 defeat, what happened?” “I have no clue, it just didn’t feel right today”. They would argue that once things get too technical it spoils the drama. For others, however, a more theoretical type of analysis would perhaps allow them to experience excitement about the game for the first time ever. After all, soccer is just two functional networks with an attitude, and appropriate concepts and definitions could really help to make more sense of it. Well, there is hope: more and more football clubs are now starting to use advanced player tracking equipment and computational analysis methods to improve their game. Moreover, the first graph theoretical studies have already been published, for example predicting a Spanish victory in the 2010 World Cup Final.

As in soccer, network thinking in neuroscience is still in its infancy, but it is rapidly gaining attention. We live in an age of information highways, complex systems and growing connectivity, and many neuroscientists start to realize they too are involved in some kind of ‘network’ research; whether it is communication between brain regions, growth and connectivity patterns of neuronal assemblies, or intricate gene and/or protein interactions, networks are never far away. So, conferences are organized to bring together all these innovative ‘network’ scientists, and feature sessions with eloquent titles like ‘Wiring the brain’, ‘From neuron to network’, or, even worse: ‘Brain networks: connecting the dots’. However, just calling our brain a network may be trendy, but it does not help in solving its mystery. What happens is that presenters give their research a network polish, but keep doing the same thing, without bridging any gaps, and without growing into a network community that can inspire across disciplines.

From a complex systems perspective, it appears that the balance between segregation and integration is suboptimal. Of course, we could depend on the scarce individuals who are able to connect research fields and successfully translate ideas to different areas. Duncan Watts remembered the small world phenomenon from sociology, had the great idea to apply this to groups of chirping crickets to elucidate their synchronization mechanism, and in the process found an important cornerstone of modern network knowledge. However, depending on a few talented connectors might not be the fastest way to progress. How can we improve functional integration between all network related researchers? We need to speak a common language. It already exists.

One of the main obstacles is that many scientists refuse to look at brain networks as…well, networks. They recognize that the brain is a very complicated system, but think this means it’s just made up of a bunch of specialist cells or regions tied together. However, the point is that in many real-life complex systems behavior emerges that cannot be pinpointed to any of its parts, but arises from their interactions. One water molecule is not ‘wet’, one car does not form a traffic jam, and one neuron knows nothing. Large-scale connectivity and interactions are essential factors for cognition to emerge. It is important to realize that this also means that global network changes can cause dysfunction, and not only be a result or reflection of it. So, when an emergent network property like cognition is disturbed, asking the traditional neurological question “where is the problem located” might not always work.

If a network is more than the sum of its parts, the only way forward is to study its connectivity. Hence, we should talk about networks using the language that truly tries to focus on connectivity itself: graph theory. Talking about networks without graph theory is like dancing about architecture. Maybe we need conferences where speaking graph theory is a prerequisite? I would personally really look forward to a meeting where, for example, a sociologist, an engineer, an economist and a neuroscientist are put on the same stage and forced to explain in simple terms how they apply graph theory to their field, without using any jargon or technical details, and without everything getting lost in translation. When a basic sociological observation can lead to spectacular progress in all kinds of research fields, who knows what else we will find? To put it in trendier terms: graph theory should not just connect brain regions, it should connect brains. It is not perfect, and not suited for all types of questions, but at present it is by far the best candidate to really unite all those researchers talking about human brain networks all the time. Perhaps it is also our best bet to one day truly understand the brains of soccer coaches.
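
As a small illustration of that common language, here is a sketch of my own (assuming the Python networkx library; the two toy networks are invented and do not come from any of the studies mentioned above), in which exactly the same graph measures are applied to a circle of friends and to a handful of brain regions.

    # The same graph-theoretical vocabulary applied to two unrelated toy networks.
    import networkx as nx

    friends = nx.Graph([("Ann", "Bob"), ("Bob", "Cas"), ("Cas", "Ann"), ("Cas", "Dee")])
    regions = nx.Graph([("frontal", "parietal"), ("parietal", "occipital"),
                        ("frontal", "temporal"), ("temporal", "parietal")])

    for name, g in [("social network", friends), ("brain network", regions)]:
        centrality = nx.degree_centrality(g)
        hub = max(centrality, key=centrality.get)       # the best-connected node
        print(name,
              "| clustering:", round(nx.average_clustering(g), 2),
              "| path length:", round(nx.average_shortest_path_length(g), 2),
              "| biggest hub:", hub)

The point is not the toy numbers, but that clustering, path length and hubs mean exactly the same thing whether the nodes are people or brain regions.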

May you always stay younger than your h-index

By: Cornelis Jan Stam

Date: 2-7-2011

People can wish other people all sorts of things: a merry Christmas, a happy new year, a happy birthday or a brilliant career. Of course the wish does not always have to have a positive content. Drivers in traffic jams sometimes also express wishes with respect to each other’s health. So, we can wish other people good things or bad things. In view of this, how should we categorize the following wish: “May you always stay younger than your h-index”? The recipient of this wish was a highly distinguished professor and the occasion was his retirement; the sender was a colleague. The question is: how should you respond to such a wish? Before we can answer this question, we need to do a little bit of research.

The h-index refers to an index introduced by Hirsch in a paper with the title “An index to quantify an individual's scientific research output”. So, the h-index is to science what the Elo rating is to chess: a single number that expresses how good a player or scientist is. It is easy to determine this index: rank order all the peer-reviewed publications on the basis of the number of times they are cited, giving rank number 1 to the most cited paper, rank number 2 to the second most cited paper, and so on. Now, the h-index is the largest rank number h such that the paper with that rank number has at least h citations. The h-index was invented to improve the estimation of scientific influence, since simply counting the number of publications or the number of citations could be biased, for instance by loads of junk or singular successes.
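
For those who want to check their own number, the computation fits in a few lines of Python (a minimal sketch; the citation counts and career length below are invented for illustration, and the slope m discussed further on is simply approximated as h divided by the number of active years).

    # Minimal h-index computation; the citation counts below are invented.
    def h_index(citations):
        # Rank papers from most to least cited; h is the largest rank r such that
        # the paper at rank r has at least r citations.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    papers = [52, 30, 17, 12, 8, 8, 5, 3, 1, 0]   # citations per paper (made up)
    years_active = 10                             # years since first publication (made up)
    h = h_index(papers)
    print("h-index:", h, "| approximate slope m:", h / years_active)

With these invented numbers the h-index is 6 and the slope m is 0.6, which, by the standards Hirsch attaches to m further on, would not yet count as a successful career.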

Where does this obsession with counting come from? Science, especially at the higher levels, is a very competitive enterprise, like top sport. Scientists are often ambitious, and want to be the first and the best; what a joy if we can have a number that indicates how good we are. Certainly, chess grandmasters cherish their Elo-ratings. But probably a more important consideration is the hunger for objective criteria to make policy decisions in science. Whole departments are funded on the basis of impact factors. Hirsch says: “For the few scientists who earn a Nobel prize, the impact and relevance of their research is unquestionable. Among the rest of us, how does one quantify the cumulative impact and relevance of an individual's scientific research output? In a world of limited resources, such quantification (even if potentially distasteful) is often needed for evaluation and comparison purposes (e.g., for university faculty recruitment and advancement, award of grants, etc.).”

The nice thing about the h-index, at least from the point of view of individual scientists, is that it can only increase over time; for a reasonably successful scientist roughly by one point per year. The problem is that this makes the comparison of h-indices between scientists a bit unfair: those with longer research careers will tend to have higher h-indices. In fact, like nails and hair, your h-index will continue to grow for some time after you are no longer there – a kind of scientific afterlife. Therefore, in addition to the h-index Hirsch also introduced a slope m: the average increase of the h-index per year. Since it is a kind of derivative, the slope m should allow comparison of scientific competence between scientists in different phases of their career. According to Hirsch, an m value of 1 indicates a successful scientist, an m value of about 2 an outstanding scientist, and an m of 3 or higher “characterizes truly unique individuals”. This may be true for physics, but different norms, subject to some inflation, might apply in other fields of science.

What about our question? If we do not take neonates into consideration (they always have h-index = biological age = 0 years, and m positive infinity: an excellent profile for obtaining funding), the only way to become younger than your h-index is to have an m much larger than one; so you have to be very successful. Your scientific maturation has to outrun your biological demise. If, at retirement, you are younger than your h-index, you have done very well; however, if you live long and happily after retirement, your biological age might once again catch up with your h-index. A worrisome prospect, probably not foreseen by the inventor of the highly original wish. Is there a solution? Perhaps we should be a bit more careful with h-indices. For instance, if one were to say to a female scientist that she looks much younger than her h-index, would that be appreciated?

An awake craniotomy in the Ukraine

By: Cornelis Jan Stam

Date: 9-6-2011

Television, these days, seldom succeeds in activating other brain areas than the insula and amygdala, apart from the obligatory assault on the primary visual and auditory cortex. But fortunately there are exceptions, and sometimes very interesting ones. On Wednesday 8 June 2011 Canvas broadcast “The English Surgeon”, an award-winning BBC Four documentary by Geoffrey Smith about the London neurosurgeon Henry Marsh. It is a story about a brilliant, mildly eccentric neurosurgeon, a pioneer in awake craniotomy, and his friend the Ukrainian neurosurgeon Igor Kurilets. Ever since 1992 Henry has been helping his friend to improve the quality of neurosurgical care in the Ukraine. It is a story about personal friendship, the tragedy and glory of medicine, and the courage to accept one’s mistakes.

The documentary follows Marsh on one of his visits to the Ukraine, where he will operate on Marian, a young patient with a brain tumor and epilepsy. The process leading up to surgery is followed, and the surgery itself, performed with only local anesthesia (the money and the means to do the first part under general anesthesia are lacking), is shown in detail. Fortunately, this part of the story has a happy ending: in the end, after some complications (Marian has a seizure during the operation), the tumor is removed, and the patient survives, apparently in good neurological condition. This is part of the glory. But modern medicine in general, and neurosurgery in particular, have a grim side as well. This is nicely illustrated by many fragments mixed with the main story line.

For instance, bureaucrats do not generally like doctors or what they do (except, of course, when they have a medical problem themselves). This is nicely illustrated in a scene where Marsh is sitting behind his computer in his London office, struggling with some NHS software that requires him to enter all his daily activities, from minute to minute. At one point – realizing he often does two or three things at the same time – Marsh wants to enter more than two “activities” in one time slot. This, according to NHS standards, is of course intolerable; the program crashes. Marsh gets furious and leaves the room slamming the door. For any doctor still attempting to treat patients in these modern times there must be a shock of recognition.

But there are more shocks. Craniotomy drills, which cost over 100 pounds apiece, are used only once in London; Henry takes used ones with him to the Ukraine. Igor then uses them for the next ten years. Who said that there are no opportunities for budget cuts in healthcare? Another interesting spectacle is the outpatient consultations Marsh and Igor do in a hospital rented from the former KGB. Dozens of patients are lining up in the dark corridors, waiting (and sometimes fighting) until it is their turn. If the fighting gets too bad, Igor steps into the corridor and hands out a box of pralines to be passed around; that will keep them quiet for a few minutes. In the meantime Henry and Igor speak to patients with the most advanced stages of neurological and neurosurgical disease. Often, it is simply too late to do anything. By the time patients have the money and the urge to visit a doctor, their condition – even if potentially treatable – has advanced too far.

In some cases the tragedy is almost too much to bear. A grandmother presents the MRI of her grandchild; Marsh diagnoses a brainstem tumor; Igor explains to the grandmother that her grandchild will die, and nothing can be done. Then, a beautiful 23-year-old woman shows her MRI. She was told by her doctors that she suffered from a kind of “infectious disease”. Marsh studies the MRI and explains to Igor that this is a diffuse glioma, a malignant brain tumor that cannot be treated. The woman will go blind due to papilledema within a few years, and probably die soon afterwards. Marsh and Igor discuss the diagnosis in English while the woman, who obviously does not understand a word of what the doctors are saying, looks full of hope towards the final verdict. You can see how Igor and Henry struggle with the decision whether or not to inform the woman about her fate; then Henry says you cannot give a young woman of 23 a death sentence without any family present. She will have to come back with her mother, who lives in Moscow.

If anything, Henry Marsh is acutely aware of the limitations of what he is doing. He explains: “We are playing Russian roulette with two guns pointed at the head of the patient. One gun is called treatment; the other wait and see. Which gun should you fire?” Marsh also says: “The actual surgery is not what is the most difficult. What is really difficult is to decide when to operate, and when not to operate.” Marsh should know. During one of his early visits to the Ukraine Igor presented to Marsh a young girl called Tanya with a benign but extremely advanced tumor affecting her beautiful face. Overwhelmed by compassion, Marsh simply felt he had to do something. He operated on Tanya in London, twice; it was a miserable failure. Tanya lived for a terrible two years, and then died.

Perhaps it would be only too human to attempt to forget these terrible failures. Such failures are fatal to the narcissistic self-image of doctors; we cure patients, don’t we? If I hadn’t acted, things might have been even worse. The point is, of course, that sometimes intervention makes things worse, much worse. Being able to confront your mistakes and learn from them is the difference between a skilled surgeon and a truly brilliant doctor. At the end of the documentary, Igor and Henry go to visit Tanya’s mother; Marsh makes a point of doing this every time he visits the Ukraine. The mother and her family have prepared a festive meal for the two neurosurgeons; it is obvious they do not blame them, but are instead deeply grateful that they tried to help. In an emotional speech, translated by Igor, Henry addresses the mother and her family. At the end, Henry Marsh visits Tanya’s grave. It sounds almost like the plot of a Dickens novel, with a failed neurosurgical case featuring as Tiny Tim, but it isn’t. There is not a glimmer of false sentiment in the whole documentary. If it is emotional, it is because life is. We all die; the point is what we do before then. If you have ambition and courage you can do something for others, but only if you are brave enough to face the consequences of the terrible mistakes you are bound to make.

Passing the rouge test

By: Cornelis Jan Stam

Date: 5-6-2011

Have you ever heard of the “rouge test”? It sounds like a distinctly feminine kind of thing. Perhaps it is a test of your skill in putting on rouge, or your ability to recognize the quality of a particular type of rouge. If you knew it had something to do with mirrors, your suspicion that women must be involved would probably increase. In fact, the test does involve mirrors, but it does not necessarily require feminine expertise. You do not even have to be human to pass this test. The rouge test, described by Frans de Waal in “Een tijd voor empathie” (published in English as “The Age of Empathy”), refers to the ability to recognize oneself when looking in the mirror. Although women probably have more practical experience with studying their faces in mirrors, even males can pass this test. In fact, all human subjects from the age of about 18-24 months pass this test. The criterion is that they will see a marker (the “rouge” or some suitable substitute) on their forehead, and try to remove it.

Passing the rouge test implies you have an image of yourself. This sounds like a typically human ability. However, some great apes also pass this test. Thus, having a self-image, an important condition for the development of empathy, is not limited to humans. Using rather unconventional experimental setups, mirror tests have also been performed in a variety of other animals. Only two other categories succeed in showing awareness of the equivalent of rouge on their body: dolphins and elephants. Elephants also have long trunks that come in handy when you want to remove undesired markings from your head. As predicted by the outcome of the mirror test, dolphins and elephants have shown remarkable demonstrations of compassion, even towards humans. What kind of brain does it take to care about other creatures in this way?

Since the crucial test uses mirrors, one might suspect that you need mirror neurons to pass it. Mirror neurons are neurons that fire both when a monkey performs a motor act and when it observes a human or another monkey performing the same act. They were discovered by accident by Rizzolatti. Since then, they have been the subject of intensive investigation, and also of extensive speculation. In particular, it has been suggested that a mirror neuron system might underlie our ability to have a “theory of mind”, and to imagine the feelings and emotions of other human subjects. A disturbance in this system might explain the lack of empathy typical of patients suffering from some forms of autism. However, these ideas have been criticized, for instance by Patricia Churchland in her recent book “Braintrust”. Are there any other candidates that can explain success in the rouge test?

The same categories of animals (humans, great apes, elephants and dolphins) that succeed in the rouge test all have a particular type of neuron in their brain called von Economo neurons or spindle neurons. Although much less well known than mirror neurons, they are at least as interesting, and might be even more relevant for understanding the neurobiological basis of abilities such as empathy. Von Economo neurons are giant neurons with an unusually large number of connections. Interestingly, in humans these neurons are highly prevalent in the anterior cingulate cortex, a brain area that becomes activated when mothers hear their children cry. Perhaps passing the rouge test does require a female touch after all?

Designing the brain from scratch

By: Cornelis Jan Stam

Date: 5-6-2011

Just imagine our planet earth is suddenly invaded by visitors from a highly developed civilization from some far corner of the universe. This does not require a lot of imagination. There are untold numbers of books and movies dealing with this topic. Why would a highly developed civilization be interested in such a visit? Perhaps a delegation of their parliament wants to find out how primitive societies organize their economy, how they handle budget cuts, or how they organize health care and teaching and preserve the environment. Unfortunately, none of this is the case. This delegation consists of scientists with an interest in evolution. After a brief field study – they are obviously much smarter than we are – they marvel at the random mixture of brilliant solutions and unimaginable tinkering that we call biological evolution. They congratulate us on Charles Darwin, who was one of the first to come up with a possible explanation of this process in which blind randomness subjected to selection results in diversity and adaptation. But they don’t like the mess. So, they suggest a small experiment, to be conducted by us. They want us to run evolution again from scratch, to see if more efficient results can be obtained a bit faster. This is an assignment the scientists got from their upper management, who are interested in planting evolution on other planets, but who are worried about the cost.

Suppose the noble task of designing the brain was given to you. A great honor, or at least so it seems at first. Design a brain from scratch, using all the information that human science has accumulated over many centuries. When you start working on your project you soon become overwhelmed. Leafing through Kandel’s textbook and other neuroscience tomes you are impressed by all the facts, but you start to realize how difficult it is to draw general conclusions about design principles. Consulting all the neuroscience journals you can find in your library and on the WWW (the aliens left that intact, at least for now) your feeling of despair only increases: there are not only too many facts, there is also devastating controversy about almost any topic. The most efficient way to build a new brain almost seems to be to copy an existing one and pronounce this the best way to go. That would really be a “Big Brain Project”, modeling our way out of the problem, but it would take billions of dollars and at least ten years to finish. Unfortunately, the aliens are not that patient (and you start getting doubts about their good intentions as well). They say: “We are not interested in details. We want general principles. And we want quick results.”

Under pressure you decide to take desperate action: forget about all the textbooks, forget about all the journals, all the teaching, all the facts. Just focus on the bare essentials. You remember the saying that to do mathematics you only need a pencil, a piece of paper and a wastebasket (for philosophy the wastebasket is not necessary, but the aliens are not interested in our philosophy, which they refer to as “speculation”). Looking at the blank piece of paper, forcing yourself to forget almost everything you know, you ask a simple question: what are the minimal requirements for having a brain?

Obviously, we must have some basic elements, and probably a lot of them. Perhaps these will turn out to be neurons, but for the time being let’s call them “agents”. What do these agents do? You decide to postpone this question and address another one first: whatever these agents do, it makes little sense if they are not able to tell the others. So, the agents need to be connected, otherwise the whole structure makes no sense. You smile when you realize this may also be literally true. But how to connect the bastards? If you connect them all to each other, and they start to communicate at random, you obviously get a mess, like a school class after the teacher has left. If all agents could somehow coordinate their activity things would be better. However, if all the agents are dancing in strict synchrony, what is the point of having a network in the first place? One very big agent could do the job.

So, you need to dig a bit deeper. Taking care that the aliens don’t notice, you decide to sneak in a bit of evolution. Somehow the agents must find a solution to the problem of obtaining system-wide communication at the lowest possible cost. But we do not want to assume too much smartness of the agents (this might insult the aliens); probably all an agent knows about is its own state, its own connections, and the presence of some nearby neighbours. Just think about this problem in terms of pairs of agents, each with a certain number of connections (degree); for utter simplicity assume the “activity” of an agent is simply proportional to its degree. How do two agents decide whether or not to connect? Probably, they are more likely to connect if they are nearby. But is that all? Once a connection is established, traffic can pass through the connection between the two agents. Assuming that the activity of an agent is proportional to its degree, the traffic load of a connection must be proportional to the product of both degrees. Too much load may cause the connection to disappear. So, the probability that two agents connect depends somehow on the inverse of their distance, and the inverse of the product of their degrees (an inverse square law; you must be on the right track!). You start to write down some formulas for this dynamic “connection probability”, but you fail miserably. However, two features of the emerging network are already clear: (i) it will evolve towards a critical level of connectivity and activity; (ii) it will maximize the diversity between the nodes. This is because, for a fixed sum, the product of the two degrees is lower when the two degrees are maximally different. Then you remember the formula for kappa…
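To make this concrete, here is a minimal toy sketch in Python of how such a network might grow. Everything specific in it – the exact functional form of the connection probability, the scaling constant, the number of agents and encounters – is an assumption added purely for illustration, not something the argument above prescribes.

import math
import random

# Toy growth model: agents at random positions connect with a probability
# inversely proportional to their distance and to the product of their degrees.
N = 100
positions = [(random.random(), random.random()) for _ in range(N)]
degree = [0] * N
edges = set()

def connection_probability(i, j):
    d = math.dist(positions[i], positions[j])
    # the "+ 1" keeps isolated agents from producing a division by zero;
    # 0.01 is an arbitrary scaling constant
    return min(1.0, 0.01 / (d * (degree[i] + 1) * (degree[j] + 1)))

for _ in range(20000):                       # repeated random encounters
    i, j = random.sample(range(N), 2)
    pair = (min(i, j), max(i, j))
    if pair not in edges and random.random() < connection_probability(i, j):
        edges.add(pair)
        degree[i] += 1
        degree[j] += 1

print("edges:", len(edges), "maximum degree:", max(degree))

Running a sketch like this a few times gives a feel for how the inverse-degree term keeps any single agent from hoarding all the connections.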

At this point the aliens decide they have had enough. They step into your room, take away your piece of paper, and start a rather noisy and excited discussion about the results. At one stage one of the scientists even makes an intergalactic call, probably to consult higher authorities in science, management or politics. After that, they break out in laughter. One of them puts the piece of paper back on your desk. Bending over and putting his hand kindly on your shoulder, he says: “Nice try. Keep on dreaming”.

A Free University for free thinking

By: Cornelis Jan Stam

Date: 29-5-2011

The official English name of the institution that emerged from the fusion of the medical faculty and the academic hospital of the VU in Amsterdam is “VU University Medical Center”. This name was chosen after careful deliberation, but it is a bit peculiar, and it may be worthwhile to have a closer look at the roots of this peculiarity. The official Dutch name of the institution is “VU medisch centrum” (abbreviated as VUmc). So the logical translation would be: “VU Medical Center”. However, this would hide the academic nature of the center. The obvious solution would be to write the abbreviation VU (which stands for “Vrije Universiteit”) in full: “Free University”. Then we would get “Free University Medical Center”. Clearly, this option must have been considered, and rejected in favour of the more than slightly illogical solution of VU University Medical Center. Apparently there is a problem here, and it has to do with the notion of “Free”.

When you tell foreigners you work at the “Free University” they tend to smile, and ask what this “Free” stands for. Does it mean this institution has no admittance fee? Or does it mean the students are free, and do not have to follow lectures or take exams? Perhaps faculty are free as well, and can pursue scientific work without teaching obligations, as at the Institute for Advanced Study in Princeton? Of course – and to some extent: unfortunately – none of this is true. De Vrije Universiteit did not get its name by accident. The Vrije Universiteit was founded by Abraham Kuyper (“Abraham the magnificent”) on 20 October 1880. In his inaugural address with the title “Souvereiniteit in eigen kring” (sovereignty in their own circle) Kuyper stressed that he aimed to found a university “free from state and church”. This may sound a bit paradoxical if you consider the strongly neo-Calvinist character of the VU, at least in the years up to the Second World War, after which the VU lost some of its distinctive character and became a more conventional university. The paradox is how a university can be clearly dedicated to a religious conviction and still claim to be “free of church and state”.

Interestingly, there is a whole philosophy behind this paradox. In Kuyper’s view society consisted of a set of communities or circles that each had their own relative autonomy, goals and rules. These communities, like family, economy, university, church and state, were supposed to be relatively independent of each other. Kuyper never worked out his ideas in a systematic way, but the VU philosopher Herman Dooyeweerd did. Working initially at Kuyper’s desk in Kuyper’s former home, Dooyeweerd developed his “Wijsbegeerte der wetsidee”, later translated and expanded as “A New Critique of Theoretical Thought”. According to Dooyeweerd reality consists of different “aspects”, ontological modi that each have their own nature, purpose and inner rules. He distinguished at least 15 different aspects, ranging from numerical and spatial reality all the way up to culture, language and faith. Interestingly, though Dooyeweerd stressed the existence of separate, distinguishable aspects, he also pointed out how they are related to each other: “In this inter-modal cosmic coherence no single aspect stands by itself; every-one refers within and beyond itself to all others.”

So the “Free” that is so ostentatiously hidden by using the abbreviation VU turns out to have interesting and deep historical and even philosophical roots. Although reality may be thought of as consisting of interacting spheres – a touch of network theory avant la lettre – the spheres do have their own character and relative autonomy; without that they couldn’t function properly. They have to be free to function. This plea for a “Free University” could still serve as a guide when confronted with distinctly modern problems. For instance, one wonders what the founding father of the Vrije Universiteit and its principal philosopher would have thought about government pressure to organize teaching and research of universities according to business models based upon the notion of a free-market.

The problem with hubs

By: Cornelis Jan Stam

Date: 21-5-2011

When Duncan Watts and Steve Strogatz introduced their “small-world” model of complex networks they made one important error: they failed to appreciate the importance of node heterogeneity. Put into simple words, they assumed that all nodes or network elements, whether they represent actors, neurons or power plants, have more or less the same characteristics. In particular they are supposed to have more or less the same number of connections, referred to as the degree of a node. This may have been the ideal in communist times, but it is certainly not true of most real complex networks in nature. Barabasi was kind enough to point out this crucial omission and to propose a model that does take into account differences between nodes. The scale-free model of Barabasi and Albert predicts that some nodes will have many more connections than the average node, and consequently will have a much more central position in the network. Why is this important for understanding the brain?

There is no unique definition of what makes a hub, or of what centrality is. High centrality can simply mean having a high degree, having a lot of shortest paths passing through a particular node, or being at a short distance from all other nodes. Paradoxically, nodes can be hubs even when they do not have many connections themselves, as is the case with some connector hubs. Even if you do not have many friends yourself, as long as your friends have many friends you will be okay. Eigenvector centrality takes this into account. What is important is that hubs or highly central nodes attract most of the network traffic. In the brain this implies that most of the information flow will go to or emerge from hubs like the posterior cingulate cortex and the precuneus. This explains why these hubs in the brain are so important for normal brain function and the general level of cognitive functioning, as has been demonstrated by Martijn van den Heuvel, Li and Langer. It is nice to have a brain with well-functioning hubs, but what happens if they fail?
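For readers who want to see these notions side by side, here is a brief Python sketch using the networkx library; the graph is an arbitrary scale-free toy example, not a brain network.

import networkx as nx

# Generate a toy scale-free graph and compute the centrality measures mentioned above.
g = nx.barabasi_albert_graph(n=200, m=3)

degree_c      = nx.degree_centrality(g)        # how many connections a node has
betweenness_c = nx.betweenness_centrality(g)   # how many shortest paths pass through it
closeness_c   = nx.closeness_centrality(g)     # how close it is to all other nodes
eigenvector_c = nx.eigenvector_centrality(g)   # high if your neighbours are themselves central

hub = max(eigenvector_c, key=eigenvector_c.get)
print("most central node by eigenvector centrality:", hub)

In a toy graph like this the different measures usually, though not always, point to the same few hubs, which is exactly why the choice of centrality measure matters.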

Precisely because they attract most of the network traffic, hubs can be vulnerable. This is nicely illustrated by the collapse of transport systems under extreme conditions. In the winter of 2010 / 2011 air traffic at London Heathrow and railroad traffic at Utrecht Central station failed miserably. This is not a coincidence, but – according to network theory – a logical consequence of their high centrality in the network. A similar scenario may be at work in Alzheimer’s disease. There is now growing evidence that hub-like brain structures may fail under the pressure of excessive information traffic. As a consequence of network topology, neurons in hub areas fire much more intensely than neurons elsewhere in the brain; because of this they are more likely to suffer damage due to excessive and sustained overload. This would explain the accumulation of the abnormal protein amyloid in those brain regions that have a high network centrality.
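A generic network-theory demonstration of this vulnerability is to compare what happens when you remove the highest-degree nodes of a scale-free graph with what happens when you remove the same number of random nodes. The sketch below is only that: a generic illustration in Python with networkx, not a model of the brain or of Heathrow.

import random
import networkx as nx

def largest_component_after_removal(graph, nodes_to_remove):
    g = graph.copy()
    g.remove_nodes_from(nodes_to_remove)
    return max(len(c) for c in nx.connected_components(g))

g = nx.barabasi_albert_graph(n=1000, m=2)
hubs = sorted(g.nodes, key=lambda n: g.degree(n), reverse=True)[:50]
random_nodes = random.sample(list(g.nodes), 50)

print("largest component after removing 50 hubs:        ",
      largest_component_after_removal(g, hubs))
print("largest component after removing 50 random nodes:",
      largest_component_after_removal(g, random_nodes))

Targeted removal of hubs typically fragments the graph far more than random removal, which is the formal counterpart of the Heathrow and Utrecht anecdotes.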

There is now increasing evidence that epilepsy is also due in part to network abnormalities in the brain, but the exact mechanisms are less well understood. One intriguing hypothesis is that epilepsy can be understood as a mirror image of Alzheimer’s disease. In Alzheimer’s disease hubs suffer from overload; perhaps in epilepsy the hubs strike back and make the rest of the brain suffer. The point is that hubs do not only receive most of the network traffic; like real CEOs they also send out most of the network traffic. In management terms: they give a lot of orders, but do not necessarily do a lot themselves. If the balance between excitation and inhibition in a local brain region gets disturbed, in particular if it is the inhibition that fails, the crucial question is whether this pathological excitability can spread to the rest of the network. If inhibition failure strikes a network hub, this may create a scenario where global network involvement is a serious risk.

So hub problems could be central to many neurological diseases in more than one sense of the word, and may come in two different flavours: hub breakdown under excessive network traffic, or excessive network traffic due to hub dis-inhibition. Therefore: be careful with your managers. They can be quite helpful, but tend to break down under too much pressure, or cause widespread damage if you overexcite them.

The face of science

By: Cornelis Jan Stam

Date: 15-5-2011

G.H. Hardy, one of Britain’s greatest mathematicians, did not like pictures, especially not pictures of himself. That is why he never wanted to see his own face in a mirror. In hotel rooms the first thing he would do was cover up all the mirrors with towels. One wonders how he shaved in the morning. Of course, as an Englishman, and as a brilliant mathematician working in Oxford and Cambridge, he should be allowed a mild display of eccentricity. Hardy was addicted to cricket, something (the addiction as well as the game) impossible to understand for anyone born outside the United Kingdom or any of its former dominions. He used to spend most of his afternoons at the cricket field, but in England one cannot always count upon the weather. Hardy solved this problem by putting on multiple sweaters, challenging God to falsify Hardy’s prediction of rain by actually letting the sun shine. With mathematicians there is never a dull moment.

Hardy was a “pure mathematician”, involved in number theory, the mathematical equivalent of heaven. The depth and scope of his work, in particular the papers he wrote with Littlewood and the Indian prodigy Ramanujan, are probably beyond mortal understanding. According to Hardy: “There is a very small minority who can do something very well, and the number of people who can do two things well is negligible”. Fortunately for us, just before the Second World War Hardy did write a small book on his work, “A Mathematician’s Apology”. This book, including the foreword by C.P. Snow, is a little gem, in the same class as “Chance and Necessity” by Jacques Monod, “What is Life?” by Erwin Schrödinger or “The Computer and the Brain” by John von Neumann. The book gives, if only briefly, a glimpse of the true face of science.

The miracle of art is that it allows us to “see” or “grasp” aspects of reality that are at once immediately clear, and at the same time very difficult or impossible to capture in ordinary language. The picture on the right (by Marcella Stam) shows a painting of the face of a young woman. Although my judgment is obviously highly biased, I like this particular painting very much, since it seems to me to capture the emotions of the woman. It is difficult to tell what this emotion is, but there seems to be a pattern that is impossible to ignore once you have experienced it. What Hardy does in “A Mathematician’s Apology” is show that mathematics, at its best, is like art. It can reveal a truth that goes beyond us, and transcends the contingent patterns of the wiring in our brain. As Hardy puts it, a mathematician, like a painter or a poet, is a maker of patterns.

Hardy uses two examples from classical Greek mathematics to make his point: the proof by Euclid that there are infinitely many prime numbers, and the proof attributed to Pythagoras that the square root of two is an irrational number. Both proofs are deceptively simple, but open up – literally – infinite worlds of possibilities. This has nothing to do with usefulness (Hardy hated “applications”; he would probably have abhorred such notions as “valorization” or “vermarkting” (marketization)); instead, it has everything to do with beauty. Great mathematical discoveries show the face of scientific beauty, stripped of all the contingent details. This is a very emotional matter; just look at the face of Andrew Wiles at the end of the first scene of the Horizon documentary on his proof of Fermat’s last theorem. Probably Hardy was very much aware of this. That is why he didn’t like mirrors.
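For readers who have never seen it, here is a compressed sketch of the first of these two proofs, written out in LaTeX; the wording is mine, not Hardy’s or Euclid’s.

% Euclid: there are infinitely many primes (proof sketch)
\textbf{Claim.} There are infinitely many primes.
\textbf{Sketch.} Suppose $p_1, p_2, \ldots, p_n$ were all of them, and let
$N = p_1 p_2 \cdots p_n + 1$.
Each $p_i$ divides $N - 1$, so none of them divides $N$.
Hence any prime factor of $N$ is a prime missing from the list; contradiction.

Three lines, and suddenly you know something about every prime that will ever be found: that is the kind of pattern Hardy had in mind.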

Why moral philosophers should go to medical school, and neuroscientists may consider treatment for philosophical agnosia

By: Cornelis Jan Stam

Date: 13-5-2011

On March 26, 1987, when I was a third-year neurology resident, I bought a book with the title “Neurophilosophy: Toward a Unified Science of the Mind-Brain”, written by the philosopher Patricia Churchland. I must have spent a large part of the summer of 1987, in a small holiday apartment at the Wolfgangsee in Austria, working my way through the 546 pages of this rather unusual and, for the time, quite revolutionary book. Tired of high-brow philosophers who propose theories about the relationship between brain and mind without being able to tell a neuron from a fly on the carpet, she tries to bridge the gap and develop a serious philosophy of the brain that takes the advances of modern neuroscience fully into account. Churchland took her mission very seriously: trained as a philosopher, she decided to learn neuroscience the hard way by going to medical school and learning all the details of neuroanatomy, neurophysiology and neuroscience. The book “Neurophilosophy” symbolizes the start of a research program that aims to take both the philosophy and the neuroscience of understanding our minds and ourselves seriously. It is perhaps difficult to imagine now, but in 1987 bridging the philosophy-neuroscience gap was almost anathema. In fact, even in 2011 best-selling neuroscientists sometimes boast of their ignorance of philosophy, and philosophers preaching about the mind can still get away with a kindergarten-level understanding of neurobiology.

Patricia Churchland has spent a considerable part of her long and successful career on the hypothesis that the brain should be seen primarily as a collection of interacting computational networks. This approach was inspired to a considerable extent by the work on artificial neural networks, boosted by the publication of “Parallel Distributed Processing” by Rumelhart and McClelland in 1986. Artificial neural networks have become very important, if more as universal nonlinear pattern identifiers than as realistic models of the brain. Francis Crick, not one to be afraid of novel ideas or critical notes, challenged the idea that in the brain everything can be reduced to some form of computation. Indeed, there is more to understanding the complex adaptive self-organization of brain networks than running pattern recognition algorithms on sets of sense data. However, Churchland and others did have a considerable influence on a fresh multidisciplinary look at our brain.

Now, 24 years later, Patricia Churchland still hasn’t lost her appetite for crossing bridges and breaking new ground. In her latest book “Braintrust” she tries to define the neural mechanisms of social behaviour and morality. The first few chapters are a bit disappointing. In essence she tries to explain the sources of morality in terms of neuroendocrinology, in particular the oxytocin (the hormone everybody loves) and vasopressin systems in the brain. She argues that these evolutionarily old systems underlie the development of self-care, later extended to kin and ultimately to communities. However, as she repeatedly admits, the evidence is still incomplete, and not always consistent. You cannot count on becoming a nice person by overdosing yourself with nasal sprays of oxytocin.

Although the first chapters may be a bit disappointing, the later chapters get better and better. Interestingly, these later chapters contain razor-sharp philosophical and methodological criticisms related to the wide topic of morality and the brain. First she does a post mortem of the idea that we have “genes” for complex behavioral patterns in “The parable of aggression in the fruit fly”. Next, simplistic interpretations of brain-behaviour correlations based on modern neuroimaging experiments, in particular those done with fMRI, are scrutinized. Mirror neurons, discovered in monkeys by Rizzolatti et al., are given a special treatment showing that, as an explanation of morality, they are the neurophysiological equivalent of the emperor’s new clothes. The final blows are reserved for rule-based foundations of morality, religion and G.E. Moore’s “naturalistic fallacy”. In essence, using a mixture of arguments from Socrates, Aristotle, Hume and common sense (they go together quite naturally), she argues that there is no a priori philosophical or logical reason why there could not be a sound biological and neurobiological explanation of morality. Patricia Churchland may not be right on all the details. But, ever since “Neurophilosophy”, she has certainly shown her ability to make a good point.

All together now: the complex science of cooperation

By: Cornelis Jan Stam

Date: 6-5-2011

All the major religions in the world seem to agree on some fundamental ethical principles such as the Golden Rule: treat others as you would like them to treat you. Although followers of various religions may not always be very effective in practicing what they preach, at least most of us would agree that the Golden Rule sets a high standard for morality. Setting high moral standards is one thing, but convincing everybody that it is better to cooperate and live according to these ethical principles is another matter. It would help if we could prove God exists to supervise things a bit, but attempts to provide such mathematical support have a rather notorious record in the history of philosophy. Anyone who still cherishes any illusions in this respect might enjoy the lectures of the Dutch philosopher Herman Philipse on this topic, or Richard Dawkins’ “The God Delusion”. You cannot derive an ought from an is, and life on earth is simply a hard struggle for survival between creatures driven by the blind Darwinian forces of mutation and selection. However, there is at least one evolutionary biologist who challenges this point of view.

In “Supercooperators” Martin Nowak and Roger Highfield argue that we need to include cooperation as a fundamental principle, in addition to mutation and selection, if we want to understand the evolving and dynamic complexity of the biological world. This highly accessible and inspiring book provides an overview of the research on evolutionary game theory done by Nowak in Vienna, Oxford, Princeton and Harvard. Like his great example Robert Axelrod, he starts with the deceptively simple question of how cooperation can evolve and persist in a world dominated by self-interest and blind evolutionary forces. Using the Prisoner’s Dilemma as a basic paradigm of the conflict between cooperation and defection, he describes a large number of experiments and modeling studies conducted over two decades in the search for the mystery of cooperation. The conclusions are fascinating, although they have given rise to considerable controversy. For conservative evolutionary biologists it is hard to accept that cooperation can have a sound mathematical basis that transcends the laws of the gene. Hume and Hobbes, inspired by the success of Newton, tried to develop a mechanics of the human spirit and society but failed to achieve their goal. Does evolutionary game theory succeed in providing a true mathematical understanding of how we group together?

Nowak identifies five principles that support the emergence of cooperation: repetition (direct reciprocity), reputation (indirect reciprocity), spatial selection, multilevel selection and kin selection. Repetition refers to recurrent encounters. If we are bound to meet again, we will find out that nice cooperation strategies such as Tit for Tat, Generous Tit for Tat or Win-Stay, Lose-Shift will prevail, although a periodic “Freefall” towards All Defect is unavoidable. This notion of direct reciprocity, based upon the work of Axelrod, is extended by the idea of indirect reciprocity: if I earn a reputation of being nice by helping you, others will help me later. For this type of collaboration language and big brains are a great help. We need them to gossip and to build mental images of the reputations of good guys and bad guys. By grouping together in space, cooperators can protect themselves from defectors, at least temporarily. Selection also operates at the group level: groups with large numbers of cooperators may be more successful as a group, even though individuals within such a group may pay a price. Finally, cooperating with those who are genetically close to you also promotes collaboration, and, of course, the spreading of your precious genes. Kin selection was nicely summarized by J.B.S. Haldane: “I will jump into the river to save two brothers or eight cousins” – since brothers share half your genes and cousins one eighth, either group adds up, on average, to one full copy.

Each of these five principles has a rigorous mathematical underpinning. There is a real calculus of cooperation. For instance, for agents living on graphs, if the benefit divided by the cost exceeds the average degree (the average number of neighbours), cooperation will prevail. Even more fascinating, Nowak has introduced a parameter called sigma that reflects the rate at which like-minded agents interact. If sigma is larger than one, assortative mixing results, and cooperation will be successful. Does this sound familiar? Brain networks are the only complex graphs that are both disassortative (at the cellular level) and assortative (at the macroscopic level of brain regions, like social networks). Why is this the case, and why does it only happen in brains? Perhaps we need to learn a little bit more about evolutionary game theory to understand our brains.
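As a concrete illustration of the benefit-to-cost rule quoted above, here is a tiny Python sketch using networkx; the graphs, numbers and the helper function are illustrative choices of mine, and only the stated threshold is checked, not the full evolutionary dynamics.

import networkx as nx

def cooperation_favoured(graph, benefit, cost):
    # the rule as quoted: benefit / cost must exceed the average degree
    degrees = [d for _, d in graph.degree()]
    return benefit / cost > sum(degrees) / len(degrees)

lattice = nx.grid_2d_graph(20, 20)   # sparse graph, average degree just under 4
dense = nx.complete_graph(20)        # dense graph, average degree 19

print(cooperation_favoured(lattice, benefit=5, cost=1))  # True: few neighbours, cooperation can pay
print(cooperation_favoured(dense, benefit=5, cost=1))    # False: too many neighbours to subsidize

The intuition is that on a sparse graph a cooperator’s generosity is concentrated on a few neighbours who can reciprocate, whereas on a dense graph it is diluted over too many contacts.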

While you were awake, your brain was sleeping

By: Linda Douw

Date: 2-5-2011

Have you ever wondered why sleep deprivation causes lapses in your functioning, even when you don’t feel tired? Maybe some part of your brain was actually asleep! A recent study by Giulio Tononi and colleagues in Nature shows that parts of awake rats’ brains go offline after the animals have been sleep-deprived, without any sign of tiredness or decreased alertness. As the rats grow more and more tired, their brain automatically mixes in some rest. Neural down states usually occur only during non-REM sleep. However, they also show up when these rats are continually kept awake. Interestingly, these mini-naps seem to occur throughout the cortex in a random manner, which does make me wonder about the safety of this mechanism. Imagine driving at night with only a few hours of sleep: your foveal visual area or steering-hand area may briefly go offline… And even more worrying: larger areas of the brain go for a nap as sleep deprivation increases!

Does any of you have some dolphin associations right about now? Dolphins are known to sleep with one cerebral hemisphere at a time (as do some birds). This is a matter of life and death for them, because unlike humans, dolphins must actively remember to breathe: if their entire brain falls asleep, they will suffocate (I would not know what the evolutionary advantage of this could be). A positive feature of their sleeping behavior is that dolphins can keep on the lookout for threats 24/7, although they may be slightly hindered by their one-eyed view while the other hemisphere gets some rest.

Although we do not divide our sleep between hemispheres the way dolphins do, we all know sleep and wake are not straightforwardly separate states. ‘Confusion’ the other way around (being partly awake while sleeping) is very familiar to about 15% of the world’s population. When sleepwalking, a person is asleep, but the mechanism that usually ensures that the person lies still and remains unconscious does not function normally. Sleepwalking is usually relatively benign – wandering around and possibly peeing in inappropriate places – but being able to perform these automatic behaviors does pose some interesting issues in light of the debate on consciousness. While some may reason that walking around and opening and closing doors are not very complex behaviors and thus mere automatisms, it is difficult to maintain this stance when looking at more extreme sleepwalking behavior. For instance, holding an intelligent conversation while sleepwalking is difficult to differentiate from actually being awake. Would every sleepwalker fail the Turing test? Surely not. And from both a scientific and a legal perspective, being asleep but able to commit a crime is also very confusing. Some violent sleepwalkers have even been acquitted of murder on grounds of temporary insanity.

So, if rat brains tend to go to sleep while awake, and a certain percentage of people are able to be awake while sleeping, do our old concepts of sleep still apply? Tononi and his colleagues also asked whether some parts of the brain may actually be awake during behavioral sleep. And indeed, although most of the brain was globally at rest, local up states did occur, particularly as sleep duration increased. So, while awake rat brains snooze more and more when kept awake for longer periods of time, rat brains that have been asleep for a long time start to wake up locally. Does this mean that sleep-deprived rats cannot be held accountable for their actions? In the study, local mini-naps led to mistakes when the rats tried to reach for some sugar a few hundred milliseconds later. It seems as though the entire spectrum of being awake or asleep can get mixed up, with some serious impact on behavior.

Of course, these findings need to be replicated in humans, but if they are: think of the endless possibilities for explaining odd behavior! But do make sure to keep it brief: these naps may not last for more than a few seconds…

The E. coli of social psychology

By: Cornelis Jan Stam

Date: 29-4-2011

In his preface to Robert Axelrod’s “The evolution of cooperation” Richard Dawkins writes: “The world’s leaders should all be locked up with this book and not released until they have read it. This would be a pleasure to them and might save the rest of us. The Evolution of Cooperation deserves to replace the Gideon Bible.” Perhaps one can argue over Dawkins’ second suggestion, but his first piece of advice should be followed by everyone, even if you are not a world leader and do not like to be locked up. Robert Axelrod’s monograph on the Prisoner’s Dilemma is a classic on the nature and evolution of large communities of interacting agents, with and without brains. It teaches some fundamental principles of collaboration, some quite counterintuitive, some distinctly strange, and others full of hope.

Axelrod did not invent the Prisoner’s Dilemma, or the TIT for TAT strategy that proved so successful in dealing with it, but both play the role of key characters in his fascinating exploration of the nature and roots of working together. The Prisoner’s Dilemma refers to a situation where two agents interact, and each has the choice to cooperate or defect. Each choice comes with a payoff, but the gain also depends upon the actions of the other player. This miniature drama serves as a laboratory test ground to investigate how collaboration can arise and become stable, even if the agents are irrational, selfish and short-sighted – a fairly reasonable assumption if human beings are involved. What Axelrod did was invite experts in game theory, coming from different fields of science, to propose what they thought would be the optimal strategy for playing an iterated Prisoner’s Dilemma, where all players meet each other many times (if they met only once, mathematics predicts a bloodbath). The tournament was won by TIT for TAT, one of the simplest strategies, entered by Anatol Rapoport from Toronto. TIT for TAT always starts by collaborating on the first move, and then simply copies the opponent’s response on each subsequent move. So TIT for TAT loves to collaborate, but strikes back immediately if you try to cheat. It is also forgiving, since it wants to restore collaboration if the opponent does. How can such a trivial strategy win a tournament in which many very sophisticated programmes, written by experts on game theory, participated?
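For the curious, here is a minimal Python sketch of a single iterated Prisoner’s Dilemma of the kind just described. The payoff values (3, 5, 1, 0) are the conventional textbook ones, which the text itself does not specify, so treat them as an assumption; the tournament machinery is left out.

# (my move, opponent's move) -> my payoff; "C" = cooperate, "D" = defect
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    # cooperate on the first move, then copy the opponent's previous move
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    seen_by_a, seen_by_b = [], []        # what each player has seen the other do
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (600, 600)
print(play(tit_for_tat, always_defect))  # TIT for TAT is exploited once, then retaliates

Note that TIT for TAT never beats its individual opponent; it wins tournaments because it collects high scores from all the other nice strategies it meets.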

The first tournament was followed by a second one. All new participants, now 63, were given full access to all results and strategies of the first tournament. They could use all the available information and their own imagination to come up with the ultimate winning strategy in this multi-player, multi-round collaboration game. The second tournament was again won by TIT for TAT, submitted in unchanged form by Anatol Rapoport. Isn’t this magic? Next, Axelrod simulated many new versions of the tournament as well as a long-term contest in an “ecology” of interacting strategies. TIT for TAT emerged as the great victor. Most of the book is devoted to an analysis of the roots of TIT for TAT’s success. Axelrod proves several theorems that explain why “nice” strategies (those that are never the first to defect) are better than “mean” strategies (those that occasionally or always defect first), how a community of nice strategies can protect itself against invading meanies, and how small clusters of collaborators can take over a world of defectors. The final analysis results in some clear lessons on how to collaborate successfully, assuming almost nothing about the agents themselves: be nice, retaliate against defections, be forgiving and do not be too smart. Simple rules, but one can imagine that some leaders will need a bit of study to take them to heart.

Is TIT for TAT, the biblical “eye for an eye”, the computer-age equivalent of the ultimate good? Axelrod shows that the evolution of cooperation does not require that the interacting agents are friends, or that they trust each other. The agents do not even have to have minds. The key ingredients are the likelihood of recurrent encounters, recognition of specific opponents, and memory of previous actions. This explains how English and German soldiers could live and let live in the trenches during World War I, how the USA and the USSR succeeded in not dropping the bomb, and how bacteria can enjoy our company while living in our inner organs. In fact, the natural tendency towards systems of collaborating agents is so strong that such systems sometimes need to be actively suppressed. Cooperation theory predicts collaborating soldiers and congressmen, but also the elaborate texture of the Italian Piovra, the Mafia. Even in a typical zero-sum game such as chess, where everyone is supposed to fight for himself, pathological collaboration can emerge, as Bobby Fischer suspected when all his Russian opponents played easy draws against other Russians, only to go for a win when dealing with Fischer. Of course this didn’t help, since Fischer became world champion anyway.

An old philosophical adage says that you cannot derive an ought from an is. Cooperation theory is about how things are, and why they are as they are; it does not provide a moral, or a prescription of what is good or bad. It is helpful, however, to know a little bit about the mechanisms that underlie the emergence and stability of cooperation. Once we understand the mechanisms, we can use this knowledge to steer the system, and try to promote good things and prevent bad things. But when it comes to making such decisions we are on our own. Unless, of course, you prefer to consult your copy of the Gideon Bible.

Usernames, passwords and PIN codes: the human element

By: Cornelis Jan Stam

Date: 28-4-2011

How can you tell whether an engineer is an extrovert? Quite simple: he is an extrovert when he looks at your shoes instead of his own shoes while he is talking to you. This is a bit of a nasty joke that makes fun of the suspicion that talent for the exact sciences often comes with a slight problem in the field of human interaction. To be honest, perhaps I should include myself in the engineer category once in a while, although I have no real qualifications when it comes to physics and mathematics. There is, however, one category for which the joke must be considered a gross misrepresentation of reality. It would be very exceptional indeed to catch the software engineers who are responsible for the 250-500 databases that store everything we are engaging in anything that even remotely resembles human communication.

This state of affairs has a major influence on the joys of daily life. In particular, it relates to the pleasure of having to come up with a new username, password and sometimes PIN (“personal identification number”) code for each and every application we have to interact with. For instance, scientists are often invited to review papers for scientific journals. There is nothing wrong with asking this; in fact, reviewing could be considered a highly moral act, contributing to the well-being of science. The problem is that you first have to register, obtain a password, log in, and – if you are really lucky – go through an extensive enquiry about your scientific habits, tastes and preferences. If you pass the test, you may proceed to review the paper. But beware: once your review is finished, you will be exposed to the niceties of the journal’s website once again.

The people who design these web applications are obviously not the ones who use them; they would probably not accept the way the poor user is expected to find his or her way on websites that are the digital equivalent of bureaucracy in its most frightening manifestation. Nothing is more dangerous to mental health than getting error messages for not filling in scores of “required fields” in the unknown way intended by the software engineer, without getting any clue whatsoever about what was wrong and how it should be done properly. One issue is of special interest: the requirement to come up with a new password or PIN code for each new application. Of course, requirements for such passwords vary wildly (apparently, software engineers also do not talk to each other). If you think you can get away with the name of your beloved one you will typically receive immediate punishment: “password not allowed”. Even if you succeed in coming up with something that software engineers would qualify as acceptable, for instance “^&%%_&--@sinterklaas-december”, you will be tested again soon, since really safe systems will demand that you change your password every few months. Quality regulations demand this. We need to consider safety, don’t we?

How many passwords and PIN codes can a human subject remember? At what stage does your brain blow up? Do we perhaps have to invent applications for the management of all these usernames, passwords and PIN codes? No doubt we will need a password for that application as well, so where do we keep that one? How long will it take before all this information gets hacked? TomTom passes information on your whereabouts to the police, so that they can manage their speed checks more efficiently. Sony also likes this approach. If you fly over the USA, the US government has information on your diet. They must care a lot about your gastronomic well-being. It is such a pity we don’t have a fully connected electronic patient file yet, and still need to put all our fingerprints in one large database. However, as politicians assure us, this will only be a matter of time. Perhaps we should ask government officials to store all our passwords and PIN codes, just to be on the safe side.

There is at least one tragic example where this would have been helpful. When the two planes crashed into the two towers on September 11, 2001, not only did they kill close to three thousand people, but all their passwords were lost as well. This was more than a minor nuisance in getting things going again. Fortunately, people can be very creative in dealing with such unexpected disasters. After nine-eleven, groups of survivors sat together and tried to guess the passwords of their deceased colleagues. Apparently, in many cases, this strategy worked, and a complete database disaster in the wake of the terrorist attack could be prevented. Somehow, many people still squeeze the names of their beloved ones into their passwords, allowing those who survived to recover what was lost. Perhaps we should include such human considerations in version 2.0 of all the databases that store our lives.

Scientific knowledge as shareware

By: Cornelis Jan Stam

Date: 25-4-2011

In “Freefall”, his in-depth analysis of the 2008 financial and economic crisis, Joseph E. Stiglitz reminds us that “America’s third president, Thomas Jefferson, pointed out that knowledge was like a candle: as one candle lights another, its own light is not diminished.” This was a truly enlightened piece of advice: sharing knowledge with others in no way diminishes your own knowledge. There was no shareware in Jefferson’s time, but he probably would have loved it. Knowledge is not a zero-sum game; you can share it without losing it. In fact, lack of information sharing can be a major source of problems, for instance in economics. Stiglitz earned the Nobel Prize in economics for his investigation of the disruptive effects of “information asymmetries”. In science, however, there seems to be a longstanding tradition of sharing information and communicating your results. Science as a whole can only flourish if everyone agrees to collaborate in this worldwide game of trust.

Unfortunately scientists are not necessarily less selfish or egoistic than other human beings. If anything, scientists tend to be very ambitious, and keen on getting recognition for their contributions to the common good of shared knowledge. In science, priority is one of the most highly valued commodities. This explains why Isaac Newton and Gottfried Wilhelm Leibniz, who both invented calculus more or less simultaneously and independently, could have such a bitter fight over the question of who was really first. The discovery of the double helix structure of DNA is another example of the fierce battle for priority. James Watson did not hesitate to use information on the B form of the DNA crystal, provided to him in secret by Maurice Wilkins, to solve the puzzle of DNA’s structure. Sometimes it seems that the name of the person who lit the candle is more important than the light it emits. In finance, a system of ill-directed incentives played a major role in promoting irresponsible and even downright immoral behavior. In science, excessive appreciation of priority, rewarding the discoverer and not the discovery, can be an incentive to leave the royal road of information sharing and collaboration. The consequences can be considerable, and may strike a major blow to the trust that underlies the scientific endeavour. Perhaps every scientist should read “Valse vooruitgang” (False progress), a collection of horror stories about scientific fraud by Frank van Kolfschooten.

When he was thinking about what kind of academic study he should undertake, Stiglitz was given the following piece of advice by his parents: “Money is not important. It will never bring you happiness. [Strange advice to a future economist.] Use the brain God has given you, and be of service to others. That is what will give you satisfaction.” This may sound a bit moralistic, but there is probably also some wisdom in it that Jefferson would have agreed with. For scientists, of course, one should replace “money” with “credit for priority”. There may be an extra, hidden message in Jefferson’s advice: candles burn only for a limited span of time. If they do not succeed in lighting some other candles, their flame will simply be lost. It is the light that endures, not the candle.

Unexpected encounters, seeing patterns, and filling holes: the nature of scientific discovery

By: Cornelis Jan Stam

Date: 22-4-2011

Many, perhaps most scientists will spend their career working hard, contributing useful things to the body of scientific knowledge once in a while, but never making that one big discovery that brings instant fame and glory. Certainly, for the general audience, unaware of all the hard work and silent suffering that is going on in research laboratories, great discoveries give science a face. This may even be literally true: scientists are not generally known for their beauty, but when they announce a breakthrough, many people will tolerate a picture or two in newspapers and magazines. Scientific discoveries are important, if only to show society why it is worthwhile to spend money on science. At the same time discoveries are notorious for their lack of discipline. Although NWO and administrators of universities and research institutes would really appreciate it if scientists could plan their future discoveries in a neat way, indicating what they expect to discover, when, and at what cost, including a detailed specification of consumables, deliverables and valorization benefits, in practice scientific discoveries tend to behave more like bohemian artists than civilized citizens. Could it be that this tells us something about the fundamental nature of scientific discovery?

One obviously important point is that scientific discoveries may come in different flavors. For instance, we may distinguish between unexpected encounters, sensations of suddenly seeing the pattern, and the hard work of filling in holes. Of these, the unexpected encounters most easily fulfill the usual description of a scientific discovery. When scientists, usually looking for something else, suddenly and unexpectedly come across a completely new and interesting specimen, dead or alive, we would probably be willing to acknowledge that this constitutes a scientific discovery. The discovery of X-rays, awarded the first ever Nobel Prize, probably falls into this category, but so do cosmic background radiation and mirror neurons. The ability to recognize and make use of such chance discoveries is called serendipity. Such unexpected discoveries tend to speak to the imagination, but are difficult to control, and may reflect a relatively immature stage of a scientific field. If you do not have a series of well-developed scientific theories available, you are more likely to be really surprised by unexpected visits from reality. Little children must have lots of experiences like this.

Things are different with scientific theories. A theory could be defined as a set of well-defined relations between a number of entities; basically, a scientific theory is a pattern, sometimes cast in the exact language of mathematics. The existence of the entities – often remnants of previous unexpected encounters – is not enough; one has to see or grasp the frame that holds a set of such entities together. Discoveries of this kind, including evolution theory, gravity, electromagnetism, relativity and quantum theory, are often preceded by long and hard work. Frequently, progress comes in stages, with different scientists re-arranging the patterns over time. Although the recognition of the pattern may occur rather suddenly – think of Kekulé and benzene, or Watson and Crick and DNA – this type of discovery is a mixture of long hard work and luck. As Pasteur remarked: chance favours the prepared mind.

But even when you are not so lucky, you can still do science and make discoveries. A peculiar feature of many powerful scientific theories is that they not only explain the exact relations between known entities, but may also predict the existence and even the properties of as yet unknown objects. These unknown objects are, one could say, the holes in the pattern. It is often possible to give a very accurate description of the entity that should fill the hole; the only problem is that it has not yet been observed. Nice examples of discoveries that could be viewed as filling in the holes of a powerful pattern are the discovery of Neptune (predicted by Newton’s laws of gravity applied to our solar system) and the discovery of gallium and germanium (predicted by the pattern of Mendeleev’s periodic system). The interesting feature of these types of discovery is that we know what we are looking for. We could even send out a search warrant, if the police were a bit more efficient. Have you seen a graviton? Please report immediately to the nearest police station. Filling in holes may be less romantic than unexpected encounters or seeing new patterns, but it is crucial for the corroboration of scientific theory. It is a sure sign of mature science. Astrology doesn’t have unfilled holes, but astronomy does.

Unexpected observations, the discovery of hidden patterns, and the tracing of occupants of theoretical holes may reflect progress and maturation in science, but their interrelationship could well be very complex. For instance, following the ideas of Karl Popper, even a well-established scientific theory may face falsification when confronted with new observations that are neither predicted by nor easily accommodated within the old pattern. Major changes in large-scale scientific theories, provoked by rebellious observations, are often called revolutions, for good reasons. In contrast to these revolutions, much scientific work falls into Thomas Kuhn’s category of “normal science”: disciplined hole filling within the context of a well-defined pattern or paradigm. Kuhn and Popper had different opinions on the relative importance of normal science and revolutions, but both would probably agree that scientific progress reflects a complex dynamic of novelty and discipline, observation and theorizing. We should respect the unexpected, keep looking for hidden patterns, and actively search for the occupants of theoretical holes.

How does modern network theory fit into this scheme of scientific discovery? Network theory is a very theoretical, mathematical enterprise that allows one to see common patterns in wildly different complex systems, ranging from genetic networks to societies and brains. The availability of simple, elegant mathematical models of complex networks, such as the random graph, the small-world network and scale-free networks, has turned out to be a major driving force behind research in this field. It seems that almost any complex system can be explained as a variation of one of these three basic models. But this confirmatory character of much of modern network science is also a danger and a weakness. We may be overlooking some black holes, right in the middle of the theory. Think of random graphs, small-world networks and scale-free networks as filling the upper left, the lower left and the upper right corner of a simple two-by-two table. All of them have short paths. Small-world networks have high clustering, but lack broad degree distributions. Scale-free networks have broad degree distributions, but lack high clustering. Why don’t we have a simple model that combines short paths, high clustering and broad degree distributions? Where does modularity fit in? What about degree correlations? Could it be we are missing a model, one that fits into the unoccupied lower right corner?
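
For readers who want to see the three corners of this table side by side, here is a minimal sketch using the networkx library; the network size, average degree and rewiring probability are arbitrary assumptions chosen only for illustration, and the comparison of clustering, path length and degree spread is meant to make the empty corner visible, not to settle it.

```python
# A minimal sketch (illustrative parameters, not from the text) comparing the three
# canonical network models on clustering, path length and degree heterogeneity.
import networkx as nx

n, k = 1000, 10  # assumed network size and average degree

models = {
    "random (Erdos-Renyi)":         nx.gnp_random_graph(n, k / n, seed=1),
    "small-world (Watts-Strogatz)": nx.watts_strogatz_graph(n, k, 0.1, seed=1),
    "scale-free (Barabasi-Albert)": nx.barabasi_albert_graph(n, k // 2, seed=1),
}

for name, g in models.items():
    # work on the largest connected component so path lengths are well defined
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    clustering = nx.average_clustering(giant)
    path_length = nx.average_shortest_path_length(giant)
    degrees = [d for _, d in giant.degree()]
    heterogeneity = max(degrees) / (sum(degrees) / len(degrees))  # crude measure of degree spread
    print(f"{name:31s} C={clustering:.3f}  L={path_length:.2f}  max/mean degree={heterogeneity:.1f}")
```

A candidate for the lower right corner would have to score high on clustering and on degree spread at the same time, which none of the three standard models above does.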

Think patterns, not people

By: Cornelis Jan Stam

Date: 17-4-2011

Just imagine the following utterly simple game: you are given a certain amount of money, say 100 euro, and you are confronted with another player whom you don’t know and will not meet again. You do not even have to meet face to face. The assignment is to offer the other player some of your money, anything from 1 to 100 euro. If the other player accepts your offer you can both keep the money. If the other player refuses, you both get nothing. What would you do?

If you are completely rational you will figure out that the correct answer is to offer the other player one euro, and keep 99 euro for yourself. If your opponent is rational as well, he will not refuse since, of course, one euro is more than no euro. However, as Mark Buchanan shows in “The social atom”, human nature has some surprises in store. Although classic economic theory assumes perfect rationality of all players as the only reasonable way to understand group behavior and economic dynamics, real humans are different. This “ultimatum game” has been tested in numerous populations, across all strata of society, and in different cultures. Completely at odds with the economic predictions, people usually offer the other player between 25 and 50% of the money; perhaps we are not so bad after all? There is only one exception to this rule: economics students consistently give less money to the other player. So, humans may be good by nature, but proper education can do wonders.
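
As a small illustration of the gap between the “rational” prescription and what people actually do, here is a hedged sketch of the game; the 30% rejection threshold of the human-like responder is an assumption loosely inspired by the experimental range quoted above, not a figure from Buchanan’s book.

```python
# A minimal sketch (assumed rejection rule, not from "The social atom") of the
# ultimatum game: the proposer keeps the rest of the pot only if the responder accepts.
POT = 100  # euros to divide

def rational_responder(offer: int) -> bool:
    """Accept any positive offer: one euro is more than no euro."""
    return offer >= 1

def fairness_responder(offer: int, threshold: float = 0.3) -> bool:
    """Reject offers below ~30% of the pot, as many real players seem to do (assumption)."""
    return offer >= threshold * POT

def proposer_payoff(offer: int, responder) -> int:
    """What the proposer keeps if the responder accepts, and nothing otherwise."""
    return POT - offer if responder(offer) else 0

best_vs_rational = max(range(1, POT + 1), key=lambda o: proposer_payoff(o, rational_responder))
best_vs_fairness = max(range(1, POT + 1), key=lambda o: proposer_payoff(o, fairness_responder))
print("best offer against a rational responder:", best_vs_rational)        # 1 euro
print("best offer against a fairness-minded responder:", best_vs_fairness)  # 30 euro
```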

“The social atom” by science writer Mark Buchanan is full of similar stories and examples. In line with his previous books, in which he explored complex networks and the ubiquitous power laws in non-equilibrium systems, Buchanan examines what modern physics can teach us about social systems. This may seem a bit preposterous. Nothing could be more complex than a human being (think of yourself as an example), so, surely, a large group of interacting humans must be a system beyond comprehension. But in fact, it may be just the other way around: whatever differences may exist between individual humans, when we are dealing with really large numbers of them their overall behavior starts to show surprising patterns. According to Buchanan, “The most important lesson of modern physics is that it is often not the properties of the parts that matter most, but their organization, their pattern and form.”

Theories derived from physics and mathematics, especially those falling under the general heading of complexity, are increasingly successful in explaining and predicting human behavior. With deceptively simple models it is possible to understand why segregation can occur without racism, why the size, growth rate and lifespan of business firms follow power laws, and why putting obstacles in their way may actually help large masses of people to escape more easily from a building in the case of panic. The simplicity of the most successful models, which produce counterintuitive but empirically verifiable results, is not a coincidence but a key element of their success. As Buchanan stresses: “The point isn’t that any oversimplified model will do, but that oversimplified models can go a long way if they get right the few details that really matter.”

“The social atom” is about the physics of society, not about the workings of the brain. It is a critique of the classical approach in economics and sociology, where complex phenomena either are given even more complex causes, or are explained by models that fail miserably when confronted with empirical reality. Building models is the way forward, but these models have to be as simple as possible, and highly predictive, like a good physics model. In modern neuroscience this is exactly what is missing. Neuroscientific research has degenerated into fact collecting at an unprecedented industrial scale. Perhaps we need to take a step back and reconsider what we are really looking for. To understand the mathematics of traffic jams we do not have to know the genetic makeup of all individual car drivers. It cannot be excluded that equally simple and powerful models will one day explain how macroscopic spatiotemporal patterns of neural activity underlie consciousness, memory and perception. To get there we have to keep in mind that “In the brain, the model is the goal.” (Bartlett W. Mel, Nature Neuroscience supplement 2000, 3: 1183). We just need to get right the few details that really matter.

Biking and the secret of life

By: Cornelis Jan Stam

Date: 15-4-2011

Thermodynamically speaking we are all dead. It is at least slightly ironic that only living, thinking creatures can come to this conclusion. Wolfram doesn’t wonder and carbon doesn’t care. Only we seem to be trapped by the mystery of the second law of thermodynamics: in a closed system the degree of disorder, technically called “entropy”, can only stay constant or increase. Ultimately, disorder rules, like in a children’s room. In the thermodynamic limit, everything will dissolve in a heat bath. A rather depressing idea, when you come to think of it. No wonder Ludwig Boltzmann, the godfather of thermodynamics, hastened the inevitable by committing suicide. So, why are we still here? Where is all this order, this whole magic tapestry of patterns, coming from? Is the second law having a bad dream?

Have you ever wondered how it is possible to ride a bike without falling to the right or the left all the time and ending up in the only really stable state: lying flat on the street with your bike on top of you? If you think about it, riding a bike is a rather curious activity, one that seems to defy the laws of physics, in particular those related to gravity. How can you keep your balance? Part of the trick is that you, the rider, are actively involved; by moving your body slightly to the left or the right you can try to prevent the inevitable. However, unless you have exceptional acrobatic skills, stability is almost impossible to achieve when your bike is standing still. The point is that you have to get moving. Speed helps, but it still isn’t enough. If you had no handlebars, you would still lose the battle against gravity and end up on the street, with the additional bonus of injuries caused by falling at full speed. The solution is steering. Did you ever realize that riding a bike is actually a continuous series of falls that are prevented by small corrections made by steering? These corrections are so small you probably aren’t even aware of them; but remember how difficult it was to learn in the first place. Riding a bike is a process in which you achieve temporary stability by a combination of speed and steering. With appropriate training you can even use this system to go wherever you want. But it is transient; in the end you will get tired, and sooner or later you will have to get off. No one has expressed the underlying message better than Michael Dudok de Wit in his movie "Father and Daughter".

Basically this is it. All complex self-organizing systems consist of various interacting elements, driven by some force or energy, and steering to keep their balance. That is how the glucose level in your blood and the firing rate of neurons in your brain are kept constant, at least on average. Complex systems create order, local niches of low entropy, by driving and steering, at the cost of increased disorder elsewhere. But this process is bounded in space and time. Order can be created locally and temporarily, but ultimately the system will collapse. Interestingly, there seems to be a relation between the size and lifespan of complex systems. In animals, for instance, body size is correlated with lifespan. That is why elephants get so old, while mice die young. Getting bigger seems to be a strategy of complex systems to live longer. If it weren’t for gravity we would probably have elephants the size of skyscrapers. In very large systems of interacting elements equilibrium can be postponed longer, and the thermodynamic fatum can be delayed. Perhaps splitting off some baby complex systems before the transient feast is over is also a good idea in the battle against entropy.
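
The driving-and-steering idea can be made concrete with a toy feedback loop; the sketch below is only an illustration with arbitrary assumed numbers (growth rate, noise level, steering gain), not a model of glucose regulation or of an actual bicycle.

```python
# A minimal sketch (assumed parameters, not from the text) of "driving and steering":
# a variable that would drift away from balance is kept near a set point by many
# small corrective nudges, much like steering on a bike or glucose regulation.
import random

def simulate(steps: int = 1000, gain: float = 0.2, noise: float = 1.0, drift: float = 1.02):
    x = 1.0          # deviation from the balanced state
    history = []
    for _ in range(steps):
        x *= drift                       # instability: deviations tend to grow ("falling")
        x += random.gauss(0.0, noise)    # random pushes from the environment
        x -= gain * x                    # a small corrective steer back towards balance
        history.append(x)
    return history

trace = simulate()
print("mean deviation:", sum(trace) / len(trace))
print("largest excursion:", max(abs(v) for v in trace))
# With gain = 0 the deviation explodes; with a modest gain it stays bounded.
```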

Riding your bike as a metaphor for the first law of complex systems. Think of it the next time you take a ride, but keep an eye on the traffic, especially cars at roundabouts, to prevent premature thermodynamic equilibrium.

Survival of the nicest

By: Cornelis Jan Stam

Date: 10-4-2011

In a recent paper from the group of Martin A. Nowak, "How mutation affects evolutionary games on graphs", graph theory is used to study the evolution of cooperation on complex networks. Combining game theory with modern network theory is a fascinating approach. Several years ago Martin A. Nowak wrote a small Christmas essay for the scientific journal Nature with the title “Generosity: a winner’s advice”. The winner referred to in the title was Robert May, a world-famous evolutionary biologist and Nowak’s supervisor at Oxford. May’s advice was deceptively simple: “You never lose for being too generous”. This may seem somewhat odd advice, especially coming from a highly successful and ambitious scientist like Robert May, who would go for a win even when playing with his dog. Is he fooling us, or is there a lesson to be learned?

Many people still think of evolution as a struggle in which only the smartest and the brightest will survive in the long run. This is the notorious “survival of the fittest”, a notion often incorrectly ascribed to Charles Darwin (in fact it was coined by Herbert Spencer). The flavor of this combative approach to success has greatly influenced many fields of human interaction, from economics to science. In science the top is where you want to be, and beating your opponent is the way to go. It is important to outsmart the guy next door. The discovery of the double helix by James Watson and Francis Crick is a nice case study, especially since it is so well documented: not only Watson and Crick but also the third man, Maurice Wilkins, have written about it. But the biography of Rosalind Franklin, who died before the Nobel Prize for the discovery of the double helix was awarded, shows that there is more to success than a simple battle to be the first and the best. Perhaps evolutionary dynamics can teach us a lesson: there can be no winners without collaboration.

Modern game theory, starting with John von Neumann and later Robert Axelrod, has shown that the best way to success is, strangely enough, to be nice. One of the most successful strategies in the iterated prisoner’s dilemma is “tit for tat with forgiveness”: start out by cooperating, retaliate if your opponent defects, but give him the opportunity to get back into the game by forgiving. Nowak argues that collaborative systems, whether they consist of business competitors, politicians or scientists aiming for glory, obtain optimal results when they combine generosity with hopefulness and forgiveness. In other words: allow others a fair share, assume their good intentions, and forgive them if they disappoint you. This seems a long way from the “survival of the fittest”; it is more like survival of the nicest. Similar ideas have been expressed by the famous Dutch primatologist Frans de Waal, for instance in his book “Van nature goed”.
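
To make the strategy concrete, here is a minimal sketch of “tit for tat with forgiveness” in an iterated prisoner’s dilemma; the payoff values and the 10% forgiveness probability are conventional textbook assumptions, not numbers taken from Nowak’s work.

```python
# A minimal sketch (standard payoff values assumed, not from the text) of
# "tit for tat with forgiveness" in the iterated prisoner's dilemma.
import random

# Conventional payoffs: (my move, their move) -> my score; C = cooperate, D = defect
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat_forgiving(my_history, their_history, forgiveness=0.1):
    """Cooperate first; copy the opponent's last move, but occasionally forgive a defection."""
    if not their_history:
        return "C"
    if their_history[-1] == "D" and random.random() >= forgiveness:
        return "D"
    return "C"

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print("TFT-with-forgiveness vs itself:  ", play(tit_for_tat_forgiving, tit_for_tat_forgiving))
print("TFT-with-forgiveness vs defector:", play(tit_for_tat_forgiving, always_defect))
```

Two forgiving players settle into steady cooperation, while against a relentless defector the strategy quickly stops being exploited; that, in miniature, is the mix of generosity and retaliation described above.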

Systems of interacting agents collaborating along these lines can be highly successful; working together on the basis of generosity, hopefulness and forgiveness actually produces the best overall result. There is a twist, however: such systems are also highly unstable. Once a successful collaboration based upon mutual trust has been established, it becomes very attractive for individual agents to defect: to take the money and run. The more successful a collaborative system is, the more seductive defection becomes. Of course defectors can be detected, and systems can install protective mechanisms to ward them off. But the price for keeping the defectors out, or at least under control, is a loss of collaborative resources. Typically, the battle between collaborators and defectors will show a complex waxing and waning dynamic: periods of success based upon mutual trust will alternate with episodes of paranoia and battle against defectors. It is the battle between “good” and “evil”, cast in a mathematical framework.

Does this sound familiar? Society is probably full of systems that show features reminiscent of the simple models of game theory. Recognizing these mechanisms, for instance in groups of collaborating or competing scientists working in a specific field, does not make the problems go away. It does help, however, to realize that success is not simply winning the struggle of the fittest; it is give and take, having trust without being naïve. There is nothing wrong with ambition; but without trust it will never take wing.

Connecting minds by weaving the web

By: Cornelis Jan Stam

Date: 3-4-2011

It is sometimes said that success has many fathers but failure is an orphan. With the wisdom of hindsight we can all claim that we could have predicted the World Wide Web would become the mother of all networks, but in reality the WWW started with the work of a single person, Tim Berners-Lee, who wrote the software while doing a consultancy job at CERN in Switzerland. It is fortunate that he wrote a book about his experiences while setting up the web, getting other people and communities involved, and steering it into the future with the W3C, short for World Wide Web Consortium, a surprising manifestation of democratic rule in a world of technology and commerce. “Weaving the web” was first published in 1999, ten years after Berners-Lee first proposed his web idea to the community of high-energy physicists. Ten years is like forever in our information-technology-driven societies. It is amazing how well Berners-Lee predicted the future of the web. There are also some interesting lessons in his book about human behavior, and about the need to balance personal ambition and pride with the appreciation that we cannot achieve anything without collaboration.

Initially, nobody was interested. Tim Berners-Lee tried to write software, first in Pascal, to do some housekeeping: he wanted to organize information on the various projects, contact persons and institutions he had to deal with. Even at that time, CERN was a huge organization, a highly international melting pot of brilliant minds, but also a hopeless entanglement of different computer systems, networks, operating systems, and communication protocols. The key question was: what would happen if we could make all of this a shared information space, with a common language? Although the Internet, the physical network of computer systems, had been around since the seventies, no simple solution existed for such a world-wide, generally accessible communication network. Berners-Lee realized that a simple, but universally accepted system would be the key. He combined hypertext, a technique to link different documents and files on a computer, with a standardized way to communicate between servers and clients. This resulted in the concepts of the URI (uniform resource identifier), HTTP (hypertext transfer protocol) and HTML (hypertext markup language). These inventions, and especially the idea of linking any piece of information to any other piece of information, irrespective of its physical location, lie at the heart of the web.

The WWW was not an official project, but a wild idea set up by someone in his spare time. It took Berners-Lee and a few close collaborators a long time to make people see what he saw: a world of accessible information, and ultimately a semantic web, a digital equivalent of society. Only around 1995, when Netscape started off as a successful business and Bill Gates decided to incorporate Internet Explorer in Windows 95, did the web really become big. In fact it became big so rapidly that many thought they could go for quick wins by setting up all sorts of web-based business enterprises. Challenging this behavior Berners-Lee writes: “What is maddening is the terrible notion that a person’s value depends on how important and financially successful they are, and that that is measured in terms of money. That suggests disrespect for the researchers across the globe developing ideas for the next leaps in science and technology.”

Berners-Lee decided instead to go to MIT to set up the World Wide Web Consortium at the LCS (laboratory for computer science). He was not against business or profit; he was just concerned that the ideal of a global information space would break down if it were dominated and fragmented by a small number of players. The W3C is a rather unique collaboration bringing together all the important parties involved in the web. It has been able to set standards such as XML, deal with conflicting business interests by consensus, and keep the web essentially what it was from the beginning: open and free.

The story of the WWW shows how the combination of human drive and the willingness to collaborate can produce a revolution. When “Weaving the web” was written, Google and Facebook didn’t exist. However, Berners-Lee already predicted developments along these lines, and strongly promoted the use of the web not only to connect computers, but more importantly to connect people and minds. If Berners-Lee had not been so brilliant, the web would not have been invented. If he had not been able to convince other people of the need to collaborate, you would not be able to read this. He said: “My hope and faith that we are headed somewhere stem in part from the repeatedly proven observation that people seem to be naturally built to interact with others as part of a greater system.” It seems only just that he was given an honorary doctorate by the VU University on October 20, 2009.

Popular science: what's the point?

By: Cornelis Jan Stam

Date: 2-4-2011

Mothers are notorious for crouching down, changing their voice, and converting to a kind of mini-speak that avoids difficult words and abstract concepts when addressing their children. Similar symptoms can be observed in aunts, uncles and grandparents. The underlying assumption seems to be that small size comes with a limited capacity for understanding. Whether small children actually appreciate being addressed in this way is hard to know; they simply cannot tell yet. Possibly this behavior has emerged in our evolution as a convenient solution to bridge the gap between adult knowledge and children’s need to understand.

It can be questioned whether this “mother strategy” is the best way to communicate progress in modern science to the general public. Many people may be keen to learn about exciting scientific discoveries, even though they are neither scientists nor experts. A beautiful illustration of this interest can be found in a short column by Vincent Icke with the title “Komt een man bij de bouwmaat” (in the essay bundle “Dat kan ik me niet voorstellen”). It shows that we do not have to crouch down to explain what scientists are doing; explaining something well does not require the adoption of mini-speak, but a serious effort by experts to communicate what their work is about. This should be a moral obligation of all scientists to society. Science and scientists are funded by the community, and their work has a major impact on the present and future structure of our world. Reporting what is happening cannot be left completely to science journalists, but should also be done by the investigators themselves. In fact, trying to explain what you do to someone who is interested but knows little or nothing about the topic is one of the best ways to get your own thoughts straight.

Unfortunately popular science is sometimes polluted by journalistic mini-speak and an unhealthy addiction to sensationalism and hyping. The current tsunami of popular brain books contains many specimens that could be categorized as a waste of trees. Here both scientists and serious science journalists have an important responsibility: scientists should spend time communicating their work in a way that is respectful of the general public, and journalists can help to point out high-quality work in their field. Fortunately there are excellent science journalists such as James Gleick and Mark Buchanan; reading their books, especially on topics you are not familiar with, may actually change your thinking. But scientists can write such books as well; Sync by Steven Strogatz has opened a completely new world for me. Even television does not always have to be of SBS6 quality. The portraits of scientists in “Magie van de wetenschap” and the Horizon documentary on Andrew Wiles are small masterpieces of popular science. Their quality helps to convey the message, and that is the whole point.

Google-think as an antidote to free market disasters in healthcare, science and education

By: Cornelis Jan Stam

Date: 29-3-2011

As could be expected, the financial and economic crisis of 2008 has finally reached its most vulnerable victims: healthcare, education and science. At the same moment when CEOs of government-funded financial institutions fail to understand why they cannot cash their bonuses of over one million euro, hospitals, schools and universities are faced with the threat of major budget cuts. After all, the economy is in bad shape, and we really need some money for military missions abroad, so we all have to bear a fair share of the suffering. If only doctors would be willing to earn a bit less, students to work a bit harder, patients not to get ill so often or for so long, and administrators of hospitals, schools and universities to run their businesses a bit more efficiently, it should be possible to tackle our financial problems.

It cannot be much fun to be responsible for running major public institutions such as a university these days. These institutions are paid for to a large extent by the community. It makes sense that society expects them to deliver good education and high-level science for a reasonable price. Everyone working in the public sector should be aware that they are living on and spending community resources. This brings special responsibilities with it, including high productivity combined with efficient organization. These requirements seem perfectly reasonable. However, in practice the pressure of doing more for less is reaching intolerable levels in at least some places, and the long-term consequences of damage to these institutions do not seem to be taken seriously enough. Many years ago Karel van het Reve warned us: “If the Netherlands is not prepared to spend a great deal of good money on everything related to education, it runs the risk of becoming as poor and backward a country as England in 1920” [quoted by Theodor Holman in “Karel. Het leven van een overtollig mens.”; Tr. C.S.]. Even if we survive the present misery, do we want a future for our grandchildren in which they will have to speak, write and read Chinese if they want a decent job?

It is often assumed, explicitly or tacitly, that many problems of public institutions in healthcare, education and science are remnants of the welfare state and its overly optimistic approach to solving problems and spending money. No doubt there is an element of truth here. The problem is that the cure now seems to be getting worse than the disease. Apparently, to make institutions more efficient they need a transfusion of free-market thinking. If you create a market, and force schools, hospitals and universities to start competing for students, patients and grants, costs will go down and efficiency will emerge. If it works for mobile telephony and Albert Heijn, why wouldn’t it work for the public sector as well?

Of course it doesn’t. A university is not a company, and the laws that determine business success are not necessarily applicable to trains and research groups. Scientific discoveries are not pizzas; they cannot be delivered around the clock at a fixed low price. Eminent scientists have warned against this political tunnel vision, which aims at converting all institutions into competitive supermarkets, in “If you are so smart, why aren’t you rich”. In “Utopie van de vrije markt” the Dutch philosopher Hans Achterhuis exposes the ideological distortions that underlie this neo-capitalist religion. Marc Chavannes is fighting a frontline battle against attempts to “valorize” everything that should be precious to us. While suffering from the terminal stage of ALS, Tony Judt wrote “Ill fares the land”. This book, a testament for Judt’s children, is a bitter warning against the consequences of a free-market approach to public services and should be obligatory reading for everyone responsible for running politics and public institutions.

So how can we cut costs if converting everything into a company is neither efficient nor morally acceptable? One problem seems to be that both the responsible politicians and the administrators of public institutions are trapped in outdated modes of thinking. The pressure to cut costs leads to Pavlovian solutions: selling assets, closing departments, firing people. We have to make choices, don’t we? What seems to be lacking is a thorough rethinking of what an institution is actually for, and what alternative ways may exist to deliver its services. It may be that the “business model” of our old institutions is becoming outdated. We may have missed it, but our society has changed. One of the biggest and most successful companies at this moment is Google. Google seems to give away everything for nothing: searching the WWW, finding your way in the world with Google Maps, e-mail, translations, videos, blogging and so on. Still, they make a profit by using the information they accumulate in an efficient way. “For free” is a business model. Jeff Jarvis, author of “What would Google do?”, is one of the most ardent advocates of redesigning businesses and public services such as schools, hospitals and universities according to the new reality of globalized information networks. His main point can be summarized as follows: we should learn to move minds, not molecules. Instead of firing teachers and doctors we might consider different ways to make their services available to the public. As a bonus, one may save some costs as well. Again, Karel van het Reve may have been a bit prophetic when he suggested: “Anyone who really wants to undertake something to postpone the end of civilization would probably do best to aim at a freely accessible system of lower and middle education, with small classes, where children are taught reading, writing and arithmetic, and later French, German, English, mathematics and physics, by competent teachers.”

The pleasure of pointing things out

By: Cornelis Jan Stam

Date: 27-3-2011

In the movie “A beautiful mind” there is a scene in which John Nash and his future wife are looking at the stars. Nash points to the sky with his index finger and shows his lover all the constellations he knows. Then he challenges her to name just any object. She says “umbrella”, and he points out to her that there actually is an umbrella in the sky, if you want to see it. Apart from the obvious romanticism, there is a deeper meaning to be discovered in this scene. In our mind reality consists of patterns of relations, cognitive constellations if you will. Once we see them, we can communicate them to others by literally pointing them out. As soon as the other suddenly “sees” the pattern there is usually a smile of recognition. Making someone else see the patterns you have discovered is a very satisfying act. It lies at the heart of our drive to teach and to discover. That is why Richard Feynman could speak about “The pleasure of finding things out”.

Our index finger is probably the most powerful instrument we have to point out things to others, to make them understand what we mean and what we see. The British geriatrician and philosopher Raymond Tallis has written a beautiful book about this: “Michelangelo’s finger”. This book is itself an exercise in pointing things out. By making us look at our own index finger and what we do with it, the deeper significance of “pointing” becomes clear. Pointing something out to somebody else requires a human who is aware of himself and his surroundings, and who can imagine what another human being, seeing an object pointed at, will understand. In fact, we can even point at things that are not directly visible, opening up a world of transcendence. When you point to something that cannot be seen, at least not yet, the notion of true or false emerges. With pointing you can help people, but you can also fool them. You can even fool yourself. It is amazing to see how Tallis constructs a complete philosophy out of a simple finger.

We can use our index finger to point, or another body part if need be, but we can also use artificial devices. Road signs are an obvious example, but so are sticks and laser pointers. It would be a nice exercise in practical philosophy to think of all the things that can be used as pointers. When we look at our computer screens, the cursor or mouse pointer tells us where to look. Perhaps the most fascinating modern version of the pointer is the hyperlink. Hyperlinks are the WWW equivalent of index fingers; they point the reader of a webpage to other important or interesting pages or files. The hyperlink is the heart of the World Wide Web. Without links, there would be only isolated html pages, loose pieces of information. Only by linking these pieces of information to each other do patterns of knowledge emerge. Adding a hyperlink to an html page feels like magic; you control an index finger that can effectively reach the whole world, and everyone can see it.

However, there is a twist to this story. John Nash was a genius who discovered patterns of interaction in students playing games. He saw the general rules that determine the dynamics of such social networks; this is now called the Nash equilibrium, and for it he was awarded the Nobel prize for economics in 1994. However, Nash also saw patterns that nobody else saw. He was schizophrenic. Convinced he was working for the CIA, he monitored countless newspapers and journals for “hidden messages”, and he discovered such messages everywhere. That is, of course, a personal tragedy. But there is also a more general message in the Nash story. How do we know the patterns we see are really there? “The sound and the fury” by William Faulkner refers to famous lines in Shakespeare’s Macbeth:

Life's but a walking shadow, a poor player, that struts and frets his hour upon the stage, and then is heard no more; it is a tale told by an idiot, full of sound and fury, signifying nothing.

If Macbeth is right, we are all chasing our own shadows. There is no meaning, no story, no pattern to be discovered. It is all the imagination of an idiot. Are there really umbrellas in the sky? When asked “How do you know for sure?”, Nash replied: “I don’t. I just believe it”. As long as we can share the patterns we discover with others, as long as we can point out to them the constellations we see, they will be significant. That is the pleasure of pointing things out.

The barbarian networks of Alessandro Baricco

By: Cornelis Jan Stam

Date: 26-3-2011

Before the barbarians came things used to be simple. If you really wanted to understand something you had to study it carefully, identify the components it is made of, and dissect all parts until the smallest building blocks, the holy grail of pre-barbarian reductionism, come into view. Ramón y Cajal and Golgi were awarded the 1906 Nobel Prize for physiology or medicine for visualizing the building blocks of our brain: neurons. Recently, experiments have been conducted in awake patients in which the activity of individual nerve cells is recorded while they look at pictures. Some neurons fire exclusively when Jennifer Aniston is in view, while others prefer Bill Clinton. Dutch investigators have identified neurons that get excited by Bassie and Adriaan, Sinterklaas and Wendy van Dijk. In science, it is not common to argue over taste. In the same year in which the full human genome was uncovered, Eric Kandel received the Nobel Prize for his research into the molecular basis of memory. We are not our brain, we are our neurons, and ultimately our molecules. But does this make us understand better what we see when we look in the mirror?

According to Alessandro Baricco the idea that we can understand ourselves better by descending ever deeper into the mineshafts of reality has its cultural roots in a “vertical dynamics” that started in the nineteenth century. Later, Sigmund Freud descended the staircase of the human psyche, while modern particle physicists chase after the smallest building blocks of matter in ever more powerful – and appropriately underground – particle accelerators. The Large Hadron Collider is the Great Wall of our civilization: not a defensive wall that protects us against the barbarians, but a monument to a particular point of view and a symbol that defines what we thought we were, at least until recently. Understanding is digging, and knowledge is depth.

Barbarians have no respect for walls, irrespective of whether they are located in Berlin or China: they tear them down, or simply walk around them, whichever is more convenient. According to Baricco it is exactly this “horizontal movement” that reveals the pattern of cultural mutation. We are no longer concerned with digging up archeological treasures, but with building places of passage that reveal a pattern of fast connections. Ironically, this transformation has its roots in the very temple of scientific depth. Tim Berners-Lee, a former CERN physicist, combined hypertext with a markup language, thereby inventing html, the Esperanto of the World Wide Web. Larry Page and Sergey Brin opened up the network of knowledge of the WWW when they understood that connections between websites constitute the essence of “meaning”. In the act, they transformed the very meaning of “meaning”. Google is not just a successful company; it is a new definition of what we are. We are all becoming barbarians in the Bariccian sense, whether we like it or not. Reality has become a superfast network of connections between facts; cultural and scientific mineshafts give rise at most to nostalgic memories while you surf around them, like the road signs next to French highways indicating historic places and old monuments. To be is to surf, to know is to Google. There is an important cultural message in the fact that Google is now digitizing all our books and museums.

Even the Wendy van Dijk neuron can no longer be properly understood if we stick to the pre-barbarian vertical concepts. For instance, does a Wendy van Dijk neuron contain different genes, different molecules than, say, a Peppi and Kokki cell? And what happens if your Wendy van Dijk neuron succumbs: would that imply you have lost her image forever? If you took your Wendy van Dijk neuron out of your brain and kept it alive in a plastic container, would it still fire enthusiastically as soon as she enters the room? The answer is obvious: Wendy is not a neuron, but a network consisting of countless connections. In this respect, our brain functions like Google: the significance of any component, whether it is a webpage or a neuron, is determined by its connections. More connections imply greater significance. But the pattern of connections is not arbitrary. A smart combination of many local and a few critical long-distance connections gives rise to a small-world network. In such a network everything is connected to everything else in about six steps. This is not only true for social networks, as suspected by Frigyes Karinthy and shown by Stanley Milgram, but also for our brain. Does this make us see something different when we look at ourselves in the mirror?

In “The barbarians” Alessandro Baricco simulates the dynamics of the cultural mutation in the style and architecture of his book: he surfs along seemingly randomly associated phenomena like Hollywood wine, the Dutch “totaalvoetbal”, and bestseller books. Thereby he demonstrates what he wants to argue: our reality is rapidly becoming a horizontal network of connections. The question is not whether we are our brain. The question is what our brains are. Perhaps in a next edition of his book “De metaforenmachine” Douwe Draaisma should add the WWW and Google to the series of metaphors we have used to understand the brain, and thereby ourselves and our culture.


From humorist to tumorist

By: Cornelis Jan Stam

Date: 26-3-2011

Sometimes artists beat scientists. The Hungarian writer Frigyes Karinthy had an intuitive sense of the peculiar nature of human communities. In the short story “Chains” he wondered whether all humans could be connected by a chain of at most six steps. This may seem a wild guess, a piece of artistic imagination. Many years later the playwright John Guare used the same theme in his play “Six degrees of separation”. It took another creative mind, Stanley Milgram, to show that this intuitive idea about the connectedness of human society is closer to the truth than you might suspect. In his famous letter experiment Milgram showed that letters sent to randomly chosen subjects in the US could reach a target person in Boston in about six steps. More recently, this amazing finding was confirmed with e-mails. So, despite the incredible size of the human population (currently estimated to be 6.91 billion), and the fact that each one of us knows on average only about 150 other people (Dunbar’s number), we are only a few handshakes away from any other person.
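
A naive back-of-envelope calculation makes the six-steps claim plausible; the sketch below is only an illustration and deliberately ignores the heavy overlap between acquaintance circles, which is exactly what pushes the real number of steps up towards six.

```python
# A naive back-of-envelope check (an illustration, not Milgram's analysis):
# if everyone knew about 150 people and acquaintance circles did not overlap,
# how many steps would be needed to reach the whole population?
import math

population = 6.91e9     # world population as quoted above
acquaintances = 150     # Dunbar's number

steps = math.log(population) / math.log(acquaintances)
print(f"log_150(6.91e9) is about {steps:.1f} steps")  # roughly 4.5
# In reality friends share friends, so the chains are longer: about six steps.
```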

Could it be that this kind of artistic creativity and scientific understanding comes with a morbid sense of humor and a short lifespan? Shortly after the PhD thesis defence of his student Christina Taylor, Stanley Milgram did not feel well. His wife drove him to the hospital, where he walked to the desk of the emergency room and said: “My name is Stanley Milgram. This is my I.D. I believe I am having my fifth heart attack”. One hour later he was dead. He was 51.

Frigyes Karinthy himself developed symptoms of a brain tumor when he was 48. In the novel “A journey round my skull” he gives a vivid and fascinating account of all his experiences, from the first auditory hallucinations, which he experienced while sitting in a Budapest café, to the troublesome recovery after brain surgery in Sweden. In his own words, Karinthy went “from humorist to tumorist”. Karinthy was a brilliant novelist and a keen observer of human behaviour. His inside report of having a severe brain disease is a unique document, resembling in some respects “Hersenschimmen” and “Eclips” by the Dutch writer Bernlef. Brain disease, especially if severe, splits the mind into two parts: the part that is still healthy observes how the very fabric it is made of falls apart. Experiencing how the ground you stand on suddenly becomes liquid and gives way during an earthquake, as described by Haruki Murakami in “After the quake”, must be a similar sort of experience.

If you fall ill, especially if it is serious, sooner or later you will have to deal with doctors. Ironically, Karinthy’s wife was a neurologist. She seems to have missed all the obvious manifestations of a growing brain tumor in her husband, at least until a very late stage, when he was suffering from severe papilledema and failing vision. “A journey round my skull” is not only a deeply human and humorous account of being ill, but it also gives an unforgettable description of the medical profession and of the primitive state of neurology before the Second World War. The doctors Karinthy encounters on his long journey to recovery (and there are many of them) display the whole spectrum of human virtues and weaknesses. Every neurologist, perhaps every doctor, should read this book to learn a bit more about what their patients are experiencing.

Did Karinthy’s story have a happy ending? Several of his friends collected money and arranged for him to be operated on by the brilliant neurosurgeon Olivecrona in Sweden, who had been trained by Harvey Cushing himself, the great pioneer of neurosurgery. The operation went well; Karinthy survived his brain tumor and wrote “A journey round my skull”. However, fate had a final ironic twist in store. Karinthy died of a stroke while trying to fasten his shoelaces, one year after he had recovered from his brain tumor. He was 51.

If we are our brains, who is reading this?

By: Cornelis Jan Stam

Date: 24-3-2011

Although, according to an NRC Handelsblad science journalist, we are effectively addicted to science books about the brain, it is still amazing that a serious book about the brain of over 450 pages would sell more than 100,000 copies. Yet this is exactly what happened with the book “Wij zijn onze hersenen” by the Dutch neuroscientist Dick Swaab. Swaab has never been afraid of public debate, and with this voluminous overview of his long and successful research career in neuroscience he has certainly drawn the attention of a very large audience. No doubt, our brain is a fascinating topic, but for whom or what? Is this book read by brains, or by human subjects? According to Swaab’s mantra this is a senseless question. We as individuals coincide with our brains, and with all the neurons and connections that constitute their material substrate. There is no magic, other than in our imagination. Under the microscope there are cells, not thoughts.

Although the reductionist and materialistic approach to neuroscience that permeates Swaab’s book is neither new nor, at least to most neuroscientists, very surprising, it evokes strong emotional reactions. Many readers have taken the trouble to write essays on this book and explain their own, opposing points of view. For instance, the emeritus professor of biological psychiatry van Praag wrote a long critique in Trouw. Ultimately his main objections turn out to be religious. This has to be respected of course, but it is unlikely to convince Swaab. To use a much abused concept, it is virtually impossible to cross the bridge separating different paradigms. A very nice essay was published by the writer Marian Donner in NRC Handelsblad of Tuesday 1 March 2011. She argues that “we are our brain” may turn out to be more an ideology than a sober statement of fact. Although Donner is not a neuroscientist, she points out very well a widely shared feeling of discomfort with the way neuroscience is currently presented to the general public. Swaab’s book is typical, but certainly not an exception. Other popular books about the brain, such as those by Victor Lamme and Ab Dijksterhuis, convey a rather similar message. Our brains are in charge, and consciousness, Lamme’s “kwebbeldoos”, is hopelessly lagging behind. Why can’t we simply dispose of ourselves? Perhaps Margaret Thatcher’s diagnosis of society (it doesn’t exist, there are just people) is also appropriate for our brain?

One could argue that the current image of neuroscience in the popular press is dominated by a certain point of view that is not necessarily shared by everyone working in this field. There is an increasing interest in many fields of science, ranging from chemistry and genetics all the way up to biology, sociology and economics, in thinking in terms of complex systems rather than reductionist rigor. Complexity science is booming, but its implications are not widely appreciated or recognized. In particular, the relevance of complex systems and modern network theory to understanding the brain is not really common knowledge. I think neuroscientists in this field have a duty to communicate their work to a wider audience, since it may have consequences for the way we think about such things as free will, responsibility, biological determinants of behavior and so on. In fact, the very notion of who and what we are is totally different if we compare the reductionist and complex systems points of view.

The issue is not whether, ultimately, it is matter all the way down. The magic is not in the parts, but in the interactions. It is the dance, not the dancers. All systems that are really interesting are so not because of the components they are made of, but because of the ways the components influence each other. Large ensembles of interacting components have properties and characteristics that are not simply the sum of the ingredients. However, if you choose to focus on the particles, then it is particles you will see, not the patterns that they weave. This is a research strategy that can become an ideology if you are not aware of its limitations and the alternatives. Although pattern formation in complex systems may sometimes look like magic (see for instance the books by Philip Ball), it is definitely not mysterious. There is no ghost in the machine. If you take the system apart, you are left with a box of particles. This is probably true for all complex, self-organizing systems, but it is certainly highly relevant for the brain. The history of neuroscience is beginning to show that the reductionist paradigm has its limitations. We can take the system apart, going all the way down to neurons, genes and molecules, but building it up again in order to understand what we were trying to grasp in the first place – how we can be conscious, think, act – turns out to be an ever more hopeless goal, postponed to an ever more distant future in which we finally hope to reach the reductionist bottom and are free to move up again. If they were consistent, reductionists would all group together at the Large Hadron Collider and wait for the final answers to come in before starting to work on the brain.

In the meantime, something can be learned from a former CERN physicist, Tim Berners-Lee. He was the inventor of the World Wide Web, an innovation that has changed our world and our way of thinking more than any other major discovery in recent times. Understanding the WWW, which is in fact a complex functional network evolving on top of a structural network, the Internet, has greatly influenced modern network science and its wider applications in many fields outside communication networks. Many lessons about the brain can be learned here, from the question why the precuneus is so vulnerable in Alzheimer’s disease, to how a network concept such as “path length” may predict our level of intelligence. What our brains do, and how they fail when we become ill, depends not just upon what our neurons are, but also upon how our neurons connect and communicate. We urgently need to discover the laws of communication in large-scale networks, and reductionism will not be of much help here. There is a lot at stake: we might even re-discover ourselves in the midst of our neurons.

For comments and suggestions: send an e-mail

copyright C.J.Stam
Contact information: Department of Clinical Neurophysiology VU University Medical Center
Postal address: De Boelelaan 1118 Postal code: 1081 HV Amsterdam The Netherlands
P.O. Box: 7057 Postal code: 1007 MB Amsterdam The Netherlands
Phone: 020 4440727 Fax: 020 4444816