The Emergent Self

August 3, 2001

Published June 2000 at Edge (Originally published in 1995 as Chapter 12 of The Third Culture by John Brockman). Published on KurzweilAI.net August 3, 2001.

I guess I’ve had only one question all my life. Why do emergent selves, virtual identities, pop up all over the place creating worlds, whether at the mind/body level, the cellular level, or the transorganism level? This phenomenon is something so productive that it doesn’t cease creating entirely new realms: life, mind, and societies. Yet these emergent selves are based on processes so shifty, so ungrounded, that we have an apparent paradox between the solidity of what appears to show up and its groundlessness. That, to me, is a key and eternal question.

As a consequence, I’m interested in the nervous system, cognitive science, and immunology, because they concern the processes that can answer the question of what biological identity is. How can you have some kind of identity that simultaneously allows you to know something, allows cells to configure their own relevant world, the immune system to generate the identity of our body in its own way, and the brain to be the basis for a mind, a cognitive identity? All these mechanisms share a common theme.

I’m perhaps best known for three different kinds of work, which seem disparate to many people but to me run as a unified theme. These are my contributions in conceiving the notion of autopoiesis–self-production–for cellular organization, the enactive view of the nervous system and cognition, and a revision of current ideas about the immune system.

Regarding the subject of biological identity, the main point is that there is an explicit transition from local interactions to the emergence of the “global” property–that is, the virtual self of the cellular whole, in the case of autopoiesis. It’s clear that molecules interact in very specific ways, giving rise to a unity that is the initiation of the self. There is also the transition from nonlife to life. The nervous system operates in a similar way. Neurons have specific interactions through a loop of sensory surfaces and motor surfaces. This dynamic network is the defining state of a cognitive perception domain. I claim that one could apply the same epistemology to thinking about cognitive phenomena and about the immune system and the body: an underlying circular process gives rise to an emergent coherence, and this emergent coherence is what constitutes the self at that level. In my epistemology, the virtual self is evident because it provides a surface for interaction, but it’s not evident if you try to locate it. It’s completely delocalized.

Organisms have to be understood as a mesh of virtual selves. I don’t have one identity, I have a bricolage of various identities. I have a cellular identity, I have an immune identity, I have a cognitive identity, I have various identities that manifest in different modes of interaction. These are my various selves. I’m interested in gaining further insight into how to clarify this notion of transition from the local to the global, and how these various selves come together and apart in the evolutionary dance. In this sense, what I’ve studied, say, in color vision for the nervous system or in immune self-regulation are what Dan Dennett would call “intuition pumps,” to explore the general pattern of the transition from local rules to emergent properties in life. We have at our disposal beautiful examples to play around with, both in terms of empirical results and in terms of mathematics and computer simulations. The immune system is one beautiful, very specific case. But it’s not the entire picture.

My autopoiesis work was my first step into these domains: defining the minimal living organization, and conceiving of cellular-automata models for it. I did this in the early 1970s, way before the artificial-life wave hit the beach. This work was picked up by Lynn Margulis, in her research and writings on the origins of life, the evolution of cellular life, and, with James Lovelock, the Gaia hypothesis. Humberto Maturana and I invented the idea of autopoiesis in 1970. We worked together in Santiago, during the Socialist years. The idea was the result of suspecting that biological cognition in general was not to be understood as a representation of the world out there but rather as an ongoing bringing-forth of a world, through the very process of living itself.

Autopoiesis attempts to define the uniqueness of the emergence that produces life in its fundamental cellular form. It’s specific to the cellular level. There’s a circular or network process that engenders a paradox: a self-organizing network of biochemical reactions produces molecules, which do something specific and unique: they create a boundary, a membrane, which constrains the network that has produced the constituents of the membrane. This is a logical bootstrap, a loop: a network produces entities that create a boundary, which constrains the network that produced the boundary. This bootstrap is precisely what’s unique about cells. A self-distinguishing entity exists when the bootstrap is completed. This entity has produced its own boundary. It doesn’t require an external agent to notice it, or to say, “I’m here.” It is, by itself, a self-distinction. It bootstraps itself out of a soup of chemistry and physics.

The idea arose, also at that time, that the local rules of autopoiesis might be simulated with cellular automata. At that time, few people had ever heard of cellular automata, an esoteric idea I picked up from John von Neumann–one that would be made popular by the artificial-life people. Cellular automata are simple units that receive inputs from immediate neighbors and communicate their internal state to the same immediate neighbors.
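To make that mechanism concrete, here is a minimal sketch of an elementary one-dimensional cellular automaton in Python. It is a generic toy of the local-rule idea described above, not the original autopoiesis model built with Maturana and Uribe:

```python
# A minimal one-dimensional cellular automaton: each cell holds a binary
# state and updates it from its own state and that of its two immediate
# neighbors. Generic illustration only, not the 1970s autopoiesis model.

def step(cells, rule):
    """Apply an elementary CA rule (0-255, Wolfram numbering) once."""
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right   # neighborhood as a 3-bit number
        new.append((rule >> index) & 1)               # look up the new state in the rule table
    return new

if __name__ == "__main__":
    cells = [0] * 31
    cells[15] = 1                        # a single "on" cell in the middle
    for _ in range(15):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule=110)    # rule 110, a well-known elementary rule
```

The point of such a toy is only this: every cell consults nothing but its immediate neighbors, yet coherent global patterns propagate across the whole array.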

In order to deal with the circular nature of the autopoiesis idea, I developed some bits of mathematics of self-reference, in an attempt to make sense out of the bootstrap–the entity that produces its own boundary. The mathematics of self-reference involves creating formalisms to reflect the strange situation in which something produces A, which produces B, which produces A. That was 1974. Today, many colleagues call such ideas part of complexity theory.
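One schematic way to write down that circularity, offered here purely as an illustration and not as the actual calculus of self-reference developed in that work, is as a fixed-point condition:

```latex
% Schematic only: the "A produces B, B produces A" loop read as a fixed point.
A = f(B), \qquad B = g(A)
\quad\Longrightarrow\quad
A = f\bigl(g(A)\bigr)
```

Read this way, the self-referential entity is not grounded in any prior term of the chain; it is whatever remains invariant under the composed production f∘g.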

The more recent wave of work in complexity illuminates my bootstrap idea, in that it’s a nice way of talking about this funny, screwy logic where the snake bites its own tail and you can’t discern a beginning. Forget the idea of a black box with inputs and outputs. Think in terms of loops. My early work on self-reference and autopoiesis followed from ideas developed by cyberneticists such as Warren McCulloch and Norbert Wiener, who were the first scientists to think in those terms. But early cybernetics is essentially concerned with feedback circuits, and the early cyberneticists fell short of recognizing the importance of circularity in the constitution of an identity. Their loops are still inside an input/output box. In several contemporary complex systems, the inputs and outputs are completely dependent on interactions within the system, and their richness comes from their internal connectedness. Give up the boxes, and work with the entire loopiness of the thing. For instance, it’s impossible to build a nervous system that has very clear inputs and outputs.

The next area of significant work applies the logic of the emergent properties of circular structures to the nervous system. The consequence is a radical change in the received view of the brain. The nervous system is not an information-processing system, because, by definition, information-processing systems need clear inputs. The nervous system has internal, or operational, closure. The key question is how, on the basis of its ongoing internal dynamics, the brain configures or constitutes relevance from otherwise nonmeaningful interactions. You can see why I’m not really interested in the classical artificial intelligence and information-processing metaphors of brain studies. The brain can’t be understood as a computer, in any interesting sense, and I part company with the people who think that the brain does rely on symbolic representation.

The same intuitions cut across other biological fields. Deconstruct the notion that the brain is processing information and making a representation of the world. Deconstruct the militaristic notion that the immune system is about defense and looking out for invaders. Deconstruct the notion that evolution is about optimizing fitness to live in the conditions present in some kind of niche. I haven’t been directly active in this last line of research, but it’s of great importance for my argument. Deconstructing adaptation means deconstructing neo-Darwinism. Steve Gould, Stuart Kauffman, and Dick Lewontin, each in his own way, have spelled out this new evolutionary view. Lewontin, in particular, has much appreciated the fact that my work on the nervous system mirrors his work on evolution.

My fourth area of concentration–the most recent one–consists of using the same concepts to revise our understanding of the immune system. Just as conventional biology understands the nervous system as an information-processing system, classic immunology understands the immune system in military terms–as a defense system against invaders.

I’ve been developing a different view of immunology–namely, that the immune system has its own closure, its own network quality. The emergent identity of this system is the identity of your body, which is not a defensive identity. This is a positive statement, not a negative one, and it changes everything in immunology. In presenting immunology in these terms, I’m creating a conceptual scaffolding. We have to go beyond an information-processing model, in which incoming information is acted upon by the system. The immune system is not spatially fixed; it’s best understood as an emergent network.

Continued at: http://www.edge.org/3rd_culture/varela/varela_p3.html