FROM BEING TO BECOMING...
Aristotle postulated four different characteristics and causes for the existence of everything in the world. These were:
material, formal, efficient, and final
Let us leave aside the last two and focus on the first two. The material cause relates to the exact composition of an entity, which gives it its unique properties. For example, if I had to justify and describe the function of a chair, I would say that the material the chair is made of, plastic, confers the property of 'being a chair' upon it through its particular malleability, which makes it suitable for a human being to sit on comfortably. I might further inquire into the material cause of the plastic itself. As can be imagined, this inquiry, when reiterated, leads us to polymers, to molecules, and finally to atoms. Thus, we try to explain everything from a 'bottom-up' approach.
On the other hand, Aristotle's 'formal' cause tries to explain the existence of entities purely on the basis of their function. For example, the three most important characteristics of living systems are metabolism, self-repair, and replication. Instead of asking what property a given material would have, we ask what material would be needed to demonstrate a particular property. This is the 'top-down' approach.
The bottom-up approach has been enormously useful, especially in the natural sciences. Chemists, for example, synthesize molecules, and the eternal activity they have been engaged in involves relating the structure of a molecule to its function. At one end, this leads to practical endeavours like drug design; at the other, it leads to investigation of the molecules of life, and of how their unique structures have been tailor-made for their unique functions, without which life as we know it would not exist.
In Computer Science, there have been two famous schools in the area of Artificial Intelligence. The 'Bottom-Uppers' focused on the actual material and form of the human brain, as manifested in neurons, and tried to articulate these concepts in systems design. However, there has been a dominant school of thinkers, the 'Top-Downers', who proposed that what matters is the way the brain functions: if we can duplicate that, the actual architecture of the brain is irrelevant. Of course, the two are complementary, and studying both is essential to the success of any discipline that hinges upon such relationships. But the real problem facing the top-downers is that they are dealing with a single-effect, multiple-cause situation. In the case of biological systems, for example, even if DNA is an exquisitely special molecule, whose structure points almost instantly to its function as the genetic and replicating unit of life, one could conceive of many other structures, some radically different, that could serve the same purpose. So going back from function to structure raises the problem of investigating multiple solutions to a complex problem. In the case of bottom-up analysis, however, we know that there is one SINGLE target, and we are trying to construct approaches towards achieving that target, multiple as they may be.
Interestingly, the top-down approach has always been used in the social sciences, because there, in the first place, the systems are much more complex than many natural systems. Secondly, at least until very recently, there was no inkling of what kind of economic, social, and political models could be built from scratch that would mirror the problems of modern society and their possible solutions; the bottom-up approach was just too difficult. Since any kind of precise, logical, mathematical modeling of social scenarios was almost non-existent until the early twentieth century, social scientists adopted the top-down approach because it was the ONLY one they could adopt: look at the behaviour of society, and try to extrapolate back to possible mechanisms and assumptions that could be modeled. Even now, this is the approach many fall back on, because something as complex as human behaviour is still unpredictable, even with the most sound-looking bottom-up approaches.
In the natural sciences, however, enough progress has been made since the Renaissance to at least contemplate top-down approaches. For example, in the design of new tailor-made molecules that can be used as drugs, in the electronics of the future, or as the materials of tomorrow, the focus now is on the synthesis of PROPERTIES, rather than of the molecules themselves. In biology, the question we are asking now is not what material will have what property, but how we can tread back from the property to the material. We now ask: what approach do we need to take to model life as it is? Although this question is quite old, until now we admittedly lacked the heavy intellectual and technological equipment that would have been necessary to approach the problem logically. The main 'problem' with the top-down approach is that it puts forward very general principles, exceedingly powerful as they may be. In biology, for example, the great mathematician John von Neumann postulated the mechanism of genetic transmission in a very general way in 1948, five years before Watson and Crick suggested a very specific model of this general idea. Admittedly, their model became famous in comparison to von Neumann's ideas, because in science, what turns heads is the clear-cut paradigm, the foolproof experiment, and the watertight, precise logic. However, in the last couple of years, as science has become more and more interdisciplinary, general ideas from one field can more easily be preludes to successful, realistic models in another, precisely because of their generality. This approach has seen its biggest explosion in the fields of artificial life, evolution, and artificial intelligence.
In the study of evolution, most approaches have relied upon assuming the existence of complex organic molecules as the logical starting point for the emergence of life. In the last century, top-down approaches based on the concepts of cellular automata and neural networks have yielded new perspectives that can be used to design putative experiments. Of course, in ANY experimental setup, we are restricted by some kind of material approach or the other; attempts can only be made to simplify and generalize the assumptions in the experiments as far as possible. From my point of view, the most promising experiment suggested to shed light on the exhausting debate over how life arose on earth has been the so-called "Whole Environment Evolution Synthesizer", or "WEES". WEES consists of trying to put together a duplicate of the prebiotic environment that existed on earth at the beginning of life. Rather than assume, from a modern-day standpoint, that molecules like DNA, RNA or proteins would be an essential part of the paraphernalia of living systems, WEES aims to adjust and control only the conditions and then let the evolutionary circus run by itself, leading to whatever comes out at the other end. The input of the system would simply consist of water, the gases that were purported to form the so-called early 'reducing' atmosphere of the earth, and other very simple organic and inorganic components. The gaseous atmosphere constitutes the 'primary' component of the system, while simulations of pools and tidal zones are the 'secondary' elements. The 'tertiary' portion refers to the changes in the values of the various parameters that would influence the outcome. While nobody expects a gorilla to come crawling out at the other end, it would definitely be interesting to see what combinations of molecular architecture are generated in the system, and whether certain kinds are preferred, which would possibly gain the upper hand in traversing the roadblocks on the pathway to life. To date, nobody has actually completed such an experiment, not least because of the fine controls that would be needed to take care of all the physicochemical parameters over a long period of time; an engineer's nightmare.
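To make the 'cellular automata' idea above a little more concrete, here is a minimal sketch in Python of an elementary one-dimensional automaton: a handful of purely local rules, applied blindly, producing rich global structure. The particular rule number (110), grid size and single-cell seed are illustrative choices of mine, not anything prescribed by WEES or by the discussion above; the point is only the top-down flavour of the exercise, in which we fix simple rules and then watch what organization falls out.

```python
# Minimal 1D cellular automaton: simple local rules, emergent global patterns.
# The rule number (110), grid size and seed are illustrative assumptions.

def step(cells, rule=110):
    """Apply an elementary cellular-automaton rule for one generation."""
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right   # neighbourhood as a 3-bit number
        new.append((rule >> index) & 1)               # look up the corresponding rule bit
    return new

# Start from a single 'seed' cell and watch structure emerge from the rule alone.
cells = [0] * 40
cells[20] = 1
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```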
Interestingly, the British chemist Alexander Graham Cairns-Smith has come up with a simplified scenario which, by the mandates of Occam's Razor, sounds to me like THE most likely way for life to have arisen on our planet. Cairns-Smith simply sidesteps the enormous hurdles that would have to be overcome in synthesizing complex organic molecules such as DNA and proteins. Rather than form such nightmarish architectures, Cairns-Smith postulated that life arose…out of CLAY. In his vastly revealing book, "Seven Clues to the Origin of Life", which reads like a detective story, he lays out a splendid and detailed mechanism whereby life starts out in the form of crystals of clay (a material that has been present in vast quantities everywhere throughout time; silicon and oxygen are the most abundant elements in the earth's crust), which grow in the usual fashion of crystals. Over time, organic ions and molecules get trapped in these crystals and grow along with them. The breakthrough occurs when these organic molecules become polymers and start acquiring special properties like structural versatility, and even a kind of 'replication' ability, which their simple, poor progenitors lack. As time passes, natural selection takes over, and the more robust organic molecules depart from the original clay systems in all their glory, now free to spin the tale of life.
Cairns-Smith's scenario has the beauty of being logical and simple. It has the drawback of not having actually been demonstrated in the lab. That is where systems like WEES come in. Arrange for a steady supply of clay and other necessary ingredients at one end, and watch what kind of 'crystal evolution' takes place. If some kind of organic polymorph, or at least a clay-organic chimera, is spit out at the other end, we can be fairly sure that this could have been at least a plausible mechanism for the origins of life. Recently, Jack Szostak of Harvard has obtained some interesting results from such a 'life from clay' system.
The main problem with such an experiment is that, even for a seemingly lowly 'life form' such as clay, evolution will possibly take thousands, if not billions, of years; a time that no grant-pressed research scientist can afford to spend in observation. However, lacking all the peculiar and unique tendencies that beset human beings, computers can be a great help here, essentially accelerating the time needed for evolution. A virtual version of WEES would not only speed up the process, but would also easily hold constant the values of all the variables that are introduced.
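As a rough illustration of what 'accelerating evolution in silico' means, here is a toy mutation-and-selection loop in Python. It is emphatically not a model of WEES or of clay crystals; the target string, alphabet, mutation rate and population size are arbitrary assumptions, included only to show how cheaply a computer can churn through generations that would take geological time in a beaker.

```python
import random

# Toy variation-plus-selection loop. The 'target' is a stand-in for
# 'a molecule that works'; all parameters here are illustrative assumptions.

ALPHABET = "ACGU"
TARGET = "GAUUACA"

def fitness(s):
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Copy the string with occasional random errors."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
generation = 0
while fitness(max(population, key=fitness)) < len(TARGET):
    survivors = sorted(population, key=fitness, reverse=True)[:50]  # fitter half reproduces
    population = [mutate(random.choice(survivors)) for _ in range(100)]
    generation += 1
print(f"target matched after {generation} generations")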
Unfortunately, I think that even computers, in the end, would not provide us with a predictive and satisfactory picture of such an evolutionary process, for two reasons. First of all, the above-mentioned 'constant' values of all the parameters would hardly have been constant in the early days of our planet, with volcanoes, lava pools, comets and other celestial events, and other grand natural phenomena causing continuous and tumultuous upheavals in its existence, leading to wild fluctuations in the workings of the 'primordial soup' of life. If anything, these parameters would have been random. However, even this randomness would have been of a special kind, which finally led to the successful dominance on earth of the molecules of life. It would be impossible to simulate this 'non-random randomness' in a computer. And if the computer did decide to explore all possible versions of randomness, then I cannot imagine how the time needed for even the fastest supercomputer to arrive at a possible result would be any less than that required for evolution on earth itself.
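To get a feel for why 'exploring all possible versions of randomness' is hopeless, here is a deliberately crude back-of-envelope count; all three numbers are invented purely for illustration and are far smaller than anything realistic.

```python
# If only 10 environmental parameters can each take 10 values, and they are
# re-rolled at 1000 points in time, the number of distinct 'random histories'
# is already astronomical. All three numbers are made-up illustrations.
parameters, values, time_points = 10, 10, 1000
histories = values ** (parameters * time_points)
print(f"about 10^{len(str(histories)) - 1} possible histories")
```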
Secondly, one of the most profound discoveries of the last century has been the discovery of 'chaos' in all kinds of natural and artificial systems, from the stock market to rabbit reproductive cycles. If one wants to pithily, if somewhat incompletely, sum up the definition of chaos, it would probably be 'sensitivity to initial conditions'. The famous 'Butterfly Effect' (made infamous by a movie of the same name) contends that a butterfly flapping its wings in Timbuktu can cause a hurricane off Florida. In the financial world, the smallest of seemingly unrelated perturbations of any kind (rumours of 'Ganpati drinking milk', the death of a movie star) can cause enormous fluctuations in market prices. Given this precarious balance in which physical and human laws hold the world together, it would be beyond imagination to even understand, let alone predict, how small changes in initial conditions on the early earth would have affected its biochemical equilibrium.
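Sensitivity to initial conditions can be seen in a few lines of Python using the logistic map, the classic toy model of the rabbit-style population cycles mentioned above. The growth parameter and starting values are illustrative assumptions; the lesson is that a difference of one part in a billion is amplified into a completely different trajectory within a few dozen steps.

```python
# Logistic map x -> r*x*(1-x): a textbook model of chaotic population cycles.
# Two starting points differing by one part in a billion end up nowhere near
# each other. r = 4.0 and the initial values are illustrative choices.

r = 4.0
x, y = 0.300000000, 0.300000001   # nearly identical initial conditions
for _ in range(50):
    x, y = r * x * (1 - x), r * y * (1 - y)
print(f"after 50 steps: x = {x:.6f}, y = {y:.6f}, difference = {abs(x - y):.6f}")
```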
Similar approaches are being taken towards investigating AI, the most telling and admittedly cute general model being the 'Ant Model', in which virtual 'ants' are let loose with only broadly defined signals constituting food, ant hills, and pheromones in their immediate environment; a toy version is sketched below. Alluding to chaos theory again, it is found that such models can give fascinating results, which again are critically dependent upon initial conditions. Human beings, contrary to some of our fond expectations, also act quite confusedly upon random bits of information of all kinds. Throw in some completely rampant emotional factors, and it becomes a fortunate and wondrous conclusion that we are able to develop logically after all. Our language capacity, for example, is one of the most fascinating properties of our brain in this regard. As children, all we hear is garbled pieces of sentences that frequently may be grammatically incorrect. Yet we catch on fast as the wind. While Noam Chomsky's 'universal grammar' is a spectacular explanation of this phenomenon, we have yet to come to terms intuitively with the whole state of affairs.
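Here is the promised toy sketch, in Python, of the kind of 'ant model' described above: ants wander over a grid, lay pheromone once they have found food, and bias their steps towards existing pheromone. The grid size, number of ants, evaporation rate and bias probability are all assumptions of mine; the interesting part is that trail-like organization (or its absence) depends sharply on these initial choices.

```python
import random

# Toy ant-foraging model: random walks biased by pheromone.
# All numerical parameters below are illustrative assumptions.

SIZE, FOOD, NEST = 20, (15, 15), (2, 2)
pheromone = [[0.0] * SIZE for _ in range(SIZE)]
ants = [list(NEST) for _ in range(10)]        # all ants start at the nest
carrying = [False] * len(ants)

def neighbours(x, y):
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE]

for t in range(2000):
    for i, (x, y) in enumerate(ants):
        options = neighbours(x, y)
        # mostly follow the strongest pheromone nearby, sometimes move at random
        if random.random() < 0.8 and any(pheromone[a][b] > 0 for a, b in options):
            x, y = max(options, key=lambda p: pheromone[p[0]][p[1]])
        else:
            x, y = random.choice(options)
        ants[i] = [x, y]
        if (x, y) == FOOD:
            carrying[i] = True                # found food: start laying a trail
        if carrying[i]:
            pheromone[x][y] += 1.0
        if (x, y) == NEST:
            carrying[i] = False               # back home: trail complete
    pheromone = [[0.99 * v for v in row] for row in pheromone]   # evaporation

print("pheromone laid near the food source:",
      round(sum(pheromone[a][b] for a, b in neighbours(*FOOD)), 1))
```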
In the end, of course, the most surprising fact about the results of such models, as well as about life, is the simple fact that it EXISTS: a truth that flies in our face and that should jolt us whenever we become too vague. No matter how much we may rave about chaos theories and juggle with probabilities, we are faced with a reality that seems to boast of order concocted out of supposed disorder. Everything from evolutionary patterns to weather patterns seems to hold a secret enclave of organization. Unless evolution was the result of a completely random fluke (unlikely, since even a lucky fluke would need some kind of organizing principle to propagate itself), it is likely that these approaches will bear some kind of fruit, at least in providing patterns of recognition that link all these fields, and more, together. It seems that, finally, we are starting to make the transition from Aristotle's 'material' causes to 'formal' causes, which question the very basis of the existence of our world, and all the reality in it that we so much take for granted. Maybe this will provide us with the passage from 'being' to 'becoming'.
2 Comments:
Fantastic post.
In my physiology class last year I read a very interesting paper about 'Fractals and Chaos Theory'. It basically proposed that a certain amount of 'chaos' and redundancy is necessary to lend robustness to any living structure. However, there is no doubt that an underlying organisation is essential...
P.S. Maybe you should replace 'top-down' with 'bottom-up' in this phrase: "the top-down approach was just too difficult" ?
Also, when are you writing about 'efficient' and 'final' :)
"a certain amount of 'chaos' and redundancy is necessary to lend robustness to any living structure."- Absolutely right! I guess the challenge is whether we can at least quantify the logic of this 'consistent inconsistency'...
And the error is noted and acted upon. Thank you!
Posts on 'final' and 'efficient' causes coming up, whenever they step out of the realm of incoherent mumblings!