Saturday, September 03, 2005

THE AGE OF COMPUTER-GENERATED IGNORANCE: Careful what you wish for; you may get it...

It is surprising how our thoughts can take a roundabout turn overnight. Or perhaps not so much a roundabout turn as a stroll to the other side of the fence.
Yesterday, I was reading about scientific revolutions driven by technology - a gift bequeathed to us by the twentieth century - in Freeman Dyson's lucid and engaging 'The Sun, the Genome and the Internet', and decided to begin a post on this general scenario, based on some of my own thoughts and other things I had come across.
Today, it occurred to me that one could well think about the dangers of scientific revolutions driven by technology; in fact, one could think about the dangers of believing in the first place that there is any such enduring thing. As it was, the thought was so tantalizing and the connections seemed so many that I spent an hour in the shower thinking about it. By the time I was done, my fingers looked like bean pods kept out in the sun for too long - a direct untoward consequence of shower technology. As side consequences, I could not find time to shave, missed my bus, and had to walk.

The computer has revolutionized basic and pure science - an idea quite alien in the days when Alan Turing first developed his theory of computation, and John von Neumann changed the history and future of computing with his idea of the stored program, the direct forerunner of software. My initial plan was to write about some examples of such contributions by computers, and I already had half a post in the pipeline about that. But now I think it would be pertinent to make this post first: get warned about the dangers, and then decide whether or not to take the plunge. It seems more important to know about the responsibility before we know about the power - a trend that is almost never observed in history...

In 1992, Aleksander Wolszczan made one of those epoch-making discoveries that mark a decade. Working at Pennsylvania State University, Wolszczan discovered an actual extrasolar planet orbiting an astronomical object called a millisecond pulsar. Pulsars are as different from stars like our sun as they can ever be, and Wolszczan's discovery was met with incredulity, to say the least. After publication of this landmark discovery, Wolszczan was confronted by about fifty astronomers at Princeton University, who grilled him in succession, with the meticulous aggressiveness of a prosecution attorney, to substantiate his finding. Wolszczan had gone to great lengths to validate his discovery; his conclusion, drawn quite painstakingly, rested on measuring the timing and nature of the signals emitted by the pulsar to uncanny accuracy, and then processing the vast body of data thereby obtained. Crucial to his work were high-speed computers and state-of-the-art software. In fact, the telescope that Wolszczan used - the famous dish telescope at Arecibo in Puerto Rico - was quite old, and while it was still reliable, the telescope by itself would never have allowed him to reach his conclusions. Without the millions of data-juggling operations per second of modern computers, and the software which instructs them how to do it, Wolszczan would have been lost.
In the end, Wolszczan did of course succeed in convincing the community of astronomers that there was indeed another planet far beyond our solar system. In the seven years after his work, ten more such planets were discovered. The point is that from the perspective of astronomy, Wolszczan's discovery was a fundamental one - a 'pure scientific' discovery. However, it had been made possible only by the application of modern technology, and most importantly, the power of computing.
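
To give a flavour of the kind of data processing involved, here is a toy sketch of my own (emphatically not Wolszczan's actual pipeline, and with numbers made up for illustration): an orbiting planet tugs the pulsar to and fro, so the pulse arrival times deviate periodically from those of a perfect clock, and a Fourier transform of these 'timing residuals' reveals the planet's orbital period.

```python
# Toy sketch of pulsar-timing planet detection (not Wolszczan's actual
# pipeline): an orbiting planet makes the pulsar wobble, so the pulse
# arrival times deviate periodically from a perfect clock. A Fourier
# transform of those timing residuals reveals the planet's period.
import numpy as np

days = np.arange(0.0, 2000.0, 1.0)        # one timing residual per day
planet_period = 66.5                      # days (illustrative value)
residuals = 1e-3 * np.sin(2 * np.pi * days / planet_period)  # seconds
residuals += 1e-4 * np.random.randn(days.size)               # timing noise

spectrum = np.abs(np.fft.rfft(residuals))
freqs = np.fft.rfftfreq(days.size, d=1.0)  # cycles per day
peak = freqs[np.argmax(spectrum[1:]) + 1]  # ignore the zero-frequency term
print(f"recovered period: {1 / peak:.1f} days")
```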

Pure mathematics, that most abstract field of human thought, would seem to be one in which the successful application of computers would be a significant event. For computers to demonstrate that they could actually find new things in the pure sciences and solve unsolved problems, probably the greatest playing field would be the world of pure mathematics - that epitome of cold rationality where, it seems, only men of the likes of G. H. Hardy, Gauss, Riemann, and Euler could hold court, and where any allusion to computers making groundbreaking discoveries would be, at the least, an insult to this pantheon of greats and their field. In pure maths, the most alluring problems are those that can be stated in less than fifty words, and whose solution has eluded the greatest minds for centuries. Fermat's Last Theorem immediately comes to mind, finally proved in the mid-1990s by Andrew Wiles, who was said to have thrown the entire kitchen sink of twentieth-century maths at the problem. The famous unsolved Goldbach Conjecture - the conjecture that every even number greater than 2 can be expressed as the sum of two prime numbers - is another one that boggles the intellect. It seems very unlikely that computers could actually provide an unambiguous solution to such problems, whose resolution depends on wholly unanticipated and creative bursts of thought. However, there are two ways to skin a cat. One is to devise a way in which one could pull a zipper on the cat's back, and voila - its skin comes off. The other way to skin it is to... well, skin it. Start with the tail, and skin it, skin it until kingdom come. If computers are seen to be too dumb to engage in the first kind of process, it is already well known that they are adept at the second one.
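
To make the 'start with the tail' approach concrete, here is a minimal sketch of my own that checks Goldbach's conjecture by sheer brute force for small even numbers - no insight required, and of course no finite amount of such checking constitutes a proof.

```python
# Brute-force check of Goldbach's conjecture for small even numbers.
# This is the "skin it from the tail" approach: no creativity, just
# exhaustive case-by-case verification, which is exactly what computers
# are good at. It proves nothing about all even numbers, of course.

def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_pair(n):
    """Return a pair of primes summing to the even number n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

for n in range(4, 101, 2):
    assert goldbach_pair(n) is not None, f"counterexample found: {n}"
print("Goldbach's conjecture holds for all even numbers up to 100")
```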

In 1976, Kenneth Appel and Wolfgang Haken from the University of Illinois stunned the world of mathematics by announcing that they had found a 'proof' of the famous 'Four Colour Problem' - the deceptively simple conjecture that four colours are always enough, and sometimes necessary, to colour any map such that no two adjacent regions of the map have the same colour. Quickly, however, the mathematical community was also dismayed to find out that the 'proof' was actually an application of raw computer power. Appel and Haken had used computers running over a period of several months to pool the astronomical number of possible cases into a 'few' thousand manageable ones, and had then instructed the machine to verify the conjecture for every one of these cases. While the problem finally got solved, other mathematicians themselves took several months to check the computer-based argument that Appel and Haken had come up with. At the end, even though everyone was convinced, nobody was very happy. Mathematicians have always looked for beauty as well as solutions in proofs, and the greatest proofs in mathematics have always been the most beautiful, although the quality itself is hard to define, and its import is usually truly appreciated only by the lucky few. Proofs that are obtained by hook or crook may be significant, but they are frowned upon by the real purists. The great Paul Erdos (pronounced 'air-dish') - the most prolific mathematician in history, who literally had no home, wife, hobbies, or friends (in the conventional sense of the term) and spent all his life doing nothing but mathematics around the world - said that the proof looked correct, but it was not beautiful. On the other hand, this was probably the first famous unsolved problem in pure mathematics to be solved by computers.
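
Appel and Haken's real proof reduced the problem to nearly two thousand special configurations; the toy sketch below (entirely my own, with a made-up five-region 'map') only conveys the exhaustive, case-by-case spirit of such computer proofs, by four-colouring the adjacency graph of the map with plain backtracking search.

```python
# Toy illustration of exhaustive case checking: four-colour a small
# "map" given as an adjacency graph, by plain backtracking search.
# (My own sketch; Appel and Haken's 1976 proof reduced the problem to
# nearly two thousand configurations and checked each by machine.)

def four_colour(adjacency):
    """Assign one of 4 colours to each region so that neighbours differ.
    Returns a dict region -> colour index, or None if impossible."""
    regions = list(adjacency)
    colours = {}

    def backtrack(i):
        if i == len(regions):
            return True
        r = regions[i]
        for c in range(4):
            if all(colours.get(nb) != c for nb in adjacency[r]):
                colours[r] = c
                if backtrack(i + 1):
                    return True
                del colours[r]
        return False

    return colours if backtrack(0) else None

# A small map: a central region surrounded by a ring of four others.
adjacency = {
    "centre": {"n", "e", "s", "w"},
    "n": {"centre", "e", "w"},
    "e": {"centre", "n", "s"},
    "s": {"centre", "e", "w"},
    "w": {"centre", "s", "n"},
}
print(four_colour(adjacency))
```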

Mathematicians again drew swords a few years ago, when Thomas Hales (now at the University of Pittsburgh) announced a computer-assisted proof of another famous conjecture, the Kepler Conjecture, which has to do with stacking spheres in the most efficient (or 'densest') way; as every fruit vendor knows, it's a pyramid. But the number of different arrangements to rule out is vast, and it is far from trivial to prove the conjecture. Like Appel and Haken before him, Hales used a fantastic amount of computing power - incredibly advanced by the 1990s - and again examined an exhaustive number of cases to find the solution. This time, when the proof was submitted to a refereed journal, it took twelve referees almost four years to check the computer-generated result (the proof and its apparatus occupied some 3 gigabytes of space), and even then they could not be utterly sure that it actually was a 'proof' in the mathematical sense of the term. In the end, the editors decided to publish it, saying that it was '99% correct', but with a word of caution about the validity of proofs generated by computers. Hales himself has begun a project to verify the proof by using computers to formally check each and every step, but he thinks it will take at least twenty man-years to finish...
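
For reference, the density that the Kepler Conjecture asserts to be optimal - that of the fruit vendor's pyramid, known to crystallographers as face-centred cubic packing - is easy to work out; the short sketch below computes it from the geometry of the unit cell.

```python
# Density of the face-centred cubic (fruit-stand) packing of spheres.
# In an FCC unit cell of edge a, spheres touch along a face diagonal,
# so 4r = a*sqrt(2). The cell contains 4 spheres (8 corners * 1/8 +
# 6 faces * 1/2). Kepler's conjecture says no packing beats this.
import math

a = 1.0                          # cell edge (arbitrary units)
r = a * math.sqrt(2) / 4         # sphere radius from the touching condition
spheres_per_cell = 8 * (1 / 8) + 6 * (1 / 2)   # = 4
density = spheres_per_cell * (4 / 3) * math.pi * r**3 / a**3
print(density)                   # 0.7404..., i.e. pi / sqrt(18)
print(math.pi / math.sqrt(18))   # same number, in closed form
```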

Both the above proofs seem to be correct, but they are hardly the kind of elegant creations that mathematicians have so zealously praised and produced over the last several centuries. (Incidentally, the great G. H. Hardy supplied a succinct definition of a 'beautiful' mathematical proof: it should be unexpected, economical, and inevitable. The paradoxical-looking but telling combination of unexpectedness and inevitability is not just typical of Hardy's penetrating mathematical and linguistic abilities; it also points to the characteristically human, disorderly qualities that reveal themselves in proving theorems. Just as the vagaries of human nature abound in everyday affairs, the vagaries of the intellect leave telltale marks in cognitive creations, and those marks contribute to their novelty.)

The two examples above, and many similar ones, demonstrate at once both the power and the limitations of computers. On the one hand, computers, with their awesome data- and number-crunching capabilities, demonstrate a prowess that seems to go far beyond simple speed and efficiency. Results obtained with computers often seem startling, and give the eerie feeling that the computer has used its own intellect to tread into lands yet unknown. On the other hand, I believe that the very construction of computer paraphernalia - software, hardware, and logic units - means that the computer is in fact following a purely algorithmic procedure to come up with the answer. Most importantly, this procedure has been implemented by a human being, whose brain is believed, at least for now, to be far from algorithmic, although it may seem otherwise. Proponents of artificial intelligence may immediately argue that the human brain follows a rule-based procedure when it is thinking. But two things hold me back. Firstly, I strongly believe that it is a profoundly difficult matter to connect centuries of human creativity, science, art, and emotion with rule-based thinking procedures. Secondly, my limited human brain cannot deal with artificial intelligence in this post, and it is not the goal of this post anyway.

I can speak for myself. In 1998, John Pople and Walter Kohn - a mathematician and a physicist by training - received the Nobel Prize in Chemistry for developing modern computational chemistry: the application of the methods of quantum mechanics to chemical and biochemical problems, and their implementation in efficient algorithms and computer programs. The theoretical methods developed to tackle complex chemical systems come in two flavours: semi-empirical methods and ab initio methods. The first kind incorporates data from experiment to reduce the computational effort involved in a calculation. The second kind, which is more accurate, works from first principles, as the name suggests: it calculates all the integrals and parameters from scratch. Admittedly, ab initio methods need very computation-intensive algorithms that can take weeks to run.

In the last thirty years, the application of computer programs to chemistry has vastly grown in scope and size. Today, these programs are routinely applied to many problems, from drug design to the design of polymers, and from explosives to superconducting materials. Until a few years ago, computer calculations were seen at most as corroboration for experimental results in physics, chemistry, and engineering. Today, in all these fields, such calculations are actually trusted as tools for predicting experiments and properties not yet done or observed, not just as explanatory devices. The most important reason for this newfound trust has without a doubt been the almost unbelievable expansion in hardware and software technology, the development of faster algorithms, and the shrinkage of circuits to an unprecedented degree. That is also why the semi-empirical methods mentioned above have become somewhat redundant: one no longer needs to invent techniques purely to compensate for inadequate computing power. Today, one can routinely do a complex ab initio quantum chemical calculation on a desktop computer, and calculations which could only be done on supercomputers a decade ago are now entirely feasible on clusters that match the connectivity and speed of supercomputers. The rapid development of computer graphics software has also given a tremendous boost to the use of simulation programs, especially for non-specialists. Today, one does not need to be a physicist in order to use quantum mechanics to solve one's problem, largely because most of the calculations done by the computer are displayed in the form of attractive graphics that can easily be manipulated. Molecules rendered as balls and sticks in different colours make a pretty picture on the screen that can be understood by many non-specialists.
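
As a toy illustration of the semi-empirical idea - replace the hard integrals with a couple of empirical parameters and diagonalize a small matrix - here is a sketch of my own of a simple Hückel calculation for the pi system of 1,3-butadiene (real semi-empirical and ab initio programs are vastly more elaborate).

```python
# A toy semi-empirical calculation: Huckel theory for the pi system of
# 1,3-butadiene. All the hard quantum-mechanical integrals are replaced
# by two empirical parameters, alpha (on-site energy) and beta
# (neighbour coupling), leaving only a small matrix to diagonalize.
import numpy as np

alpha, beta = 0.0, -1.0          # energies in units of |beta|, alpha set to zero
n = 4                            # four carbon p-orbitals along the chain
H = np.zeros((n, n))
for i in range(n):
    H[i, i] = alpha
    if i + 1 < n:                # nearest-neighbour coupling only
        H[i, i + 1] = H[i + 1, i] = beta

energies, orbitals = np.linalg.eigh(H)
print(energies)                  # alpha +/- 1.618*beta and alpha +/- 0.618*beta
```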

And yet, from my own experience, I can say that all this is seductively deceptive, and can lead to disaster. The problem is simply that as we climb the tree and taste its fruits, we forget about the roots that nourish the tree and our existence. It is all too easy to believe the attractive picture painted on the display, but much harder to know the limitations of the inputs that produced it. The simple rule of computer applications, reiterated countless times but equally easy to forget, is GIGO: Garbage In, Garbage Out. If we put in nonsense, we get nonsense out - except that it looks especially pretty, dressed up in deceptively attractive clothes. This, in my opinion, is the most important quality that separates the specialist from the non-specialist, and the sane computer user from the computer minion: the ability to know the limitations of the system and to identify garbage. (In fact, the ability to know limitations basically separates a mature human being from an immature one.) No matter how attractive the computer screen may look, the result is only as good as the input. No matter to what accuracy the computer spits out the value of a needed parameter, it could be complete nonsense because the input variables were flawed.

In complex systems like chemical ones, the problem often is not even a flawed input as such, but that we are using the right input for the wrong system. Every simulation program is built on certain assumptions. The code writers go ahead and write the code anyway, because that is the best that can be done; but many times, they are unaware of the exact kind of applications the program is going to be used for. The only person who can be aware of those details is the chemist, the engineer, or the physicist who uses the program. And to know whether the program will give a reasonable answer, he needs to know something about the algorithm the program implements - but most importantly, he should know how to think in a commonsense way about his own field. All too often, while the computer is displaying a complex answer on the screen and we are busy trying to interpret it, it turns out that the answer is simply wrong. Many times, the reason it is wrong is also remarkably simple, and rests on a basic understanding of the properties of the system. I have faced this situation more than once: while I was busy trying to work out what a particular computer-generated result meant, my advisor came in and pointed out two things in a quick and painful way - firstly, that the result should be taken with a big pinch of salt, and secondly, that it was probably wrong in the first place, because the chemical system under study had characteristics that the program could not handle. Many times, all that was needed to see this was simple chemical logic, commonsense, and the knowledge gained from human experience and thinking - the kind of logic that existed even five hundred years ago, completely independent of computer technology.

The real punch line of this scenario is devastatingly simple and well known: computers are dumb. Even though it may look impressively like it, they do NOT know physics, chemistry, engineering, or biology. You cannot look at the computer screen and ask, "How come the computer is displaying this particular interaction? It's a simple law of chemistry that it should be different." The simple truth to realize is that the computer does not know any law of chemistry. It cannot recognize the object on the screen as an atom, a chemical bond, a mechanical beam, a gene or protein, or a tree or a park bench for that matter. For the computer, everything on the screen is simply binary code displayed a certain way. Computers don't know what goes on in the real world; they are unaware of the laws of physics and chemistry that make it tick (although they function by those same laws). The only entity who knows those laws is the human being using them. No matter how impressive the results of computers may seem, they can be complete nonsense, and one should never, ever trust them blindly. What is more remarkable - and again something that bears repeating - is that human intelligence is not dumb. The qualities of taking unexpected detours and unconventional approaches are unique to human thinking. As of now, even the best computer in the world cannot come up with non-intuitive approaches to problems based on intuitive thinking. The best and most revolutionary solutions are always based on intuition - physical, chemical, and sometimes plain commonsense intuition. Intuition is difficult to define and understand even for human beings, let alone computers. Intuition is what separates a grandmaster of chess from the talented novice. Intuition, seemingly unexpected but essentially a product of human cognition, has been responsible for the most important discoveries in science and technology. I believe that intuition is a sine qua non of human uniqueness. It is a grand mistake to believe that the computer 'knows' what it is doing, to believe that it has intuition.

In the end, this excess reliance on computer-generated results speaks volumes about many things - entire systems of instruction and attitudes that could make for a separate post. One of the main culprits is our severely stultified educational system, which grooms students primarily for a job. With that view in mind, a fantastic amount of information is crammed into their brains in three or four years. In the process, the students completely lose a basic appreciation of the scope and limitations of what they are doing. All they are taught is a push-button existence in which, apparently, the more information you have, the more of the world's problems you can unconditionally solve. What those computer programs mean, what their purview is, and most importantly, what they cannot do, is never stated. The very few independent thinkers among the students do pick these things up. But the majority of students who graduate from our engineering and science programs are taught to think exactly like computers - in a rule-based manner. No wonder very few of them become original thinkers and researchers, and most end up as computer technicians - perhaps glorified, high-earning ones, but technicians nonetheless. These workers are then extolled as role models to another generation of students by the same teachers, and the cycle continues.

But most importantly, this excessive reliance on computer-generated results is a telling indication of our servility towards technology. My grandmother can do mental arithmetic with a speed that I could never hope to match (but then it's me we are talking about, and I had better stay quiet on the subject of arithmetic...). People from our parents' generation can calculate better in their heads than I can on my calculator, and I need to use it all the time. Maybe that particular disability does not directly impede my ability to come up with original solutions to problems, but losing the ability to think independently as a human being certainly will. Other disabilities may not stunt our ability to make progress, but it is almost trivial to say that the inability to think independently will, no matter what the circumstances.

Currently, perhaps the problem is not too big, precisely because computers are still not so powerful that we can blindly trust them. Even now, every result obtained with computers has to be double-checked and examined, and modifications and corrections are frequently needed and implemented. But after a few decades, the spread of computer technology will have become so all-encompassing, and we will have become so impressed with computer simulations, that gradually we will stop thinking independently. Most of the time it won't be a problem, because the computer results will be reliable. Because of this trust, we will think even less independently, and eventually, through sheer lack of use, we will be unable to think in even the simplest of the unconventional, independent ways needed to come up with novel solutions to challenging problems. No matter how much computers advance, every complex system can break down in the most unpredictable ways. Unpredictability is an essential feature of progress, and what is needed to deal with unpredictable happenings is sane human commonsense and mutual dialogue. History has repeatedly shown that the only way unexpectedness can be confronted is with the power of human communication and common sensibility. With an overdependence on computers, both of these will become alien to us. We will be trapped in our own golden cage, unable to resolve problems within it, and unable to engage in efficient and serious dialogue with other human beings outside it - again through sheer lack of practice.

This will be our transition into a new era of computer-generated ignorance. It is unlikely that the computers themselves will help us come out of it, because we will not have designed them to resolve that problem. I don't know what will happen after that - whether society will spiral towards death and decadence, gloriously surrounded by its own magnificent contraptions that were meant to ensure perpetual progress and save it from extinction. It would be the perfect ironic metaphor: the chameleon starving in a field of dead flies, because its eyes can detect only moving objects.

As with many of the bleakest and most complex scenarios, this one seems to have a refreshingly simple solution, if only we listen. The most important knowledge we need is that of our own capabilities - capabilities that are unique to our human existence and the human mind, and that have withstood the test of time since we evolved. These capabilities are a remarkable mixture of emotion and intellect: commonsense, logic, empathy, and dialogue. More than ever, it is necessary to keep the channels of communication with other human beings flowing and alive. More than ever, it is necessary to acquire the ability to see simplicity in complexity, to separate the mundane that always works from the exalted that could always fail. All these qualities are independent of any technological age and scenario, which is why they have always worked until now. History has shown us time and again that problems, no matter of what kind, can only be resolved by human beings coming together and thinking about them with pedestrian logic and with each other's help.

More than ever - and this is a point that can never be emphasized enough - it has become important to teach our students how to become independent thinkers, and to instill in them the qualities of inquiry, skepticism, and wonder, because in the absence of wonder there is no further thinking. Teachers and educational institutions must reconcile themselves to imparting this knowledge to students, even at the expense of sheer volume of information. Instead of teaching with a view to making students ready for a certain future, it is far more important to teach them in such a way that they themselves can decide what future they want. It is also essential to instill in them the conviction to choose that future in the face of social pressure; only if a committed number of students choose an unconventional future can that social pressure be alleviated in the first place.
All this will not be possible without the elimination of stereotypes, without an end to the typecasting of society and citizens into 'respectable' and 'disrespectable' categories, and without abandoning the constant and inane emphasis on 'scope' that has been so brutally abused in our educational system.

Technology can drive scientific revolutions; indeed, it can drive social progress and build moral edifices. At no point in our history has this been more evident than in the era of computer technology. Computers have a literally unimaginable potential to aid basic scientific progress and our ways of thinking. And it is precisely because that potential is unimaginable that it holds portentous import for our future. It may turn out to be the biggest mistake humans have ever made: putting their faith in algorithmic processes. Computers are tools, and great ones. But they are tools, not the most important things around. The most important things around are human beings.

When I was a kid, I read a wonderful illustrated book called 'Our Friend the Atom', published by Walt Disney. In its last chapter, we had to ask the 'atomic genie' for three wishes. The first one was power. The second was happiness based on that power. But the third was wisdom, quite independent of the other two - and the most important, because without it, the first two could be lost in a heartbeat. That's what we need... wisdom.

2 Comments:

Anonymous said...

well here is a nice pdf on this topic for a bit of hands on session.
http://www.stewartcalculus.com/data/default/upfiles/LiesCalcAndCompTold.pdf

11:06 PM  
Wavefunction said...

thanks. i had seen it sometime before. it's pretty nice

10:19 AM  
