Saturday, April 30, 2011

One more thing we can learn from Linus Pauling

What makes a successful scientist? The question is hard to answer, not because there is no general consensus but because the precise contribution of specific factors in individual cases cannot always be teased out. Intelligence is certainly important, but it can manifest itself in myriad ways. Apart from this, having a good nose for important problems is key. Perhaps most important is the ability to persevere in the face of constant frustration and discouragement. And then there is luck, that haphazard driving force whose blessings are unpredictable but can be discerned by Louis Pasteur's famous "prepared minds".

But aside from these determinants, one factor stands out which may not always be obvious because of its negative connotation; and that is the good sense to recognize one's weaknesses and the willingness to give up and marshal one's resources into a more productive endeavor. Admitting one's weaknesses is understandably an unpleasant task; nobody wants to admit what they are not good at, especially if they have worked at it for years. That kind of attitude does not get you job offers or impress interviewers. Yet being able to admit what qualities you lack can send your life in a radically more successful direction. And lest we think that only mere mortals have to go through this painful process of periodic self-evaluation and subsequent betterment, we can rest assured. It was none other than Linus Pauling who went through this soul-searching. And we are all the wiser for his decision.

When Pauling graduated from Oregon Agricultural College (now Oregon State University) in 1922, he had already shown great promise. By that point he had received an excellent overall education in mathematics and physics and, compared to his peers in the United States, was mathematically quite outstanding. In 1926 he won a Guggenheim fellowship to study in Europe under the tutelage of Arnold Sommerfeld in Munich, with trips to the great centers of physics in Copenhagen, Gottingen and Zurich included as part of the package. There Pauling met the founders of quantum mechanics, almost all of whom were about his age, and realized that maybe his talents in physics and mathematics were not as great as he had thought. There is a story, probably apocryphal, that the famously acerbic Pauli dismissed one of his papers on quantum mechanics with two short words- "Not interesting".


At this point Pauling made one of the wisest decisions of his life; he decided to focus not on physics but on chemistry. He swallowed his frustration at being beaten by the best and brightest of his generation in physics and realized the great value of striking out into new territory. Why? Because his mathematical and analytical abilities, while of considerable value in physics, would be of wholly unique import in chemistry. At that point, and to some extent even today, gifted mathematicians and quantitative thinkers were quite common in physics but less so in chemistry and biology. That is precisely what makes them more valuable in the latter disciplines. In addition, Pauling had always had an empirical and experimental bent, demonstrated by his earlier research in crystallography. So chemistry it was, and the rest is history. Pauling ended up making contributions to chemistry whose significance easily paralleled that of the contributions made by Heisenberg, Pauli, Dirac and Schrodinger to physics.

There are a few key lessons to be drawn from Pauling's story. The first is to know when to let go, to know which path on the famed fork not to take. History would likely have been quite different if Pauling had decided to be stubborn and spent the rest of his career trying to outcompete his fellow theoretical physicists. But the bigger lesson is extremely valuable for scientists wanting to make discoveries. Take a skill-set which is valuable but not groundbreaking in one discipline, and apply it to another discipline where it will lead to novel insights and real breakthroughs. Or to put it another way, move from a crowded field where you may share your particular talent with dozens of others to one which is sparser and where your talent will be rarer, more productive and better appreciated. A related lesson is to capitalize on pairs of skills, each of which by itself may not be unique but whose combination turns out to be explosive in a particular field. For instance, Pauling combined his deep grounding in physics with an encyclopedic memory and a remarkably wide knowledge of chemistry's empirical facts. There were a few chemists who could marshal one or the other talent, but almost nobody could serve up Pauling's powerful one-two punch. One can find similar analogies in combinations of diverse skills like computer science and molecular biology, or electrical engineering and neuroscience.

The history of science abounds with success stories stemming from this kind of recipe. Physicists venturing into biology constitute the best example. Francis Crick was a good physicist, but he probably would not have become a great one had he stayed in physics. Instead Crick had the wisdom to realize the value of applying his physicist's mind to problems in biology, and he became one of the greatest biologists of the century. Walter Gilbert trained under the theoretical physics virtuoso Julian Schwinger and would have been a first-rate physicist, but applying his talents to biology enabled him to become one of the founders of molecular biology. There are also more exotic examples. The quantum physicist Tjalling Koopmans, who fathered a well-known theorem in quantum chemistry, did so well in econometrics that he won a Nobel Prize. In fact, just like biology, economics has been thoroughly enriched by thinkers who would have been good mathematicians or physicists but who became great economists (although the application of strict mathematical modeling in economics can lead to a world of pain). There are more local and specialized examples too. A professor of mine who is world-renowned in the physical organic chemistry of surfactants and lipids told me that he had considered working in protein chemistry but realized that the field was too crowded; lipids, on the other hand, were under-explored and could benefit from exactly the kind of talents he had.

This is precisely the reason why biology is such a fertile playing field for outsiders of all stripes, from physicists and computer scientists to engineers. The kind of complex systems that biology deals with can only be unraveled through the variety of talents which people from diverse disciplines bring to the table. On one hand you need reductionist, quantitative scientists to set biology on a rigorous theoretical basis, but you also need 'higher-level' thinkers who can tie together threads from disparate empirical phenomena. That's why both mathematicians and doctors continue to make valuable contributions to the field. The same can be said of chemistry. Quantum chemists like Pauling did much to root chemistry in physics, yet the sheer complexity of chemistry (after all, the Schrodinger equation cannot be solved exactly for any atom bigger than hydrogen) demands more intuitive thinkers who can devise approximations and include empirical parameters to improve chemical prediction. Similarly, organic chemists like Stuart Schreiber and Peter Schultz were excellent synthetic chemists, but it was in the application of synthetic chemistry to biology that they found unexplored terrain and great riches.

The lesson for young scientists seems clear. The most explosive discoveries can result from applying talents suited to one field to a wholly different field. And perhaps this is not surprising. Nature is not hostage to the boundaries of disciplinary convenience devised by fallible human beings and does not divide itself into rigid compartments titled "Physics", "Biology", "Approximation" or "Analytical Solutions". Nature encompasses phenomena whose analysis spans a continuum. It is hardly surprising, then, that she yields her secrets best to those who are more than willing to use each and every tool of analysis to criss-cross her myriad domains.


Tuesday, April 19, 2011

Dirac, Bernstein, Weinberg and the limits of reductionism

Jeremy Bernstein is a physicist and science writer who has worked with some of the leading physicists of the twentieth century and has penned highly engaging volumes about science, scientists and society which I have enjoyed reading. I was thus disappointed to read his review of a new book on quantum theory by Jim Baggott in the Wall Street Journal which opens thus:

In 1929, theoretical physicist Paul Dirac announced: "The general theory of quantum mechanics is now complete. . . . The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known." The discipline at that point was four years old. Dirac himself was just 27. But eight decades later, we see that his optimistic evaluation was too modest. In addition to a "large part of physics and the whole of chemistry," the theory now is extended to a significant part of biology, essentially all of electronics and nuclear physics, and a large part of astrophysics and cosmology.

Really? A significant part of biology and the whole of chemistry? Both chemists and ecologists may be interested to know this. Let's take stock of this viewpoint since it highlights a quote by Dirac which has been marshaled all too often in support of reductionism. It's time to put the quote in context. Paul Dirac was one of the greatest scientists of the twentieth century and perhaps of all time, but he was no chemist or biologist. He made that statement about quantum mechanics in 1929 when quantum mechanics was at the height of its powers. The complete theory had just been developed by Heisenberg, Born, Schrodinger, Dirac and others and it had turned into physics's crowning achievement. At the same time scientists like Pauling and Slater were applying the theory to chemistry. Suddenly the world seemed to be at physicists' feet and there seemed to be no limit to what physics could achieve.

But this was not the case. The optimism about reductionism endured for the next couple of decades, during which physicists made monumental contributions to chemistry and molecular biology. And yet the waning years of the millennium indicated that the reach of reductionism was distinctly limited; in understanding complexity, we were just getting warmed up. As we made forays into expansive fields of chemistry and biology like self-assembly, chemical biology, population genetics, ecology, systems biology and neuroscience, it became clear that the essence of complex systems was emergence. Emergent properties seemed to demand understanding at their own levels and could not be reduced to interactions between particles and fields. At the level of every science there emerged foundational laws, not reducible to deeper principles, that served as the bedrock for that particular science. Today, as we encounter new horizons in the study of signaling networks, brain plasticity and chaotic ecological systems, it's clear that we will have to find and formulate fundamental laws specific to every system and science. No, if anything, Dirac's statement was not too modest but too ambitious; as spectacular as its predictions have been, "a significant part of biology" and chemistry cannot be explained on the basis of quantum theory.

In fact, although I am not an expert in cosmology, the extension of quantum predictions even to cosmology seems surprising to me. Isn't gravity the other dominant force that needs to be taken into account when formulating cosmological explanations? And isn't the welding of quantum theory and gravity the great unsolved problem of physics? To me the inclusion of large parts of cosmology and astrophysics under the quantum fold seems premature.

Interestingly, this abiding interest in reductionist statements reminds me of a minor debate about reductionism between Freeman Dyson and Steven Weinberg that took place in the 90s. Dyson, who has long been critical of reductionism, penned an expressive piece arguing against reductionist philosophy in the New York Review of Books. Weinberg, an arch-reductionist, responded with his own spirited rebuttal, which is discussed in his engaging collection of essays "Facing Up: Science and its Cultural Adversaries".

Weinberg distinguished two critiques of reductionism. One was the critique of reductionism as a working principle. The other was a more fundamental, philosophical critique of reductionism as being unable to account for higher-order phenomena even in principle. Weinberg was thus making a distinction between reductionism in practice and reductionism in principle. According to him, scientists like Dyson who criticized reductionism really had a problem with the former manifestation, reductionism as a working principle that did not actually allow them to solve problems in chemistry, biology, economics or psychology. Reductionism in principle, in contrast, was alive and well, and was merely the unwitting victim of the anti-reductionists' axe. In essence Weinberg was saying that, sure, even if quarks cannot directly help you solve the mysteries of chromosomes, they still surely account for chromosomal properties in principle.

But to me such a distinction is meaningless beyond a point. The working scientist in his or her everyday scientific life really only cares about reductionism as a working principle, not as final causation. The fact that quarks can account for chromosomes in principle is not very consequential; a biologist couldn't care less if there were goblins manipulating the fundamental constituents of biological systems. In addition, a lot of biology and chemistry progresses through the construction of models that are not even required to reflect the underlying fundamental laws. At the very least, extolling reductionism as ultimately able to account for all kinds of phenomena is a trivial statement; it's like saying that everything is made out of atoms. So what? That hardly helps us cure cancer.

Ultimately, I suspect the argument may be more about semantics, about the meanings of the words "explain" and "account for". But the last word actually belongs to Paul Dirac. In his quote, Bernstein left out something crucial that Dirac said. Yes, Dirac did seem to claim that quantum mechanics could explain "the whole of chemistry", but he also said later that

"...The difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble. It therefore becomes desirable that approximate practical methods of applying quantum mechanics should be developed, which can lead to an explanation of the main features of complex atomic systems without too much computation."

It's those words "approximation", "complex" and "computation" that encompass the essence of chemistry, biology and all the other sciences which Dirac did not mention.

He may have been right after all.


Monday, April 18, 2011

A singular lament

When I was in high school I used to play keyboards in a band. While my own sweet PSR-series Yamaha gave me much pleasure, I often used to salivate over some of the high-end models I could not afford. Among these, keyboards made by the Kurzweil company seemed especially sophisticated and insanely expensive, and the most I could do was occasionally try them out when I attended concerts arranged by friends who were professional musicians.

I had absolutely no idea then that the founder of the keyboard company was really known for things that your average keyboard designer could not possibly dream of. Ray Kurzweil- child prodigy, engineering genius, inventor of several socially significant technologies like the flatbed scanner and a machine that reads out to the blind, multimillionaire, bestselling author, winner of the National Medal of Technology, founder of myriad start-ups- is best known today as one of the world's most high-profile soothsayers. In the pantheon of thinkers who regard technology as a panacea for all our troubles, Kurzweil is certainly at the forefront. In his 2005 book "The Singularity is Near" he laid out an astonishing version of the future in which mankind's intelligence will seamlessly fuse with machine intelligence in an unprecedented, warp-speed event called the "singularity". This would happen no later than 2029. Hearing this, it would be easy to dismiss Kurzweil out of hand as yet another loony new-age guru, until you find out that many quite accomplished and clear-thinking individuals, including Bill Gates, inventor Dean Kamen and the founders of Google, take him quite seriously.

So who exactly is this Raymond Kurzweil? Filmmaker Barry Ptolemy decided to find out and the result is "Transcendent Man", a film about Kurzweil which I watched yesterday with a mixture of fascination and disappointment. The film is playing in selected cities but the DVD and a digital download are already available on the movie's site. Ptolemy probes into Kurzweil's life and finds a brilliant, articulate, curious, sad, haunted man who sheds tears over his father's grave, collects cat figurines and undergoes monthly blood tests. Along the way, several individuals who either agree or disagree with Kurzweil are interviewed. Philip Glass's haunting, edgy score adds to the allure of this unique individual. Overall Ptolemy does a good job of bringing out Kurzweil's essence, although the Glass score could not help but occasionally remind me that Errol Morris would have done an even better job with the film.

A child prodigy who built a music-composing computer when he was 17, Kurzweil holds dozens of award-winning patents worth millions. But beneath the success flows a silent undercurrent of emotional upheaval. Kurzweil had such a close relationship with his father, and has such a profoundly negative view of death, that he has resolved to bring his father back from the dead by recreating him from memories and memorabilia. He is someone who pops about 200 pills a day in the hope of staying alive at least until the day when his intelligence and personality can be downloaded into a computer, so that he can discard this wretched, mortal body that we are all cursed with. And he sincerely believes that the day will arrive when our only identity will be online and that, using nanotechnology and AI, we will expand our intelligence to span the entire universe in a kind of grand cosmic denouement that will make the universe come alive and allow the human species to achieve immortality.

Yes, it is easy to dismiss Kurzweil as someone who has discovered an unusually liberating new controlled substance. And yet Kurzweil is not your garden-variety wild-eyed rapture-seeking bearded madman. In fact, not having seen much of Kurzweil before, I was struck by how reasonable and self-assured he appears. Absent are the strenuous gesticulations, defensive maneuvers, dismissive put-downs and jargon-flinging that are the mainstay of snake-oil salesmen like Deepak Chopra. Kurzweil seems genuinely familiar with much of today's cutting-edge research in artificial intelligence, nanotechnology, genomics and medical science, and he lays out his thoughts rather carefully. The problem is that the probability space of his prognostications is highly expansive and inhomogeneous. There are predictions that seem to be within the realm of possibility in a very general sense. There are those which are at least based on currently existing technology. And then there are those that are not just out there but also demonstrate a decided failure on Kurzweil's part to think things through. Unfortunately that last, often fatally flawed, category dominates Kurzweil's thinking to such an extent that, while fascinated, I ended up ultimately underwhelmed by both the man and the film.

First of all, let me lay down the areas where I do agree with Kurzweil. Unlike some others, I don't think he is a "sophisticated crackpot"; it seems to be more a case of blinkered vision that is based on some generally accepted principles. Technological innovation can indeed be exponential and unpredictable. As Kurzweil puts it, it took a very short while (and a very startled Garry Kasparov) before a computer was able to defeat a human expert at chess. The twentieth century was the epitome of amazingly rapid technological advancement, and most of today's innovations would seem like miracles to someone from 1900. The twenty-first century is very likely to witness further such miracles. Most importantly, I agree with Kurzweil that perhaps the defining technological event of this century will be the integration of the human body with electronics. This would likely start with simple but breakthrough implements that enable physically and mentally disabled people to access the world around them, but would then probably lead to astonishing inventions that allow us to remotely manipulate objects through embedded electronic components. My agreement with Kurzweil also extends to breakthrough medical diagnostics enabled by nanotechnology that allow us to diagnose and treat diseases like cancer at a very early stage. Nanoparticles are already being used for drug delivery, and there is every reason to believe that disorders will be treated in the near future by injecting cell-sized nano-'robots' that are about as intelligent in sensing and manipulating their chemical environment as you can imagine. Yes, I am on board with Kurzweil in sharing a sense of wonder at all these possibilities, and I suspect that's the main reason why so many reasonable people seem to hear him out.

So where's the glitch? As Neil Gershenfeld, who knows Kurzweil and directs the Center for Bits and Atoms at MIT, puts it, "What Ray does consistently is to take a whole bunch of steps that everybody agrees on and take principles for extrapolating that everybody agrees on and show they lead to things that nobody agrees on," because "they just seem crazy."

That is indeed the gist of what's wrong. To me, Kurzweil's thinking seems to suffer from two main…drawbacks (to put it mildly), even ignoring the fantastic nature of his predictions. First of all, he seems to regard historical precedent as virtually sacrosanct. As with many others, Kurzweil's starting point is Moore's Law, which he generalizes beyond microelectronics and transistors to technologies like genome sequencing and brain-mapping. I think pretty much everyone agrees that the time is not far at all when we can get our genomes sequenced for $100 apiece. The rate of progress in mapping the activities of single neurons is also very impressive and likely to accelerate. Technology has indeed advanced exponentially. But that does not mean that there are no limits, or that every successive stage is as facile as the previous one. Just because we have gotten through eight exponential cycles of technological expansion in thirty years does not automatically mean that the next eight cycles are going to be equally smooth. They may possibly be, but it may well be that the next four cycles are a breeze and then we get really stuck at the fifth stage. Or it may be virtually impossible to overcome the obstacles that we encounter in the third stage. A computer playing chess is very different from a computer simulating a human brain. Given the complexities of the systems we are dealing with, it is virtually impossible to predict the exact course that progress might take, no matter how rosy a picture of limitless technological adaptation the past sets up before us. Especially when it comes to our view of future technology, Kurzweil should be the first one to tell us that the past is far from a perfect guide to the future (or, as Niels Bohr put it far more succinctly, "Prediction is difficult...especially about the future").

This brings me to the second and most important problem with Kurzweil's predictions, which is that for all his acumen, the man seems to be almost completely unconcerned with details. You know, the things that actually matter in developing any kind of science and engineering. This leads him to virtually ignore all the ways in which things can go wrong. Consider this: one of the central events in Kurzweil's journey to the singularity will occur when we are able to reverse-engineer the human brain. Chew on that a bit. Reverse-engineering the brain entails mapping every connection, every axon, synapse and dendrite inside our remarkable 3-pound 'thin-bone vault'. And why exactly is Kurzweil so optimistic about this astonishing development? Why, because not only are we making unprecedented progress in mapping neuron activity, but the essence of neural reverse-engineering will supposedly be based almost entirely on capturing the genome sequence that codes for the brain. There is so much wrong with this viewpoint that I will leave it to others (for instance see Derek Lowe, PZ Myers, Luysii) to demolish the argument and emphasize the complexity of the brain. I have no doubt that deciphering the genomic basis of neuronal connectivity will be a landmark discovery, but for all his engineering genius, Kurzweil seems to be woefully ignorant of the sheer complexity of biology. With this viewpoint he also affirms his membership in the group of starry-eyed optimists who are completely enamored with the "omics" revolutions, optimists who seem to equate data with meaning.

The genome is the raw material, the starting point for any kind of biological organization. If the genomics revolution has taught us anything, it's how impoverished our knowledge of biology remains even after sequencing the genome. Most importantly, we have light years to go before we can understand the complex signaling networks that functional proteins form with each other and with genes, the subtle and fine-grained interdependencies of the components and networks, and the non-linear and startlingly indirect effects that perturbing these networks can have on physiological processes. Add to these layers of complication the control that epigenetic modifications exercise, and you have a Dantean hell of biological complexity that goes far beyond anything the genome sequence can tell us.

And this is where Kurzweil ultimately disappoints. Anyone (and certainly an engineer) who studies or engineers complex systems knows how much the details matter. Every bench scientist or code-writer knows how the most unexpected and annoying details can thwart the design of even simple experiments. And let's not even get started on the details of the technical, existential and ethical problems that true AI would engender. It would be one thing if Kurzweil discussed these problems and gave reasons why he doesn't think they matter. But it's quite another when he steers virtually clear of details and pitfalls and does not even allude to them. In the absence of recipes for identifying details and solving problems, prognostications are castles in the air, ephemeral beasts whose existence is at the mercy of hard reality. In fact, the same self-assured demeanor that impressed me earlier later started giving the impression of a man who has cocooned himself into his own little world of beliefs so fully that his world is impervious to doubt. Kurzweil's faith is unwavering; criticisms just don't seem to count.

Among his predictions are the end of aging, the wholesale transfer of human intelligence into machines and the unbounded expansion of human intelligence into the entire universe. Many have relegated such dreams to the long-standing bin of human hubris. I myself am not too bothered about the hubris; if humans really wanted to give up their conceit and stop asserting their dominance over nature, they have long since lost the chance. Nor do I have a problem with technological optimism, and I take a rather dim view of reflexive criticism of those who look to technology as a cure for our woes; that's precisely how we as a species have been developing and using technology since the discovery of fire, and as long as we do it responsibly and realistically, it's a little hypocritical to scream bloody murder now when someone proposes grand technological solutions to humanity's most pressing problems.

No, what disturbs me most about Ray Kurzweil is that, quite apart from his blithe indifference to details and problems, it appears that in his quest to make mankind immortal, Kurzweil is somehow falling prey to the very tendency he describes. He seems to evince a genuine distaste for death (he wouldn't be the first), and much of his feeling seems to be motivated by the profound sense of loss he faced after his father's death. He says that we lure ourselves into a false feeling of satisfaction by constructing all kinds of myths and comforting stories around death. Perhaps, by postulating mankind's and his own immortality by 2029, Kurzweil is doing the same thing?


Wednesday, April 13, 2011

Hedgehogs and foxes in chemistry

Isaiah Berlin's parable about hedgehogs and foxes has long served as a thought-provoking lens for looking at science and scientists. Berlin quoted the Greek poet Archilochus, who said that "the fox knows many things, but the hedgehog knows one big thing". Scientists who spend most or all of their careers drilling deep into one field, problem or idea are hedgehogs. Scientists who instead spend their careers sniffing out and solving interesting problems from several areas are foxes. How does the dichotomy apply to chemists?

At the outset, one thing seems clear: chemistry is much more of a fox's game than a hedgehog's. This is in contrast to theoretical physics or mathematics, which have sported many spectacular hedgehogs. It's not that deep-thinking hedgehogs are not valuable in chemistry. It's just that diversity in chemistry is too important to be left to hedgehogs alone. In chemistry more than in physics or math, differences and details matter. Unlike mathematics, where a hedgehog like Andrew Wiles could spend years of his life wrestling with Fermat's Last Theorem, chemistry affords few opportunities for solving single, narrowly defined problems through one approach, technique or idea. Chemists intrinsically revel in exploring a diverse and sometimes treacherous hodgepodge of rigorous mathematical analysis, empirical fact-stitching, back-of-the-envelope calculations and heuristic modeling. These are activities ideally suited to a fox's temperament. One can say something similar about biologists.

The dominance of foxes in chemistry is demonstrated throughout its long history. Robert Boyle, the first modern chemist, widely credited with separating chemistry from alchemy, is of course universally known for Boyle's Law but also made contributions to the understanding of combustion, respiration, color and electricity. The father of modern chemistry, Antoine Lavoisier, discovered the all-pervasive law of conservation of mass, preceded Mendeleev in putting together a tentative classification of the elements and pioneered chemical book-keeping (stoichiometry). Similarly, the great chemists of the nineteenth century- Davy, Wöhler, Liebig, Kekule, Mendeleev- were all foxes who, while known for one or two important discoveries, worked in diverse facets of their chosen discipline. Chemical foxes also proliferated in the twentieth century, with Lewis, Langmuir, Curie, Fischer and Sanger being typical examples. Linus Pauling was the ultimate fox, but more on that below.

This does not mean that chemistry has no use for hedgehogs. Far from it. If there's one field of applied chemistry which has reaped riches from hedgehogs' talents, it's crystallography, and especially protein crystallography. Crystallography also belongs to physics and biology, but it has enough of a crucial chemical component for crystallographers to call themselves chemists. Among crystallographers, Max Perutz was a hedgehog par excellence. Perutz spent his entire career shining his intellect like a laser beam on the structure of hemoglobin. It was largely his efforts that turned hemoglobin into one of the best-studied proteins we know, and it was because of his extensive studies on it that we gained insights into some of the most general and important concepts in protein science, including cooperative effects and allostery. The chemists who won the Nobel Prize two years ago for their solution of the ribosome structure were also supremely focused hedgehogs. Special mention must be made among this trio of Ada Yonath, the "mother" hedgehog who made the ribosome her life's mission and stuck with it longer than anyone else. There are also hedgehogs in some other subfields of chemistry. For instance, Rudolph Marcus, who spent his lifetime developing a comprehensive theory of electron transfer processes, comes close to being a hedgehog. Similarly, Peter Mitchell is known for one big thing- the development of the fundamental theory of chemiosmosis, which is of paramount importance in understanding biological energy transfer.

In every science there are also a few unique individuals who seem to be able to magically morph into both hedgehogs and foxes. The greatest chemist of the century belonged to this class. During his life, Pauling was known especially for the astounding diversity of his contributions and this puts him squarely into the fox camp. But remarkably, the hedgehogs could also claim him as one of their own since the depth of his contributions easily rivals the breadth of his interests. If Pauling had made no other contribution except his theories of chemical bonding, he would have still been hailed as one of the century's great chemical hedgehogs. That Pauling managed to be a fox and still made hedgehog-like contributions to at least three key fields (quantum chemistry, protein structure and molecular medicine) attests to the stature of his accomplishments. Among twentieth century scientists, Enrico Fermi is the only individual in my opinion who commanded both depth and breadth of this magnitude.

It is much harder to locate hedgehogs among organic chemists. The greatest of organic chemists, R B Woodward, was undoubtedly a fox, albeit one of the highest caliber. Other leading figures in the field like Corey, Stork, Djerassi, Westheimer, Breslow and Danishefsky have also been first-rate foxes. Interestingly, there are hedgehogs among synthetic chemists, but they are not as well-known. One that comes to mind is the chemist John Sheehan, who spent fifteen years of his life trying to synthesize penicillin. Even the great Woodward had stayed away from this molecule's perilous, highly strained beta-lactam ring. Sheehan recounted his single-minded obsession in a highly readable book with a fitting title- "The Enchanted Ring". Another organic hedgehog was H C Brown, who devoted his career to perfecting the chemistry of boron. Yet another example is George Olah, who has had a fifty-year love affair with the chemistry of carbocations. Also, as Derek Lowe hints, organometallic chemistry may yet be a field full of hedgehog riches. The chemists who developed palladium-catalyzed reactions and olefin metathesis were very hedgehog-like.

Many leading contemporary chemists, on the other hand, are exceptionally gifted foxes. Harry Gray, Stephen Lippard, Stuart Schreiber, George Whitesides, David Baker, Jean-Marie Lehn, Ad Bax, Alan Fersht, Jacqueline Barton, Martin Karplus, Roald Hoffmann, Paul Schleyer, Christopher Dobson, C N R Rao, Donna Blackmond, Chad Mirkin, Eiichi Nakamura, Fraser Stoddart, Ken Houk and Dieter Seebach are but a few examples of individuals who have made first-rate contributions to diverse areas of chemistry. In fact, there can be no better tribute to their identity as foxes than the fact that many of them could also easily be classified as physicists or biologists.

We mentioned the quintessential nature of chemistry as a field of dreams more attractive to foxes than to hedgehogs. Why is this so? In chemistry, unlike physics, overarching general principles are not as important as the specific instances and diverse manifestations of those principles. Key unifying principles of course exist and are taught to every budding college chemist, but they are often not as deep as the general laws of physics or the theorems of mathematics. For instance, the theory of acids and bases or that of hybridization is undoubtedly unifying, but it is more a set of rules derived through a mix of theoretical analysis and empirical fact. Few would equate acid-base theory with Maxwell's equations of electromagnetism, the laws of thermodynamics or the theory of Lie groups in terms of depth, fundamental importance and universal applicability. In addition, unifying concepts in chemistry (free energy, crystal field theory, conformational analysis, oxidative phosphorylation, solubility laws) are usually fathered by several individuals and not just one. The ideal of the lone thinker shunning society and heroically wresting nature's secrets from her grasp through single-minded pursuit is alien to chemistry's nature and practice. Finally, many key chemical contributions consist of methods or instrumental advances (NMR, crystallography, gene sequencing, chromatography, PCR) that are necessarily the work of many people.

Does the future of chemistry belong to hedgehogs or foxes? I see no reason for the trends of the past five hundred years to change. Chemistry will essentially remain a game for foxes. This will be even more true in the future than it was in the past because the hottest fields in chemistry, like energy, nanotechnology, chemical genetics and drug discovery, are especially fox-friendly. However, occasional hedgehogs of the kind described above will also remain an integral part of its development. Foxes will be needed to explore the uncharted territory of chemical discovery; hedgehogs will be needed to probe its corners and reveal hidden jewels. The jewels will further reflect light that will illuminate additional playgrounds for the foxes to frolic in. Together the two creatures will make a difference.


Tuesday, April 05, 2011

How (not) to get tenure

Over at the blog Cosmic Variance, there are two (1, 2) excellent and informative posts on how to get tenure and how to kill your chances of getting it. While the tips and caveats apply mainly to physics positions (and primarily for large, research-focused universities), most of the points will be valid for chemistry positions too. Two especially stood out for me:

1. You may think diversity in research counts, but it does not, at least not for tenure: This is the age of interdisciplinary research, where an ability to transcend boundaries is key to solving important scientific problems. Thus you may think that a track record of having worked on diverse projects will help. Apparently not for tenure; tenure committees still seem to be more impressed by hedgehogs than by foxes. They don't want a "dabbler"; they want someone who has proven his or her expertise in a single, narrowly defined area of research. Personally I find this approach disappointing, not only because thinkers working on diverse projects can enrich a department but also because scientific progress itself needs all kinds of tinkerers, from those obsessed with a single problem for fifteen years to those with their fingers in several scientific pies. Sure, being able to probe to the core of your chosen specialty is important and indeed indicative of sustained scholarship, but the capacity to think outside the box and apply your knowledge to a variety of problems is increasingly important. The way I see it, tenure committees seem to be stuck in the transition period between the age of specialization and that of diversity. In twenty years perhaps they will start to appreciate diversity more, but for now, the lesson seems to be that narrow specialization is much more important, at least for getting tenure. Once you get tenure, of course, you can break free of such constraints.

2. Interests outside actual research don't count, and paradoxically, interests related to research may harm your prospects: This point was even more revealing. Yes, you can have hobbies (thank you!), but the more unrelated a hobby is to your research, the more benign it will seem to the tenure committee. The good news is that cooking and horse-riding are fine. The bad news is that blogging and textbook writing are not. If you are blogging in an area related to your work, there is a much greater chance that the committee will think you are wasting time that could be more fruitfully spent on actual research. Similarly, textbook writing will be frowned upon, even if you write a best-selling textbook. Basically, any time spent away from research is, by definition, time that could have been spent on research, and that's how tenure committees think. This is not too different from the attitude that certain PhD advisors have toward their unfortunate graduate students, but that's how it is.

However, there are probably ways in which you can put a positive spin on your blogging and other activities so that the committee appreciates your efforts in these areas. Recently I attended ScienceOnline2011, and there was a session in which tenured professors who thought that their blogging actually helped their tenure process gave some valuable advice on pitching blogs in tenure applications. First, try to convince the committee that blogging is not just a pastime but a valuable way to communicate science. Thus, you could possibly make a good case that skills gained from such communication could and do help you in the classroom. Second, try to convey the impact of your blogging on department visibility by citing references to your blog in the media and in scientific journals. Citations in international outlets could especially help. Ultimately, however, I don't think any of these strategies would work in the majority of cases since, again, none of these activities contribute to actual research as much as they do to teaching and outreach, aspects of science that are usually considered less important by tenure committees.

Are you depressed yet? Well, all this is probably not as unfair as it sounds. Think a little from the perspective of the tenure committee. They are going to run the risk of hiring someone who will hang around for thirty or more years and become a permanent department fixture. Thus they want to be absolutely sure that they hire someone who has demonstrated scholarship (and funding potential). The fact is that if you can prove your mastery of one specialty, the committee can be more confident of your potential in tackling other complex problems. They want someone who can do sustained work in a single area for half a dozen years and bring home the bacon.

At the same time, tenure committees need to awaken to the new reality in which an ability to appreciate and work in diverse disciplines is as important as the ability to delve deep into one specialty. As for blogs and textbook writing, while I find the attitude disappointing, it again makes sense. Departments hire you first and foremost for your research and publishing potential. They may treat your blogging and other related activities with mild interest at best, but one cannot blame them, at least in the first few years, for setting such activities aside when considering you for tenure.

It is also worth noting that the caveats listed above mostly apply to research-oriented universities. Blogging, textbook writing and diversity may all be appreciated more in institutions equally or more focused on teaching. But there it is: a picture that's disappointing but sensible in its own way. Say goodbye to your utopian childhood impression of science as a career in which you are free to pursue any line of activity and work in any area that interests you. At least until you get tenure, you will have to stick to a narrowly defined set of constraints and toe the line.

After that the world's your oyster. Almost.
