Monday, July 30, 2012

Gene duplication and molecular promiscuity

Head over to the Scientific American blog for a post that discusses a recent article suggesting a link between gene duplication and the promiscuity of enzymes involved in secondary metabolism in plants. Since gene duplication frees up one copy of the gene to "experiment", that copy can potentially accumulate mutations that confer the ability to bind and process more than one substrate. We should partly thank gene duplication for giving us many of the secondary metabolites that are used as drugs (both recreational and non-recreational), flavors and food products.


Friday, July 27, 2012

Blogging at Scientific American

I now have a blog on the Scientific American blog network: "The Curious Wavefunction".


Thursday, May 17, 2012

Physics's PR problem: Moving beyond string theory and multiple universes

I was reminded of this subject by a timely post by MJ at "Interfacial Digressions". As everyone knows, chemistry has a PR problem. Fear of "chemicals" runs rampant without context or qualification. In addition, unlike physics and biology, chemistry is not considered to be the science that answers profound questions about the origins of life or the future of the universe. Of course there's evidence to the contrary for each of these notions - modern life would be impossible without chemistry, and the origin of life can claim to be the ultimate grand chemical question - but it's been hard to convince the public of this. The acuteness of chemistry's PR problem is illustrated by the fact that popular literature on chemistry does not sell half as well as that on physics; just count the number of chemistry versus physics books in your Barnes & Noble the next time you visit (if you are still obsessed with paper, that is).

But I think physics also has a PR problem, and it's of a different kind than chemistry's. This statement should elicit gasps of indignation, since the Greenes, Hawkings and Kakus seem to be doing quite well; they are household names, and every one of their books instantly gathers hundreds of positive reviews on Amazon. But there's still a problem, and it's not one that is acknowledged by many of these leading popular expositors, at least partly because doing so would rob them of their next big New York Times bestseller and the accompanying profits. Look at the physics section in your B&N next time and you will understand what I am talking about.

The problem is that the popular physics the public enjoys constitutes perhaps 10% of the research that physicists worldwide are engaged in. Again, count the number of physics books in your local bookstore, and you will notice that about 90% of them cover quantum mechanics, cosmology, particle physics and "theories of everything". You would be hard-pressed to find volumes on condensed matter physics, biophysics, the physics of "soft" matter like liquids, and non-linear dynamics. And yes, these are bona fide fields of physics that have engaged physics's best minds for decades and which are as exciting as any other field of science. Yet if you ask physics-friendly laymen what cutting-edge physics is about, the answers will typically span the Big Bang, the Higgs boson, black holes, dark matter, string theory and even time travel. There will be scant mention, if any, of, say, spectroscopy, optics, polymers, magnetic resonance, lasers or even superconductivity.

Whether physicists admit it or not, this is a PR problem. Laymen are being exposed to an undoubtedly exciting but tiny fraction of the universe of physics research. For eager readers of the popular physics literature, the most exciting advances in physics are encapsulated between the Higgs boson and the Big Bang, and that's all they think exists in heaven and earth. In my opinion this does a great disservice to the majority of physicists around the world who work on other, equally exciting topics. Just consider one major academic physics department, say Stanford's, and you get an idea of the sheer variety of projects physicists work on. Physics books may still sell, but the physics they describe is something most of the world's physicists don't do. It's hard to see how this could be called anything but a PR problem.

So who is responsible for this situation? Well, in one sense, nobody. The fact is that the public has always shown a taste for "big picture" topics like cosmology and quantum mechanics, and physicists have been indulging this taste for quite a while now. And who can blame the public for being attracted to relativity with its time paradoxes, or to quantum mechanics with its cats and famous personal rivalries? Even in the 1920s, the popular physics literature sported the likes of Arthur Eddington and James Jeans, who pitched nuclear physics and relativity to packed audiences. The mantle was passed on in the postwar era to scientists like George Gamow and Isaac Asimov, who spread the gospel with gusto. And the trend continues to the present day, with even a mind-numbingly well-trodden topic like the history of quantum theory finding eager expositors like Louisa Gilder, Manjit Kumar and David Lindley. All their books are highly engaging, but they do no favors to other, equally interesting branches of physics.

The popular physics literature has also started turning quasi-religious, and writers like Brian Greene and Michio Kaku are unfortunately responsible for this development. Greene in particular is a remarkably charismatic and clear writer and lecturer who has achieved almost rock-star status. Sadly, his popular expositions are coming to seem more like rock concerts than serious physics lectures. Part of the problem is his almost evangelical espousal of highly speculative, experimentally unverified (and perhaps even unverifiable) but deliciously tantalizing topics like string theory and multiple universes. Greene's books seem to indicate that the more speculative the topic, the more eagerly it will be assimilated by lay audiences. This cannot but be a disturbing trend, especially for those thousands of physicists whose research may sound pedestrian but is more solidly grounded in experiment and just as interesting as perpetually splitting universes. One suspects that even the famous popular physics writers of yore like George Gamow would have been hesitant to pitch highly speculative topics merely for their "Wow" factor. If the biggest selling point of a popular physics book is its dependence on experimentally unverified ideas that sound more like science fiction, popular physics is in trouble indeed.

In addition, whatever lacks the "Wow" factor seems to evoke the "Yawn" factor. By this I mean books describing the same old ideas which nonetheless keep finding new audiences. A good example is Lisa Randall's latest book. It's an extremely well-written and spirited volume, but it mostly treads the same tired ground of quantum mechanics, relativity and the Large Hadron Collider. The bottom line is that the popular physics literature seems to have reached a point of diminishing marginal returns. It has become very difficult to write anything on the subject that is neither well-trodden nor highly speculative.

There is another unintentional effect of this literature which is more serious. Today's popular physics gives people the impression that the only questions worth addressing in physics are those that deal with unified theories or the birth and death of the cosmos. Everything else is either not worth doing or is at best done by second-rate minds or graduate students (take your pick). Not only does this paint a skewed picture of what's important and difficult in the field, it also inflates the importance and intellectual abilities of physicists working on fundamental problems at the expense of those working on more applied ones. This again does a great disservice to the many challenging problems in physics and the people addressing them. Building a room-temperature superconductor, understanding turbulence, designing new materials for capturing solar energy, keeping atoms stable at cold temperatures, kicking DNA around with lasers and, of course, beating nuclear fusion at its own thermodynamic game are all long-unsolved problems that promise to engage the finest minds in the field. Yet the myth persists that the greatest problem in physics is finding the theory describing "everything". This constant emphasis on "big" questions provides a biased view not just of physics but of all of science, most of which involves solving interesting but modest problems. As MJ says in the post, most physicists they know aren't really after 3 laws that describe 99% of the universe but would be content finding 99 laws that describe 3%.

So what's the solution? As with other problems, the first step would be to acknowledge that there is indeed a problem. Sadly this would mean somewhat blunting the public's starry-eyed impression of cutting-edge physics, which the leading expositors of physics would perhaps be unwilling to do. At least some physicists might be basking in the public's mistaken grand impression that cosmology and quantum theory are all that physicists work on. If I were a soft condensed matter physicist and I told someone at a cocktail party that I do physics, the images that response would evoke would most likely include long-haired professors, black holes, supernovae, nuclear weapons and time travel. I might be excused for being hesitant to dispel this illusion and admit that I actually work on understanding the exact shape of coffee stains.

Nonetheless, this harsh assessment of reality might be necessary to cut the public's umbilical cord to the Hawkings, Greenes and Randalls. But this would have to be done by someone other than Brian Greene. Let me make it clear that as speculative as I might find some of his proclamations, I don't blame Greene at all for doing what he does. You cannot fault him for not reminding the public about the wonders of graphene, since that's not his business. His business is string theory, that's what he is passionate about, and nobody can doubt that he is exceedingly good at practicing his trade. Personally I have enjoyed his books, and in an age when ignorance of science seems to reach new heights, Greene's books provide at least some solace. But other physicists will have to tread into territory that he does not venture into if they want to solve physics's PR problem.

Gratifyingly, some physicists have already started staking their claims in this territory, although until now their efforts have sounded more like tiptoeing than confident leaps. James Gleick proved in the late 1980s with his "Chaos" that one can indeed grab the public's attention and very successfully introduce it to an entirely new branch of science. In recent years this tradition has been carried on with varying degrees of success by working scientists, and they provide very promising examples of how the PR problem could be addressed. Let me offer a few suggestions. Robert Laughlin has talked about emergence and condensed matter in his "A Different Universe". David Deutsch has laid out some very deep thoughts in his two books, most recently in "The Beginning of Infinity". Philip Anderson expounds on a variety of interesting topics in his recent collection of essays. And while not entirely about physics, Stuart Kauffman's books have done a great job of dismantling the strong reductionist ethic endemic in physics and suggesting new directions for inquiry.

Sadly, most of these books, while exceedingly interesting, are not as engagingly written as those by Greene or Randall. But the modest success they have enjoyed suggests that the public does have a taste for other areas of physics, as long as they are described with verve, passion and clarity. Maybe someday someone will do the same for turbulence, DNA dynamics, non-Newtonian liquids and single-molecule spectroscopy. Then physics will finally be complete, at least in a popular sense.



Sunday, December 25, 2011

Steve Jobs's Christmas message for our friends in pharma

I am at the end of Walter Isaacson's excellent biography of Steve Jobs and it's worth a read even if you think you know a lot about the man. Love him or hate him, it's hard to deny that Jobs was one of those who disturbed our universe in the last few decades. You can accuse him of a lot of things, but not of being a lackluster innovator or product designer.

The last chapter titled "Legacy" has a distillation of Jobs's words about innovation, creativity and the key to productive, sustainable companies. In that chapter I found this:

"I have my own theory about why decline happens at companies like IBM or Microsoft. The company does a great job, innovates and becomes a monopoly or close to it in some field, and then the quality of product becomes less important. The company starts valuing the great salesmen, because they're the ones who can move the needle on revenues, not the product engineers and designers. So the salespeople end up running the company. John Akers at IBM was a smart, eloquent, fantastic salesperson but he didn't know anything about product. The same thing happened at Xerox. When the sales guys run the company, the product guys don't matter so much, and a lot of them just turn off."

Jobs could be speaking about the modern pharmaceutical industry, where the "product designers" are of course the scientists. Although many factors have been responsible for the decline of innovation in modern pharma, one variable that strongly correlates with it is the replacement of product designers at the helm by salespeople and lawyers, beginning roughly in the early 90s.

There's a profound lesson in there somewhere. Not that wishes come true, but it's Christmas, and while we don't have the freedom to innovate, hold a stable job and work on what really matters, we do have the freedom to wish. So with this generous dose of wishful thinking, I wish you all a Merry Christmas.


Tuesday, November 15, 2011

Autism studies among the Asian-American diaspora

Last week's issue of Nature had a special section on autism research. One look at the series of articles should convince anyone how complex the determination of causal factors for this disorder is. From a time when pseudoscientific environmental factors (such as "frigid" mothers) were supposed to play a major role, we have reached a stage where massive amounts of genetic data are uncovering tantalizing hints behind Autism Spectrum Disorders (the title itself pointing to the difficulty of diagnosis and description) without a clear indication of causes. Indeed, as pointed out in the Nature articles, some researchers think that the pendulum has now swung to the other side and environmental factors need to be taken into account again.

All the articles are worth reading, but for me the most interesting was a piece describing the research of psychologist Simon Baron-Cohen (brother of the colorful actor Sacha Baron-Cohen), who believes that there is a link between autism in children and the likelihood of their having technically-minded parents like engineers or scientists. Baron-Cohen's hypothesis has not been validated by rigorous studies, but it's extremely intriguing. He thinks that the correlation may have to do with the existence of a "systematizing" brain, one which is adept at deciphering and constructing working relationships in mechanical and logical systems, but which is simultaneously rather poor at grasping the irrational, ill-defined nature of human relationships. Baron-Cohen's hypothesis would be consistent with the lack of empathy and human understanding sometimes found among autistic individuals who also seem to have an aptitude for mathematics, science and engineering.

The moment I read the article, I immediately thought of the substantial Asian-American diaspora in the US, especially Indian and Chinese. I don't have precise statistics (although these would be easy to obtain), but I would think that the majority of Indians or Chinese who emigrated to the US in the last twenty years or so are engineers. If not engineers, then they would mostly be scientists or doctors, with businessmen, lawyers and others making up the rest. Chinese and Indian engineers and scientists have always been part of the immigrant population here, but the last twenty years have undoubtedly seen a dramatic increase in their numbers.

Now most of the Asians who migrated to the US in the last few years have children who are quite young. From what I read in the Nature article, it seems to me that this Asian community, especially concentrated in places employing large numbers of technically-minded professionals like Silicon Valley and New Jersey, might provide a very good population sample for testing Baron-Cohen's hypothesized link between autism in children and the probability of their having parents who are engineers or physical scientists. Have there been any such studies indicating a relatively higher proportion of ASDs among Asian-American children? I would think that the community's geographic localization and rather "signal-rich" composition would provide fertile ground for such a test. Surveys conducted with these families by email or in person might be a relatively easy way to test the idea. In fact, you might even gain some insight into the phenomenon by analyzing existing records detailing the ethnicity and geographic location of children diagnosed with autism in the last two decades in the US (however, this sample may be skewed, since awareness of autism among Asian parents has been relatively recent).
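To make the statistics concrete, here is a minimal sketch of what such a survey comparison could look like. The counts below are entirely hypothetical placeholders, and the test (a simple chi-square on a 2x2 contingency table) is just one reasonable choice among several:

# Hypothetical comparison of ASD prevalence by parental profession.
# All counts below are invented for illustration; real numbers would
# come from surveys or diagnostic records.
from scipy.stats import chi2_contingency

#           ASD   no ASD
counts = [[  90,   9910],   # children of engineers/physical scientists
          [  60,   9940]]   # children of non-technical parents

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
# A small p-value would be consistent with (though far from proof of)
# Baron-Cohen's hypothesis; confounders would still need to be ruled out.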

If the prevalence of autism among the children of recent Chinese and Indian immigrants turns out to have seen an upswing in the last few years (thereby contributing to the national average), it would not prove Baron-Cohen's hypothesis, but it would certainly be consistent with it. And it would provide an invitation to further inquiry. That's what good science is about.


Tuesday, November 08, 2011

Age is of course a fever chill? Why even older scientists can make important contributions

A ditty often attributed to Paul Dirac conveys the following warning about doing scientific work in your later years:

Age is of course a fever chill
That every physicist must fear
He is better dead than living still
When past his thirtieth year

Dirac was of course a member of the extraordinary generation of physicists who changed our understanding of the physical world through the development of quantum mechanics in the 1920s and 30s. These men were all in their 20s when they made their revolutionary discoveries, with one glaring exception - Erwin Schrodinger was, by the standards of theoretical physics, a ripe old thirty-eight when he stumbled across the famous equation bearing his name.

When we look at the careers of these individuals we could be forgiven for assuming that if you are past thirty you will probably not make an important contribution to science. The legend of the Young Turks has not quite played out in other fields of science, though. Now a paper in PNAS confirms what we suspected: since the turn of the twentieth century there has been a general increase in the age at which scientists make their important discoveries. What is more surprising is that this increase exists more across time than across fields. The study looks at Nobel Laureates, but since many people who make Nobel Prize-worthy discoveries never get the prize, the analysis applies to others too.

So what does the paper find? It finds that in the early years of the last century, young men and women made contributions to every field at a relatively young age, often in their twenties. This was most pronounced for theoretical physics. Einstein famously formulated special relativity when he was a 26-year-old patent clerk, and Heisenberg formulated quantum mechanics when he was 24. The same was true for many other physicists including Bohr, Pauli, Dirac and De Broglie, and the trend continued into the 70s, although it was less true for the chemists and biologists studied by the authors.

The reasons postulated in the paper are probably not too surprising, but they illustrate the changing nature of scientific research during the last one hundred years or so. It is easiest for a brilliant individual to make an early contribution to a theoretical field like theoretical physics or mathematics, since achievement in such fields depends more on raw, innate ability than on skills gained over time. In an experimental field it's much harder to make early contributions, since one needs time to assimilate the large body of existing experimental work and to learn the painstaking trade of the experimentalist, an endeavor where patience and perseverance count much more than innate intelligence. As the paper puts it, deductive knowledge lends itself more easily to innate analytical thinking skills visible at a young age than inductive knowledge based on a large body of existing work. This is true even for theoretical physics, where fundamental discoveries have become extremely hard and scarce, and where new ideas depend as much on integrating an extensive set of facts into your thinking process as on "Eureka!" moments. And this difference holds even more starkly for social sciences like economics and psychology, where you find very few young people making Nobel Prize-winning contributions. In these cases success depends as much on intellectual maturity gained from a thorough assimilation of data about extremely complex systems ("humans") as it does on precocity.

But if this were the whole story, we wouldn't find young people making contributions even to experimental chemistry and biology in the early twentieth century. The reason why this happened is also clear; there was a lot of low-hanging fruit to be picked. So little was known, for instance, about the molecular components of living organisms that almost every newly discovered vitamin, protein, alkaloid, carbohydrate or steroid could bag its discoverer a Nobel Prize. The mean age for achievement was not as early as in theoretical physics, but the contrast is still clear. Even in theoretical physics, the playing field was so ripe for new discoveries in the 1930s that, in Dirac's words, "even a second-rate physicist could make a first-rate discovery". The paper draws the unsurprising conclusion that there is much more opportunity for a young person to discover something new in a field where little is known.

This conclusion is starkly illustrated in the case of DNA. Watson and Crick are the "original" Young Turks. Watson was only 25 and Crick was in his early thirties when they cracked open the DNA structure, although one has to give Crick a pass since his career was interrupted by the war. What's important to note is that both Watson and Crick came swinging into the field with very little prior knowledge. For instance, they both knew very little chemistry. But this lack of knowledge did not really hold them back, and in fact it freed up their imagination, because they were working in a field where there were no experts, where even newcomers could use the right kind of knowledge (crystallography and model building in this case) to make important discoveries. Watson and Crick's story points to a tantalizing thought - that it may yet be possible to make fundamental contributions at a young age to fields in which virgin territory is still widely available. Neuroscience comes to mind right away.

Since this is a chemistry blog, let's look at the authors' conclusions as they apply to chemistry. Linus Pauling provides a very interesting example, since his career spans both categories. The "early Pauling" made his famous contributions to chemical bonding in his twenties, and this contribution was definitely more of the deductive kind, where you could indulge in much armchair analysis based on principles and approximations drawn from quantum theory. In contrast, contributions by the "late Pauling" are much more inductive. These would include his landmark discovery of the fundamental elements of protein structure (the alpha helix and the beta sheet) and the first description of a disease at a molecular level (sickle cell anemia). Pauling did both these things in his 40s, and both of them required him to build on an extensive body of knowledge about crystallography, chemical bonding and biochemistry. It would be hard to imagine even a Linus Pauling deducing protein structure the way he deduced orbital hybridization.

If we move to more inductive fields, the relatively advanced age of the participants is even more obvious. In fact in chemistry, in contrast to mathematics or physics, it's much harder to pinpoint a young revolutionary, precisely because chemistry more than physics is an experimental science based on the accumulation of facts. Thus even exceptional chemists are often singled out more for lifelong contributions than for lone flashes of inspiration. Even someone as brilliant as R. B. Woodward (who did make his mark at a young age) was really known for his career-wide contributions to organic synthesis rather than any early idea. It's also interesting that Woodward did make a very important contribution in his late 40s - to the elucidation of the Woodward-Hoffmann rules - and although Hoffmann provided a robust deductive component, inspiration for the rules came to Woodward through anomalies in his synthesis of Vitamin B12 and his vast knowledge of experimental data on pericyclic reactions. Woodward was definitely building up from a lot of inductive knowledge.

An additional factor that the authors don't discuss is the contribution of collaborations. From a general standpoint it has now become very difficult for scientists in any field to make lone significant contributions. In fact one can make a good case that even the widely cited lone contributions to theoretical physics in the 1920s involved constant collaboration and exchange of ideas (mostly through Niels Bohr's institute in Copenhagen). This was far from the case for most of scientific history, when you had people like Cavendish, Lavoisier, Maxwell, Faraday, Kekule and Planck working alone and producing spectacular results. But things have significantly changed, especially in experimental particle physics and genomics, where even the most outstanding thinkers can often work only as part of a team. In such cases it may even be meaningless to talk about the young-versus-old dichotomy, since no one individual makes the most important discovery.

Finally, one rather disturbing reason that could push the age of important discoveries even higher is left undiscussed. As the biologist Bob Weinberg lamented in an editorial a few years ago, the mean age at which new academic researchers receive their first important research grant has been advancing. This means that even brilliant scientists may be held back from making important discoveries simply because they lack the resources. While this trend has really become visible only in the last decade or so, it could be an unfortunate contributor to the rising age at which novel ideas are generated. One only hopes that this does not make things so bad that scientists are forced to consider contributing to their fields in their 70s.

Ultimately there's one thing that age brings that's hard to replace with raw brilliance, and that's the nebulous but invaluable entity called 'intuition'. As scientific problems become more and more complex and interdisciplinary, it is inevitable that intuition and experience will play more important roles in the divining of new scientific phenomena. And these are definitely a product of age, so there may be something to look forward to when you grow old after all.


Wednesday, October 05, 2011

The future of science: Will models usurp theories?

This year's Nobel Prize for physics was awarded to Saul Perlmutter, Brian Schmidt and Adam Riess for their discovery of an accelerating universe, a finding leading to the startling postulate that about 75% of our universe is made of a hitherto unknown entity called dark energy. All three were considered favorite candidates for a long time, so this is not surprising at all. The prize also underscores the continuing importance of cosmology, since it was awarded in 2006 to George Smoot and John Mather, again for confirming the Big Bang and the universe's expansion.

This is an important discovery which stands on the shoulders of august minds and an exciting history. It continues a grand narrative that starts from Henrietta Swan Leavitt (who established a standard reference for calculating astronomical distances), runs through Albert Einstein (whose despised cosmological constant was resurrected by these findings) and Edwin Hubble, continues through Georges Lemaitre and George Gamow (with their ideas about the Big Bang) and finally culminates in our current sophisticated understanding of the expanding universe. Anyone who wants to know more about the personalities and developments leading to today's event should read Richard Panek's excellent book "The 4 Percent Universe".

But what is equally interesting is the ignorance that the prizewinning discovery reveals. The prize was really awarded for the observation of an accelerating universe, not the explanation. Nobody really knows why the universe is accelerating. The current explanation for the acceleration consists of a set of different models, none of which has been definitively shown to explain the facts well enough. And this makes me wonder if such a proliferation of models without accompanying concrete theories is going to characterize science in the future.

The twentieth century saw theoretical advances in physics that agreed with experiment to an astonishing degree of accuracy. The culmination of achievement in modern physics was surely quantum electrodynamics (QED), which is supposed to be the most accurate physical theory we have. Since then we have had some successes in quantitatively correlating theory with experiment, most notably in the work on validating the Big Bang and the development of the standard model of particle physics. But dark energy - there's no theory for it that remotely approaches the rigor of QED when it comes to comparison with experiment.

Of course it's unfair to criticize dark energy since we are just getting started on tackling its mysteries. Maybe someday a comprehensive theory will be found, but given the complexity of what we are trying to achieve (essentially explaining the nature of all the matter and energy in the universe), it seems likely that we may always be stuck with models, not actual theories. And this may be the case not just with cosmology but with other sciences. The fact is that the kinds of phenomena that science has been dealing with recently are multifactorial, complex and emergent. The kind of mechanical, reductionist approaches that worked so well for atomic physics and molecular biology may turn out to be too impoverished for taking apart these phenomena. Take biology for instance. Do you think we could have a complete "theory" for the human brain that can quantitatively calculate all brain states leading to consciousness and our reaction to the external world? How about trying to build a "theory" for signal transduction that would allow us to not just predict but truly understand (in a holistic way) all the interactions with drugs and biomolecules that living organisms undergo? And then there are other complex phenomena like the economy, the weather and social networks. It seems wise to say that we don't anticipate real overarching theories for these phenomena anytime soon.

On the other hand, I think it's a sign of things to come that most of these fields are rife with explanatory models of varying accuracy and validity. Most importantly, modeling and simulation are starting to be considered a respectable "third leg" of science, in addition to theory and experiment. One simple reason for this is the recognition that many of science's greatest current challenges may not be amenable to quantitative theorizing, and we may have to treat models of phenomena as independent, authoritative explanatory entities in their own right. We are already seeing this happen in chemistry, biology, climate science and social science, and I have been told that even cosmologists are now relying extensively on computational models of the universe. Admittedly these models are still far behind theory and experiment, which have had head starts of about a thousand years. But there can be little doubt that such models can only become more accurate with increasing computational firepower. How accurate remains to be seen, but it's worth noting that there are already books that make a case for an independent, study-worthy philosophy of modeling and simulation. These books exhort philosophers of science to treat models not just as convenient applications and representations of theories (which would then be the only fundamental things worth studying) but as independent explanatory devices in themselves that deserve separate philosophical consideration.

Could this then be at least part of the future of science? A future where robust experimental observations are encompassed not by beautifully rigorous and complete theories like general relativity or QED but only by different models which are patched together through a combination of rigor, empirical data, fudge factors and plain old intuition? This would be a new kind of science, as useful in its applications as its old counterpart but rooted only in models and not in complete theories. Given the history of theoretical science, such a future may seem dark and depressing. That is because, as the statistician George Box famously quipped, all models are wrong, although some are useful. What Box meant was that models often feature unrealistic assumptions about all kinds of details that nonetheless allow us to reproduce the essential features of reality. Thus they can never provide the sure connection to "reality" that theories seem to. This is especially a problem when disparate models give the same answer to a question. In the absence of discriminating ideas, which model is then the "correct" one? The usual answer is "none of them", since they all do an equally good job of explaining the facts. But this view of science, where models that can be judged only on the basis of their utility are the ultimate arbiters of reality and where there is thus no sense of a unified theoretical framework, feels deeply unsettling. In this universe the "real" theory will always remain hidden behind a facade of models, much as reality is always hidden behind the event horizon of a black hole. Such a universe can hardly warm the cockles of the hearts of those who are used to crafting grand narratives for life and the universe. However, it may be the price we pay for more comprehensive understanding. In the future, Nobel Prizes may frequently be awarded for important observations for which there are no real theories, only models. The discovery of dark matter and energy and our current attempts to understand the brain and signal transduction could well be the harbingers of this new kind of science.

Should we worry about such a world rife with models and devoid of theories? Not necessarily. If there's one thing we know about science, it's that it evolves. Grand explanatory theories have traditionally been supposed to be a key part - probably the key part - of the scientific enterprise. But this is mostly because of historical precedent as well as a psychological urge for seeking elegance and unification. Such belief has been resoundingly validated in the past, but its utility may well have plateaued. I am not advocating some "end of science" scenario here - far from it - but as the recent history of string theory and theoretical physics in general demonstrates, even the most mathematically elegant and psychologically pleasing theories may have scant connection to reality. Because of the sheer scale and complexity of what we are currently trying to explain, we may have hit a roadblock in the application of the largely reductionist traditional scientific thinking that has served us so well for half a millennium.

Ultimately what matters, though, is whether our constructs - theories, models, rules of thumb or heuristic pattern recognition - are up to the task of constructing consistent explanations of complex phenomena. The business of science is explanation; whether it comes through unified narratives or piecemeal models is secondary. Although the former sounds more psychologically satisfying, science does not really care about stoking our egos. What is out there exists, and we do whatever's necessary and sufficient to unravel it.


Sunday, October 02, 2011

Book Review: Robert Laughlin's "Powering the Future"

In the tradition of physicists writing for the layman, Robert Laughlin has emerged as a writer who pens unusually insightful and thought-provoking books. In his "A Different Universe" he explored the consequences and limitations of reductionism-based physics for our world. In this book he takes an equally fresh look at the future of energy. The book is not meant to be a comprehensive survey of existing and upcoming technologies; instead it's more like an assortment of appetizers designed to stimulate our thinking. For those who want to know more, it offers an impressive bibliography and list of calculations which is almost as long as the book itself.

Laughlin's thinking is predicated on two main premises. The first is that carbon sources will eventually run out or become inaccessible (either because of availability or because of legislation). However, we will still largely depend on carbon because of its extraordinarily fortuitous properties like high energy density, safety and ease of transportation. But even in this scenario, simple rules of economics will trump most other considerations for a variety of different energy sources. The second premise, which I found very intriguing, is that we need to uncouple our thinking on climate change from our thinking on energy, instead of letting concerns about the former dictate policy about the latter. The reason is that planetary-level changes in the environment are so vast and so far beyond the ability of humans to control that driving a few more hybrids or curbing carbon emissions will have little effect on millennial events like the freezing or flooding of major continents. It's worth noting that Laughlin (who has lately been called a climate change skeptic) is not denying global warming or its consequences; it's just that he thinks it's somewhat beside the point when it comes to thinking about future energy, which will be dictated mainly by economics and prices. I found this to be a commonsense approach based on an appreciation of human nature.

With this background Laughlin takes a sweeping and eclectic look at several interesting technologies and energy sources including nuclear energy, biofuels, energy from trash, wind and solar power and energy stored beneath the sea. In each case Laughlin explores a variety of problems and promises associated with these sources.

Because of dwindling uranium resources, the truly useful form of nuclear energy for instance will come from fast breeder reactors which produce their own plutonium fuel. However these reactors are more susceptible to concerns about proliferation and theft. Laughlin thinks that a worldwide, tightly controlled system of providing fuel rods to nations would allow us to fruitfully deploy nuclear power. One of his startling predictions is the possibility that we may put up with occasional Chernobyl-like events if nuclear power truly becomes cheap and we don't have any other alternatives.

Laughlin also finds promises and pitfalls in solar energy. The basic problems with solar energy are its irregular availability and the difficulty of storage. Backup power inevitably depends on fossil fuel sources, which rather defeats the purpose. Laughlin sees a bright future for molten salt tanks, which can very efficiently store solar energy as heat to be used when the sun is not shining. These salts are simple eutectic mixtures of potassium and sodium nitrates with melting points that are conveniently lowered even further by the salts' decomposition products. Biofuels also get an interesting treatment in the book. One big advantage of biofuels is that they are both sources and sinks of carbon. Laughlin talks about some recent promising work with algae but cautions that meeting the sheer worldwide demand for energy with biofuels that don't divert resources away from food is very challenging. Further on there's a very intriguing chapter on energy stored under the sea. The seafloor provides a stupendous amount of real estate and could be used for energy storage through novel means like high-density brine pools and compressed natural gas tanks. Finally, burning trash, which contains a lot of carbon, might appear to be a useful source of energy, but as Laughlin explains, the actual energy in trash would supply only a fraction of our needs.

Overall the book presents a very thought-provoking treatment of the nature and economics of possible future energy sources in a carbon-strapped world. In these discussions Laughlin wisely avoids taking sides, realizing how fraught with complexity and ambiguity future energy production is. Instead he simply offers his own eclectic thoughts on the pros and cons of energy-related topics which may (or may not) prove important in the future. Among the minor gripes I have with the volume is the lack of discussion of promising recent advances in solar cell design, thorium-based fuels and next-generation nuclear reactor technology. Laughlin's focus is also sometimes a little odd and meandering; for instance, at one point he spends an inordinate amount of time talking about interesting aspects of robotic technology that may make deep-sea energy sequestration possible. But these gripes detract little from the volume, which is not really supposed to be an exhaustive survey of alternative energy technologies.

Instead it offers us a very smart scientist's miscellaneous musings on energy dictated by commonsense assumptions based on the simple laws of demand and supply and of human nature. As responsible citizens we need to be informed on our energy choices which are almost certainly going to become more difficult and constrained in the future. Laughlin's book along with others will stimulate our thinking and help us pick our options and chart our direction.


Tuesday, September 27, 2011

The flame of life and death: My favorite (insufferable) chemical reaction

For me, the most astounding thing about science has always been the almost unimaginably far-reaching and profound influence that the most trite truths about the universe can have on our existence. We may think that we are in charge of our lives through our seemingly sure control of things like food, water, energy and material substances, and we pride ourselves on the ability of our species to stave off the worst ravages of the natural environment such as disease, starvation and environmental catastrophe. We have done such a good job of sequestering ourselves from the raw power of nature that it's all too easy to take our apparent triumph over the elements for granted. But the truth is that we are all, without exception, critically and pitifully beholden to a few numbers and a few laws of physics.

And a few simple chemical reactions. Which brings me to my favorite reaction for this month's blog carnival. It's a reaction so elementary that it occupies barely a tenth of the space on a napkin or t-shirt, and one that could (and should) be productively explained to every human being on the planet. And it's a reaction so important that it both sustains life and very much has the potential to end it.

By now you might have guessed it. It's the humble combination of hydrocarbons with oxygen, known to all of us as combustion.

First, the reaction itself, which is bleedingly simple:

CnH2n+2 + [(3n+1)/2] O2 → n CO2 + (n+1) H2O + Energy

That's all there is to it. There, in one line, is a statement about our world that packs at least as much information into itself as all of humanity's accumulated wisdom and follies. A hydrocarbon with a general formula CnH2n+2 reacts with oxygen to produce carbon dioxide, water and energy. That's it. You want a pithy, multifaceted (or two-faced, take your pick) take on the human condition, there you have it. While serving as the fundamental energy source for life and all the glory of evolution, it's also one that drives wars, makes enemies out of friends, divides and builds ties between nations and will without a doubt be responsible for the rise, fall and future of human civilization. Faust himself could have appeared in Goethe's dream and begged him to use this reaction in his great work.
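For anyone who wants to check the bookkeeping behind that general formula, here is a minimal sketch in Python (my own illustration, not part of the original reaction scheme) that verifies the atom balance for any alkane:

# Check the atom balance of CnH2n+2 + (3n+1)/2 O2 -> n CO2 + (n+1) H2O
def is_balanced(n):
    # Left side: one alkane molecule plus (3n+1)/2 molecules of O2,
    # i.e. 3n+1 oxygen atoms in total.
    left = {"C": n, "H": 2 * n + 2, "O": 3 * n + 1}
    # Right side: n CO2 (2n oxygens) plus (n+1) H2O (n+1 oxygens).
    right = {"C": n, "H": 2 * (n + 1), "O": 2 * n + (n + 1)}
    return left == right

# methane (n = 1): CH4 + 2 O2 -> CO2 + 2 H2O
# octane (n = 8):  C8H18 + 12.5 O2 -> 8 CO2 + 9 H2O
print(all(is_balanced(n) for n in range(1, 101)))  # True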

First, the hydrocarbon itself. Humanity launched itself onto a momentous trajectory when it learnt how to dig carbon out of the ground and use it as fuel. Since then we have been biding our time, for better or worse. The laws of quantum mechanics could not have supplied us with a more appropriate substance. Carbon in stable hydrocarbons is in its most reduced state, which means that you get a bigger bang for your buck by oxidizing it than almost any other substance. What billions of controlled experiments over the years in oil and natural gas refineries and coal plants have proven is that you really can't do better than carbon when it comes to balancing energy density against availability, cost, ease of handling and transportation, and safety. In its solid form you can burn it to stay warm and to produce electricity; in its liquid form you can pump it into an incredibly efficient and compact gas tank. For better or worse, we are probably going to be stuck with carbon as a fuel (although the energy source can wildly differ).

The second component of the chemical equation is oxygen. Carbon is very fortunate in not requiring a pure source of oxygen to burn; if it burned, say, only in an environment with 70% or more oxygen, that would have been the end of modern civilization as we know it. Air is good enough for combusting carbon. In fact the element can burn under a wide range of oxygen concentrations, which is a blessing because it means that we can burn it safely, in a very controlled manner. Varying the amount of oxygen can also lead to different products and can minimize the amount of soot and toxic byproducts. The marriage of carbon and oxygen is a wonderfully tolerant and productive one, and we have gained enormously from this union.

The right side of the combustion equation is where our troubles begin. First off, water. It may seem like a trivial, harmless byproduct of the reaction but it's precisely its benign nature that allows us to use combustion so widely. Just imagine if the combustion of carbon had produced some godforsaken toxic substance in addition to carbon dioxide as a byproduct. Making energy from combustion would then have turned into a woefully expensive activity, with special facilities required to sequester the poisonous waste. This would likely have radically altered the global production and distribution of energy and human development would have been decidedly hampered. We may then have been forced to pick alternative sources of energy early on in our history, and the face of politics, economics and technology would consequently have been very different.

Moving on, we come to what's almost universally regarded as a villain these days - carbon dioxide. If carbon dioxide were as harmless as water we would live in a very different world. Sadly it's not, and its properties again underscore the profound influence that a few elementary facts of physics and chemistry can have on our fate. The one property of CO2 that causes us so much agony is that it is opaque to long-wavelength infrared radiation, absorbing it and thus warming the surroundings. This is not a post to discuss global warming, but it's obvious to anyone not living in a cave that the issue has divided the world like no other. We still don't know for sure what it will do, either by itself or through the actions human beings take merely from perceiving its effects. But whatever it is, it will profoundly alter the landscape of human civilization for better or worse. We can all collectively curse the day that the laws of physics and chemistry decided to produce carbon dioxide as a product of combustion.

Finally we come to the piece de resistance. None of this would have mattered if it weren't for the most important thing combustion produces - energy (in fact we wouldn't have been around to give a fig). In this context combustion is exactly like nuclear fission; twentieth-century history would have been very different if all uranium did was break up into two pieces. Energy production from combustion is what drives life and human greed. We stay alive by eating carbon-rich compounds - especially glucose - which are then burned in a spectacularly controlled manner to provide us with energy. The energy liberated cannot be used directly for our actions and thoughts. Instead it is used to construct devilishly clever chemical packages of ATP (adenosine triphosphate), which then serve as the energy currency.

Our bodies (and those of other creatures) are staggeringly efficient at squeezing oxidation-derived energy out of compounds like glucose; in the aerobic oxidation of glucose, for instance, a single glucose molecule can generate 32 molecules of ATP. Put another way, the oxidation of a gram of glucose yields about 4 kilocalories of energy. This may not seem like a lot until we realize that the detonation of a gram of TNT yields only about 1 kilocalorie (the reason the latter seems so violent is that all the energy is liberated almost instantaneously). Clearly it is the all-important energy term in the combustion equation that has made life on earth possible. We are generously contributing to this term these days by virtue of quarter pounders and supersizing, but our abuse does not diminish its importance.
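The back-of-the-envelope arithmetic is worth spelling out. Here is a minimal sketch using the round numbers quoted above (the molar mass of glucose is my addition, a standard textbook value):

# Energy bookkeeping for glucose vs. TNT, using the figures quoted above.
glucose_kcal_per_gram = 4.0    # aerobic oxidation of glucose
tnt_kcal_per_gram = 1.0        # detonation of TNT
glucose_molar_mass = 180.16    # g/mol for C6H12O6

kcal_per_mole = glucose_kcal_per_gram * glucose_molar_mass
print(f"~{kcal_per_mole:.0f} kcal released per mole of glucose")  # ~721 kcal/mol
ratio = glucose_kcal_per_gram / tnt_kcal_per_gram
print(f"Gram for gram, glucose packs ~{ratio:.0f}x the energy of TNT")

The difference, as noted above, is power rather than energy: TNT releases its smaller store in an instant, while our cells release glucose's larger store through a slow cascade of enzymatic steps.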

The same term is of course responsible for our energy triumphs and problems. Fossil fuel plants are nowhere near as efficient as our bodies in extracting energy from carbon-rich hydrocarbons, but what matters is whether they are cheap enough. It's primarily the cost of digging, transporting, storing and burning carbon that has dictated the calculus of energy. Whatever climate change does, of one thing we can be sure: we will continue to pay the cheapest price for our fuel. Considering the many advantages of carbon, it doesn't seem like anything is going to substitute for its extraordinarily fortuitous properties anytime soon. We will simply have to find some way to work around, over or through its abundance and advantages.

If we think about it then, the implications of combustion for our little planet and its denizens are overwhelming and sometimes it's hard to take it all in. At such times we only need to take a deep breath and remember the last words spoken by Kevin Spacey's character from "American Beauty":

"Sometimes I feel like I'm seeing it all at once, and it's too much, my heart fills up like a balloon that's about to burst... And then I remember to relax, and stop trying to hold on to it, and then it flows through me like rain and I can't feel anything but gratitude for every single moment of my stupid little life..."

That's right. Let's have it flow through us like rain. And watch it burn.


Monday, September 19, 2011

Book Review: "Feynman"

With his colorful personality and constant propensity for getting into all kinds of adventures, Richard Feynman is probably the perfect scientific character to commit to comic book form, so in one way this graphic novel is long overdue. What is remarkable is how powerfully Jim Ottaviani and Leland Myrick harness this unique medium to accurately dramatize the life and qualities of this genius. Both authors are uniquely qualified for this endeavor, having already penned graphic portraits of Niels Bohr, Robert Oppenheimer and Leo Szilard.

Ottaviani and Myrick manage to capture the essential characteristics that made Feynman such a cherished teacher, scientist, friend, colleague, and public personality. Most importantly, the book succeeds in vividly bringing out Feynman's quintessential quality of almost obsessively staking out his own iconoclastic path, both in science and in life. The biography is really a memoir akin to "Surely You're Joking, Mr. Feynman!", since it features Feynman's own account of his life, work and intellectual development. The great strength of the book is that it uses close-ups and color to highlight key words and moments from Feynman's life. While the biographical information in the book has been covered in other works, most notably in Feynman's own memoirs, the comic book form has a very different impact because of the combined literary-visual effect it has on the reader.

For instance, in describing Feynman's time at Los Alamos, one can actually see people's bewildered faces as they struggle to comprehend both his genius in solving intractable physics problems and his wildly successful attempts at safe-breaking. There are evocative close-ups of Feynman's father teaching him to appreciate and truly understand nature during walks in the park, of Feynman encouraging his sister to learn science, and of his wonderful and tragic relationship with his first wife. Also included are Feynman's strip-club forays (during which he solved physics problems), his famous dunking of the Challenger space shuttle's O-ring material into a glass of ice water to demonstrate its failure (again rendered much more dramatic by the graphic medium) and some fairly detailed albeit brief discussions of his pioneering work in quantum mechanics.

I was especially convinced of the power of the graphic form during the parts dealing with Feynman's lectures about scientific wonder and humility. As he paces the podium at Caltech and stresses the importance of holding oneself to an absolute standard of integrity, successive panels of the book zoom in on his face. This device, which is commonly employed in comic books, imparts a heightened sense of importance to the words in a way that would not be evident from simply reading them. Another technique the comic medium allows is to intersperse the narrative with divergent panels; for instance, Feynman's eloquent description of science as a great game of chess intersects with snapshots of a chess game played by two people in a park where his father has taken him for a walk.

One minor gripe I have with this comic account is that the faces of different characters are sometimes not easily distinguishable. In addition, the narrative would have had a bigger impact if the characters resembled their real-life counterparts. But these minor points detract little from the volume's novelty. Ottaviani and Myrick have done a wonderful job of making a unique scientist and human being come alive in these pages. With the mountains of literature written about Feynman, one would think that there's nothing new that could be said or done. But this "dramatic picture" of Richard Feynman, as his friend Freeman Dyson calls it, will occupy a proud place on the shelves of Feynman fans. Knowing his fondness for fun, Dick would undoubtedly have approved.


Thursday, September 08, 2011

Nobel Prizes 2011

So it's that time of the year again, the time when just like Richard Feynman and Paul Dirac, three lucky people get to mull over whether they will incur more publicity by accepting the Nobel Prize or rejecting it.

Predicting the Nobel Prizes gets easier every year (I said predicting, not getting your predictions right) since there's very little you can add to the previous year's list, although there are a few changes; the Plucky Palladists can now happily be struck off the list. As before, I am dividing categories into 'easy' and 'difficult' and assigning pros and cons to every prediction.

The easy ones are those regarding discoveries whose importance is (now) ‘obvious’; these discoveries inevitably make it to lists everywhere each year, and the palladists clearly fell into this category. The difficult predictions are either discoveries that have been predicted by few others or ones that are ‘non-obvious’. But what exactly is a discovery of ‘non-obvious’ importance? Well, one of the criteria in my mind for a ‘non-obvious’ Nobel Prize is one that is awarded to an individual for general achievements in a field rather than for specific discoveries, much like the lifetime achievement Academy Awards given out to men and women with canes. Such predictions are somewhat harder to make simply because fields are honored by prizes much less frequently than specific discoveries.

Anyway, here's the N-list.

CHEMISTRY:

2. Computational chemistry and biochemistry (Difficult)
Pros: Computational chemistry as a field has not been recognized since 1998 (Kohn and Pople), so the time seems due. One obvious candidate would be Martin Karplus.
Cons: This would definitely be a lifetime achievement award. Karplus did perform the first MD simulation of a protein, but that by itself wouldn't command a Nobel Prize. The other question is what field exactly the prize would honor. If it's specifically applications to biochemistry, then Karplus alone would probably suffice. But if the prize is for computational methods and applications in general, then others would also have to be considered, most notably Ken Houk, who has been foremost in applying such methods to organic chemistry. Another interesting candidate is David Baker, whose program Rosetta has produced some fantastic results in predicting protein structure and folding. It even spawned a cool game. But the field is probably too new for a prize.

3. Chemical biology and chemical genetics (Easy)
Another favorite for years, with Stuart Schreiber and Peter Schultz being touted as leading candidates.
Pros: The general field has had a significant impact on basic and applied science
Cons: This again would be more of a lifetime achievement award which is rare. Plus, there are several individuals in recent years (Cravatt, Bertozzi, Shokat) who have contributed to the field. It may make some sense to award Schreiber a ‘pioneer’ award for raising ‘awareness’ but that’s sure going to make a lot of people unhappy. Also, a prize for chemical biology might be yet another one whose time has just passed.

4. Single-molecule spectroscopy (Easy)
Pros: The field has obviously matured and is now a powerful tool for exploring everything from nanoparticles to DNA. It's been touted as a candidate for years. The frontrunners seem to be W. E. Moerner and M. Orrit, although Richard Zare's name has also been floated often.
Cons: The only con I can think of is that the field might yet be too new for a prize.

5. Electron transfer in biological systems (Easy)
Pros: Another field which has matured and has been well-validated. Gray and Bard seem to be leading candidates.

Among other fields, I don't really see a prize for the long-lionized birth control pill and Carl Djerassi; although we might yet be surprised, the time just seems to have passed. Then there are fields which seem too immature for the prize; among these are molecular machines (Stoddart et al.) and solar cells (Grätzel).

MEDICINE:

1. Nuclear receptors (Easy)
Pros: The importance of these proteins is unquestioned. Most predictors seem to converge on the names of Chambon/Jensen/Evans.

2. Statins (Difficult)
Akira Endo's name does not seem to have been discussed much. Endo discovered the first statin; although that particular compound was not a blockbuster drug, statins have since revolutionized the treatment of heart disease.
Pros: The “importance” as described in Nobel’s will is obvious since statins have become the best-selling drugs in history. It also might be a nice statement to award the prize to the discovery of a drug for a change. Who knows, it might even boost the image of a much maligned pharmaceutical industry...
Cons: The committee is not really known for awarding actual drug discovery. Precedents like Alexander Fleming (antibiotics), James Black (beta blockers, antiulcer drugs) and Gertrude Elion (immunosuppressants, anticancer agents) exist but are few and far between. On the other hand, this fact might make a prize for drug discovery overdue.

3. Genomics (Difficult)
A lot of people say that Venter should get the prize, but it's not clear exactly for what; not for the human genome, which others would deserve too. If a prize were to be given out for synthetic biology, it would almost certainly be premature. Venter's synthetic organisms from last year may one day rule the world, but for now we humans still prevail. On the other hand, a possible prize for genomics may rope in people like Carruthers and Hood, who pioneered methods for DNA synthesis.

4. DNA diagnostics (Difficult)
Now this seems to me to be a field whose time is overdue. The impact of DNA fingerprinting and Western and Southern blots on pure and applied science, on everything from discovering new drugs to hunting down serial killers, is at least as big as that of the prizeworthy PCR. I think the committee would be doing itself a favor by honoring Jeffreys, Stark, Burnette and Southern.

5. Stem Cells (Easy)
This seems to be yet another favorite. McCulloch and Till are often listed.
Pros: Surely one of the most important biological discoveries of the last 50 years, promising fascinating advances in human health and disease.
Cons: Politically controversial (although we hope the committee can rise above this). Plus, a 2007 Nobel was awarded for work on embryonic stem cells using gene targeting strategies, so there's a recent precedent.

6. Membrane vesicle trafficking (Easy)
Rothman and Schekman
Pros: Clearly important. The last trafficking/transport prize was given out in 1999 (Blobel), so another one is due, and Rothman and Schekman seem to be the most likely candidates. Plus, they have already won the Lasker Award, which in the past has been a good indicator of the Nobel.

PHYSICS:

I am not a physicist
But if I were
I would dare
To shout from my lair
“Give Hawking and Penrose the Prize!”
For being rock stars of humungous size

Also, Anton Zeilinger, John Clauser and Alain Aspect probably deserve it for bringing the unbelievably weird phenomenon of quantum entanglement to the masses. Zeilinger's book "Dance of the Photons" presents an informative and revealing account of this phenomenon.

I have also always wondered whether non-linear dynamics and chaos deserve a prize. The proliferation and importance of the field certainly seem to warrant one; the problem is that there are way too many deserving recipients (and Mandelbrot is dead).

Labels:

Wednesday, August 31, 2011

Thoughts on personalised medicine


This is a piece I had written up for the annual report of this year's Lindau Meeting of Nobel Laureates. The final version had to be significantly edited because of space limitations, so I thought I would post the full version here.

The future of personalized medicine

In this year's Lindau meeting, the Israeli biochemist Aaron Ciechanover expressed great hope for the future of personalised medicine, an age in which medical treatments are customized and tailored to individual patients based on their specific kind of disease.

In some ways personalised medicine is already here. Over centuries of medical progress, astute doctors have fully recognized the diversity of patients who are suffering from what appears to be the same disease. Based on their rudimentary knowledge of disease processes, empirical data and experience, physicians would then prescribe different combinations of medicines for different patients. But in the absence of detailed knowledge of disease at the genetic and molecular level, this kind of approach was naturally subjective; it continued to rely on extensive personal experience and ad hoc interpretations of incompletely documented empirical data.

This approach saw a paradigm shift in the latter half of the twentieth century as our knowledge of DNA and genetics revealed to us the rich diversity and uniqueness of individual genomes. Concomitantly, our knowledge of the molecular basis of disease led us to recognize molecular determinants unique to every individual. We are already taking advantage of this knowledge and harnessing it to personalize therapy.

Take the case of the anticancer drug temozolomide. Temozolomide is prescribed for patients with a particularly pernicious form of brain cancer with poor prognosis. The drug belongs to a category of compounds called alkylating agents, a common class of anticancer drugs in which a reactive chemical group is transferred onto the DNA of cancer cells, rendering them incapable of efficient cell division and causing their death. The problem is that, because of DNA's key role in sustaining life processes, its integrity is tightly guarded; a modification of the kind caused by temozolomide is treated as DNA damage, and, for good reason, life has evolved multiple mechanisms to reverse such damage. In this case the body produces an enzyme (O6-methylguanine-DNA methyltransferase, or MGMT) that strips DNA of the reactive group attached by the drug. Thus the body unwittingly helps cancer cells by reversing the drug's action. Understanding this mechanism has led doctors to reserve temozolomide for individuals with low levels of the repair enzyme. For patients who produce high levels of the enzyme, temozolomide will unfortunately not be effective, and doctors have to turn to other drugs.
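Purely to make that selection logic concrete, here is a toy sketch in Python; in the clinic the marker actually assessed is MGMT promoter methylation status, and the expression measure and cutoff below are entirely hypothetical, an illustration rather than guidance.

def temozolomide_candidate(mgmt_level, low_cutoff=0.2):
    # Tumors that make little of the repair enzyme cannot undo the
    # alkylating damage, so the drug is likely to work in those patients.
    # mgmt_level is a normalized expression measure; 0.2 is a made-up cutoff.
    return mgmt_level < low_cutoff

print(temozolomide_candidate(0.05))  # low MGMT: True, drug likely effective
print(temozolomide_candidate(0.80))  # high MGMT: False, the damage gets repaired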

We will undoubtedly witness the proliferation of such advances in personalizing individual treatments. But an even more promising approach is to start at the source, at the fundamental genomic sequences that dictate the phenotypic traits associated with enzymes and proteins. The deciphering of the human genome has opened up exciting new avenues for mapping differences in individual genomes and harnessing those differences in drug discovery. The most important strategy has been to compare individuals' genomes for single nucleotide polymorphisms (SNPs), changes in single base pairs in the DNA sequence. In fact much of the genetic variation between individuals and populations arises from these single nucleotide changes. SNPs have been of enormous value in tracing genetic diseases and in categorizing variation in our species. In addition, SNPs are unusually stable and remain constant between generations, providing scientists with a relatively time-invariant handle on genetic disorders.

SNPs are typically utilised in genome-wide association studies, in which the genomes of members of a homogeneous population with and without a disease are compared. Knowing the differences can enable scientists to pinpoint genetic markers responsible for the disease; these markers can then be linked to phenotypes, like enzyme overproduction or deficiency, that are more directly related to the disease. One of the most notable instances of using SNPs to determine propensity toward disease involves the so-called ApoE gene in Alzheimer's disease. Two SNPs in this gene lead to three alleles: E2, E3 and E4. Each individual inherits one maternal and one paternal copy of the ApoE gene, and there is now solid evidence that inheritance of the E4 allele leads to a greatly increased risk of Alzheimer's disease.
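For readers who like to see the machinery, here is a minimal sketch in Python of the statistical core of such an association study for a single SNP, a chi-square test comparing allele counts in cases and controls; all the counts are hypothetical.

from scipy.stats import chi2_contingency

# Hypothetical allele counts for one SNP: [risk allele, other allele]
# in 250 disease cases and 250 healthy controls (two alleles per person).
cases = [310, 190]
controls = [260, 240]

chi2, p_value, dof, expected = chi2_contingency([cases, controls])
print(f"chi2 = {chi2:.2f}, p = {p_value:.5f}")

A real study repeats this test across hundreds of thousands or millions of SNPs, so the significance threshold has to be corrected for multiple testing; and even a genuine hit establishes association, not causation, which is precisely the difficulty discussed below.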

In the long run, SNPs may provide the foundation for much of personalised medicine, because SNPs often also dictate individuals' responses to drugs, pathogens and vaccines. In an ideal scenario, one might then be able to predict a patient's response to a whole battery of drugs from knowledge of the specific SNPs associated with his or her disease.

Unfortunately this ideal scenario may be much farther off than imagined. For one thing, we have still only scratched the surface of all possible SNPs, and there are already an estimated three million out there. But more importantly, the difference between knowing all the SNPs and knowing their causal connections to various diseases is almost like the difference between having a list of all human beings on the planet and knowing everything about their lives: their professions, origins, hobbies, political views, family lives. Knowing the former is far from understanding the latter.

In this sense the problem with SNPs illustrates the problem with all of personalised medicine. In fact it's a problem that plagues scientific research in general: the dilemma of separating correlation from causation. The problem is even more acute in a complex biological system like a human being, where the ratio of extraneous, unrelated correlations to genuinely causative factors is especially high. Simply knowing the SNP variations between a healthy and a diseased individual is very different from being able to pinpoint the SNP that is directly connected to the disease. The situation is made exponentially more complex by the fact that these putative determinants usually act in combination; one has to account not only for the effect of an individual SNP but also for the differential effects of its combinations with other SNPs. And as if this complexity were not enough, many SNPs occur in non-coding regions of the human genome, raising even bigger questions about their exact relevance. Sophisticated computers and statistical methods are enabling us to sort through this jungle of data, but as of now the data clearly outstrips our ability to intelligently analyse it. We need to become far more capable at distinguishing signal from noise if we are to translate genetic understanding into practical therapeutic strategies.
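A quick back-of-the-envelope count, using the three million SNP estimate quoted earlier, shows how fast those combinations blow up:

from math import comb

n_snps = 3_000_000  # rough estimate of catalogued SNPs
print(f"{comb(n_snps, 2):.2e}")  # ~4.50e+12 pairwise combinations
print(f"{comb(n_snps, 3):.2e}")  # ~4.50e+18 three-way combinations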

In addition, while a certain kind of SNP may indicate disease tendency, there are also many false positives and negatives. Only a small percentage of SNPs are typically linked to a condition, especially when it comes to complex conditions like cancer, diabetes and psychological disorders. Many SNPs may simply be surrogate SNPs that have little to do with the disease themselves but have come along for the ride with other SNPs. It is a difficult task, to say the least, to separate the wheat from the chaff and home in on the few SNPs that truly serve as disease determinants or markers. In such cases it is instructive to borrow from the example of temozolomide and remember that we will ultimately untangle cause and effect only by looking at the molecular-level interactions of drugs and biomolecules. No amount of sequencing and data analysis can substitute for a robust study designed to directly demonstrate the role of a particular enzyme or protein in the etiology of a disease. It's also worth noting that such studies have always benefited from the tools of classical biochemistry and pharmacology, and thus practitioners of these arts will continue to stand on an equal footing with the new genomics experts and computational biologists in unraveling the implications of genetic differences.

Finally, there's the all-pervasive question of nature versus nurture. Along with genomics, one of the most important advances of the last decade has been the development of epigenetics. Epigenetics refers to changes in gene activity that are induced by the environment rather than hard-coded in the DNA sequence; an example is the environmentally stimulated silencing or activation of genes by certain classes of enzymes. Epigenetic factors are now known to be responsible for a variety of processes in health and disease. Some can even operate in the fetal stage and influence physiological responses in later life. While epigenetics has revealed a fascinating new layer of biological control and has much to teach us, it also adds another layer of complexity to the determination of individual responses to therapy. We have a long way to go before we can reliably distinguish genetic from epigenetic factors as signposts for individualized therapy.

The future of personalised medicine is therefore both highly exciting and extremely challenging. There is much promise in mapping the subtle genetic differences that make us react differently to diseases and their cures, but we will also have to be exceedingly careful not to lead ourselves astray with incomplete data, missing causation and confirmation bias. It is a tough but ultimately rewarding problem, one which will lead to both fundamental understanding and new medical advances. It deserves our attention in every way.


Labels: ,

Sunday, August 21, 2011

It's not blackmail if...

I have stayed away from the Anna Hazare story, mostly because I have had mixed feelings about it and because it's not really been high on my list of interesting topics. However, I have to say that I am bamboozled by those who call Hazare's fast-unto-death "undemocratic" and say that he is "blackmailing" the government. Blackmail is when you force someone to do something against their will by threatening to harm them if they don't accede. Hazare is threatening to harm himself, so I don't see how this is blackmail. Now sure, people can call it blackmail because he is indirectly trying to pressure the government by encouraging people to come out on the street in throngs. But how can he be held responsible for what the people do or don't decide to do based on his protests?

In fact, the way I see it, it is precisely in a free democracy that everyone has a right to openly demand whatever he or she wants out in the street. And they also have a right to kill themselves if they think their demands are not met. And equally importantly, the government has every right to refuse their demands. The government has absolutely no obligation to give in to Hazare's demands until there is a formal majority that seeks it from them through a democratic process. If they cave in to Hazare's protests it's really their problem, not his.

Plus of course, I personally think that a little "blackmail" is pretty mild treatment for the kind of power-bloated, corruption-steeped politicians who seem to define India's polity. But that's a different matter. First and foremost, I don't see how it's blackmail, and I don't see why the government has to give in to Hazare's demands. The way I see it, Hazare's protests are a sign of a healthy democracy at work, and I for one feel quite satisfied.

Labels:

Tuesday, July 19, 2011

What is chemical intuition?

Recently I read a comment by a leading chemist in which he said that intuition is much more important in chemistry than in physics. This is a curious comment, since intuition is one of those things that is hard to define but that most people who play the game recognize when they see it. It is undoubtedly important in any scientific discipline, and certainly in physics; Einstein, for instance, was regarded as the outstanding intuitionist of his age, a man whose grasp of physical reality unaided by mathematical analysis was unmatched. Yet I agree that "chemical intuition" is a phrase you hear much more often than "physical intuition". When it comes to intuition, chemists seem to be more in the league of traders, geopolitical experts and psychologists than physicists.

Why is this the case? The simple reason is that in chemistry, unlike physics, armchair mathematical manipulation and theorizing can take you only so far. While back-of-the-envelope speculation and order-of-magnitude calculations can certainly be valuable, no chemist can design a zeolite, predict the ultimate product of a complex natural product synthesis or list the biological properties of a potential drug by simply working through the math. As R. B. Woodward once said of his decision to pursue chemistry rather than math, in chemistry ideas have to answer to reality. Chemistry, much more than physics, is an experimental science built on a foundation of empirical models, and as the statistician George Box once memorably quipped, all models are wrong, but some are useful. It is chemical intuition that separates the good models from the bad ones.

How, then, to acquire chemical intuition? All chemists crave intuition; few have it. It's hard to define, but a good definition would be a quality that lets one skip a lot of the details and get to the essential result, often one that is counterintuitive. It is the art of asking the simple, decisive question that goes to the heart of the matter. As in a novel mathematical proof, a moment of chemical intuition carries an element of surprise. And as with a truly ingenious mathematical derivation, it should ideally lead us to smack our foreheads and ask why we could not think of something so simple before.

Ultimately when it comes to harnessing intuition, there can be no substitute for experience. Yet the masters of the art in the last fifty years have imparted valuable lessons on how to acquire it. Here are three I have noticed:


1. Don't ignore the obvious: One of the most striking features of chemistry as a science is that very palpable properties like color, smell, taste and elemental state are directly connected to molecular structure. There is an unforgettably direct connection between the smell of cis-3-hexenol and that of freshly cut grass; once you have smelled both independently, it is virtually impossible to forget the link. Chemists who are known for their intuition never lose sight of these simple molecular properties, and they use them as disarming filters that can cut through complex calculations and multimillion-dollar chemical analyses.

I remember an anecdote about the chemist Harry Gray (an expert, among other things, on colored coordination complexes) who once deflated the predictions of a sophisticated quantum chemical calculation by simply asking what color the proposed compound was; apparently there was no way the calculations could have been right if the compound had a particular color. As you immerse yourself in laborious compound characterization, computational modeling and statistical analysis, don't forget what you can taste, touch, smell and see. As Pink Floyd said, this is all that your world will ever be.

2. Get a feel for energetics: The essence of chemistry can be boiled down to a fight to the death among countless factors that rally either for or against the free energy of a system. When you are designing molecules as anticancer agents, for hydrogen storage, for solar energy conversion or as enzyme mimics, what ultimately decides whether they work is energetics: how well they can stabilize and be stabilized, and ultimately lower the free energy of the system. Intimate familiarity with numbers helps here. Get a feel for the rough contributions made by hydrogen bonds, electrostatics, steric interactions and solvent effects. This is especially important for chemists working at the interface of chemistry and biology; remember, life is a game played within a 3 kcal/mol window, and any insight that allows you to nail down numbers within this window can only help. The same goes for other parameters like van der Waals radii and bond lengths. Linus Pauling was lying in bed with a cold when he managed to build accurate models of the protein alpha helix, largely on the strength of his unmatched feel for such numbers.
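To make that 3 kcal/mol window concrete, here is a minimal numerical sketch, my own illustration rather than anything from the masters quoted here, of how a free energy difference maps onto an equilibrium ratio at room temperature via K = exp(-ΔG/RT):

import math

R = 1.987e-3  # gas constant in kcal/(mol*K)
T = 298.0     # room temperature in K

# Equilibrium ratio K = [A]/[B] when state A is favored by dG kcal/mol
for dG in (0.5, 1.0, 2.0, 3.0):
    K = math.exp(dG / (R * T))
    pct = 100.0 * K / (1.0 + K)
    print(f"dG = {dG:.1f} kcal/mol -> A:B = {K:.1f}:1 ({pct:.1f}% A)")

The whole range from a mild 70:30 preference to essentially complete (better than 99 percent) selection of one state fits inside that window, which is why nailing down numbers within it matters so much.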

A striking case of insight acquired by thinking about energetics is a story Roald Hoffmann narrates in a recent issue of "American Scientist". Hoffmann was theoretically investigating the conversion of graphene under high pressure to graphane, its fully saturated counterpart. After running some high-level calculations, his student came into his office with a very counterintuitive result: apparently graphane was more stable per CH group than the equivalent number of benzenes. What happened to all that discussion, learnt in college, of the unusual stability conferred by aromaticity? Hoffmann could not believe the result, and his first reaction was to suspect that something was wrong with the calculation.

Then, as he himself recalls, he leaned back in his chair, closed his eyes and brought half a century's store of chemical intuition to bear on the problem. After all the bookkeeping had been done, it turned out that the result was a simple consequence of energetics: the energy gained in forming strong carbon-carbon bonds more than offset the energy lost with aromaticity. That it took a Nobel laureate some time to work this out is not in any way a criticism but a resounding validation of thinking in terms of simple energetics. Chemistry is full of surprises, even for Roald Hoffmann, and that's what makes it endlessly exciting.


Another example that comes to mind is an old paper by my PhD advisor which refuted an observation purporting to show that a substituent on a cyclohexane ring was axial. In this case, unlike the one above, the intuitive and commonly held principle, that substituents on cyclohexanes prefer equatorial positions, turned out to be the right one, again based on some relatively simple NMR-assisted computational energetic analysis. On the other hand, the same kind of thinking also led to the discovery that the C-F groups in substituted difluoropiperidines are axial! Sometimes intuition leads to counterintuition, and sometimes it asserts itself.

3. Stay in touch with the basics, and learn from other fields: This is a lesson that is often repeated but seldom practiced. An old professor of mine used to recommend flipping an elementary chemistry textbook open to a random page every day and reading ten pages. Sometimes our research becomes so specialized, and we become so enamored of our little corner of the chemical world, that we forget the big picture. Part of the lesson cited above simply involves not missing the forest for the trees, always thinking of basic principles of structure and reactivity in the broader sense.

This also often involves keeping in touch with other fields of chemistry since an organic chemist never knows when a basic fact from his college inorganic textbook will come in handy. Most great chemists who were masters of chemical intuition could seamlessly transition their thoughts between different subfields of their science. This lesson is especially important when specialization has become so intense that it can sometimes lead to condescension toward fields other than your own. Part of the lesson also involves collaboration; what you don't have you can at least partially borrow.

Ultimately, if we want to develop chemical intuition, it is worth remembering that all our favorite molecules, whether metals, macrocycles or metalloproteases, are part of the same chemical universe, obeying the same rules even if in varied contexts. In the end, no matter what kind of molecule we are interrogating, wir sind alle Chemiker: we are all chemists, every single one of us.

Labels: