Yesterday, I saw a Systems Biology talk by Peter Sorger, and while there was some hocus pocus, there was also some insight. Professor Sorger is part of the new Cell Decision Processes Center at MIT, a fancy name for the new Systems Biology Center there.
Sorger’s strategy is to study intracellular signaling cascades by mixing in silico models with cell-based experiments. The former involves identifying all the signaling components and their various states, then assigning differential equations that describe how a molecule changes from one state to the next, or how one molecule alters a second molecule.
Molecule changing state: A1 =(v)=> A2
Molecules interacting: A + B =(v)=> AB
Molecule changing state (catalyzed by a second molecule): A1 =(v[B])=> A2
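The three reaction forms above translate directly into mass-action rate equations. Here is a minimal sketch (with hypothetical species names and rate constants, not the ones from Sorger's model) of how such a scheme becomes a set of ODEs integrated with simple Euler steps:

```python
# Toy mass-action model of the three reaction forms:
#   A1 =(k1)=> A2          state change
#   A + B =(k2)=> AB       binding
#   C1 =(k3*[B])=> C2      state change catalyzed by B (B is not consumed)
# Rate constants k1, k2, k3 are made up for illustration.

def step(state, dt, k1=0.5, k2=0.2, k3=1.0):
    """Advance the toy network by one Euler step of size dt."""
    s = dict(state)
    v1 = k1 * s["A1"]           # rate of A1 -> A2
    v2 = k2 * s["A"] * s["B"]   # rate of A + B -> AB
    v3 = k3 * s["B"] * s["C1"]  # rate of C1 -> C2, proportional to [B]
    s["A1"] -= v1 * dt; s["A2"] += v1 * dt
    s["A"]  -= v2 * dt; s["B"]  -= v2 * dt; s["AB"] += v2 * dt
    s["C1"] -= v3 * dt; s["C2"] += v3 * dt
    return s

state = {"A1": 1.0, "A2": 0.0, "A": 1.0, "B": 1.0, "AB": 0.0,
         "C1": 1.0, "C2": 0.0}
for _ in range(1000):            # integrate to t = 10
    state = step(state, dt=0.01)
```

A real model strings together dozens of such terms, one per reaction, which is exactly why the parameter count balloons so quickly.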
The in vivo experiments involve activating cellular signals by adding various extracellular factors and/or removing signaling components (or reducing their levels) using RNAi. The cells are then analyzed by measuring various readouts, such as the “activation” state of a key downstream signaling molecule. Using the in vivo data points, Sorger’s group has been able to “teach” their model. After tweaking the model, they attempted to generate in silico predictions and gained some insight into how the system is rigged – and you know what? Their model produced some interesting ideas.
They first studied the programmed cell-death pathway (apoptosis; read the history of this field at Nobel.se). The immediate problem with such a decision is that the cell must choose either to die or not – you do not want a half-dead cell. By adding signaling factors that activate the death pathway and then measuring a key downstream component (caspase activity) on a cell-to-cell level, Sorger’s group found that when the decision is made, cell death is always fast. As they reduced the level of extracellular signal, cells took longer to go from low caspase activity to high caspase activity, but once the decision was made, the kinetics of caspase activation were always the same. The system is thus set up as a “light switch”: the level of signal influences how long it takes to flip the switch, but once it is flipped the pathway acts just the same.
The insight is that the model predicted a “switch” and pointed out that the switch components were wired into several feedback loops. At first glance these switch components seem to act redundantly – there are many loops, and several interchangeable components take part in each step of each loop. Remove components and the pathway could still be activated, but the switch was no longer as rapid and became sensitive to the levels of other “switch factors”. You still got cell death, but the switch mechanism was altered. So the intact system is ROBUST and INSENSITIVE to fluctuations in other components – in other words, the feedback loops buffer slight changes to generate a bistable equilibrium. The molecular components of the death pathway should therefore not be seen as “pro-death” or “anti-death” but as components of the switch.
I sense a paradigm shift here.
Classical biochemistry/cell biology involves determining the molecular components involved in a process. Molecule X repairs DNA damage, molecule Y helps cells stop dividing when there is DNA damage, and molecule Z relieves the inhibition, allowing cells to finally divide. The motto of our field for the last 30 years has been “find the molecules”. But X, Y and Z are interlocking components – a better way of looking at them is as parts of a module that has certain properties (it can act as a switch, for instance). So to understand X, Y and Z you need to understand the module. Maybe a better motto would be “understand the module”.
These modules act independently in each cell, so by studying whole populations we may miss how a module works IN THE CONTEXT OF A SINGLE CELL. To understand this better, we need biochemical assays at the single-cell level, preferably with temporal information too – you need to look at how the molecular state evolves in a single cell over time.
There are (often too) many parameters. Sorger’s group simplified their model by excluding how the molecules are made and destroyed – with each level of complexity you get more equations and more variables, and then many potential models fit the data. What Sorger’s model was good for was indicating which molecules fit together in a module. The system was insensitive to the levels of some molecules while extremely sensitive to the levels of others. Certain molecules function purely to “make things work”, and their concentrations matter little, while other molecules probe the biochemical state of the cell, and their levels have great influence over how cellular decisions are made. The key, then, is to figure out which of these two groups any particular molecule falls into.
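Sorting molecules into those two groups is, in modeling terms, a local sensitivity analysis: nudge each concentration a little and see how much the output moves. A sketch of the idea, using an invented two-species stand-in for a real signaling model:

```python
# Toy sensitivity analysis. "sensor" and "scaffold" are hypothetical
# species: the output depends steeply on the sensor but is saturated
# (buffered) with respect to the scaffold.

def model_output(levels):
    sensor, scaffold = levels["sensor"], levels["scaffold"]
    return (sensor ** 3) * (scaffold / (0.1 + scaffold))

def sensitivities(levels, delta=0.01):
    """Normalized sensitivity: % change in output per % change in input."""
    base = model_output(levels)
    result = {}
    for name in levels:
        bumped = dict(levels)
        bumped[name] *= 1 + delta
        result[name] = (model_output(bumped) - base) / (base * delta)
    return result

s = sensitivities({"sensor": 1.0, "scaffold": 2.0})
# s["sensor"] comes out near 3 (a decision-maker),
# s["scaffold"] near 0.05 (a buffered "make things work" component).
```

In a full model the same perturb-and-measure loop runs over every species and rate constant; the molecules with large normalized sensitivities are the ones whose levels the cell can use to make decisions.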
Lots of voodoo. The models are so complex that we need new ways of presenting how they work. It was very hard to evaluate his approach because with a lot of these in silico findings ... you just have to accept what he says ... and it isn’t apparent what the caveats (and other problems) are. This is highlighted by the fact that crucial factors (such as protein/molecular synthesis and decay) were not taken into account.
The final judge of good science is (IMHO) the insights generated by the work. On that account, Systems Biology (despite what I’ve written in the past) may be good for biology after all.