APS March Meeting self-organizes online

I was ready. I was so ready. I had all my chargers and AV adapters. My presentation was backed up on a USB drive. I had every talk I wanted to go to on my calendar. I had sent emails to professors I wanted to meet and network with. I had reached out to friends I only see once a year, each March in a different city. It was 10 pm on Saturday, February 29. My flight to Denver left at 6 the next morning. Then the email arrived —

URGENT: 2020 APS March Meeting in Denver, CO – CANCELED.
Due to rapidly escalating health concerns relating to the spread of the coronavirus disease (COVID-19), the 2020 APS March Meeting in Denver, CO, has been canceled. Please do not travel to Denver to attend the March Meeting. More information will follow shortly.

That was it. All the preparation, all the planning, thrown out the window because of the novel coronavirus that was quickly spreading throughout the world. Writing this more than two weeks later, in the face of a bona fide pandemic with entire countries locked down, I can see the decision was prescient and prudent. At the time, it felt a bit like being forced to eat vegetables as a child.

But as soon as the APS March Meeting was canceled, would-be participants started self-organizing their own virtual conferences. Fittingly, the Division of Soft Matter (DSOFT), which largely studies self-organization, led the charge. Professor Karen Daniels from NC State University used Twitter to announce that plans were underway to host virtual talks, and one day later had a place for people to sign up to host and give the talks they had prepared for Denver. The Division of Biological Physics (DBIO) followed suit, led by Professor Philip Nelson from the University of Pennsylvania. By the end of the week, DSOFT and DBIO had held a remarkable 60 sessions online, all thanks to grassroots organization over social media and word of mouth.

Zoom was the platform of choice, with BlueJeans and Google Hangouts also seeing use. The first virtual session I attended was (appropriately) The Physics of Social Interactions, hosted by Professor Orit Peleg. I was shocked to see over 50 people in the Zoom meeting, but even more shocked at how smoothly the entire process unfolded. Professor Peleg kept the speakers on time, and the “Raise hand” feature on Zoom made asking and answering questions easy and painless. The first virtual DBIO session was the Delbrück Prize session, in which Professor Jim Collins from MIT was honored for his work in synthetic biology, including some pertinent work on cheap and fast testing for various diseases. That session had over 100 participants and went just as smoothly as the smaller ones. I also gave my own talk virtually, to an audience of five, but I’ll put my ego aside and be happy that I gave it at all.

These virtual talks do not fully replicate the conference-going experience, but they come close. In my experience, many people attending talks are already on their laptops tending to their own business, so listening to a virtual talk while answering emails from the comfort of one’s desk felt familiar. When it comes to gathering scientific information, the virtual meeting was a great success. Even those who would have been unable to attend the meeting in person were able to see talks of interest to them. However, as one might expect, the social component of the meeting was largely lost. Hallways are where the magic really happens at conferences, and the closest approximation the virtual APS meeting had was Twitter. Nevertheless, a quick email to a speaker whose talk you liked accomplishes much of the networking one would otherwise do in person.

While it’s hard to find a silver lining in COVID-19, these virtual meetings will almost certainly open the door for future scientific meetings to include virtual talks from those who cannot attend in person. As scientists, we should model what it looks like to lower our carbon footprint without impeding professional advancement. In addition, virtual talks give access to meetings for those who cannot physically travel for financial, familial, and/or physical reasons, promoting the inclusion of traditionally marginalized populations in the larger scientific community. As more meetings get corona-canceled, such as the upcoming APS April Meeting, virtual conferences will only become more streamlined and normalized. I personally believe this will change the way scientists around the world communicate for the better, even if it took a pandemic to get here.

Spell Checking Biology

Original paper: Kinetic Proofreading: A New Mechanism for Reducing Errors in Biosynthetic Processes Requiring High Specificity, by J. J. Hopfield. PNAS (1974)


Cells are sacks of chemicals that, through the trials and tribulations of evolution, have gained the ability to read information from their environment and produce an output that helps the larger organism survive. Mishandling this information can lead to cell malfunction, mutation, or death, so it is important to understand how this processing works and how often it fails. Simple thermodynamic calculations estimate error rates thousands of times higher than those actually observed (lucky for us!). Cells must still obey the laws of thermodynamics, so some unknown intermediate process must be dramatically reducing the number of errors in information processing. The question then becomes: what is that process? In a classic paper from 1974, J. J. Hopfield gives us an answer in a process he called kinetic proofreading. In doing so, Hopfield introduced the field of biophysics to the fundamental trade-offs that cells must make between energy consumption, the accuracy of a decision, and the speed with which it can be made.

Before getting into kinetic proofreading, let’s get a better feel for our process of interest: protein synthesis. The central dogma of molecular biology can be summarized as DNA → RNA → proteins. For the sake of simplicity, we will focus on the RNA → protein step. Like DNA, RNA molecules are long polymers that can be written down (or coded) as a simple sequence of letters. What’s really important is each three-letter group, called a codon. As the name suggests, each codon is a three-letter code associated with a specific amino acid [1]. When chained together in the order dictated by the RNA, amino acids form a protein. Proteins then go on to perform almost every biological function you can imagine.

The RNA and the proteins are the input and output for our simplistic thermodynamic error estimate above (the one that predicts too many mistakes). Well, it turns out that this picture isn’t quite complete — there is also an emissary between the RNA and the amino acids called “transfer” RNA, or tRNA. The tip of the tRNA binds directly to the right amino acid, holding it in place as the growing protein is formed.

It is extremely important that tRNA can (1) bind the right amino acid and (2) hold on to it long enough to build the protein. Let’s call the tRNA binding site c and the amino acid X. When c and X meet, they create a combined unit cX, which can then be incorporated into a protein. This can be written as the following reaction equation:

c + X \underset{k_{off}}{\overset{k_{on}}{\rightleftharpoons}} cX \overset{W}{\rightarrow} \text{protein}.

Let’s step through it. It says that X and c combine at an on-rate k_{on} to form the combined product cX, which we can think of as the amino acid attached to the tRNA molecule. cX can then either break apart at an off-rate k_{off}, or go on to create the protein at a rate W. It turns out that k_{on} doesn’t vary much between amino acids, but the off-rates do. Experiments have shown that tRNA bound to the correct amino acid has a lower off-rate, giving it more time to be incorporated into the correct protein. The tRNA unbinds a wrong amino acid faster, but the wrong one still might end up in a protein by accident. You can think of the wrong amino acid as being more “slippery” than the right one — tRNA can grab either one, but it is harder to hang on to the wrong amino acid. Using these differences in off-rates, one can estimate the expected error fraction for proteins. The problem is that this estimate is far bigger than measured error fractions! Hopfield hypothesized that something must be actively happening to close the gap. The mechanism he came up with is kinetic proofreading.

The key to kinetic proofreading is to extend the time that the amino acid and the tRNA stay bound. This way, the tRNA is more likely to let go of the wrong, “slippery” amino acid but hang on to the right one. Hopfield proposes putting an intermediate step between cX and the product. However, to be actually useful, going into the intermediate step has to be irreversible. Making something irreversible means breaking time-reversal symmetry, which requires the consumption of free energy, usually in the form of “burning” ATP, the fuel used by cells. By burning this fuel, a slightly different form of cX is created; let’s call it cX*. This new combined product is the one that then goes on to make the final product. By adding this new intermediate step — the act of using up ATP to create cX* — the cell gets to run the off-rate discrimination between right and wrong amino acids twice: once before the ATP is burned and once after. Each pass contributes its own factor to the error rate, so the error fraction is roughly squared, a dramatic reduction.
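To get a feel for the numbers, here is a minimal sketch in Python. The rate constants are illustrative values I made up, not Hopfield’s measured ones, and the two-pass error is computed under the idealization that the proofreading step is perfectly irreversible:

```python
# Illustrative (made-up) rates: the wrong amino acid is more "slippery",
# unbinding 100 times faster than the right one.
k_off_right = 1.0    # 1/s, off-rate for the correct amino acid
k_off_wrong = 100.0  # 1/s, off-rate for the wrong amino acid
W = 0.01             # 1/s, rate of moving forward toward the protein

def error_fraction(n_passes):
    """Error fraction when off-rate discrimination is applied n_passes times.

    On each pass, a bound amino acid moves forward (rather than unbinding)
    with probability W / (W + k_off); the error fraction is the wrong/right
    ratio of those probabilities, applied once per pass.
    """
    p_right = W / (W + k_off_right)
    p_wrong = W / (W + k_off_wrong)
    return (p_wrong / p_right) ** n_passes

print(f"one-step scheme:        error fraction ~ {error_fraction(1):.1e}")
print(f"with proofreading step: error fraction ~ {error_fraction(2):.1e}")
```

With these numbers the single-step error fraction comes out near 10^{-2} and the proofread one near 10^{-4}: the same discrimination, applied twice, squares the error.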

This argument, that spending free energy can buy accuracy in protein synthesis, has become a staple of biophysics. By consuming energy, cells operate out of equilibrium and can accomplish tasks far more accurately than equilibrium physics would allow. The price, beyond the energy itself, is speed: the decision, here making the right protein, is made more slowly. These energy-speed-accuracy trade-offs are essential not only for protein synthesis but also for DNA replication and for T-cells recognizing foreign invaders to trigger immune responses. Equilibrium is death. By consuming energy from our surroundings, we are able to fend off the onslaughts of entropy and remain alive.


[1] There are 20 standard amino acids, and several different codon sequences can code for the same amino acid.

 

Scaling up biology

Original paper: A General Model for the Origin of Allometric Scaling Laws in Biology, by Geoffrey B. West, James H. Brown, and Brian J. Enquist. Science (1997)


Physics is a discipline that attempts to develop a unifying, mathematical framework for understanding diverse phenomena. It connects things as different as planets orbiting the sun and a ball thrown through the air by showing that both these motions come from a single equation [1]. Living things do not seem to obey such simplicity, but hidden beneath all the diversity and complexity of life are remarkably universal patterns called scaling laws. In a landmark 1997 paper by Geoffrey West, James Brown, and Brian Enquist, a simple explanation is given for how all organisms, from fleas to whales to trees, can be thought of as non-linearly scaled versions of each other.

A scaling law tells you how a property of an object, say the rate at which energy is consumed by an organism (its metabolic rate), changes with the object’s size. Empirically, many such quantities scale as a power law of the mass,

A \propto M^{\alpha}    (Eq. 1)

where \alpha is some number that, from the data, always seems to be a multiple of 1/4 [2]. West, Brown, and Enquist build a theory showing how biology could have arrived at these quarter-power laws, but in this article I’m going to focus on one specific example. I’ll walk through the authors’ argument for how the metabolic rate, the rate at which an organism consumes energy, scales with an exponent of 3/4. They show that it all comes out of some basic assumptions about the networks that distribute nutrients through your body — your circulatory system [3].

These networks are assumed to have two characteristics [4]. First, they are space-filling fractals: shapes made of smaller, repeating versions of themselves no matter how far you zoom in. However, our fractal blood vessels can’t get arbitrarily small; they end in a “terminal unit”, the capillary. Second, all terminal units are the same size, regardless of the size of the organism. With these two assumptions, the authors are able to derive the 3/4 power law for metabolic rate.

Figure 1: Cartoon of a mammalian circulatory system on the left, which can be represented as a branching network model on the right. Adapted from Figure 1 of the original paper.

First, let’s build up a picture of what these networks look like. Figure 1 shows how the circulatory system can be thought of as a network structured into N levels, where each level k has N_k tubes. At each level, a tube breaks into a number (m_k) of smaller tubes. Each one of these tubes is idealized as a perfect cylinder with length l_k and radius r_k, as shown in Figure 2.

Figure 2: Illustration of the different parameters that each tube on the kth level of the network has. Adapted from Figure 1 of the original paper.

How does blood move through this network? Well, the blood flow rate at each level of the network must be equal to the blood flow rate at every other level. Otherwise, you would have the equivalent of traffic jams in your arteries. You don’t want those. If the blood flow speed through one tube in the kth level is u_k, the blood flow rate through the entire kth level is

\dot{Q}_k = N_k \pi r_k^2  u_k = N_{cap} \pi r_{cap}^2 u_{cap} = \dot{Q}_{cap}    (Eq. 2)

Your metabolic rate, B, depends on the flow rate through your capillaries, \dot{Q}_{cap}, so the authors assume that the two are proportional to each other: B \propto \dot{Q}_{cap}. Because all terminal units are the same size, the only variable left in Eq. 2 to relate to an animal’s mass is N_{cap}. Assuming that B scales like B \propto M^{\alpha}, the authors predict

N_{cap} \propto M^{\alpha}    (Eq. 3)

Figure 3: Schematic of a branching point along the network, illustrating the definitions of the ratios \beta_k and \gamma_k. In this case, m_k = 2.

To figure out the value of the exponent \alpha, the key is to express N_{cap}, which depends on the size of the organism, in terms of the capillary dimensions r_{cap} and l_{cap}, which do not. To do this, the authors use relations derived from the self-similar geometry of the fractal network. When a tube breaks into smaller tubes, it does so with a ratio between successive radii, \beta_k = r_{k+1} / r_k, and another ratio between successive lengths, \gamma_k = l_{k+1}/l_k. This is illustrated in Figure 3. Because the network is fractal, the number of tubes each branch breaks into, m_k, the ratio of radii, \beta_k, and the ratio of lengths, \gamma_k, are all assumed to be constant for every k,

\beta_k = \beta, \; \gamma_k = \gamma, \; m_k = m \;\; \forall k

Since, at every level, each branch breaks into m smaller branches, the total number of capillaries (i.e., the number of branches at level N) is N_{cap} = m^N. Plugging this into Eq. 3 and solving for \alpha,

\alpha = \frac{N \ln(m)}{\ln(M/M_0)}    (Eq. 4)

where M_0^{\alpha} is the proportionality constant between N_{cap} and M^{\alpha}. Remember, we’re trying to show that \alpha = 3/4.

Now that N_{cap} has been rewritten in terms of network properties, the authors next relate the network to the organism’s mass, M. To do this, they use the empirical fact that the total volume of blood, V_b, is proportional to the total mass of the organism, V_b \propto M. The total volume of blood is given by:

V_b = \sum_{k=0}^N V_k N_k = \sum_{k=0}^N \pi r_k^2 l_k m^k \propto \left( \gamma \beta^2 \right)^{-N} \propto M    (Eq. 5)

In the above equation, the first proportionality sign (summing the series) requires a short calculation. The main idea is that, because the ratios r_{k+1} / r_k and l_{k+1}/l_k are constant and the capillary dimensions are fixed, the sum in Eq. 5 becomes a geometric series that can be summed analytically. Plugging the final proportionality from Eq. 5 into Eq. 4,

\alpha = - \frac{\ln(m)}{\ln(\gamma \beta^2)}    (Eq. 6)
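As a sanity check on the series argument behind Eq. 5, here is a small numerical sketch (my own, not from the paper). The branching ratios are arbitrary values satisfying m\beta^2\gamma < 1, and the network is anchored at fixed-size capillaries, per the terminal-unit assumption:

```python
import numpy as np

# Verify that V_b = sum_k N_k * pi * r_k^2 * l_k scales as (gamma*beta^2)^(-N)
# when the capillaries (level N) have fixed dimensions:
#   r_k = r_cap * beta**(k - N),  l_k = l_cap * gamma**(k - N),  N_k = m**k.
m, beta, gamma = 2, 0.6, 0.75      # arbitrary choices with m*beta**2*gamma < 1
r_cap = l_cap = 1.0                # terminal units: same size in every network

for N in (5, 10, 20, 40):
    k = np.arange(N + 1)
    V_b = np.sum(m**k * np.pi * (r_cap * beta**(k - N))**2 * l_cap * gamma**(k - N))
    # If Eq. 5 holds, this ratio approaches a constant as N grows:
    print(f"N = {N:2d}:  V_b * (gamma * beta**2)**N = {V_b * (gamma * beta**2)**N:.4f}")
```

The printed ratio settles to a constant, which is all the proportionality in Eq. 5 claims.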

To make further progress, we have to know something about \gamma and \beta. Every tube of the network delivers nutrients to a group of cells. As every good physicist does, the authors assume that this group of cells has the volume of a sphere with a diameter equal to the length of the tube. The volumes serviced by each successive level are approximately equal to each other, 4/3 \pi (l_{k+1} / 2)^3 N_{k+1} \approx 4/3 \pi (l_k / 2)^3 N_k. From this, they get an expression for \gamma:

\gamma_k^3 \equiv \left(\frac{l_{k+1}}{l_k}\right)^3 \approx \frac{N_k}{N_{k+1}} = \frac{1}{m}    (Eq. 7)

which means

\gamma \approx m^{-1/3}

Now the authors move on to \beta. Earlier, I argued that the flow rate has to be the same from one level of the network to the next to avoid “traffic jams” of blood. Since the tubes are assumed to be perfect cylinders, this boils down to the cross-sectional area of a parent tube being equal to the total cross-sectional area of its daughter tubes, \pi r_k^2 = \pi r_{k+1}^2 m. From this, the authors find an expression for \beta:

\beta_k^2 \equiv \left( \frac{r_{k+1}}{r_k} \right)^2 = \frac{1}{m}     (Eq. 8)

Similar to the expression for \gamma, this means

\beta \approx m^{-1/2}

Plugging in the expressions for \gamma and \beta in terms of m, we finally arrive at our desired result:

\alpha =  - \frac{\ln(m)}{\ln(\gamma \beta^2)} = - \frac{\ln(m)}{\ln(m^{-1/3}(m^{-1/2})^2)} = 3/4    (Eq. 9)
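Putting the whole chain of reasoning together, here is a short numerical sketch (again my own, not the authors’ code) that builds networks of increasing depth N with the derived ratios, uses blood volume as a proxy for mass, and fits the exponent relating N_{cap} to M:

```python
import numpy as np

# Build capillary-anchored networks with beta = m**(-1/2), gamma = m**(-1/3)
# (Eqs. 7-8), compute N_cap = m**N and V_b (proportional to mass M), and fit
# the slope of ln(N_cap) against ln(M). It should come out near 3/4.
m = 3
beta, gamma = m ** (-1 / 2), m ** (-1 / 3)

log_Ncap, log_M = [], []
for N in range(5, 30):
    k = np.arange(N + 1)
    V_b = np.sum(m**k * np.pi * beta ** (2 * (k - N)) * gamma ** (k - N))
    log_Ncap.append(N * np.log(m))   # ln(N_cap) = N ln(m)
    log_M.append(np.log(V_b))        # ln(M), up to an additive constant

alpha = np.polyfit(log_M, log_Ncap, 1)[0]
print(f"fitted alpha = {alpha:.3f}")   # approaches 0.75 as N grows
```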

What West and his colleagues have done is use the fact that all organisms have to deliver nutrients to their individual parts to derive a general, universal scaling law. The authors go on to show that when you add a pump to the system, such as our heart, the analysis may get more complicated, but the ultimate result remains unchanged. All living things, regardless of size, seem to have arrived at the same solution for nutrient supply, building systems that are space-filling, fractal, and have the same size “terminal units”. Turns out we’re not so different after all.


[1] F = G m_1 m_2 / r^2.

[2] For example:

  • \alpha = 3/4 for the cross-sectional area of mammalian aortas and of tree trunks
  • \alpha = -1/4 for cellular metabolic rate, heart rate, and population growth rate
  • \alpha = 1/4 for blood circulation time, life span, and embryonic growth rate

[3] All the arguments hold for other distribution systems, such as our pulmonary system, plant vascular systems, and insect respiratory systems.

[4] There’s an additional assumption that the network is designed to minimize energy, but that won’t come into play in the part of the authors’ arguments I present here.

How the leopard got its spots

Original paper: Alan Turing, The Chemical Basis of Morphogenesis, Philosophical Transactions of the Royal Society B (1952)

The human brain has evolved an arguably overactive habit of finding patterns. Who hasn’t spent an afternoon morphing clouds into various shapes? Similarly, groups of stars are transformed into bears, ladles, and warriors by ancient mystics and modern stargazers alike. Pattern recognition becomes more obviously useful when looking at living things, where recognizing the difference between a harmless garter snake and a deadly asp is a skill worth having. We identify zebras by their stripes and leopards by their spots. But biology doesn’t just use external patterns. Organisms use internal, self-generated patterns to guide body formation. During the formation of a mouse’s paw, what will eventually become fingers are blueprinted by the concentrations of certain proteins. Having the correct amount of each protein is critical to creating the correct number and shape of fingers (Figure 1).

Figure 1: The formation of fingers in the foot of a mouse requires a careful patterning of different proteins during mouse development. If the amount of these proteins changes, so does the number of eventual fingers. From Sheth et al., Science (2012).

In the mouse, these patterns are formed partly by genetic expression and partly by physical interactions between proteins. In today’s paper, we look at one of the first explanations of these physical interactions, given by the great mathematician Alan Turing. In his classic 1952 paper, The chemical basis of morphogenesis, Turing provides a simple explanation for how chemicals, like proteins, can interact with each other under known laws of physics to create striking patterns from a blank, uniform canvas.

Turing creates a pair of equations describing the rates of change of two chemical species, X and Y, which he calls morphogens. Equations of this type are called reaction-diffusion equations. To understand Turing’s conclusions, I’ll first describe what is meant by reaction and diffusion, and then put these concepts together to give a physical picture of how they can produce stripes. (See the appendix for a more detailed, mathematical description of the same phenomenon.)

The reaction part of a reaction-diffusion equation describes the interactions between X and Y. A generic pair of coupled reaction equations will look like:

(\partial X / \partial t)_{reaction} = \alpha X + \beta Y

(\partial Y / \partial t)_{reaction} = \gamma X + \delta Y

Let’s look at just the first equation. On the left-hand side is the time derivative of the concentration of X, which measures how X changes in time depending on what is on the right-hand side. On the right-hand side, we see that the change in X over time depends on both X itself and Y, multiplied by the constant coefficients \alpha and \beta.

\alpha tells us how strongly X affects itself, while \beta tells us how strongly Y affects X. If \beta is positive, we call Y an activator of X. If \beta is negative, we call Y an inhibitor of X (similarly, the sign of \alpha tells us whether X activates or inhibits itself, and the coefficients \gamma and \delta describe how X and Y affect Y). We can then play around with different combinations of signs for these coefficients to see how the system evolves in time.

The diffusion part of the reaction-diffusion equations describes how morphogens will move, on average, when they aren’t evenly distributed in space. The morphogens will move from areas of high concentration to areas of low concentration until everything is even, the same way a puff of perfume spreads across a room. The main difference between two morphogens is how quickly they move from high concentration to low concentration, which is quantified by a diffusion constant, D. If D is large, morphogens move rapidly, and if D is small, morphogens move slowly.

Turing found that a stable pattern can arise if we have a fast inhibitor and a slow activator. To be concrete, let’s call X the slow activator and Y the fast inhibitor [1]. This means that D_X < D_Y, with \alpha, \gamma > 0 and \beta, \delta < 0 in the reaction equations shown above. Therefore more X creates more of both X and Y, and more Y destroys both X and Y (Figure 2).

Figure 2: Reaction network for X and Y in the activator-inhibitor model. The sign of the coefficient that couples \partial X/\partial t or \partial Y/\partial t to X or Y determines what kind of arrow is used in the diagram. A positive coupling (activation) is shown with a pointed end (→), while a negative coupling (inhibition) is shown with a flat end (⊣).
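We can check this claim directly with a linear stability analysis. The sketch below (my own parameter choices, not Turing’s) perturbs the uniform state with a wave of wavenumber q and asks whether it grows. A Turing instability means stability at q = 0, so no patterning without diffusion, but growth for a band of intermediate q:

```python
import numpy as np

# Dispersion relation for the linearized activator-inhibitor system:
# perturbations ~ exp(i*q*x + lambda*t) grow when the largest eigenvalue
# of the Jacobian (reaction terms minus diffusive damping) is positive.
alpha, beta = 1.0, -2.0     # X activates itself; Y inhibits X
gamma, delta = 1.0, -1.5    # X activates Y; Y inhibits itself
D_X, D_Y = 0.05, 1.0        # slow activator, fast inhibitor

for q in np.linspace(0.0, 6.0, 13):
    J = np.array([[alpha - D_X * q**2, beta],
                  [gamma, delta - D_Y * q**2]])
    growth = np.linalg.eigvals(J).real.max()
    print(f"q = {q:4.1f}   max growth rate = {growth:+.3f}")

# The printout is negative at q = 0, positive for intermediate q, and
# negative again at large q: only a band of wavelengths can form stripes.
```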

Let’s start with a one-dimensional, uniform distribution of both X and Y. Wherever Y exists, both X and Y get destroyed, and wherever X exists, both X and Y get created. If the distributions of X and Y are exactly equal, and they create and destroy at the same rates, then nothing will change. In reality, there will be very small variations in both X and Y over space [2]. These small variations lead to slightly more X or Y in particular spots (Figure 3).

Where more X exists, there will be net creation of both X and Y. Since X moves around more slowly, it stays in the same spot longer, while Y quickly moves away. The lingering X produces even more X and Y, creating a region of excess X. The Y that has moved to neighboring areas represses the production of both X and Y there. Eventually, however, the concentration of Y decreases enough for the production properties of X to dominate again, and the whole cycle repeats. Thus, a static wave pattern is created from an initially (almost) uniform distribution (Figure 3).

Figure 3: Schematic of pattern formation using reaction-diffusion equations. First, a small excess of the activator (X) appears in zone 0. This leads to an increased amount of both the activator (X) and the inhibitor (Y) in zone 0. The inhibitor diffuses outwards faster than the activator. This diminishes the activator in zones ±1. Within zones ±1, the inhibitor also inhibits itself, leading to new regions, zones ±2, with an excess of activator again. Eventually, the system reaches a steady state: a wave pattern that no longer changes in time.
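The same story can be watched unfolding in a simulation. Here is a minimal 1D sketch of the linearized equations with the same made-up parameters as above; because the linear equations lack the nonlinear saturation terms of a full model, amplitudes grow without bound, so we run just long enough for the selected wavelength to take over:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dx, dt, steps = 200, 0.2, 0.001, 20_000
alpha, beta, gamma, delta = 1.0, -2.0, 1.0, -1.5
D_X, D_Y = 0.05, 1.0

# Start from tiny random deviations away from the uniform state
X = 1e-3 * rng.standard_normal(n)
Y = 1e-3 * rng.standard_normal(n)

def laplacian(f):
    """Second spatial derivative with periodic boundary conditions."""
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2

for _ in range(steps):
    dX = alpha * X + beta * Y + D_X * laplacian(X)
    dY = gamma * X + delta * Y + D_Y * laplacian(Y)
    X, Y = X + dt * dX, Y + dt * dY

# The noise has organized into a wave; its wavenumber sits inside the
# unstable band found in the dispersion-relation check above.
spectrum = np.abs(np.fft.rfft(X))
q_vals = 2 * np.pi * np.fft.rfftfreq(n, d=dx)
print(f"dominant wavenumber q = {q_vals[spectrum[1:].argmax() + 1]:.2f}")
```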

That gets at the gist of Turing’s argument. It truly is an amazing discovery because of the role diffusion plays. Usually, diffusion is considered a process that “smooths” things out. As stated above, it tends to move material from regions of high concentration to regions of low concentration, until the same concentration is left everywhere. Here, it does the exact opposite: diffusion helps create pockets of very high morphogen concentration. Turing’s genius was to recognize that diffusion could do this at all. It’s not only an equalizer; it can create order from chaos.

The wave described above is an example of what can happen in one dimension. The pattern evolves until a wave is set up, and then stops changing. With slight tweaks to the parameters in the equations, more interesting patterns — ones that never stop changing in time — are also possible. Even more striking is the role that the size and shape of the domain play in the resulting patterns. Depending on the starting parameters, a number of different patterns can emerge, such as labyrinths, stripes, and spots (Figure 4).


Today, Turing’s model serves as the foundation for more realistic models of pattern formation. There hasn’t been much evidence for the exact activator-inhibitor mechanism that Turing suggested, but by adding more than two chemicals or adjusting the parameters Turing put forth, reaction-diffusion models have been used to help explain everything from the patterns on cat fur to the number of fingers on mouse paws. Turing’s model shows that mathematical analysis is a valid way to explain some of the most striking and complex biological phenomena seen in nature.


[1] We can make this arbitrary choice because, for right now, X and Y are just labels. As long as one is a fast inhibitor and one is a slow activator, we get to describe them with whatever label we want.

[2] The reason for this is statistics. Think of flipping 500 coins. You know that you should expect 250 heads and 250 tails, but the probability of getting exactly that outcome is incredibly small (more likely you’ll be close: 248 heads and 252 tails, or 240 heads and 260 tails). For the same reason, we may expect the concentrations of X and Y to be perfectly uniform, but in reality there will be small variations from place to place.


Appendix

A more mathematical explanation of Turing patterns can be found here

Dividing Liquid Droplets as Protocells

Original paper: Growth and division of active droplets provides a model for protocells, by David Zwicker, Rabea Seyboldt, Christoph A. Weber, Anthony A. Hyman, and Frank Jülicher. Nature Physics (2017)


In the beginning there was… what, exactly? Uncovering the origins of life is a notoriously difficult problem. When researchers look at a cell today, they see the highly polished end product of billions of years of evolution-driven engineering. While living cells are not made of any element that can’t be found somewhere else on Earth, they don’t behave like any other matter we know of. One major difference is that cells constantly operate away from equilibrium. To understand equilibrium, consider a glass of ice water. When you put the glass in a warm room, the glass exchanges energy with the room until the ice melts and the entire glass of water warms to the temperature of the room around it. At this point, the water is said to have reached equilibrium with its environment. Despite mostly being made of water, cells never equilibrate with their environment. Instead, they constantly consume energy to carry out the cyclic processes that keep them alive. As the saying goes, equilibrium is death[1]: the cessation of energy consumption can be thought of as a definition of death. The mystery of how non-equilibrium living matter spontaneously arose from all the equilibrated non-living stuff around it has perplexed scientists and philosophers for the better part of human history[2].

An important job for any early cell is to spatially separate its inner workings from its environment. This allows the specific reactions needed for life, such as replication, to happen reliably. Today, cells have a complicated cell membrane to separate themselves from their environment and regulate what comes in and what goes out. One theory proposes that, rather than waiting for that machinery to create itself, droplets within a “primordial soup” of chemicals found on the early Earth served as the first vessels for the formation of the building blocks of life. This idea was proposed independently by the Soviet biochemist Alexander Oparin in 1924 and the British scientist J.B.S. Haldane in 1929[3]. Oparin argued that droplets were a simple way for early cells to separate themselves from the surrounding environment, preempting the need for the membrane to form first.

In today’s paper, David Zwicker, Rabea Seyboldt, and their colleagues construct a relatively simple theoretical model for how droplets can behave in remarkably life-like ways. The authors consider a four-component fluid with components A, B, C, and C’, as shown in Figure 1. Fluids A and B comprise most of the system, but phase separate from each other such that a droplet composed of mostly fluid B exists in a bath of mostly fluid A. This kind of system, like oil droplets in water, is called an emulsion. Usually, an emulsion droplet lives a very boring life. It either grows until all of the droplet material is used up, or evaporates altogether. However, by introducing chemical reactions between these fluids, the authors are able to give the emulsion droplets in their model unique and exciting properties.

 

Fig. 1: Model schematic. A droplet composed mostly of fluid B (green) within a bath of fluid A (blue). Inside the droplet, B degrades into A. Outside the droplet, fluids C and A react to form fluids B and C’. Adapted from Zwicker and colleagues.

 

The chemical reactions in the model are fairly simple (see Figure 1). Fluid B spontaneously degrades into fluid A, which diffuses out of the droplet. While fluid A cannot easily turn back into fluid B (since spontaneous degradation means going from a high-energy state to a low one), fluid C can react with A to create fluids B and C', and this fluid B can diffuse back into the droplet:

B \to A \quad \text{and} \quad A+C \to B+C'

If C and C’ are constantly resupplied and removed, respectively, they can be kept at fixed concentrations. Without C and C’, the entire droplet would disappear by degrading into fluid A, reaching equilibrium. Here, C and C’ act as fuel that constantly drives the system away from equilibrium, creating what the authors dub an “active” emulsion. Active matter systems like this one have had success in describing living things because they, like all living matter, fulfill the requirement of being out-of-equilibrium.

Because the equations that describe how fluids A and B evolve over time are so complicated, the authors solve their model using computer simulations. When they do, something remarkable happens. In a passive emulsion, with no chemical reactions, droplets never stop growing as long as there is more droplet material nearby to gobble up: large droplets grow at the expense of small ones, a process called Ostwald ripening[4]. The authors find that the constant turnover of material in an active emulsion suppresses Ostwald ripening and allows a droplet to maintain a steady size.

In addition to limited growth, the authors also find that the droplets undergo a shape instability that leads to spontaneous droplet division (see this movie). This occurs due to the constant fuel supply of C and C'. The chemical reaction A + C → B + C' creates a gradient in the concentrations of fluids A and B outside the droplet. Just outside the droplet, there is a depletion of B and an abundance of A, while far away from the droplet, A and B reach steady concentrations governed by the rates of their reactions with C and C'. The authors dub the excess concentration of B far away from the droplet the supersaturation. Wherever there is a gradient in the concentration of a material, there is a flow of that material, called a flux. This is the reason a puff of perfume in one corner of a room will eventually spread evenly around the room. The size of the droplet is set by the flux of fluid B into and out of the droplet.

Two quantities determine the evolution of the droplet: the supersaturation, which reaches a steady value once all fluxes stop changing in time, and the rate at which the turnover reaction B → A occurs. For a given supersaturation and turnover rate, the authors can calculate how large the droplet will grow, and they find three distinct regimes. In the first, the droplet dissolves and disappears, as the turnover outpaces the flow of fluid B back into the droplet. In the second, the droplet grows to a limited size and remains stable, since the turnover and supersaturation balance each other and maintain a steady quantity of fluid B. The third and most interesting regime occurs if the droplet grows beyond a certain radius because the influx of fluid B outpaces its efflux. There, the spherical shape is unstable, and any small perturbation results in the elongation and eventual division of the droplet (Figure 2).

 

dropletStabilityDiagram_fig2b
Fig. 2: Stability diagram of droplets for normalized turnover rate \nu_-/\nu_0 vs supersaturation \epsilon. For a given value of \epsilon, the diagram shows regions where droplets dissolve and eventually disappear (white), grow to a steady size and remain stable (blue), and grow to a steady size and begin to divide (red). Adapted from Zwicker and colleagues.
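To see how turnover tames droplet growth, here is a zero-dimensional cartoon (my own toy model, much simpler than the authors’ full reaction-diffusion treatment). Supersaturation \epsilon drives a diffusion-limited influx proportional to the radius, while turnover destroys droplet material throughout the volume; balancing the two gives a steady radius. Surface tension is omitted, so the outright-dissolving regime is not captured, and division is a shape instability that a single radius cannot describe:

```python
import numpy as np

# Toy radius dynamics: influx ~ D*eps*R through the surface, loss ~ nu*R**3
# from turnover B -> A in the bulk. Dividing dV/dt by the surface area gives
#     dR/dt = D*eps/R - (nu/3)*R,
# with a stable fixed point at R* = sqrt(3*D*eps/nu).
D, eps = 1.0, 0.1

def final_radius(R0, nu, dt=1e-3, steps=100_000):
    R = R0
    for _ in range(steps):
        R += dt * (D * eps / R - nu * R / 3.0)
    return R

for nu in (10.0, 1.0, 0.1):   # fast, moderate, slow turnover
    print(f"nu = {nu:5.2f}:  R -> {final_radius(0.5, nu):.3f}   "
          f"(predicted R* = {np.sqrt(3 * D * eps / nu):.3f})")
```

Fast turnover keeps droplets tiny, slow turnover lets them grow large; in the full model, the large droplets are exactly the ones that go shape-unstable and divide.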

 

And that’s it. If you have two materials that phase separate from each other, coupled to a constant fuel source to convert one into the other, controlled growth and division will naturally follow. While these droplets are more sophisticated than regular emulsion droplets, they are still a far cry from even the simplest microorganisms we see today. There is no genetic information being replicated and propagated, nor is there any internal structure to the droplets. Further, the droplets lack the membranes that modern cells use to distinguish themselves from their environments. An open question is whether a synthetic system exists that can test the model proposed by the authors. Nevertheless, these active emulsions provide a mechanism for how life’s complicated processes may have gotten started without modern cells’ complicated infrastructure.

Though many questions still remain, Zwicker and his colleagues have lent considerable credence to an important, simple, and feasible theory about the emergence of life: it all started with a single drop.


[1]: This isn’t exactly true. Some organisms undergo a process called anhydrobiosis, where they purposefully dehydrate and rehydrate themselves to stop and start their own metabolism. Also, some bacteria slow their metabolism to avoid accidentally ingesting antibiotics in a process called “bet-hedging”.

[2]: For example, ancient Greek natural philosophers such as Democritus and Aristotle believed in the theory of spontaneous generation, eventually disproven by Louis Pasteur in the 19th century.

[3]: Oparin, A. I. The Origin of Life. Moscow: Moscow Worker Publisher (1924) (in Russian); Haldane, J. B. S. The origin of life. Rationalist Annual 148, 3–10 (1929).

[4]: Ostwald ripening is a phenomenon observed in emulsions (such as oil droplets in water) and even crystals (such as ice) that describes how the inhomogeneities in the system change over time. In the case of emulsions, it describes how smaller droplets will dissolve in favor of growing larger droplets.