Sunday 25 August 2013

Topological Insulators- The New Physics

We have all heard of conductors and insulators, and some of us are familiar with magnets, semiconductors or even superconductors; they are all manifestations of electronic band structure. But what about the topological insulator? Topological insulators conduct on the outside but insulate on the inside, much like a plastic wire wrapped in a metallic layer. Weirder still, they carry a 'spin current', in which the conducting electrons sort themselves into spin-down electrons moving in one direction and spin-up electrons moving in the other. The topological insulator is an exotic state arising from quantum mechanics: the spin-orbit interaction combined with invariance (symmetry) under time reversal. What's more, it has topologically protected surface states that are immune to impurities.

So how can we understand this 'new' physics? The insulating state has a conductivity of exactly zero near absolute zero because an energy gap separates the vacant and occupied electron states. The quantum Hall state (QHS), also near absolute zero, has a quantised Hall conductance (the ratio of the current to the voltage orthogonal to the current flow). Unlike materials such as ferromagnets, whose order arises from a broken symmetry, topologically ordered states are distinguished by the wound-up quantum states of their electrons (and this is what protects the surface state). The QHS (the most basic topologically ordered state) occurs when electrons trapped at a 2-D interface between a pair of semiconductors encounter a strong magnetic field. The field makes the electrons 'feel' an orthogonal Lorentz force, so they move around in circles (like electrons confined to an atom). Quantum mechanics replaces these circular orbits with discrete energies, so an energy gap separates the vacant and occupied states just as in an insulator.
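The quantised Hall conductance mentioned above comes in integer multiples of e^2/h, the conductance quantum contributed by each edge mode. A minimal sketch (the filling factors chosen here are illustrative, not measured values):

```python
# Hedged sketch: Hall conductance of the quantum Hall state at integer
# filling n is sigma_xy = n * e^2 / h. Constants are the exact SI-2019
# CODATA values.

E_CHARGE = 1.602176634e-19   # elementary charge, C
H_PLANCK = 6.62607015e-34    # Planck constant, J s

def hall_conductance(n):
    """Hall conductance (in siemens) at integer filling factor n."""
    return n * E_CHARGE**2 / H_PLANCK

for n in (1, 2, 3):
    print(f"n = {n}: sigma_xy = {hall_conductance(n):.4e} S")
```

Each step is exactly e^2/h (about 3.87e-5 S), which is why the plateaux are so strikingly flat: the value depends only on fundamental constants, not on the sample.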

However, at the boundary of the interface, the electrons' circular motion can rebound off the edge, creating so-called 'skipping orbits'. At the quantum scale, these skipping orbits become electronic states that propagate along the boundary in one direction only, with energies that are not discrete; this state can conduct because there is no energy gap. Moreover, the one-way flow makes for perfect electrical transport: the electrons have no option but to move forward, because no backward-moving modes exist. Dissipationless transport emerges because the electrons cannot scatter, so no energy or work is lost (this also explains the quantised transport). But topological insulators occur without a magnetic field, unlike the quantum Hall effect; the job of the magnetic field is taken over by spin-orbit coupling (the interplay between an electron's orbital motion through space and its spin). In atoms of high atomic number the electrons are relativistic and the spin-orbit forces are strong; an electron then experiences a strong spin- and momentum-dependent force that plays the part of the magnetic field (when the spin changes, the force's direction changes). This analogy between a spin-dependent magnetic field and spin-orbit coupling lets us introduce the most basic 2-D topological insulator: the quantum spin Hall state. It occurs when the spin-up and spin-down electrons experience equal but opposite 'magnetic fields'.

Just as in a regular insulator, there is an energy gap, but there are also edge states in which the spin-up and spin-down electrons propagate in opposite directions. Time-reversal invariance exchanges both the direction of spin and the direction of propagation, hence swapping the two counter-propagating modes. The 3-D topological insulator, however, can't be explained by a spin-dependent magnetic field. The surface state of a 3-D topological insulator allows electrons to move in any direction, but the direction of motion determines the spin direction. The relation between momentum and energy has a Dirac-cone structure, as in graphene.

Wednesday 21 August 2013

Gravitational Waves- Einstein's Final Straw

Like ripples through a rubber sheet, gravitational waves squeeze and stretch spacetime and move outwards at the speed of light. They are still up for grabs: an exotic prediction of general relativity yet to be observed, yet one with profound implications for cosmology and astrophysics. Picture a star in a relativistic orbit around a supermassive black hole: it may continue so for thousands of years, but never forever. Even neglecting drag due to gas, the orbit would gradually lose energy until the star spiralled into the hole; the reason for this plunge is the emission of gravitational radiation. We know that if the shape or size of an object is altered, so is the gravity surrounding it; Newton realised the sphere was an exception, since the gravitational field outside it is invariant (remains the same) if the sphere merely expands or contracts. Changes in the gravitational field can't spread out instantly, because this would convey information about the shape and size of an object at superluminal speeds (which is forbidden by relativity). If the Sun were somehow to alter its shape, and with it the gravitational field around it, 8 minutes would elapse before the effect was 'felt' on the Earth; at very large distances, this shows up as radiation (a wave of changing gravity) moving away from its source. This is analogous to the way fluctuations in an electric field produce electromagnetic waves (a rotating bar with charged ends produces a field that differs depending on whether the bar is seen end-on or sideways-on). But there are two main distinctions between gravitational and electromagnetic waves. Firstly, gravitational waves are exceptionally weak unless very large masses are involved. Diatomic molecules are great emitters of electromagnetic radiation but terrible emitters of gravitational waves.
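The '8 minutes' figure is just the light travel time from the Sun, since changes in the gravitational field propagate at c. A quick check:

```python
# Rough check of the '8 minutes' delay: one astronomical unit divided
# by the speed of light.

AU = 1.495978707e11   # mean Earth-Sun distance, m
C  = 2.99792458e8     # speed of light, m/s

delay_s = AU / C
print(f"delay = {delay_s:.0f} s = {delay_s/60:.1f} min")  # ~8.3 minutes
```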
Because there is no such thing as negative mass (negative gravitational charge) to neutralise (or cancel out) positive ones, as there is in electricity, gravity dominates over electromagnetism on large scales. This lack of negative gravitational charge gives gravity its advantage, but it carries a deep irony: it also weakens an object's ability to produce gravitational radiation. Which brings us to the second difference between gravitational and electromagnetic waves:

The most productive (i.e. efficient) way of making electromagnetic radiation is for the 'centre of electric charge' to wobble in relation to the centre of mass. Dipole radiation is an example: a spinning bar whose ends carry opposite charges, positive on one end and negative on the other. But the Equivalence Principle (which dictates that gravitation is indistinguishable from acceleration, much as a rising lift makes you feel heavier while a descending one makes you feel lighter) also says that everything exerts a gravitational force equal to its inertial mass; hence at any point in spacetime, all bodies experience the same gravitational acceleration. Translating into English: the 'centre of gravitational charge' is really just the centre of mass, and since the former can't wobble relative to the latter, dipole gravitational radiation can't exist. The gravitational analogue of the spinning bar has positive 'charge' at both ends, so the centre of charge stays fixed at the centre; only low amounts of radiation are produced, owing to the quadrupole moment (the only quantity that changes: it describes the distribution of shape and charge). Because of gravitational radiation, binary systems lose energy and their orbital periods shrink progressively, causing the component stars to coalesce; when two black holes meet, their event horizons combine into a larger one and, in accordance with the 'no hair' theorems, the merged hole settles into a state described by the Kerr metric (a hole with only mass and spin).

But the detection of such gravitational radiation (or waves) is causing a stir; it is Einstein's final straw. Any object in the path of a gravitational wave experiences a tidal gravitational force acting transverse (perpendicular) to the direction in which the wave travels. If a gravitational wave met a circular hoop head-on, the hoop would be contorted into an ellipse. In Louisiana, the LIGO detector uses laser interferometry: a laser beam is divided and reflected off mirrors attached to masses kilometres away, arranged perpendicular to one another (an L shape). If a gravitational wave were to arrive, it would cause the two arm lengths, X and Y, to change. To be continued...
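To get a feel for why this measurement causes such a stir, here is a back-of-the-envelope sketch; the strain amplitude and arm length below are illustrative round numbers, not LIGO specifications:

```python
# Illustrative numbers: a typical expected strain amplitude h ~ 1e-21
# acting on a 4 km interferometer arm. Each arm length changes by
# roughly dL = h * L / 2 (one arm stretches while the perpendicular
# one shrinks).

h = 1e-21   # assumed dimensionless strain amplitude
L = 4e3     # assumed arm length, m

dL = h * L / 2
print(f"arm length change ~ {dL:.1e} m")
```

The change is of order 1e-18 m, hundreds of times smaller than a proton, which is why interferometry over kilometre baselines is needed at all.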

Sunday 18 August 2013

Y Chromosome- An Evolutionary Curiosity

The X and Y chromosomes are an odd couple. But the Y reads more like a rule-breaker of human genetics: most of it refuses to recombine, more than half of it consists of tandem repeats of satellite DNA, and it's not a prerequisite for life (females don't have or need one). So why bother with a chromosome that tells us about only 50% of the population (assuming a 1:1 sex ratio)? Because it passes directly from father to son, its sex-determining role makes it specific to males and haploid. It contains vast numbers of unique SNPs, with some notable exceptions: two pseudoautosomal regions that do recombine with the X, as well as euchromatic sequences (which are loosened during interphase). Largely escaping recombination, the Y bequeaths haplotypes down a robust phylogeny (changing only via mutation), which can be used to trace back the most recent patrilineal ancestor, Y-chromosomal Adam. A gene called SRY (sex-determining region Y), derived from SOX3, encodes a protein that activates the formation of the testes; such is the origin of the chromosome's sex-determining role. We can infer that the sex chromosomes started off as a matched pair (from the identical telomeric sequences at the tips, which can engage in recombination); during meiosis (the process of gamete formation), the homologous chromosomes align and exchange segments, subsequently sending a copy of each autosome and a sex chromosome to each cell. Another indication that the Y and X were once alike comes from the non-recombining sites on the Y: most genes in this region have corresponding duplicates on the X. What makes the Y chromosome an evolutionary curiosity is that its profound lack of recombination makes it more prone to accumulating mutations and then decaying; something must have happened to cease the exchange of DNA between the X and Y.
The Y forfeited its ability to exchange DNA with the X in discrete stages; first, a strip of DNA flanking the SRY gene stopped recombining, and the non-recombining region then spread down the chromosome. But only the Y decayed in response to the loss of X-Y recombination, in contrast to the X, which in females still undergoes recombination when a pair of copies meet during meiosis. So what could explain the interruption of recombination between the X and Y?

While the early Y still exchanged segments with the X, a portion of its DNA underwent an inversion (effectively turning the sequence upside down) relative to the X; since a prerequisite for recombination is that analogous sequences align, any inversion would prevent interaction between the two regions. Comparative genomics reveals that the autosomal precursors of the X and Y were unbroken (intact) in reptilian lineages before the mammalian lineage began. Monotremes like the platypus were among the earliest mammals to speciate, with a sex-determining system dating back some 300 million years. X-inactivation followed (in which the cells of a female embryo arbitrarily shut down most of the genes on one of the two X chromosomes) to compensate for the degeneration. If we reduce the whole human population to two people (one man and one woman), the couple together carries four copies of each autosome, three X chromosomes and a single Y. The effective population size of the Y can therefore be predicted to be similar to that of haploid mtDNA: 1/3 that of the X and 1/4 that of any autosome. Hence we can expect much lower diversity in the Y than in any other region of the nuclear genome. We can also predict it to be more subject to genetic drift (random changes in the frequency of haplotypes), and such drift would act as a catalyst for differentiation between pools of Y chromosomes in different populations.
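The two-person thought experiment above can be written out as a simple copy count:

```python
# Sketch of the two-person census from the text: count chromosome
# copies carried by one man and one woman, then compare the Y's
# effective copy number with the X's and the autosomes'.

COPIES = {
    "autosome": 2 + 2,   # two copies in each person
    "X":        2 + 1,   # two in the woman, one in the man
    "Y":        0 + 1,   # man only
    "mtDNA":    1,       # effectively one transmissible copy (maternal line)
}

print(f"Y : X        = 1 : {COPIES['X'] / COPIES['Y']:.0f}")
print(f"Y : autosome = 1 : {COPIES['autosome'] / COPIES['Y']:.0f}")
```

With a quarter of the autosomal copy number, the Y is the smallest 'population' in the nuclear genome, which is exactly why drift bites it hardest.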

Saturday 17 August 2013

Molecular Clocks- Timing the Gene Pool

It certainly doesn't tick. And it has no hands either. But the molecular clock is more than a faceless clock. It is a fairly new technique that exploits a relatively constant rate of evolution to date almost anything, from the divergence of taxa or species to the appearance of a viral epidemic. The tool rests on an incredibly simple observation: the amount of DNA difference between species is essentially a function of the time since their divergence. Though the practical applications may seem subtle, molecular clocks put the final nail in the coffin of claims that HIV was first propagated by tainted polio vaccines in the 1950s (made using SIV, the simian immunodeficiency virus), by dating the strain back to the 1930s. Essentially, the modern molecular clock has shown that a given protein has a characteristic rate of molecular evolution, while different genes have different characteristic rates, and that molecular evolution per se better fits a neutralist rather than a selectionist view. Linus Pauling reported a range of constant rates of evolution for different proteins (histones are characteristically slow, cytochrome c is slightly quicker yet slower than haemoglobin, and fibrinopeptides are quickest of all). Motoo Kimura and Tomoko Ohta explained this fairly constant characteristic rate for each protein by positing that most amino acid changes are effectively neutral: the change has no influence on overall fitness, so the rate of change is not under the control of natural selection. On average, beneficial mutations were predicted to be rare, deleterious ones would be quickly wiped out by natural selection, and a large fraction of amino acid changes would be effectively neutral. The rate of these neutral substitutions would be shaped only by the mutation rate (and so would be fairly constant, provided the underlying mutation rate remained unchanged).
This predicts that, within a species, the long-term rate of neutral molecular evolution is equivalent to the neutral mutation rate in individuals. But why do different proteins have different characteristic rates of evolutionary change? We may explain these variations by assuming that proteins differ in the proportion of amino acid positions that are neutral (so that altering the amino acid has zero selective effect) versus constrained (so that any mutation is probably deleterious).
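The clock's arithmetic is simple enough to sketch; all numbers below are illustrative assumptions, not measured rates for any real gene:

```python
# Minimal molecular-clock sketch under the neutral theory: the
# substitution rate equals the per-site mutation rate times the
# fraction of neutral sites, and divergence time is estimated from
# observed sequence differences.

mu = 1e-9               # assumed mutations per site per year
neutral_fraction = 0.3  # assumed fraction of sites free to change

rate = mu * neutral_fraction   # neutral substitution rate per lineage

# Two lineages diverge from a common ancestor, so differences
# accumulate at 2 * rate.
observed_divergence = 0.006    # assumed fraction of sites that differ

t_split = observed_divergence / (2 * rate)
print(f"estimated divergence time ~ {t_split:.2e} years")  # ~1e7 years
```

A more constrained protein (smaller neutral fraction) ticks more slowly, so the same observed divergence would imply an older split: exactly the protein-to-protein variation described above.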

Summing up, the greater the proportion of neutral sites, the more rapid the rate of molecular evolution. So, in accordance with the neutral theory, the rate at which genes evolve is determined by the overall mutation rate and the proportion of neutral sites. The neutral theory accounts for two observed phenomena: the steady rate of fixation of mutations and the high level of polymorphism. And the amount of divergence between genes tends to increase with time since their evolutionary separation. But molecular clocks themselves may vary, either through 'sloppiness' of the tick rate or through variation in the mutation rate, since the clock is probabilistic (ticks occur at irregular intervals, described by a Poisson distribution). Where does this variation stem from? An important source is the influence of population size on the rate of fixation of mutations. Ohta expanded the neutral theory (into the nearly-neutral theory) by acknowledging the important role of effective population size: smaller populations are more severely influenced by fluctuations in allelic frequency, so genetic drift can overwhelm selection on alleles with small selection coefficients. The fixation of nearly-neutral alleles of small selective effect is therefore predicted to be greatest in the smallest populations; if a population shrinks, this may coincide with a wave of fixation of nearly-neutral alleles, so population flux can increase the sloppiness of molecular clocks. Another application of molecular clocks is to the Hawaiian Islands, where the phylogeny of endemic birds and fruit flies is confirmed by molecular dates that follow a linear correlation between divergence and time, with DNA distance plotted against island age. And since viruses leave behind no fossil record, we can reassemble the history of viral outbreaks using viral lineages (a viral molecular clock).
In the case of endogenous retroviruses (ERVs), dates of origin can be fine-tuned by comparing the pair of long terminal repeats (LTRs) that flank the genome.

Saturday 10 August 2013

Homochirality- Left-Handed Life

Life is anything but ambidextrous. And the problem of life's origins is compounded by a basic configuration of amino acids and sugars. Amino acids are molecules consisting of both an amino group (NH2) and a carboxylic acid group (COOH); in the alpha amino acids, a central carbon atom is attached to both groups. The famous Miller-Urey experiment showed how at least 22 amino acids could be produced in a spark-discharge tube simulating a prebiotic environment containing water (H2O), methane (CH4), ammonia (NH3), molecular hydrogen (H2) and very little oxygen (O2). But an anomaly arises when trying to reconcile such a result with the chirality of the amino acids. Just as your left and right hands can't be superimposed on each other by any rotation, the mirror image of one hand does match the other; hands possess mirror symmetry. Chirality can also be probed with circularly polarised light: right- and left-circularly polarised light behave very differently when they pass through a medium whose molecules have a selected chirality. All amino acids naturally occurring on Earth are left-handed (except glycine, which is non-chiral), but the Miller-Urey experiment produced racemic mixtures (equal numbers of left- and right-handed amino acids); so how did the amino acids get left-handed? Such homochirality is critical to protein function: if proteins made of L-amino acids had random incorporations of the D-enantiomers, they would fold into varying conformations. Sugars also possess homochirality; they are classified as D-sugars based on the arrangement of the chiral centre furthest from the carbonyl group, so sugars are essentially right-handed. But why?
The Murchison meteorite that landed in Australia in 1969 carried five alpha-methyl amino acids with an excess of L-enantiomers; these translate as S-enantiomers (a configuration in which a methyl group sits where the hydrogen atom would normally be in an L-amino acid). So why such an excess? The discovery that our interstellar neighbourhood contains an excess of right-circularly polarised light hints at many possible explanations for homochirality. Among them is processing by ultraviolet photons in outer space: starlight becomes strongly polarised when scattered in dense nebulae, and such light could destroy molecules of one chirality while preserving the other.
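Excesses like Murchison's are usually quoted as an enantiomeric excess, ee = (L - D) / (L + D); the counts below are made up for illustration:

```python
# Enantiomeric excess: 0 for a racemic mixture, 1 for a homochiral one.
# The molecule counts are illustrative, not Murchison measurements.

def enantiomeric_excess(left, right):
    return (left - right) / (left + right)

print(enantiomeric_excess(50, 50))   # racemic Miller-Urey-style mixture -> 0.0
print(enantiomeric_excess(55, 45))   # a modest L-excess -> 0.1
print(enantiomeric_excess(100, 0))   # homochiral, as in living proteins -> 1.0
```

The puzzle is how chemistry got from the first line to the last; the amplification mechanisms below are attempts to bridge that gap.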

The weak interaction of beta decay is the only force with the potential to produce a chirality bias, owing to its parity violation. Conservation of parity would require the mirror image of a process to behave identically to the original; because the weak interaction violates parity, it could distort the balance between right- and left-handed molecules. One way this could be achieved is via the electrons produced in beta decay, which have spins antiparallel to their direction of motion (longitudinal polarisation); the more energetic, relativistic electrons are almost entirely longitudinally polarised and produce Bremsstrahlung photons that interact with molecules to cause chiral discrimination. A related hypothesis involves amplification via catalytic reactions: an agent that catalyses its own synthesis while inhibiting the synthesis of its chiral opposite. Imagine a left-handed molecule L and a right-handed molecule R, both made of the constituents A and B; once synthesised, they trigger 'autocatalysis', driving the synthesis of new molecules of their own handedness from A and B, while L and R can also merge into an inactive product, destroying one R and one L molecule.
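A minimal numerical sketch of this autocatalysis-plus-mutual-inhibition scheme (a Frank-style model); the rate constants and starting concentrations are arbitrary illustrations, not measured chemistry:

```python
# Frank-style toy model: each handedness catalyses its own synthesis
# (rate k) while the two handednesses destroy each other pairwise
# (rate mu). Simple Euler integration; all numbers are illustrative.

k, mu = 1.0, 1.0        # autocatalysis and cross-inhibition rates
L, R = 0.011, 0.010     # tiny initial excess of the left-handed form
dt, steps = 0.001, 5000

def ee(L, R):
    """Enantiomeric excess of the current mixture."""
    return (L - R) / (L + R)

ee_initial = ee(L, R)
for _ in range(steps):
    dL = k * L - mu * L * R   # self-catalysed growth minus pairwise destruction
    dR = k * R - mu * L * R
    L, R = L + dL * dt, R + dR * dt

print(f"ee: {ee_initial:.4f} -> {ee(L, R):.4f}")  # the small excess grows
```

The pairwise destruction removes one L and one R at a time, so it shrinks the total without touching the difference, while autocatalysis grows the difference exponentially: a tiny initial bias is steadily amplified.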

An approach from astrobiology involves the interplay between neutrinos, amino acids and supernovae. 14N (nitrogen-14) is a constituent common to all amino acids and has a non-zero nuclear spin. In the recently described 'Buckingham effect', the interaction of a nuclear magnetic moment with the magnetic moment induced in the electrons (via the Faraday effect) behaves differently in a right-handed molecule than in a left-handed one. So the non-zero spin of the 14N nucleus, coupled with a strong magnetic field, could provide a mechanism for chiral discrimination. The SNAAP (Supernova Neutrino Amino Acid Processing) model proposes that supernovae produce carbon, nitrogen, oxygen and a racemic assortment of amino acids (which form in supernova nebulae). Neutrinos from other supernovae, together with the magnetic field of a neutron star or black hole, make the racemic mixture enantiomeric by selectively destroying one chirality of 14N-based molecules. Subsequently, chemical evolution quickly amplifies the excess, and more L-amino acids are produced as the galaxy is permeated with molecular clouds.

Wednesday 7 August 2013

Nucleosynthesis- Making the Elements

The recipe for making the elements reads like a cookbook. In the first 3 minutes following the universe's fiery birth, very little was produced during big bang nucleosynthesis (BBN) because of some nuclear anomalies: there are no stable mass-5 or mass-8 nuclides, making it almost impossible to make anything other than 2H, 3He, 4He and 7Li (which is itself difficult to produce in abundance). Let's see how the light elements were first synthesised. Firstly, a neutron coalesces with 1H (a proton) to produce a deuteron (2H) and a gamma ray; the deuteron acts as the bottleneck for the rest of the fusion events. Since a free neutron is unstable (half-life of ~10 min), it decays into a proton, an electron and an electron antineutrino; so you end up with roughly half the neutrons you started with (and these get captured into nuclei). 3He is produced when a proton is captured onto a deuteron; it is converted to 4He either by neutron capture or by a reaction in which a deuteron tosses its neutron into the 3He and sets its own proton free. In another set of reactions, neutron capture by a deuteron produces 3H (a triton), which is converted to 4He either by proton capture or by a reaction in which a deuteron gives up its proton and frees its neutron. That's pretty much what was produced during BBN, aside from the minuscule amounts of 7Li made either by combining 4He and 3H (at low baryon density) or by fusing 4He and 3He to produce 7Be, which then captured an electron to become 7Li (emitting a neutrino). WMAP data agree with the theoretical calculations for 2H and 4He but not for 7Li (the prediction for lithium is about three times higher than what is actually observed). The 'lithium problem' may be addressed by short-lived hypothetical particles called axions that bind to nuclei; assuming the particle is negatively charged, it would reduce the Coulomb barrier between nuclei as the universe cooled to a certain point, triggering a revival of nucleosynthesis.
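The neutron bookkeeping above is easy to check on the back of an envelope; the timings are illustrative round numbers:

```python
# BBN sketch: free neutrons decay with a half-life of about 10.2
# minutes, and essentially every neutron that survives to the onset
# of fusion ends up locked inside 4He.

HALF_LIFE_MIN = 10.2

def surviving_fraction(minutes):
    """Fraction of free neutrons left after the given time."""
    return 0.5 ** (minutes / HALF_LIFE_MIN)

# With the standard neutron-to-proton ratio of ~1/7 at fusion time,
# the primordial helium mass fraction is Y = 2(n/p) / (1 + n/p).
n_over_p = 1 / 7
Y_helium = 2 * n_over_p / (1 + n_over_p)

print(f"neutrons left after one half-life: {surviving_fraction(10.2):.2f}")
print(f"primordial helium mass fraction Y ~ {Y_helium:.2f}")  # ~0.25
```

That Y of about 0.25 is the famous prediction matched by observations of primordial gas, and it is why the 4He abundance is such a clean test of BBN.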
So now that hydrogen, helium and a little lithium had been produced via BBN, the rest of the elements, from carbon to lead and even as far as thorium and uranium, were synthesised by nuclear reactions in stars.

Stellar nucleosynthesis begins with hydrogen burning, in which hydrogen is converted to helium. In each of the 3 pp (proton-proton) chains, 4 protons are effectively fused into a 4He nucleus. In the pp-I branch, 6 protons actually enter the chain but 2 re-emerge in the final reaction alongside the 4He nucleus (so the net number of protons consumed is 4). In the pp-II branch, the final reaction produces 2 4He nuclei, but one of them is fed back to restart the chain (net yield: one). The pp-III branch proceeds via 7Be: a captured proton converts it to 8B, which decays to 8Be and then splits into two 4He nuclei. The CNO (carbon/nitrogen/oxygen) cycle handles hydrogen burning in more massive stars and uses 12C as a catalyst. Next, the triple-alpha and alpha processes of helium burning are rather simple: two 4He nuclei fuse to form 8Be, 8Be fuses with 4He to produce 12C, and 12C combines with 4He to make 16O. Hoyle predicted a resonance (an excited energy level) in the carbon nucleus at about 7.7 MeV to compensate for the instability of 8Be (which lives for only ~10^-16 seconds). Subsequent stages involve oxygen burning followed by silicon burning; the temperature becomes high enough for photons to break 28Si into 24Mg and a 4He nucleus, and other photons split 24Mg into 20Ne and 4He. The liberated 4He nuclei can then be captured by other 28Si nuclei to make 32S and then 36Ar (very simplified); nuclei around iron and nickel are the end products of silicon burning.
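Whatever the branch, the net result of hydrogen burning is four protons in, one 4He out, and the energy released is just the mass deficit:

```python
# Rough energy bookkeeping for hydrogen burning: four hydrogen atoms
# fuse (net) into one helium-4, and the mass deficit appears as energy.
# Standard atomic masses in unified atomic mass units.

M_H1  = 1.007825   # u, hydrogen-1
M_HE4 = 4.002602   # u, helium-4
U_TO_MEV = 931.494 # energy equivalent of 1 u, MeV

mass_deficit = 4 * M_H1 - M_HE4
energy_mev = mass_deficit * U_TO_MEV
print(f"energy released per 4He formed: ~{energy_mev:.1f} MeV")  # ~26.7 MeV
```

About 0.7% of the rest mass is converted to energy, which is what keeps a star on the main sequence for billions of years.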

But the picture of nucleosynthesis is not complete without a mechanism for making the elements heavier than iron and nickel. Most of these are produced via the s-process (slow neutron capture) and the r-process (rapid neutron capture). The s-process happens during helium burning and makes around half the nuclei heavier than iron; it continues until it runs into closed nucleon shells, which make it difficult to capture an additional neutron. The s-process abundance peaks lie around strontium, barium and lead, but the heaviest element it makes is 209Bi: add another neutron and the resulting nucleus beta-decays to 210Po, which then releases a 4He nucleus (alpha decay) to end up at 206Pb. The favoured site for the r-process is core-collapse supernovae; as the ejecta cool, seed nuclei capture neutrons rapidly, forming nuclides all the way up to uranium and plutonium and beyond.