Wiring The Brain

how the brain wires itself up during development, how the end result can vary in different people and what happens when it goes wrong
  1. How free is our will?


    If we all come with pre-wired traits and with adaptations based on our past experiences, are our decisions ever truly free? 

    When I give talks demonstrating that we all have innate psychological predispositions – traits that influence our behaviour across our lifetimes – I often get asked what implications this has for free will. If our behaviours are affected in some way by our genes or by the way our brains are wired, doesn’t that mean that we’re really not that free after all?

    The answer depends, I think, on the kind of free will you’re after and on an understanding of the mechanisms by which we make choices. And let me say at the outset that we do make choices. The idea that neuroscience has somehow done away with free will altogether or proven that it is an illusion is nonsense. All neuroscience has shown is that when you are making decisions, things are happening in your brain.

    This is, to put it mildly, not a surprise – where else would things be happening? And it really has no implications for free will, unless you are a dualist. If you think of the mind as some kind of object that has existence independent of the brain, then I suppose you might be upset to find that your decisions have a physical basis in brain activity. But if you think of “mind” not as an object but as an activity or process – the brain in action – then, well, seeing the brain in action as you make a decision is just what you’d expect.

    So, yes, we make choices – really, really. But how free are those choices? How much are they constrained by other things over which we really have no control? How much are they affected by antecedent causes?

    In particular, if I have some psychological traits over which I had (and continue to have) no control, and those traits influence my behaviour (or at least my behavioural tendencies), then am I really fully in control of my own actions? If someone asks me to a party and I decide not to go, is that because I’m wired to be shy? Perhaps I could have chosen to go, and maybe sometimes I do, but maybe only because I happen to be in a sociable mood or feeling brave that day, and maybe I am not in control of that either.

    Well, the first thing to say is that this problem arises no matter the origin of our psychological traits. In my book INNATE, I present the evidence that variation in genetics and in the processes of brain development leads to innate psychological differences between people, which affect the trajectory of their lives, influencing their experiences, the way they react to them, and the types of habitual behaviours they develop. But if you’d rather believe (in the face of overwhelming contrary evidence) that all such traits come completely from experience instead, the problem is just as acute.

    If we each have real and stable characteristics of temperament and personality, then it doesn’t really matter for this question of free will whether they came from genetics and brain development, or our experiences and environment. In either situation, some antecedent causes have affected the physical structures of our brains in a way that influences our decisions, right now, at this moment. In which case, you could argue that our will is not so free after all.

    In one sense, this is trivial – our decisions, in any given situation, are of course affected by our prior experiences and our current goals. The whole point of having a brain is that it lets you learn from the results of actions you have taken in the past in various types of scenarios. That information is then used to predict the outcomes of a range of possible actions that could be taken when such a scenario is encountered again.

    I don’t think anyone sees that as undermining our free will – indeed, you could say that choosing between those options, based on what we have learned of the world, in order to further our own goals, is the process of free will in action.

    It is the idea that the options open to us are constrained somehow by our underlying psychological predispositions that seems to threaten our freedom.

    And this does seem to be the case. In the first instance, the range of options that even occur to us – that somehow arise in our brains for consideration – is limited by our personality traits and experiences. Two different people in ostensibly the same situation, with the same primary goal, may nevertheless be choosing from a very different set of possible actions. This is because the interplay of their underlying traits and their experiences across their lives will have created a very different set of additional goals, constraints, and heuristics.

    For example, two people in a meeting may share a goal of advancing their ideas on some topic under discussion. But one of them may have a conflicting goal – avoid social embarrassment at all costs. This may be due to a natural inclination towards shyness, reinforced by a lifetime of experience, where social interaction is not as intrinsically rewarding as it is for other people, and where the subjective feeling of embarrassment is more acutely felt.

    Even if it is not consciously perceived, that goal of avoiding embarrassment may act as a powerful constraint on the person’s behaviour. They may come home and complain to their partner how they’d wished they’d been brave enough to speak up – instead, stupid Gary who never shuts up dominated the meeting as usual and ended up getting his way. “I wish I had more confidence!”, they might say, conceding that their conscious desires were somehow thwarted by their underlying psychological make-up. 

    The decider-in-chief 

    This seems to be the type of thing people are worrying about when confronted with the evidence that we really do have lasting psychological traits that influence our behaviour. And this worry appears to be more keenly felt when such traits are shown to have a physical basis in the way our brains are wired. It seems to threaten the primacy of our conscious selves in the decision-making process.

    Perhaps we’re like a puppet president – making “decisions” about what to do, but only from the highly limited set of options presented to us by the generals and civil servants – limited based on criteria we are never aware of. Or maybe we’re not even really making the decisions at all – perhaps even that stage of the process is dominated by subconscious factors. Maybe we’re like a magician’s stooge, impelled to make certain decisions through influences beyond our apprehension, with only an illusion of control.

    Personally, I think this goes too far. It can certainly be demonstrated that many of the decisions we make are affected by things of which we are not aware. That does not mean that all the decisions we make are like that. Even if we’re on cognitive autopilot most of the time, that doesn’t mean we can’t ever take the controls. And anyway, being on cognitive autopilot most of the time is not necessarily a bad thing – quite the opposite, in fact.

    The last thing we would want is to have to make decisions from first principles every time we are doing something. If we had to consciously weigh up every aspect of every decision in every situation we find ourselves in, we’d be paralysed by indecision. And we’d quickly be some other critter’s lunch. Life comes at you fast – vacillate and die.

    Habits and heuristics

    Instead, most of our behaviour is effectively habitual. We learn from experience over our lifetimes that certain behaviours are profitable or appropriate in certain situations – these are the heuristics that subconsciously guide most of our actions. And our behaviour is even shaped by our ancestors’ experiences, in the sense that we have inherited a suite of genetically determined behavioural tendencies that were adaptive in the environments and scenarios that our ancestors tended to find themselves in.

    Now, some people argue that if we can’t make decisions that are completely divorced from any preceding events, effects, or causes, then we are not really completely free at all. But why would we want to do that? Totally free decisions, uninformed by any prior events, would be essentially random and pointless (and highly likely to get you killed sooner or later).

    Being free – to my mind at least – doesn’t mean making decisions for no reasons, it means making them for your reasons. Indeed, I would argue that this is exactly what is required to allow any kind of continuity of the self. If you were just doing things on a whim all the time, what would it mean to be you? We accrue our habits and beliefs and intentions and goals over our lifetime, and they collectively affect how actions are suggested and evaluated.

    Whether we are conscious of that is another question. Most of our reasons for doing things are tacit and implicit – they’ve been wired into our nervous systems without our even being aware of them. But they’re still part of us – you could argue they’re precisely what makes us us. Even if most of that decision-making happens subconsciously, it’s still you doing it.

    Ultimately, whether you think you have free will or not may depend less on the definition of “free will” and more on the definition of “you”. If you identify just as the president – the decider-in-chief – then maybe you’ll be dismayed at how little control you seem to have or how rarely you really exercise it. (Not never, but maybe less often than your ego might like to think).

    But that brings us back to a very dualist position, identifying you with only your conscious mind, as if it can somehow be separated from all the underlying workings of your brain. Perhaps it’s more appropriate to think that you really comprise all of the machinery of government, even the bits that the president never sees or is not even aware exist.

    That machinery is shaped by our shared evolutionary past, by each individual’s genetic heritage, by the particular trajectories of development of their brain, and by their accumulated experiences over their lifetime. Those things all shape the way we tend to behave in any given circumstance. That doesn’t mean we can never exercise deliberative and conscious control over our decisions – just that most of the time we don’t (in part because most of the time we don’t need to).

    Can we choose not to be a certain way? No, probably not. But can we choose to act in a certain way despite having opposing tendencies? Yes, absolutely, in some circumstances at least. This may be effortful – it may require habits of introspection and a high degree of self-awareness and discipline – but it can clearly be done. In fact, one of the strongest pieces of evidence that we really do have free will is that some people seem to have more of it than others.


  2. Life after GWAS – where to next, for psychiatric genetics?


    GWAS (genome-wide association studies) for psychiatric illnesses may be about to become a victim of their own success. The idea behind these studies is that common genetic variation – ancient mutations that segregate in the population – may partly underlie the high heritability of common psychiatric and neurological disorders, such as schizophrenia, autism, epilepsy, ADHD, depression, and so on. The accumulating evidence from over ten years of GWAS strongly supports that idea, with many hundreds of such risk variants now having been identified. The problem is it’s not at all clear what to do with that information.

    GWAS are a method to carry out a kind of genetic epidemiology, based on a simple premise – if a particular genetic variant at some position in the genome (say an “A” base, as opposed to a “T”, at position 236,456 on chromosome 9) is associated with an increased risk of some condition, then the frequency of the “A” version should be higher in people with the condition than in people without. (Just as the frequency of smoking is higher in people with lung cancer than without).

    If you examine the frequency of those kinds of variants across the whole genome, in a large enough sample of people, you can exhaustively search for any that confer risk of the condition, above a certain effect size. The larger the sample, the smaller the effect on risk that can be statistically detected. Initial GWAS for conditions like schizophrenia came up empty. With samples in the low thousands, all that could be concluded from such studies was that there were no common variants in the genome that contributed even a moderate increase in risk, individually. (This isn’t surprising, given the expectation that risk variants should be selected against).
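    To make that logic concrete, here is a minimal sketch in Python (the allele frequencies and sample sizes are invented purely for illustration) of the basic case–control comparison underlying a GWAS: tabulate the allele counts in cases and controls and ask whether the frequency difference is larger than chance would allow. Running the same tiny frequency difference at two sample sizes shows why ever-larger cohorts are needed to detect ever-smaller effects.

        # A minimal sketch of the case-control comparison behind a GWAS association test.
        # The allele frequencies and sample sizes are invented for illustration.
        from scipy.stats import chi2_contingency

        def allele_test(freq_cases, freq_controls, n_cases, n_controls):
            """Chi-square test on a 2x2 table of allele counts (two alleles per person)."""
            table = [
                [2 * n_cases * freq_cases, 2 * n_cases * (1 - freq_cases)],              # cases: A, T
                [2 * n_controls * freq_controls, 2 * n_controls * (1 - freq_controls)],  # controls: A, T
            ]
            chi2, p, dof, expected = chi2_contingency(table)
            return p

        # The same small difference in "A" frequency (31% vs 30%) at two sample sizes:
        print(allele_test(0.31, 0.30, n_cases=2_000, n_controls=2_000))    # p ~ 0.3: undetectable
        print(allele_test(0.31, 0.30, n_cases=40_000, n_controls=40_000))  # p ~ 1e-5: detectable, though still short of the 5e-8 genome-wide threshold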

    But as sample sizes have grown, into many tens of thousands, common variants have begun to be identified that are reliably statistically associated with risk of the condition (i.e., that are slightly more frequent among cases than among controls). There are now well over a hundred specific variants that reach the stringent threshold for genome-wide statistical significance for schizophrenia (here and here) and for depression. Each of these is associated with only a very small statistical increase in risk for the condition (usually on the order of 1.05-fold). This is almost negligible, but not quite.
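    For a sense of scale (a rough sketch, assuming a ballpark baseline lifetime risk of around 1% for schizophrenia): an odds ratio of 1.05 for a single variant moves an individual’s risk from about 1% to about 1.05%.

        # Scale of a single GWAS hit: an odds ratio of ~1.05 applied to a ~1% baseline risk.
        baseline_risk = 0.01                        # rough ballpark lifetime risk, for illustration
        odds = baseline_risk / (1 - baseline_risk)
        risk_with_variant = odds * 1.05 / (1 + odds * 1.05)
        print(round(risk_with_variant, 4))          # ~0.0105, i.e. about 1.05% instead of 1%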


    In addition, below that significance threshold, there are thousands more that seem likely to convey some of the genetic risk for the condition, even if we can’t yet say so with statistical confidence. Indeed, it is possible to estimate the collective effect on risk that all such common variants in the genome might convey and this is typically sizable. By calculating “polygenic scores”, which take all of these putative associations into account, one can determine where any given individual lies on an idealised continuum of risk. For schizophrenia, these scores can explain about 5% of the variance in disease status across the population in new test samples.
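    Mechanically, a polygenic score is nothing more exotic than a weighted sum: for each variant, the number of risk alleles a person carries (0, 1 or 2) is multiplied by the effect size estimated in the GWAS (typically the log odds ratio), and the products are summed across thousands of variants. A toy sketch, with all numbers invented:

        # A toy polygenic score: a weighted sum of risk-allele counts. All numbers invented.
        import numpy as np

        rng = np.random.default_rng(0)
        n_variants = 10_000

        weights = rng.normal(0.0, 0.02, n_variants)   # per-variant effect sizes (log odds ratios), all tiny
        genotypes = rng.integers(0, 3, n_variants)    # one person's risk-allele counts: 0, 1 or 2 per variant

        polygenic_score = float(np.dot(genotypes, weights))
        print(polygenic_score)

    The raw score means nothing on its own – it only places an individual somewhere on the population distribution of scores, which is why its performance is reported as a percentage of variance explained rather than as an individual prediction.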


    Polygenic scores

    Polygenic scores may not currently explain much of the variance in these conditions but may nonetheless be useful for lots of things. One is for examining the shared genetics of different traits or conditions. For example, polygenic scores for intelligence are negatively correlated with risk of a number of psychiatric conditions, including schizophrenia and ADHD. That’s an interesting finding, which fits with the idea that intelligence may be partly an indicator of general neurodevelopmental robustness.
     
    Polygenic scores may also be useful in experiments aimed at identifying possible environmental or experiential factors contributing to mental illness, by allowing researchers to control for underlying genetic risk, which could otherwise obscure or confound other effects.

    Where they will probably not be very useful is in identifying people at higher than average risk of disease. Well, let me rephrase that – they may be useful at doing that, statistically speaking, but that information may not be actionable, not for psychiatric conditions at least.

    This is for several reasons: first, the scores will only ever capture a small portion of the genetic risk of the condition. This is because: (i) un-captured rare mutations also make an important contribution to risk for many psychiatric disorders; and (ii) the specific combinations of risk variants present in any individual may be more important than just the additive burden. These effects will never be captured by signals that have been averaged across the whole population. 

    Second, genetics only confers a portion of the overall variance in risk – the heritability of psychiatric disorders ranges from 20% to 70%, meaning a sizable fraction of the variance in risk is non-genetic in origin. There is a tendency to think if it’s not genetic, it must be environmental, but this is a mistake – much of this non-genetic variation may be due to the inherent randomness of the cellular processes of brain development.

    For these reasons, the predictive precision of polygenic scores will always be low for individuals – this is a limit in principle, not just in practice. A statistical prediction might be useful enough to be acted on by insurance companies, or even in pre-implantation genetic screening if people are inclined to use it, but won’t carry much actionable information for doctors treating individual patients. In any case, for most such conditions, there are few preventive measures that can be taken, beyond generally looking after oneself – moderating alcohol use, staying away from hard drugs, avoiding smoking, reducing stress, better exercise, diet, and sleep habits. People don’t get prescribed antidepressants or antipsychotics prophylactically, for good reasons.  

    So, for psychiatric conditions, polygenic scores will likely be useful for some kinds of research, but less so for clinical purposes.


    What about the biology?

    One thing that GWAS for psychiatric disorders have not been that useful for is elucidating the underlying biology – or at least not in the way it was hoped. One of the driving motivations of GWAS was the idea that associated variants would implicate specific genes or biochemical pathways in the pathogenesis of psychiatric disease, possibly even providing direct molecular targets for new therapeutics. This has turned out not to be the case.

    It’s not that GWAS haven’t identified anything – just the opposite – they’ve identified too much. If we take schizophrenia as an example, GWAS have highlighted hundreds of variants that implicate nearby genes, encoding proteins with very diverse functions.

    Among that list of genes, there is an enrichment for ones with functions in various neurodevelopmental processes or functions at neuronal synapses, for genes expressed in the brain, especially the fetal brain, for genes with greater expression in broad brain areas like the cerebral cortex, or for genes with greater expression in very broad classes of cell types, like glutamatergic or GABAergic neurons.

    Collectively, these data suggest that something about some neurons in some parts of the developing brain goes awry in schizophrenia, somehow.

    Now, it’s important to note that that’s not nothing. At the very least, it provides strong evidence that the GWAS signals are real – they didn’t land on genes for liver enzymes or connective tissue proteins or eye colour pigments. But these findings don’t tell us much more, biologically, than we knew before – that schizophrenia is a disorder of disturbed neural development.

    Even worse, a new model of “omnigenic” inheritance suggests that every gene that is expressed in the relevant tissue for a given disease (the brain in this case) will harbor some genetic variants that are statistically associated with disease risk. Larger and larger GWAS will no doubt uncover more and more of these statistical associations, though the effect sizes will get smaller and smaller. Beyond a certain stage, it seems reasonable to ask: “What’s the point?”

    It seems clear now that GWAS will not, by themselves, implicate very specific pathways or directly yield new mechanistic insights into the nature of the associated conditions. This is especially true for psychiatric conditions, because, while they may have genetic origins, they do not have proximal genetic mechanisms. A condition like cancer really reflects an altered state of gene expression at a cellular level; psychiatric conditions like schizophrenia do not. They reflect altered states of distributed brain circuits and systems. Genetic variation may cause such a state to emerge, but there may be nothing in the molecular functions of the affected genes that specifically relates to that state in any acute or on-going fashion.

    For cancer, if discovering risk genes is part A, and B is the resultant phenotype, you can go from A to B in one step. For psychiatric disorders, you need to go from A to Z, where the phenotype, Z, is not directly resultant, but very indirectly emergent, through dynamic interactions between hundreds of different cell types across distributed circuits over long periods of time, as development and maturation play out.

    It will therefore take experimentally tractable model systems to work out the chain of steps from A to Z. Characterising the properties of the genes themselves (by cross-referencing with other large-scale “omics” datasets, for example) just won’t do it. And the problem with the variants implicated by GWAS is that they simply don’t provide an experimental handle for those kinds of follow-up experiments.


    From analysis to experiment

    In the first instance, the genetic variant assayed in GWAS is just a marker – the actual functional variant will tend to be co-inherited with that marker but it may take some effort to find it. And then one has to figure out which gene it is affecting and how. Sometimes it’s the nearest gene, but other times the affected gene may be some distance away. And most often the effect will be just a small change in expression level, possibly only in some cell type or other. It is possible that you could find some cellular process where that small change in expression makes an obvious difference in an experimental assay. But you could spend a long time looking for what that process might be, for any given gene. 

    Moreover, even if you did find some cellular process that was affected by a modest expression change in Gene X, you’d still be many, many steps away from understanding how a change in that process could ultimately contribute (a tiny amount) to an increase in risk for a condition like schizophrenia. And there seems no way to bridge that gap. The effect on risk of any given variant alone is simply too small to be elucidated in this way.

    For example, one of the variants associated with risk of schizophrenia tags a gene encoding the C4 complement protein. This protein is involved in the immune system, but also, as it turns out, in pruning surplus synapses in the developing nervous system. An impressive paper tracked down the causal functional variants in the gene, which turn out to relate to how many copies of the protein-coding sequence are present. The differences in copy number are correlated with overall expression level of the protein in the brain.  

    But what effect does this have? The authors turned to mice as a model system to investigate this – kind of. They showed that complete removal of both copies of the gene affects the pruning of synapses in a specific part of the developing mouse brain. Now, that’s very nice and all (really beautiful work in fact), but it’s actually just telling us what the normal function of the encoded protein is (by completely removing it), not what the effect might be of a modest increase in expression, which was what the functional risk variants were associated with.

    The problem is that the analyses in the mouse are completely incommensurate with the effect sizes in humans – there’s just no useful way to relate them. It’s not like we can really infer that excessive synaptic pruning underlies schizophrenia – not when this is just an arbitrarily chosen one of hundreds of genes harboring variants that collectively contribute only a modest proportion of the overall risk of the condition. If there were massive convergence onto this biochemical pathway or cellular process that would be one thing, but there is not.

    We could follow the same approach for hundreds of other common risk variants and be none the wiser as to the biology of schizophrenia. We’d end up with lots of inferences and no way to test them.

    But if we can’t make much headway investigating these common variants individually, maybe we can figure out what their collective effects are. Well, maybe, but to move beyond correlative analyses in human subjects we would need to recapitulate these high-risk polygenic profiles in some experimental system. That won’t be possible in animals. Perhaps the only way to do it is to generate induced pluripotent stem (iPS) cells from people with high versus low polygenic risk scores.

    These iPS cells can then be turned into neurons, or even “minibrains” (cerebral organoids) in a dish and the effects of the polygenic burden can be assayed. This could be a powerful method, in theory. In practice, there are a few problems with its application to psychiatric disorders.

    First, what phenotype do you look at? For a condition like microcephaly, minibrains are perfect as they recapitulate early stages of brain development really well, where processes of neuronal proliferation can be directly modelled. For psychiatric conditions, the defect is more likely in subtle aspects of connectivity, including activity-dependent refinement over months or years. You might get at early stages of synapse formation in a minibrain, which could prove very interesting, but the emergence of the relevant brain-wide pathophysiological states will prove much harder to model in a dish. If you want to understand the full chain of events leading to brain dysfunction, you’re going to need a real brain, in a behaving animal.

    Second, the actual variants contributing to a high-risk polygenic profile will differ between any two people. They will be an overlapping but unique subset of all the risk variants in the population. Who knows if the effects will converge onto any particular cellular process? They might, but they might just as well not. It seems likely that insults to many different primary processes might lead to convergence on a particular pathophysiological state (as is the case with epilepsy, for example).

    Third, we all carry hundreds of rare mutations, in addition to our polygenic burden of common variants. These are especially important for neuropsychiatric conditions. Differences in relevant phenotypes between cells from any two individuals (especially if one has a disorder and the other does not) could just as well – indeed, more likely – be due to those unknown rare variants than to the polygenic profile.


    Polygenic burden degrades general robustness

    The best way to think of polygenic burden may be not as affecting any particular process so much as reducing the robustness of all processes. The genome has evolved to buffer many insults – environmental variation, molecular noise, and genetic variation. This robustness allows genetic variation to accumulate in the population if the individual mutations are not too severe. However, these accumulating variants collectively degrade the robustness of the system, by compromising the evolved interactions of all the components. The higher a person’s polygenic burden of risk variants, the lower their ability to buffer additional insults.

    Under this model, the polygenic profile may not cause disease by itself, even at the highest end of the distribution of common risk variant burden. Instead, it may make it harder to buffer the effects of rare mutations, acting as a strong genetic modifier to determine whether a disorder results or not.

    There is a variety of evidence to support this view. First, polygenic risk for various psychiatric disorders is highly overlapping, consistent with a general vulnerability, not related to the specific symptoms of any diagnostic category. Second, as mentioned above, the polygenic signal for intelligence is negatively correlated with risk of schizophrenia, ADHD, and several other psychiatric conditions (and vice versa). This is likely not because being intelligent is protective, per se, but rather because intelligence is an index of neurodevelopmental robustness. Finally, recent studies have shown that the background of common variants acts as a modifier of the severity of the effects of rare mutations associated with very specific Mendelian conditions. 

    This view is equivalent to the well known genetic background effects commonly seen for many kinds of phenotypes (especially behavioral ones) when mutations are crossed into different strains or lines of mice or flies. These effects can be quite sizable – sometimes causing very different outcomes in different strains. However, they are extremely hard to study, and it isn’t obvious that you would really learn much from working out their underlying mechanisms in detail.


    Follow that gene!

    Indeed, the lesson from model organisms is that if you want to understand biology, you should follow the big effects. In humans, those effects are due to rare mutations. In my next post I will explore how rare mutations can provide experimental entry points to elucidate the biological pathways leading from genetic risk to emergent psychopathology. In the meantime, this review from a few years ago outlines a framework for “following the genes”.


  3. Calibrating scientific skepticism – a wider look at the field of transgenerational epigenetics


    I recently wrote a blogpost examining the supposed evidence for transgenerational epigenetic inheritance (TGEI) in humans. This focused specifically on a set of studies commonly cited as convincingly demonstrating the phenomenon whereby the experiences of one generation can have effects that are transmitted, through non-genetic means, to their offspring, and, more importantly, even to their grandchildren. Having examined what I considered to be the most prominent papers making these claims, I concluded that they do not in fact provide any evidence supporting that idea, as they are riddled with fatal methodological flaws.

    While the scope of that piece was limited to studies in humans, I have also previously considered animal studies making similar claims, which suffer from similar methodological flaws (here and here). My overall conclusion is that there is effectively no evidence for TGEI in humans (contrary to widespread belief) and very little in mammals more generally (with one very specific exception).

    Jill Escher (@JillEscher), who is an autism advocate and funder of autism research, recently posted a riposte, arguing that I was far too sweeping in my dismissal of TGEI in mammals, and listing 49 studies that, in her opinion, collectively represent very strong evidence for this phenomenon.
     
    So, have I been unfair in my assessment of the field? Could it possibly be justified to dismiss such a large number of studies? What is the right level of skepticism to bring to bear here? For that matter, what level of skepticism of novel ideas should scientists have generally?

    As on many subjects, Carl Sagan put it rather well:

    Some ideas are better than others. The machinery for distinguishing them is an essential tool in dealing with the world and especially in dealing with the future. And it is precisely the mix of these two modes of thought [skeptical scrutiny and openness to new ideas] that is central to the success of science.
    — Carl Sagan
    In 'The Burden of Skepticism', Skeptical Inquirer (Fall 1987), 12, No. 1.

    Too much openness and you accept every notion, idea, and hypothesis—which is tantamount to knowing nothing. Too much skepticism—especially rejection of new ideas before they are adequately tested—and you're not only unpleasantly grumpy, but also closed to the advance of science. A judicious mix is what we need.
    — Carl Sagan
    In 'Wonder and Skepticism', Skeptical Inquirer (Jan-Feb 1995), 19, No. 1.

    So, in case I have come across as merely unpleasantly grumpy, let me spell out my general grounds for being skeptical of the claims of TGEI in mammals. Some of these relate to the methodology or design of specific papers but some relate to the field as a whole. (You may notice some similarities to other fields along the way.)

    I am not going to go into the details of each of the 49 papers listed by Jill Escher, because I’d rather just go on living the rest of my life, to be honest. In all sincerity, I thank her for collating them all, but I will note that merely listing them does not in any way attest to their quality. Of all the papers on this topic that I have previously delved into (as detailed at length in the linked blog posts), none has approached what I would consider a convincing level of evidence.

    Nor do I consider the fact that there are 49 of them as necessarily increasing the general truthiness of the idea of TGEI. I could cite over 4900 papers showing candidate gene associations for hundreds of human traits and disorders and they’d still all be suspect due to the shared deficiencies of this methodology.

    I will, however, pick out some examples to illustrate the following characteristics of the field:

    1. It is plagued by poor statistical methodology and questionable research practices.

    In my piece on human studies and in the previous ones on animal studies I wrote about one major type of statistical malpractice that characterises papers in this field, which is using the analysis of covariates to dredge for statistical significance.

    Sex is the most common one. If your main analysis doesn’t show an effect, don’t worry – you can split the data to see if it only shows up in male or female offspring or is only passed through the male or female germline. These are often combined, so you get truly arcane scenarios where an effect is supposedly first passed only through, say, the female germline, but then only through the male germline in the next generation, and then only affects female offspring in the third generation. These are frankly absurd, on their face, but nevertheless are commonplace in this literature.

    In human epidemiology, trimester of exposure is another commonly exploited covariate, giving three more bites of the cherry, with the added advantage that any apparently selective effects can be presented as evidence of a critical period. (Could be true, in some cases, but could just as well be noise).

    This speaks to a more general problem, which is the lack of a clearly defined hypothesis prior to data collection. Generally speaking, any effect will do. In animals, this may mean testing multiple behaviours and accepting either an increase or a decrease in any one of them as an interesting finding. Do the offspring show increased anxiety? Great – we can come up with a story for why that would be. Decreased anxiety? Great – there’s an alternate narrative that can be constructed, no problem. Memory defect, motor problems, increased locomotion, decreased locomotion, more marbles buried, less social interaction? All cool, we can work with any of that. If you run enough tests and take any difference as interesting, you massively increase your chances of finding something that is statistically significant (when considered alone).

    That brings me to two additional methodological issues related to these kinds of analyses:

    First, incorrect analysis of a difference in the difference. This is a widespread problem in many areas, but seems particularly common to me in the TGEI literature, due to the typical experimental design employed.
     
    In these experiments (in animals), one typically creates a test group, descended from animals exposed to the inducing factors to be tested, and a control group, descended from unexposed animals. These are then commonly compared in various experimental tests, which often themselves involve a comparison between two parameters or conditions. For example, to test recognition memory, the time spent investigating a novel versus a familiar object or animal is compared.

    This is usually analysed separately for the test and the control animals to see if there is a significant difference between conditions, for the test animals but not for the control animals. (As, for example, in this paper by Bohacek et al., cited by Escher):

    This is interpreted as the treatment having an effect on recognition memory. It is based on an inference that the difference between the two groups is significant, but that was not actually tested. The correct way to analyse these kinds of data is to combine them all and test for an interaction between group and condition. (In this particular instance, my bet would be that this would not show a significant difference, given that the direction and magnitude of effect look pretty comparable across the groups).
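    For anyone who wants the concrete version, here is a minimal sketch of that analysis (in Python, with statsmodels; the data are simulated and all variable names are placeholders, not the authors’ data): fit a single model to all the data and test the group-by-condition interaction term, rather than running two separate within-group tests.

        # A minimal sketch of the "difference in the difference" analysis: fit one model
        # and test the group x condition interaction, instead of two within-group tests.
        # The data are simulated and all names are placeholders.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        rows = []
        for group in ["control", "treated"]:
            for condition in ["familiar", "novel"]:
                # both groups get the same novelty preference, i.e. no true interaction
                mean_time = 20 if condition == "novel" else 15
                for t in rng.normal(mean_time, 5, size=12):
                    rows.append({"group": group, "condition": condition, "time": t})
        df = pd.DataFrame(rows)

        model = smf.ols("time ~ C(group) * C(condition)", data=df).fit()
        # The C(group):C(condition) coefficient is the test that matters; with no true
        # interaction simulated, it should generally come out non-significant.
        print(model.summary().tables[1])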

    This may seem like a minor or esoteric point to make, but if you look through these papers you will find it cropping up again and again, as this kind of analysis is the mainstay of the basic experimental design of the field.

    Second, lack of correction for multiple testing. Again, this problem is widespread in many areas of science, though some (like human genomics) have recognised and corrected it. Mouse behaviour is appalling as a field, in this regard, but epigenomics gives it a run for its money.

    Simply put, if you test enough variables for a difference between two groups, and set p less than 0.05 as your threshold of statistical significance, then five percent of your tested variables would be expected to hit that supposedly stringent threshold by chance. 
     
    So, if you test lots of behaviours in an exploratory way, you should correct your p-value threshold accordingly. This is almost never done in this literature (or in the wider mouse behavioural literature, to be fair).
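    A quick simulation illustrates both points (with purely illustrative numbers): when the two groups are drawn from the same distribution, a fixed p < 0.05 threshold still flags spurious “hits” quite regularly, whereas dividing the threshold by the number of tests (a simple Bonferroni correction) keeps the family-wise error rate near 5%.

        # Purely illustrative: both groups are drawn from the same distribution,
        # so any "significant" difference is a false positive.
        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(1)
        n_tests, n_per_group = 20, 12   # e.g. 20 behavioural measures, 12 animals per group

        p_values = np.array([
            ttest_ind(rng.normal(size=n_per_group), rng.normal(size=n_per_group)).pvalue
            for _ in range(n_tests)
        ])

        print((p_values < 0.05).sum())            # raw threshold: spurious "hits" are common
        print((p_values < 0.05 / n_tests).sum())  # Bonferroni threshold (0.0025): usually zero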

    Correction for multiple tests is also rarely done in epigenomics analyses, such as looking at the methylation of multiple CpG sites across a gene (or sometimes across multiple genes). Again, not to pick on it, but just because it was one of the first papers in Escher’s list that I read, this figure from Bohacek et al illustrates this widespread problem:




    The difference in CpG 6 is taken as significant here even though the p-value (0.038 in this case) would not survive the appropriate correction for multiple testing. (I have previously described the same statistical problem in other papers doing this kind of analysis, not to mention the widespread problems with the application and interpretation of the experimental methods used to generate the data).

    So, one of the reasons I am skeptical of studies in this field is that the collective standards for statistical practices are regrettably low. This means that many of these papers (in fact all of the ones I’ve looked at in detail) that claim to prove the existence of TGEI as a phenomenon in mammals in fact do no such thing.

    This brings me to a few more general points about the field.


    2. It is not progressive.

    In most areas of biology, there is a natural progression from discovery of some novel phenomenon to gradual, but steady, elucidation of the underlying mechanisms. RNA interference springs to mind as a good example – very quickly after initial reports of this truly surprising phenomenon, researchers had made progress in figuring out how it works at a detailed molecular level. They didn’t just keep publishing papers demonstrating the existence of the phenomenon over and over again, but never explaining how it could occur. 

    The TGEI field has not, in my judgment at least, been like that. It’s been at least 15 years since these kinds of ideas became prominent, yet almost all of the papers in the 49 listed by Escher are simply claiming to show (again) that the phenomenon exists. In that regard, it reminds me of the psychological literature on social priming – there are scores of papers purportedly demonstrating the existence of the phenomenon and none explaining how it works. (Spoiler: it doesn’t).

    This comparison highlights another, more insidious problem in the TGEI field – publication bias. When the papers in a field are merely showing an effect, then studies that do indeed find some difference that “achieves statistical significance” are vastly more likely to be published than ones that do not. It might be compelling if 49 groups had independently stumbled upon evidence of a phenomenon by accident. But that is not what happened. The researchers did not just serendipitously observe the phenomenon. They were not even asking the question of whether or not the phenomenon exists in a disinterested fashion. They were motivated to find evidence for it and to selectively publish those studies that showed it. This kind of publication bias massively inflates the truthiness of the phenomenon. Escher lists 49 supposedly positive studies but we will never know how many negative ones never saw the light of day.

    [I don’t, by the way, mean to impugn the motives of any of the researchers involved – these are the incentives that the modern scientific enterprise has put in place and that affect us all.]

    By contrast, studies that are working out details of mechanism are far less prone to this kind of bias because no one has a stake in what the answer turns out to be. I would have expected by now that one or other of the systems in which this phenomenon has been reported would have proven robust enough to allow the experimental investigation of the underlying mechanisms. The only real example where the underlying mechanism has been elucidated is at the agouti locus in mice, and it is an exceptional case as it involves heterochromatic silencing of a transposable element.

    Now, perhaps I am being unfair – maybe 15 years is not that long, especially if you’re doing mouse work. Perhaps there is real progress being made on the mechanisms and I just haven’t seen it. I’ll be delighted to be convinced if someone develops a robust experimental system where the phenomenon can be reliably elicited and then works out the details of how that happens – perhaps there will be some amazing and important new biology revealed.

    But there is a deeper problem, especially for those systems where people claim that some behavioural experience in one generation is somehow transmitted by molecular marks on the DNA across generations to affect the behaviour of the grandchildren. The problem is not just that the mechanism has not been worked out – it’s that it’s hard to even imagine what such a mechanism might be.


    3. It lacks even a plausible mechanism in most cases

    This problem is illustrated by a prominent paper in Escher’s list that claimed that olfactory experiences (association of an odor with a shock) in one generation of animals could lead to an alteration in sensitivity to that odor in the grandchildren of the animals exposed. The paper suffers from some of the statistical issues described above but actually if you just took all the data at face value you might well come away convinced that this is a real thing.

    But that’s when you need to calibrate your priors. As I previously tweeted, in order for this phenomenon to exist, this is what would have to happen:


    You can say the same for similar proposed scenarios in humans, involving specific types of experiences and specific behavioural outcomes in descendants.

    [I should note that the effects of direct exposure to chemicals or toxins of various kinds differ somewhat in this regard in that at least the initial exposure could plausibly directly affect the molecular landscape in the germline. However, one still has to answer why and how there would be a particular pattern of epigenomic modifications induced, how such modifications would persist through multiple rounds of epigenomic “rebooting”, as well as through the differentiation of the embryos so as to affect specific mature cell types, and why they would manifest with the particular effects claimed, again often on specific behaviours.]

    Many of the phenomena described in the TGEI literature are thus actually quite extraordinary. And extraordinary claims require extraordinary evidence. As discussed above, the evidence presented usually does not even rise to “ordinary”, but even if it did I think it is still worthwhile assessing the findings in another way.

    When I see a paper making an extraordinary claim I think it is appropriate to judge it not just on the statistical evidence presented within it, which refers only to the parameters of that particular experiment, but on the more general relative likelihood set up here:

    Which is more likely? That the researchers have happened on some truly novel and extraordinary biology or that something funny happened somewhere?

    A few years ago, when an international team of physicists published evidence that neutrinos can travel 0.002% faster than light, even the authors of the paper didn’t believe it, though all their measurements and calculations had been exhaustively checked and rechecked and the evidence from the experiment itself seemed pretty conclusive. Turns out a fiber-optic cable in some important machinery was loose, leading to a very small error in timing.

    Their skepticism was thus entirely justified, based on their wider knowledge, not just the isolated evidence from the particular experiment. This gets back to Sagan’s point about balancing wonder with skepticism – scientific training and knowledge should inform our judgment of novel findings, not just the p-values within a given paper.

    Now, I’m not saying that models of TGEI are as heretical as faster-than-light travel, but they’re certainly unexpected enough to make us calibrate our prior expectations downwards. Indeed, the one described above on olfactory experiences would require not just one but a multiplicity of novel mechanisms to achieve.

    But, hey, weird shit happens – who am I to rule it out? It may seem the height of arrogance or even dogmatic intransigence to do so. But there is one other general reason to be skeptical of the described phenomena and the supposed mechanisms that would have to exist to mediate them: there’s no reason for them. They don’t solve any problem – neither one facing scientists nor one facing organisms.


    4. It doesn’t solve any problem

    One of the problems facing scientists, to which researchers often suggest TGEI might be part of the answer, is known as the case of the “missing heritability”. I know, it sounds like Scooby Doo… “What do you know, it was Old Man Epigenetics all the time!” Well, it wasn’t.

    The missing heritability problem refers to the fact that many human traits and disorders have been shown to be highly heritable, but it has proven difficult to find the genetic variants that account for all of this heritability. Heritability is a highly technical term and should not be confused with the more colloquial term heredity.

    When we say a trait is, say, 60% heritable, what that means is that 60% of the variance in that trait across the population (the degree of spread of the typically bell-shaped curve of individual values) is attributable to genetic variation. This can be estimated in various ways, including twin or family studies, or, more recently, by studies across thousands of only very, very distantly related people.

    The important thing about these designs is that they explicitly dissociate effects due to genetics from effects due to environmental factors. In twin studies, for example, it is the excess similarity of monozygotic twins over that of dizygotic twins that allows us to attribute some proportion of the MZ twins’ similarity to shared genetics (and, inversely, some proportion of the phenotypic variance across the population to genetic differences).
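    The back-of-the-envelope version of this logic is Falconer’s formula: the heritability estimate is twice the difference between the MZ and DZ twin correlations. A sketch with invented correlations (chosen here to give the 60% figure used above):

        # Falconer's formula: a rough twin-study estimate of heritability.
        # The twin correlations below are invented for illustration.
        r_mz = 0.70   # phenotypic correlation between monozygotic twin pairs
        r_dz = 0.40   # phenotypic correlation between dizygotic twin pairs

        h2 = 2 * (r_mz - r_dz)   # heritability (additive genetic variance) = 0.6
        c2 = r_mz - h2           # shared (family) environment = 0.1
        e2 = 1 - r_mz            # non-shared environment plus noise = 0.3

        print(h2, c2, e2)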

    Since any TGEI effects due to environmental factors should affect all offspring equally, they would, based on these experimental designs, be explicitly excluded from the heritability term. They therefore cannot, by definition, help explain the missing heritability.

    Another idea is that TGEI provides a mechanism of passing on knowledge gained through behavioural experiences – such as an individual learning that an odor is associated with a shock. You might imagine a more natural scenario where an animal learns that a certain foodstuff makes it sick. Maybe it would be good to have some kind of mechanism to pass that knowledge on in animals that don’t have language and culture. But there just isn’t any evidence for the existence of such a phenomenon from the ecological and behavioural literature. There is no mystery in the field that TGEI is solving.

    The same can be said for the idea that TGEI is a mechanism of extending developmental plasticity across generations until evolution through genetic changes can catch up with the need for adaptation to a new environment. Developmental plasticity is itself a genetically encoded phenotype. It doesn’t need any help from epigenetics.

    Finally, the notion that epigenetics (transgenerational or not) may be a mediator of environmental causes of conditions like autism also has no real support. In fact, the notion that there are important environmental causes of autism has no real support. The heritability of autism is over 80%. It is an overwhelmingly genetic condition. People with autism were, on average, genetically at enormously high risk of developing autism (based on the average concordance between MZ twins). Twin and family studies strongly suggest that the shared family environment makes zero contribution to risk. So, even the roughly 20% of variance in risk that is not attributable to genetics does not necessarily indicate the existence of environmental risk factors. (Indeed, much of the remaining variance may be due to intrinsic noise in the processes of brain development).

    Ultimately, there is nothing where we can say: “We know that X happens, but we don’t know how. Maybe TGEI is a mechanism that can mediate X.” Instead, the introduction to these papers usually reads like this: “We know that TGEI can happen in X. [Narrator: we don’t know that]. Maybe it also happens in Y”. 

    So, until someone can show me a scenario where TGEI solves a known problem, has at least a conceivable, biologically plausible mechanism, is robust enough to provide an experimental system to work out the actual mechanism, and has convincing enough evidence of existing as a phenomenon in the first place, I will keep my skepticometer dialled to 11.

  4. Grandma’s trauma – a critical appraisal of the evidence for transgenerational epigenetic inheritance in humans


    Can molecular memories of our ancestors’ experiences affect our own behaviour and physiology? That idea has certainly grabbed hold of the public imagination, under the banner of the seemingly ubiquitous buzzword “epigenetics”. Transgenerational epigenetic inheritance is the idea that a person’s experiences can somehow mark their genomes in ways that are passed on to their children and grandchildren. Those marks on the genome are then thought to influence gene expression and affect the behaviour and physiology of people who inherit them. 

     
    The way this notion is referred to – both in popular pieces and in the scientific literature – you’d be forgiven for thinking it is an established fact in humans, based on mountains of consistent, compelling evidence. In fact, the opposite is true – it is based on the flimsiest of evidence from a very small number of studies with very small sample sizes and serious methodological flaws. [Note that there is, by contrast, very good evidence for this kind of mechanism in nematodes and plants and in specific circumstances involving transposable elements in mice].

    To save you the trouble, I dig into the dismal details below. But first a quick tour of some recent articles in the popular press on the idea of ancestral epigenetic effects:

    This one is from Discover magazine: Grandma's Experiences Leave a Mark on Your Genes

    "Your ancestors' lousy childhoods or excellent adventures might change your personality, bequeathing anxiety or resilience by altering the epigenetic expressions of genes in the brain."

    “According to the new insights of behavioral epigenetics, traumatic experiences in our past, or in our recent ancestors’ past, leave molecular scars adhering to our DNA. Jews whose great-grandparents were chased from their Russian shtetls; Chinese whose grandparents lived through the ravages of the Cultural Revolution; young immigrants from Africa whose parents survived massacres; adults of every ethnicity who grew up with alcoholic or abusive parents — all carry with them more than just memories.” 


    “The new field of epigenetics is showing how your environment and your choices can influence your genetic code — and that of your kids”


    This recent one is from the New York Review of Books: Epigenetics: The Evolution Revolution
    “This mechanism can be the hidden cause of our feelings of depression, anxiety, or paranoia. What is perhaps most surprising of all, this alteration could, in some cases, be passed on to future generations who have never directly experienced the stresses that caused their forebears’ depression or ill health.”
     

    And this one is from ABC Science in Australia: Epigenetics: how your life could change the cells of your grandkids

    “There's a very famous well-documented case where we can clearly see the impact of famine during pregnancy on a population over generations”, Professor Clark said. [Professor Susan Clark, Head of Genomics and Epigenetics at the Garvan Institute of Medical Research.] "In humans, the best example is during the WWII and the Dutch winter," she said. During WWII, the Germans cut off food supplies to parts of the Netherlands causing a famine. Professor Clark said babies born to women during this time had a lower birthweight. When those babies grew up and had their own babies, the third generation had significantly more problems with diabetes and obesity than the rest of the population.

    There are dozens of others I could have chosen, from equally prominent titles. They almost all give the impression that the evidence for transgenerational epigenetic effects in humans is very strong, even if the underlying mechanisms remain mysterious. (Here is an exception, by Adam Rutherford).

    Many of them go further and claim that such findings have revolutionary implications, overturning Darwinian theories of evolution, refuting genetic determinism (a straw man), and implicating epigenetics as a crucial new mechanism in medicine and public health – both a cause of disease and a potential therapeutic target.

    So, let’s take a look at some of these studies and see if the hype is warranted. (Spoiler: it isn’t).

    Here is an early one, from 2006: 

    Sex-specific, male-line transgenerational responses in humans.
    Pembrey ME, Bygren LO, Kaati G, Edvinsson S, Northstone K, Sjöström M, Golding J; ALSPAC Study Team. Eur J Hum Genet. 2006 Feb;14(2):159-66.

    The authors state that:

    We analysed food supply effects on offspring and grandchild mortality risk ratios (RR) using 303 probands and their 1818 parents and grandparents from the 1890, 1905 and 1920 Överkalix cohorts, northern Sweden… Sex-specific effects were shown in the Överkalix data; paternal grandfather’s food supply was only linked to the mortality RR of grandsons, while paternal grandmother’s food supply was only associated with the granddaughters’ mortality RR. These transgenerational effects were observed with exposure during the SGP [slow growth phase] (both grandparents) or fetal/infant life (grandmothers) but not during either grandparent’s puberty. We conclude that sex-specific, male-line transgenerational responses exist in humans and hypothesise that these transmissions are mediated by the sex chromosomes, X and Y. Such responses add an entirely new dimension to the study of gene–environment interactions in development and health.

    A couple of things jump out here – first, the sample is tiny, for an epidemiological study – just 303 people. Second, the sex-specific effects were not specifically hypothesised – they just emerged from the data. They are also bizarrely arbitrary.

    The authors found no general effect of grandparents’ nutrition during their slow growth phase (preteen years) on the mortality of their grandchildren. What do you do when you get no main effect? Arbitrarily test some covariates, of course, and in these studies, sex is the covariate that keeps on giving, especially because testing it in combinations across generations exponentially increases the hypothesis space that you can gratuitously explore. In this case, the probands’ paternal grandfather’s nutrition had an effect (and not that of any of their other grandparents) but only if the proband was male. And the paternal grandmother’s food supply had an effect but only if the proband was female.
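    To see how fast this kind of covariate-splitting inflates the chances of a spurious “hit”, here is a purely illustrative count, loosely modelled on this study’s design: four grandparental lines, two proband sexes, and three exposure windows already give a couple of dozen subgroup comparisons, each with its own shot at p < 0.05.

        # Illustrative count of the subgroup hypotheses opened up by splitting on
        # grandparental line, proband sex and exposure window (numbers for illustration only).
        from itertools import product

        grandparent_lines = ["paternal grandfather", "paternal grandmother",
                             "maternal grandfather", "maternal grandmother"]
        proband_sex = ["male", "female"]
        exposure_window = ["fetal/infant", "slow growth phase", "puberty"]

        subgroups = list(product(grandparent_lines, proband_sex, exposure_window))
        print(len(subgroups))         # 24 separate comparisons, each a fresh shot at p < 0.05
        print(0.05 * len(subgroups))  # ~1.2 spurious "hits" expected even with no true effect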

    Why? How? These are presented as interesting sex-specific effects and the authors hypothesise post hoc that they may involve epigenetic modifications of genes on the X and Y chromosomes, but really this is wild speculation. A more skeptical interpretation (appropriately so in my view) is that these “findings” are simply noise. They pop up as statistically significant amid a sea of non-significance, but they are in fact most likely just spurious statistical blips.

    We will see this trend repeated over and over in other studies. Here’s another one:

    Transgenerational effects of prenatal exposure to the Dutch famine on neonatal adiposity and health in later life. Painter RC, Osmond C, Gluckman P, Hanson M, Phillips DI, Roseboom TJ. BJOG. 2008 Sep;115(10):1243-9. doi: 10.1111/j.1471-0528.2008.01822.x.

    OBJECTIVE: Maternal undernutrition during gestation is associated with increased metabolic and cardiovascular disease in the offspring. We investigated whether these effects may persist in subsequent generations. DESIGN: Historical cohort study. SETTING: Interview during a clinic or home visit or by telephone. POPULATION: Men and women born in the Wilhelmina Gasthuis in Amsterdam between November 1943 and February 1947. METHODS: We interviewed cohort members (F1) born around the time of the 1944-45 Dutch famine, who were exposed or unexposed to famine in utero, about their offspring (F2). MAIN OUTCOME MEASURES: Birthweight, birth length, ponderal index and health in later life (as reported by F1) of the offspring (F2) of 855 participating cohort members, according to F1 famine exposure in utero. RESULTS: F1 famine exposure in utero did not affect F2 (n = 1496) birthweight, but, among the offspring of famine-exposed F1 women, F2 birth length was decreased (-0.6 cm, P adjusted for F2 gender and birth order = 0.01) and F2 ponderal index was increased (+1.2 kg/m(3), P adjusted for F2 gender and birth order = 0.001). The association remained unaltered after adjusting for possible confounders. The offspring of F1 women who were exposed to famine in utero also had poor health 1.8 (95% CI 1.1-2.7) times more frequently in later life (due to miscellaneous causes) than that of F1 unexposed women. CONCLUSIONS: We did not find transgenerational effects of prenatal exposure to famine on birthweight nor on cardiovascular and metabolic disease rates. F1 famine exposure in utero was, however, associated with increased F2 neonatal adiposity and poor health in later life. Our findings may imply that the increase in chronic disease after famine exposure in utero is not limited to the F1 generation but persists in the F2 generation.


    Here’s the table showing the data on which those findings are based:



    Again, a small sample, with lots of parameters studied (e.g., causes of death, where “Other” was the only category to show a significant effect, with a tiny number of people), with no particular hypotheses about which ones are expected to show an effect, in which direction. Basically, any difference anywhere will do.

    The tiny differences in that table are taken as justifying the sweeping general claim made in the title of the paper. (And of course, many people will cite it based on the title, presuming the evidence actually supports such a claim).

    An interesting point about this study is that the children of F1 males who were exposed to the famine conditions around the time of their birth showed no effect on any measure. But… wait for it… a follow-up study of the exact same people later in life found an “effect” on the children of F1 males but not those of F1 females:

    Transgenerational effects of prenatal exposure to the 1944-45 Dutch famine.

    Veenendaal MV, Painter RC, de Rooij SR, Bossuyt PM, van der Post JA, Gluckman PD, Hanson MA, Roseboom TJ. BJOG. 2013 Apr;120(5):548-53. doi: 10.1111/1471-0528.12136. Epub 2013 Jan 24.

    OBJECTIVE: We previously showed that maternal under-nutrition during gestation is associated with increased metabolic and cardiovascular disease in the offspring. Also, we found increased neonatal adiposity among the grandchildren of women who had been undernourished during pregnancy. In the present study we investigated whether these transgenerational effects have led to altered body composition and poorer health in adulthood in the grandchildren. DESIGN: Historical cohort study. SETTING: Web-based questionnaire. POPULATION: The adult offspring (F2) of a cohort of men and women (F1) born around the time of the 1944-45 Dutch famine. METHODS: We approached the F2 adults through their parents. Participating F2 adults (n = 360, mean age 37 years) completed an online questionnaire. MAIN OUTCOME MEASURES: Weight, body mass index (BMI), and health in F2 adults, according to F1 prenatal famine exposure. RESULTS: Adult offspring (F2) of prenatally exposed F1 fathers had higher weights and BMIs than offspring of prenatally unexposed F1 fathers (+4.9 kg, P = 0.03; +1.6 kg/m(2), P = 0.006). No such effect was found for the F2 offspring of prenatally exposed F1 mothers. We observed no differences in adult health between the F2 generation groups. CONCLUSIONS: Offspring of prenatally undernourished fathers, but not mothers, were heavier and more obese than offspring of fathers and mothers who had not been undernourished prenatally. We found no evidence of transgenerational effects of grandmaternal under-nutrition during gestation on the health of this relatively young group, but the increased adiposity in the offspring of prenatally undernourished fathers may lead to increased chronic disease rates in the future.

    Surely these kinds of studies are not all that bad, you say. Perhaps I’m picking some particularly egregious ones? Well, no. These are the ones that get cited all the time as the evidence for transgenerational effects of famine. And I've yet to find one that is in any way convincing.

    Let’s do another:

    Change in paternal grandmothers' early food supply influenced cardiovascular mortality of the female grandchildren. Bygren LO, Tinghög P, Carstensen J, Edvinsson S, Kaati G, Pembrey ME, Sjöström M. BMC Genet. 2014 Feb 20;15:12. doi: 10.1186/1471-2156-15-12.

    Background: This study investigated whether large fluctuations in food availability during grandparents' early development influenced grandchildren's cardiovascular mortality. We reported earlier that changes in availability of food - from good to poor or from poor to good - during intrauterine development was followed by a double risk of sudden death as an adult, and that mortality rate can be associated with ancestors' childhood availability of food. We have now studied transgenerational responses (TGR) to sharp differences of harvest between two consecutive years for ancestors of 317 people in Överkalix, Sweden. Results: The confidence intervals were very wide but we found a striking TGR. There was no response in cardiovascular mortality in the grandchild from sharp changes of early exposure, experienced by three of the four grandparents (maternal grandparents and paternal grandfathers). If, however, the paternal grandmother up to puberty lived through a sharp change in food supply from one year to next, her sons' daughters had an excess risk for cardiovascular mortality (HR 2.69, 95% confidence interval 1.05-6.92). Selection or learning and imitation are unlikely explanations. X-linked epigenetic inheritance via spermatozoa seemed to be plausible, with the transmission, limited to being through the father, possibly explained by the sex differences in meiosis. Conclusion: The shock of change in food availability seems to give specific transgenerational responses.

    This is another orgy of covariate mining in a tiny sample. The great thing is that you can mine by sex combinatorially across generations, so you really get extra juice out of it when dredging for “significant” results somewhere. In this case, it is supposedly an effect only of the paternal grandmother that matters – so transmitted first through the female germline and then through the male germline, AND it only affects the granddaughters, not the grandsons! That’s some serious multiple testing, with no prior hypothesis, in a sample of 317 people!

    There are a number of other studies along the same lines, including some looking at the supposed effects of things like grandparents smoking from an early age. They all suffer from the same problems:

    1.    Very small samples
    2.    Lack of predefined hypotheses
    3.    Extreme, combinatorial covariate dredging (i.e., massive multiple testing)
    4.    HARKing (hypothesising after results are known)

    These all fall under the banner of Questionable Research Practices – the kinds of things that have filled the scientific literature in many fields with spurious findings and false positives. This is the difference between wanting to test something (good science) and wanting to find something (bad science).
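
    To make concrete how easily this approach manufactures false positives, here is a quick toy simulation in Python (the numbers are entirely made up and are not data from any of the studies above). It builds a null world in which the “exposure” has no effect at all, then tests it separately in eight subgroups – grandparental lineage by grandparent sex by grandchild sex – the way these studies effectively do:

        # Toy simulation: combinatorial subgroup testing under the null.
        # All numbers are illustrative assumptions, not data from the studies above.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)

        N = 300            # roughly the size of the Överkalix proband sample
        N_SIMS = 2000      # number of simulated null "studies"
        ALPHA = 0.05
        N_SUBGROUPS = 8    # grandparental lineage x grandparent sex x grandchild sex

        false_positive_studies = 0
        for _ in range(N_SIMS):
            exposure = rng.integers(0, 2, size=N)            # grandparent famine yes/no (no real effect)
            outcome = rng.normal(size=N)                     # e.g. a mortality risk score
            subgroup = rng.integers(0, N_SUBGROUPS, size=N)  # which sex/lineage combination

            hit = False
            for g in range(N_SUBGROUPS):
                in_g = subgroup == g
                exposed = outcome[in_g & (exposure == 1)]
                unexposed = outcome[in_g & (exposure == 0)]
                if len(exposed) > 1 and len(unexposed) > 1:
                    _, p = stats.ttest_ind(exposed, unexposed)
                    if p < ALPHA:
                        hit = True
            false_positive_studies += hit

        print(f"Studies with at least one 'significant' subgroup effect: "
              f"{false_positive_studies / N_SIMS:.0%}")
        # Expect roughly 1 - 0.95**8, i.e. about a third, despite no real effect anywhere.

    With eight subgroup tests at p < 0.05, roughly a third of such null studies will turn up at least one “significant” transgenerational effect somewhere, purely by chance – before you even start varying the exposure windows or the outcome measures.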

    Taking whatever “significant” results pop up from these kinds of analyses at face value (as opposed to recognising them as noise) leads the authors to contort themselves into some truly arcane positions, like this one: “The evidence from this study suggests that when the mother does not smoke in pregnancy the maternal grandmother's smoking habit in pregnancy has a positive association with her grandson's fetal growth.” Got that? Grandma’s smoking can have an effect on her daughter’s (not her son’s) sons (not daughters), but only if mom didn’t smoke herself.

    These kinds of uber-specific scenarios are absurd on their face, to say nothing of the fact that they would require the invention of multiple new biological mechanisms to explain the sex-specific transmission (often switching from one sex to the other as it goes), as well as the sex-specific effects on grandchildren. They certainly don’t justify the sweeping generalisations made in the field, when the only way to get a significant result is to carve the data eight ways.

    Things don’t get any better in recent papers that have attempted to identify the supposed genomic marks (thought to be mediated by DNA methylation) responsible for these supposed effects, like this one:

    Grandmaternal stress during pregnancy and DNA methylation of the third generation: an epigenome-wide association study.

    Serpeloni F, Radtke K, de Assis SG, Henning F, Nätt D, Elbert T. Transl Psychiatry. 2017 Aug 15;7(8):e1202. doi: 10.1038/tp.2017.153.

    Abstract: Stress during pregnancy may impact subsequent generations, which is demonstrated by an increased susceptibility to childhood and adulthood health problems in the children and grandchildren. Although the importance of the prenatal environment is well reported with regards to future physical and emotional outcomes, little is known about the molecular mechanisms that mediate the long-term consequences of early stress across generations. Recent studies have identified DNA methylation as a possible mediator of the impact of prenatal stress in the offspring. Whether psychosocial stress during pregnancy also affects DNA methylation of the grandchildren is still not known. In the present study we examined the multigenerational hypothesis, that is, grandmaternal exposure to psychosocial stress during pregnancy affecting DNA methylation of the grandchildren. We determined the genome-wide DNA methylation profile in 121 children (65 females and 56 males) and tested for associations with exposure to grandmaternal interpersonal violence during pregnancy. We observed methylation variations of five CpG sites significantly associated with the grandmother's report of exposure to violence while pregnant with the mothers of the children. The results revealed differential methylation of genes previously shown to be involved in circulatory system processes. This study provides support for DNA methylation as a biological mechanism involved in the transmission of stress across generations and motivates further investigations to examine prenatal-dependent DNA methylation as a potential biomarker for health problems.

    Yes, that’s right – an epigenome-wide association study with a sample size of 121 – and the “cases” numbered 27. I’ll just leave that there.
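
    To put rough numbers on why that matters, here is a back-of-the-envelope power calculation in Python (normal approximation; the effect size and significance threshold are illustrative assumptions, not values taken from the paper) for comparing methylation at a single CpG site between 27 exposed and 94 unexposed children:

        # Back-of-the-envelope power calculation (normal approximation).
        # The effect size and threshold are assumptions for illustration only.
        import numpy as np
        from scipy.stats import norm

        n1, n2 = 27, 94      # exposed vs unexposed children, as in the study above
        d = 0.5              # assumed standardized ("medium") effect at a single CpG site
        alpha = 1e-7         # approx. Bonferroni threshold for an array of ~450,000 CpGs

        z_crit = norm.ppf(1 - alpha / 2)
        noncentrality = d * np.sqrt(n1 * n2 / (n1 + n2))
        power = norm.cdf(noncentrality - z_crit)
        print(f"power ≈ {power:.4f}")   # ≈ 0.001 – essentially no power at all

    Under even these fairly generous assumptions, the chance of detecting a genuine effect at any given site is around one in a thousand. The flip side is that anything that does cross the threshold in such a design is far more likely to be noise, and its apparent effect size will be grossly inflated (the winner’s curse).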

    So, what are we to make of all this? You could be charitable and say the evidence is weak, circumstantial, observational, and correlative, and that it warrants circumspection and careful interpretation (and further research, of course!). I would go further and say that nothing in any of those papers rises to the level of what should properly be called a finding. There’s no there there.

    But wait, you say, what about all the animal studies that supposedly clearly show transgenerational epigenetic inheritance? Well, they suffer from all the same methodological problems as these human studies, as I have previously discussed here and here.


    How data become lore

    So, if these data are so terrible, why do these studies get published and cited in the scientific literature and hyped so much in the popular press? There are a few factors at work, which also apply in many other fields:

    1.    The sociology of peer review. By definition, peer review is done by experts in “the field”. If you are an editor handling a paper on transgenerational epigenetic inheritance in humans (or animals), you’re likely to turn to someone else who has published on the topic to review it. But in this case all the experts in the field are committed to the idea that transgenerational epigenetic inheritance in mammals is a real thing, and are therefore unlikely to question the underlying premise in the process of their review. [To be fair, a similar situation pertains in most fields].

    2.    Citation practices. Most people citing these studies have probably not read the primary papers or looked in detail at the data. They either just cite the headline claim or they recite someone else’s citation, and then others recite that citation, and so on. It shouldn’t be that way, but it is – people are lazy and trust that someone else has done the work to check whether the paper really shows what it claims to show. And that is how weak claims based on spurious findings somehow become established “facts”. Data become lore.

    3.    The media love a sexy story. There’s no doubt that epigenetics is exciting. It challenges “dogma”, it’s got mavericks who buck the scientific establishment, it changes EVERYTHING about what we thought we knew about X, Y and Z, it’s even got your grandmother for goodness sake. This all makes great copy, even if it’s based on shaky science.

    4.    Public appetite. The idea of epigenetic effects resonates strongly among many members of the general public. This is not just because it makes cute stories or is scientifically unexpected. I think it’s because it offers an escape from the spectre of genetic determinism – a spectre that has grown in power as we find more and more “genes for” more and more traits and disorders. Epigenetics seems to reassure (as the headline in TIME magazine put it) that DNA is not your destiny. That you – through the choices you make – can influence your own traits, and even influence those of your children and grandchildren. This is why people like Deepak Chopra have latched onto it, as part of an overall, spiritual idea of self-realisation.


    So, there you have it. In my opinion, there is no convincing evidence showing transgenerational epigenetic inheritance in humans. But – for all the sociological reasons listed above – I don’t expect we’ll stop hearing about it any time soon.



  5. Genetics, IQ, and ‘race’ – are genetic differences in intelligence between populations likely?


    Last week (May 2nd 2018) the Guardian published a piece by me headlined “Why genetic IQ differences between ‘races’ are unlikely”. In it, I argued that the genetic architecture and evolutionary history of intelligence make it different from other traits and inherently unlikely to vary systematically for genetic reasons between large population groups.

    Image credit: https://mashable.com/2013/04/02/obama-brain/#kwOQGJUunEqn
       
    I was rather quickly (and, in some cases, rather aggressively) taken to task by a number of population geneticists on Twitter for being vague, overly general and hand-wavy, and for ignoring or not citing relevant papers in population genetics. Or indeed, for being flat out wrong. (See also a critique here). Some of those criticisms may well be valid, but some reflect the limitations of writing a short piece for the general public, so I wanted to go into more detail here on my reasoning.  

    The other criticism was that I seemed to be making statements as if they were strong scientific claims or settled positions of the field when actually they were a series of arguments representing what I think about the issue. It wasn’t my intent to misrepresent my arguments as settled facts but I can certainly see how it reads like that, so I’ll have to plead guilty as charged on that one.

    The reason I wrote the piece, for a newspaper in particular, is that there has been a lot of recent public debate about the issue of genetic differences in intelligence between ‘races’, which I felt suffered from an overly casual extrapolation from what we know about the genetics of other traits, especially physical ones. If we are going to talk about the genetics of intelligence, we should talk about the genetics of intelligence. 

    This is liable to get long (and technical in places, though maybe not enough for some people)…


    Intelligence is a real thing and is really heritable

    To begin, there are a few general points to make. First of all, intelligence is a thing. It’s not easy to define precisely and it may not be one thing, but I think it is probably everyone’s experience that some people are smarter (brighter, quicker, sharper, cleverer) than others and that such differences are apparent from an early age.

    There are scores of different definitions of intelligence, including things like: “the abilities to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate one's environment.” To my mind, it is the ability to form abstract concepts of things, types of things, relations between things, and higher-order relations in complex scenarios, and to apply those abstract concepts to solve problems in new situations that is the most crucial aspect of intelligence as a cognitive faculty and of intelligent behaviour. 
     
    Second, IQ (intelligence quotient) tests do measure something real that relates to intelligence. Scores on IQ tests provide a very imperfect proxy of a complicated trait, and clearly over-simplify it to variation along a single dimension. But they give us – especially geneticists – something to work with. If we want to investigate the genetics of a trait, we need some way to measure it and IQ tests provide that. Even if they are imperfect, they are at least quite reliable, in that if the same person takes the test multiple times, the scores are highly correlated. And they seem to also have some validity, in the sense that IQ scores predict (really predict, not just correlate with) a host of real-world outcomes that people care about, such as educational attainment, type of job, income, and even health and longevity.

    They’re not intended to sum up everything about a person’s cognitive abilities in a single number, as sometimes charged. They are simply a rough measure that proves useful as an experimental tool and for real-world predictions. That said, they are not without their biases, which becomes important when interpreting observed differences in IQ scores between people in different cultures.

    Third, IQ scores really are heritable. If you measure IQ across any given population, you get the familiar bell-shaped curve or normal distribution. (With a little bump at the low end representing people with intellectual disability). That distribution can be described by the mean, or average value, and by the variance – how wide the bell is. Heritability measures how much of that variance is due to genetic differences between people. It can be estimated by flipping the idea around and asking whether people who are more genetically similar to each other are also more similar in intelligence.

    It turns out they are, and twin and adoption studies have shown this is not due to being reared in the same family environment, but is really due to their shared genes. Newer methods confirm this using only very distantly related people across general population cohorts. These methods provide a convergent estimate of 50% for the heritability of the trait. That doesn’t mean 50% of your intelligence comes from your genes, however – it means around 50% of the variation that we see in the trait across the population is due to genetic variation. (In other words, if we were all clones, we’d all be a lot more similar to each other in intelligence).
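
    For the record, here is the simplest version of the twin-based logic, Falconer’s formula, as a minimal sketch (the correlations are illustrative values in the range reported for adult IQ, not figures from any particular study):

        # Minimal sketch of the twin-study logic (Falconer's formula).
        # The correlations are illustrative, not taken from any one study.
        r_mz = 0.75   # IQ correlation between identical (monozygotic) twin pairs
        r_dz = 0.45   # IQ correlation between fraternal (dizygotic) twin pairs

        h2 = 2 * (r_mz - r_dz)   # heritability: variance attributable to genetic differences
        c2 = 2 * r_dz - r_mz     # shared (family) environment
        e2 = 1 - r_mz            # non-shared environment plus measurement error
        print(f"h2 = {h2:.2f}, c2 = {c2:.2f}, e2 = {e2:.2f}")   # h2 = 0.60, c2 = 0.15, e2 = 0.25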


    Group differences in intelligence

    Okay, now what has all this to do with ‘races’? (See here for more on the limitations of that term). People have been doing IQ tests in many countries across the world for nearly a century now. The results of those tests show lots of differences in the mean IQ scores between populations, at the level of countries and also at the higher level of continents. These are reflected, to a lesser degree, as differences in IQ scores between various ethnic groups within countries like the United States.

    Given IQ is a partly heritable trait, the simple conclusion seems to be that the observed differences in mean IQ between different populations will also be partly attributable to genetic differences. This is a fallacy. Heritability measures the proportion of variance in a trait that can be attributed to genetic variation, within the particular population under study. It is not a stable or universal number, but can vary in different populations. In particular, if there is a lot of variation in environmental factors that affect a trait within a given population, then the heritability will be lower, because proportionally more of the variance in phenotype will be due to environmental variation.

    This means that if two populations have considerable differences in relevant environmental factors between them, these could completely explain the observed difference in mean IQ, even if much of the variance within populations is due to genetic variation. (And we know there are huge differences between the populations in question in highly relevant factors like infant and maternal health, nutrition and education, for example).
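
    The logic is easy to demonstrate with a toy simulation (made-up numbers, purely illustrative): build two groups that draw their genetic values from exactly the same distribution, give one of them a uniform environmental penalty, and the within-group heritability tells you nothing about the source of the gap between their means:

        # Toy simulation of the within- versus between-group point.
        # Both groups have identical genetic distributions; group B gets a
        # uniform environmental penalty. All numbers are made up.
        import numpy as np

        rng = np.random.default_rng(0)
        N = 100_000
        g_a = rng.normal(0, np.sqrt(0.5), N)        # genetic values, group A
        g_b = rng.normal(0, np.sqrt(0.5), N)        # genetic values, group B (same distribution)
        e_a = rng.normal(0, np.sqrt(0.5), N)        # environment, group A
        e_b = rng.normal(0, np.sqrt(0.5), N) - 1.0  # environment, group B, shifted down

        pheno_a, pheno_b = g_a + e_a, g_b + e_b
        print(f"heritability within A: {np.var(g_a) / np.var(pheno_a):.2f}")
        print(f"heritability within B: {np.var(g_b) / np.var(pheno_b):.2f}")
        print(f"gap between group means: {pheno_a.mean() - pheno_b.mean():.2f}")
        # Heritability is ~0.5 inside each group, yet the entire one-SD gap
        # between the group means is environmental by construction.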

    This point is widely acknowledged, even by proponents of genetic differences like Charles Murray and Richard Herrnstein, the authors of The Bell Curve. However, while conceding that the existence of mean differences between populations in IQ scores does not necessarily imply they are driven by genetic differences, they conclude that genetic effects are “highly likely” to be at play (in addition to environmental ones).

    This sounds quite reasonable, on the face of it – the explanation is “probably a bit of both”. That position was given some support by geneticist David Reich in a recent Op-Ed piece in The New York Times. He stated:

    “… since all traits influenced by genetics are expected to differ across populations (because the frequencies of genetic variations are rarely exactly the same across populations), the genetic influences on behavior and cognition will differ across populations, too.”

    He goes on to say that: “You will sometimes hear that any biological differences among populations are likely to be small, because humans have diverged too recently from common ancestors for substantial differences to have arisen under the pressure of natural selection. This is not true. The ancestors of East Asians, Europeans, West Africans and Australians were, until recently, almost completely isolated from one another for 40,000 years or longer, which is more than sufficient time for the forces of evolution to work.”

    The positions of Murray and Herrnstein (and of many other commentators like Sam Harris and Andrew Sullivan), along with the statements of David Reich, thus amount to the claim that we should, by default, expect there to be genetic differences affecting IQ (indeed, all traits) between populations that were genetically isolated from each other for long periods.

    Reich’s position is based on an explicit extrapolation from the genetics of physical traits to the genetics of intelligence and this is what I was addressing in my piece in the Guardian. What does the genetics of intelligence really tell us about how to calibrate these expectations?


    Intelligence as a special trait

    In the Guardian piece, I argued that intelligence is not like most other traits, because of its central role in our evolution:

    “Intelligence is our defining characteristic and our only real advantage over other animals. It gave us an initial leg-up in colonising diverse environments and its usefulness was massively amplified by the invention of culture and language. This increasing selective advantage of ever greater intelligence led to a snowball effect, which was probably only stopped by the limitations of the size of the birth canal and the metabolic demands of a large brain.”

    Some people challenged me on that, offering a few other traits that may also have been crucial in our evolutionary success, such as standing upright, the ability to run long distances, and throwing skill, and I guess I would add manual dexterity myself. That’s all fine – it doesn’t really change any of the subsequent arguments.

    I contend that intelligence has been more or less maximised over the course of evolution along the lineage leading to humans. That’s a pretty stark claim (the “more or less” is very important there!) and I’m sure it will provoke howls of disagreement from some quarters as being essentially untestable or just wrong. To be more specific, what I mean is that intelligence appears to have been under strong directional selection throughout our evolution. Being smarter became advantageous, and, through the snowball effects referred to above, became more and more advantageous over time. So, natural selection progressively selected for greater and greater intelligence, up to a point where the costs became unsustainable.

    This is in contrast to most traits, which are effectively optimised, not maximised. I used height as a comparison, but you can think of how thick our skin is, our blood pressure, levels of liver enzymes, how active our immune system is, etc., etc. It wasn’t good, over the course of our evolution, for any of these things to just keep increasing. They are set at a “just right” level, while I argue that intelligence is set at an “as much as you can bear” level. (With the costs including difficult or dangerous births of big-headed babies, extremely long periods of infant helplessness and consequent parental investment, and the energy needs of our metabolically greedy brains).

    That’s certainly arguable, but it seems defensible to me, so I’ll stick with it. 

    The upshot is this (again in my opinion): Evolution, over millions of years, selected for a program of neural development that directs formation of our incredibly complex brain. Not just a big brain – size doesn’t get you far by itself – a complicated, sophisticated, highly organised brain, capable of extraordinary things, even being impressed by itself. 

    Once evolution got us to this point, where we were completely reliant on our intelligence to survive in all kinds of environments, I argue that natural selection shifted to protecting its investment. What I mean is that intelligence went from being subject to strong directional selection (which may have exhausted its potential over millions of years) to being subject to strong purifying selection, where keeping that genomic program of neural development free from harmful mutations became the key challenge.


    The genetic architecture of intelligence

    What does that mean for what we call the genetic architecture of intelligence – the patterns of genetic variation that affect it and the relationship between genotypes and the phenotype of intelligence? I argue that it means that:

    “…most random mutations that affect intelligence will do so negatively.
    Statistically speaking, random mutations are vastly more likely to mess up the complicated genetic program for brain development than improve it, especially in ways that natural selection has not already fixed in our species. For the same reason, random tinkering with the highly tuned engine of a Formula One car is vanishingly unlikely to improve performance. Similarly, we shouldn’t expect intelligence to be affected by a balance of IQ-boosting mutations and IQ-harming mutations. Instead, genetic differences in intelligence may largely reflect the burden of mutations that drag it down.”

    So, unlike a trait like height, which we can think of as being determined in any individual by a balance between height-decreasing and height-increasing genetic variants, my contention is that the genetic contribution to variation in intelligence is determined mainly by the burden of intelligence-decreasing genetic variants. (That’s why I previously suggested, only partly tongue-in-cheek, that we should call it “the genetics of stupidity”).

    I went on to say that:

     “Because most random mutations that affect intelligence will reduce it, evolution will tend to select against them. Inevitably, new mutations will always arise in the population, but ones with a large effect on intelligence – that cause frank intellectual disability, for example – will be swiftly removed by natural selection. Mutations with moderate effects may persist for a few generations, and ones with small effects may last even longer. But because many thousands of genes are involved in brain development, natural selection can’t keep them all free of mutations all the time. It’s like trying to play multiple games of Whack-a-mole at once, with only one hammer.”

    There are a few intertwined points here. First, the program of neural development is incredibly complicated and can therefore be affected by mutations in thousands of genes. This means that intelligence will be a highly polygenic trait. I clearly gave the impression that I thought this alone meant it would be difficult to select for intelligence. This is not the case at all and not what I was trying (and clearly failing) to say. 


    Polygenicity

    Polygenic traits can, of course, be selected for. Most of the traits selected for in animal or plant breeding are highly polygenic. This includes behavioral traits, as in the case of dog breeds. Different breeds of dogs were selected for all kinds of behaviors, including herding, guarding, retrieving, chasing, tracking, fighting, etc. And it also includes cognitive traits. For example, rats can be selectively bred from animals that are better or worse at completing a maze. After multiple generations you can end up with lines of “maze-bright” and “maze-dull” rats that have a huge difference in how well they can perform this task. 



    However, these examples required intense artificial selection to induce a change in phenotype. Moreover, though I don’t know if this has been tested in these cases, my guess is that that selective pressure would have to be sustained for the phenotypic difference to be maintained. (That is certainly typical for lab-based artificial selection experiments). My expectation is that if you left these populations of rats or dogs alone for multiple generations they would naturally drift back towards the species-typical set point of the trait. 

    In any case, I’m explicitly not trying to say that selection on intelligence is impossible. I’m trying to assess whether the circumstances required for it to happen are, a priori, likely or not. In fact, I’m trying to do something even more specific – assess whether it is plausible that such circumstances might have pertained in a differential way between continents but in a systematic and consistent way within continents over long periods of time. That is the scenario required to end up with the supposed systematic genetic differences in intelligence between populations with different continental ancestry.

    The reason I think polygenicity is important in this case is that it means there is a huge mutational target that natural selection has to keep an eye on. The constant production of new mutations in sperm and egg cells, the fact that so many of them could affect intelligence, and the fact that they will tend to do so negatively, should, in my opinion, make it harder to push intelligence consistently upwards, when new mutations will constantly be pulling it back down.

    Again, I would argue this is a different situation to many other traits. For any trait, new mutations are likely to degrade, rather than improve, the developmental program and biological pathways underlying it. But for some traits, the “goal” of that program is to hit a species-optimal set point. Mutations affecting that program could mean you miss high or miss low – there’s no reason to expect to go one way or the other, really (as far as I can see). 

    For intelligence, following my argument above, the goal is to hit the maximal level possible. New mutations will thus not just replenish genetic variation affecting the trait (in either direction, as in standard models of stabilising selection); they will tend to push it downwards. 
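
    As a crude sketch of that tug-of-war (every parameter here is an assumption, chosen purely for illustration), you can pit the breeder’s equation – the standard description of the per-generation response to directional selection (response = heritability × selection differential) – against a constant downward pull from newly arising, mostly deleterious mutations:

        # Crude sketch: breeder's equation (R = h2 * S) versus a constant downward
        # pull from new, mostly deleterious mutations. All values are assumptions
        # chosen purely for illustration.
        h2 = 0.5                 # heritability of the trait
        S = 0.5                  # selection differential, in phenotypic SDs per generation
        mutation_pull = 0.15     # assumed per-generation decline in the mean from new mutations

        mean_trait = 0.0
        for generation in range(100):
            mean_trait += h2 * S - mutation_pull   # selection pushes up, mutation pressure pulls down
        print(f"net change after 100 generations of selection: {mean_trait:+.1f} SD")

        # Selection stops (e.g. conditions change) but mutations keep arriving:
        for generation in range(100):
            mean_trait -= mutation_pull
        print(f"after a further 100 generations without selection: {mean_trait:+.1f} SD")

    With these made-up numbers the trait can still be pushed upwards, but the realised response is only a fraction of what selection alone would deliver, and the gains start to erode as soon as the selective pressure relaxes – while the mutational pull never stops.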

    Now, maybe someone will tell me why that actually doesn’t matter, but it seems to me that this will tend to oppose any efforts of directional selection to push intelligence upwards in any given population. Whether that is true or not (or the size of the effect it could have) may depend on how much the trait is dominated by the effects of rare mutations. Various lines of evidence suggest that the collective influence of such mutations on intelligence is very substantial.

    For example:

    Chromosomal deletions and duplications that cause clinical intellectual disability in some carriers are associated with a more subtle reduction in cognitive performance in others in the “general population”:

    Kendall KM, et al. Cognitive Performance Among Carriers of Pathogenic Copy Number Variants: Analysis of 152,000 UK Biobank Subjects. Biol Psychiatry. 2017 Jul 15;82(2):103-110.

    Similarly, ultra-rare mutations that disrupt brain-expressed proteins are associated with decreased educational attainment:

    Ganna A, et al. Ultra-rare disruptive and damaging mutations influence educational attainment in the general population. Nat Neurosci. 2016 Dec;19(12):1563-1565.

    Those studies relate to very rare classes of mutations but the logic extends across the whole spectrum – rarer mutations will have larger effects on the phenotype. The collective importance of these effects is illustrated by the fact that pedigree-based analyses using identity-by-descent capture significantly more heritability than SNP-based methods:

    Hill WD, et al. Genomic analysis of family data reveals additional genetic effects on intelligence and personality. Mol Psychiatry. 2018 Jan 10. doi: 10.1038/s41380-017-0005-1. https://www.ncbi.nlm.nih.gov/pubmed/?term=29321673

    Moreover, even for the common variants that have been associated with intelligence or one of its proxies, there is evidence that these are under negative selection, with comparatively rarer and newer ones explaining disproportionately more of the variance:

    Zeng J, et al. Signatures of negative selection in the genetic architecture of human complex traits. Nat Genet. 2018 May;50(5):746-753. https://www.nature.com/articles/s41588-018-0101-4


    Pleiotropy 

    One of the other points I made in the Guardian relates to the way in which genetic variants affect intelligence, at a biological level, which is likely to be highly indirect and non-specific:
     
    “Another crucial point is that genetics tends to affect intelligence in a much more indirect way than it does skin colour, height, and other physical traits. Like that Formula One car’s performance, intelligence is an emergent property of the whole system. There is no dedicated genetic module “for intelligence” that can be acted on independently by natural selection – not without affecting many other traits at the same time, often negatively.”

    The key part there is that intelligence is an emergent property of the whole system. At a neural level, there are no specific local parameters that correlate well with intelligence. Instead, it correlates with global properties such as overall brain size, white matter “integrity” across the whole brain, and various parameters of whole brain networks, such as global efficiency. This may seem a bit vague, but effectively intelligence reflects how well the brain is put together.

    This view is supported by the fact that genes with functions in neural development are highly enriched among those found to be associated with intelligence in genome-wide association studies or analyses of rare mutations. 

    However, there are two important points about the apparent specificity of that finding: first, many proteins involved in the cellular processes of neural development, such as cell migration, axon guidance or synapse formation, for example, are also involved in other processes in other tissues. This is the norm, in fact. Second, mutations in genes whose products are not directly involved in neurodevelopmental processes (like metabolic enzymes, for example) can nevertheless indirectly affect those processes. (The genes don’t have to be “for neural development” for mutations in them to affect neural development).  

    As a result, many of the genetic variants that affect intelligence will also affect other traits (a situation known as pleiotropy). I argued, in the Guardian piece, that this widespread pleiotropy would tend to act as a brake on directional selection on intelligence, due to potentially negative offsetting effects on other traits. 

    A number of people criticized that point, and criticized me for making it apparently casually and not citing any relevant literature on the issue. That contention was not plucked out of the air but based on my reading of papers like these: 

    McGuigan K, Collet JM, Allen SL, Chenoweth SF, Blows MW. Pleiotropic mutations are subject to strong stabilizing selection. Genetics. 2014 Jul;197(3):1051-62.

    Keightley, P.D., and Hill, W.G. (1990). Variation maintained in quantitative traits with mutation–selection balance: pleiotropic side-effects on fitness traits. Proc. R. Soc. Lond. B 242, 95–100.

    Eyre-Walker A. Genetic architecture of a complex trait and its implications for fitness and genome-wide association studies. Proc Natl Acad Sci U S A. 2010 Jan 26;107 Suppl 1:1752-6.

    Zhang XS, Hill WG. Joint effects of pleiotropic selection and stabilizing selection on the maintenance of quantitative genetic variation at mutation-selection balance. Genetics. 2002 Sep;162(1):459-71.

    I am certainly happy to be enlightened if my reading of those papers was incorrect. It is clearly a hugely complex issue that has been debated since the time of Fisher in the 1930s and that continues to be investigated. Indeed, Michael Eisen and Graham Coop both pointed out some additional papers on the subject, which come to different conclusions:

    Johnson T, Barton N. Theoretical models of selection and mutation on quantitative traits. Philos Trans R Soc Lond B Biol Sci. 2005 Jul 29;360(1459):1411-25.

    Simons YB, Bullaughey K, Hudson RR, Sella G. A population genetic interpretation of GWAS findings for human quantitative traits. PLoS Biol. 2018 Mar 16;16(3):e2002985.

    In particular, Simons et al. find that directional selection can act on a trait even when the individual variants affecting it are pleiotropic, by shifting the collective frequency of the many alleles pushing the trait in one direction while changing the frequency of each individual allele only slightly.

    On the other hand, this paper, which just came out on the (completely awesome) preprint server bioRxiv, reinforces the view that pleiotropic alleles will indeed be under stronger stabilising or purifying selection:

    Emily S Wong, Steve Chenoweth, Mark Blows and Joseph E Powell
    Evidence for stabilizing selection at pleiotropic loci for human complex traits

    So, the question of how pleiotropy affects directional selection is clearly a very active one in the field of quantitative genetics. While I admittedly overstated the assertion that pleiotropy will tend to act as a brake on directional selection in my original article, I think it is fair to say that it is at the very least a parameter that should be taken into account and that may differ across traits.

    There is, in addition, a more general way in which pleiotropy may be relevant.


    Intelligence as a general fitness indicator

    There is one other aspect to my argument that I did not have space to go into in the Guardian. It comes back to the idea that the trait that we recognise as intelligence does not reflect the functioning of some specific “cognition module” in the brain, but rather reflects overall “performance” of the brain. 

    If performance is determined by how well the brain is put together, then the load of mutations affecting neural development will be a crucial factor. Each such mutation could have specific effects on various developmental processes, resulting in a phenotype of brain organisation that is farther from the “wild-type” plan in some particular ways. But another factor is also at play. Each mutation is also expected to reduce the robustness of the developmental program in a much more general way.

    The developmental program has evolved to be robust to the inherent noisiness of molecular processes. All of the myriad feedback processes in development are aimed at ensuring that the outcome of development is within the species-typical range. Mutations do not only affect specific processes, they also degrade these general control relationships that normally ensure robustness. (Including the ability to buffer the effects of other mutations).  

    This means that intelligence may, at least partly, reflect overall mutational load in a very general sense. It may, in fact, be not so much a thing in itself, driven by variation in some dedicated genetic and neural modules, but rather a general fitness indicator. This fits with the observation that intelligence correlates with many aspects of general health and longevity – not because being intelligent makes you healthier, but because greater “genomic fitness” makes you both more intelligent and more healthy.

    If that is the case (and it is certainly an arguable point), then intelligence will get at least a partly free ride from natural selection. It will always be beneficial to keep the general mutation load as low as possible and it is hard to see why that would differ in different populations. Indeed, direct measurements of the number of derived non-synonymous variants (new mutations affecting the sequence of a protein) show no difference between large population groups.


    The improbability of differential selection on intelligence across continents

    For the reasons outlined above, it still seems to me that directional selection will have a harder time operating on intelligence than on many other traits. I contend that it will be difficult to differentially push intelligence upwards in any given population because new mutations will constantly be dragging it down, in every population. And regardless of selection acting directly on intelligence itself, it will always be good to try and keep the load of such mutations to a minimum, in every population. 

    I recognise that the conceptual framework presented here is unorthodox and people will no doubt take issue with various points. The overall point is that all this stuff is complex and much of it is unsettled. 

    At the very least, it is therefore important to consider the parameters of polygenicity, mutational target, pleiotropy, mutational load, and general relationship to fitness – as well as how they all interact – in the discussion of whether it is in fact likely – a priori – that systematic genetic differences in intelligence would arise between ancient population groups. (As opposed to simply asserting that any trait that is heritable is subject to directional selection, as if these other factors are irrelevant). 

    Whether you buy any of the arguments I have made here with regard to the genetics of intelligence, the idea that such pressures would align with continental divisions remains inherently implausible in itself, to my mind. These pressures would have to be consistent across entire continents, each comprising hugely diverse environments, but different across different continents, and also sustained over thousands of years, in order to end up with stable differences between ‘races’. (That is without even going into details of the arbitrariness of such divisions, as discussed here or the environmental and cultural factors which are known to affect intelligence and IQ scores and which clearly do differ in systematic ways between the relevant population groups).

  6. Why do lemons taste sour? The puzzle of innate qualia.


    A really nice recent paper reported the identification of a family of proteins that seem to act as sour taste receptors. They are expressed in our taste buds and allow us to detect the positively charged hydrogen ions produced by acidic substances. This is important because it lets us identify foods that are unripe or spoiled by bacterial growth, like sour milk. The discovery of these sour receptors is a big step forward – it adds to our understanding of how different kinds of chemicals are detected in the sensory neurons of the tongue and processed in the brain. But it leaves one really big question unanswered – why do sour things taste like that?

     Image credit: http://www.funcage.com/blog/babies-tasting-lemons-for-the-first-time/

    Why does eating a lemon produce that specific reaction – the scrunched up face, puckered lips, eyes squinting, head drawn back, eyebrows raised in surprise? This is an incredibly universal and apparently innate reaction – you can see it in unsuspecting babies eating lemons for the first time (to great comic effect). It’s not just humans, either – you can see something roughly similar in the way dogs react when they taste lemons (again, worth a look just for kicks and giggles). 

    That very specific response to that very specific stimulus is clearly wired into our nervous systems. Now, maybe that’s not that amazing – we have lots of reflexive responses to various stimuli that are pre-wired into our neural circuitry, like withdrawing your hand from a hot stimulus, for example. You could imagine programming that kind of thing into a robot.

    But I think we can say something much more profound – that the qualitative nature of the experience of eating lemons is somehow wired into our nervous systems. In fact, you might even say that the qualia associated with that experience are in effect encoded in our genomes, as this is where the instructions are to wire the nervous system in such a way that entails that response. 

    I’m on shaky ground, here, I know, making inferences about subjective states. However, I think we can say, first off, that even babies and dogs are having an experience when they eat a lemon and react that way. Given that the outward signs of that experience are so universal, I see no good reason to think that the subjective experience is likely to differ between individuals – it certainly seems more parsimonious to expect that it wouldn't. The experience that babies and dogs have will not be exactly like the one adult humans would have, of course, but it seems likely that there must be some shared perceptual primitive that is the basis for this experience.

    How could this possibly be established? How is the system wired to drive this kind of perceptual experience?

    Wired for taste

    The anatomical system for detecting and discriminating tastes is now quite well understood. Taste and smell are our two chemical senses and they have quite different jobs to do. Our sense of smell is all about detecting and discriminating between a huge range of different chemicals, each of which smells different to us. We have thousands of different odorant receptor proteins that do that job – each one specialised to bind to a different chemical. The taste or gustatory system is quite different – rather than discriminating, it instead lumps things together into just six or seven broad categories (that we know of): sweet, bitter, sour, salty, fatty, and savoury (and maybe carbonated).

    Image credit:  http://blogs.discovermagazine.com/scienceandfood/tag/taste-receptor/#.Wqo154IuA4w

    These categories refer in one sense to the chemical properties of the substances being tasted (the “tastants”) and, in another, to the nature of the perceptual experience they induce. Sugars, salts and fats are all types of molecules with specific chemical properties, which are detected by specialised proteins expressed in the taste buds. The taste of savouriness (or “umami”) is induced by the amino acid glutamate (familiar as the flavour enhancer monosodium glutamate), which is present in things like meats and cheeses. And things that taste sour are chemically acidic – they produce positively charged hydrogen ions, which is what sour receptors detect. (Since hydrogen atoms are made up of one proton and one electron, a positively charged hydrogen ion is simply a proton).


    Compounds that taste bitter are an exception – they do not necessarily share a specific chemical property or structure. They are, in fact, extremely chemically diverse – the one thing they have in common is that they may be toxic to us and should be avoided if we don’t want to poison ourselves. Animals have therefore evolved a large family of bitter taste receptor proteins, capable of detecting a wide range of such chemicals, but the taste system does not discriminate between them – that’s the job of the olfactory system. The taste system simply codes them all as “bitter”, sounding a general alarm that they should be avoided.

    The thing that links the chemical properties of these tastants to the perceptual experiences of sweetness, bitterness, etc., is the way the taste receptor neurons are wired into the brain. Each of our taste buds contains a dozen or more taste receptor neurons. Each one of these neurons expresses exclusively just one of the types of taste receptor proteins – sweet receptors, or bitter receptors, or sour receptors and so on.

    Image credit:  http://blogs.discovermagazine.com/scienceandfood/tag/taste-receptor/#.Wqo154IuA4w

    This exclusivity is the key, because it means each taste receptor neuron responds to only one class of tastants. So the brain just has to figure out which neurons were activated to know what kind of chemical was detected. That is accomplished by specifically wiring the different kinds of taste receptor neurons into different regions of the gustatory cortex in the brain, creating labelled lines for each taste.
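
    In software terms, a labelled-line code is about as simple a scheme as you could design: the information is carried by which line is active, not by any pattern of activity within it. Here is a minimal sketch (the field names and the clean one-to-one mapping are simplifications, of course):

        # Minimal sketch of labelled-line coding: the taste category is read out
        # from WHICH dedicated line (cortical field) is active, not from the
        # pattern of activity. Field names and mapping are simplified assumptions.
        RECEPTOR_TO_FIELD = {
            "sweet":  "sweet field",
            "bitter": "bitter field",
            "sour":   "sour field",
            "salty":  "salty field",
            "umami":  "umami field",
        }

        def decode(active_fields):
            """Return the taste(s) implied by the set of active cortical fields."""
            return [taste for taste, field in RECEPTOR_TO_FIELD.items()
                    if field in active_fields]

        # A lemon activates only the sour receptor neurons, so only the sour line fires:
        print(decode({"sour field"}))   # ['sour']

    The point of the sketch is just that decoding requires nothing more than knowing which dedicated channel fired – which is exactly what the anatomical segregation described below achieves.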

    A set of primary sensory neurons associated with the facial and glossopharyngeal cranial nerves innervate the taste receptor cells in the tongue and send another projection into the brainstem. Cells from that area of the brainstem project onwards to a specific part of the thalamus (a central subcortical structure which acts as a relay station for lots of types of sensory information). And cells from this ventral posterior nucleus of the thalamus project in turn to the primary gustatory cortex, which is located near the front of the brain.

    The important thing in all this wiring is that the different types of taste receptor neurons get selectively wired into distinct subregions of the primary gustatory cortex. Despite being intermingled and distributed across the surface of the tongue, the nerves carrying information for each taste get segregated as they project into the brain, so that, ultimately, the different tastes are mapped across the gustatory cortex. 

    This is what that looks like in mouse cortex:


    And in humans: 


    The mechanisms that direct this selective wiring are not fully understood but some of the molecules responsible for the very first wiring "decision" - which sensory neurons innervate which taste receptor neurons - have recently been discovered. They are members of the semaphorin gene family (my favourite!) that are used as connectivity labels in many areas of the developing nervous system. 

    In this way, the coding of tastes gets transformed from a distributed set of different cell types in the periphery into a segregated spatial map in the brain. Now the rest of the brain just has to know which part of the gustatory cortex is active to know which type of chemical was detected. That’s a nice, even elegant, system for coding chemical tastants in the brain. An observer looking at patterns of brain activity could probably even infer what taste someone was detecting.

    The problem is, there is no such observer inside the brain. The logic of the anatomy doesn’t really tell us anything about how specific patterns of neural activity in those areas give rise to conscious, subjective percepts. And it certainly doesn’t explain the qualitative nature of these percepts.
      
    Where is the sourness of lemons perceived?

    If we go back to our lemons, here’s what we know so far: the protons produced by citric acid are detected by these newly discovered specialised proteins, which are expressed by dedicated taste receptor neurons, which eventually send information, through converging connections, via several relays, to the “sour” domain of the gustatory cortex. That’s all important information, but we might ask: at what point along this pathway does perception actually occur?

    Clearly, if we just stimulated the tongue, but it wasn’t connected to the brain, you would not perceive anything (in the same way that you wouldn’t see anything if your optic nerves were destroyed, even though your retina might still be electrically responding to light). And if the gustatory regions of the brainstem or of the thalamus were activated, but not connected to the cortex, my guess is you wouldn't perceive anything either.

    Now, what about the primary gustatory cortex? If you stuck an electrode in there and gave it a zap, you might well induce a taste percept. Indeed, cross-activation of gustatory cortex might underlie certain forms of synaesthesia, where various stimuli in other sensory or conceptual modalities – like words or musical notes, for example – induce strong, involuntary taste percepts.

    But, again, if primary gustatory cortex were activated and not telling any other part of the brain about it, you probably wouldn’t perceive anything. In fact, you can see the problem that arises with this kind of thinking – you keep on passing the signal from one station to the next, but you never reach any area that could possibly do the job of perception all by itself. The mistake is to think that perception just entails feedforward propagation and processing of sensory stimuli from the periphery. This leads to an infinite regress. There is no final station in the brain that “does conscious perception”. 

    Instead, we should think of perception as a comparison between incoming sensory stimuli and an internal model of the world, which is instantiated in widespread activity patterns across the brain. This comparison of bottom-up signals with top-down expectations can lead to an updating of the model to accommodate new information, which (in some way that remains completely mysterious) may constitute the act of perception. This kind of process necessarily involves information flowing in both directions and neuronal activity reverberating through multiple cortical areas and subcortical regions, such as the thalamus.

    The point of all this is to enable the organism to infer what it is out in the world that is the source and explanation of the sensory stimuli it is receiving. In the case of taste, the inference is that there is something sour, or sweet, or bitter in your mouth and there are certain appropriate responses to those different stimuli.

    So, all that (vague as it is) gets us a little further in understanding how perception may happen. It probably requires all kinds of additional weird recursiveness (in the vein of Douglas Hofstadter’s Strange Loops) to get conscious awareness out of the physical system of the brain. But let’s say, for argument’s sake, that those kinds of structures and system dynamics exist. Now, one of our specialised sour receptor proteins has bound a proton and, through the magic of all this distributed and recursive circuitry, we have perceived “sour” and can infer there is an acidic thing in our mouth.

    But why are sour things sour?

    But we still haven’t explained why sour things taste like that. This goes beyond the fact that mildly sour things are pleasant and attractive, while extremely sour things are unpleasant or aversive. It is not just a matter of attaching a positive or negative valence for the organism to the sensations or the causal stimuli. Again, that kind of thing can be implemented in a robot, without it involving any qualitative experience.

    If we assume that tasting lemons really does involve a very particular, common, or even universal, qualitative experience, then there is something else (a very big something!) that we still have to explain. (And, actually, if we assume the opposite – that the quality of the experience may vary across individuals – that only gives us more to explain). Where does the quality of this experience come from?

    The chemical senses are unlike vision or touch or hearing. For those other senses, the stimuli have some properties that can be actively explored and that can be compared across the senses. They have statistical and physical properties and sensorimotor contingencies that in some way can inform the nature of the attendant visual or tactile or auditory percepts. Information from each sense can also be used to calibrate responses in the others, especially as babies and infants grow and explore their world – calibrating their visual system based first on things they can touch and only later on things at a distance. You can even think of vision as a skill – one that we get better at with experience.

    Some people hold that because that kind of “ecological information” is always available in the environment, we don’t even really have to have internal representations of these stimuli. I wouldn’t go that far myself, but it is certainly true that many aspects of the phenomenal experience of visual or tactile or auditory stimuli are experience-dependent, integrative, and responsive to active exploration (enactive and embodied).

    This really isn’t the case for the chemical senses. The phenomenology of smell or taste doesn’t map in any sensible way to the properties of the stimulus (unlike for audition or vision). There is no physical property of a proton that in any way relates to the properties of the sour percept it induces. And it’s also not experience-dependent or learned or calibrated against the other senses. When you smell or taste something – especially for the first time – there is no reference point, no perceptual anchor, no sensorimotor contingencies – it just is like that.

    Somehow, the quality of that experience is entailed in the wiring of the gustatory system, and the way it is linked to other areas of the brain. Which means, ultimately, that it is entailed in the program in the genome that directs that wiring. In a brain that is wired that way, detecting protons with your taste buds will just lead to that kind of subjective experience – the one that feels like that. Those qualia are somehow innate, and nothing we know (maybe nothing we can know) about the anatomy and physiology of the system can even approach providing an explanation for that.

    And that is doing my head in.

  7. Lessons for human genetics from genetic screens in model organisms


    Why did the axon cross the midline? That seems like a simple enough biological problem to solve. In the developing nervous system, especially in the anatomically simple spinal cord, some nerve cells send a slender nerve fibre (called an axon) across the midline of the nervous system to connect to cells on the other side. The projections of other neurons are restricted to the same side as their own cell bodies. The connections between the two sides are crucial in coordinating movement of the two sides of the body. But, more importantly for this discussion, this system is simple enough to be genetically tractable – at least it seems so.

    When I arrived as a graduate student in the lab of Corey Goodman at the University of California at Berkeley, his group had just carried out a genetic screen in fruit flies to try and understand how this developmental decision was controlled. Flies have an equivalent of a spinal cord, called the ventral nerve cord, and Corey and his colleagues had spent many years characterising the cells that make it up and the repeating patterns of simple circuits in each segment. In the developing embryo it is possible to identify specific neurons that either cross or don’t cross the midline. A collection of antibodies to various proteins expressed on the surface of neurons allowed robust visualisation of these projections and was used as a tool for screening for mutations that affected whether neurons project across the midline or not.

    The results illustrate some general points about the logic of genetic screens, which, I think, are instructive for our understanding of the genetic architecture of human traits and the types of genes that may be implicated in them.

    The design of the screen was pretty straightforward: generate many thousands of mutant lines of flies (each carrying multiple random mutations) and examine the embryos of each line by staining them with an antibody (BP102) that allowed visualisation of the full axonal scaffold in each embryo. This antibody highlights a stereotyped ladder-like pattern of axonal projections in the ventral nerve cord – one big tract extending longitudinally on each side of the midline and two rungs of the ladder in each segment – the “commissures” of axons projecting across the midline. The phenotypes they were looking for were simply any deviation from this normal pattern, especially ones where the commissures were affected.

    The logic of this was simple: it was known, in a general sense, that the axonal projections of developing neurons are guided by molecular cues in their environment, which they detect with specialised receptor proteins expressed on their surface. Some of these interactions are attractive and some are repulsive. Given the differential behaviour of specific neurons with respect to the midline, it seemed likely that some neurons were being attracted to it and others repelled by it. There must thus exist some genes encoding proteins whose job it was to direct these processes – guidance cues and receptors. At the time, hardly any such molecules were known in any system and the hope was that this genetic screen would turn up mutations in just those kinds of genes (by screening for alterations in the anatomical structure they produce).

    And it did. There were indeed mutations found in genes encoding guidance cues (like Slit) and receptors (like Roundabout (Robo) and Frazzled) and in a protein involved in dynamically regulating guidance receptor expression (Commissureless). These were hugely important for the field – especially as those genes are highly conserved and equally important in wiring the human nervous system. 



    But the point I want to make is not about them. It’s about all the other mutations that caused defects in the axonal scaffold that were not in genes encoding guidance cues or receptors. There were mutations that affected early nerve cord patterning, the production of neurons, the cellular identity of specific neurons, the specification of cells at the midline that produce the attractive and repulsive cues, the ability of neurons to extend an axon, and on and on. Defects in any of these diverse processes could indirectly lead to an aberrant pattern of axonal projections.

    Corey and his colleagues used a series of other antibodies to further characterise each mutant line that showed a defect in the axonal scaffold, in order to exclude those with defects in all these other processes. Such lines comprised the majority. Only a handful remained in which the mutated gene encoded a protein with a direct function in axonal guidance, and it took a huge amount of work and a very detailed understanding of the system to distinguish them from the much larger set of genes that only indirectly affected the axonal scaffold.


    Genetic screens in humans

    Now, it may be becoming apparent where I’m going with this, in relation to human genetics. The midline screen in flies involved “saturation mutagenesis” – creating enough mutant lines such that every gene that could be mutated to cause a defect in the axonal scaffold would be mutated, several times over. Experimentally, this is achieved by feeding male flies a chemical mutagen that induces dozens of mutations per sperm and then establishing thousands of different mutant lines from their offspring.

    In humans, all this work has been done for us. The human population is at saturation mutagenesis. Every time sperm or eggs are generated, new mutations are introduced. Not due to any chemical or environmental mutagen – just because it’s not easy to copy 3 billion letters of DNA with complete accuracy, and every time that is done a few mistakes creep past the quality control machinery. Because of the recent explosion in the size of the human population, we can be sure that every base in the genome is mutated in multiple people somewhere on the planet (at least every one that is still compatible with life).
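    A rough back-of-envelope calculation shows why. The mutation rate and population size below are approximate figures assumed for illustration, not precise measurements.

        # Back-of-envelope arithmetic behind the "saturation mutagenesis" claim.
        # All figures are rough assumptions.
        genome_size = 3e9          # haploid bases
        mutation_rate = 1.2e-8     # new mutations per base per generation (approximate)
        population = 8e9           # rough current human population

        de_novo_per_person = genome_size * mutation_rate       # a few dozen per haploid genome
        total_new_mutations = de_novo_per_person * population
        mean_carriers_per_base = total_new_mutations / genome_size

        print(f"~{de_novo_per_person:.0f} new mutations per haploid genome")
        print(f"~{mean_carriers_per_base:.0f} people carrying a brand-new mutation at any given base")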

    So, when we investigate the genetics of any given phenotype in humans, we are really doing a genetic screen – asking what genes can be mutated to cause a particular phenotype (often a clinical disorder) or affect a particular trait. And the same logic applies as we saw above: some of the mutations or genetic variants that we find affecting a trait will be in genes that encode proteins directly involved in the systems underlying that trait. But many more will have only very indirect effects on the phenotype – often so distant that no real functional relationship holds between the affected system and the cellular role of the encoded protein.

    The proportion of each class depends hugely on what kind of phenotype we are looking at and in how much detail we can characterise it.


    The genetics of human brain development

    If we are looking at tightly defined neurological conditions, where the phenotype is quite specific, then we may expect a pretty direct relationship between the function of the gene and the effect when it is mutated. Microcephaly, for example, is characterised by a smaller than normal head and brain. Neuroimaging shows this is mostly due to a smaller neocortex. And, sure enough, a subset of the genes identified as mutated in this condition encode proteins that are directly involved in controlling neuronal proliferation in the developing neocortex.  

    However, there are over 1200 entries for ‘microcephaly’ in the OMIM (Online Mendelian Inheritance in Man) database, and the vast majority of implicated genes do not encode proteins that are directly involved in neurogenesis. They affect hundreds of other kinds of processes, which only indirectly impair neurogenesis when compromised. This is directly analogous to the situation in the midline screen in flies. Even for a condition where the phenotype is directly anatomically observable and the underlying cellular processes reasonably well defined, the vast majority of mutations affecting it do so indirectly.

    Now consider the nature of that relationship for phenotypes that are much less well defined and more emergent, like psychological traits or psychiatric disorders. Let’s take a personality trait like impulsivity as an example. A lot of pharmacological and neural systems work has implicated the serotonin signaling system in this trait. And severe mutations in genes encoding components of this system (such as enzymes involved in making or breaking down serotonin and a number of different serotonin receptors) have been shown to affect impulsivity in both rodents and humans, often manifesting in risk-taking and physically aggressive behaviour.  

    Yet genome-wide association studies of risk-taking behaviour have not landed on variants in genes encoding components of the serotonergic pathway. A recent one identified associated common variants in 116 genes. These were enriched for genes expressed in the brain and involved in developmental pathways pretty generically, but did not include many previous candidate genes directly involved in serotonin (or dopamine) signaling.

    Based on what we saw in flies, this should not be a surprise. Even if all of the phenotypic variation in risk-taking involved serotonergic neural pathways (an admittedly simplistic hypothesis), we should not expect all of the genetic variation affecting the trait to be directly impacting serotonin-related biochemical pathways. In fact, we shouldn’t even expect most of the genetic variation affecting the trait to do so – quite the opposite. This is just a statistical corollary of the fact that there are thousands of times more ways, genetically speaking, to mess up that system indirectly than directly. 
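    A cartoon simulation makes that corollary concrete. The gene counts below are invented; the only point is that trait-affecting mutations end up distributed across gene classes in proportion to how many genes are in each class.

        # Cartoon of the direct-vs-indirect point: if few genes act directly on a
        # pathway but thousands can perturb the trait indirectly, most trait-affecting
        # mutations will be indirect hits. Numbers are made up for illustration.
        import random

        direct_genes = 20        # e.g. genes encoding components of one signalling pathway
        indirect_genes = 10000   # genes that can disturb the trait via development, metabolism, etc.

        def classify_hits(n_trait_affecting_mutations=500):
            """Assign each trait-affecting mutation to a direct or indirect gene,
            with probability proportional to the number of genes in each class."""
            p_direct = direct_genes / (direct_genes + indirect_genes)
            direct = sum(random.random() < p_direct for _ in range(n_trait_affecting_mutations))
            return direct, n_trait_affecting_mutations - direct

        print(classify_hits())  # typically ~1 direct hit for every few hundred indirect ones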

    A similar situation holds for psychiatric disorders, such as schizophrenia. This condition is quite highly heritable, meaning the majority of the variation in risk is genetic in origin. The underlying biology of the symptoms of the condition is not well understood, but alterations to dopaminergic signalling are a likely common feature in psychosis. However, of over a hundred genes implicated by common genetic variants, only one is involved directly in dopamine signaling. And of the dozens of genes with identified rare, high-risk mutations, none directly encode components of the dopamine pathway.

    Instead, both sets of genes are enriched for ones with neurodevelopmental functions, defined pretty broadly. The common endpoint may involve dysfunction of dopaminergic neural pathways (again, this is simplistic), but the genetic origins are much more diverse and seem to be centred on how the brain develops. These are not “genes for schizophrenia”. They are not genes for working memory, or for veridical perception, or for not being paranoid. They are certainly not genes for dopaminergic signaling. They are genes for building a human brain. 


    The omnigenic model

    A recent paper from Jonathan Pritchard’s group has considered the genetic architecture of a number of complex human traits and disorders, including schizophrenia, and come to what seems like a very surprising and somewhat disconcerting conclusion – that genetic variation in perhaps all of the genes expressed in the relevant tissue can contribute to phenotypic variation in a trait or disorder across the population.

    They were looking specifically at the contribution of common genetic variants to these conditions. For schizophrenia, such variants (called SNPs) collectively explain a proportion of overall variance in risk across the population (maybe 25%). Individually, the effects on risk of one version or another at any given SNP are tiny – almost negligible, in fact – but they can be detected statistically by looking at the frequencies of each version in people with the disease versus people without.
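    To get a feel for how such tiny effects can be detected at all, here is a minimal sketch of the underlying comparison – allele counts in cases versus controls, assessed with a chi-squared test. The sample sizes and allele frequencies are invented; real studies use more sophisticated models, but the basic logic is the same.

        # Sketch of a single-SNP case/control association test. A 1% difference in
        # allele frequency is undetectable in small samples but obvious at this size.
        # Frequencies and sample sizes are invented for illustration.
        import numpy as np
        from scipy.stats import chi2_contingency

        n_cases, n_controls = 50000, 50000        # individuals (2N chromosomes each)
        freq_cases, freq_controls = 0.41, 0.40    # risk-allele frequencies

        table = np.array([
            [freq_cases * 2 * n_cases,       (1 - freq_cases) * 2 * n_cases],
            [freq_controls * 2 * n_controls, (1 - freq_controls) * 2 * n_controls],
        ])

        chi2, p, dof, _ = chi2_contingency(table)
        print(f"chi2 = {chi2:.1f}, p = {p:.1e}")  # a tiny frequency difference, clearly detectable here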

    Given large enough samples, one can look through the pattern of SNP frequencies to see which kinds of SNPs tend to show a statistical association. The hope is that some particular biochemical pathways or cellular processes will be implicated. As mentioned above, for schizophrenia, the main gene sets that show enrichment in associated SNPs are those involved in neural development. But that is by no means exclusive.

    What Pritchard and his colleagues showed was that some signal of association could be found for effectively every SNP that was associated with a gene that is expressed in the brain. Moreover, while SNPs associated with genes that are specifically expressed in the brain explain more variance on a one-by-one basis, SNPs in more broadly expressed genes explain more variance collectively, because there are so many more of them. Again, indirect effects outnumber direct ones.
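    The arithmetic behind that last point is worth spelling out. The numbers below are invented, but they show how a very large class of tiny effects can collectively outweigh a smaller class of bigger ones.

        # Invented numbers illustrating collective vs per-SNP variance explained.
        brain_specific    = {"n_snps": 300,   "variance_per_snp": 2e-4}
        broadly_expressed = {"n_snps": 50000, "variance_per_snp": 3e-6}

        for label, cls in (("brain-specific", brain_specific),
                           ("broadly expressed", broadly_expressed)):
            total = cls["n_snps"] * cls["variance_per_snp"]
            print(f"{label:>18}: {cls['variance_per_snp']:.0e} each, {total:.3f} of variance collectively")

    With those made-up numbers, the two classes together account for roughly the 25% figure mentioned above, with the broadly expressed class contributing most of it.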


    Pleiotropy is the norm

    Not only will most of the mutational effects be indirect, they will also be largely non-specific. Any given mutation that happens to indirectly affect something like serotonin signaling (e.g., by affecting neural development) will probably be indirectly affecting lots of other things too. (In genetic parlance, its effects are pleiotropic). And if you look across a bunch of mutations that affect serotonin signaling in the brain, their other effects will likely be quite diverse.



    This will be true for any given phenotype that you screen for. So, just because a construct like impulsivity is heritable does not mean there are “genes for impulsivity”. It certainly does not mean there is some dedicated genetic module of the kind often proposed in evolutionary psychology models. The apparent selectivity is an illusion created by viewing the effects of pleiotropic genetic variants from the perspective of a single trait at a time.


    Is there any point in doing genetics then?

    The main rationale for doing genetic screens in model organisms is to elucidate the molecular basis of some biological process. This approach is incredibly powerful and has been extraordinarily successful. However, it depends on a detailed understanding of the processes being probed, and, as we saw in the midline screen, some secondary means to distinguish mutations in genes directly involved in a process of interest from the much larger set of mutations that affect it indirectly.

    For many human traits or disorders, especially ones involving the human mind, that detailed understanding is lacking. Oftentimes the phenotype is simply a word on a form – like “schizophrenia”. Moreover, while in model organisms we can simply screen out the indirect and non-specific mutations and focus on the ones directly involved in the processes of interest, we don’t have that luxury in humans. The indirect and non-specific ones will contribute most of the variance in risk.

    At one level, that’s okay – just identifying these genetic risk factors can be tremendously useful in a clinical setting. But it does make getting at the underlying biology much more challenging. Nature is under no obligation to make things simple for us. It is going to take a hell of a lot more work after the initial discovery of genetic variants to unravel the biology of complex traits and disorders.


    References

    Seeger M, Tear G, Ferres-Marco D, Goodman CS. Mutations affecting growth cone guidance in Drosophila: genes necessary for guidance toward or away from the midline. Neuron. 1993 Mar;10(3):409-26.

    Boyle EA, Li YI, Pritchard JK. An Expanded View of Complex Traits: From Polygenic to Omnigenic. Cell. 2017 Jun 15;169(7):1177-1186.
  8. Panpsychism – not even wrong. Or is it?

    I had an interesting exchange with philosopher Philip Goff on Twitter (@Philip_Goff) recently, prompted by his article: “Panpsychism is crazy, but it’s also most probably true”, published in Aeon. There he lays out a series of arguments that he claims make it likely that all pieces of matter possess some degree of consciousness.

    “According to panpsychism, the smallest bits of matter – things such as electrons and quarks – have very basic kinds of experience; an electron has an inner life.”


    The idea is that consciousness may be not solely a property of highly complex systems, such as ourselves, but a fundamental property of every piece of matter in the universe, like mass. Some bits of matter would have more of it than others, but all – down to the level of elementary particles like electrons and quarks – would have some kind of subjective experience. I’d like to say there’s more to it than that, that it involves a whole fleshed out framework that explains all manner of phenomena in a new way, but actually that’s pretty much it.

    It’s easy to make fun of panpsychism – so let’s begin. On the face of it, such claims are absurd. However, the idea has been around for millennia, with many prominent supporters – Plato, Spinoza, William James, Alfred North Whitehead, for example – and it is experiencing a recent resurgence of sorts. But if you look more deeply, you can see, or at least I will argue, that panpsychism has so little real content that it’s questionable whether it rises even to the level of a hypothesis, never mind a theory.

    It is instructive, however, to follow the logic of the arguments put forward, if only for illustrating what I contend is exactly the wrong way to think about consciousness – as an elemental property of bits of matter, as opposed to an emergent property of an organised dynamic system that is made of bits of matter.

    Strangely, most of the arguments that Goff offers in support of panpsychism centre more on broad issues in the philosophy of science, than on any advantages of the idea itself as an explanatory framework. He begins by saying that we should not reject panpsychism out of hand just because it is counter-intuitive. Fair enough. As he points out, some other theories are counter-intuitive, like special relativity or wave-particle duality. I would certainly accept that, though I definitely would not include the idea that we are descended from apes in that list. However, the fact that some scientific theories are counter-intuitive does not make it a general selling point in favour of plausibility.

    More specifically, Goff lays out three premises:

    1. Physical science tells us nothing about the “intrinsic nature” of matter, and hence there's a gap in our picture of physical reality that must be filled by broader theorising. By this he means that physics defines the properties of things by their dispositions – by what they do, rather than what they are.

    This assertion is fine with me, as far as it goes, though it’s basically just an expression of our ignorance – never a strong starting point for an argument. More crucially, the term “intrinsic nature” is doing a lot of heavy lifting for something that is left so vague here. It’s not really clear what it means, nor is it clear that it’s actually a thing. As we’ll see below, there’s also some sleight of hand involved as we go from thinking of the intrinsic nature of elementary particles to the intrinsic nature of organised systems of matter. Should we expect those things to be in any way commensurate? Using the same term simply makes the assumption that we should. (That assumption, however, is also the conclusion – this is the first circular argument).

    2. The only thing we know about the intrinsic nature of matter is that some of it – the stuff in brains – has a consciousness-involving nature.

    Here we can see the easy slide from the microscopic to the macroscopic – from elements of matter to organised systems of matter. If we simply use the term “intrinsic nature” to refer to both, and we simply refer to human beings as “matter”, without reference to the crucial fact that it is organised matter, then this move almost makes sense – it implies there is no reason to think there is anything qualitatively different at the two extremes of complexity. (Again, that is both an assumption and a conclusion).

    3. The simplest theory consistent with that data is panpsychism. That is, if we know some of the matter in the universe has an intrinsic nature that entails consciousness, then it is parsimonious to assume that all matter does – at least more parsimonious than assuming some does and some doesn’t and then having to explain the difference.

    There is an appeal here to Occam’s Razor – the well-accepted scientific principle that if you have two theories that both explain something, the simpler one is usually better. (Both more heuristically useful and more likely to be correct). The key to wielding this razor, however, is that the two theories in question actually explain something. A simple statement that just declares consciousness to be a fundamental property of all matter does not explain anything. It’s merely a cop-out. It does not advance our understanding in any way. It does not constitute a theory at all, never mind one that explains what needs explaining.

    As H. L. Mencken almost said: “For every complex problem, there is an answer that is clear, simple, and wrong.”

    Again, you can see in premise 3 another circular argument. The fact that only some matter (i.e., that constituting human beings, and probably many other kinds of animals) has the property of consciousness is exactly the thing we are trying to explain. Simply saying everything is conscious to some extent – that it is an intrinsic property of bits of matter, like mass – assumes the thing the theory is trying to conclude.

    Now, you might counter by saying that I am the one making circular assumptions – if I simply insist that only certain complex systems are conscious, then of course I rule out panpsychism before giving it a chance. So, let’s see if it actually has anything to recommend it – does it explain anything or predict anything that would make us want to take it seriously?

    Goff claims (here) that panpsychism “solves the hard problem of consciousness” – the mystery of how mere physical matter can give rise to subjective experience. This would be pretty remarkable, if true, given that is one of the deepest mysteries left for science to even begin to resolve. The “solution”, however, is simply to assert that consciousness is a fundamental property of all matter. There’s no real reason to think that is the case – certainly no evidence that it is. Nothing follows from the assertion. It makes no predictions, testable or otherwise. It doesn’t explain the nature of subjective experience that a rock may be having or how that property comes to be. The hard problem remains just as hard – harder even, as now we have to ask it about electrons and photons too.  

    Indeed, you can make exactly the same series of arguments with respect to “life” instead of “consciousness”, highlighting the absurdity not just of the claim, but of the logic:

    1.    We don’t understand the intrinsic nature of matter.
    2.    Some forms of matter are alive.
    3.    It is therefore parsimonious to conclude that all forms of matter are a bit alive.

    Again, that’s a simple statement, but it’s not a simple theory, because it’s not a theory at all.

    If you were to counter that “life” is too nebulous a concept for this comparison to be apt, I would argue that, though the boundary between living and non-living is fuzzy at certain points, thinking about the boundary between living and dead makes it pretty clear that being alive is a real, definable property of some things, under some conditions, and not others.

    More broadly, the comparison with life highlights a huge unstated premise – the hidden assumption – that underlies this chain of logic. It is that the properties of organised, complex, dynamic systems derive solely from the properties of their components (or at least may do so). Though Goff refers to the theory as “non-reductive”, I can’t think of anything more reductive than claiming that the most crucial property of what may be the most complex system we know of – the human brain – inheres in its simplest components.

    The answer to the mystery of consciousness – and it remains very much a mystery – surely lies in a nonreductive physicalism that recognises that complex, even seemingly miraculous properties (like consciousness, or life itself) can and do emerge from the dynamic interactions of matter when it is organised in certain highly complex ways, not from the bits of matter themselves. In this view, consciousness is a property of a process (or of many interacting processes), not of a substance.

    So, after due consideration (maybe more than it is due), I will stick by my assessment, that panpsychism is not even wrong. But I remain willing to be convinced that it is.


  9. “Like father, like son”: Testing folk beliefs about heredity in the arena of assisted reproduction.


    “The apple doesn’t fall far from the tree”.
    “Chip off the old block”.
    “Cut from the same cloth”.
    “Black cat, black kitten”.
    “Chickens don’t make ducks”.
    “He didn’t lick it off the stones”.
    “It’s not from the wind she got it”.
    “She comes by it honestly”.

    Every culture seems to have its own phrases describing the power of heredity – not just for physical traits, but also for behavioural ones. (Those last three are peculiar to Ireland, I think). This folk wisdom, accumulated from centuries of observation of human behaviour, seems to reflect a widespread belief that genetic effects on behaviour and personality are strong, indeed dominant over effects of upbringing.

    (Image credit: https://schoolworkhelper.net/essay-nature-vs-nurture-or-both/)

    Of course, folk wisdom can be wrong. And old folk sayings may not reflect current thinking – perhaps people’s opinions on the subject have changed. Indeed, if you were to take academic discourse on the subject as a barometer of views of the general public, you might think that many people ascribe no power to nature at all and most or all of it to nurture instead. Those debates do not remain within the walls of the academy – we see them played out in social policies on education, early intervention, in the criminal justice system, in psychiatric practice and other arenas.

    So, do those folk sayings accurately reflect public opinion on the power of nature over nurture? Well, the easiest way to find out would be to just ask a bunch of people. However, while there are lots of surveys of people’s attitudes towards genetic testing or screening (such as here and here), I was unable to find any on more general beliefs about heredity, especially of psychological or behavioural traits. (If dear readers know of any, please let me know).

    However, there is one area where these beliefs are directly tested, which is in assisted reproduction, especially where it involves sperm or egg donors. In many cases, couples can choose sperm or egg donors on the basis of any number of characteristics, which prominently include things like intelligence, educational attainment, musical talent, and general personality traits, in addition to physical characteristics like height, body-mass index, athleticism, and general health. Clearly, an interest in the psychological traits of potential donors reflects an underlying belief in the heritability of such traits.

    This was thrown into stark relief by a case from 2016 that received a lot of media attention. A couple had selected a sperm donor on the basis of a profile in which he claimed to have an IQ of 160, a bachelor’s degree in neuroscience, a master’s in artificial intelligence and to be studying for a PhD in neural engineering. They found out later that he was a college dropout, with a criminal record, having served time in prison for burglary, and that he had been diagnosed with schizophrenia and narcissistic personality disorder. (More details of his story emerged later but the important thing for this discussion is this initial presentation).

    This donor’s sperm had been used to father 36 babies. The couple in question had selected him because his high intelligence and even his scientific interests matched one of the couple. On learning of his actual profile, they described it as “A dream turned nightmare in an instant”. They went on to say: “In hindsight, a hitchhiker on the side of the road would have been a far more responsible option for conceiving a child.” 

    This story received a lot of media attention, in newspaper articles, online magazines, and on radio and television. All of these stories played up the horror of the couple in question and the outrageousness of the deceit that had been perpetrated on them. What struck me at the time, though, in almost all of the coverage I saw, was that no one questioned whether this couple, and others who had children using this donor’s sperm, were right to be horrified. It was taken for granted that the traits of the donor would indeed have a significant impact on the traits of the offspring.

    None of the commentators argued that mental illnesses like schizophrenia were really caused by cold parenting, childhood experiences, or environmental factors. The common wisdom was clearly that mental illness runs in families and that having a father with schizophrenia greatly increased the risk of this highly debilitating and often devastating disorder in the offspring. (And, of course, this is absolutely true).

    Similarly, no one claimed that all children are born with equal intellectual potential, that eventual differences in intelligence solely reflect differences in education or societal factors, or that IQ tests only measure how good you are at taking IQ tests. The understanding that intelligence is real, important, and substantially heritable was so implicit that it never came up. (And, of course, this is true too, at least that genetic differences make a major contribution to relative differences in intelligence between people – though education and other factors affect the absolute levels that any individual attains).

    When talking about these things in the abstract or in general terms across the population, many people may espouse a view that weights nurture more heavily than nature. But when the rubber meets the road, when people are making choices that they feel may directly and possibly profoundly affect their children, they clearly place a heavy emphasis on the power of heredity. And most neutral commentators seem to think that view is so reasonable that it doesn’t even occur to them to comment on it, never mind question it.

    People looking for sperm or egg donors clearly prefer those with certain traits and without others. Now, you might say they’re just hedging their bets. If there is an option to choose donors with traits deemed more desirable, then they might as well, whether they strongly believe they are heritable or not. If they’re not really heritable, there’s no harm done. But it goes beyond just exercising that choice – people are willing to pay extra for donors with more desirable traits. (I’m not arguing here that some traits should be seen as more or less desirable – just that the “consumers” in this scenario see them as such).

    Gamete donation is big business. This is especially true for egg donors, because eggs are much more difficult and expensive to collect than sperm and there are both far fewer willing donors and far fewer actual eggs. This is a strangely unregulated market, especially in the United States. The American Society for Reproductive Medicine has ethical guidelines that propose a cap on how much women should be paid for their eggs ($10,000), but most fertility agencies operate outside the medical establishment and many ignore this guideline. The legality of the ASRM’s position has been challenged by a number of women, given the overall amounts of money that such agencies can make from clients and the very high value that some women can demand for their eggs on the open market. (See here for a personal story of how this kind of interaction can play out).  

    The willingness of prospective parents to pay for desirable traits indicates both the value attached to such traits as well as the confidence attached to the idea that they are really heritable. If people didn’t think a trait like intelligence was largely heritable, they wouldn’t pay for more intelligent donors, no matter how much they value the trait. Clearly, they think it is, as there are agencies specifically marketing educational attainment as one of the main selling points of the egg donors from whom prospective parents can choose. Musical and artistic abilities are also much sought after and many descriptors of donors include all kinds of other personality traits and lifestyle descriptors that seem to be of interest to clients.

    For example, the Egg Donor Program markets itself as an exclusive club where selected donors are referred to as “Premier Donor Angels: beautiful, accomplished, highly educated” (www.eggdonation.com).


    The Donor Concierge agency promises intelligent donors, students in or graduates of Ivy League universities, with a minimum grade point average.


    So, if we take donor selection as the ultimate test-bed of people’s true beliefs on heredity – where they literally put their money where their mouth is – it certainly seems that the folk sayings referred to above do accurately reflect beliefs about psychological traits. (At least among the admittedly non-random set of people who undertake this kind of assisted reproduction). Beliefs about the heritability of intelligence or of mental illness are, in fact, well founded and match our current scientific understanding. Beliefs about other psychological traits probably substantially overestimate the importance of genetic effects.

    One final note: it seems inevitable that the market in egg and sperm donation will soon incorporate molecular genetic profiling and trait prediction. The direct to consumer genomics company 23andMe had a patent granted in 2008 for what they called their “Inheritance Calculator”, which would enable people to predict traits of offspring from genotyped parents or donors. The backlash against the idea led the company to state that they had “no plans to pursue the idea”.

    However, another company, GenePeeks, Inc., was established precisely for the purpose of molecularly genotyping potential donors, though it is currently only aimed at predicting possible rare diseases in offspring between clients and donors. It seems a small step to include other non-medical traits of interest, however, especially if they can be accurately predicted from polygenic profiles. Currently, things like intelligence cannot be accurately predicted for an individual, but it may be possible to generate comparative scores that would influence donor selection. The new company Genomic Prediction, Inc., aims to use polygenic profiles to predict risk for complex disorders – the same approach could certainly be used for many non-medical traits.
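    For what it’s worth, the basic computation behind such comparative scores is simple – a weighted sum of effect-allele counts across many variants, with the weights estimated in some reference genome-wide association study. The SNP identifiers and weights below are placeholders, not real data; actual scores use thousands to millions of variants.

        # Minimal sketch of a polygenic score: sum of (effect size x allele count).
        # SNP ids and weights are placeholders, not real GWAS estimates.
        gwas_weights = {"rs0001": 0.021, "rs0002": -0.013, "rs0003": 0.008}

        def polygenic_score(genotype):
            """genotype maps SNP id -> number of effect alleles carried (0, 1 or 2)."""
            return sum(w * genotype.get(snp, 0) for snp, w in gwas_weights.items())

        donor_a = {"rs0001": 2, "rs0002": 0, "rs0003": 1}
        donor_b = {"rs0001": 0, "rs0002": 2, "rs0003": 2}
        print(polygenic_score(donor_a), polygenic_score(donor_b))  # only the ranking is meaningful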

    Whatever one thinks about the ethics of this kind of consumer-driven eugenics, it clearly is only going to increase. The lay understanding of genetics and heredity is thus already a factor in the market of assisted reproduction and seems likely to grow in importance over the coming years. 

  10. What are the Laws of Biology?


    The reductionist perspective on biology is that it all boils down to physics eventually. That anything that is happening in a living organism can be fully accounted for by an explanation at the level of matter in motion – atoms and molecules moving, exerting forces on each other, bumping into each other, exchanging energy with each other. And, from one vantage point, that is absolutely true – there’s no magic in there, no mystical vital essence – it’s clearly all physical stuff controlled by physical laws.

    But that perspective does not provide a sufficient explanation of life. While living things obey the laws of physics, one cannot deduce either their existence or their behaviour from those laws alone. There are some other factors at work – higher-order principles of design and architecture of complex systems, especially ones that are either designed or evolved to produce purposeful behaviour. Living systems are for something – ultimately, they are for replicating themselves, but they have lots of subsystems for the various functions required to achieve that goal. (If “for something” sounds too anthropomorphic or teleological, we can at least say that they “do something”).

    Much of biology is concerned with working out the details of all those subsystems, but we rarely discuss the more abstract principles by which they operate. We live down in the details and we drag students down there with us. We may hope that general principles will emerge from these studies, and to a certain extent they do. But it feels like we are often groping for the bigger picture – always trying to build it up from the components and processes we happen to have identified in some specific area, rather than approaching it in any principled fashion or basing it on any more general foundation. 

    So, what are these principles? Do they even exist? Can we say anything general about how life works? Is there any theoretical framework to guide the interpretation of all these details?

    Well, of course, the very bedrock of biology is the theory of evolution by natural selection. That is essentially a simple algorithm: take a population of individuals, select the fittest (by whatever criteria are relevant) and allow them to breed, add more variation in the process, and repeat. And repeat. And repeat. The important thing about this process is it builds functionality from randomness by incorporating a ratchet-like mechanism. Every generation keeps the good (random) stuff from the last one and builds on it. In this way, evolution progressively incorporates design into living things – not through a conscious, forward-looking process, but retrospectively, by keeping the designs that work (for whatever the organism needs to do to survive and reproduce) and then allowing a search for further improvements.
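    That algorithm is simple enough to write down in a few lines. Here is a stripped-down toy version in Python – bit strings whose “fitness” is just the number of 1s – included only to show the ratchet in action, not to capture anything about real biology.

        # Toy evolutionary loop: select the fittest, keep them, breed mutated copies,
        # repeat. Because survivors are retained unchanged, the best fitness never
        # slips backwards - the ratchet described above.
        import random

        GENOME_LEN, POP_SIZE, MUTATION_RATE = 50, 100, 0.01

        def mutate(genome):
            return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

        population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

        for generation in range(101):
            population.sort(key=sum, reverse=True)        # fitness = number of 1s
            survivors = population[:POP_SIZE // 2]        # select the fittest half
            offspring = [mutate(random.choice(survivors)) for _ in range(POP_SIZE // 2)]
            population = survivors + offspring            # keep the good stuff, add variation
            if generation % 20 == 0:
                print(generation, sum(population[0]))     # best fitness climbs steadily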

    But that search space is not infinite – or at least, only a very small subsection of the possible search space is actually explored. I often use a quote from computer scientist Gerald Weinberg, who said that: “Things are the way they are because they got that way”. It nicely captures the idea that evolution is a series of frozen accidents and that understanding the way living systems are put together requires an evolutionary perspective. That’s true, but it misses a crucial point: sometimes things are the way they are because that’s the only way that works.

    Natural selection can explain how complex and purposeful systems evolve but by itself it doesn’t explain why they are the way they are, and not some other way. That comes down to engineering. If you want a system to do X, there is usually a limited set of ways in which that can be achieved. These often involve quite abstract principles that can be implemented in all kinds of different systems, biological or designed. 


    Systems principles

    Systems biology is the study of those kinds of principles in living organisms – the analysis of circuits and networks of genes, proteins, or cells, from an engineering design perspective. This approach shifts the focus from the flux of energy and matter to emphasise instead the flow of information and the computations that are performed on it, which enable a given circuit or network to perform its function.

    In any complex network with large numbers of components there is an effectively infinite number of ways in which those components could interact. This is obviously true at the global level, but even when talking about just two or three components at a time, there are many possible permutations for how they can affect each other. For a network of three transcription factors, for example, A could activate B, but repress C; or A could activate B and together they could repress C; C could be repressed by either A OR B or only when both A AND B are present; C could feed back to inactivate A, etc., etc. You can see there is a huge number of possible arrangements.

    The core finding of systems biology is that only a very small subset of possible network motifs is actually used and that these motifs recur in all kinds of different systems, from transcriptional to biochemical to neural networks. This is because only those arrangements of interactions effectively perform some useful operation, which underlies some necessary function at a cellular or organismal level. There are different arrangements for input summation, input comparison, integration over time, high-pass or low-pass filtering, negative auto-regulation, coincidence detection, periodic oscillation, bistability, rapid onset response, rapid offset response, turning a graded signal into a sharp pulse or boundary, and so on, and so on.

    Figure: A few common network motifs. Each of these will have a different input-output relationship. (Source)


    These are all familiar concepts and designs in engineering and computing, with well-known properties. In living organisms there is one other general property that the designs must satisfy: robustness. They have to work with noisy components, at a scale that’s highly susceptible to thermal noise and environmental perturbations. Of the subset of designs that perform some operation, only a much smaller subset will do it robustly enough to be useful in a living organism. That is, they can still perform their particular functions in the face of noisy or fluctuating inputs or variation in the number of components constituting the elements of the network itself.
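    As a concrete example, here is a minimal simulation (in Python) of one such motif – negative auto-regulation – compared with simple unregulated production. Negative auto-regulation is also known to speed up response times, but this sketch only compares steady-state levels; the parameter values are arbitrary.

        # Negative auto-regulation vs simple production/degradation, integrated with
        # a crude Euler loop. Doubling the production rate doubles the simple system's
        # steady state, but the auto-repressed level rises only ~1.5-fold - a buffer
        # against fluctuations in production. Parameters are arbitrary.
        def steady_state(production, degradation=1.0, K=0.1, autoregulated=True,
                         dt=0.01, steps=2000):
            x = 0.0
            for _ in range(steps):
                synthesis = production / (1 + x / K) if autoregulated else production
                x += (synthesis - degradation * x) * dt
            return x

        for beta in (1.0, 2.0):
            simple = steady_state(beta, autoregulated=False)
            nar = steady_state(beta, autoregulated=True)
            print(f"production={beta}: simple={simple:.2f}, auto-repressed={nar:.2f}")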

    These robust network motifs are the computational primitives from which more sophisticated systems can be assembled. When combined in specific ways they produce powerful systems for integrating information, accumulating evidence and making decisions – for example, to adopt one cell fate over another, to switch on a metabolic pathway, to infer the existence of an object from sensory inputs, to take some action in a certain situation.


    A conceptual framework for systems biology

    Understanding how such systems operate can be greatly advanced by incorporating principles from a wide range of fields, including control theory or cybernetics, information theory, computation theory, thermodynamics, decision theory, game theory, network theory, and many others. Though each of these is its own area, with its own scholarly traditions, they can be combined into a broader schema. Writing in the 1960s, Ludwig von Bertalanffy – an embryologist and philosopher – recognised the conceptual and explanatory power of an over-arching systems perspective, which he called simply General System Theory.

    Even this broad framework has limitations, however, as does the modern field of Systems Biology. The focus on circuit designs that mediate various types of information processing and computation is certainly an apt way of approaching living systems, but it remains, perhaps, too static, linear, and unidirectional.

    To fully understand how living organisms function, we need to go a little further, beyond a purely mechanistic computational perspective. Because living organisms are essentially goal-oriented, they are more than passive stimulus-response machines that transform inputs into outputs. They are proactive agents that actively maintain internal models of themselves and of the world and that accommodate to incoming information by updating those models and altering their internal states in order to achieve their short- and long-term goals.

    This means information is interpreted in the context of the state of the entire cell, or organism, which includes a record or memory of past states, as well as past decisions and outcomes. It is not just a message that is propagated through the system – it means something to the receiver and that meaning inheres not just in the message itself, but in the history and state of the receiver (whether that is a protein, a cell, an ensemble of cells, or the whole organism). The system is thus continuously in flux, with information flowing “down” as well as “up”, through a constantly interacting hierarchy of networks and sub-networks.

    A mature science of biology should thus be predicated on a philosophy more rooted in process than in fixed entities and states. These processes of flux can be treated mathematically in complexity theory, especially dynamical systems theory, and the study of self-organising systems and emergence.

    Figure: A useful Map of Complexity Science (by Brian Castellani)


    In addition, the field of semiotics (the study of signs and symbols) provides a principled approach to consider meaning. It emerged from linguistics, but the principles can be applied just as well to any system where information is passed from one element to another, and where the state and history of the receiver influence its interpretation of the message.

    In hierarchical systems, this perspective yields an important insight – because messages are passed between levels, and because this passing involves spatial or temporal filtering, many details are lost along the way. Those details are, in fact, inconsequential. Multiple physical states of a lower level can mean the same thing to a higher level, or when integrated over a longer timeframe, even though the information content is formally different. This means that the low-level laws of physics, while not violated in any way, are not sufficient in themselves to explain the behaviour of living organisms that process information in this way. It requires thinking of causation in a more extended fashion, both spatially and temporally, not solely based on the instantaneous locations and momentum of all the elementary particles of a system.
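    A trivial sketch of that coarse-graining point: many distinct low-level configurations can be “read” identically by a higher level that only sees a thresholded or time-averaged summary. The example below is deliberately artificial.

        # Many distinct micro-states map to the same macro-level "message" when the
        # receiver only sees a coarse summary (here, whether total activity crosses
        # a threshold). Deliberately artificial example.
        from itertools import product

        THRESHOLD = 3
        micro_states = list(product([0, 1], repeat=5))      # 32 possible configurations
        macro = {state: sum(state) >= THRESHOLD for state in micro_states}

        equivalent = sum(1 for v in macro.values() if v)    # states that "mean" the same thing
        print(f"{equivalent} of {len(micro_states)} micro-states read as the same high-level signal")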


    A new pedagogical approach in Biology

    In the basic undergraduate Biology textbook I have in my office there are no chapters or sections describing the kinds of principles discussed above. There is no mention of them at all, in fact. The words “system”, “network”, “computation” and “information” do not even appear in the index. The same is true for the textbooks on my shelf on Biochemistry, Molecular and Cell Biology, Developmental Biology, Genetics, and even Neuroscience.

    Each of these books is filled with detail about how particular subsystems work and each of them is almost completely lacking in any underpinning conceptual theory. Most of what we do in biology and much of what we teach is describing what’s happening – not what a system is doing. We’re always trying to figure out how some particular system works, without knowing anything about how systems work in general. Biology as a whole, along with its sub-disciplines, is simply not taught from that perspective.

    This may be because it necessarily involves mathematics and principles from physics, computing, and engineering, and many biologists are not very comfortable with those fields, or even acutely math-phobic. (I’m embarrassed to say my own mathematical skills have atrophied through decades of neglect). Mainly for that reason, the areas of science that do deal with these abstract principles – like Systems Biology or Computational Neuroscience, or more generally relevant fields like Cybernetics or Complexity Theory, are ironically seen as arcane specialties, rather than providing a general conceptual foundation of Biology.

    As we have learned more and more details in more and more areas, we seem to have actually moved further and further away from any kind of unifying framework. If anything, discussion of these kinds of issues was more lively and probably more influential in the early and mid-1900s when scientists were not so inundated by details. It was certainly easier at that time to be a true polymath and to bring to bear on biological questions principles discovered first in physics, computing, economics, or other areas.

    But now we’re drowning in data. We need to educate a new breed of biologists who are equipped to deal with it. I don’t mean just technically proficient in moving it around and feeding it into black box machine-learning algorithms in the hope of detecting some statistical patterns in it. And I don’t necessarily mean expert in all the complicated mathematics underlying all the areas mentioned above. I do mean equipped at least with the right conceptual and philosophical framework to really understand how living systems work.

    How to get to that point is the challenge, but one I think we should be thinking about.




    Resources

    Science and the Modern World. Alfred North Whitehead, 1925.

    The Strategy of the Genes. Conrad Waddington, 1957.

    The Computer and the Brain. John von Neumann, 1958.

    The Extended Phenotype. Richard Dawkins, 1982.

    Order out of Chaos: Man’s New Dialogue with Nature. Ilya Prigogine and Isabelle Stengers, 1984.

    Endless Forms Most Beautiful. Sean Carroll, 2005.

    Complexity: A Guided Tour. Melanie Mitchell, 2009.