Tuesday 1 June 2010

Worrying thoughts about specialization and growth

Could it be that the differentiation of Church and State, the development of universities (secular in their essence, even when staffed by religious), the process of specialization itself – that all these are actually first steps on an _inevitable_ path to where we are now (i.e. on the verge of a self-inflicted - almost self-willed - collapse of 'the West')?

Universities can be seen as by now vastly inflated institutions, not just parasitic but actively destructive in many ways. On the other hand, universities used to perform some functions which were essential to those aspects of modernity which we most admire: philosophy in the medieval university, classics in the next period, science (Wissenschaft) in the 19th century German universities, and so on.

But, in retrospect, all these golden ages of scholarship and research were more like brief transitional periods en route to something much worse.

For instance, the flowering of science (as a specialized, largely autonomous, social system) for the couple of centuries up until the mid twentieth century was a period of constant institutional change until science became - as now - *essentially* a branch of the state bureaucracy.

It seems that useful/ efficient specialization (including separation from State and Religion) leads to over specialization (or micro-specialization) which is increasingly less efficient, then less effective - and all this seems to lead back to re-absorption of science into the State (or into Religion, in principle).

For instance the London Royal Society became more and more autonomous in its conduct until maybe the mid-twentieth century, then became progressively reabsorbed back into the State until now the Royal Society gets about ¾ of its funding directly from the UK parliament, and the organization functions like a department of the UK civil administration.

If we go back and back to find the point at which this *apparently* unstoppable yet self-undermining process began in the West - I think it may lead to the difference between (say) medieval Orthodox Byzantium and Catholic Western Europe.

To scholasticism, perhaps? That was when the divergence first became apparent - when an academic, pre-scientific discipline (i.e. philosophy) became increasingly autonomous from Religion. (In the West, the religious hierarchy was already separate from the State hierarchy - although the two sometimes cooperated closely. In the East, Church and State formed an intermingled, single hierarchy.)

Indeed, my impression is that Thomistic scholasticism may itself be self-undermining – and that this can perhaps be seen in the history of the Roman Catholic Church and even of some specific scholastic scholars – for example Jacques Maritain or Thomas Merton? (They began as traditionalists and ended as modernizers.)

It seems that institutions can grasp the essence of Thomism, and yet the process of understanding does not at all prevent – indeed it perhaps encourages – the continuation of the process until it has destroyed the system itself. As Peter Abelard found, once the process of sceptical analysis has begun, there is no clear point at which it can be seen necessary to stop – and the only point at which it is known for sure that things have gone too far is when the system which supported the process has fallen to pieces; and by then it is too late.

Something similar may apply to science. The process of science creates a social system which at first really reinvents itself due to real discoveries, then later makes pseudo-discoveries in order really to reinvent itself, then finally makes pseudo-discoveries in order to pseudo-reinvent itself. At which point the full circle has been turned, and all that remains is to drop the pretence.

Of course, differentiation of society led initially to greater strength, based (probably) on frequent breakthroughs in science and technology which drove increased economic productivity and military capability. But as differentiation proceeded to micro- and destructive levels, the real breakthroughs dried up and were replaced with hype and spin, then later pure lies. Real economic growth was replaced with inflation and borrowing. Progress was replaced with propaganda.

We have already observed the whole story in the atheist and supposedly science-worshipping Soviet Union – which in Russia is maybe now returning to the more stable and robust pattern of Eastern Christian (Byzantine) theocracy - and the pattern is merely being repeated in the capitalist and democratic West.

(With the difference that the secular West will probably - in the medium term - return after the collapse of modernity to segmentary, chaotic tribalism, rather than large-scale cohesive theocracy.)

In sum, perhaps the process of social differentiation is unstoppable hence inevitably self-destroying? The increasing rate of science and technological breakthroughs from (say) 1700 to 1900 looked like progress in the conduct of human affairs until it wasn’t. The faster social system growth and differentiation proceeds, the faster it destroys itself.

Rapid growth and differentiation is therefore, in fact, intrinsically parasitic – whether or not we can actually detect the parasitism. At any rate, that's what it looks like to me.

Sunday 30 May 2010

Forbidden topics

One intractable debating point in science is the question of whether some things should *not* be researched.

And indeed, there can be very few people who don't have some topic or another which they would prefer not to be researched - although the specific topic varies widely from person to person, and the method by which research should be discouraged or prevented is also variable.

So that the people who believe that research should not be done on human embryos or human stem cells are likely to be different from the people who believe that research should not be done on chemical weapons or genetically-modified crops.

The reason for prohibited topics of research is that science is not, ever, the primary value system; and therefore science is inevitably subordinated to whatever *is* the primary value system. For most of human history the primary value system would have been religion - nowadays it might be politics.

And by 'value system' is not meant (as some people imagine) merely a moral or ethical system - but the whole system of 'goods', the positive versus negative evaluations - or what may be called transcendental values - virtue is one, beauty another, truth another and wholeness or unity is yet another possible 'good'.

So presumably, what gets done in science ought to reflect in some specific way, that which is good in some general way.

But does it? Of course not!

The whole motivational system of science is broken: there is no overall moral or ethical system except that the consensus of the powerful scientists is always right. In science, the consensus that matters is that of the peer review cartel of dominant scientists – since peer review controls scientific evaluation.

Scientists’ choice of topic and methods of work now passively reflect consensus; and the consensus is always changing, consensus has no direction, but still consensus is always right.

In other words, there is no concept of the good in modern science - since consensus is simply a word for an arbitrary and shifting outcome of social interplay among those with power to enforce their views. In other words, consensus is peer review, and peer review is consensus.

(Those who contrast science with consensus are therefore mistaken when it comes to modern science – although of course real science is *in a sense* the opposite of consensus.)

There is no concept of the good as a cohesive system located anywhere in the motivational system of science. There is therefore no responsibility.

Instead of ‘the good’ – an eternal ideal - there is an ethic of obedience to whatever is the outcome of undirected social interplay – even (or especially) when this outcome is arbitrary and ever-changing.

The advocacy of peer review as a gold standard is precisely this: that scientists must submit - swiftly, willingly, happily! - to the outcome of peer review, and that the outcome of peer review is intrinsically valid.

Even though peer review is unpredictable, changeable, and lacks any rationale: all the more *vital* that submission be swift, willing and cheerful!

***

The main method of enforcing scientific prohibition is 'defunding' - which can be done covertly by peer review on the pretext of 'scientific standards'. Also there is the failure to award jobs, promotions, publications, memberships and prizes to those whose topic and/ or views transgress the accepted boundaries.

After all, any job application, any paper, any grant request can be rejected on quasi-plausible grounds.

Many of the greatest scientists have been rejected from jobs, many of the greatest ideas in science have been rejected by peer review, and many of the major breakthroughs in science were turned down for grants - all on the grounds that they 'weren't good enough'.

Modern scientific evaluations equate lack of funding with illegitimacy – so defunded science is not merely ignorable science but *bad science*. That which is rejected by peer review is not merely unfashionable or mistaken science but *bad science*. Work done by people outside the peer review cartel is intrinsically *bad science* whenever and to whatever extent it conflicts with the intrinsically authoritative views of the power brokers.

It's an easy sophomoric trick of the half-educated. There has never been a paper in the history of medicine which could not be torn to shreds by a zealot with a Masters degree in epidemiology. The evaluation procedures are over-inclusively negative. The evaluation procedures always imply rejection. The question is merely when to apply the evaluation procedures, and when quietly to set them aside…

Sheltering coercive consensus behind the defense of ‘standards’, mainstream power-brokers and their apologists for prohibition can continue to advocate libertarian ideals.

Craven conformism masquerades as idealism.

And the potentially subverting self-knowledge of moral bankruptcy is deferred for yet another day…

Saturday 29 May 2010

Mainstream medical research: trivial science and useless medicine

Scientists generally assert their right to study anything they want to study ('blue skies' research) - as contrasted with the subject matter or direction of research being dictated by government, corporations and other organizations.

Yet over the past few decades scientists have tamely allowed the subject matter and direction of their research to be dictated by the funders of science and the peer review cartel who apply funding criteria.

Worse than this, scientists have colluded with evaluation criteria which measure inputs rather than outputs, so that for career purposes externally-funded science is seen as intrinsically superior to unfunded or self-funded science. Well-funded pseudo-science is privileged infinitely above unfunded real science (because unfunded science is discounted altogether, or sometimes counted negatively - as evidence of un-seriousness, disloyalty or misplaced effort).

So we have ended up with the worst of both worlds: the uselessness of arbitrary subject matter and the dullness of merely applied science.

My prime exhibit is mainstream medical research which is useless in the sense that it (almost) never discovers anything of medical/ clinical value (i.e. nothing useful in diagnosing or treating illnesses) - but it is plodding, incremental and organized on an industrial scale just like the most mundane industrial or military R&D.

Mainstream medical science is, in effect, R&D applied to useless and trivial (and dishonest) pseudo-medical subjects.

But IF medical research were motivated by the genuine interests of the scientists who do it (i.e. amateur science) or by the wants and needs of patients (i.e. clinical science - usually done by clinicians) - then it might stand some chance of making significant jumps in the progress of science or of medicine (or both).

As it is, mainstream medical research is at the same time trivial science and useless medicine.

Friday 28 May 2010

Motivation - the key to science?

Wanting to know the truth does not mean that you will find it; on the other hand, if scientists are not even *trying* to discover the truth - then the truth will not be discovered.

That is the current situation.

'Truth' can be defined as 'underlying reality’. Science is not the only way of discovering truth (for example, philosophy is also about discovering truth - science being in its origin a sub-class of philosophy) - but unless an activity is trying to discover underlying reality, then it certainly cannot be science.

But what motivates someone to want to discover the truth about something?

The great scientists are all very strongly motivated to ‘want to know’ – and this drives them to put in great efforts, and keeps them at their task for decades, in many instances. Why they should be interested in one thing rather than another remains a mystery – but what is clear is that this interest cannot be dictated but arises from within.

Crick commented that you should research that which you gossip about, Watson commented that you should avoid subjects which bore you - http://medicalhypotheses.blogspot.com/2007/12/gossip-test-boredom-principle.html - their point being that science is so difficult, that when motivation is deficient then problems will not get solved. Motivation needs all the help it can get.

Seth Roberts, in his superb new article (which happened to be the last paper I accepted for publication in Medical Hypotheses before I was sacked) makes the important point that one motivation to discover something useful in medicine is when you yourself suffer from a problem -

http://sethroberts.net/articles/2010%20The%20unreasonable%20effectiveness%20of%20my%20self-experimentation.pdf

Seth does self-experimentation on problems which he suffers - such as early morning awakening, or putting on too much weight (he is most famous for the Shangri-La diet). He has made several probable breakthroughs working alone and over a relatively short period; and one of the reasons is probably that he really wanted answers, and was not satisfied with answers unless they really made a significant difference.

By contrast, 95 percent (at least!) of professional scientists are not interested in the truth but are doing science for quite other reasons to do with 'career' - things like money, status, security, sociability, lifestyle, fame, to attract women or whatever.

The assumption in modern science is that professional researchers should properly be motivated by career incentives such as appointments, pay and promotion – and not by their intrinsic interest in a problem – certainly not by having a personal stake in finding an answer, such as being a sufferer. Indeed, such factors are portrayed as introducing bias/ damaging impartiality. The modern scientist is supposed to be a docile and obedient bureaucrat – switching ‘interests’ and tasks as required by the changing (or unchanging) imperatives of funding, the fashions of research and the orders of his master.

What determines a modern scientist’s choice of topic, of problem? Essentially it is peer review – the modern scientist is supposed to do whatever work that the cartel of peer-review-dominating scientists decide he should do.

This will almost certainly involve working as a team member for one or more of the peer review cartel scientists; doing some kind of allocated micro-specialized task of no meaning or intrinsic interest – but one which contributes to the overall project being managed by the peer review cartel member. Of course the funders and grant awarders have the major role in what science gets done, but nowadays the allocation of funding has long since been captured by the peer review cartel.

Most importantly, the peer review cartel has captured the ability to define success in solving scientific problems: they simply agree that the problem has been solved! Since peer review is now regarded as the gold standard of science, if the peer review cartel announces that a problem has been solved, then that problem has been solved.

(This is euphemistically termed hype or spin.)

To what does the modern scientist aspire? He aspires to become a member of the peer review cartel. In other words, he aspires to become a bureaucrat, a manager, a ‘politician’.

Is the peer review cartel member a scientist as well? Sometimes (not always) he used-to be – but the answer is essentially: no. Because being a modern high level bureaucrat, manager or politician is incompatible with truthfulness, and dishonesty is incompatible with science.

The good news is that when real science is restored (let's be optimistic!) its practitioners will again be motivated to discover true and useful things – because science will no longer be a career, it will be colonized almost-exclusively by those with a genuine interest in finding real-world answers.

Thursday 27 May 2010

Micro-specialization and the infinite perpetuation of error

Scientific specialization is generally supposed to benefit the precision and validity of knowledge within specializations, but at the cost of these specializations becoming more narrow, and loss of integration between specializations.

In other words, as specialization proceeds, people supposedly know more and more about less and less - the benefit being presumed to be more knowledge in each domain, the cost that nobody has a general understanding.

However, I think the supposed benefit is illusory. People do not really know more – often they know nothing at all, or everything they know is wrong, because it is undercut by fundamental errors.

Probably the benefits of specialization really do apply to the early stages of gross specialization such as the increase of scientific career differentiation in the early 20th century - the era when there was a division of university science degrees into Physics, Chemistry and Biology - then later a further modest subdivision of each of these into two or three.

But since the 1960s scientific specialization has now gone far beyond this point, and the process is now almost wholly disadvantageous. We are now in an era of micro-specialization, with dozens of subdivisions within sciences.

Part of this is simply the low average and peak level of ability, motivation and honesty in most branches of modern science. The number of scientists has increased by more than an order of magnitude – clearly this has an effect. Scientific training and conditions have become prolonged and dull and collectivist – deterring creative and self-motivated people. And these have happened in an era when the smartest kids tended not to gravitate to science, as they did in the early 20th century, but instead to professions such as medicine and law.

However there is a more basic and insoluble problem about micro-specialization. This is that micro-specialization is about micro-validation – which can neither detect nor correct gross errors in its basic suppositions.

In my experience, this is the case for many scientific specialties:

1. Epidemiologists are fixated on statistical issues and cannot detect major errors in their presuppositions because they do not regard individual patient data as valid nor do they regard sciences such as physiology and pharmacology as relevant. Hence they do not understand why statistical knowledge cannot replace biological and medical knowledge, nor why the average of 20 000 crudely measured randomized trial patients is not a substitute for the knowledgeable and careful study of individual patients. Since epidemiology emerged as a separate specialty, it has made no significant contribution to medicine but has led to many errors and false emphases. (All this is compounded by the dominant left-wing political agenda of almost all epidemiologists.)

2. Climate change scientists are fixated on fitting computer models to retrospective data sets, and cannot recognize that retrofitted models have zero intrinsic predictive validity. The validity of a model comes from the prediction of future events, from consistency with other sciences relevant to the components of the model, and from consistency with independent data not included in the retrofitting. Mainstream climate change scientists fail to notice that complex computer modelling has been of very little predictive or analytic value in other areas of science (macroeconomics, for instance). They don't even have a coherent understanding of the key concept of global temperature – if they did have a coherent concept of global temperature, they would realize that it is a _straightforward_ matter to detect changes in global temperature – since with proper controls every point on the globe would experience such changes. If the proper controls are not known, however, then global temperature simply cannot be measured; in which case climate scientists should either work out the necessary controls, or else shut-up.

3. Functional brain imaging involves the truly bizarre practice of averaging synaptic events: the temporal resolution of functional imaging methods typically averages over tens to hundreds of action potentials, and the spatial resolution averages over tens to hundreds of millions of synapses. There may also be multiple averaging and subtraction of repeated tasks. What this all means, at the end of some billions of averaged instances, is anybody's guess - almost certainly it is un-interpretable (just consider what it would mean to average _any_ biological activity in this kind of fashion!). Yet this stuff is the basis for the major branch of neuroscience which for three decades has been the major non-genetic branch of biological/ medical science - at the cost of who knows how many billions of pounds and man-hours. And at the end of the day, the contribution of functional brain imaging to biological science and medicine has been - roughly - none at all.
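The retrofitting problem in point 2 can be illustrated with a toy sketch (an assumption-laden illustration, not a claim about any actual climate model): fit a maximally flexible model to 'historical' data that is in fact pure noise, and the hindcast looks superb while the forecast is worthless. The degree-9 polynomial and the variable names here are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Historical data": pure noise, with no underlying trend to discover.
x = np.arange(10, dtype=float)
y = rng.normal(size=10)

# Retrofit: a degree-9 polynomial passes (almost) exactly through all 10 points.
coeffs = np.polyfit(x, y, deg=9)
fit_error = np.abs(np.polyval(coeffs, x) - y).max()

# Prediction: the same model, one step into the "future", compared against a
# fresh draw from the identical noise process.
future_error = abs(np.polyval(coeffs, 10.0) - rng.normal())

print(f"retrofit error:   {fit_error:.2e}")    # tiny: the hindcast looks superb
print(f"prediction error: {future_error:.2e}")  # vastly larger: the model predicts nothing
```

The retrofit error is close to machine precision, yet the one-step-ahead prediction misses by many orders of magnitude more - which is the sense in which predictive validity must be earned on data not used in the fitting.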

In other words, in the world of micro-specialization each specialist’s attention is focused on technical minutiae and the application of conventional proxy measures and operational definitions. These agreed practices are used in micro-specialities for no better reason than that 'everybody else' does the same and (there being no real validity to these activities) there must be some kind of arbitrary ‘standard’ against which people are judged. ('Everybody else' here means the dominant Big Science researchers who control peer review – appointments, promotions, grants, publications etc. – in that micro-speciality.)

Micro-specialists cannot even understand what has happened when there are fatal objections and comprehensive refutations of their standard paradigms which originate from adjacent areas of science.

In a nutshell, micro-specialization allows a situation to develop where the whole of a vast area of science is bogus; and for this reality to be intrinsically and permanently invisible and incomprehensible to the participants in that science.

If we then combine this fact with the notion that only micro-specialists are competent to evaluate the domain of their micro-speciality - then we have a situation of intractable error.

Which situation is precisely what we do have. Vast scientific enterprises have consumed vast resources without yielding any substantive progress, and the phenomenon continues for time-spans of several human generations, and there is no end in sight (short of the collapse of science-as-a-whole).

According to the analysts of classical science, science was supposed to be uniquely self-correcting - in practice, now, thanks in part to micro-specialization, it is not self-correcting at all. Either what we call science nowadays is not 'real science', or else real science has mutated into something which is a mechanism for the infinite perpetuation of error.

Tuesday 25 May 2010

'Medieval science' - rediscovered and improved

Although I was 'brought up' on the religion of science, and retain great respect for philosophers such as Jacob Bronowski and Karl Popper, and for sociologists of science such as Robert K. Merton, David L. Hull and John Ziman; I have come to believe that this 'classic' science (the kind which prevailed from the mid-19th to mid-20th century in the UK and Western Europe) - in other words, the activity which these authors described and analyzed - was 'merely' a transitional state.

In short: classic science was highly successful, but contained the seeds of its own destruction because the very processes that led to classic science would, when continued (and they could not be stopped) also destroy it.

(As so often, that which is beneficial in the short term is fatal in the longer term.)

Specifically, this transitional state of classic science was an early phase of professional science, which came between what might be called medieval science and modern science (which is not real science at all - but merely a generic bureaucratic organization which happened to have evolved from classic science).

But classic Mertonian/ Popperian science was never stable, and never reproduced itself: each generation of scientists had a distinctly different experience from the generation before, due to progressively increasing growth, specialization and professionalization/ bureaucratization.

And indeed Classic science was not the kind of science which led to the industrial revolution and the ‘modern world’; the modern world was a consequence of causes which came before modernity. The modern world is a consequence of medieval science. So, the pre-modern forms of science were real science, and had real consequences.

What I mean is that medieval science was an activity so diffuse and disorganized that we do not even recognize it as science – yet it was this kind of science which led to the process of societal transformation that historians only recognize as becoming visible from the 17th century (e.g. the founding of the Royal Society in 1660). The emergence of classic science was merely the point at which change became so visible that it could not be ignored.

It is therefore possible that since modernity science has been unravelling even as it expanded – i.e. that the processes of growth, specialization and professionalization/ bureaucratization were also subverting themselves, up to the point (which we have passed) where the damage due to growth, specialization and professionalization/ bureaucratization outstripped the benefit.

This is good news, I think.

Much of the elaborate and expensive paraphernalia of science - which we mistakenly perceive to be vital – may in fact be mostly a late and parasitic development. Effective science can be, has been, much simpler and cheaper.

When we consider science as including the medieval era prior to classic science, then it becomes clear that there is no distinctive methodology of science. Across the span of centuries, the process of doing science cannot really be defined more specifically than as a social and multigenerational activity characterized by truth-seeking.

I would further suggest that science is usually attempting to solve a problem – or to find a better solution to a problem than the existing one (problem solving per se is not science, but science is a kind of problem solving).

The main physical (rather than political) constraint on science in the past was probably the slowness and unreliability of communication and records. This made science extremely slow to advance. Nonetheless, these advances were significant and led to the modern world.

The encouraging interpretation is therefore that even when modern professional ‘science’ collapses a new version of medieval science should easily be able to replace it, because of the already-available improvements in the speed, accuracy and durability of communications.

In other words, a re-animated ‘medieval science’ (amateur, unspecialized, individualistic, self-organized) plus modern communications probably equals something pretty good, by world historical standards - probably not so good as the brilliant but unstable phase of 'classic science', but better than 'modern science'.

Monday 24 May 2010

Master and 'prentice'

Probably the most important bit of work I did as a scientist was the malaise theory of depression - http://www.hedweb.com/bgcharlton/depression.html .

I worked on this intermittently for nearly 20 years from about 1980 when I first began to study psychiatry. My motivation was trying to understand how a mood state could apparently be cured by medication.

From what I was being told, it seemed as if 'antidepressants' were supposed to normalize mood while leaving the rest of the mind unchanged. Of course, this isn't really true, but that was what I was trying to understand initially. Or, to put it another way, I was trying to understand what was 'an antidepressant' - since none of the standard explanations made any sense at all.

So, how did I 'solve' this problem (solve it at least to my own satisfaction, that is)? Part of it was 'phenomenology' (i.e. mental self-observation) - especially observing my own mood states in response to various illnesses (such as colds and flu) and in response to various medications (including some which I was taking for migraine).

But the best answer is that I did not really solve it myself, but only after I had subordinated my investigations to the work of two great scientists - two 'Masters': the Irish psychiatrist David Healy and the Portuguese neuroscientist Antonio R. Damasio.

This apprenticeship was almost entirely via the written word - and involved reading, thinking about and re-reading (and thinking about and re-re-reading some more) key passages from key works of these two scientists. That is, accepting these men as my mentors and discerning guides to the vast (and mostly wrong) literature of psychiatry and neuroscience.

The lesson that I draw from my experience is that real science (which is both rare and slow) is done and passed on by small social groups comprising a handful of great scientists and a still-small but somewhat larger number of 'disciples' who learn and apply their insights and serve to amplify their impact.

But even the great scientists have themselves mostly served as apprentices to other great scientists (as has often been documented - e.g. by Harriet Zuckerman in Scientific Elite).

So, when thinking about the social structure of real science, it would seem that real scientific work is done (slowly, over a time frame of a few decades) by small groups that are driven by Masters who make the breakthroughs; plus a larger number of 'prentices who learn discernment from the Masters ('discernment' - i.e. the correct making of evaluations - being probably more important than techniques or specific information).

But disciples by themselves are not capable of making breakthroughs; they are only capable of incremental extensions or combinations of the Masters' work.

And it is best if the Master can be primarily responsible for training the next generation Master/s to carry on the baton of real science. Disciples can - at best - only train-up more 'prentices with the humility to seek and serve a Master.

Friday 21 May 2010

Doing science after the death of real science

Science is now, basically, dead. (My direct experience of science is inevitably partial - but the same mechanisms seem to have been at work everywhere; even outside medicine, in the humanities, some of which I know reasonably well - and the social sciences were essentially corrupt from the word go.)

What we think of as science is now merely a branch of the bureaucracy. It would, indeed it does, function perfectly well without doing any useful and valid science at all.

Indeed, modern professional science functions perfectly well while, in fact, *destroying* useful and valid science and replacing it with either rubbish or actively harmful stuff (this is very clear in psychiatry).

I find that I now cannot trust the medical research literature _at all_. I trust a few individuals, but I do not trust journals, nor fields, nor funding agencies, nor scholarly societies (like the Royal Society, universities, or the NAS), nor citations, nor prizes (Nobel etc.) - in my opinion, none of these are trustworthy indices of scientific validity - not even 'on average'.

The system is *so* corrupt that finding useful and valid science (and, of course, there is some) is like finding a needle in a haystack.

The vast bulk of published work is either hyped triviality (which is time wasting at best), or dishonest in a range of ways from actual invention of data down to deliberately selective publication, or else incompetent in the worst sense - the sense that the researchers lack knowledge, experience and even interest in the problems which they are pretending to solve.

So, what should a person do who wants to do real science in an area - if (as I think is probably the case) they need to _ignore_ the mainstream published literature as doing more harm than good?

Essentially it is a matter of going back to pre-professional science, and trying to recreate trust based interpersonal networks ('invisible colleges') of truthful, dedicated amateurs; and accepting that the pace of science will be *slow*.

I've been reading Erwin Chargaff lately, and he made clear that the pace of science really is slow. I mean with significant increments coming at gaps of several years - something like one step a decade or so, if you are lucky. And if 'science' seems fast, then that is because it is not science!

This is why science is destroyed by professionalism and its vast expansion - there are too few steps of progress, and too few people ever make these steps. Most 'scientists' (nowadays in excess of 99 percent of them) - if judged correctly - are complete and utter failures, or indeed saboteurs!

So science ought to be done as a serious hobby/ pastime, paid for by some other economic activity (which has usually been teaching, but was often medicine up to the early 20th century, and before that being a priest).

Why should anyone take any notice of these putative small and self-selected groups of hobby-scientists? Well, presumably if they produce useful results (useful as judged by common-sense criteria - like relieving pain or reversing the predictable natural history of a disease), and if the members of the group are honest and trustworthy. But whether this will happen depends on much else - their work may be swamped by public relations.

So, groups of practitioners are best able to function as amateur scientists, since they can implement their own findings, with a chance that their effectiveness might be noticed. And in the past groups of practicing physicians would function as the scientists for their area of interest.

This seems the best model I can think of for those wanting to do science. But science is intrinsically a social activity, not an individual activity. So if you cannot find or create a group that you can trust (and whose competence you trust) - then you cannot do real science.

Tuesday 29 September 2009

Stop prescribing antipsychotics! - when possible

Charlton BG. Why are doctors still prescribing neuroleptics? QJM 2006; 99: 417-20.

This is a version of my paper 'Why are doctors still prescribing neuroleptics?' in which the word 'neuroleptic' has been replaced by the word 'antipsychotic'. Both neuroleptic and antipsychotic refer to the same class of drugs - but neuroleptic was the original and most scientifically-accurate name. Antipsychotic is a dishonest marketing term, since these drugs are not anti-psychotic. However, antipsychotic has now all-but taken over from neuroleptic in mainstream discourse - so I have prepared this version of my paper containing the more common term.

Bruce G Charlton

Abstract

There are two main pharmacological methods of suppressing undesired behavior: by sedation or with antipsychotics. Traditionally, the invention of antipsychotics has been hailed as one of the major clinical breakthroughs of the twentieth century, since they calmed agitation without (necessarily) causing sedation. The specifically antipsychotic form of behavioral control is achieved by making patients psychologically Parkinsonian – which entails emotional-blunting and consequent demotivation. Furthermore, chronic antipsychotic usage creates dependence so that - in the long term, for most patients - antipsychotics are doing more harm than good. The introduction of ‘atypical’ antipsychotics (ie. antipsychotically-weak but strongly sedative antipsychotics) has made only a difference in degree, and at the cost of a wide range of potentially fatal metabolic and other side effects. It now seems distinctly possible that, for half a century, the creation of many millions of Parkinsonian patients has been misinterpreted as a ‘cure’ for schizophrenia. Such a wholesale re-interpretation of antipsychotic therapy represents an unprecedented disaster for the self-image and public reputation of both psychiatry and the whole medical profession. Nonetheless, except as a last resort, antipsychotics should swiftly be replaced by gentler and safer sedatives.

* * *

It is usually said, and I have said it myself, that the invention of antipsychotics was one of the major therapeutic breakthroughs of the twentieth century [1]. But I now believe that this opinion is due for revision, indeed reversal. Antipsychotics have achieved their powerful therapeutic effects at too great a cost, and a cost which is intrinsic to their effect [2, 3]. The cost has been many millions of formerly-psychotic patients who are socially-docile but emotionally-blunted, de-motivated, chronically antipsychotic-dependent and suffering significantly increased mortality rates. Consequently, as a matter of some urgency, antipsychotic prescriptions should be curtailed to the point that they are used only as a last resort.


Behavioral suppression in medicine

Psychiatrists, especially those working in hospitals, have frequent need for interventions to calm and control behavior – either for the safety of the patient or of society. The same applies – less frequently – for other medical personnel dealing with agitation, for example due to delirium or dementia. Broadly speaking, there are two pharmacological methods of suppressing agitated behavior: with sedatives or with antipsychotics [2, 3].

Sedation was the standard method of calming and controlling psychiatric patients for many decades prior to the discovery of antipsychotics, and sedation remained the only method in situations where antipsychotics were not available (eg in the Eastern Bloc and under-developed countries) [3, 4].

The therapeutic benefits of sedation should not be underestimated. In the first place sedation can usually be achieved safely and without sinister side effects; and an improved quality of sleep makes patients feel and function better. Sedation may also be potentially ‘curative’ where sleep disturbance has been so severe and prolonged as to lead to delirium, which (arguably) may be the case for some psychotic patients such as those with mania [2, 5].

But clearly - except in the short term - sedation is far from an ideal method of suppressing agitation. The discovery of antipsychotics offered something qualitatively new in terms of behavioral control: the possibility of powerfully calming a patient without (necessarily) making them sleepy [4]. In practice, sedative antipsychotics (such as chlorpromazine or thioridazine), or a combination of a sedative (such as lorazepam or promethazine) with a less-sedating antipsychotic such as haloperidol or droperidol, were often used to combine both forms of behavioral suppression.


The Parkinsonian core effect of antipsychotics

The Parkinsonian (emotion-blunting and de-motivating) core effect of antipsychotics has been missed by most observers. This failure relates to a blind-spot concerning the nature of Parkinsonism.

Parkinsonism is not just a motor disorder. Although abnormal movements (and an inability to move) are its most obvious feature, Parkinsonism is also a profoundly ‘psychiatric’ illness in the sense that emotional-blunting and consequent de-motivation are major subjective aspects. All this is exquisitely described in Oliver Sacks's famous book Awakenings [10], as well as being clinically apparent to the empathic observer.

Emotional-blunting is de-motivating because drive comes from the ability subjectively to experience in the here-and-now the anticipated pleasure deriving from cognitively-modeled future accomplishments [2]. An emotionally-blunted individual therefore lacks current emotional rewards for planned future activity, including future social interactions, hence ‘cannot be bothered’.

Demotivation is therefore simply the undesired other side of the coin from the desired therapeutic effect of antipsychotics. Antipsychotic ‘tranquillization’ is precisely this state of indifference [8]. The ‘therapeutic’ effect of antipsychotics derives from indifference towards negative stimuli, such as fear-inducing mental contents (such as delusions or hallucinations); while anhedonia and lack of drive are predictable consequences of exactly this same state of indifference in relation to the positive things of life.

So, Parkinsonism is not a ‘side-effect’ of antipsychotics, neither is it avoidable. Instead, Parkinsonism is the core therapeutic effect of antipsychotics: as reflected in the original name ‘neuroleptic’, which refers to an agent which ‘seizes’ the nervous system and holds it constant (ie. indifferent, blunted) [4]. Demotivation should be regarded as inextricable from the antipsychotic form of tranquillization [2]. And the so-called ‘negative symptoms’ of schizophrenia are (in most instances) simply an inevitable consequence of antipsychotic treatment [4].

By this account, the so-called ‘atypical’ antipsychotics (risperidone, olanzapine, quetiapine etc.) are merely weaker Parkinsonism-inducing agents. The behavior-controlling effect of ‘atypicals’ derives from inducing a somewhat milder form of Parkinsonism, combined with strong sedation [11]. However, clozapine is an exception, because clozapine is not an antipsychotic, does not induce Parkinsonism, and therefore (presumably) gets its behavior-controlling therapeutic effect from sedation. The supposed benefit from clozapine of ‘treating’ the ‘negative symptoms of schizophrenia’ (such as de-motivation, lack of drive, asocial behavior etc.) is therefore that – not being an antipsychotic – clozapine does not itself cause these negative symptoms.


What next?

Whatever the historical explanation for the wholesale misinterpretation of antipsychotic actions, recent high profile papers in the New England Journal of Medicine [12, 13] and JAMA [14] have highlighted serious problems with antipsychotics as a class (whether traditional or atypical), and the tide of opinion now seems to be turning against them.

In particular, the so-called ‘atypical’ antipsychotics, which now take up 90 percent of the US market [12] and are increasingly being prescribed to children [6], seem to offer few advantages over the traditional agents [12] while being highly toxic and associated with significantly-increased mortality from metabolic and a variety of other causes [13, 14, 15, 16]. This new data has added weight to the idea that usage of antipsychotics should now be severely restricted [3, 7, 17].

Indeed, it looks as if after some 50 years of widespread prescribing there is going to be a massive re-evaluation and re-interpretation of these drugs, with a reversal of their evaluation as a great therapeutic breakthrough. It now seems distinctly possible that for half a century the creation of millions of asocial, antipsychotic-dependent but docile Parkinsonian patients has been misinterpreted as a ‘cure’ for schizophrenia. This wholesale re-interpretation represents an unprecedented disaster for the self-image and public reputation – not just of psychiatry – but of the whole medical profession.

Perhaps the main useful lesson from the emergence of the 'atypical' antipsychotics is that psychiatrists did not need to make all of their agitated and psychotic patients Parkinsonian in order to suppress their behavior. ‘Atypicals’ are weakly antipsychotic but highly sedative. This implies that sedation is probably sufficient for behavioral control in most instances [3, 17]. In the immediate term, it therefore seems plausible that already-existing, cheap, sedative drugs (such as benzodiazepines or antihistamines) offer realistic hope of being safer, equally effective and subjectively less-unpleasant substitutes for antipsychotics in many (if not all) patients.

I would argue that this should happen sooner rather than later. If we apply the test of choosing what treatment we would prefer for ourselves or our relatives with acute agitation or psychosis, knowing what we now know about antipsychotics, I think that many people (perhaps especially psychiatric professionals) would now wish to avoid antipsychotics except as a last resort. Few would be happy to wait a decade or so for the accumulation of a mass of randomized trial data (which may never emerge, since such trials would lack a commercial incentive) before making the choice of less dangerous and unpleasant drugs [17].

But there is no hiding the fact that if antipsychotics were indeed to be replaced by sedatives then this would seem like stepping-back half a century. It would entail an acknowledgement that psychiatry has been living in a chronic delusional state – and this may suggest that the same could apply to other branches of medicine. Since such a wholesale cognitive and organizational reappraisal is unlikely, perhaps the most realistic way that the desired change in practice will be accomplished is not by an explicit ‘return’ to old drugs but by the introduction of a novel (and patentable) class of sedatives which are marketed as having some kind of (more-or-less plausible) new therapeutic role.

Such a new class of tacit sedatives would enable the medical profession to continue its narrative of building-upon past progress, and retain its self-respect; albeit at the price of cognitive evasiveness. But, if such developments led to a major cut-back in antipsychotic prescriptions, then this deficiency of intellectual honesty would be a small price to pay.


References

1. Charlton BG. Clinical research methods for the new millennium. Journal of Evaluation in Clinical Practice 1999; 5: 251-263.

2. Charlton B. Psychiatry and the human condition. Radcliffe Medical Press: Oxford, UK, 2000.

3. Moncrieff J, Cohen D. Rethinking models of psychotropic drug action. Psychotherapy and Psychosomatics 2005; 74: 145-153.

4. Healy D. The creation of psychopharmacology. Harvard University Press: Cambridge, MA, USA, 2002.

5. Charlton BG, Kavanau JL. Delirium and psychotic symptoms: an integrative model. Medical Hypotheses. 2002; 58: 24-27.

6. Whitaker R. Mad in America. Perseus Publishing: Cambridge, MA, USA, 2002.

7. Whitaker R. The case against antipsychotic drugs: a 50 year record of doing more harm than good. Medical Hypotheses 2004; 62: 5-13.

8. Healy D. Psychiatric drugs explained. 3rd edition. Churchill Livingstone: Edinburgh, 2002.

9. Healy D, Farquhar G: Immediate effects of droperidol. Human Psychopharmacology 1998; 13: 113-120.

10. Sacks O. Awakenings. London: Picador, 1981.

11. Janssen P. From haloperidol to risperidone. In D Healy (Ed.) The psychopharmacologists. London: Altman, 1998, pp 39-70

12. Lieberman JA, Stroup TS, McEvoy JP, Swartz MS, Rosenheck RA, Perkins DO, Keefe RS, Davis SM, Davis CE, Lebowitz BD, Severe J, Hsiao JK; Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) Investigators. Effectiveness of antipsychotic drugs in patients with chronic schizophrenia. New England Journal of Medicine 2005; 353: 1209-23.

13. Wang PS, Schneeweiss S, Avorn J, Fischer MA, Mogun H, Solomon DH, Brookhart MA. Risk of death in elderly users of conventional vs. atypical antipsychotic medications. New England Journal of Medicine 2005; 353: 2335-2341.

14. Schneider LS, Dagerman KS, Insel P. Risk of death with atypical antipsychotic drug treatment for dementia: meta-analysis of randomized placebo-controlled trials. JAMA 2005; 294: 1934-43.

15. Montout C, Casadebaig F, Lagnaoui R, Verdoux H, Phillipe A, Begaud B, Moore N. Neuroleptics and mortality in schizophrenia: a prospective analysis of deaths in a French cohort of schizophrenic patients. Schizophrenia Research 2002; 147-156.

16. Morgan MG, Scully PJ, Youssef HA, Kinsella A, Owens JM, Waddington JL. Prospective analysis of premature mortality in schizophrenia in relation to health service engagement: a 7.5 year study within an epidemiologically complete, homogenous population in rural Ireland. Psychiatry Research. 2003; 117: 127-135.

17. Charlton BG. If 'atypical' neuroleptics did not exist, it wouldn't be necessary to invent them: perverse incentives in drug development, research, marketing and clinical practice. Medical Hypotheses 2005; 65: 1005-9.

Wednesday 5 August 2009

Zombie science of Evidence-Based Medicine

*

The Zombie science of Evidence-Based Medicine (EBM): a personal retrospective

Bruce G Charlton. Journal of Evaluation in Clinical Practice. 2009; 15: 930-934.

Professor of Theoretical Medicine
University of Buckingham
e-mail: bruce.charlton@buckingham.ac.uk

***
Abstract

As one of the fairly-frequently cited critics of the socio-political phenomenon which styles itself Evidence-Based Medicine (EBM), it might be assumed that I would be quite satisfied by having collected some scraps of fame, or at least notoriety, from the connection. But in fact I now feel a fool for having been drawn into criticizing EBM with the confident expectation of being able to kill it before it had the chance to do too much harm. A 'fool' not because my criticisms of EBM were wrong – EBM really is just as uninformed, confused and dishonest as I claimed it was. But because, with the benefit of hindsight, it is obvious that EBM was from its very inception a Zombie science – a lumbering hulk reanimated from the corpse of Clinical Epidemiology. And a Zombie science cannot be killed because it is already dead. A Zombie science does not perform any scientific function, so it is invulnerable to scientific critique; it is sustained purely by the continuous pumping of funds. The true function of Zombie science is to satisfy the (non-scientific) needs of its funders – and indeed the massive success of EBM is that it has rationalized the takeover of UK clinical medicine by politicians and managers. So I was simply wasting my time by engaging in critical evaluation of EBM using the normal scientific modes of reason, knowledge and facts. Arguing against EBM was useless, because a Zombie science cannot be stopped by any method short of cutting-off its fund supply.

***

It is pointless trying to kill the un-dead

Since I am one of the fairly-frequently cited critics of the socio-political phenomenon which styles itself Evidence-Based Medicine (EBM), it might be assumed that I would be quite satisfied by having collected some scraps of fame, or at least notoriety, from the connection [1-4]. But in fact I feel rather a fool for having been drawn into criticizing EBM. Not because my criticisms were wrong – of course they weren’t. EBM really is just as uninformed, confused and dishonest as I claimed in my writings. But because - with the benefit of hindsight - it is obvious that EBM was, from its very inception, a Zombie science: reanimated from the corpse of Clinical Epidemiology.

I recently delineated the concept of Zombie science [5], partly based on my experiences with EBM, and the concept has already spread quite widely via the internet. Zombie science is defined as a science that is dead but will not lie down. Instead, it keeps twitching and lumbering around so that it somewhat resembles Real science. But on closer examination, the Zombie has no life of its own (i.e. Zombie science is not driven by the scientific search for truth [6]); it is animated and moved only by the incessant pumping of funds. Funding is the necessary and sufficient reason for the existence of Zombie science; which is kept moving for so long as it serves the purposes of its funders (and no longer).

So it is a waste of time arguing against Zombie Science because it cannot be stopped by any method except by cutting-off its fund supply. Zombie science cannot be killed because it is already dead.


When dead fish seem to swim

I ought to have seen all this more quickly, because I knew Clinical Epidemiology (CE) [7] before it was murdered and reanimated as an anti-CE Zombie, and I was by chance actually present at more-or-less the exact public moment when – by David Sackett, in Oxford, in 1994 - the corpse of CE was galvanized into motion in the UK, and began to be inflated by rapid infusion of NHS funding on a massive scale. The process was swiftly boosted by zealous promotion from the British Medical Journal, which turned itself into a de facto journal of EBM, then went on to benefit from the numerous publishing and conference opportunities created by their advocacy [3].

I ask myself now: how could I have been so naïve as to imagine that EBM – born of ignorance and deception - would be open to the usual scientific processes of conjecture and refutation? From the very beginning, from the very choice of the name 'evidence-based medicine', it was surely obvious enough to the un-blinkered gaze that we were dealing here with something outwith the ken of science. Why couldn’t I see this? Why was I so ludicrously reasonable?

The fact is that I was lured into engagement by pride: pride at seeing-through the clouds of smoke which were being deployed to obscure the origins of EBM in CE; pride at recognizing the numerous ‘moves’ by which unfounded assertion was being disguised as evidential - at the playing fast-and-loose with definitions of ‘evidence’ in order to reach pre-determined conclusions… I think that I was excited by the possibilities of engaging in what looked like an easy demolition job. My only misgiving was that it was too easy – destroying the scientific basis of EBM would be about as challenging as shooting fish in a barrel.

Well, it was as easy as that. But shooting fish in a barrel is a pointless activity if nobody is interested in the vitality of the fish. As it turned-out – the edifice of EBM could be supported as easily by a barrel of dead fish as by one full of lively swimmers. So long as the fish corpses were swirling around (stirred by the influx of research funding) then, for all anyone could see at a quick glance (which is the only kind of glance most people will give), it looked near-enough as if the fish were still alive.

Anyway, undaunted, I and many others associated with the Journal of Evaluation in Clinical Practice set about the work of analyzing, selecting and discarding among the assertions and propositions of EBM.

There was the foundational assertion that in the past pre-EBM medicine had not been based on evidence but on a blend of prejudice, tradition and subjective whim; this now to be swept aside by the ‘systematic’ use of ‘best evidence’. This was an ignorant and unfounded belief – coming as it did after the (pretty much) epidemiology-unassisted 'golden age' of medical therapeutic discovery peaking somewhere between about 1940 and 1970 [8-11].

With regard to ‘best evidence’ there was the assertion that ‘evidence’ meant only focusing on epidemiological data (and not biochemistry, genetics, physiology, pharmacology, engineering or any other of the domains which had generated scientific breakthroughs in the past). It meant ignoring the role of ‘tacit knowledge’ derived from apprenticeship. And it was clearly untrue [12-15].

Then there was the assertion that the averaged outcomes of epidemiological data - specifically randomized controlled trials (RCTs) and their aggregation by meta-analysis - were straightforwardly applicable to individual patients. This was a mistake [12, 16-18].

On top of this there was the methodological assertion that among RCTs the ‘best’ were the biggest – the ‘mega-trials’ which attempted to maximize recruitment and retention of subjects by simplifying methodologies and thereby reducing the level of control. This was erroneous [12, 16, 19].

In killing-off the bottom-up ideals of Clinical Epidemiology, EBM embraced a top-down and coercive power structure to impose EBM-defined ‘best evidence’ on clinical practice [20, 21]; this to happen whether clinical scientists or doctors agreed that the evidence was best or not (and because doctors had been foundationally branded as prejudiced, conservative and irrational – EBM advocates were pre-disposed to ignore their views anyway).

Expertise was arbitrarily redefined in epidemiological and biostatistical terms, and virtue redefined as submission to EBM recommendations – so that the job of physician was at a stroke transformed into one based upon absolute obedience to the instructions of EBM-utilizing managers [3].

(Indeed, since too many UK doctors were found to be disobedient to their managers, in the NHS this has led to a progressive long-term strategy of replacing doctors with more-controllable nurses, who are now the first contact for patients in many primary and specialist health service situations.)


Biting-off the hand that offers EBM

The biggest mistake made in analyzing the EBM phenomenon is to assume that the success of EBM depended upon the validity of its scientific or medical credentials [22]. This would indeed be reasonable if EBM were a Real science. But EBM was not a Real science; indeed it wasn’t any kind of science at all - as had been clearer when it was correctly characterized as a branch of epidemiology, which is a methodological approach sometimes used by science [13-15].

EBM did not need to be a science or a scientific methodology, because it was not adopted by scientists but by politicians, government officials, managers and biostatisticians [3]. All it needed – scientifically – was to look enough like a scientific activity to convince a group of uninformed people who stood to benefit personally from its adoption.

So, the sequence of falsehoods, errors, platitudes and outrageous ex cathedra statements which constituted the ideological foundation of EBM, cannot be – and is not - an adequate or even partial explanation for the truly phenomenal expansion of EBM. Whether EBM was self-consciously crafted to promote the interests of government and management, or whether this confluence of EBM theory and government need was merely fortuitous, is something I do not know. But the fact is that the EBM advocates were shoving at an open door.

When the UK government finally understood that what was being proposed was a perfect rationale for re-moulding medicine into exactly the shape they had always wanted it - the NHS hierarchy were falling over each other in their haste to establish this new orthodoxy in management, medical education and in founding new government institutions such as NICE (originally meaning the National Institute for Clinical Excellence – since renamed [20]).

As soon as the EBM advocates knocked politely to offer a try-out of their newly-created Zombie; the door was flung open and the EBM-ers were dragged inside, showered with gold and (with the help of the like-minded Cochrane Collaboration and BMJ publications) the Zombie was cloned and its replicas installed in positions of power and influence.

Suddenly the Zombie science of EBM was everywhere in the UK because money-to-do-EBM was everywhere – and modern medical researchers are rapidly-evolving organisms which can mutate to colonize any richly-resourced niche – unhampered by inconveniences such as truthfulness or integrity [23]. Anyway, when existing personnel were unwilling, there was plenty of money to appoint new ones to new jobs.


The slaying of Clinical Epidemiology (CE)

But how was the Zombie created in the first place?

In the beginning, there had been a useful methodological approach called Clinical Epidemiology (CE), which was essentially the brainchild of the late Alvan Feinstein – a ferociously intelligent, creative and productive clinical scientist who became the senior Professor of Medicine at Yale and recipient of the prestigious Gairdner Award (a kind of mini-Nobel prize). Feinstein's approach was to focus on using biostatistical evidence to support clinical decision making, and to develop forms of measurement which would be tailored for use in the clinical situation. He published a big and expensive book called Clinical Epidemiology in 1985 [24]. Things were developing nicely.

The baton of CE was then taken up at McMaster University by David Sackett, who invited Feinstein to come as a visiting professor. Sackett turned out to be a disciple easily as productive as Feinstein; but, because he saw things more simply than Feinstein, Sackett had the advantage of a more easily understood world-view, prose style and teaching persona. So when Sackett and co-authors also published a book entitled Clinical Epidemiology in 1985 [7], it was less complex, less massive and much less expensive than Feinstein's - and Sackett swiftly became the public face of Clinical Epidemiology.

But in this 1985 book, Sackett cited as definitive his much earlier 1969 definition of Clinical Epidemiology, which ran as follows: “I define clinical epidemiology as the application, by a physician who provides direct patient care, of epidemiologic and biometric methods to the study of diagnostic and therapeutic process in order to effect an improvement in health. I do not believe that clinical epidemiology constitutes a distinct or isolated discipline but, rather, that it reflects an orientation arising from both clinical medicine and epidemiology. A clinical epidemiologist is, therefore, an individual with extensive training and experience in clinical medicine who, after receiving appropriate training in epidemiology and biostatistics, continues to provide direct patient care in his subsequent career” [25] (Italics are in the original.).

Just savour those words: ‘by a physician who provides direct patient care’ and ‘I do not believe that clinical epidemiology constitutes a distinct or isolated discipline… but, rather, an orientation’. These primary and foundational definitions of clinical epidemiology were to be reversed when the subject was killed and reanimated as EBM, which was marketed as a ‘distinct and isolated discipline’ (with its own training and certification, its own conferences, journals and specialized jobs) that was being practiced by many individuals (politicians, bureaucrats, managers, bio-statisticians, public health employees…) who certainly lacked ‘extensive’ (or indeed any) ‘training and experience in clinical medicine’; and who certainly did not provide direct patient care.

I came across Sackett's Clinical Epidemiology book in 1989 and was impressed. Although I recognized that CE ought to be considerably less algorithm-like and more judgment-based than the authors suggested even in 1985; nonetheless I recognized that Clinical Epidemiology was a fresh, reasonable and perfectly legitimate branch of knowledge with relevance to medical practice. And Clinical Epidemiology was a good name for the new subject, since it described the methodological nature of the activity – which was concerned with the importance of epidemiological methods and information to clinical practice.

But during the period from 1990-92, Clinical Epidemiology was first quietly killed then loudly reanimated as Evidence-Based Medicine [22]. In retrospect we can now see that this was not simply the replacement of an honest name with a dishonest one that arrogantly and without justification begs all the important questions about medical practice. Nor was it merely the replacement of the bottom up model of Clinical Epidemiology with the authoritarian dictatorship which EBM rapidly became.

No, EBM was much more radically different from Clinical Epidemiology than merely a change of name and an inversion of authority; because the new EBM sprang from the womb fully-formed as a self-evident truth [3, 4]. EBM was not a hypothesis but a circular and self-justifying revelation in which definition supported analysis which supported definition – all rolled-up in an urgent moral imperative. To know EBM was to love him; and to recognize him as the Messiah; and to anticipate his imminent coming.

Therefore EBM was immune to the usual feedback and critique mechanisms of science; EBM was not merely disproof-proof but was actually virtuous – and failure to acknowledge the virtuous authority of EBM and adopt it immediately was not just stupid but wicked!

(This moralizing zeal was greatly boosted by association with the Cochrane Collaboration including its ubiquitous spiritual leader, Sir Iain Chalmers.)

In short, EBM was never required to prove itself superior to the existing model of medical practice; rather, existing practice was put into the position of having to prove itself superior to the newcomer EBM!


Zombies with translucent skins

Just think of it, for a moment. Here was a doctrine which advocated rejecting and replacing-with-itself the whole mode of medical science and practice of the past. It advocated a new model of health service provision, new principles for research funding, a new basis for medical education. And the evidence for this? Well… none. Not one particle. ‘Evidence-based’ medicine was based on zero evidence.

As Goodman articulated (in perhaps the best single sentence ever written on the subject of EBM) “…There is no evidence (and unlikely ever to be) that evidence-based medicine provides better medical care in total than whatever we like to call whatever went before…” [26]. So EBM was never required to prove with evidence what it should have been necessary to prove before beginning the wholesale reorganization of medical practice: i.e. that EBM was a better system than ‘whatever we like to call’ whatever went before EBM.

Had anyone done any kind of side-by-side prospective comparison of these two systems of practicing medicine before rejecting one and adopting the other? No. They could have done it, but they didn’t. The message was that EBM was just plain better: end of story.

But how could this happen? Why did the medical world not merely laugh in the metaphorical face of this pretender to the crown? The answer was money, of course; because EBM was proclaimed Messiah with the backing of serious amounts of UK state funding. Indeed, it is now apparent that the funding was the whole thing. If EBM were a body, then its intellectual content would be merely a thin skin of superficial plausibility covering innards that consist of nothing more than liquid cash, sloshing around.

Indeed, the thin skin of the EBM Zombie was the secret of its success. The EBM Zombie had such a thin skin of plausibility that it was transparent, and observers could actually see the money circulating beneath it. The plausibility was miraculously thin! This meant that EBM-type plausibility was democratically available to everyone: to the ignorant and the unintelligent as well as the informed and the expert. How marvelously empowering! What a radical poke in the eye for the arrogant ‘establishment’! (And the EBM founders are all outspoken advocates of the tenured radicalism of the ‘60s student generation [4].)

Compared with learning a Real science, it was facile to learn the few threads of EBM jargon required to stitch-together your own Zombie skin using bits and pieces of your own expertise (however limited); then along would come the UK government and pump this diaphanous membrane full of cash to create a fairly-realistic Zombie of pseudo-science. In a world where scientific identity can be self-defined, and scientific status is a matter of grant income [11], then the resulting inflatable monster bears a sufficient resemblance to Real science to perform the necessary functions such as securing jobs or promotions, enhancing salary and status.

The fact that EBM was based upon pure and untested assertions therefore did not weaken it in the slightest; rather the scientific weakness was itself a source of political strength. Because, in a situation where belief in EBM was so heavily subsidized, it was up to critics conclusively to prove the negative: that EBM could not work. And even when conclusive proof was forthcoming, it could easily be ignored. After all, who cares about the views of a bunch of losers who can’t recognize a major funding opportunity when they see it?


Content eluted, only power remains

Things got even worse for those of us who were pathetically trying to stop a government-fuelled Zombie army using only the peashooters of rational debate and the catapults of ridicule. Early EBM made propositions which were evidently wrong, but its recommendations did at least have genuine content. If you installed the EBM clockwork and turned the handle, then what came out was predictable and had content. EBM might have told you the wrong things to do; but at least it told you what to do, in words that had meaning.

But then came the stunning 1996 U-turn in the BMJ (where else?), in which the advocates of EBM suddenly de-programmed their Zombies: “Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients” [27].

(Pause to allow the enormity of this statement to sink in…)

At a stroke the official meaning of EBM was completely changed, a vacuous platitude was substituted, and henceforth any substantive methodological criticism was met by a rolling-out and mantra-like repetition of this vacuous platitude [28].

Recall, if you will, Sackett’s foundational definition of CE as something done ‘by a physician who provides direct patient care’, and his insistence that ‘I do not believe that clinical epidemiology constitutes a distinct or isolated discipline… but, rather, an orientation’. To suggest that EBM represented a ‘sell-out’ of clinical epidemiology is seriously to understate the matter: by 1996 EBM was nothing short of a total reversal of the underlying principles of CE. Alvan Feinstein, true founder of (real) Clinical Epidemiology, considered EBM intellectually laughable – albeit fraught with hazard if taken seriously [e.g. 29].

This fact renders laughable such assurances as: “Clinicians who fear top down cookbooks will find the advocates of evidence based medicine joining them at the barricades.” Wow! Fighting top-down misuse of EBM at the ‘barricades’, no less…

Satire fails in the face of such thick-skinned self-aggrandizement.

Instead of being based only on epidemiology, only on mega-RCTs and their ‘meta-analysis’, only on simple, explicit and pre-determined algorithms - suddenly all kinds of evidence and expertise and preferences were to be allowed, nay encouraged; and were to be ‘integrated’ with the old fashioned RCT-based stuff. In other words, it was back to medicine as usual – but this time medicine would be controlled from above by the bosses who had been installed by EBM.

And having done something similar with Clinical Epidemiology, and now operating in the ‘through-the-looking-glass’ world of the NHS, of course they got away with it! Nobody batted an eyelid. After all, reversing definitions while retaining an identical technical terminology and an identical organizational structure is merely politics as usual. "When I use a word," Humpty Dumpty said in a rather a scornful tone, "it means just what I choose it to mean – neither more nor less."

However much its content was removed or transformed, they still continued calling it EBM. By the time an official textbook of EBM appeared [28], clinical epidemiology had been airbrushed from collective memory, and Sackett’s 1969 clinical-epidemiologist self had been declared an ‘unperson’.

Nowadays EBM means whatever the political and managerial hierarchy of the health service want it to mean for the purpose in hand. Mega-randomized trials are treated as the only valid evidence until this is challenged or the results are unwelcome, when other forms of evidence are introduced on an ad hoc basis. Clinical Epidemiology is buried and forgotten.

But a measure of success is that the NHS hierarchy who use the EBM terminology are the ones with power to decide its official meaning when deployed on each specific occasion. The ‘barricades’ have been stormed. The Zombies have taken over!


References
1. Charlton BG. Restoring the balance: Evidence-based medicine put in its place. Journal of Evaluation in Clinical Practice, 1997; 3: 87-98.
2. Charlton BG. Review of Evidence-based medicine: how to practice and teach EBM by Sackett DL, Richardson WS, Rosenberg W, Haynes RB. [Churchill Livingstone, Edinburgh, 1997]. Journal of Evaluation in Clinical Practice. 1997; 3: 169-172
3. Charlton BG, Miles A. The rise and fall of EBM. QJM, 1998; 91: 371-374.
4. Charlton BG. Clinical research methods for the new millennium. Journal of Evaluation in Clinical Practice 1999; 5: 251-263.
5. Charlton BG. Zombie science: A sinister consequence of evaluating scientific theories purely on the basis of enlightened self-interest. Medical Hypotheses 2008; 71: 327-329.
6. Charlton BG. The vital role of transcendental truth in science. Medical Hypotheses. 2009; 72: 373-6.
7. Sackett DL, Haynes RB, Tugwell P. Clinical Epidemiology: a basic science for clinical medicine. Boston: Little, Brown, 1985.
8. Horrobin DF. Scientific medicine – success or failure? In: Weatherall DJ, Ledingham JGG, Warrell DA (Eds). Oxford Textbook of Medicine, 2nd edn. Oxford: Oxford University Press, 1987.
9. Wurtman RJ, Bettiker RL. The slowing of treatment discovery, 1965-95. Nature Medicine 1995; 1: 1122-1125.
10. Le Fanu J. The rise and fall of modern medicine. Little, Brown: London, 1999.
11. Charlton BG, Andras P. Medical research funding may have over-expanded and be due for collapse. QJM. 2005; 98: 53-5.
12. Charlton BG. The future of clinical research: from megatrials towards methodological rigour and representative sampling. Journal of Evaluation in Clinical Practice, 1996; 2: 159-169.
13. Charlton BG. Should epidemiologists be pragmatists, biostatisticians or clinical scientists? Epidemiology, 1996; 7: 552-554.
14. Charlton BG. The scope and nature of epidemiology. Journal of Clinical Epidemiology, 1996; 49: 623-626.
15. Charlton BG. Epidemiology as a toolkit for clinical scientists. Epidemiology, 1997; 8: 461-3
16. Charlton BG. Mega-trials: methodological issues and implications for clinical effectiveness. Journal of the Royal College of Physicians of London, 1995; 29: 96-100.
17. Charlton BG. The uses and abuses of meta-analysis. Family Practice, 1996; 13: 397-401.
18. Charlton BG, Taylor PRA, Proctor SJ. The PACE (population-adjusted clinical epidemiology) strategy: a new approach to multi-centred clinical research. QJM, 1997; 90: 147-151.
19. Charlton BG. Fundamental deficiencies in the megatrial methodology. Current Controlled Trials in Cardiovascular Medicine. 2001; 2: 2-7.
20. Charlton BG. The new management of scientific knowledge in medicine: a change of direction with profound implications. In A Miles, JR Hampton, B Hurwitz (Eds). NICE, CHI and the NHS reforms: enabling excellence or imposing control? Aesculapius Medical Press: London, 2000. Pp. 13-31.
21. Charlton BG. Clinical governance: a quality assurance audit system for regulating clinical practice. In A Miles, AP Hill, B Hurwitz (Eds). Clinical governance and the NHS reforms: enabling excellence or imposing control? Aesculapius Medical Press: London, 2001. Pp. 73-86.
22. Daly J. Evidence-based medicine and the search for a science of clinical care. Berkeley & Los Angeles: University of California Press, 2005.
23. Charlton BG. Are you an honest scientist? Truthfulness in science should be an iron law, not a vague aspiration. Medical Hypotheses. doi:10.1016/j.mehy.2009.05.009, in the press.
24. Feinstein AR. Clinical Epidemiology: the architecture of clinical research. Philadelphia: WB Saunders, 1985.
25. Sackett DL. Clinical Epidemiology. American Journal of Epidemiology. 1969; 89: 125-8.
26. Goodman NW. Anaesthesia and evidence-based medicine. Anaesthesia. 1998; 53: 353-68.
27. Sackett DL, Rosenberg WMC, Muir Gray JA, Haynes RB, Richardson WS. Evidence-based medicine: what it is and what it isn’t. BMJ 1996; 312: 71-2.
28. Sackett DL. Richardson WS, Rosenberg W, Haynes RB. Evidence-based medicine: how to practice and teach EBM. Edinburgh: Churchill Livingstone, 1997.
29. Feinstein AR, Horwitz RI. Problems in the ‘evidence’ of ‘Evidence-Based Medicine’. American Journal of Medicine. 1997; 103: 529-35.

Thursday 25 June 2009

Animal Spirits - Akerlof & Shiller - review

*

Review by Bruce G Charlton done for Azure magazine - http://azure.org.il . (This review was commissioned, but the editor didn't like what I wrote and it was rejected.)

Animal Spirits: how human psychology drives the economy, and why it matters for global capitalism by George A Akerlof and Robert J Shiller. Princeton University Press: Princeton, NJ, USA. 2009, pp xiv, 230.

As a human psychologist by profession, I was looking forward to reading a book written by a Nobel prize-winner in economics and a Yale professor in the same field which – according to the subtitle – argued that human psychology was ‘driving’ the economy. In the event, I found this to be an appalling book in almost every respect: basically wrong, irritatingly written, arrogant, and either badly-informed or else deliberately misleading in its coverage of the aspects of human psychology relevant to economic outcomes.

The main argument of the book is that ‘rational’ models of the economy are deficient and need to be supplemented by a consideration of ‘animal spirits’ and stories. Animal spirits (the idea comes from John Maynard Keynes) are postulated to be the reason for economic growth and booms, or recessions or crashes: an excess of animal spirits leading to overconfidence and a ‘bubble’, and a deficiency of animal spirits leading to the economic stuck-ness of situations like the Great Depression in the USA.

The authors regard the rise and fall of these animal spirits as essentially irrational and unpredictable, and their solution is for governments to intervene and damp-down excessive spirits or perk-up sluggish ones. Governments are implicitly presumed, without any supporting evidence, to be able to detect and measure animal spirits, yet to be immune from their influence. Governments are also presumed to know how to manipulate these elusive animal spirits, to be able to do so, and to be willing to do so in the ‘general public interest’.

I found all this implausible in the extreme. Akerlof and Shiller will be familiar with the economic field of public choice theory, which explains why governments usually fail to behave in the impartial, long-termist and public-spirited ways that the authors hope governments will behave – yet this large body of (Nobel prizewinning) research is never mentioned, not even to deny it.

The phrase ‘animal spirits’ is repeated with annoying frequency throughout the book, as if by an infant school teacher trying to hammer a few key facts into her charges; however, all this reiteration never succeeded in making clear to me exactly what the phrase means.

The problem is that the concept of animal spirits is so poorly characterized that it cannot, even in principle, be used to answer the numerous economic questions for which it is being touted as a solution. So far as I can tell, animal spirits are not precisely defined, are not objectively detectable, and cannot be measured – except by their supposed effects.

Akerlof and Shiller talk often of the need to add animal spirits to the standard economic models, but it is hard to imagine any rigorous way in which this could be done. Perhaps they envisage some kind of circular reasoning by which animal spirits are a black-box concept used to account for the ‘unexplained variance’ in a multivariate analysis? So that, if the economy is doing better than predicted by the models, then the gap between expectations and observations could be attributed to over-confidence due to excess animal spirits, or vice versa?

There is, indeed, precedent for such intellectual sleight-of-hand in economic and psychological analysis – as when unexplained variance in wage differentials between men and women is attributed to ‘prejudice’, or when variation not associated with genetics is attributed to ‘the environment’. But I would count these as examples of bad science – to be avoided rather than emulated.
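The circularity of such residual re-labelling can be made concrete with a toy regression. This is only a sketch with made-up data; the predictor names (`investment`, `rates`) are invented for illustration and come from nowhere in the book:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 'growth' driven by two conventional predictors plus noise.
n = 200
investment = rng.normal(size=n)
rates = rng.normal(size=n)
growth = 0.5 * investment - 0.3 * rates + rng.normal(scale=0.4, size=n)

# Ordinary least-squares fit using only the conventional predictors.
X = np.column_stack([np.ones(n), investment, rates])
beta, *_ = np.linalg.lstsq(X, growth, rcond=None)
residuals = growth - X @ beta

# The circular move: re-label whatever the model fails to explain as
# 'animal spirits'. By construction this now 'explains' 100% of the
# remaining variance, whatever the data are -- which is exactly why
# it explains nothing.
animal_spirits = residuals
print(np.allclose(growth, X @ beta + animal_spirits))  # True, by construction
```

The point of the sketch is that the final identity holds for any data whatsoever, so a theory defined this way can never fail, and hence never be tested.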

Another major theme of the book is stories – indeed the book’s argument unashamedly proceeds more by anecdote than by data. According to Akerlof and Shiller, animal spirits are profoundly affected by the stories that people, groups and nations tell themselves about their past, present and future, their meanings and roles. For example, the relatively poor economic performance of African Americans and Native Americans is here substantially attributed to the stories these groups tell themselves. For Akerlof and Shiller, stories – like animal spirits – are attributed with unpredictable and wide-ranging properties; sometimes replicating like germs in an epidemic, for reasons that the authors find hard to comprehend. (In this respect, A&S’s ‘stories’ seem to have very similar properties to Richard Dawkins’ ‘memes’.)

The big problem with the A&S use of stories as explanations is that each specific story used to explain each specific economic phenomenon is unique. So, instead of a testable scientific theory, we are left with a multitude of one-off hypotheses. The causal model that a specific story asserts is in practice untestable – each different situation would presumably be the result of a different story. For example, if each boom or bust has its own story, then how can we know whether any particular story is correct?

And who decides the motivations behind a specific story? Akerlof and Shiller suggest that the Mexican president Lopez Portillo created a bogus economic boom because he ‘lived the story’ of Mexico becoming wealthy due to oil, and the Lopez story led to excessive animal spirits with 100 percent annual inflation, unemployment, corruption and violence. Yet the most obvious explanation is that Lopez ‘lived’ the story because it served his interests, providing him with several years of wealth, status and power.

However, this story is unusual in blaming the government for excessive animal spirits, corruption and short-termism. Since Akerlof and Shiller are advocating a large expansion in the state’s role in the US economy, they usually focus on irrational animal spirits in the marketplace. This is rhetorically understandable, since A&S advocate that the state should regulate markets to moderate the unpredictable and excessive oscillations of animal spirits – and this only makes sense if the government is less prone to animal spirits than the market.

But why should the government be immune to irrational animal spirits, corruption and selfish short-termism? Why should not the government instead exploit variations in the economy for its own purposes and to benefit the special interest groups who support it – surely that is what we observe on a daily basis, rather than the A&S utopian ideal of state economic regulation based on disinterested wisdom and compassion?

I personally am persuaded that governments were the main culprits behind the current credit crunch. If governments had allowed the international housing bubble to collapse several years ago as soon as the bubble was identified (e.g. in Our global house price index; The Economist. June 3 2004), or if necessary government had taken steps to prick the inflationary bubble, then we would have had a much smaller and manageable recession than the disaster that eventually came after several years more ‘leverage’. But instead – fearing a recession under their administration – all the important governments pulled-out-the-stops to sustain housing price inflation. Even after the crunch, US and UK governments have tried (unsuccessfully) to reinflate the housing bubble – amplifying the damage still further.

The big problem behind this type of behaviour is surely one of governance. Governments probably know what they ‘ought’ to do to benefit the economy, but they seldom do it; and instead they often do things which they know will very likely damage the economy. Economist Milton Friedman reported that in private conversation US President Richard Nixon admitted that he knew exactly what kind of havoc would be wrought by his policy of controlling gasoline prices (shortages, vast queues, gas station closures etc.) – but Nixon went ahead anyway for reasons of short-term political expedience.

Democratic governments seem to find it almost impossible to enact policies that will probably benefit most of the people over the long term, when those policies also impose significant costs in the immediate short term. To allow the housing bubble spontaneously to ‘pop’ in 2004, or to stick a pin in it if it didn’t burst, would undoubtedly have been wise; but governments seldom behave wisely when doing so imposes immediate costs. The problem is made worse when such ‘wise’ policies harm the specific interest groups upon whose votes and funding governments depend.

The reason why politicians ignore the long term was given by Keynes when he quipped that ‘in the long run, we are all dead’. Akerlof and Shiller use other Keynesian ideas to forcefully advocate increasing the state’s regulatory role in the economy; but in the absence of any solution to these fundamental ‘public choice’ problems, this will surely do far more harm than good, because politicians live by Keynes’ principle of concentrating on the short term.

In sum, Akerlof and Shiller have a one-eyed focus on the problems of markets and their tendency towards irrationality, corruption and short-termism; while simply ignoring the mirror-image problems of governments. But while market competition and cycles of ‘creative destruction’ put some kind of limit on market failures – the democratic process exerts a far weaker and much slower discipline on the failures of government. And incumbent governments have far too many possibilities of ‘buying votes’ by creating dependent client groups (as is happening with terrifying rapidity in the USA at present). If markets are bad, and they often are; then current forms of democratic government are even worse.

But for me the major fault of this book is that it is advocating an un-scientific approach, consisting of a collection of ad hoc stories to explain various phenomena, held together by the large-scale, vague and probably circular concept of animal spirits. For me this looks like a retreat from science into dogmatic assertion – admittedly assertion backed-up by the undoubted intellectual brilliance of the authors – but by little more than this. In particular, this book which purports to be about how human psychology drives the economy actually has little or nothing to do with what I would recognize as the scientific field of human psychology.

The best-validated concept in psychology is general intelligence, usually known as IQ, which is highly correlated with standardized test results such as the American SATs, with reading comprehension, and with many other cognitive attributes. IQ has numerous economic implications, since for both individuals and groups IQ is predictive of salary and occupation. But Akerlof and Shiller ignore the role of IQ.

For example, they have a chapter on the question of ‘Why is there special poverty among minorities?’ in which they generate a unique ad hoc story to explain the phenomenon. Yet there is essentially no mysterious ‘special poverty’ among minorities, because observed economic (and other behavioural) differentials are substantially explained by the results of standardized testing (or estimated general intelligence). US minorities that perform better than average on standardized testing (e.g. East Asians, Ashkenazi Jews, Brahmin caste Indians) also have above-average economic performance; and vice versa.

The scientific basis of Animal Spirits is therefore weak, and in this respect it is much inferior to another new book on human psychology and the economy: Geoffrey Miller’s Spent: sex, evolution and consumer behaviour – which is chock full of up-to-date psychological references and brilliant insights from one of the greatest of living human evolutionary theorists.

Saturday 23 May 2009

Disadvantages of high IQ

*

Having a high IQ is not always good news

Mensa Magazine June 2009 pp 34-5

Bruce G Charlton

*

There are so many advantages to having a high IQ that the disadvantages are sometimes neglected – and I don’t mean just short-sightedness, which is commoner among the highly intelligent. It really is true that people who wear glasses tend to be smarter!

High IQ is, mostly, good for you

First it is worth emphasizing that high IQ is mostly very good for you.
This has been known since Lewis Terman’s 1920s follow-up study of Californian high IQ children revealed that they were not just cleverer but also taller, healthier and more athletic than average; and mostly grew-up to become wealthy and successful.

Professor Ian Deary of Edinburgh University has confirmed that both health and life-expectancy improve with increasing IQ. Remarkably, a single childhood IQ test done on one morning in Scotland in 1932 made statistically significant predictions about when people would die many decades later.
And other studies have shown that higher IQ people tend to be less violent, so smarter people usually make less-troublesome neighbours.

Indeed, Geoffrey Miller has put forward the idea that IQ is a measure of biological fitness. Since it takes about half of our genes to make and operate the brain, most damaging genetic mutations will show-up in reduced intelligence. So it would have made sense for our ancestors to choose their mates on the basis of intelligence, because a good brain implies good genes.

Sidis and the problems of ultra-high IQ

However, high IQ is not always beneficial. Terman’s study of the highest IQ group among his cohort revealed that more than one third grew up to be ‘maladjusted’ in some way: for example having significant problems of anxiety, depression, personality disorder or experience of ‘nervous breakdowns’.

This applied to William James Sidis (1898-1944), who is often considered to have had the highest-ever IQ (about 250-300). Sidis was a child prodigy, famous throughout the USA for having enrolled at Harvard aged 11 and graduated at 16. Yet he was certainly ‘maladjusted’, and had a chaotic, troubled and short life. Indeed, Sidis was widely considered to have been a failure as an adult – although this failure has been exaggerated, since it turns out that Sidis published a number of interesting books and articles anonymously.

In fact, there seems to be a consensus among psychometricians (and among the possessors of ultra-high IQ themselves) that - while an IQ of about 120-150 is mostly advantageous - extremely high IQ levels above this may prove to be as often of a curse as a benefit from the perspective of leading a happy and fulfilling life.

On the one hand, the ranks of genius are often recruited from amongst the more creative and stable of ultra-high IQ people; but on the other hand there is also a high proportion of chronically-disaffected ultra-high IQ people who have been termed ‘The Outsiders’ in a famous essay of that title by Grady M Towers (www.prometheussociety.org/articles/Outsiders.html).

Socialism, atheism and low-fertility

Sidis himself demonstrated, in exaggerated form, three traits which I put forward as being aspects of high IQ which are potentially disadvantageous: socialism, atheism and low-fertility.

1. Socialism

Higher IQ is probably associated with socialism via the personality trait called Openness-to-experience, which is modestly but significantly correlated with IQ. (To be more exact, left wing political views and voting patterns are characteristic of the highest and lowest IQ groups – the elite and the underclass - and right wingers tend to be in the mid-range.)

Openness summarizes such attributes as imagination, aesthetic sensitivity, preference for variety and intellectual curiosity – it also (among high IQ people in Western societies) predicts left-wing political views. Sidis was an extreme socialist, who received a prison sentence for participating in a May Day parade which became a riot (in the event, he ‘served his time’ in a sanatorium).

Now, of course, not everyone would agree that socialism is wrong (indeed, Mensa members reading this are quite likely to be socialists). But if socialism is regarded as a mistaken ideology (as I personally would argue!), then it could be said that high IQ people are more likely to be politically mistaken. Either way, the point is that high IQ people do seem to have a built-in psychological and political bias.

2. Atheism

Something similar applies to atheism. Sidis was an atheist, and it has been pretty conclusively demonstrated by Richard Lynn that increasing IQ is correlated with increasing likelihood of atheism. The most famous atheists – like Richard Dawkins and Daniel Dennett – are ferociously intelligent individuals.

Again, whether atheism is a disadvantage is a matter of opinion (to put it mildly!) – but what is not merely opinion is that religious people are on average more altruistic in terms of measures such as giving to charity, giving blood, and volunteering time for good causes.

So, higher IQ may be associated with greater selfishness. In other words, smarter neighbours may be less troublesome on average, but they may also be less helpful.

3. Fertility

However the biggest and least-controversial disadvantage of high IQ is reduced fertility. Again Sidis serves as an example: as a teenager he published a vow of celibacy, and he neither married nor had children.

Pioneer intelligence researchers such as Francis Galton (1822-1911) noticed that (since the invention of contraception) increasing intelligence usually meant fewer offspring. Terman confirmed this, especially among women – so the group of the highest IQ women had only about a quarter of the number of children required for replacement fertility.

This trend has, if anything, increased in recent years as ever-more high IQ women delay reproduction in order to pursue higher education and professional careers. Indeed, more than 30 percent of women college graduates in the UK and Europe have no children at all – and more than half of women now attend college.

Since IQ is highly heritable, this low fertility implies that over time high IQ will tend to select itself out of the population.
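The logic of that claim can be sketched with a toy simulation. To be clear, the numbers below (a heritability of 0.6, and the particular fertility gradient) are illustrative assumptions chosen only to show the direction of the effect, not estimates from any study:

```python
import numpy as np

rng = np.random.default_rng(1)

H2 = 0.6          # assumed heritability of IQ (illustrative)
GENERATIONS = 5
POP = 100_000

iq = rng.normal(100, 15, POP)
for gen in range(GENERATIONS):
    # Assumed fertility gradient: relative number of offspring falls
    # gently as IQ rises (invented numbers, for illustration only).
    weights = np.clip(1.3 - 0.004 * (iq - 100), 0.1, None)
    parents = rng.choice(iq, size=POP, p=weights / weights.sum())
    # Breeder's-equation-style response: offspring retain H2 of the
    # parental deviation from the population mean, plus fresh noise
    # scaled so the overall variance stays roughly constant.
    mid = iq.mean()
    iq = mid + H2 * (parents - mid) + rng.normal(0, 15 * np.sqrt(1 - H2**2), POP)
    print(f"generation {gen + 1}: mean IQ = {iq.mean():.1f}")
```

Under these assumptions the mean drifts down by a fraction of a point per generation; the sketch shows only that heritability plus an IQ-fertility gradient is enough to produce steady selection against high IQ, not how fast any real decline would be.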

The bad news and the good news

So much for the bad news about high IQ.

The good news is that while the advantages of high IQ are built-in; the disadvantages of high IQ are mostly a matter of choice.

People can potentially change their political and religious views. For example, Sidis apparently changed from being a socialist to a libertarian; indeed, many adult conservatives went through a socialist phase during their youth (declaration of interest: this applies to me).

And religious conversions among the high IQ are not unknown (declaration of interest: this also applies to me): GK Chesterton and CS Lewis are famous examples of atheists who became the two greatest Christian apologists of the twentieth century.

Indeed, although it does not often happen, smart people can also choose to be more fertile. One example is the Mormons in the USA, whose average IQ and fertility are both above the national average, and where the wealthiest Mormons also have the biggest families. Presumably - since wealth and IQ are positively correlated - this means that for US Mormons higher IQ leads to higher fertility.

So, on the whole it remains good news to have a high IQ - although perhaps not too-high an IQ. But perhaps the high IQ community needs to take a more careful look at the question of low fertility. It may be that, under modern conditions, high intelligence is stopping people from ‘doing what comes naturally’ and having large families.

Human reproduction could be one situation where the application of intelligence may be needed to over-ride our spontaneous emotions or the prevailing societal incentives.

Or else at some point in the future, high IQ could become very rare indeed.

*

For more on IQ see:

http://iqpersonalitygenius.blogspot.co.uk/

**