Monday 7 June 2010

The bureaucratization of pain

Analgesia - pain-relief, especially in the broadest sense of relief of suffering - was for most of history the primary interventional benefit of the physician (as contrasted with the surgeon) in medicine.

Among the primary benefits of medicine, prognosis is perhaps the greatest - that is, the ability to predict the future; because prognosis entails diagnosis and an understanding of the natural history (natural progression) of disease.

Without knowledge of a patient's likely natural history, the physician would have no idea whether to do anything, or what to do.

However, through most of history, physicians were probably unable to influence the outcome of disease - at least in most instances they would diagnose, make a prognosis then try to keep the patient comfortable as events unfolded.

Keeping the patient comfortable. Relief of suffering. In other words: analgesia.

Much of medicine remains essentially analgesic (in this broad sense), even now.

But relief of actual pain is the most vital analgesic function: because at a certain level of severity and duration, pain trumps everything else.

So, perhaps the most precious of all medical interventions are those which relieve pain - not just the general pain-killers (of which the opiates are the most powerful) but the effective treatments of specific forms of pain - as when radiotherapy treats the pain of cancer, or when GTN treats the pain of angina, or steroids prevent relentless itching from eczema and so on.

The *irony* of modern medicine is that while it has unprecedented knowledge of analgesia, of the relief of pain and suffering - these are (in general) available only via prescription.

So, someone who is suffering pain and seeks relief - even when effective analgesia is indeed in principle available - must *first* convince a physician of the necessity of providing them with relief.

If a physician does not believe the pain, or does not care about the pain, or has some other agenda - then the patient must continue to suffer. They do not have direct access to pain relief - only indirect access via the permission of a physician.

Pain and suffering are subjective, and it is much easier to bear another person's pain and suffering than it is actually to bear pain and suffering oneself.

Yet we have in place a system which means that everyone who suffers pain must first convince a professional before they can obtain relief from that pain.

This situation was bearable so long as there was a choice of independent physicians. If one physician denied analgesia for pain, perhaps another would agree?

The inestimable benefits of analgesia have been professionalized, and that means they have nowadays been bureaucratized since professionals now operate within increasingly rigid, pervasive and intrusive bureaucracies.

So the inestimable benefits of analgesia are *now* available to those in pain only if they fulfill whatever bureaucratic requisites happen to be in place.

If the bureaucracy chooses (for whatever reason - saving money, punishing the 'undeserving', whatever) that a person does not fulfill the requirements for receiving analgesia, then they will not get pain relief.

That is the situation, at the present moment.

Why do we tolerate this situation? Why do we not demand direct access to analgesia? Why do we risk being denied analgesia by managerial diktat?

Because bureaucracy does not even need to acknowledge pain - it can legislate pain and suffering out of existence. It creates guidelines which define what counts as significant pain, what or who gets relief, and what or who gets left to suffer.

It is so easy to deny or to bear *other people's* pain and suffering, to advise patience, to devise long-drawn out consultations, evaluations and procedures.

Bearing pain ourselves is another matter altogether. Pain of one's own is an altogether more *urgent* business. But by the time we find ourselves in that situation, it is too late for wrangling over prescriptions, guidelines, and procedures.

Sunday 6 June 2010

Ketoconazole shampoo - a totally effective anti-dandruff treatment

The one thing that modern medicine hates and suppresses above all else, is a cheap and effective solution to a common problem.

There are scores, indeed hundreds or maybe thousands, of expensive, heavily advertized and *ineffective* 'anti-dandruff' shampoos on sale in supermarkets and pharmacies.

They are expensive non-solutions to the common problem of dandruff - and they are Big Business.

But in my experience ketoconazole shampoo is *totally* effective at stopping dandruff, and an application every week or two will keep it away.

This is because dandruff (and seborrhoeic dermatitis - which is severe dandruff) is caused by a fungus - the Pityrosporum yeast. The ‘cradle cap’ of babies is the same thing too, and is also cured by ketoconazole.

The cause and cure were discovered by one of my teachers at medical school - Sam Shuster. (e.g. Shuster S. The aetiology of dandruff and mode of action of therapeutic agents. Br J Dermatol 1984; 111: 235-242; Ford GP, Farr PM, Ive FA, Shuster S. The response of seborrhoeic dermatitis to ketoconazole. Br J Dermatol 1984; 111: 603-607.)

In other words, the cause and cure of dandruff have been known for 25 years.

SO - here we have what seems to be a completely effective solution to a problem which affects most adults at some point in their lives - yet the effective treatment is all but secret; presumably because if it were better known then the shelves would be cleared of the scores of ineffective, expensive and heavily advertized rival products.

My point?

In modern medicine, in modern life, it is possible for there to be completely effective and cheap and widely 'available' solutions to common problems, and for these to be virtually unknown.

And it is also notable that discovering the cause and cure of a common disease is not given much credit in medicine nowadays – it made the discoverer neither rich nor famous.

But at the same time there are thousands of rich and famous ‘medical researchers’ who have discovered nothing and cured nothing. Essentially they are rich and famous for ‘doing research’ – especially when that research involves spending large amounts of money.

When ‘medical researchers’ are rewarded for spending large amounts of money, and ignored for discovering the causes and cures of disease, what you end up with is ‘medical research’ that spends large amounts of money but does not discover anything.

And that is precisely what we have nowadays.

Also we end up with ‘medical researchers’ who do not even *try* to discover the causes and cures of disease.

And that is precisely what we have nowadays.

Saturday 5 June 2010

Driclor - a totally effective anti-perspirant/ deodorant

The one thing that modern culture hates and suppresses above all else, is a cheap and effective solution to a common problem.

There are scores, indeed hundreds or maybe thousands, of expensive, heavily advertized and *ineffective* deodorants and antiperspirants on sale in supermarkets and pharmacies. They neither stop odour, nor stop sweat.

They are expensive non-solutions to the common problem of smelly under-arm sweat - and they are Big Business.

But aluminium chloride solution (which I buy in the brand called Driclor) is *totally* effective at preventing both perspiration and odour, and a single application lasts for three or four days.

The product is very reasonably priced, since a big bottle is about 6-8 US dollars and lasts me for several months.

YET - although it is sold in large pharmacies, it is not usually displayed on the shelves but needs specifically to be asked-for.

SO - here we have what seems to be a completely effective solution to a problem which affects most adults (insofar as most adults use some kind of underarm antiperspirant deodorant) - yet it is not advertized and is all-but hidden.

Presumably because if it were better known then the shelves would be cleared of the scores of ineffective, expensive and heavily advertized rival products. And probably Driclor itself would not survive this process, since the active product (aluminium chloride) is not patent-protected, and the company would no doubt be driven out of business by excess competition.

My point?

In modern medicine, in modern life, it is possible for there to be completely effective and cheap and widely 'available' solutions to common problems, and for these to be virtually unknown.

Friday 4 June 2010

Benzoyl peroxide - an effective treatment for shaving rash - but it bleaches!

In the spirit of rediscovered self-experimentation trail-blazed by Seth Roberts -

http://www.blog.sethroberts.net/2010/06/04/a-great-change-is-coming-part-1-of-2/

- I thought I would share one of my own discoveries:

i.e. that benzoyl peroxide (BP) cream (which is marketed as a treatment for acne) can treat shaving rash.

http://en.wikipedia.org/wiki/Benzoyl_peroxide

By shaving rash, I mean the unsightly spots which come after shaving, especially on the neck. The spots seem to be due to trauma of the beard hair follicles, and sometimes to in-growing beard hairs.

Anyway, an n=1 on-off crossover self-trial over several weeks established that this kind of rash was treatable, indeed curable, with BP.
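
For anyone curious how such an n=1 on-off crossover can be organized, here is a minimal sketch of the bookkeeping (in Python; the daily severity scores and block lengths below are invented purely for illustration, not my actual data): record a daily rash-severity score, alternate blocks with and without the treatment, and compare.

# Minimal sketch of an n=1 on-off crossover self-trial.
# All numbers are invented for illustration, not real data.

# Daily rash-severity scores (0 = clear skin, 10 = severe rash),
# recorded across alternating one-week blocks: off, on, off, on.
blocks = [
    ("off", [6, 7, 6, 5, 7, 6, 6]),
    ("on",  [5, 4, 3, 2, 2, 1, 1]),
    ("off", [2, 3, 4, 5, 6, 6, 7]),
    ("on",  [5, 4, 3, 2, 1, 1, 1]),
]

def mean(xs):
    return sum(xs) / len(xs)

on_scores = [s for label, scores in blocks if label == "on" for s in scores]
off_scores = [s for label, scores in blocks if label == "off" for s in scores]

print(f"mean severity during ON blocks:  {mean(on_scores):.1f}")
print(f"mean severity during OFF blocks: {mean(off_scores):.1f}")

# If severity reliably falls in every 'on' block and rises again in every
# 'off' block, that within-person reversibility is the evidence the
# crossover design is looking for.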

Benzoyl peroxide is a peeling agent, so it is not particularly surprising that it works to treat this kind of problem.

HOWEVER, BP is also a bleaching agent (not surprising for a peroxide!); and after a while I linked its usage to the bleached patches that appeared on towels and shirt collars, ruining them.

So, in the end I could not use it to treat the beard rash.

But it works.

Anti-denialism - The return of Lysenkoism?

Real science was built on a search for truth that was cooperative and competitive at the same time. Popper emphasized the mixture of hypothesis and testing, conjecture and refutation, testing for consistency and predictive ability, and discarding of error.

Bronowski emphasized the need for tolerance of honest error, and that contributors to the scientific process should be respected even after their views have been refuted. Otherwise, scientists would not risk being accused/ convicted of being wrong and so would never challenge consensus; and consensus would never yield to refutation. (Which is precisely the situation in mainstream medical research.)

So we still respect Isaac Newton despite him having been superseded by Einstein; and Newton is not usually mocked or derided for having not been correct for all time.

But this balance has been difficult for many scientists, and even more difficult for those outside of science. Lamarck ranks all-time third in importance as a biologist according to Charles Murray's method in Human Accomplishment - behind Darwin and Aristotle - but Lamarck’s views on evolution are often (it seems to me) treated as a joke.

Of course, ignorant disrespect is part of growing-up. But although it has to be tolerated in teenagers, it is not an admirable trait; being a result of pride and aggression fuelled by insecurity.

Adolescents love to hate, and there are an awful lot of adolescents interested in science and working in and around science, in its journalism and as pundits - many of them adolescents of advanced middle age.

Adolescents also form gangs, and gangs assert their status by seeking and beating-up victims (the victims of course ‘deserve’ this – for being who they are).

There is an awful lot of ignorant disrespect in science nowadays, and an awful lot of gangsterism. Real science used to be all about individuals – it really did! – but now science is all about gangs.

The reason for so much ignorant disrespect in science is mostly that there is so much ignorance, due to the abundance of low quality people and their micro-specialized perspective. Such have no concept of excellence higher than the standard, prevailing technical practices of their micro-discipline; anyone who does not adhere to these prevailing practices is regarded as either incompetent or wicked - hence despicable hence deserving of punishment. They deserve ‘what is coming to them’ – in gang parlance.

There is always disagreement in science, but the basis of real science was that scientific disagreement was conducted by scientific means. What is *not* acceptable to real science is that scientific disputes should be settled by non-scientific methods.

Scientists must be allowed to make mistakes, to be wrong, or science cannot function.

This is necessary because in the first place they may not really have made a mistake and they may be right (or partly right) – but this may not be apparent for a while. Mainstream science may be in error, but this situation may be recoverable if dissent is tolerated.

However, in a system of real science, mistakes are tolerated only when they are *honest* mistakes – lying and deceptions are absolutely forbidden in real science; and will lead to exclusion from the community of real scientists. And incompetent errors are simply a waste of everybody’s time. So dishonesty and incompetence are rightly sanctioned by leading to a scientist’s work being disregarded by others in the field as probably unreliable or unsound.

This is why the dishonest thugs of modern pseudo-science always try to portray dissent and disagreement as a result of incompetence or dishonesty.

The gangsters of pseudo-science cannot acknowledge even the *possibility* of an honest and competent scientist reaching a different conclusion from the one they themselves support. This is because the gangsters are transparently looking for an excuse to attack and to coerce; after all, gangsters need to make public displays of their power, or else they would soon lose it.

Gang-leaders need to beat-up dissenters, and they need people to know that this is happening, and they need these dissenters to be portrayed as deserving victims of attack.

Consequently the whole concept of honest and competent disagreement has been banished from modern bureaucratic pseudo-science.

In the world of bureaucratic pseudo-science there are only two kinds of view – the correct view which is defined and enforced by the peer review cartel; and wrong views which are held by those either too stupid to understand, or those corrupted by evil.

Lysenko was a scientific gangster in the Soviet Union – Stalin’s henchman - http://en.wikipedia.org/wiki/Trofim_Lysenko. His scientific sin was to suppress scientific opposition using non-scientific means; up to and including jail and death for his opponents. The justification for Lysenko’s use of coercion to suppress and silence dissent was that the opponents’ opposition was harmful to people, caused millions of death etc.

Modern science is just a couple of small steps away from full-blown Lysenkoism. Scientific opposition is suppressed using non-scientific means ranging from defunding, exclusion of publications and other blocks on dissemination, through public humiliation, to sacking and legal threats. In many areas of science gangsterism is rife, with intimidation and threats and the administration of media ‘beatings’.

What does it mean? Many would regard the situation as regrettable – but it is much worse than regrettable. It is conclusive evidence of non-science.

A field in which the use of non-scientific modes of argument are rife is simply *not a science*. Not a science at all. It does not work. Gangsterism is incompatible with science.

For example, ‘climate science’ is not a science *at all*; as a field it does not function as a real science, it uses intimidation and coercion as a matter of routine. Therefore nothing in it can be trusted, the good and bad cannot be discriminated.

To clarify - because climate science does not, in general terms, settle disputes using scientific methods but by extra-scientific methods, it is not a real science; it is actually whatever the main influence on its content happens to be: politics, mostly.

The main innovation of ‘climate science’ has been to legitimate the mass use of the hate-term ‘denialism’ to signal who ‘deserves’ a punishment-beating from the gang.

Let us call the phenomenon of labeling and beating up ‘denialists’ by the name of ‘anti-denialism’.

Anti-denialism is no accident, nor is it eradicable without massive reform because anti-denialism is functionally necessary for the cohesion of modern bureaucratic pseudo-science. Without victims to gang-up on, the gangs would fall apart into warring sects. They would fight each other because these gangs are composed of ignorant, egotistical, power-worshipping adolescents. What best holds such people together is pure hatred, and pure hatred needs victims.

With the phenomenon of anti-denialism rife in mainstream discourse, we are just a couple of small steps away from full blown Lysenkoism. We already have systematic intimidation of scientific opposition at every level short of the physical. But I have seen demands from the gangsters of science that the sanctions against denialists be escalated. Destroying livelihoods is no longer enough. Soon, perhaps very soon, unless the tide turns, we may be seeing scientists jailed for their views.

Since honest and competent dissent is not recognized, anyone who disagrees with the peer review cartel is either labeled as too stupid to understand or as competent but wicked. It is the competent dissenters who are most at risk under Lysenkoism, since disagreement with the mainstream coming from a competent scientist marks them out as evil and deserving of punishment.

Anti-denialism needs high profile victims. Lysenkoism needed to punish top scientists like Vavilov, who died following three years in prison http://en.wikipedia.org/wiki/Nikolai_Vavilov.

On present trends we may expect to see prominent denialists and dissenters jailed for being ‘wrong’ (as judged by peer review), jailed for the public good, jailed ‘to save ‘millions of lives’ – but in reality jailed for opposition to the ruling gangsters of bureaucratic pseudo-science, and because anti-denialists absolutely require a continuous supply of victims to beat-up on.

Thursday 3 June 2010

How much formal specialist training does a scientist need?

The answer is - very little.

A highly intelligent and motivated individual can get 'up to speed' in a subject, and begin work in it, in a matter of weeks.

Of course it takes much longer than this to make a significant contribution to a field - often ten years or so of persistence - but this is why it is very important to get started young working on your problem. And starting young means skipping the hyper-extended specialist preliminary 'training' which is the norm nowadays.

This is obvious from the fact that early scientists had never had any formal specialist training because it did not exist.

Further evidence comes from the example of the many physicists and mathematicians who changed field and made major contributions to, for example, biology. They were able to do so because physicists and mathematicians are the most intelligent people (i.e. the group with the highest average general intelligence or IQ) - which means they can learn and remember new material extremely rapidly compared with most of us.

Of course, modern academia insists (usually) on prolonged specialist training. But this is mostly due to careerism and restrictive practices. Major work is continually being done in biology and medicine by people without this training, indeed many of the best ideas come from outside of academia, and often from clever and motivated amateurs (such as investigative journalists). Much of this is published outside the professional literature – in books, not papers.

Intelligence is mostly inborn (i.e. the ability to reason abstractly and systematically cannot be inculcated but is - mostly - either there, or not there); and the extra discipline and baseline knowledge which is provided by education is mostly acquired during development - before college age.

So, whatever formal specialist training a scientist needs before tackling his problem ought to be done at school, in the teenage years - and *not* done at college, in the twenties.

Wednesday 2 June 2010

The culture of analgesia

Analgesia = pain-killing.

This is our culture, now. The culture of analgesia - in which relief of pain and suffering is primary - and not a means to an end.

In medicine, the relief of pain is, or ought to be, of central importance - but on examination it is not primary. The primary goal of medicine is to preserve life and functionality - and this only makes sense where life and functionality themselves have an implicit goal.

(This implicit goal of life is not a part of the concern of medicine as a specialty - but medicine as a human activity only makes sense if it can be assumed that people have something significant to do with their life and functionality. In the past this could be taken for granted - but not any more.)

Medicine is not and never has been a matter of 'first of all, do no harm'; because harm is always a risk in trying to preserve life or functionality, or in relieving pain and suffering.

The primary imperative for modern secular democratic liberalism is the avoidance of suffering. Lacking any rationale or context for this, the definition of suffering has expanded without apparent limit.

In particular, hedonism - pleasure-seeking - is now re-labeled as analgesia; since there is an element of suffering involved in *not* being able to indulge one's desires.

The suffering involved in thwarted desire or the necessity for self-restraint is now inflated to a cosmic injustice, so encompassing as to trump almost anything and everything.

Any of the petty humiliations of everyday existence (being forbidden, sneered at, patronized, made to feel inferior, rejected, ignored) are amplified infinitely, and can indeed become the focus of whole life views.

To suffer *offense* is automatically to require, to deserve, analgesia - reparations, compensations, special consideration.

Sensitivity replaces morality.

In the culture of analgesia, life, in both its strategy and its moment-by-moment existence, becomes a serial seeking after pain relief. A search for losing oneself, for forgetting, for tranquillization, or at least for distraction.

Ah! - to live a life of serene analgesia varied by serial pleasurable distractions…

(then to die, to sleep and *not* to dream)

- this is, in a sense, the ultimate goal of modernity.

Tuesday 1 June 2010

Worrying thoughts about specialization and growth

Could it be that the differentiation of Church and State, the development of universities (secular in their essence, even when staffed by religious), the process of specialization itself – that all these are actually first steps on an _inevitable_ path to where we are now (i.e. on the verge of a self-inflicted - almost self-willed - collapse of 'the West')?

Universities can be seen as by now vastly inflated institutions, not just parasitic but actively destructive in many ways. On the other hand Universities used to perform some functions which were essential to those aspects of modernity which we most admire: philosophy in the medieval university, classics in the next period, science (Wissenschaft) in the 19th century German universities and so on.

But, in retrospect, all these golden ages of scholarship and research were more like brief transitional periods en route to something much worse.

For instance, the flowering of science (as a specialized, largely autonomous, social system) for the couple of centuries up until the mid twentieth century was a period of constant institutional change until science became - as now - *essentially* a branch of the state bureaucracy.

It seems that useful/ efficient specialization (including separation from State and Religion) leads to over specialization (or micro-specialization) which is increasingly less efficient, then less effective - and all this seems to lead back to re-absorption of science into the State (or into Religion, in principle).

For instance the London Royal Society became more and more autonomous in its conduct until maybe the mid-twentieth century, then became progressively reabsorbed back into the State until now the Royal Society gets about ¾ of its funding directly from the UK parliament, and the organization functions like a department of the UK civil administration.

If we go back and back to find the point at which this *apparently* unstoppable yet self-undermining process began in the West - I think it may lead to the difference between (say) medieval Orthodox Byzantium and Catholic Western Europe.

To scholasticism, perhaps? That was when the divergence became first apparent - when an academic, pre-scientific discipline (i.e. philosophy) became increasingly autonomous from Religion (in the West the Religious hierarchy already was separate from the State hierarchy - although sometimes the two cooperated closely. In the East, Church and State formed an intermingled, single hierarchy).

Indeed, my impression is that Thomistic scholasticism may itself be self-undermining – and that this can perhaps be seen in the history of the Roman Catholic Church and even of some specific scholastic scholars – for example Jacques Maritain or Thomas Merton? (They began as traditionalists and ended as modernizers.)

It seems that institutions can grasp the essence of Thomism, and yet the process of understanding does not at all prevent – indeed it perhaps encourages – the continuation of the process until it has destroyed the system itself. As Peter Abelard found, once the process of sceptical analysis has begun, there is no clear point at which it can be seen necessary to stop – and the only point when it is known for sure that things have gone too far is when the system which supported the process has fallen to pieces, and by then it is too late.

Something similar may apply to science. The process of science creates a social system which first really reinvents itself due to real discoveries, then later makes pseudo-discoveries in order really to reinvent itself, then finally makes pseudo-discoveries in order to pseudo-reinvent itself. At which point the full circle has been turned, and all that remains is to drop the pretence.

Of course, differentiation of society led initially to greater strength, based (probably) on frequent breakthroughs in science and technology which drove increased economic productivity and military capability. But as differentiation proceeded to micro- and destructive levels, the real breakthroughs dried up and were replaced with hype and spin, then later pure lies. Real economic growth was replaced with inflation and borrowing. Progress was replaced with propaganda.

We have already observed the whole story in the atheist and supposedly science-worshipping Soviet Union – which in Russia is maybe now returning to the more stable and robust pattern of Eastern Christian (Byzantine) theocracy - and the pattern is merely being repeated in the capitalist and democratic West.

(With the difference that the secular West will probably - in the medium term - return after the collapse of modernity to segmentary, chaotic tribalism, rather than large-scale cohesive theocracy.)

In sum, perhaps the process of social differentiation is unstoppable hence inevitably self-destroying? The increasing rate of scientific and technological breakthroughs from (say) 1700 to 1900 looked like progress in the conduct of human affairs until it wasn’t. The faster social system growth and differentiation proceeds, the faster it destroys itself.

Rapid growth and differentiation is therefore, in fact, intrinsically parasitic – whether or not we can actually detect the parasitism. At any rate, that's what it looks like to me.

Sunday 30 May 2010

Forbidden topics

One intractable debating point in science is the question of whether some things should *not* be researched.

And indeed, there can be very few people who don't have some topic or another which they would prefer not to be researched - although the specific topic varies widely, and indeed diverges - and the method by which research should be discouraged or prevented is also variable.

So that the people who believe that research should not be done on human embryos or human stem cells are likely to be different from the people who believe that research should not be done on chemical weapons or genetically-modified crops.

The reason for prohibited topics of research is that science is not, ever, the primary value system; and therefore science is inevitably subordinated to whatever *is* the primary value system. For most of human history the primary value system would have been religion - nowadays it might be politics.

And by 'value system' is not meant (as some people imagine) merely a moral or ethical system - but the whole system of 'goods', the positive versus negative evaluations - or what may be called transcendental values - virtue is one, beauty another, truth another and wholeness or unity is yet another possible 'good'.

So presumably, what gets done in science ought to reflect in some specific way, that which is good in some general way.

But does it? Of course not!

The whole motivational system of science is broken: there is no overall moral or ethical system except that the consensus of the powerful scientists is always right. In science, the consensus that matters is that of the peer review cartel of dominant scientists – since peer review controls scientific evaluation.

Scientists’ choice of topic and methods of work now passively reflect consensus; and the consensus is always changing, consensus has no direction, but still consensus is always right.

In other words, there is no concept of the good in modern science - since consensus is simply a word for an arbitrary and shifting outcome of social interplay among those with power to enforce their views. In other words, consensus is peer review, and peer review is consensus.

(Those who contrast science with consensus are therefore mistaken when it comes to modern science – although of course real science is *in a sense* the opposite of consensus.)

There is no concept of the good as a cohesive system located anywhere in the motivational system of science. There is therefore no responsibility.

Instead of ‘the good’ – an eternal ideal - there is an ethic of obedience to whatever is the outcome of undirected social interplay – even (or especially) when this outcome is arbitrary and ever-changing.

The advocacy of peer review as a gold standard is precisely this: that scientists must submit - swiftly, willingly, happily! - to the outcome of peer review, and that the outcome of peer review is intrinsically valid.

Even though peer review is unpredictable, changeable, and lacks any rationale: all the more *vital* that submission be swift, willing and cheerful!

***

The main method of enforcing scientific prohibition is 'defunding' - which can be done covertly by peer review on the pretext of 'scientific standards'. Also there is the failure to award jobs, promotions, publications, memberships and prizes to those whose topic and/ or views transgress the accepted boundaries.

After all, any job application, any paper, any grant request can be rejected on quasi-plausible grounds.

Many of the greatest scientists have been rejected from jobs, many of the greatest ideas in science have been rejected by peer review, many of the major breakthroughs in science were turned down for grants - all on the grounds that they 'weren't good enough'.

Modern scientific evaluations equate lack of funding with illegitimacy – so defunded science is not merely ignorable science but *bad science*. That which is rejected by peer review is not merely unfashionable or mistaken science but *bad science*. Work done by people outside the peer review cartel is intrinsically *bad science* whenever and to whatever extent it conflicts with the intrinsically authoritative views of the power brokers.

It's an easy sophomoric trick of the half-educated. There has never been a paper in the history of medicine which could not be torn to shreds by a zealot with a Masters degree in epidemiology. The evaluation procedures are over-inclusively negative. The evaluation procedures always imply rejection. The question is merely when to apply the evaluation procedures, and when quietly to set them aside…

Sheltering coercive consensus behind the defense of ‘standards’, mainstream power-brokers and their apologists for prohibition can continue to advocate libertarian ideals.

Craven conformism masquerades as idealism.

And the potentially subverting self-knowledge of moral bankruptcy is deferred for yet another day…

Saturday 29 May 2010

Mainstream medical research: trivial science and useless medicine

Scientists generally assert their right to study anything they want to study ('blue skies') as contrasted with the subject matter or direction of research being dictated by government, corporations and other organizations.

Yet over the past few decades scientists have tamely allowed the subject matter and direction of their research to be dictated by the funders of science and the peer review cartel who apply funding criteria.

Worse than this, scientists have colluded with evaluation criteria which measure inputs rather than outputs, so that for career purposes externally-funded science is seen as intrinsically superior to unfunded or self-funded science. Well funded pseudo-science is privileged infinitely above unfunded real science (because unfunded science is discounted altogether, or sometimes negatively - as evidence of un-seriousness or disloyalty or misplaced effort).

So we have ended up with the worst of both worlds: the uselessness of arbitrary subject matter and the dullness of merely applied science.

My prime exhibit is mainstream medical research which is useless in the sense that it (almost) never discovers anything of medical/ clinical value (i.e. nothing useful in diagnosing or treating illnesses) - but it is plodding, incremental and organized on an industrial scale just like the most mundane industrial or military R&D.

Mainstream medical science is, in effect, R&D applied to useless and trivial (and dishonest) pseudo-medical subjects.

But IF medical research was motivated by the genuine interests of either the scientists who do it (i.e. amateur science) or by the wants and needs of patients (i.e. clinical science - usually done by clinicians) - then it might stand some chance of making (either) significant jumps in progress of science or of medicine (or both).

As it is, mainstream medical research is at the same time trivial science and useless medicine.

Friday 28 May 2010

Motivation - the key to science?

Wanting to know the truth does not mean that you will find it; but if scientists are not even *trying* to discover the truth, then the truth will not be discovered.

That is the current situation.

'Truth' can be defined as 'underlying reality’. Science is not the only way of discovering truth (for example, philosophy is also about discovering truth - science being in its origin a sub-class of philosophy) - but unless an activity is trying to discover underlying reality, then it certainly cannot be science.

But what motivates someone to want to discover the truth about something?

The great scientists are all very strongly motivated to ‘want to know’ – and this drives them to put in great efforts, and keeps them at their task for decades, in many instances. Why they should be interested in one thing rather than another remains a mystery – but what is clear is that this interest cannot be dictated but arises from within.

Crick commented that you should research that which you gossip about, Watson commented that you should avoid subjects which bore you - http://medicalhypotheses.blogspot.com/2007/12/gossip-test-boredom-principle.html - their point being that science is so difficult, that when motivation is deficient then problems will not get solved. Motivation needs all the help it can get.

Seth Roberts, in his superb new article (which happened to be the last paper I accepted for publication in Medical Hypotheses before I was sacked) makes the important point that one motivation to discover something useful in medicine is when you yourself suffer from a problem -

http://sethroberts.net/articles/2010%20The%20unreasonable%20effectiveness%20of%20my%20self-experimentation.pdf

Seth does self-experimentation on problems which he suffers - such as early morning awakening, or putting on too much weight (he is most famous for the Shangri-La diet). He has made several probable breakthroughs working alone and over a relatively short period; and one of the reasons is probably that he really wanted answers, and was not satisfied with answers unless they really made a significant difference.

By contrast, 95 percent (at least!) of professional scientists are not interested in the truth but are doing science for quite other reasons to do with 'career' - things like money, status, security, sociability, lifestyle, fame, to attract women or whatever.

The assumption in modern science is that professional researchers should properly be motivated by career incentives such as appointments, pay and promotion – and not by their intrinsic interest in a problem – certainly not by having a personal stake in finding an answer – such as being a sufferer. Indeed, such factors are portrayed as introducing bias/ damaging impartiality. The modern scientist is supposed to be a docile and obedient bureaucrat – switching ‘interests’ and tasks as required by the changing (or unchanging) imperatives of funding, the fashions of research and the orders of his master.

What determines a modern scientist’s choice of topic, of problem? Essentially it is peer review – the modern scientist is supposed to do whatever work that the cartel of peer-review-dominating scientists decide he should do.

This will almost certainly involve working as a team member for one or more of the peer review cartel scientists; doing some kind of allocated micro-specialized task of no meaning or intrinsic interest – but one which contributes to the overall project being managed by the peer review cartel member. Of course the funders and grant awarders have the major role in what science gets done, but nowadays the allocation of funding has long since been captured by the peer review cartel.

Most importantly, the peer review cartel has captured the ability to define success in solving scientific problems: they simply agree that the problem has been solved! Since peer review is now regarded as the gold standard of science, if the peer review cartel announces that a problem has been solved, then that problem has been solved.

(This is euphemistically termed hype or spin.)

To what does the modern scientist aspire? He aspires to become a member of the peer review cartel. In other words, he aspires to become a bureaucrat, a manager, a ‘politician’.

Is the peer review cartel member a scientist as well? Sometimes (not always) he used-to be – but the answer is essentially: no. Because being a modern high level bureaucrat, manager or politician is incompatible with truthfulness, and dishonesty is incompatible with science.

The good news is that when real science is restored (let's be optimistic!) then its practitioners will again be motivated to discover true and useful things – because science will no longer be a career, it will be colonized almost-exclusively by those with a genuine interest in finding real world answers.

Thursday 27 May 2010

Micro-specialization and the infinite perpetuation of error

Scientific specialization is generally supposed to benefit the precision and validity of knowledge within specializations, but at the cost of these specializations becoming more narrow, and loss of integration between specializations.

In other words, as specialization proceeds, people supposedly know more and more about less and less - the benefit being presumed to be more knowledge in each domain, the cost that nobody has a general understanding.

However, I think the supposed benefit is actually not true. People do not really know more – often they know nothing at all or everything they know is wrong because undercut by fundamental errors.

Probably the benefits of specialization really do apply to the early stages of gross specialization such as the increase of scientific career differentiation in the early 20th century - the era when there was a division of university science degrees into Physics, Chemistry and Biology - then later a further modest subdivision of each of these into two or three.

But since the 1960s scientific specialization has now gone far beyond this point, and the process is now almost wholly disadvantageous. We are now in an era of micro-specialization, with dozens of subdivisions within sciences.

Part of this is simply the low average and peak level of ability, motivation and honesty in most branches of modern science. The number of scientists has increased by more than an order of magnitude – clearly this has an effect. Scientific training and conditions have become prolonged and dull and collectivist – deterring creative and self-motivated people. And these have happened in an era when the smartest kids tended not to gravitate to science, as they did in the early 20th century, but instead to professions such as medicine and law.

However there is a more basic and insoluble problem about micro-specialization. This is that micro-specialization is about micro-validation – which can neither detect nor correct gross errors in its basic suppositions.

In my experience, this is the case for many scientific specialties (two illustrative sketches follow this list):

1. Epidemiologists are fixated on statistical issues and cannot detect major errors in their presuppositions because they do not regard individual patient data as valid nor do they regard sciences such as physiology and pharmacology as relevant. Hence they do not understand why statistical knowledge cannot replace biological and medical knowledge, nor why the average of 20 000 crudely measured randomized trial patients is not a substitute for the knowledgeable and careful study of individual patients. Since epidemiology emerged as a separate specialty, it has made no significant contribution to medicine but has led to many errors and false emphases. (All this is compounded by the dominant left-wing political agenda of almost all epidemiologists.)

2. Climate change scientists are fixated on fitting computer models to retrospective data sets, and cannot recognize that retrofitted models have zero intrinsic predictive validity. The validity of a model comes from the prediction of future events, from consistency with other sciences relevant to the components of the model, and from consistency with independent data not included in the retrofitting. Mainstream climate change scientists fail to notice that complex computer modelling has been of very little predictive or analytic value in other areas of science (macroeconomics, for instance). They don't even have a coherent understanding of the key concept of global temperature – if they did have a coherent concept of global temperature, they would realize that it is a _straightforward_ matter to detect changes in global temperature – since with proper controls every point on the globe would experience such changes. If the proper controls are not known, however, then global temperature simply cannot be measured; in which case climate scientists should either work out the necessary controls, or else shut-up.

3. Functional brain imaging involves the truly bizarre practice of averaging synaptic events: the temporal resolution of functional imaging methods typically averages over tens to hundreds of action potentials, and the spatial resolution over tens to hundreds of millions of synapses. There may also be multiple averaging and subtraction of repeated tasks. What this all means at the end of some billions of averaged instances is anybody's guess - almost certainly it is un-interpretable (just consider what it would mean to average _any_ biological activity in this kind of fashion!). Yet this stuff is the basis for the branch of neuroscience which for three decades has been the major non-genetic branch of biological/ medical science - at the cost of who knows how many billions of pounds and man-hours. And at the end of the day, the contribution of functional brain imaging to biological science and medicine has been - roughly - none-at-all.
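
To illustrate the point in (1) about group averages versus individual patients, here is a toy simulation (in Python; the proportions and effect sizes are invented for the example): a treatment that modestly helps most patients while substantially harming a minority still shows a favourable average effect - and the average, on its own, cannot tell you which patients are which.

# Toy simulation (invented numbers): a favourable *average* treatment
# effect can coexist with a sizeable minority of patients made worse.
import random

random.seed(0)
N = 20_000  # a trial of the size mentioned above

improvements = []
for _ in range(N):
    if random.random() < 0.8:
        improvements.append(random.gauss(2.0, 1.0))   # 80% improve modestly
    else:
        improvements.append(random.gauss(-4.0, 1.0))  # 20% deteriorate badly

mean_effect = sum(improvements) / N
harmed = sum(1 for x in improvements if x < 0)

print(f"average improvement: {mean_effect:+.2f} points")
print(f"patients made worse: {harmed} of {N} ({100 * harmed / N:.0f}%)")

# The headline 'average benefit' is real, but it says nothing about which
# individual patients are helped and which are harmed - knowledge that
# only careful study of individual patients could supply.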
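
And to illustrate the point in (2) about retrofitting: a sufficiently flexible model can always be made to fit retrospective data closely, yet that fit confers no predictive validity. A minimal sketch (Python with numpy; the 'data' are a made-up noisy series standing in for any retrospective data set):

# Sketch: a flexible model retrofitted to past data fits it closely,
# yet predicts genuinely new data badly. All data here are made up.
import numpy as np

rng = np.random.default_rng(0)

# 'Past' observations: a gentle trend plus noise, at times 0..19
t_past = np.arange(20)
y_past = 0.05 * t_past + rng.normal(0, 0.5, size=t_past.size)

# Retrofit a very flexible model (a degree-9 polynomial) to the past
coeffs = np.polyfit(t_past, y_past, deg=9)
fit_error = np.mean((np.polyval(coeffs, t_past) - y_past) ** 2)

# 'Future' observations: the same underlying process, at times 20..29
t_future = np.arange(20, 30)
y_future = 0.05 * t_future + rng.normal(0, 0.5, size=t_future.size)
prediction_error = np.mean((np.polyval(coeffs, t_future) - y_future) ** 2)

print(f"error on the data the model was fitted to: {fit_error:.3f}")
print(f"error on genuinely new data:               {prediction_error:.3f}")

# The retrospective fit looks impressive; the out-of-sample predictions are
# typically wildly wrong - goodness of retrofit is not evidence of
# predictive validity.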

In other words, in the world of micro-specialization each specialist’s attention is focused on technical minutiae and the application of conventional proxy measures and operational definitions. These agreed-practices are used in micro-specialities for no better reason than 'everybody else' does the same and (lacking any real validity to their activities) there must be some kind of arbitrary ‘standard’ against which people are judged. ('Everybody else' here means the dominant Big Science researchers who dominate peer review (appointments, promotions, grants, publications etc.) in that micro-speciality.)

Micro-specialists cannot even understand what has happened when there are fatal objections and comprehensive refutations of their standard paradigms which originate from adjacent areas of science.

In a nutshell, micro-specialization allows a situation to develop where the whole of a vast area of science is bogus; and for this reality to be intrinsically and permanently invisible and incomprehensible to the participants in that science.

If we then combine this fact with the notion that only micro-specialists are competent to evaluate the domain of their micro-speciality - then we have a situation of intractable error.

Which situation is precisely what we do have. Vast scientific enterprises have consumed vast resources without yielding any substantive progress, and the phenomenon continues for time-spans of several human generations, and there is no end in sight (short of the collapse of science-as-a-whole).

According to the analysts of classical science, science was supposed to be uniquely self-correcting - in practice, now, thanks in part to micro-specialization, it is not self-correcting at all. Either what we call science nowadays is not 'real science' or else real science has mutated into something which is a mechanism for infinite perpetuation of error.

Tuesday 25 May 2010

'Medieval science' - rediscovered and improved

Although I was 'brought up' on the religion of science, and retain great respect for philosophers such as Jacob Bronowski and Karl Popper, and for sociologists such as Robert K. Merton, David L Hull and John Ziman; I have come to believe that this 'classic' science (the kind which prevailed from the mid-19th - mid-20th century in the UK and Western Europe) - in other words that activity which these authors described and analyzed - was 'merely' a transitional state.

In short: classic science was highly successful, but contained the seeds of its own destruction because the very processes that led to classic science would, when continued (and they could not be stopped) also destroy it.

(As so often, that which is beneficial in the short term is fatal in the longer term.)

Specifically, this transitional state of classic science was an early phase of professional science, which came between what might be called medieval science and modern science (which is not real science at all - but merely a generic bureaucratic organization which happened to have evolved from classic science).

But classic Mertonian/ Popperian science was never a steady state and never reproduced itself - each generation of scientists had a distinctly different experience from the generation before, due to progressively increasing growth, specialization and professionalization/ bureaucratization.

And indeed Classic science was not the kind of science which led to the industrial revolution and the ‘modern world’; the modern world was a consequence of causes which came before modernity. The modern world is a consequence of medieval science. So, the pre-modern forms of science were real science, and had real consequences.

What I mean is that medieval science was an activity which was so diffuse and disorganized that we do not even recognize it as science – yet it was this kind of science which led to the process of societal transformation that is only recognized by historians as becoming visible from the 17th century (e.g. the founding of the Royal Society in 1660). But the emergence of classic science was merely the point at which change became so visible that it could not be ignored.

Since modernity it is therefore possible that science has been unravelling even as it expanded – i.e. that the processes of growth, specialization and professionalization/ bureaucratization were also subverting themselves until the point (which we have passed) where the damage due to growth, specialization and professionalization/ bureaucratization outstripped the benefit.

This is good news, I think.

Much of the elaborate and expensive paraphernalia of science - which we mistakenly perceive to be vital – may in fact be mostly a late and parasitic development. Effective science can be, has been, much simpler and cheaper.

When we consider science as including the medieval era prior to classic science, then it becomes clear that there is no distinctive methodology of science. Looking across the span of centuries it looks like the process of doing science cannot really be defined more specifically than saying that it is a social and multigenerational activity characterized by truth-seeking.

I would further suggest that science is usually attempting to solve a problem – or to find a better solution to a problem than the existing one (problem solving per se is not science, but science is a kind of problem solving).

The main physical (rather than political) constraint on science in the past was probably the slowness and unreliability of communication and records. This made science extremely slow to advance. Nonetheless, these advances were significant and led to the modern world.

The encouraging interpretation is therefore that even when modern professional ‘science’ collapses a new version of medieval science should easily be able to replace it, because of the already-available improvements in the speed, accuracy and durability of communications.

In other words, a re-animated ‘medieval science’ (amateur, unspecialized, individualistic, self-organized) plus modern communications probably equals something pretty good, by world historical standards - probably not so good as the brilliant but unstable phase of 'classic science', but better than 'modern science'.

Monday 24 May 2010

Master and 'prentice'

Probably the most important bit of work I did as a scientist was the malaise theory of depression - http://www.hedweb.com/bgcharlton/depression.html .

I worked on this intermittently for nearly 20 years from about 1980 when I first began to study psychiatry. My motivation was trying to understand how a mood state could apparently be cured by medication.

From what I was being told, it seemed as if 'antidepressants' were supposed to normalize mood while leaving the rest of the mind unchanged. Of course, this isn't really true, but that was what I was trying to understand initially. Or, to put it another way, I was trying to understand what was 'an antidepressant' - since none of the standard explanations made any sense at all.

So, how did I 'solve' this problem (solve at least to my own satisfaction, that is!)? Part of it was 'phenomenology' (i.e. mental self observation) - especially to observe my own mood states in response to various illnesses (such as colds and flu) and in response to various medications (including some which I was taking for migraine).

But the best answer is that I did not really solve it myself, but only when I had subordinated my investigations to the work of two great scientists - two 'Masters': the Irish psychiatrist David Healy and the Portuguese neuroscientist Antonio R Damasio.

This apprenticeship was almost entirely via the written word - and involved reading, thinking-about and re-reading (and thinking about and re-re-reading some more) key passages from key works of these two scientists. That is, accepting these men as my mentors and discerning guides to the vast (and mostly wrong) literature of Psychiatry and Neuroscience.

The lesson that I draw from my experience is that real science (which is both rare and slow) is done and passed-on by social groups comprising a handful of great scientists and a still small but somewhat larger number of 'disciples' who learn and apply their insights and serve to amplify their impact.

But even the great scientists have themselves mostly served as apprentices to other great scientists (as has often been documented - e.g. by Harriet Zuckerman in Scientific Elite).

So, when thinking about the social structure of real science, it would seem that real scientific work is done (slowly, over a time frame of a few decades) by small groups that are driven by Masters who make the breakthroughs; plus a larger number of 'prentices who learn discernment from the Masters ('discernment' - i.e. the correct making of evaluations - being probably more important than techniques or specific information).

But disciples by themselves are not capable of making breakthroughs - only of incremental extensions or combinations of the Masters' work.

And it is best if the Master can be primarily responsible for training the next generation Master/s to carry on the baton of real science. Disciples can - at best - only train-up more 'prentices with the humility to seek and serve a Master.

Friday 21 May 2010

Doing science after the death of real science

Science is now, basically, dead (my direct experience of science is inevitably partial - but the same mechanisms seem to have been at work everywhere; even outside of medicine, in the humanities, some of which I know reasonably well - and the social sciences were essentially corrupt from the word go).

What we think of as science is now merely a branch of the bureaucracy. It would, indeed it does, function perfectly well without doing any useful and valid science at all.

Indeed, modern professional science functions perfectly well while, in fact, *destroying* useful and valid science and replacing it with either rubbish or actively harmful stuff (this is very clear in psychiatry).

I find that I now cannot trust the medical research literature _at all_. I trust a few individuals, but I do not trust journals, not fields, not funding agencies, not scholarly societies (like the Royal Society, universities, or the NAS), not citations, not prizes (Nobel etc) - in my opinion, none of these are trustworthy indices of scientific validity - not even 'on average'.

The system is *so* corrupt that finding useful and valid science (and, of course, there is some) is like finding a needle in a haystack.

The vast bulk of published work is either hyped triviality (which is time wasting at best), or dishonest in a range of ways from actual invention of data down to deliberately selective publication, or else incompetent in the worst sense - the sense that the researchers lack knowledge, experience and even interest in the problems which they are pretending to solve.

So, what should a person do who wants to do real science in an area - if (as I think is probably the case) they need to _ignore_ the mainstream published literature as doing more harm than good?

Essentially it is a matter of going back to pre-professional science, and trying to recreate trust based interpersonal networks ('invisible colleges') of truthful, dedicated amateurs; and accepting that the pace of science will be *slow*.

I've been reading Erwin Chargaff lately, and he made clear that the pace of science really is slow. I mean with significant increments coming at gaps of several years - something like one step a decade or so, if you are lucky. And if 'science' seems fast, then that is because it is not science!

This is why science is destroyed by professionalism and its vast expansion - there are too few steps of progress, and too few people ever make these steps. Most 'scientists' (nowadays in excess of 99 percent of them) - if judged correctly - are complete and utter failures, or indeed saboteurs!

So science inevitably ought to be done as a serious hobby/pastime, paid for by some other economic activity (which has usually been teaching, but was often medicine up to the early 20th century, and before that the priesthood).

Why should anyone take any notice of these putative small and self-selected groups of hobby-scientists? Well, presumably if they produce useful results (useful as judged by common sense criteria - like relieving pain or reversing the predictable natural history of a disease), and if the members of the group are honest and trustworthy. But whether this will happen depends on much else - their work may be swamped by public relations.

So, groups of practitioners are best able to function as amateur scientists, since they can implement their own findings, with a chance that their effectiveness might be noticed. And in the past groups of practicing physicians would function as the scientists for their area of interest.

This seems the best model I can think of for those wanting to do science. But science is intrinsically a social activity, not an individual activity. So if you cannot find or create a group that you can trust (and whose competence you trust) - then you cannot do real science.

Tuesday 29 September 2009

Stop prescribing antipsychotics! - when possible

Charlton BG. Why are doctors still prescribing neuroleptics? QJM 2006; 99: 417-20.

This is a version of my paper "Why are doctors still prescribing neuroleptics?" - in which the word 'neuroleptic' has been replaced by the word 'antipsychotic'. Both neuroleptic and antipsychotic refer to the same class of drugs - but neuroleptic was the original and most scientifically-accurate name. Antipsychotic is a dishonest marketing term, since these drugs are not anti-psychotic. However, antipsychotic has now all-but taken over from neuroleptic in mainstream discourse - so I have prepared this version of my paper containing the more common term.

Bruce G Charlton

Abstract

There are two main pharmacological methods of suppressing undesired behavior: by sedation or with antipsychotics. Traditionally, the invention of antipsychotics has been hailed as one of the major clinical breakthroughs of the twentieth century, since they calmed agitation without (necessarily) causing sedation. The specifically antipsychotic form of behavioral control is achieved by making patients psychologically Parkinsonian – which entails emotional-blunting and consequent demotivation. Furthermore, chronic antipsychotic usage creates dependence so that - in the long term, for most patients - antipsychotics are doing more harm than good. The introduction of ‘atypical’ antipsychotics (ie. antipsychotically-weak but strongly sedative antipsychotics) has made only a difference in degree, and at the cost of a wide range of potentially fatal metabolic and other side effects. It now seems distinctly possible that, for half a century, the creation of many millions of Parkinsonian patients has been misinterpreted as a ‘cure’ for schizophrenia. Such a wholesale re-interpretation of antipsychotic therapy represents an unprecedented disaster for the self-image and public reputation of both psychiatry and the whole medical profession. Nonetheless, except as a last resort, antipsychotics should swiftly be replaced by gentler and safer sedatives.

* * *

It is usually said, and I have said it myself, that the invention of antipsychotics was one of the major therapeutic breakthroughs of the twentieth century [1]. But I now believe that this opinion is due for revision, indeed reversal. Antipsychotics have achieved their powerful therapeutic effects at too great a cost, and a cost which is intrinsic to their effect [2, 3]. The cost has been many millions of formerly-psychotic patients who are socially-docile but emotionally-blunted, de-motivated, chronically antipsychotic-dependent and suffering significantly increased mortality rates. Consequently, as a matter of some urgency, antipsychotic prescriptions should be curtailed to the point that they are used only as a last resort.


Behavioral suppression in medicine

Psychiatrists, especially those working in hospitals, have frequent need for interventions to calm and control behavior – either for the safety of the patient or of society. The same applies – less frequently – for other medical personnel dealing with agitation, for example due to delirium or dementia. Broadly speaking, there are two pharmacological methods of suppressing agitated behavior: with sedatives or with antipsychotics [2, 3].

Sedation was the standard method of calming and controlling psychiatric patients for many decades prior to the discovery of antipsychotics, and sedation remained the only method in situations where antipsychotics were not available (e.g. in the Eastern Bloc and under-developed countries) [3, 4].

The therapeutic benefits of sedation should not be underestimated. In the first place sedation can usually be achieved safely and without sinister side effects; and an improved quality of sleep makes patients feel and function better. Sedation may also be potentially ‘curative’ where sleep disturbance has been so severe and prolonged as to lead to delirium, which (arguably) may be the case for some psychotic patients such as those with mania [2, 5].

But clearly - except in the short term - sedation is far from an ideal method of suppressing agitation. The discovery of antipsychotics offered something qualitatively new in terms of behavioral control: the possibility of powerfully calming a patient without (necessarily) making them sleepy [4]. In practice, sedative antipsychotics (such as chlorpromazine or thioridazine), or a combination of a sedative (such as lorazepam or promethazine) with a less-sedating antipsychotic such as haloperidol or droperidol, were often used to combine both forms of behavioral suppression.


The Parkinsonian core effect of antipsychotics

The Parkinsonian (emotion-blunting and de-motivating) core effect of antipsychotics has been missed by most observers. This failure relates to a blind-spot concerning the nature of Parkinsonism.

Parkinsonism is not just a motor disorder. Although abnormal movements (and an inability to move) are its most obvious feature, Parkinsonism is also a profoundly ‘psychiatric’ illness in the sense that emotional-blunting and consequent de-motivation are major subjective aspects. All this is exquisitely described in Oliver Sacks’s famous book Awakenings [10], as well as being clinically apparent to the empathic observer.

Emotional-blunting is de-motivating because drive comes from the ability subjectively to experience in the here-and-now the anticipated pleasure deriving from cognitively-modeled future accomplishments [2]. An emotionally-blunted individual therefore lacks current emotional rewards for planned future activity, including future social interactions, hence ‘cannot be bothered’.

Demotivation is therefore simply the undesired other side of the coin from the desired therapeutic effect of antipsychotics. Antipsychotic ‘tranquillization’ is precisely this state of indifference [8]. The ‘therapeutic’ effect of antipsychotics derives from indifference towards negative stimuli, such as fear-inducing mental contents (such as delusions or hallucinations); while anhedonia and lack of drive are predictable consequences of exactly this same state of indifference in relation to the positive things of life.

So, Parkinsonism is not a ‘side-effect’ of antipsychotics, neither is it avoidable. Instead, Parkinsonism is the core therapeutic effect of antipsychotics: as reflected in the original name ‘neuroleptic’, which refers to an agent which ‘seizes’ the nervous system and holds it constant (ie. indifferent, blunted) [4]. Demotivation should be regarded as inextricable from the antipsychotic form of tranquillization [2]. And the so-called ‘negative symptoms’ of schizophrenia are (in most instances) simply an inevitable consequence of antipsychotic treatment [4].

By this account, the so-called ‘atypical’ antipsychotics (risperidone, olanzapine, quetiapine etc.) are merely weaker Parkinsonism-inducing agents. The behavior-controlling effect of ‘atypicals’ derives from inducing a somewhat milder form of Parkinsonism, combined with strong sedation [11]. However, clozapine is an exception, because clozapine is not an antipsychotic, does not induce Parkinsonism, and therefore (presumably) gets its behavior-controlling therapeutic effect from sedation. The supposed benefit from clozapine of ‘treating’ the ‘negative symptoms of schizophrenia’ (such as de-motivation, lack of drive, asocial behavior etc.) is therefore that – not being an antipsychotic – clozapine does not itself cause these negative symptoms.


What next?

Whatever the historical explanation for the wholesale misinterpretation of antipsychotic actions, recent high profile papers in the New England Journal of Medicine [12, 13] and JAMA [14] have highlighted serious problems with antipsychotics as a class (whether traditional or atypical), and the tide of opinion now seems to be turning against them.

In particular the so-called ‘atypical antipsychotics’, which now take up 90 percent of the US market [12] and are increasingly being prescribed to children [6], seem to offer few advantages over the traditional agents [12] while being highly toxic and associated with significantly-increased mortality from metabolic and a variety of other causes [13, 14, 15, 16]. This new data has added weight to the idea that usage of antipsychotics should now be severely restricted [3, 7, 17].

Indeed, it looks as if after some 50 years of widespread prescribing there is going to be a massive re-evaluation and re-interpretation of these drugs, with a reversal of their evaluation as a great therapeutic breakthrough. It now seems distinctly possible that for half a century the creation of millions of asocial, antipsychotic-dependent but docile Parkinsonian patients has been misinterpreted as a ‘cure’ for schizophrenia. This wholesale re-interpretation represents an unprecedented disaster for the self-image and public reputation – not just of psychiatry – but of the whole medical profession.

Perhaps the main useful lesson from the emergence of the 'atypical' antipsychotics is that psychiatrists did not need to make all of their agitated and psychotic patients Parkinsonian in order to suppress their behavior. ‘Atypicals’ are weakly antipsychotic but highly sedative. This implies that sedation is probably sufficient for behavioral control in most instances [3, 17]. In the immediate term, it therefore seems plausible that already-existing, cheap, sedative drugs (such as benzodiazepines or antihistamines) offer realistic hope of being safer, equally effective and subjectively less-unpleasant substitutes for antipsychotics in many (if not all) patients.

I would argue that this should happen sooner rather than later. If we apply the test of choosing what treatment we would prefer for ourselves or our relatives with acute agitation or psychosis, knowing what we now know about antipsychotics, I think that many people (perhaps especially psychiatric professionals) would now wish to avoid antipsychotics except as a last resort. Few would be happy to wait a decade or so for the accumulation of a mass of randomized trial data (which may never emerge, since such trials would lack a commercial incentive) before making the choice of less dangerous and unpleasant drugs [17].

But there is no hiding the fact that if antipsychotics were indeed to be replaced by sedatives then this would seem like stepping-back half a century. It would entail an acknowledgement that psychiatry has been living in a chronic delusional state – and this may suggest that the same could apply to other branches of medicine. Since such a wholesale cognitive and organizational reappraisal is unlikely, perhaps the most realistic way that the desired change in practice will be accomplished is not by an explicit ‘return’ to old drugs but by the introduction of a novel (and patentable) class of sedatives which are marketed as having some kind of (more-or-less plausible) new therapeutic role.

Such a new class of tacit sedatives would enable the medical profession to continue its narrative of building-upon past progress, and retain its self-respect; albeit at the price of cognitive evasiveness. But, if such developments led to a major cut-back in antipsychotic prescriptions, then this deficiency of intellectual honesty would be a small price to pay.


References

1. Charlton BG. Clinical research methods for the new millennium. Journal of Evaluation in Clinical Practice 1999; 5: 251-263.

2. Charlton B. Psychiatry and the human condition. Radcliffe Medical Press: Oxford, UK, 2000.

3. Moncrieff J, Cohen D. Rethinking models of psychotropic drug action. Psychotherapy and Psychosomatics 2005; 74: 145-153.

4. Healy D. The creation of psychopharmacology. Harvard University Press: Cambridge, MA, USA, 2002.

5. Charlton BG, Kavanau JL. Delirium and psychotic symptoms: an integrative model. Medical Hypotheses. 2002; 58: 24-27.

6. Whitaker R. Mad in America. Perseus Publishing: Cambridge, MA, USA, 2002.

7. Whitaker R. The case against antipsychotic drugs: a 50 year record of doing more harm than good. Medical Hypotheses 2004; 62: 5-13.

8. Healy D. Psychiatric drugs explained. 3rd edition. Churchill Livingstone: Edinburgh, 2002.

9. Healy D, Farquhar G: Immediate effects of droperidol. Human Psychopharmacology 1998; 13: 113-120.

10. Sacks O. Awakenings. London: Picador, 1981.

11. Janssen P. From haloperidol to risperidone. In D Healy (Ed.) The psychopharmacologists. London: Altman, 1998, pp 39-70

12. Lieberman JA, Stroup TS, McEvoy JP, Swartz MS, Rosenheck RA, Perkins DO, Keefe RS, Davis SM, Davis CE, Lebowitz BD, Severe J, Hsiao JK; Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) Investigators. Effectiveness of antipsychotic drugs in patients with chronic schizophrenia. New England Journal of Medicine 2005; 353: 1209-23.

13. Wang PS, Schneeweiss S, Avorn J, Fischer MA, Mogun H, Solomon DH, Brookhart MA. Risk of death in elderly users of conventional vs. atypical antipsychotic medications. New England Journal of Medicine 2005; 353: 2335-2341.

14. Schneider LS, Dagerman KS, Insel P. Risk of death with atypical antipsychotic drug treatment for dementia: meta-analysis of randomized placebo-controlled trials. JAMA 2005; 294: 1934-43.

15. Montout C, Casadebaig F, Lagnaoui R, Verdoux H, Phillipe A, Begaud B, Moore N. Neuroleptics and mortality in schizophrenia: a prospective analysis of deaths in a French cohort of schizophrenic patients. Schizophrenia Research 2002; 147-156.

16. Morgan MG, Scully PJ, Youssef HA, Kinsella A, Owens JM, Waddington JL. Prospective analysis of premature mortality in schizophrenia in relation to health service engagement: a 7.5 year study within an epidemiologically complete, homogenous population in rural Ireland. Psychiatry Research. 2003; 117: 127-135.

17. Charlton BG. If 'atypical' neuroleptics did not exist, it wouldn't be necessary to invent them: Perverse incentives in drug development, research, marketing and clinical practice. Medical Hypotheses 2005; 65: 1005-9.

Wednesday 5 August 2009

Zombie science of Evidence-Based Medicine

*

The Zombie science of Evidence-Based Medicine (EBM): a personal retrospective

Bruce G Charlton. Journal of Evaluation in Clinical Practice. 2009; 15: 930-934.

Professor of Theoretical Medicine
University of Buckingham
e-mail: bruce.charlton@buckingham.ac.uk

***
Abstract

As one of the fairly-frequently cited critics of the socio-political phenomenon which styles itself Evidence-Based Medicine (EBM), it might be assumed that I would be quite satisfied by having collected some scraps of fame, or at least notoriety, from the connection. But in fact I now feel a fool for having been drawn into criticizing EBM with the confident expectation of being able to kill it before it had the chance to do too much harm. A 'fool' not because my criticisms of EBM were wrong – EBM really is just as un-informed, confused and dishonest as I claimed it was. But because, with the benefit of hindsight, it is obvious that EBM was from its very inception a Zombie science – a lumbering hulk reanimated from the corpse of Clinical Epidemiology. And a Zombie science cannot be killed because it is already dead. A Zombie science does not perform any scientific function, so it is invulnerable to scientific critique since it is sustained purely by the continuous pumping of funds. The true function of Zombie science is to satisfy the (non-scientific) needs of its funders – and indeed the massive success of EBM is that it has rationalized the takeover of UK clinical medicine by politicians and managers. So I was simply wasting my time by engaging in critical evaluation of EBM using the normal scientific modes of reason, knowledge and facts. It was useless my arguing against EBM because a Zombie science cannot be stopped by any method short of cutting-off its fund supply.

***

It is pointless trying to kill the un-dead

Since I am one of the fairly-frequently cited critics of the socio-political phenomenon which styles itself Evidence-Based Medicine (EBM), it might be assumed that I would be quite satisfied by having collected some scraps of fame, or at least notoriety, from the connection [1-4]. But in fact I feel rather a fool for having been drawn into criticizing EBM. Not because my criticisms were wrong – of course they weren’t. EBM really is just as uninformed, confused and dishonest as I claimed in my writings. But because - with the benefit of hindsight - it is obvious that EBM was, from its very inception, a Zombie science: reanimated from the corpse of Clinical Epidemiology.

I recently delineated the concept of Zombie science [5], partly based on my experiences with EBM, and the concept has already spread quite widely via the internet. Zombie science is defined as a science that is dead but will not lie down. Instead, it keeps twitching and lumbering around so that it somewhat resembles Real science. But on closer examination, the Zombie has no life of its own (i.e. Zombie science is not driven by the scientific search for truth [6]); it is animated and moved only by the incessant pumping of funds. Funding is the necessary and sufficient reason for the existence of Zombie science; which is kept moving for so long as it serves the purposes of its funders (and no longer).

So it is a waste of time arguing against Zombie Science because it cannot be stopped by any method except by cutting-off its fund supply. Zombie science cannot be killed because it is already dead.


When dead fish seem to swim

I ought to have seen all this more quickly, because I knew Clinical Epidemiology (CE) [7] before it was murdered and reanimated as an anti-CE Zombie, and I was by chance actually present at more-or-less the exact public moment when – by David Sackett, in Oxford, in 1994 – the corpse of CE was galvanized into motion in the UK, and began to be inflated by rapid infusion of NHS funding on a massive scale. The process was swiftly boosted by zealous promotion from the British Medical Journal, which turned itself into a de facto journal of EBM, then went on to benefit from the numerous publishing and conference opportunities created by its advocacy [3].

I ask myself now: how could I have been so naïve as to imagine that EBM – born of ignorance and deception – would be open to the usual scientific processes of conjecture and refutation? From the very beginning, from the very choice of the name 'evidence-based medicine', it was surely obvious enough to the un-blinkered gaze that we were dealing here with something outwith the kin of science. Why couldn’t I see this? Why was I so ludicrously reasonable?

The fact is that I was lured into engagement by pride: pride at seeing-through the clouds of smoke which were being deployed to obscure the origins of EBM in CE; pride at recognizing the numerous ‘moves’ by which unfounded assertion was being disguised as evidential - at the playing fast-and-loose with definitions of ‘evidence’ in order to reach pre-determined conclusions… I think that I was excited by the possibilities of engaging in what looked like an easy demolition job. My only misgiving was that it was too easy – destroying the scientific basis of EBM would be about as challenging as shooting fish in a barrel.

Well, it was as easy as that. But shooting fish in a barrel is a pointless activity if nobody is interested in the vitality of the fish. As it turned-out – the edifice of EBM could be supported as easily by a barrel of dead fish as by one full of lively swimmers. So long as the fish corpses were swirling around (stirred by the influx of research funding) then, for all anyone could see at a quick glance (which is the only kind of glance most people will give), it looked near-enough as if the fish were still alive.

Anyway, undaunted, I and many others associated with the Journal of Evaluation in Clinical Practice set about the work of analyzing, selecting and discarding among the assertions and propositions of EBM.

There was the foundational assertion that in the past pre-EBM medicine had not been based on evidence but on a blend of prejudice, tradition and subjective whim; this now to be swept aside by the ‘systematic’ use of ‘best evidence’. This was an ignorant and unfounded belief – coming as it did after the (pretty much) epidemiology-unassisted 'golden age' of medical therapeutic discovery peaking somewhere between about 1940 and 1970 [8-11].

With regard to ‘best evidence’ there was the assertion that ‘evidence’ meant only focusing on epidemiological data (and not biochemistry, genetics, physiology, pharmacology, engineering or any other of the domains which had generated scientific breakthroughs in the past). It meant ignoring the role of ‘tacit knowledge’ derived from apprenticeship. And it was clearly untrue [12-15].

Then there was the assertion that the averaged outcomes of epidemiological data, specifically randomized controlled trials (RCTs) and their aggregation by meta-analysis, were straightforwardly applicable to individual patients. This was a mistake [12, 16-18].

On top of this there was the methodological assertion that among RCTs the ‘best’ were the biggest – the ‘mega-trials’ which attempted to maximize recruitment and retention of subjects by simplifying methodologies and thereby reducing the level of control. This was erroneous [12, 16, 19].

In killing-off the bottom-up ideals of Clinical Epidemiology, EBM embraced a top-down and coercive power structure to impose EBM-defined ‘best evidence’ on clinical practice [20, 21]; this to happen whether clinical scientists or doctors agreed that the evidence was best or not (and because doctors had been foundationally branded as prejudiced, conservative and irrational – EBM advocates were pre-disposed to ignore their views anyway).

Expertise was arbitrarily redefined in epidemiological and biostatistical terms, and virtue redefined as submission to EBM recommendations – so that the job of physician was at a stroke transformed into one based upon absolute obedience to the instructions of EBM-utilizing managers [3].

(Indeed, since too many UK doctors were found to be disobedient to their managers, in the NHS this has led to a progressive long-term strategy of replacing doctors by more-controllable nurses, who are now first contact for patients in many primary and specialist health service situations.)


Biting-off the hand that offers EBM

The biggest mistake made in analyzing the EBM phenomenon is to assume that the success of EBM depended upon the validity of its scientific or medical credentials [22]. This would indeed be reasonable if EBM were a Real science. But EBM was not a Real science, indeed it wasn’t any kind of science at all – as was clearer back when it was still correctly characterized as a branch of epidemiology, which is a methodological approach sometimes used by science [13-15].

EBM did not need to be a science or a scientific methodology, because it was not adopted by scientists but by politicians, government officials, managers and biostatisticians [3]. All it needed – scientifically – was to look enough like a scientific activity to convince a group of uninformed people who stood to benefit personally from its adoption.

So, the sequence of falsehoods, errors, platitudes and outrageous ex cathedra statements which constituted the ideological foundation of EBM, cannot be – and is not - an adequate or even partial explanation for the truly phenomenal expansion of EBM. Whether EBM was self-consciously crafted to promote the interests of government and management, or whether this confluence of EBM theory and government need was merely fortuitous, is something I do not know. But the fact is that the EBM advocates were shoving at an open door.

When the UK government finally understood that what was being proposed was a perfect rationale for re-moulding medicine into exactly the shape they had always wanted it - the NHS hierarchy were falling over each other in their haste to establish this new orthodoxy in management, medical education and in founding new government institutions such as NICE (originally meaning the National Institute for Clinical Excellence – since renamed [20]).

As soon as the EBM advocates knocked politely to offer a try-out of their newly-created Zombie, the door was flung open and the EBM-ers were dragged inside, showered with gold and (with the help of the like-minded Cochrane Collaboration and BMJ publications) the Zombie was cloned and its replicas installed in positions of power and influence.

Suddenly the Zombie science of EBM was everywhere in the UK because money-to-do-EBM was everywhere – and modern medical researchers are rapidly-evolving organisms which can mutate to colonize any richly-resourced niche – unhampered by inconveniences such as truthfulness or integrity [23]. Anyway, when existing personnel were unwilling, there was plenty of money to appoint new ones to new jobs.


The slaying of Clinical Epidemiology (CE)

But how was the Zombie created in the first place?

In the beginning, there had been a useful methodological approach called Clinical Epidemiology (CE), which was essentially the brainchild of the late Alvan Feinstein – a ferociously intelligent, creative and productive clinical scientist who became the senior Professor of Medicine at Yale and recipient of the prestigious Gairdner Award (a kind of mini-Nobel prize). Feinstein's approach was to focus on using biostatistical evidence to support clinical decision making, and to develop forms of measurement which would be tailored for use in the clinical situation. He published a big and expensive book called Clinical Epidemiology in 1985 [24]. Things were developing nicely.

The baton of CE was then taken up at McMaster University by David Sackett, who invited Feinstein to come as a visiting professor. Sackett turned out to be a disciple easily as productive as Feinstein; but, because he saw things more simply than Feinstein, Sackett had the advantage of a more easily understood world-view, prose style and teaching persona. So when Sackett and co-authors also published a book entitled Clinical Epidemiology in 1985 [7], it was less complex, less massive and much less expensive than Feinstein's. And Sackett swiftly became the public face of Clinical Epidemiology.

But in this 1985 book, Sackett cited as definitive his much earlier 1969 definition of Clinical Epidemiology, which ran as follows: “I define clinical epidemiology as the application, by a physician who provides direct patient care, of epidemiologic and biometric methods to the study of diagnostic and therapeutic process in order to effect an improvement in health. I do not believe that clinical epidemiology constitutes a distinct or isolated discipline but, rather, that it reflects an orientation arising from both clinical medicine and epidemiology. A clinical epidemiologist is, therefore, an individual with extensive training and experience in clinical medicine who, after receiving appropriate training in epidemiology and biostatistics, continues to provide direct patient care in his subsequent career” [25] (Italics are in the original.).

Just savour those words: ‘by a physician who provides direct patient care’ and ‘I do not believe that clinical epidemiology constitutes a distinct or isolated discipline… but, rather, an orientation’. These primary and foundational definitions of clinical epidemiology were to be reversed when the subject was killed and reanimated as EBM, which was marketed as a ‘distinct and isolated discipline’ (with its own training and certification, its own conferences, journals and specialized jobs) that was being practiced by many individuals (politicians, bureaucrats, managers, bio-statisticians, public health employees…) who certainly lacked ‘extensive’ (or indeed any) ‘training and experience in clinical medicine’; and who certainly did not provide direct patient care.

I came across Sackett's Clinical Epidemiology book in 1989 and was impressed. Although I thought that CE ought to be considerably less algorithm-like and more judgment-based than the authors suggested even in 1985, nonetheless I recognized that Clinical Epidemiology was a fresh, reasonable and perfectly legitimate branch of knowledge with relevance to medical practice. And Clinical Epidemiology was a good name for the new subject, since it described the methodological nature of the activity – which was concerned with the importance of epidemiological methods and information to clinical practice.

But during the period from 1990-92, Clinical Epidemiology was first quietly killed then loudly reanimated as Evidence-Based Medicine [22]. In retrospect we can now see that this was not simply the replacement of an honest name with a dishonest one that arrogantly and without justification begs all the important questions about medical practice. Nor was it merely the replacement of the bottom up model of Clinical Epidemiology with the authoritarian dictatorship which EBM rapidly became.

No, EBM was much more radically different from Clinical Epidemiology than merely a change of name and an inversion of authority; because the new EBM sprang from the womb fully-formed as a self-evident truth [3, 4]. EBM was not a hypothesis but a circular and self-justifying revelation in which definition supported analysis which supported definition – all rolled-up in an urgent moral imperative. To know EBM was to love him; and to recognize him as the Messiah; and to anticipate his imminent coming.

Therefore EBM was immune to the usual feedback and critique mechanisms of science; EBM was not merely disproof-proof but was actually virtuous – and failure to acknowledge the virtuous authority of EBM and adopt it immediately was not just stupid but wicked!

(This moralizing zeal was greatly boosted by association with the Cochrane Collaboration including its ubiquitous spiritual leader, Sir Iain Chalmers.)

In short, EBM was never required to prove itself superior to the existing model of medical practice; rather, existing practice was put into the position of having to prove itself superior to the newcomer EBM!


Zombies with translucent skins

Just think of it, for a moment. Here was a doctrine which advocated rejecting and replacing-with-itself the whole mode of medical science and practice of the past. It advocated a new model of health service provision, new principles for research funding, a new basis for medical education. And the evidence for this? Well… none. Not one particle. ‘Evidence-based’ medicine was based on zero evidence.

As Goodman articulated (in perhaps the best single sentence ever written on the subject of EBM) “…There is no evidence (and unlikely ever to be) that evidence-based medicine provides better medical care in total than whatever we like to call whatever went before…” [26]. So EBM was never required to prove with evidence what it should have been necessary to prove before beginning the wholesale reorganization of medical practice: i.e. that EBM was a better system than ‘whatever we like to call’ whatever went before EBM.

Had anyone done any kind of side-by-side prospective comparison of these two systems of practicing medicine before rejecting one and adopting the other? No. They could have done it, but they didn’t. The message was that EBM was just plain better: end of story.

But how could this happen? – why was it that the medical world did not merely laugh in the metaphorical face of this pretender to the crown? The answer was money, of course; because EBM was proclaimed Messiah with the backing of serious amounts of UK state funding. Indeed, it is now apparent that the funding was the whole thing. If EBM was a body, then the intellectual content of EBM is merely a thin skin of superficial plausibility which covers innards that consist of nothing more than liquid cash, sloshing-around.

Indeed, the thin skin of the EBM Zombie was a secret to its success. The EBM zombie has such a thin skin of plausibility that it is transparent, and observers can actually see the money circulating beneath it. The plausibility was miraculously thin! This meant that EBM-type plausibility was democratically available to everyone: to the ignorant and to the unintelligent as well as the informed and the expert. How marvelously empowering! What a radical poke in the eye for the arrogant ‘establishment’! (And the EBM founders are all outspoken advocates of the tenured radicalism of the ‘60s student generation [4].)

Compared with learning a Real science, it was facile to learn the few threads of EBM jargon required to stitch-together your own Zombie skin using bits and pieces of your own expertise (however limited); then along would come the UK government and pump this diaphanous membrane full of cash to create a fairly-realistic Zombie of pseudo-science. In a world where scientific identity can be self-defined, and scientific status is a matter of grant income [11], then the resulting inflatable monster bears a sufficient resemblance to Real science to perform the necessary functions such as securing jobs or promotions, enhancing salary and status.

The fact that EBM was based upon pure and untested assertions therefore did not weaken it in the slightest; rather the scientific weakness was itself a source of political strength. Because, in a situation where belief in EBM was so heavily subsidized, it was up to critics conclusively to prove the negative: that EBM could not work. And even when conclusive proof was forthcoming, it could easily be ignored. After all, who cares about the views of a bunch of losers who can’t recognize a major funding opportunity when they see it?


Content eluted, only power remains

Things got even worse for those of us who were pathetically trying to stop a government-fuelled Zombie army using only the peashooters of rational debate and the catapults of ridicule. Early EBM made propositions which were evidently wrong, but its recommendations did at least have genuine content. If you installed the EBM clockwork and turned the handle, then what came out was predictable and had content. EBM might have told you the wrong things to do; but at least it told you what to do, in words that had meaning.

But then there was the stunning 1996 U-turn in the BMJ (where else?), in which the advocates of EBM suddenly de-programmed their Zombies: “Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients” [27].

(Pause to allow the enormity of this statement to sink in…)

At a stroke the official meaning of EBM was completely changed, a vacuous platitude was substituted, and henceforth any substantive methodological criticism was met by a rolling-out and mantra-like repetition of this vacuous platitude [28].

Recall, if you will, Sackett’s foundational definition of CE as done ‘by a physician who provides direct patient care’, and his insistence that ‘I do not believe that clinical epidemiology constitutes a distinct or isolated discipline… but, rather, an orientation’. To suggest that EBM represented a ‘sell-out’ of clinical epidemiology is seriously to understate the matter: by 1996 EBM was nothing short of a total reversal of the underlying principles of CE. Alvan Feinstein, true founder of (real) Clinical Epidemiology, considered EBM intellectually laughable – albeit fraught with hazard if taken seriously [e.g. 29].

This fact renders laughable such assurances as: “Clinicians who fear top down cookbooks will find the advocates of evidence based medicine joining them at the barricades.” Wow! Fighting top-down misuse of EBM at the ‘barricades’, no less…

Satire fails in the face of such thick-skinned self-aggrandizement.

Instead of being based only on epidemiology, only on mega-RCTs and their ‘meta-analysis’, only on simple, explicit and pre-determined algorithms - suddenly all kinds of evidence and expertise and preferences were to be allowed, nay encouraged; and were to be ‘integrated’ with the old fashioned RCT-based stuff. In other words, it was back to medicine as usual – but this time medicine would be controlled from above by the bosses who had been installed by EBM.

And having done something similar with Clinical Epidemiology, and now operating in the ‘through-the-looking-glass’ world of the NHS, of course they got away with it! Nobody batted an eyelid. After all, reversing definitions while retaining an identical technical terminology and an identical organizational structure is merely politics as usual. "When I use a word," Humpty Dumpty said in a rather a scornful tone, "it means just what I choose it to mean – neither more nor less."

However much its content was removed or transformed, they still continued calling it EBM. By the time an official textbook of EBM appeared [28], clinical epidemiology had been airbrushed from collective memory, and Sackett’s 1969 clinical-epidemiologist self had been declared an ‘unperson’.

Nowadays EBM means whatever the political and managerial hierarchy of the health service want it to mean for the purpose in hand. Mega-randomized trials are treated as the only valid evidence until this is challenged or the results are unwelcome, when other forms of evidence are introduced on an ad hoc basis. Clinical Epidemiology is buried and forgotten.

But a measure of success is that the NHS hierarchy who use the EBM terminology are the ones with power to decide its official meaning when deployed on each specific occasion. The ‘barricades’ have been stormed. The Zombies have taken over!


References
1. Charlton BG. Restoring the balance: Evidence-based medicine put in its place. Journal of Evaluation in Clinical Practice, 1997; 3: 87-98.
2. Charlton BG. Review of Evidence-based medicine: how to practice and teach EBM by Sackett DL, Richardson WS, Rosenberg W, Haynes RB. [Churchill Livingstone, Edinburgh, 1997]. Journal of Evaluation in Clinical Practice. 1997; 3: 169-172
3. Charlton BG, Miles A. The rise and fall of EBM. QJM, 1998; 91: 371-374.
4. Charlton BG. Clinical research methods for the new millennium. Journal of Evaluation in Clinical Practice 1999; 5: 251-263.
5. Charlton BG. Zombie science: A sinister consequence of evaluating scientific theories purely on the basis of enlightened self-interest. Medical Hypotheses 2008; 71: 327-329.
6. Charlton BG. The vital role of transcendental truth in science. Medical Hypotheses. 2009; 72: 373-6.
7. Sackett DL, Haynes RB, Tugwell P. Clinical Epidemiology: a basic science for clinical medicine. Boston: Little, Brown, 1985.
8. Horrobin DF. Scientific medicine – success or failure? In DJ Weatherall, JGG Ledingham, DA Warrell (Eds). Oxford Textbook of Medicine, 2nd edn. Oxford University Press: Oxford, 1987.
9. Wurtman RJ, Bettiker RL. The slowing of treatment discovery, 1965–95. Nature Medicine 1995; 1: 1122-1125.
10. Le Fanu J. The rise and fall of modern medicine. Little, Brown: London, 1999.
11. Charlton BG, Andras P. Medical research funding may have over-expanded and be due for collapse. QJM. 2005; 98: 53-5.
12. Charlton BG. The future of clinical research: from megatrials towards methodological rigour and representative sampling. Journal of Evaluation in Clinical Practice, 1996; 2: 159-169.
13. Charlton BG. Should epidemiologists be pragmatists, biostatisticians or clinical scientists? Epidemiology, 1996; 7: 552-554.
14. Charlton BG. The scope and nature of epidemiology. Journal of Clinical Epidemiology, 1996; 49: 623-626.
15. Charlton BG. Epidemiology as a toolkit for clinical scientists. Epidemiology, 1997; 8: 461-3
16. Charlton BG. Mega-trials: methodological issues and implications for clinical effectiveness. Journal of the Royal College of Physicians of London, 1995; 29: 96-100.
17. Charlton BG. The uses and abuses of meta-analysis. Family Practice, 1996; 13: 397-401.
18. Charlton BG, Taylor PRA, Proctor SJ. The PACE (population-adjusted clinical epidemiology) strategy: a new approach to multi-centred clinical research. QJM, 1997; 90: 147-151.
19. Charlton BG. Fundamental deficiencies in the megatrial methodology. Current Controlled Trials in Cardiovascular Medicine. 2001; 2: 2-7.
20. Charlton BG. The new management of scientific knowledge in medicine: a change of direction with profound implications. In A Miles, JR Hampton, B Hurwitz (Eds). NICE, CHI and the NHS reforms: enabling excellence or imposing control? Aesculapius Medical Press: London, 2000. Pp. 13-31.
21. Charlton BG. Clinical governance: a quality assurance audit system for regulating clinical practice. (Book Chapter). In A Miles, AP Hill, B Hurwitz (Eds) Clinical governance and the NHS reforms: enabling excellence of imposing control? Aesculapius Medical Press: London, 2001. Pp. 73-86.
22. Daly J. Evidence-based medicine and the search for a science of clinical care. Berkeley & Los Angeles: University of California Press, 2005.
23. Charlton BG. Are you an honest scientist? Truthfulness in science should be an iron law, not a vague aspiration. Medical Hypotheses. Doi:10.1016/j.mehy.2009.05.009, in press.
24. Feinstein AR. Clinical Epidemiology: the architecture of clinical research. Philadelphia: WB Saunders, 1985.
25. Sackett DL. Clinical Epidemiology. American Journal of Epidemiology. 1969; 89: 125-8.
26. Goodman NW. Anaesthesia and evidence-based medicine. Anaesthesia. 1998; 53: 353-68.
27. Sackett DL, Rosenberg WMC, Muir Gray JA, Haynes RB, Richardson WS. Evidence-based medicine: what it is and what it isn’t. BMJ 1996; 312: 71-2.
28. Sackett DL, Richardson WS, Rosenberg W, Haynes RB. Evidence-based medicine: how to practice and teach EBM. Edinburgh: Churchill Livingstone, 1997.
29. Feinstein AR, Horwitz RI. Problems in the ‘evidence’ of ‘Evidence-Based Medicine’. American Journal of Medicine. 1997; 103: 529-35.