Friday 16 July 2010

The submissive flaccidity of modern secular hedonism

I was always puzzled by the submissive flaccidity of modern Western societies: the way that - although they live to maximize gratification and minimize suffering - they will in practice do nothing to protect their future happiness nor to defend against future suffering.

But the reason is encapsulated by "Charlton's Law": "Things must always get worse before they can get better; because otherwise they already would be better."

When a beneficial policy is a win-win option, then it gets done automatically, and we don't need to think about it - probably we don't even notice it.

But most beneficial policies have a down-side. Typically, long-term benefit can be attained only at the cost of short-term disadvantage or suffering of some kind, to some people.

So the hedonic secular goal of making life *overall* as pleasant as possible in the *long-term* is continually being subverted by *short-term* and *specific* gratification.

The hedonic ideal has reached such an extremity among the ruling elites that they pursue policies which will in the long term lead to lifestyles that they regard as miserable and abhorrent, because effectively preventing these outcomes would make them feel bad now.

In other words, secular hedonism cannot take tough decisions.

***

A tough decision is precisely one in which the correct choice leads to short-term harm.

I first recognized this dilemma in medicine, where it is often the case that, in order to make a person feel better overall in the long term (a probable, not certain, outcome), they must suffer immediate and certain short-term misery: for example, surgery. Surgeons live with this on a daily basis, and consequently to be a good surgeon requires a 'tough' attitude.

Of course surgery requires many other things too, and most tough decisions are bad - but the point is that someone who was psychologically unable to make tough decisions, but always sought to maximize the immediate comfort and well-being of patients and to take minimum risk, would be a bad surgeon.

Modern society is *soft* in precisely this fashion - its rulers have lost the ability to take tough decisions: to seek long-term benefits when these come at the price of short-term costs to themselves.

The ultimate reason is, I believe, that humans can only make tough decisions when these are supported by *transcendental aims*, in the sense that humans do not want to forgo short term gratification in this world unless life is about something *more* than gratification – and where non-worldly realities (God, heaven, truth, beauty etc.) are seen as more real and more enduring than immediate gratification - and therefore more important.

***

If human life is (as secular modernity asserts) ultimately about gratification (about maximizing happiness and minimizing suffering) then it will always seem tempting to take the short-term choice leading to immediate and certain happiness and avoid immediate and certain suffering; and to ignore the long-term consequences of these choices on the basis that the future cannot be known with certainty, and we might be dead anyway before the future arrives.

The resulting mentality is characteristic of the modern secular elite, but has spread to encompass much of contemporary life. Charles Murray has encapsulated this modern ‘sophisticated’ attitude very well: “Human beings are a collection of chemicals that activate and, after a period of time, deactivate. The purpose of life is to while away the intervening time as pleasantly as possible.”

My point is that a society which regards the purpose of life as being to while away the time between birth and death as pleasantly as possible is a society which cannot make tough decisions. It is a society which will always take the easy way out, will pursue short-termist and certain benefits, and which will therefore always submit to its enemies - because to resist enemies makes life less pleasant than to appease them.

Even to recognize the reality of threats and enemies is unpleasant, distressing, generative of negative emotions such as fear and anger – better if we can pretend that threats and enemies are harmless or benign, really; and the only truly nasty people are those who make us feel bad about ourselves, here and now…

***

So a society that values nothing higher than a pleasant life and which will seek the pleasant life wherever and whenever possible will be morally flaccid in the face of opposition, will appease rather than resist, will submit rather than fight, and will therefore end-up being ruled by its most relentless and long-termist enemies - and by having an extremely un-pleasant life.

This is why secular modernity cannot survive: because it enshrines the worldly enlightened self-interest of submissive flaccidity as its ultimate form of rational, sensitive moral behavior.

Thursday 15 July 2010

Charlton's Law: Things must always get worse before they can get better...

Charlton's Law:

Things must always get worse before they can get better;

Because otherwise they already would be better.

Comments enabled, tentatively

I have tentatively enabled moderated comments, as an experiment.

However, I will only allow comments that I judge suitable for general public consumption.

Otherwise readers with something to say are welcome to e-mail me at the address provided.

Wednesday 14 July 2010

Proportion of private school kids applying to college is about 18%, not 7%.

Dishonest or statistically-incompetent state education propagandists in the UK mass media and 'research' (actually leftist propaganda) outfits such as the egregious Sutton Trust have relentlessly propagated the factoid that 'only 7 percent' of UK school kids attend private schools.

e.g. http://www.suttontrust.com/news.asp

From this statistic they argue that, since only 7% of kids are at private school, only 7% of those admitted to university, or to professions such as medicine or law, or to postgraduate degrees, should be privately educated - if the admissions system were fair.

And they state that any higher percentages of privately educated people found in a desirable institution implies that the system is unfair and biased in favour of the privately-educated (i.e. this statistic is used as ammunition in typical class warfare rhetoric, designed to create anger, envy and resentment).

However, seven percent is wrong: it is a deliberately-misleading statistic in relation to university applications.

University applicants come from people aged about 17 and over, so the relevant statistic is the proportion of UK children in the 'sixth form' i.e. those aged approximately 16-19.

The number of little children in private primary schools is irrelevant to university applications.

But the proportion of privately-educated children of university application age is actually 18 percent, not seven percent.

***

From: http://www.isc.co.uk/FactsFigures_PupilNumbers.htm

"Pupil Numbers

"Pupils in ISC schools account for around 80% of the total number of pupils in independent schools in the UK. The UK independent sector as a whole educates around 628,000 children in around 2,600 schools. The independent sector educates around 6.5% of the total number of schoolchildren in the UK (and over 7% of the total number of schoolchildren in England) with the figure rising to more than 18% of pupils over the age of 16.

"There are now 511,886 children in ISC schools in the UK and Republic of Ireland. Of these:

* 67,856 are boarders and 444,030 are day pupils
* 261,051 are boys and 250,835 are girls
* 44,792 are nursery age (0-4)
* 158,631 are primary age (5-10)
* 223,178 are senior age (11-15)
* 85,285 are sixth form age (16-19)

***

So the proportion of privately-educated applicants to universities is about 18 percent, not 7 percent, and this is more than double the media-quoted proportion.

The reason is simple enough - parents are more likely to send their children to private school the older the children get. Parents are less concerned with private education at primary level than at secondary level - and the more advanced the child, the more likely they are to be transferred to a private school - while parents with children at private school will rarely transfer their kids to the state sector except for financial reasons.

Therefore, *if* private school kids were of equal ability to state school kids, and the selection system was unbiased, then approximately 18 percent of those at top universities would be privately educated.

But because private schools are selective (to a greater or lesser degree) they have, on average, better-performing pupils. This means that although about 18 percent of university applicants come from private schools, a higher proportion than 18 percent would be expected to gain admission; and the proportion would become higher as universities became more selective.

This explains why an applicant pool that is about one-fifth privately educated can, on a fair system, lead to around one in two or three of those admitted to the most selective UK universities being privately educated. The same applies to highly selective professions such as medicine and law.
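To make this arithmetic concrete, here is a minimal sketch (in Python, standard library only) of how a minority group with modestly higher average attainment becomes heavily over-represented above a high admissions cutoff. The specific numbers - the assumed half-standard-deviation attainment gap and the cutoff - are purely hypothetical assumptions for illustration, not data from the ISC or anyone else.

import math

def tail_fraction(mean, sd, cutoff):
    # Fraction of a normal(mean, sd) population scoring above the cutoff.
    z = (cutoff - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# All numbers below are illustrative assumptions, not data:
# 18% of applicants privately educated; their attainment assumed ~0.5 SD
# above the state-school mean; admission cutoff set 1.8 SD above that mean.
p_private = 0.18
cutoff = 1.8
private_above = tail_fraction(0.5, 1.0, cutoff)
state_above = tail_fraction(0.0, 1.0, cutoff)

admitted_private = p_private * private_above
admitted_state = (1 - p_private) * state_above
share = admitted_private / (admitted_private + admitted_state)
print(f"Privately-educated share of admissions: {share:.0%}")
# With these assumed figures the result is roughly a third (about 37%),
# i.e. in the 'one in two or three' range - from an applicant pool
# that is only about one-fifth privately educated.

The qualitative point does not depend on the exact assumed gap: any selective cutoff applied to a higher-scoring minority produces over-representation well above that minority's share of applicants.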

Of course, the left does *not* want a *fair* university admissions system, but wants one that is government controlled and which is biased in favour of left voters - they want a system of preferences for their supporters.

So they lie about the numbers.

Remember: the proportion of private school educated kids aged 16 plus is about 18%, not 7%.

Byzantine Theocracy - in brief (Steven Runciman)

"[The Byzantine Empire's constitution] was based on a clear religious conviction: that it was the earthly copy of the Kingdom of Heaven. (…)

"It saw itself as the universal Empire. Ideally it should embrace all the peoples of the earth, who, ideally, should all be members of the one true Christian Church, its own Orthodox Church.

"Just as man was made in God’s image, so man’s kingdom on earth was made in the image of the Kingdom of Heaven. Just as God ruled in Heaven, so an Emperor, made in his image, should rule on earth and carry out his commandments.

"Evil had made its way into God’s creation, and man was stained with sin. But if the copy – the Greek word was mimesis, ‘imitation’ – could be achieved, with the Emperor and his ministers and counselors imitating God with His archangels and angels and saints, then life on earth could become a proper preparation for the truer reality of life in Heaven. (…)

"Constantine was lucky in having as his biographer and panegyrist Eusebius of Caeserea (….).

"According to Eusebius the triumph of history had now come, when the Roman Emperor had accepted the Christian message. He was now the wise king who was the imitation of God, ruling a realm which could now become the imitation of Heaven. (…)

"The king is not God among men but the Viceroy of God. He is not the logos incarnate but is in a special relation with the logos. He has been specially appointed and is continually inspired by God, the friend of God, the interpreter of the Word of God. His eyes look upward, to receive the messages of God. He must be surrounded with the reverence and glory that befits God’s earthly copy; and he will ‘frame his earthly government according to the pattern of the divine original, finding strength in its conformity with the monarchy of God’.

"…by and large, the Eusebian constitution survived in Byzantium down the centuries. It was never a legal constitution, so it could be adapted to suit the needs of the time. Roman traditions lasted on to temper it and remind the Emperor that while he represented God before the people, it was his duty also to represent the people before God.

"It never took root in the West, where it faded out when the practical power of the Empire declined. Western thought preferred the rival conception of Saint Augustine’s City of God.

"But to Byzantium it gave a sense of unity, of self-respect and of divine purpose that sustained the Empire to the last. (…)

"No form of government can survive for very long without the general approval of the public. (…) The ordinary man and woman in Byzantium believed their Empire to be God’s holy empire on earth, with the holy Emperor as representative of God before the people and the representative of the people before God.

"For eleven centuries (…) the theocratic constitution of the Christian Roman Empire was essentially unchanged.

"No other constitution in all the history of the Christian era has endured for so long."


From: Steven Runciman. The Byzantine Theocracy. Cambridge University Press, 1977.

***

Comment.

The Byzantine Empire was the most sustainedly-devout Christian society of history so far, and it also had the most enduring political constitution.

This combination gives Byzantium a unique status.

Byzantium is therefore deserving of particular study and reflection; and, for Christians, the essence of Byzantium might legitimately serve as an ideal aim for worldly human society.

Tuesday 13 July 2010

"Why not?"

"Why not?" is a phrase and attitude that is used to justify destruction of existing social institutions.

"Why not?" combines with a short attention span. When a wholly-conclusive answer to "Why not?" cannot instantly be provided in a single self-evidential sentence; then "Why not?" wins and carries the day.

"Why not?" is asked with an implicit assumption that anyone who disagrees thereby approves and supports all that has ever been wrong with whatever is being challenged.

By contrast, the future world opened-up by the application of "Why not?" can be painted wholly positively, with no significant disadvantages.

"Why not?" is an attack on moral comparison and policy realism.

"Why not?" feels itself to embody the free, fun-loving spirit of the bohemian counter culture.

"Why not?" includes everyone in its hopes.

Yet "Why not?" sees itself as a hard-nosed realism - those who try to explain exactly why not, are seen as engaged in fine-spun logic-chopping, or speculative prediction of the future. Anything less that complete and utter instant collapse is seen as inadequate to refute the immediate application of "Why not?".

"Why not?" represents infinite hope versus uncertain prediction.

The power of "Why not?" comes from the widespread social assumption in the mainstream media and among intellectual elites that the onus of proof lies upon those who assert "Not!"

"Why not?" is only applied, however, to selected and left-approved targets. Those who try to apply "Why not?" to leftist principles and practices are instantly identified as moral monsters.

When non-leftists try to use "Why not?" against the left, the most subjective, sensationalist, loose predictions of possible – probable - vast, ramifying harms and humiliations to approved groups are allowed to refute it.

"Why not?" in practice supports left-approved freedom, and crushes all other freedoms.

Monday 12 July 2010

Bronowski's habit of truth (now lost)

Jacob Bronowski (1908-1974) invented the term 'the habit of truth' to describe the fundamental and distinctive ethic of science: the main foundation upon which was built the success of science in providing the means (knowledge) for mankind to shape the natural world.

Bronowski emphasized this, since it was (and is) often imagined that science is a morally neutral activity. But although scientific knowledge is indeed morally neutral (and can be used for good or evil) the practice of science (including being a scientist) is indeed a moral activity - based on the habit of truth.

He argued that for science to be truthful as a whole it is not sufficient to aim at truth as an ultimate outcome, scientists must also be habitually truthful in the ‘minute particulars’ of their scientific lives. The end does not justify the means, instead the means are indivisible from the end: scientific work is ‘of a piece, in the large and in detail; so that if we silence one scruple about our means, we infect ourselves and our ends together’.

***

The idea is that, to be successful in terms of the test of shaping the natural world, scientists (and scientific communications) must speak the truth as it is understood. Indeed, I think it likely that the social structure of science is at root a group of people who seek truth and speak truth habitually (and if or when they cannot be truthful, they are silent).

Bronowski perceived that societies which abandoned, indeed persecuted, the habit of truth – such as, in his time, the USSR and Nazi Germany – paid the price in terms of losing their ability to perceive or generate the underlying knowledge of reality which forms the basis of shaping the natural world. (Note – these were societies which had the habit of truth in science, and then lost it.)

This declining ability to shape the natural world was concealed with propaganda, but such concealment could only be temporary since the cause of the decline was strengthened by every attempt to deny it.

***

Having grown up under the influence of Bronowski (for good and for ill) and also this distinctive morality of science, I have witnessed at first hand the rapid loss of the habit of truth from science: at first an encapsulated loss whereby scientists continued to be truthful with each other (that is, truthful in the sense of speaking the truth as they see it) while lying to outsiders (especially in order to get grants, promote their research, and to deflect criticism); the situation degenerating swiftly to the final surrender whereby scientists are no longer truthful even with themselves.

At the same time I have seen hype (i.e. propaganda) expand from being merely a superficial sheen added to real science in order to make it more interesting to the general public, to the present situation where hype defines reality for scientists (as well as everyone else) – where propaganda is so pervasive that nobody can know what – if anything – lies beneath it (there is, indeed, no ‘beneath’ since by now hype goes all the way through science from top to bottom).

At the end of his life, Bronowski saw this coming, in its early stages, and wrote an essay entitled The Disestablishment of Science about the need for science to be separated from the state. This was necessary, he argued, because the morality of government and the morality of science were so different.

***

As I understand it, Bronowski’s major distinction is between government’s focus on ‘expediency’ or direct short-term capability – which is substantially power over human behavior by propaganda and coercion plus already-available and useable technology; and science’s indirect generation of long-term capability – which is substantially the result of greater knowledge leading to greater efficiency (more power per person).

“the hidden spring of power is knowledge; and more than this, power over our environment grows from discovery.”

Bronowski assumed that enlightened self-interest (i.e. long-termism) would be a strong force to maintain the independence and honesty of science against its erosion by short-termist government expediency.

This assumption was indeed crucial to Bronowski’s philosophy – which was atheist and utilitarian. He needed to believe that humanity needed to be and *would be* rational, sensible and far-sighted in its self-management; that humanity sought capability as a primary aim (not as an accidental by-product); and he also needed to believe in the ‘democracy of intellect’: that humanity was intrinsically unified in terms of motivation and capability, so that science was basically comprehensible to all (or the mass of) humankind and that the primacy of the habit of truth was also a universal aspiration.

The decades have convinced me that Bronowski was factually wrong in several of his key assumptions, and this explains why the kind of rational ‘humanism’ Bronowski espoused has proven powerless to arrest the decline in the habit of truth and has indeed been a major collaborator in the erosion (the apparatus of hype and propaganda is staffed mostly by rational humanists, and justifies itself and its activities using rational humanist reasoning).

***

At root, as I understand it, Bronowski’s validation of science was power: the increased power it gave humanity, which was undeniable in terms of the vast and cumulative reshaping of the world which could be seen from the industrial revolution onwards.

Bronowski hoped that this power would be disciplined and moralized by the discipline and morality which itself generated the power: that is, by science. So his vision was of a society based on science becoming organized according to the morality of science, and thereby sustaining that science upon which it depended.

For Bronowski, science was therefore validated by the power it created, and power as an aim was validated by (long term) domination (i.e. in the long term the most scientific society would also be the strongest).

As an auxiliary justification of this seeking after power, Bronowski brought in an ethic that mankind’s deepest desire and ultimate destiny was the perpetual expansion of power; he claimed to see this in the shape of history (the ‘ascent of man’).

This was indeed a moral principle for Bronowski – but in order to avoid the obvious problems of tyranny and aristocracy, he also needed to believe that the conditions for generating science (and power) were intrinsically ‘democratic’ – that in the long term the diffusion of power, the perpetuation of freedom, were two sides of the same coin of society becoming scientific in its mass.

***

From Science as a Humanistic Discipline:

“…science as a system of knowledge could not have grown up if men did not set a value on truth, on trust, on human equality, and respect, on personal freedom to think and to challenge, [… these are the] prerequisites to the evolution of knowledge.”

My perspective is that ‘men’ in general did not value these things, but scientists did – and that these values are indeed prerequisites.

Mass scientific competence and the dispersion of political power among citizenry were assumed to be linked phenomena – and mass education in science (including the morality of science) was therefore the basis of both power and freedom.

It now seems to me that Bronowski was wrong about the wellsprings of human motivation, and was engaging in wishful thinking concerning the basis of viable human societies. He grossly underestimated the intrinsically human oriented, short termist, selfish, nepotistic character of human nature; and failed to see the rarity of mathematical and scientific modes of thinking.

Far from being universal, the scientific way of thinking and the habit of truth is so rare in human history and across human societies as to look like a local and perhaps temporary ‘sport’ rather than a fundamental property of mankind.

***

Bronowski was also wrong about the hoped-for tendency for the desire for power intrinsically to regulate itself in a long-termist fashion, and I regard his installation of power-seeking as a primary virtue as an instance of Nietzschean moral inversion – rather than an insight.

After all, the secular scientist (or humanist), for all his virtues, is very often a prideful egotist with an insatiable lust for status; and when he subscribes to an ethic of power he will often tend to justify himself as an instrument for the betterment of the human condition.

But the past decades have certainly confirmed that Bronowski was correct about the consequences of abandoning the habit of truth. Bronowski would have been utterly appalled at the pervasive, daily, habitual dishonesty of researchers (especially the leading researchers) in the largest and most dominant world science: medical science.

And as for the Lysenkoism of Climate Science… he might have been darkly amused at the defense of pervasive, deliberate, fundamental collusion and lying on the grounds (perfectly accurate!) that this was statistically *normal* behavior in modern science.

But the world did not heed Bronowski’s warnings in The Disestablishment of Science, and the outcome of science becoming dependent on government funding has been wholly in line with his direct predictions.

As he wrote in Science as a Humanistic Discipline: “… science is cut off from its roots, and becomes a bag of tricks for the service of governments.”

“A bag of tricks for the service of governments” – what a perfect description of a major, mainstream modern ‘science’!

Sunday 11 July 2010

Reflections on Charles Murray's Human Accomplishment and genius

Charles Murray's book-length quantitative analysis, Human Accomplishment, made a big impact on me. I 'brooded' over it for quite a while, especially the summaries and speculations concerned with the possible cultural causes of 'genius'.

I have always been interested in 'genius', and have read many biographies of geniuses in a range of endeavors.

And on the whole, I subscribe to the Great Man theory by which specific individuals shape the course of history - some of these individuals do exceptional damage, others are exceptionally creative, while of course some do both.

And therefore I regard the ability of a society to produce potential Great Men, and to embody the conditions they need to make a difference, as a major influence upon it - and this ability has been very unevenly distributed between societies across space and time.

For example, the modern world - the kind of society characterized by growth in scientific knowledge, technological capability and economic production, which took off in Great Britain and became apparent in the late seventeenth and early eighteenth centuries - and spread from there; this kind of society I believe was probably driven by the work of numerous Great Men (or 'geniuses') who produced qualitative advances (or 'breakthroughs') across a wide range of human activities.

I believe that these numerous breakthroughs required Great Men (i.e. an adequate supply of GM was necessary but not sufficient), but once made could be exploited by ordinary men.

Most histories of society took the form of a series of mini-biographies. For example, Jacob Bronowski's Ascent of Man TV series was, for most of its length, focused on a series of specific individuals. And the implication was that this was not just a convenience for the purposes of teaching and entertainment, but an account of how things really, necessarily happened.

Up to the 1950s it was natural for Britons to focus on Great Men, since they were living among us, and each new generation brought forth a fresh supply - so many, indeed, that only a sample could become household names.

Then, from the mid 1960s and through the later decades of the century, people began to notice that the supply of GM seemed to be drying up. This went along with various fashions for denying the importance of GM in human history, and attributing change to other forces (such as class); and with human affairs increasingly being organized bureaucratically, in ways that implicitly denied the need for GM, and indeed sought to replace human creativity and genius with explicit and predictable procedure.

By the mid 1980s I noticed that the last real English poets were dying and that there was nobody to replace them. For the first time in several centuries, there was not one single living poet of real stature.

Looking around, the same situation was looming in science - and by now there are just a few elderly remnants of previous generations who might be regarded as geniuses. Medical breakthroughs also began drying up at about this time (although there have not been many major medical geniuses, according to Murray’s lists).

So apparently the age of genius is over for Britain, which probably means the age of progress via multiple breakthroughs is over; and the same situation seems to prevail everywhere else - so far as I can tell. If genius was the driver of the modern world, this means that the modern world is also over (unless you believe that genius has now effectively been replaced by bureaucracy – Ha!).

Whatever it was that created the supply of geniuses and the conditions in which they could make breakthroughs has changed. I do not know whether there are still *potential* geniuses being born, but the whole motivational structure of society is hostile to genius and it is likely that individuals who would have grown to be potential Great Men in an earlier phase of society, nowadays have their motivation and sense of self poisoned. Instead of trying to achieve great things, such people would now probably pursue a great career, or would simply find themselves fish out of water.

I find myself ambivalent about this. Of course I vastly prefer a society conducive to genius to one being destroyed by bureaucracy. And if human history is conceptualized – as Bronowski does – in terms of a story of progressively increasing power to shape nature (by increasing understanding of its underlying structure), then the prospect of a massive decline in human power is dismaying. It is also dismaying from the perspective of mass human happiness – the prospect of mass violence, displacement, starvation, disease etc.

Yet, realistically, modernity was not planned; neither were the modernizing societies (such as late Medieval and early Modern England) in any sense designed to nurture, or provide opportunities for, genius. The whole thing was an unplanned by-product, and the age of genius was accidental.

Indeed, it was transitional, never stable, containing the seeds of its own destruction – like so many things. The geniuses were usually transitional figures who – over the course of their own lives – rejected the religious and traditional societies into which they were born. In their own lives they sometimes combined the strengths of the traditional society of the past and the progressive society which was emerging (as a result, partly, of their own work).

Yet of course the transitional phase is necessarily temporary, evanescent, cannot be sustained – and the generations of fully modern people, who are born into the world created by the geniuses – are one-eyed, feeble, and lack the source of strength of traditionalism. They (we) are post modern hedonists, for the most part – consumers, not creators.

I now tend to regard modernity as a temporary aberration from the course of human history. It arose from an accidental conjunction of genetics and society; and the effect of genius was to destroy the genetics and society which had caused itself. Whether it would have been good to sustain modernity (and reform it – because its vices are intolerable and have grown exponentially) I don’t know for sure.

But I do know that we have not even tried to do this: modernity has not even tried to sustain itself, but has instead parasitically exploited the heritage of genius; so the question is now unanswerable empirically.

I now see human choice, or at least our destiny, as lying between traditional societies – a choice between the kinds of human societies which existed roughly between AD 500 and AD 1500.

Saturday 10 July 2010

Moral inversion in secular modern society

The most striking aspect of modern secular society, which would have amazed and horrified our ancestors, is the moral inversion by which we have redefined bad as good, sin as virtue.

This has happened as part of the modern rejection of Christianity, and as a solution to the fundamental paradox of the human condition – the conflict between spontaneous human desire and spontaneous human morality.

It is, at root, this moral inversion which is causing secular modern societies to commit suicide by a combination of denial of danger and by deliberate policy.

***

It seems that our literate ancestors (such as the ancient Jews) all spontaneously recognized that for a person to live according to their spontaneous desires - living primarily for seeking gratification and avoiding (or minimizing) suffering - was morally wrong.

This was so obvious that it needed (and indeed needs) no argument - it is the natural moral law for humans that a life aiming at selfish hedonism is intrinsically wicked: that is, wicked as a basic stance, not merely in terms of its consequences (which vary according to specific circumstances).

Yet it was also recognized that at some level, for humans as they are now, it is also natural and spontaneous to be selfish and hedonic.

So there was a conflict between the way that humans were 'set-up' to be self-gratifying and the moral sense by which we knew that this was wrong.

***

This was the basic situation, the human condition, as perceived by pretty much all humans throughout history - that of conflict.

And therefore the situation was bleak in the extreme, since there seemed to be no solution.

Of course, all humans also believed (in some sense) in the soul, and its potential persistence after death (in some form: perhaps as a ghost, perhaps in Hades - not the same as hell - perhaps returning to be recycled or reincarnated).

The ancient Jews' attempted solution was The Law, which prescribed morality in terms (essentially) of behaviour. If a man could live according to the law, his life (in this world) would be good - although the end was the same for all - good and bad - the ghostly and depersonalized realms of Sheol/ Hades.

However, actually men could *not* live by the law. It was impossible, because of their nature. They could never achieve that to which they aspired. The human condition was tragic.

***

This was the basic human situation, about which humans could do nothing, and from which humanity needed to be rescued.

The Christian solution, the Good News, was that God's Grace had provided a solution, since the incarnation meant that God had taken-up humanity; and if a person proclaimed in their heart Jesus as Lord, and repented of his (inevitable) sinful nature, then there would be forgiveness and the soul would (instead of losing humanity in Hades) be granted eternal life with God.

Man's soul after death would become God-like instead of a gibbering, depersonalized ghostly form of persistence.

So, the Christian message is that belief and repentance in this life can lead to a solution of the fundamental paradox, but only in the next world, after death (however, this life is temporary, while life after death is eternal).

***

For whatever reason, the Western elite ruling class became increasingly atheist from the advent of modernity (?c 1700). Since the elite ruling class disbelieved in the soul, they were this-worldly; and since they were this-worldly they wanted as much satisfaction from life as possible (there being nothing else).

But spontaneous natural morality gets in the way of worldly self-gratification – so spontaneous natural morality must (somehow or another) be rejected.

Yet since morality really is spontaneous, it cannot be rejected.

***

So emerged moral inversion: the morality that (contrary to the instinct of spontaneous morality) this-worldly self-gratification is the proper primary aim in life.

From this derives the many specific new ‘Laws’ of modernity; which state that what we used to think was necessary is actually un-necessary, that what we thought was bad was actually good, and what we used to think was good was actually bad.

For secular moderns the only *real* sin is to believe in the reality of sin.

***

This is the current situation, this the secular modern ‘solution’ to the fundamental paradox of the human condition – that life in this-world can be, should be, harmonious - *if only* we recognize that our spontaneous self-gratification is actually morally necessary and should be the primary explicit goal of human endeavor.

And because this solution actually solves nothing, and is merely a statement, a wish-that-this-was-so; it has necessarily been embodied in the *coercive* beliefs, practices and laws of atheist totalitarian states (notably the USSR, National Socialism and Communist China) and this process is now advanced in all Western societies (by enforcement of what is termed ‘Political Correctness’).

So the citizens in modern secular societies are not merely *encouraged* to flout the natural morality which they cannot help but feel, they are increasingly *forced* to flout natural morality. They are compelled to live (and to think and to believe) as if hedonic gratification was the primary value in a life which ends with death and extinction.

And there is no hope of resolution in this world, nor the next world (the existence of which is denied).

***

Since there is no hope of resolution, the only alternative is distraction – to lose oneself in hedonic gratification: such that intense, continuous self-gratification *obliterates* our awareness of the fundamental paradox.

Despair, distraction, denial, self-indulgence… and if these do not work, then some kind of suicide of awareness.

By this analysis, the Decline of the West is a willed societal suicide driven by the mass psychological consequences of top-down, enforced moral inversion.

A common atheist misunderstanding

Atheists commonly misunderstand it when Christians are critical of a primarily this-worldly and hedonic life - i.e. a life dedicated to maximizing gratification and minimizing suffering.

Atheists commonly misunderstand Christians to be arguing that a worldly and hedonic life *predisposes* a person to commit sins (where 'sins' are understood by the atheist as a list of behaviours that transgress the moral law).

But this is a misunderstanding of the Christian belief about the nature of sin.

The Christian perspective is that to live a this-worldly and hedonic life - a life orientated primarily toward personal gratification - precisely *is* the state of sin.

And if the atheist does not understand what the above means, then he does not understand the Christian message, and is arguing against a Straw Man.

Friday 9 July 2010

Is scientific progress a result of genius, elite, or mass effect?

Scientific progress is talked about in three main ways, depending on the numbers/ proportion of the population involved in generating this progress:

1. Genius - 10s to 100s of people per generation – a fraction of 1 percent of the population.

Science is the product of a relatively small number of geniuses - without whom there would be no significant progress.

Therefore an age of scientific progress can be boiled down to the activity of tens or hundreds of geniuses; and the history of science is a list of great men.

2. Elite - 1000s to 10,000s of people per generation – a few percent of the population

Science is the product of an elite of highly educated and trained people, usually found in a relatively small number of elite and research-orientated institutions, linked in an intensely intercommunicating network. Without this elite, and these elite institutions, there would be no significant progress.

The history of science is a history of institutions.

3. Mass - 100,000s to millions of people per generation – a large percentage of the population, ideally most of it.

Science is the product of a 'critical mass' of scientifically orientated and educated people spread across a nation or culture; and whose attitudes and various skills add or synergize to generate scientific progress. If society is not sufficiently 'scientific' in this sense, then there will not be significant progress.

The history of science is a history of gradual transformation of populations - mainly by educational reform.

***

A (common) twist on this is the idea that humans have vast untapped potential - and that this potential might somehow be activated - e.g. by the right kind of education; leading to an elite of geniuses, or a mass-elite, or something...

Perhaps the mainstream idea nowadays is a mushy kind of belief/ aspiration that science is essentially elite but that the elite can be expanded indefinitely by education and increased professionalization.

Another variant is that scientific progress began as based on genius, then became elite-driven, and nowadays is a mass ('democratic') movement: however, this is merely a non-historical description of what has actually happened (more or less) - underpinned by the assumption that scientific progress has indeed been maintained.

But I do not accept that assumption of continued progress (given the vastly increased level and pervasiveness of hype and dishonesty in science).

Certainly there seem to be historical examples of scientific progress without need for a prior scientific mass of the population, or even a pre-existing elite gathered in elite institutions.

***


Of course, nowadays there are no geniuses in science, so admitting that genius is necessary to significant scientific progress entails admitting that we are not making progress.

Nonetheless, my reading of the history of science is that a sufficient supply of genius is necessary to significant scientific progress (although history has not always recorded the identities of the presumed geniuses) – at any rate, science has often made significant progress without elites in the modern sense, and elites often fail to make progress.

Thursday 8 July 2010

The SSRI story - corruption of medical research goes back to the 1960s

The story of the development of the SSRI (selective serotonin-reuptake inhibitor) drugs – the ‘Prozac’ group of ‘antidepressants’ – has been investigated thoroughly by David Healy (http://en.wikipedia.org/wiki/David_Healy_%28psychiatrist%29) in several books and papers such as The Antidepressant Era (1998) and Let them eat Prozac (2004) - http://www.healyprozac.com/.

***

In the late 1960s Arvid Carlsson (later a Nobel prizewinner) realized that there was something different about the tricyclic antidepressant Clomipramine – which was used in treating obsessive compulsive disorder and various types of unusual or resistant ‘depression’. He discovered that this was probably because it had a greater effect on blocking the reuptake of serotonin (5-HT) than noradrenaline, and measured this difference in several drugs. Carlsson published a paper in 1969 which identified the antihistamines chlorpheniramine (especially) and diphenhydramine as very likely to be valuable drugs of a type similar to Clomipramine (but with different side effects, less cardio-toxic and safer in overdose) (http://www.medical-hypotheses.com/article/S0306-9877%2805%2900647-X/abstract).

Chlorpheniramine is available in many formulations, including the well-known Piriton, which is used for hay fever; while diphenhydramine was often used as a nocturnal cough suppressant (e.g. in one of the Benylin formulations) and as a sleeping medication (e.g. Nytol).

So, here were antihistamine drugs which were already used by millions and considered safe enough to be available without prescription; and with a profile suggesting that they might make a new category of psychotropic drug with similar uses to clomipramine. In effect Carlsson discovered the SSRIs in 1969 or thereabouts.

But the pharmaceutical companies would not do trials on these agents, since their patents had expired, and this knowledge was not disseminated – indeed it is barely known even today. Instead, the pharmaceutical companies ‘concealed’ this knowledge for a decade and a half until they had developed patent-protected compounds – first zimelidine (which was too toxic), then later fluoxetine, (Prozac) and the other drugs later marketed as ‘SSRIs’.

***

Clearly Big Pharma, and the university scientists and academic/ research-oriented psychiatrists, were not even *trying* to discover useful new treatments - if anything they were concealing them. Neither were clinicians sufficiently interested (or knowledgeable) to read and understand the scientific literature which implied that already existing antihistamines would have a valuable role in psychiatry – so psychiatrists were not trying to discover drugs to help their patients – they were only interested when these were ‘new’, glamorous and prescription-only drugs – not old-fashioned drugs available without prescription at the local drugstore or chemist.

Then, when the SSRIs reached the point of pre-marketing trials around the early 1980s, the pharmaceutical companies were not really interested in trying to find out what these drugs really did, how they might best be used, or their harms and dangers. The obvious use was in the treatment of anxiety – but David Healy (in The Antidepressant Era, 1998) has documented how anti-anxiety drugs were at that point regarded as intrinsically addictive due to emerging concerns about the benzodiazepines (the Valium group of drugs), and there was no interest in trying to launch new anti-anxiety agents into a market where they would be regarded as addictive. So the focus was on developing SSRIs as ‘antidepressants’.

Irving Kirsch (in The Emperor’s New Drugs) has documented that by objective and rigorous criteria applied to the randomized trial evidence, the SSRIs are not effective as antidepressants. Yet, by selective and distorted reporting of the trials, the SSRIs were nonetheless licensed and marketed as antidepressants.

So the pharmaceutical corporations were not – as of the late 1970s and early 1980s – interested in telling the truth about what they did know, and were prepared to distort and to conceal even thirty years ago – this kind of behavior is not a recent phenomenon.

Another distortion and concealment related to SSRIs and suicide. Thanks mainly to the work of David Healy, it is now acknowledged ‘officially’ that SSRIs do indeed have a rare side-effect of inducing suicidal behaviour – for this reason they were labelled with a ‘black box’ warning by the FDA (the Food and Drug Administration in the USA).

Having found a raised rate of suicide and suicide attempts in the early placebo-controlled trials of SSRIs, Healy gave SSRIs to some normal control subjects, and a couple reported unfamiliar violent impulses. Indeed this kind of feeling (akathisia) and behaviour is found with the neuroleptic/ antipsychotic drugs, which are chemically related to the SSRIs (also being chemically modified from antihistamines).

The behaviour is somewhat paradoxical, given that both SSRIs and antipsychotics usually tend to reduce or flatten emotions in most people – making them unemotional. Nonetheless in some people at some times both classes of drugs seem to produce aggressive impulses.

So, it is clear that, as of the 1980s at least, pharmaceutical companies were actively concealing the harmfulness of harmful drugs.

***

I do not believe that SSRIs are ineffective drugs, but I do agree that they are ineffective ‘antidepressants’ when depression is conceived in the classic way as endogenous depression or melancholia (a state of despairing emotional un-reactivity, reduced thought and movement, reduced food intake etc). SSRIs are sometimes effective in treating people with emotional instability, and in reducing anxiety – and that is where they seem to have found their niche, in the treatment of anxiety, panic, phobias, post-traumatic stress, obsessive compulsive disorder etc.

But getting to this point of understanding the value of SSRIs took a long time, much longer than it should have done – and drug company marketing and the medical research ‘evidence’ hindered rather than helped the process. Presumably many millions of people have been ineffectively or harmfully treated with SSRIs, while others who would perhaps have benefited were not tried on the drugs because they were not ‘depressed’.

Taking the SSRI story in overview, it is deeply worrying in terms of what it tells us about the motivations of pharmaceutical companies, and clinicians, over the past several decades – and its implications for interpreting the medical research literature.


This is because science is hard to do: if you are not even trying to discover useful treatments you certainly will not succeed. And if you are not even trying to be truthful, you will certainly not generate truth by accident.

***

In sum, what this story tells us is that:

1. Pharmaceutical companies were concealing information of clinical value as long ago as the late 1960s. This is not a recent development – although it has gotten worse.

2. SSRIs were not a new class of drugs. Pharmaceutical companies were not primarily trying to discover useful new classes of drugs, but chemically to slightly-modify old drugs to produce patentable agents which were then hyped as entirely new classes of wonder drugs. No new class of useful drugs has been discovered in psychiatry since the 1950s. Shocking.

3. There was a delay of about 15 years between discovery of the concept of SSRIs and the marketing of patented SSRIs – and although for those 15 years it was known that SSRI-type drugs were available for use, they were never used. This exhibits complete disregard for the needs of patients.

4. Some old drugs (e.g. chlorpheniramine, diphenhydramine) of the same class as new and expensive prescription-only drugs (Prozac, Paxil) are cheaply available without prescription, ‘over the counter’. While they are more sedative than modern ‘SSRIs’, these OTC drugs are likely to be similarly effective but safer, due to greater experience in their usage.

5. When SSRIs were being investigated the investigation was focused on developing them as antidepressants, because the market for antidepressants was more promising than the market for anti-anxiety drugs – or any other type of drug. So, the SSRIs were never investigated for what they actually did, they were investigated in relation to what the pharmaceutical companies hoped they would do.

6. When the trials were conducted, it was discovered that the SSRIs were all-but-ineffective as antidepressants (i.e. ineffective at treating endogenous depression/ melancholia, ineffective at treating hospital inpatients with depression) – i.e. SSRIs were ineffective at doing what the pharmaceutical companies hoped they would do. But instead of acknowledging this fact, the strategy was to distort and misrepresent the trials, and also to redefine ‘depression’ and expand its diagnosis - to pretend that the drugs were effective antidepressants.

7. The rare but extremely serious problem of increased rates of suicide attempts and actual accomplished suicides among SSRI-takers compared with controls (an effect which is biologically understandable and plausible given the chemical structure and ancestry of the SSRIs) was concealed and denied – for decades!

8. Overall, the official SSRI research literature is pervasively unsound and untrustworthy. I cannot see any way of correcting for such an extreme and bottom-up degree of selection and bias, and I believe therefore that the official medical research literature on SSRIs should be ignored by serious scientists and physicians of integrity.

9. The second implication is that (unless the case of the SSRIs is unique – which seems highly unlikely, given that they were such a big selling and profitable example of modern drug marketing) - the whole official medical research literature going back at least three decades, is pervasively unsound and untrustworthy, and therefore must be ignored.

10. In such a situation as prevails now, it seems that there is no reliable or discernible relationship between the official medical research literature and the actualities of science; and also no relationship between the clinical literature and the reality of clinical experience. Since there is no effective mechanism to maintain the quality of the medical research literature, and no motivation to do this, and no honesty; it is quite possible that overall medicine is getting worse, rather than better.

11. But – such is the pervasiveness of corruption and dishonesty in relation to medical research and treatment - apparently nobody with any influence is interested by any of this.

12. In other words, nobody with influence is nowadays interested in really discovering, or even significantly improving, medical treatments. Even worse, nobody with influence is even trying to tell the truth about medical treatments.

***

So how are things in the real world of science and medicine, underneath the hype and deception?

The answer is that I do not know, indeed nobody knows. Indeed, you cannot get ‘underneath’ the hype and deception, because the hype and deception goes all-the-way-down.

Wednesday 7 July 2010

Growth and the expectation of growth in scientific knowledge

We have become used to growth in scientific knowledge, and expect growth in scientific knowledge. This expectation at first shaped reality, then became reality, and eventually displaced reality. The link between expectation and actuality was broken and the world of assumptions took over.

***

The expectation that scientific knowledge will grow almost inevitably (given adequate 'inputs' of personnel and funding) is epitomized by the professionalization of scientific research (making scientific research a job) and the expectation of regular and frequent scientific publication – the expectation of regular and frequent publication would only make sense if it was assumed that scientific knowledge was accumulating in a predictable fashion.

We nowadays expect a growth in the number of scientific publications over time, and a growth in the totality of citations – these are fuelled by increases in the numbers of professional scientists and of journals for publishing science. We assume that there is an infinite amount of useful and new science waiting to be discovered, and an infinite pool of people capable of making discoveries.

The economist Paul Romer – and many others – have built this into theories of the modern economy – they argue that continued growth in science and technology fuels continual improvement in productivity (economic output per person) and therefore growth in the economy. And this is kept going by increasing the investment in science and technology. The idea is that we are continually getting better at scientific discovery, investing in scientific discovery, therefore modern society can continue to grow. (Yes, I know it doesn’t make sense, but...)

***

But how would we really know whether science was growing? I mean, who is really in a position to know this?

Who could evaluate whether changes in science, and increased amounts of self-styled scientific *stuff*, actually corresponded to more and better science?

When – as now – scientific growth is expected, and when society acts-upon the expectation, we have an overwhelming *assumption* of growth in science, an assumption that science *is* growing – but that says nothing about whether there really is growth.

Because when people assume science is growing and when they think they perceive that science is growing, this creates vast possibilities for dishonesty, hype and spin. Because people expect science to grow, for there to be regular breakthroughs, they will believe it when regular breakthroughs are claimed (whether or not breakthroughs have actually happened).

***

But how if there is really no growth in scientific knowledge? Or how if the real growth is less than the assumed growth? How if there is actual decline in real scientific knowledge – how would we know?

Science – as a social system – resembles the economy. In the credit crunch of 2008 it was realized that the economy had not really been growing, but what we were seeing was actually some mixture of increasing inflation, increasing borrowing, and rampant dishonesty from many directions. (It is the rampant dishonesty that has prevented this from being understood – and this tactical dishonesty is itself no accident.)

So we discovered that we were not really getting richer, but we were living off ever more credit, and the value of money was less than we thought; and we (or at least I) discovered that we could not trust anybody to tell us anything about what was going on or why it had happened. They were not even trying to discover the truth, they were trying to build their careers (politicians, economists, journalists – all careerists). (To be fair, most of them are explicitly nihilists who do not believe in the truth – so why should we expect them to tell it?)

Truth about the credit crunch was something we amateurs needed to work out for ourselves, as best we could.

***

I believe that science is in the same bad state as the economy, but probably even worse.

In science, what masquerades as growth in knowledge (to an extent which is unknown, and indeed unknowable except in retrospect) is not growth in knowledge but merely an expansion of *stuff*, changes in the methods of counting, and so on.

Almost nobody in science is trying to discover the truth; indeed, scientists are embarrassed even to talk about the subject. Not surprising that they are embarrassed!

For instance, virtually every influential scientific article is now hyped to a variable but vast extent (the honest ones are buried and ignored).

Multiple counting is rife: progress is claimed when a grant is applied for, again when the grant is awarded, and even while the work is still happening – since scientific progress is assumed to be predictable, a mere function of resources, capital and manpower. Credit for a scientific publication is counted for all of its (many) authors, for all the many forms in which the same stuff is published and republished, for the department and also for the university where the work was done, for the granting agency which provided the funds, and for the journal where it was published – everyone grabs a slice of the ‘glory’.
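
To see how far such multiple counting can inflate the apparent output, here is a deliberately crude tally; the claiming parties listed are hypothetical examples, not data.

    # Hypothetical illustration of multiple counting: one piece of work, counted
    # once per claiming party, inflates into many separate units of 'progress'.
    claims_for_one_work = [
        "grant application", "grant award", "interim progress report",
        "author 1", "author 2", "author 3", "author 4", "author 5",
        "department", "university", "funding agency", "journal",
    ]
    print(f"1 piece of work -> {len(claims_for_one_work)} counted claims of progress")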

Credit is given for the mere act of a ‘peer reviewed’ publication regardless of whether the stuff is true and useful – or false and harmful.

Thus the signal of real science is swamped utterly by the noise of hype.

***

Let us suppose that doing science is actually much *harder* than people assume; much harder and much less predictable.

Suppose that most competent and hardworking real scientists actually make no indispensable contribution to science – but merely *incremental* improvements or refinements in methods, the precision of measurements and the expression of theories. And if they personally had not done it, progress would have been slightly slowed but not prevented – or somebody else would have done it.

If science is really *hard*, then this fact is incompatible with the professionalization of science – with the idea of scientific research as a career. Since real discovery is irregular and infrequent, science could only be done in an amateur way; maybe as a sideline from some other profession like teaching, practicing medicine, or being a priest.

Professional science would then be intrinsically phony, and the phoniness would increase as professionalization of science increased and became more precisely measured, and as the profession of science expanded – until it reached a situation where the visible products of science – the *stuff* – bore no relationship to the reality of science.

Professional scientists would produce stuff (like scientific publications) regularly and frequently, but this stuff would have nothing to do with real science.

Or, more exactly: the growing amount of stuff produced by the growing numbers of professional science careerists, whose use of hype would also be growing, would be so much greater than the amount of real science that the real science would be obscured utterly.

***

This is precisely what we have.

The observation of growth in scientific knowledge became an expectation of growth in science and finally an assumption of growth in science.

And when it was assumed that science was growing, it did not really need to grow, because the assumption framed the reality.

***

But if science is as hard as I believe it is; then scientific progress cannot be taken for granted, cannot be expected or assumed.

Our society depends on scientific progress – when scientific progress stops, our society will collapse. Yet so great is our societal arrogance that we do not regard science as something real. Instead science is the subject of wishful thinking and propaganda.

Science is a way of getting at certain kinds of truth, but the way that science works is dependent on honesty and integrity. Our societal arrogance is such that we believe that we can have the advantages of real science but at the same time subvert the honesty and integrity of science whenever that happens to be expedient.

Our societal arrogance is that we are in control of this dishonesty – that the amount of hype and spin we apply is under our control and can be reversed at will, or we can separate the signal from the noise, and re-calculate the reality of science. This is equivalent to the Weimar Republic assuming that inflation was under control when prices and wages were rising unpredictably by the hour.

But we cannot do this for the economy and we cannot do it for science. In fact we have no idea of the real situation in either science or the economy, except that in a universe tending towards entropy we must assume that the noise will tend to grow and swamp the signal. The Western economy was apparently growing but in reality it was increased inflation and borrowing and deception; science has appeared to be growing but the reality is increasing hype, spin and dishonesty. The link between stuff and substance has disappeared.

***

When the signals of economics and science (money and ‘publications’ and other communications) lose their meaning, when the meaning is detached from underlying reality, then there is no limit to the mismatch.

The economy was collapsing while the economic indicators improved; and science can be collapsing while professional science is booming.

But if science is very difficult and unpredictable, and if the amount of science cannot be indefinitely expanded by increasing the input of personnel and funding, then perhaps the amount of real science has not increased *at all* and the vast expansion of scientific-stuff is not science.

If so, then the amount of real science (intermittent, infrequent, unpredictable) has surely not stayed constant but will have actually declined due to the hostile environment. At the very least, real science will be de facto unfindable, since the signal is drowned by ever-increasing levels of noise.

So, the economy was a bubble, and science is a bubble, and bubbles always burst; and the longer the burst is delayed, the bigger the bubble will become (the more air, the less substance), and the bigger will be the collapse.

***

When the economic bubble burst, the economy was much smaller than previously realized - but of course the economy was still enormous. In effect, the economy was set back several years.

But when the scientific bubble bursts, what will be left over after the explosion? Maybe only the old science - from an era when most scientists were at least honest and trying to discover the truth about the natural world.

And, in an era of mindless technical specialization, will there be enough scientists even to understand what was left over?

At the very least, science would be set back by several decades and not just by a few years. But it could be even worse than that.

Tuesday 6 July 2010

Fr Seraphim Rose on the spiritual poverty of our times

"I must tell you first of all that, to the best of our knowledge, there are no startsi today—that is, truly God-bearing elders (in the spirit of the Optina elders) who could guide you not by their own wisdom and understanding of the Holy Fathers, but by the enlightenment of the Holy Spirit. This kind of guidance is not given to our times—and frankly, we in our weakness and corruption and sins do not deserve it.

"To our times is given a more humble kind of spiritual life, which Bishop Ignatius Brianchaninov in his excellent book The Arena (do you have it?) calls life by counsel—that is, life according to the commandments of God as learned in the Holy Scriptures and Holy Fathers and helped by those who are elder and more experienced. A starets can give commands; but a counsellor gives advice, which you must test in experience.

"We do not know of anyone in particular who would be especially able to counsel you in the English language. If this is really needful for you, God will send it to you in His time, according to your faith and need, and without your making too deliberate a search for it."

From a letter excerpted at: http://www.orthodoxinfo.com/praxis/frseraphimspeaks.aspx

Comment: The modern lack of guidance from an 'enlightened' elder, in direct lineal discipleship with the greats of the past, is a feature of many aspects of contemporary life including science and medicine; and something which is likely to get worse.

Then the aspirant must remember and accept that "To our times is given a more humble kind of spiritual life"; also a more humble scientific life, or life as a physician or surgeon.

The modern heroism is then to accept humility, and not to strive for an impressive status which can only be fake.

Monday 5 July 2010

My favourite television - Cricket Writers on TV

My favourite television show - from the perspective of purely self-indulgent entertainment - is Cricket Writers on TV.

Indeed CWoTV is the only thing I regularly watch.

This is a discussion that is currently broadcast on a Sunday morning at about 09.00 hours (Sky Sports 1) and which lasts for one and a half hours.

That is to say, I am provided with one and a half hours (minus advertisements) of four cricket journalists sitting around a table (apparently provided with some coffee and croissants) chatting about the week's news in cricket in great detail (c. 70 minutes worth of detail!).

Yesterday's episode was particularly fine since it featured the genial regular host Paul Allott with my favourite modern cricket writer: Scyld Berry, who is the Sunday Telegraph correspondent and the editor of Wisden. In addition there were the personable and opinionated Stephen Brenkley, and also Vic Marks, who provided a contrasting view to my own on the perennial issue of four versus five bowlers for test matches.

(In fact my own cricketing views are closely modelled on/ copied from those of Scyld Berry!).

I realize that this programme would be regarded as a form of torture by everybody other than myself - therefore, although I provide a link to the audio podcasts below, I do not exactly _recommend_ it; because Cricket Writers on TV is not really intended for you, it is for me - http://www.skysports.com/story/0,19528,12172_6174957,00.html.

I find it heartening, fascinating, and almost magical that Sky Sports would go to the trouble and expense of assembling four journos each week (just the right number for conversation) and producing this show uniquely for little old *me* - and, despite the temptations of commercialism, to have continued broadcasting it for at least the past three seasons.

Well, I am very grateful; and will continue to enjoy it while it lasts.

Remember the name: Cricket Writers on TV.

But you need not bother watching it because it is *my* show.

Sunday 4 July 2010

What kind of pathology causes depression?

For several decades, most psychiatrists have propagated the idea that 'depression' (Major Depressive Disorder, Endogenous Depression, Melancholia) is a brain disease.

For example it is often said that depression is caused by some kind of chemical abnormality involving serotonin (5-HT) or its receptors, or involving norepinephrine or dopamine or something.

But if depression really is a brain disease, then it would have to be a disease of the brain substance, and this pathological process would necessarily need to be detectable using normal medical science.

There are only a limited number of pathologies recognized by medical science, and 'neurotransmitter abnormalities' is not one of them. Abnormalities of messenger chemicals are a consequence, not a cause.

When I was a medical student we were taught a diagnostic scheme called the 'surgical sieve' since it was often used in creating a list of possible ('differential') diagnoses for someone with an unknown problem that might be admitted for surgical assessment. Another name is the pathologic(al) classification.

What happens is that each of the potential pathological causes is considered in turn. There are many versions of the sieve, and many use acronyms to make them memorable - for use in actual clinical practice.

Here is the 'surgical sieve' described in Wikipedia

http://en.wikipedia.org/wiki/Surgical_sieve

1. There are two basic categories of disease - congenital (inborn - for example genetic) versus acquired.

Among acquired pathologies there are:

2. Neoplastic (eg cancer)
3. Metabolic (chemical)
4. Infective (germs)
5. Traumatic (accidents)
6. Autoimmune
7. Vascular (caused by blood vessels)
8. Inflammatory
9. Degenerative (due to age)
10. Iatrogenic (caused by medical treatment - side effects)


Classic melancholia or endogenous depression is rare - probably less than 1 percent of the population would have an episode during their lifetime and it is devastating; usually it requires hospitalization or equivalent, and lasts for several months at least - but not more than a year or two (unless the person commits suicide, or dies of dehydration or starvation). It is perhaps the most intense suffering which a human can experience, survive and recover-from.

So, in melancholia which of the above might be a causal brain disease?

The point is that the pathological cause of melancholia needs to be relatively long lasting (months, a year) but also fully reversible.

What pathological processes fit that pattern?

The short answer is that none of them has been detected in the brain as a cause - not even metabolic disease (which is not a primary cause anyway; or, when metabolism is regarded as a primary cause, it is usually congenital, long-lasting, and not spontaneously reversible or self-curing).

Whatever is the primary cause must be of a type that leads to a disease which lasts some months but potentially completely goes away - so the cause is *not* going to be neoplasia, for instance.

On this basis the clinical pattern of classic depression best fits with either infective or autoimmune disease.
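
The reasoning can be sketched crudely in code: go through the sieve category by category and keep only those whose typical course could both last for months and then fully reverse. The True/False tags below are rough assumptions made for illustration, following the argument of this post; they are not clinical data.

    # Toy sketch of 'considering each pathology in turn' against the pattern of
    # melancholia: months-long, yet fully and spontaneously reversible.
    # The tags are illustrative assumptions, not clinical data.
    sieve = {
        "congenital":   {"months_long": True,  "fully_reversible": False},
        "neoplastic":   {"months_long": True,  "fully_reversible": False},
        "metabolic":    {"months_long": True,  "fully_reversible": False},
        "infective":    {"months_long": True,  "fully_reversible": True},
        "traumatic":    {"months_long": True,  "fully_reversible": False},
        "autoimmune":   {"months_long": True,  "fully_reversible": True},
        "vascular":     {"months_long": False, "fully_reversible": False},
        "inflammatory": {"months_long": True,  "fully_reversible": False},
        "degenerative": {"months_long": True,  "fully_reversible": False},
        "iatrogenic":   {"months_long": True,  "fully_reversible": False},
    }
    candidates = [name for name, pattern in sieve.items()
                  if pattern["months_long"] and pattern["fully_reversible"]]
    print(candidates)  # -> ['infective', 'autoimmune']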

For example an untreated infection will get worse for a while, then will eventually be defeated by the body - in pre-antibiotic days it might take months to recover from some infections.

Autoimmune disease exacerbations (e.g. rheumatoid arthritis) are not well understood, but the diseases may also wax and wane over a timescale of months.

However, it seems unlikely that there is a primary infective cause in the brain, because an infection of the brain itself is an encephalitis, which produces different symptoms from melancholia (especially delirium).

However, depression might well be caused by infective causes elsewhere in the body since immune chemicals travel in the blood; and indeed it is well known that many infections can produce depressive symptoms (as a consequence of immune activation).

Perhaps some people with melancholia also suffer an autoimmune disease - could this primarily be a disease of the brain itself? It seems implausible, since autoimmune changes would produce inflammatory changes that would probably produce encephalitis type symptoms of delirium.

So as with infection it seems likely that melancholia may be a consequence of autoimmune disease elsewhere in the body, but not the brain itself.

In conclusion - it is not plausible that depression (classic melancholia) is a primary brain disease, since the pattern of illness does not fit any of the possible pathological causes; it is more likely to be a consequence of one or more pathological processes (such as infective or autoimmune disease) elsewhere in the body.

On these grounds, I would not be surprised if some episodes of serious endogenous depression/ melancholia turned out often (not always) to have an infective cause - or rather, a variety of infective causes.

This may even apply to autoimmunity, since autoimmunity can be caused by an infection.

So a fruitful investigative strategy might be 'search for infection'; and then treat it, if possible.

Or maybe even try treating presumed infection blind, by trying-out several different antimicrobial agents that cover a range of presumed infective causes?

Humility versus submission

For Christians, repentance is a first step on a short path to forgiveness; but modern secular leftist guilt has no answer.

Repentance and forgiveness versus perpetual guilt - a big difference.

While a Christian aims for humility before God, modern secular leftists can only settle for submission to those people who make them feel guilty.

Humility before God versus submission to Man - a big difference.

Saturday 3 July 2010

Why I read the History of Middle Earth

The History of Middle Earth (HoME) is a series of 12 books (preceded by a similar book named Unfinished Tales) consisting of the writings of JRR Tolkien edited by his son Christopher.

I have been reading them for several years, slowly, off and on, in a piecemeal fashion - and with tremendous satisfaction.

I read Lord of the Rings (LotR) when I was thirteen and for the remainder of my school years re-read it several times, plus everything else by and about Tolkien which I could find. While I was at college I did not read Tolkien very often, but returned to him after finding a copy of Unfinished Tales in a rented cottage, buying a copy of the selected letters secondhand, and borrowing TA Shippey's Road to Middle Earth from the library.

The problem with LotR was that by age 18 I had learned it almost off-by-heart, and it had begun to lose its effect. But now I have seen the movies several times, listened to the radio dramatization, and - most important - been reading early drafts of LotR in the HoME plus much other material on the earlier mythologies - so I have thoroughly confused my memory between what is in the real and finished LotR and all these alternative versions and extra ideas.

I can therefore become completely 'lost' in Tolkien's world, on a long term basis!

Reading the letters, and Shippey, also opened my eyes to the profound intelligence, creativity and wisdom of Tolkien as a man - and his inspired prophecy - so there is a secondary 'philosophical' level at which I now explore Tolkien's world (and there are several good secondary sources to help - perhaps the best being Verlyn Flieger's books).

Essentially, the HoME has become a way to overcome the habituation – the getting-used-to – which is a consequence of mere repetition. Because, when I return to re-read the ‘real’, finished, published Lord of the Rings; I find myself astonished by its quality and coherence – an experience which recaptures some of the freshness of a first reading.

Friday 2 July 2010

Driclor is an effective treatment for shaving cuts

I have already blogged about the totally-effective antiperspirant sold under the trade name of Driclor - but this remarkable substance has another use, which my wife heard about on 'doctors' net' - a UK forum restricted to qualified physicians.

Driclor can stop bleeding - presumably by drying rapidly (in a few seconds) to form a transparent coating and block the blood flow.

It works very well for facial shaving cuts. Although pretty painless, these razor cuts are usually very difficult to stop bleeding, even when small - since they are sliced skin; and when skin is cleanly sliced (rather than torn or crushed) it does not release the various chemicals which assist in blood coagulation.

A shaving cut is usually a thin slice off the top of a protuberance like the chin - leaving a raw patch of oozing dermis; or a linear slice caused by the razor slipping sideways. These linear slices in particular often well out drops of blood in an annoying way.

The traditional answer is to put a scrap of tissue paper onto the cut and leave it there until the blood has coagulated (maybe 10 minutes). But if the cut is one of those linear slices, then the tissue will often become sodden with blood and fall off onto your shirt - aaargh!

The other alternative is to stick an elastoplast/ band-aid on the bleeding area - which does rather attract the eye...

But, if one dabs the bleeding area dry, then immediately applies Driclor to the graze, slice or cut (it comes from a roller ball applicator - so you need rapidly to take some Driclor on a finger tip, and very quickly smear it over the bleeding area), it will usually dry instantly to form an invisible layer and will stop the bleeding.

I do not (ahem...) have any direct experience of the matter - but I would imagine that Driclor would be equally useful for stopping bleeding spots caused by shaving the legs, as well as the face.

All that can be seen is the shape of the wound with a little surface oozing; and the invisible Driclor 'shell' is slightly shiny, so the result is not absolutely perfect cosmetically.

Still, it is a tremendous improvement on the alternative. To (almost) quote a famous English policeman of the 1970s - 'I am convinced that Driclor is a major contribution to road safety'.

Thursday 1 July 2010

Atheism provides zero guidance for life - suck it up!

Although I managed to evade admitting the fact and avoided contemplating its consequences for more than 40 years, the fact should be acknowledged that atheism provides zero guidance for life.

Atheists ought to be man enough to suck-it-up (although I did *not* do so - and so I am here asking others to be more honest and rigorous than I myself managed to be).

Consider: What does atheism not do?

Atheism provides no basis for being 'good' or virtuous rather than evil (indeed atheism is squeamish about the reality of evil precisely because it cannot conceptualize good).

Atheism provides no basis for preferring altruism over selfishness, no basis for aesthetic judgement and preferring beauty over repulsiveness, no basis for seeking and telling the truth rather than propagating lies.

For atheists, such matters are – at root – arbitrary and unjustifiable personal preferences – since if they are justified in terms of their consequences (as leading to outcomes like peace, prosperity or ease), then any preference of consequences is merely an arbitrary and unjustifiable personal preference (i.e. the preference for happiness, peace, prosperity and ease over misery, war, poverty and hardship is for an atheist itself merely subjective – merely ‘my opinion’ or ‘my gang’s opinion’).

Expediency is merely an unsupported subjective preference.

Atheism provides no basis for respecting individuals, since individuals are regarded as a minority of one - insignificant and short-lived organisms compared with a planet full of powerful nations, societies, cultures, ideologies and gangs. Expediency will always sacrifice the individual to the ‘greater good’.

(By contrast, a Christian will – or should - regard nations, societies, cultures, ideologies and gangs as utterly insignificant and short-lived by comparison with a single potentially-immortal soul.)

Atheism provides no reason for rejecting a focus on short term, selfish pleasure – if that is what happens to appeal; nor for rejecting cowardice, parasitism and victimization. Atheism provides no reason, except expediency, to consider other humans as anything other than something to be exploited or evaded according to our spontaneous – or acquired – impulses and urges.

Yet atheism also provides no reason why we should follow, or reject, our evolved impulses and urges.

Atheism cannot combat demotivation and alienation – since it undercuts and destroys any attempt to discover or provide purpose and meaning to life. Even a simple hedonism of seeking pleasure and avoiding suffering is rendered utterly pointless for mortal creatures in the infinite context of time and space.

An atheist life is intrinsically incoherent – suck it up!

Evidence, experience and common sense

Medicine has, for several decades, been fixated on establishing the validity of treatment - in answering the question 'does this work?'.

How do we know whether a medical intervention works?

The correct answer is: in the same way we know anything in life - which is to say by observing what happens, compared with what we expected to happen.

If somebody's finger is cut off; but is then re-attached by a surgeon and works reasonably well - we know to attribute the result to a surgical intervention.

Or if someone suffers an agonizing pain in a particular part of their abdomen, and experience tells us that everyone in the past who got this distinctive kind of pain died, but cutting out the appendix stops them from dying - then we assume the operation was life-saving.

If we expect (from experience, from knowledge) that when we get this kind of headache it will last 24 hours, but when we take tablet X it disappears in 1 hour; then we think it is probably the tablet that helped.

***

This is first order knowledge in medicine, but there are pitfalls. For example a drug may make someone feel better at first, but they become addicted to or dependent on the drug, and cannot stop taking it. This was noticed for morphine and cocaine - healthy people who took these drugs for the psychological effects would feel terrible when they tried to stop taking them, or needed increasing doses to get the same effect.

Problems of dependence and addiction therefore can usually be detected only when healthy people take the drug, or when an illness is expected to be short term but drug treatment seems to prolong the need for drug treatment.

The placebo response is also a complicating factor - where taking a drug benefits the patient, but the benefit is not due to that specific drug but to the patient's expectation of getting better.

The placebo effect is suspected when a drug's action is weak or unreliable, and varies a lot between people. When a drug makes a large difference to a situation where we were confident of the outcome, and does this reliably time after time and in many patients, then a placebo effect is usually ruled-out.

***

My point is that the conditions of medical practice and common sense are usually adequate for doctors and patients to sort-out what treatments do, and how effective they are.

Problems arise when the link between treatment and outcome is more remote due to the unpredictability of outcome in many diseases such as diabetes and breast cancer. This makes it hard to know for sure whether treatment has been helpful or not. And this is where medical science comes in, by providing proxy measures of disease such as blood sugar levels, or biopsy samples and x-ray visualization of tumours.

But uncertainties remain, and in situations of unpredictability, knowledge can never be sure. This fact is not affected by studying large numbers of patients in clinical trials and averaging the results - individual uncertainty is the same as ever it was!
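
A small simulation makes this concrete; the numbers are invented purely for illustration. Averaging over many patients narrows the estimate of the average treatment effect, but the spread of individual outcomes - which is what any one patient actually faces - does not shrink at all.

    # Illustrative simulation (invented numbers): averaging narrows the estimate of
    # the AVERAGE effect, but the spread of INDIVIDUAL outcomes stays the same.
    import random
    random.seed(0)

    effects = [random.gauss(5.0, 10.0) for _ in range(10_000)]  # each patient's response

    mean = sum(effects) / len(effects)
    spread = (sum((x - mean) ** 2 for x in effects) / len(effects)) ** 0.5
    standard_error = spread / len(effects) ** 0.5

    print(f"average effect: {mean:.2f} +/- {standard_error:.2f}  # shrinks as n grows")
    print(f"spread of individual outcomes: {spread:.2f}          # does not shrink")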

***

During the golden age of medical discovery, in the mid-twentieth century, evaluation of treatments was a combination of common sense and experience aided by specialization; such that a specialist could build experience of rare conditions, and see patterns of outcomes which would not be apparent to the general practitioner.

Yet modern medicine is now dominated by vast empires of medical evaluators doing formal randomized trials - and progress has either ground to a halt or gone into reverse.

Large randomized trials are only one method among many; they can be distorted and biased by selection of patients and methods, and are bedevilled by poor controls; yet they are still ‘officially’ regarded as the best evidence.

And clinical trials can only be done by those with large resources: big corporations and government.

***

This is a problem across all of our society, not just medicine. We are taught that evidence is only good if it relies on formal, large scale, expensive methodology - and such methods are the preserve of large corporations and government.

So we are in a situation where - supposedly - individual people (doctors, patients, citizens) know nothing whatsoever except what they are told by corporations and governments and those they employ.

So advanced is this process that formal, official knowledge is regarded as correct, even when it is unsupported or actually contradicted by common sense and experience. Common sense and experience in medicine now officially count for *nothing at all*. In some government agencies it is not even permitted to look at any kind of clinical evidence except randomized trials (all of which are conducted with funding from government or industry, and are ruled and regulated by them).

***

Common sense and personal experience have been officially denigrated as utterly irrelevant to understanding our society, just as formal evidence was used to attack common sense and experience in medicine.

We have now reached the point in medicine, in society, where direct personal knowledge is officially regarded as having zero significance, and where official knowledge (e.g. formal surveys and statistics) is the only admissible evidence.

The cost of this is that the worse has displaced the better because the validity of formal methods derives from their underpinning in common sense and experience. Formal methods which are not underpinned by common sense and experience, or which contradict them, are worthless. Indeed, they are deadly – since that was the system of official lies deployed to such devastating effect in totalitarian USSR or China.

The deadliness of a system of official lies (contradicted by common sense and personal experience) is not only material - things like famines, shortages, oppression - but it is deadly to the soul. Officially, a person's own judgment of what is happening, what is right and wrong, true or false, beautiful or hideous - their memories of what has happened to them... all this is officially ignored, denied, extinguished. Only officially-sanctioned statistics are really-real...

This process of denying the validity of common sense and personal experience has all-but destroyed medicine, and is destroying society.