Thursday 25 June 2009

Animal Spirits - Akerlof & Shiller - review

*

Review by Bruce G Charlton for Azure magazine - http://azure.org.il . (This review was commissioned, but the editor didn't like what I wrote and it was rejected.)

Animal Spirits: how human psychology drives the economy, and why it matters for global capitalism by George A Akerlof and Robert J Shiller. Princeton University Press: Princeton, NJ, USA. 2009, pp xiv, 230.

As a human psychologist by profession, I was looking forward to reading a book written by a Nobel prize-winner in economics and a Yale professor in the same field which – according to the subtitle – argued that human psychology was ‘driving’ the economy. In the event, I found this to be an appalling book in almost every respect: basically wrong, irritatingly written, arrogant, and either badly-informed or else deliberately misleading in its coverage of the aspects of human psychology relevant to economic outcomes.

The main argument of the book is that ‘rational’ models of the economy are deficient and need to be supplemented by a consideration of ‘animal spirits’ and stories. Animal spirits (the idea comes from John Maynard Keynes) are postulated to be the reason for economic growth and booms, or recessions or crashes: an excess of animal spirits leading to overconfidence and a ‘bubble’, and a deficiency of animal spirits leading to the economic stuck-ness of situations like the Great Depression in the USA.

The authors regard the rise and fall of these animal spirits as essentially irrational and unpredictable, and their solution is for governments to intervene to damp-down excessive spirits or perk-up sluggish ones. Governments are implicitly presumed, without any evidence to support the assumption, to be able to detect and measure animal spirits, yet to be immune from their influence. Governments are also presumed to know how to manipulate these elusive animal spirits, to be capable of doing so, and to be willing to do so in order to serve the ‘general public interest’.

I found all this implausible in the extreme. Akerlof and Shiller will be familiar with the economic field of public choice theory, which explains why governments usually fail to behave in the impartial, long-termist and public-spirited ways that they hope governments will behave – yet this large body of (Nobel prizewinning) research is never mentioned, not even to deny it.

The phrase ‘animal spirits’ is repeated with annoying frequency throughout the book, as if by an infant school teacher trying to hammer a few key facts into her charges; yet all this reiteration never succeeded in making clear to me exactly what the phrase means.

The problem is that the concept of animal spirits is so poorly characterized that it cannot, even in principle, be used to answer the numerous economic questions for which it is being touted as a solution. So far as I can tell, animal spirits are not precisely defined, are not objectively detectable, and cannot be measured – except by their supposed effects.

Akerlof and Shiller talk often of the need to add animal spirits to the standard economic models, but it is hard to imagine any rigorous way in which this could be done. Perhaps they envisage some kind of circular reasoning by which animal spirits are a black-box concept used to account for the ‘unexplained variance’ in a multivariate analysis? So that, if the economy is doing better than predicted by the models, then this gap between expectations and observations could be attributed to over-confidence due to excess animal spirits, or vice versa?

There is, indeed, precedent for such intellectual sleight-of-hand in economic and psychological analysis – as when unexplained variance in wage differentials between men and women is attributed to ‘prejudice’, or when variation not associated with genetics is attributed to ‘the environment’. But I would count these as examples of bad science – to be avoided rather than emulated.

Another major theme of the book is stories – indeed the book’s argument unashamedly proceeds more by anecdote than by data. According to Akerlof and Shiller, animal spirits are profoundly affected by the stories that people, groups and nations tell themselves about their past, present and future, their meanings and roles. For example, the relatively poor economic performance of African Americans and Native Americans is here substantially attributed to the stories these groups tell themselves. For Akerlof and Shiller, stories – like animal spirits – are credited with unpredictable and wide-ranging properties; sometimes replicating like germs in an epidemic for reasons that the authors find hard to comprehend. (In this respect, A&S’s ‘stories’ seem to have very similar properties to Richard Dawkins’ ‘memes’.)

The big problem with the A&S use of stories as explanations is that each specific story used to explain each specific economic phenomenon is unique. So, instead of a testable scientific theory, we are left with a multitude of one-off hypotheses. The causal model asserted by a specific story is in practice untestable – each different situation would presumably be the result of a different story. For example, if each boom or bust has its own story, then how can we know whether any particular story is correct?

And who decides the motivations behind a specific story? Akerlof and Shiller suggest that the Mexican president Lopez Portillo created a bogus economic boom because he ‘lived the story’ of Mexico becoming wealthy due to oil, and the Lopez story led to excessive animal spirits with 100 percent annual inflation, unemployment, corruption and violence. Yet the most obvious explanation is that Lopez ‘lived’ the story because it served his interests, providing him with several years of wealth, status and power.

However, this story is unusual in blaming the government for excessive animal spirits, corruption and short-termism. Since Akerlof and Shiller are advocating a large expansion of the state’s role in the US economy, they usually focus on irrational animal spirits in the marketplace. This is rhetorically understandable, since A&S advocate that the state should regulate markets to moderate the unpredictable and excessive oscillations of animal spirits – and this only makes sense if the government is less prone to animal spirits than the market.

But why should the government be immune to irrational animal spirits, corruption and selfish short-termism? Why should the government not instead exploit variations in the economy for its own purposes and to benefit the special interest groups who support it – surely that is what we observe on a daily basis – rather than conforming to the A&S utopian ideal of state economic regulation based on disinterested wisdom and compassion?

I personally am persuaded that governments were the main culprits behind the current credit crunch. If governments had allowed the international housing bubble to collapse several years ago, as soon as the bubble was identified (e.g. in ‘Our global house price index’, The Economist, 3 June 2004), or if necessary had taken steps to prick the inflationary bubble, then we would have had a much smaller and more manageable recession than the disaster that eventually came after several more years of ‘leverage’. But instead – fearing a recession under their administration – all the important governments pulled out all the stops to sustain housing price inflation. Even after the crunch, the US and UK governments have tried (unsuccessfully) to reinflate the housing bubble – amplifying the damage still further.

The big problem behind this type of behaviour is surely one of governance. Governments probably know what they ‘ought’ to do to benefit the economy, but they seldom do it; and instead they often do things which they know will be very likely to damage the economy. Economist Milton Friedman reported that in private conversation US President Richard Nixon admitted that he knew exactly what kind of havoc would be wrought by his policy of controlling gasoline prices (shortages, vast queues, gas station closures etc.) – but Nixon went ahead anyway for reasons of short-term political expediency.

Democratic governments seem to find it almost impossible to enact policies that will probably benefit most of the people over the long term, when these policies will also impose significant costs over the immediate short term. To allow the housing bubble spontaneously to ‘pop’ in 2004, or to stick a pin in it if it didn’t burst, would undoubtedly have been wise; but governments seldom behave wisely when doing so exerts immediate costs. The problem is made worse when such ‘wise’ policies harm the specific interest groups upon whose votes and funding governments depend.

The reason why politicians ignore the long term was given by Keynes when he quipped that ‘in the long run, we are all dead’. Akerlof and Shiller use other Keynesian ideas to forcefully advocate increasing the state’s regulatory role in the economy; but in the absence of any solution to these fundamental ‘public choice’ problems, this will surely do far more harm than good, because politicians live by Keynes’ principle of concentrating on the short term.

In sum, Akerlof and Shiller have a one-eyed focus on the problems of markets and their tendency towards irrationality, corruption and short-termism; while simply ignoring the mirror-image problems of governments. But while market competition and cycles of ‘creative destruction’ put some kind of limit on market failures, the democratic process exerts a far weaker and much slower discipline on the failures of government. And incumbent governments have far too many opportunities for ‘buying votes’ by creating dependent client groups (as is happening with terrifying rapidity in the USA at present). If markets are bad, and they often are, then current forms of democratic government are even worse.

But for me the major fault of this book is that it advocates an unscientific approach, consisting of a collection of ad hoc stories to explain various phenomena, held together by the large-scale, vague and probably circular concept of animal spirits. This looks to me like a retreat from science into dogmatic assertion – admittedly assertion backed up by the undoubted intellectual brilliance of the authors, but by little more than this. In particular, this book, which purports to be about how human psychology drives the economy, actually has little or nothing to do with what I would recognize as the scientific field of human psychology.

The best-validated concept in psychology is general intelligence, usually known as IQ, which is highly correlated with standardized test results such as the American SATs, reading comprehension, and many other cognitive attributes. IQ has numerous economic implications, since for both individuals and groups IQ is predictive of salary and occupation. But Akerlof and Shiller ignore the role of IQ.

For example, they have a chapter on the question of ‘Why is there special poverty among minorities?’, in which they generate a unique ad hoc story to explain the phenomenon. Yet there is essentially no mysterious ‘special poverty’ among minorities, because observed economic (and other behavioural) differentials are substantially explained by the results of standardized testing (or estimated general intelligence). US minorities that perform better than average on standardized testing (e.g. East Asians, Ashkenazi Jews, Brahmin caste Indians) also have above-average economic performance; and vice versa.

The scientific basis of Animal Spirits is therefore weak, and in this respect it is much inferior to another new book on human psychology and the economy: Geoffrey Miller’s Spent: sex, evolution and consumer behaviour – which is chock full of up-to-date psychological references and brilliant insights from one of the greatest of living human evolutionary theorists.

Saturday 23 May 2009

Disadvantages of high IQ

*

Having a high IQ is not always good news

Mensa Magazine June 2009 pp 34-5

Bruce G Charlton

*

There are so many advantages to having a high IQ that the disadvantages are sometimes neglected – and I don’t mean just short-sightedness, which is commoner among the highly intelligent. It really is true that people who wear glasses tend to be smarter!

High IQ is, mostly, good for you

First it is worth emphasizing that high IQ is mostly very good for you.
This has been known since Lewis Terman’s 1920s follow-up study of Californian high IQ children revealed that they were not just cleverer but also taller, healthier and more athletic than average; and mostly grew-up to become wealthy and successful.

Professor Ian Deary of Edinburgh University has confirmed that both health and life-expectancy improve with increasing IQ: remarkably, a single childhood IQ test done on one morning in Scotland in 1932 made statistically significant predictions about when people would die many decades later.
And other studies have shown that higher IQ people tend to be less violent, so smarter people usually make less-troublesome neighbours.

Indeed, Geoffrey Miller has put forward the idea that IQ is a measure of biological fitness. Since it takes about half of our genes to make and operate the brain, most damaging genetic mutations will show-up in reduced intelligence. So it would have made sense for our ancestors to choose their mates on the basis of intelligence, because a good brain implies good genes.

Sidis and the problems of ultra-high IQ

However, high IQ is not always beneficial. Terman’s study of the highest IQ group among his cohort revealed that more than one third grew up to be ‘maladjusted’ in some way: for example having significant problems of anxiety, depression, personality disorder or experience of ‘nervous breakdowns’.

This applied to William James Sidis (1898-1944), who is often considered to have had the highest-ever IQ (about 250-300). Sidis was a child prodigy, famous throughout the USA for having enrolled at Harvard aged 11 and graduated at 16. Yet he was certainly ‘maladjusted’, and had a chaotic, troubled and short life. Indeed, Sidis was widely considered to have been a failure as an adult – although this failure has been exaggerated, since it turns out that Sidis published a number of interesting books and articles anonymously.

In fact, there seems to be a consensus among psychometricians (and among the possessors of ultra-high IQ themselves) that - while an IQ of about 120-150 is mostly advantageous - extremely high IQ levels above this may prove to be as often of a curse as a benefit from the perspective of leading a happy and fulfilling life.

On the one hand, the ranks of genius are often recruited from amongst the more creative and stable of ultra-high IQ people; but on the other hand there is also a high proportion of chronically-disaffected ultra-high IQ people who have been termed ‘The Outsiders’ in a famous essay of that title by Grady M Towers ( www.prometheussociety.org/articles/Outsiders.html ).

Socialism, atheism and low-fertility

Sidis himself demonstrated, in exaggerated form, three traits which I put forward as being aspects of high IQ which are potentially disadvantageous: socialism, atheism and low-fertility.

1. Socialism

Higher IQ is probably associated with socialism via the personality trait called Openness-to-experience, which is modestly but significantly correlated with IQ. (To be more exact, left wing political views and voting patterns are characteristic of the highest and lowest IQ groups – the elite and the underclass - and right wingers tend to be in the mid-range.)

Openness summarizes such attributes as imagination, aesthetic sensitivity, preference for variety and intellectual curiosity – it also (among high IQ people in Western societies) predicts left-wing political views. Sidis was an extreme socialist, who received a prison sentence for participating in a May Day parade which became a riot (in the event, he ‘served his time’ in a sanatorium).

Now, of course, not everyone would agree that socialism is wrong (indeed, Mensa members reading this are quite likely to be socialists). But if socialism is regarded as a mistaken ideology (as I personally would argue!), then it could be said that high IQ people are more likely to be politically wrong. But whether right or wrong, the point is that high IQ people do seem to have a built-in psychological and political bias.

2. Atheism

Something similar applies to atheism. Sidis was an atheist, and it has been pretty conclusively demonstrated by Richard Lynn that increasing IQ is correlated with increasing likelihood of atheism. The most famous atheists – like Richard Dawkins and Daniel Dennett – are ferociously intelligent individuals.

Again, whether atheism is a disadvantage is a matter of opinion (to put it mildly!) – but what is not merely opinion is that religious people are on average more altruistic in terms of measures such as giving to charity, giving blood, and volunteering time for good causes.

So, higher IQ may be associated with greater selfishness. In other words, smarter neighbours may be less troublesome on average, but they may also be less helpful.

3. Fertility

However the biggest and least-controversial disadvantage of high IQ is reduced fertility. Again Sidis serves as an example: as a teenager he published a vow of celibacy, and he neither married nor had children.

Pioneer intelligence researchers such as Francis Galton (1822-1911) noticed that (since the invention of contraception) increasing intelligence usually meant fewer offspring. Terman confirmed this, especially among women – so the group of the highest IQ women had only about a quarter of the number of children required for replacement fertility.

This trend has, if anything, increased in recent years as ever-more high IQ women delay reproduction in order to pursue higher education and professional careers. Indeed, more than 30 percent of women college graduates in the UK and Europe have no children at all – and more than half of women now attend college.

Since IQ is highly heritable, this low fertility implies that over time high IQ will tend to select itself out of the population.

The bad news and the good news

So much for the bad news about high IQ.

The good news is that while the advantages of high IQ are built-in; the disadvantages of high IQ are mostly a matter of choice.

People can potentially change their political and religious views. For example, Sidis apparently changed from being a socialist to a libertarian; indeed, many adult conservatives went through a socialist phase during their youth (declaration of interest: this applies to me).

And religious conversions among the high IQ are not unknown (declaration of interest: this also applies to me); G.K. Chesterton and C.S. Lewis are famous examples of atheists who became the two greatest Christian apologists of the twentieth century.

Indeed, although it does not often happen, smart people can also choose to be more fertile. One example is the Mormons in the USA, whose average IQ and fertility are both above the national average, and where the wealthiest Mormons also have the biggest families. Presumably - since wealth and IQ are positively correlated - this means that for US Mormons higher IQ leads to higher fertility.

So, on the whole it remains good news to have a high IQ - although perhaps not too-high an IQ. But perhaps the high IQ community needs to take a more careful look at the question of low fertility. It may be that, under modern conditions, high intelligence is stopping people from ‘doing what comes naturally’ and having large families.

Human reproduction could be one situation where the application of intelligence may be needed to over-ride our spontaneous emotions or the prevailing societal incentives.

Or else at some point in the future, high IQ could become very rare indeed.

*

For more on IQ see:

http://iqpersonalitygenius.blogspot.co.uk/

**

Wednesday 29 April 2009

Are you an honest academic?

Are you an honest academic? Eight questions about truth

Bruce G Charlton

Oxford Magazine. 2009; 287: 8-10.

A culture of corruption in academia

Anyone who has been in academic life for more than twenty years will have noticed a progressive and pervasive decline in the honesty of communication between colleagues, between colleges, and between academia and the outside world.

With this in mind, I would ask you, the reader (and presumably an academic), to consider the following two sets of questions about truth.

1. Truth-telling.

a. Have you always been truthful in response to questions about your research and scholarship – questions concerning matters such as your performance and plans, or lack of plans, for future activities?

b. When asked to fill-out forms by administrators and managers, do you answer accurately?

c. Have you ever declined to complete a document because you felt you could not be truthful, and were not prepared to be dishonest?

d. Have you been correct and balanced in describing the implications and importance of your research in the RAE, in grant requests, and in job or promotions applications?

e. Would you withdraw a paper from a high impact journal if, as a condition of publication, a referee or editor insisted on modifying the text in a way which misrepresented your true beliefs?

2. Truth-seeking.

a. Are you trying your hardest to do the best work of which you are capable (given the inevitable constraints of time and resources)?

b. Would you stop working in a well-funded discipline because it was incoherent, incorrect, grossly inefficient, or where intellectual standards were corrupt?

c. Have you declined to cooperate with any of the numerous bureaucratic schemes, projects, exercises, commissions, auditors, agencies, offices or institutes that you know are predicated on dishonesty, misrepresentation and/or propaganda?

There were eight questions. The correct answer was yes, in all instances.

Interpretation: If you scored 8 then that is OK, and you have at least a chance of doing good work in academia.

If you scored less than 8, then you ought to quit your job and become a conscientious bureaucrat instead of a phoney academic.


How to become a virtuous scholar

I say you ‘ought to’ quit your job; but maybe you don’t want to quit but you do want to change, to become a virtuous scholar. Yes? In that case you must first admit to yourself your own state of complicity in the culture of corruption, and secondly embark on an immediate program of self-reform.

Truth is difficult, very difficult: it is either a habit, or you are not truthful. Humans cannot be honest in important matters while being expedient in ‘minor’ matters – truth is all of a piece. This means that in order to be truthful you need to find a philosophical basis which will sustain a life of habitual truth and support you through the pressure to be expedient (or agreeable) rather than honest.

Because the pursuit of truth cannot be a solitary life: the solitary truth-seeker, unsupported either by tradition or community, will soon degenerate into mere eccentricity, incoherence or covert self-justification.

There are plenty of resources to support truth – both religious and also secular (e.g. Platonic, Aristotelian or Confucian ethics). Any academic who seeks a cohesive philosophy knows how to find such resources, and it is incumbent upon you (as a would-be virtuous academic) to explore them; find one that suits you and in which you can believe; learn about it and live by it.

How did we get here? Drawing the line

We inhabit an academic system built on lies and sustained by lies. So, how did this situation come about? Another question might help clarify:

Q: Have you ever been asked to make a statement about your research, scholarship (or indeed teaching) that is less-than-fully-truthful; with the rationale that this is for the good of your department, research team, college, university or discipline?

Everyone reading this article would have to answer yes to this question. The explanation is that academics are pressured to lie for the (supposed) benefit of their colleagues or institutions. For instance, when your unit was being ‘inspected’, have you ever been told: a. that you must attend the inspection and meet with the inspectors; and b. that when you do meet the inspectors, you must restrict your remarks to pre-arranged untruths? You are expected to lie, the inspectors expect you to lie; and the biggest collusive lie is that the process of inspection has been honest and valid.

For decent people, such quasi-altruistic arguments for lying are a more powerful inducement to becoming routinely dishonest than the prospect of personal advantage. Indeed, lying to be agreeable is probably the primary mechanism that has driven the corruption of academia. Modern academics have been inculcated into habitual falsity by such arguments and pressures, until we have become used to dishonesty, fail to notice dishonesty, and eventually (like the inspectors, managers and administrators) come to expect dishonesty.

The solution to the current degenerate situation is radical in its simplicity – just be truthful, always. Never lie about your work, not even in a ‘good cause’. Maybe in some other professions absolute honesty can be subordinated to other imperatives (e.g. loyalty, literalistic rule-following, and obedience) – but not in academia. Here honesty is primary and ought to be non-negotiable.

As an academic, your colleagues, your employers, your institution should be able to ask a lot from you – but not to lie. As an individual you can pursue personal status, security and salary by many legitimate ways and means – but never by dishonesty.

That is where the line must be drawn. Starting now, why not?


Obstacles in the path of virtue

Why not? Because to become systematically truthful in a modern academic environment would be to inflict damage on one's own career: on one's chances of getting jobs, promotions, publications, grants and so on. In a world of dishonesty, of hype, spin and inflated estimations - the occasional truthful individual will be judged by the prevailing corrupt standards.

When 'everyone' is exaggerating their achievements, an automatic deduction or devaluation is applied – so that the precisely accurate person will, de facto, be judged as even worse than the already modest (compared with prevailing practice) estimation which they place upon themselves. In an environment where it is routine for mainstream academics to claim 'world class' status (and this is understood to represent national fame in the real world), an honest academic who accurately claims national status will find it assumed that his true status is merely of local importance.

Obviously, taking a firm stance of truthfulness would mean such individuals would forgo some success in their careers at least in the immediate term; indeed the sanctions might be much more extreme than this. But over a longer timescale, the superior performance of self-selected groups of honest academics working together in pursuit of truth would become seeds from which (with luck) real scholarship could again grow.

The necessary first step would be for academics who are concerned about truth to acknowledge the prevailing state of corruption and then to make some kind of personal pledge to be truthful in all things connected with their work: to be both truth-tellers and truth-seekers.

Truth-telling would apply to matters both 'great and small': grant applications; applications for jobs, tenure or promotion; communicating with the media; casual informal conversations; conference presentations; papers and books; and reviewing or refereeing. This would be done so that a 'habit of truth' becomes thoroughly established.

Furthermore, the pledge should also be primarily to seek truth in one's work (and not mainly to seek status, power, grants, promotions, income etc.). Even more difficult is the imperative to focus one's own research and scholarship where one believes there is the greatest potential to make the largest contribution; and not (for example) merely to follow academic fashion, do whatever is most likely to lead to grants, do what most pleases the department, or do work simply because it leads to higher research ratings.


A 'Church' of truth

Such is our current state of corruption that the above insistence on truthfulness in academic life seems perverse, aggressive, dangerous - or simply utopian and unrealistic. But truthfulness in academia is not utopian. Indeed it was mundane reality in the UK, taken completely for granted (albeit subject to normal human imperfections) until just a few decades ago. Old-style academia had many faults, but deliberate and systematic misrepresentation was not one of them.

Now, however, academia is a communications economy that operates using debased currency. Our discourse uses paper money inflated by hype and spin – like a ten dollar bill crudely stamped-over with a ten million dollar mark – until we no longer know what is accurate and whom to trust, what is exaggerated and what is trivial, and when things were simply made up because people felt they could get away with it.

So I am proposing nothing short of a moral Great Awakening in academia: an ethical revolution focused on re-establishing the primary purpose of academic life, which is the pursuit of truth. Such an Awakening would necessarily begin with individual commitment, but to have any impact it would need to progress rapidly to institutional forms. In effect there would need to be a 'Church' of truth (or, rather, many such 'Churches' – especially in the different academic fields or 'invisible colleges' of active scholars and researchers).

I use the word Church because nothing less potent would suffice to overcome the many immediate incentives for seeking status, power, wealth and security. Nothing less powerfully-motivating could, I feel, nurture and sustain the requisite individual commitment.

However, given that we are in the current mess, and pre-existing safeguards have clearly proved inadequate; there is a big question over whether academia has within itself sufficient resolution to nourish individual academics in their difficult task of devotion to truth. What we need is moral courage. But there is a severe and chronic shortage of this commodity in modern British universities.

I suspect that the secular and this-worldly Zeitgeist of the modern university operates on such a here-and-now, worldly, pragmatic and utilitarian ethical basis as utterly to lack moral resources for the job I have in mind. When happiness is the ultimate arbiter, the certainty of short-term punishment weighs far more heavily in the balance than a mere possibility of greater long-term rewards.

So will it happen? – will there be a Great Awakening to truth in academia? Frankly, I doubt it; and we will probably continue to see the world of scholarship degenerate towards being merely a mask for the pursuit of other interests.

But I would love to be proved wrong.

Saturday 7 February 2009

Social Class and IQ – some facts and statistics

Social Class and IQ – the facts and statistics

Mensa Magazine, December 2008

Bruce G Charlton

The Mensa magazine of October 2008 featured three articles on the subject of the relationship between Social Class and IQ. These were apparently prompted by the media coverage associated with my article on this topic published in the Times Higher Education online version: http://charltonteaching.blogspot.com/2008/05/social-class-iq-differences-and.html.

It has been a bizarre experience to see myself so widely quoted as having views and holding opinions which bear no relation to what I wrote or believe! However, since the field of Social Class and IQ is important and frequently misunderstood, it seems worthwhile to use this opportunity to clarify some of the facts and statistics.


The research evidence for Social Class differences in IQ

The basic facts on Class and IQ are straightforward and have been known for about 100 years: higher Social Classes have significantly higher average IQ than lower Social Classes. For me to say this is simply to report the overwhelming consensus of many decades of published scientific research literature; so this information is neither new, nor is it just ‘my opinion’!

All the major scholars of intelligence agree that there are social class differences in IQ. As long ago as 1922, Professor Sir Godfrey Thomson and Professor Sir James Fitzjames Duff performed IQ tests on more than 13,000 Northumbrian children aged 11-12, and found that the children of professionals had an average IQ of 112, compared with an average of 96 for the children of unskilled labourers. These differences in IQ were predictive of future educational attainment.

Dozens of similar results have been reported since; indeed I am not aware of a single study which contradicts this finding. Social Class differences in intelligence are described in the authoritative textbook IQ and Human Intelligence by N.J. Mackintosh, Professor of Psychology at Cambridge University; and also in the 1996 American Psychological Association consensus statement Intelligence: knowns and unknowns: http://www.gifted.uconn.edu/siegle/research/Correlation/Intelligence.pdf.

Because IQ is substantially (although not entirely) hereditary (as has been shown by numerous studies of siblings, including twins, and by adoption studies), and because IQ level is a good predictor of educational attainment, it follows that under a fair system of exam-based selection, children from higher Social Classes will inevitably gain a disproportionate number of university places compared with those from lower Social Classes. And the more selective the university, the higher will be the proportion of people from higher Social Classes relative to their proportion in the national population.


Statistical effects of Class IQ differences on Mensa qualification

Perhaps the best way to understand these statistics is to consider Mensa qualification.

Mensa is rather like a highly-selective university, because it admits only those people scoring in the top 2 percent of the UK population in a recognized IQ test. Indeed, Mensa imposes approximately the same degree of selection as Oxford and Cambridge Universities – although Oxbridge selects mostly on examination results rather than pure IQ. Exam results depend on a variety of factors as well as IQ, especially personality traits.

The average IQ of the UK population is defined as 100, with a standard deviation of 15. Mensa accepts only people with an IQ of about 130 or above, i.e. two standard deviations above the average.

But people in Social Classes 4 & 5 – semi-skilled and unskilled workers – have an average IQ lower than 100: about 95 is a reasonable estimate. For people in Social Classes 1 and 2 (professional, managerial and technical – including teachers) the average IQ is higher than 100: about 110. (Reference: e.g. Hart et al., Public Health, 117, 187-195; I am using rounded numbers here for ease of calculation.)

This means that there is a difference of approximately 15 IQ points, or one standard deviation, in average IQ across the Social Classes as defined above using the UK ‘Registrar General’ occupation-based system.

We can calculate that for semi- and un-skilled workers with an average IQ of 95, two standard deviations above their average reaches only IQ 125. To qualify for Mensa, a Social Class 4 & 5 person would therefore need to be two and one-third standard deviations above the average for their Class: so only about 1 percent of Social Classes 4 & 5 would qualify for Mensa.

And because Social Classes 1 & 2 have an average IQ of 110, then the Mensa threshold of IQ 130 is only 20 points above the average, or just one and a third standard deviations. This means that about 10 percent of Social Classes 1 & 2 would be expected to qualify for Mensa.

So a random person from Social Class 1 & 2 is about ten times as likely to qualify for Mensa as someone from the lowest Social Classes. Tenfold is a large difference.

The exact average IQ of each Social Class depends upon how precisely the Social Classes are defined. The most educated and intellectual occupations (e.g. doctors, lawyers, chief executives of large corporations) have an average IQ of about 130 – which means that about half the members of the most intellectually-selected Classes would be expected to qualify for Mensa. This proportion is about fifty times higher than the proportion of potential Mensans among semi- or un-skilled workers.
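The percentages quoted above follow directly from normal-distribution tail areas. A minimal sketch of the arithmetic, using only the rounded class means quoted in the text and a standard deviation of 15 (the function name is mine):

```python
# Fraction of each Social Class expected to reach the Mensa cutoff,
# assuming IQ is normally distributed with SD 15 within each class.
from math import erfc, sqrt

def fraction_above(threshold, mean, sd=15.0):
    """Fraction of a normal distribution lying above `threshold`."""
    z = (threshold - mean) / sd
    return 0.5 * erfc(z / sqrt(2))  # upper-tail probability

MENSA_CUTOFF = 130  # roughly the top 2% of a population with mean 100

for label, mean in [("Classes 4 & 5", 95),
                    ("Classes 1 & 2", 110),
                    ("Top professions", 130)]:
    p = fraction_above(MENSA_CUTOFF, mean)
    print(f"{label} (mean IQ {mean}): {p:.1%} would qualify")
```

Running this gives roughly 1, 9 and 50 percent respectively – in line with the rounded figures used in the text.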


Social Class differences in attainment related to IQ should be expected

In conclusion, socioeconomic differences in average IQ are substantial and they will influence the proportions of people reaching specific levels of educational attainment or cognitive ability.

One common misunderstanding concerns averages. There is overlap in IQ between Social Class groups, and the situation is asymmetrical for higher and lower Classes. People in jobs requiring high-level skills or educational qualifications (e.g. architects or professional scientists) will almost certainly all have above-average IQs. But a high IQ does not exclude people from unskilled jobs, so there will be a wider range of IQs in Social Classes 4 & 5. It is all a matter of percentages, not clear-cut distinctions.

So there will be some manual labourers who have higher IQs than some dentists: it would be predicted that about one in a hundred labourers could get into Mensa, while about half of dentists could not. But individuals with IQs of 130-plus will make up a much smaller proportion of manual labourers than of dentists.

Furthermore, the UK is now mostly a middle-class society. There are actually more people in Social Classes 1 & 2 (around 40 percent of the working population) than in Classes 4 & 5 (about 20 percent). In combination with the many-fold greater probability of high IQ in the higher Social Classes, this means that very selective organizations such as Mensa or Oxbridge should expect a fair and meritocratic selection mechanism to yield only a small proportion of people from the lowest Social Classes.
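The combined effect of class sizes and qualification rates can be sketched the same way. Note that the 40-percent share and mean IQ of 100 for Social Class 3 are my own illustrative assumptions (the article gives figures only for Classes 1 & 2 and 4 & 5; Class 3's population share is simply the remainder):

```python
# Rough sketch: what share of Mensa-level IQs comes from each class,
# weighting each class's qualification rate by its population share.
from math import erfc, sqrt

def fraction_above(threshold, mean, sd=15.0):
    return 0.5 * erfc((threshold - mean) / (sd * sqrt(2)))

classes = {                 # (share of working population, mean IQ)
    "Classes 1 & 2": (0.40, 110),
    "Class 3":       (0.40, 100),   # both figures assumed, not from the article
    "Classes 4 & 5": (0.20, 95),
}

weights = {name: share * fraction_above(130, mean)
           for name, (share, mean) in classes.items()}
total = sum(weights.values())
for name, w in weights.items():
    print(f"{name}: {w / total:.0%} of those qualifying")
```

On these assumptions, only about 4 percent of Mensa-level IQs would come from Classes 4 & 5, versus roughly three-quarters from Classes 1 & 2.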

These facts and statistics are clearly unpopular in some quarters. Nonetheless, I feel that, given the overwhelming weight of evidence, we should now accept the reality of Social Class differences in IQ, and move-on to have a reasoned discussion of the implications.

Friday 7 November 2008

Why are scientists so dull?

Bruce G Charlton

Oxford Magazine - 2008; 281: 7-8.


The short answer is: because the selection and training process ruthlessly weeds-out any interesting people

Scientists are, as a group, dull and getting duller: duller both in the sense of being less intelligent and in the sense of being more boring. And the science they produce is increasingly dull – although its tediousness is often concealed by shamelessly dishonest hype and spin.

This dullness is not accidental but a product of the fact that scientists are not even trying to do interesting research, funders are not prepared to fund interesting research (because it has a high risk of failing to deliver) and most journals are not keen to publish interesting research (because it is more likely to be wrong).

The premier scientific and medical journals have almost abandoned science reporting in favour of political advocacy or politically-correct moralizing. I hear that Nature has plans to recognize reality and rename itself Journal of the Theology of Climate Change; the British Medical Journal is to become Newsletter of the National Health Service Bureaucracy and the Lancet will soon be Acta AntiWar Propagandica…

No, I’m kidding, but it is almost believable.


Why are scientists duller than journalists?

The point is that the editors and journalists running even the premier journals – those with the pick of modern science – themselves find science too dull to bother writing about. And too often they are correct.

The science journalists are themselves a clue. We need to ask why the smart and interesting people who nowadays run the premier science journals (and the many similarly-talented folk who work in the media generally, including bloggers) are functioning as pundits instead of doing science themselves.

The answer is obvious enough: being a modern scientist is too dull. In particular the requirement for around ten to fifteen years of postgraduate training before even having a shot at doing some independent research of one’s own choosing (but more likely with the prospect of functioning as a cog in somebody else’s research machine) is enough to deter almost anyone with a spark of vitality or self-respect.

And the whole process and texture of doing science has slowed-up. Read the memoirs of scientists up to the middle 1960s – doing science was nimble, fast-moving. Many experiments could be set-up and done in days. For the individuals concerned there was a palpable sense of progress, a crackling excitement.

Now there is an always-expanding need for advanced planning, committee permissions, and logistical organization; combined with a proliferation of mindless and damaging bureaucracy. The timescale of scientific action and discourse has gone up from days and weeks to months and years.

What a contrast with journalism! Where, in the life of a scientist, is the equivalent of journalism’s hourly and daily stimulus? The kind of person attracted to modern science is (I presume) somebody who likes long-term project management, especially form-filling; and who can persevere through difficulties without wavering in determination or changing tack (especially not deviating to explore unexpected leads or insights).


The filtering-out of intelligence and creativity

The kind of individual who can plough through endless years of coursework, a PhD, and cycles of postdoctoral training; and can stay out of trouble with their peers until they – eventually - get a long-term or tenured position; is on average going to be characterized by personality attributes of conscientiousness and agreeableness. The modern scientist who has passed these tests of character is not likely to be the kind of awkward, abrasive and somewhat wildly-creative personality which characterized many of the greatest scientists of the past.

Nor are modern scientists likely to be as intelligent as in the old days, because IQ and the personality trait of conscientiousness are only slightly (or, some people suggest, inversely!) correlated. This means that greatly increasing the demand for perseverance in a training programme will inevitably tend to depress the IQ of successful trainees.

Adding 5-10 years to science training over the past 40 years means that those who now survive to apply for permanent positions are indeed more conscientious than scientists of yore. But since the most intelligent people are not always the most conscientious, this enhancement in perseverance has been achieved at the serious cost of filtering-out some of the highest-IQ scientists. When appointing independent scientists in the fourth decade of their lives, we are scraping the barrel for the attributes of high intelligence (and creativity).

We can only conclude that science is dull mainly because its requirements for long-term plodding perseverance and social inoffensiveness have the effect of ruthlessly weeding-out too many smart and interesting people.

The smart and interesting people instead gravitate to fast-moving fields like journalism (or finance, or management, or entrepreneurship of many types) where they get hourly or daily stimulus, and have a chance of following their own inclinations and making their mark before reaching their mid forties.

Since the people who nowadays eventually emerge from the lengthening pipeline of scientific training are quite different from the scientists of 50 years ago, they naturally tend to move science further in the direction which created their own success. So modern scientific leaders often elevate the requirements for very long periods of tedious make-work, and judge scientists mainly by their capacity for steady and reliable production.


Needed: more clever crazies

At the same time, high level journalism in science and medicine is full of very high IQ people who are virtuosically able to manipulate words and concepts (and, sometimes, numbers); but who often lack the common sense of a new-born kitten and indeed frequently propagate world views which are near-psychotic in their detachment from social reality.

These clever crazies should be working as scientists, not journalists! Science is the activity that really benefits from this kind of brilliant unorthodoxy: it puts such minds to use in generating, critiquing and testing new ideas, and passes their output through the evaluative social mechanisms of science, which tend to filter-out the mistaken craziness and leave-behind the correct craziness.

Instead, these idiot savants go into journalism after graduating from the best universities, where they infuse their naïve and lunatic perspectives into the realm of public policy discourse.

On the whole, I believe that these brilliant fools usually do a lot more social harm than good as journalists - but either way, their personal contributions are invariably ephemeral. They have sacrificed long-term creative and constructive satisfaction for short-term stimulation and mischief-making. It is hard to blame them for making this choice – but this situation is neither optimal for the individuals nor for society at large.

What should be done? How can science be reformed and re-structured to enable the kind of people who now work in journalism and punditry to become the kind of people who work as scientists?

Can science again become a career that attracts and rewards the most intelligent and most creative individuals (even, or especially, when they are serious oddballs)?

One thing is for sure, the answer is not going to come from within science.

Saturday 27 September 2008

Tolkien's ‘The Marring of Men’

*

Heaven and the Human Condition in ‘The Marring of Men’ (‘The debate of Finrod and Andreth’)

Bruce G Charlton. The Chronicle of the Oxford University C.S. Lewis Society, 2008; Vol 5, Issue 3: 20-29


The Argument from Desire

One of C.S. Lewis’s most famous arguments in support of Christianity is that the instinctive but otherworldly yearning emotion of ‘joy’ (in German, Sehnsucht) implies that there exists some means of satisfying this urge; otherwise humans would not experience it.

This is sometimes termed the ‘argument from desire’. In brief, it states that because humans profoundly and spontaneously desire something not of this world, the experience suggests the reality of the supernatural. Lewis used the argument in many of his best known Christian writings. In Mere Christianity, he argues that ‘[i]f I find in myself a desire which no experience in this world can satisfy, the most probable explanation is that I was made for another world’. In ‘The Weight of Glory’, he notes that ‘we remain conscious of a desire which no natural happiness will satisfy’. And in the autobiographical Surprised by Joy, he comments that ‘[i]n a sense, the central story of [his] life is about nothing else’.

But Lewis was not alone among his friends in formulating an argument from desire. Perhaps the idea’s most powerful and compelling exposition can be found in a little-known and only recently published (1993) story by Lewis’s great friend J.R.R. Tolkien; a tale which was written in about 1959 and appears in the middle of Volume X of The History of Middle-earth, edited by Christopher Tolkien and published in twelve volumes between 1983 and 1996 [1]. Since The History of Middle-earth is read only by Tolkien scholars and enthusiasts, this wonderful dialogue is at present little known or discussed.

It is, of course, no coincidence that both Lewis and Tolkien should write of the argument from desire, since Lewis’s own conversion to Christianity was shaped by this argument: both Tolkien and Hugo Dyson used it in the famous late-night conversation of September 1931 on Addison’s Walk in Magdalen College – an event which was recorded by both Lewis and Tolkien. Tolkien’s epistolary poem ‘Mythopoeia’ (addressed to Lewis) outflanks the counter-argument that this is mere wishful thinking or day-dreaming by asking: ‘Whence came the wish, and whence the power to dream?’ And Tolkien used the argument again in a letter to his son Christopher dated 30 January 1945, in reference to the human yearning for the Garden of Eden:

…certainly there was an Eden on this very unhappy earth. We all long for it, and we are constantly glimpsing it: our whole nature at its best and least corrupted, its gentlest and most humane, is still soaked with the sense of ‘exile’ [2].

But in ‘The Marring of Men’, Tolkien makes the argument from desire the basis of a fiction – and, as so often, Tolkien’s most personal concerns are most powerfully expressed in the terms of the mythic ‘secondary world’ he created.


‘The Marring of Men’

Tolkien’s story was never formally named – but probably the most compelling of its alternative titles was ‘The Marring of Men’, which I have adopted here. In The History of Middle-earth, the story is given its Elven name, ‘Athrabeth Finrod ah Andreth’, translated as ‘The Debate of Finrod and Andreth’. The text of J.R.R. Tolkien’s story is about twenty pages long, with a further forty pages of notes and supplementary material compiled from other writings by J.R.R. Tolkien and notes by Christopher Tolkien.

‘The Marring of Men’ is part of the Silmarillion body of texts, which were composed over many decades, from Tolkien’s young adulthood during World War I right up until his death in 1973. This body of texts is sometimes referred to in its totality as Tolkien’s ‘Legendarium’, to distinguish it from the single-volume Silmarillion selected by J.R.R. Tolkien’s son Christopher, and published in 1977.

The situation in ‘The Marring of Men’ is that of a conversation between Andreth, a mortal human woman, and Finrod Felagund, an immortal Noldo, a ‘High’ Elf. The explicit subject of their conversation is the nature and meaning of mortality, and its implications for the human condition – a subject which is probably the most fundamental of all religious topics, and which is certainly the single main interest and underlying theme of most of Tolkien’s fiction, including The Lord of the Rings. The implicit subject of the conversation is original sin and the fallen nature of Man – which is why the title ‘The Marring of Men’ seems appropriate.

But the conversation between Andreth and Finrod is not simply an abstract philosophical debate: It is fuelled both by world events and by personal experiences. The protagonists are aware of the imminent prospect of Middle-earth being irrevocably overrun and permanently destroyed by Morgoth. (The selfishness and assertive pride of Morgoth, the corrupt Vala or ‘fallen angel’ analogous to the Christian devil, are the primary origin of evil in Tolkien’s world.)

The personal element comes from the fact that the now middle-aged woman Andreth had fallen mutually in love with Finrod’s brother Aegnor in her youth, and had wished to marry the immortal Elf; but she was ultimately rejected by the Elf, who left to follow the call of duty and fight in the (believed hopeless) wars against Morgoth. It emerges during the conversation that Aegnor’s most compelling reason for rejecting Andreth was that he did not want love to turn to pity at her advancing age, infirmity and ultimate mortality – but (in Elven fashion) wished to preserve a memory of perfect love unstained by pity.

The ‘marring’ referred to in the title is mortality. The first question is whether Men were created mortal, or whether Men were originally immortal but lapsed into mortality due to some event analogous to original sin.


Immortal Elves and Mortal Men

While mortality is a universal feature of the human condition as we know it in the primary world, the Elven presence in Tolkien’s secondary world brings to this debate a contrast unavailable in human history. Tolkien asks in which ways the issue of mortality would be sharpened and made inescapable if mortal Men found themselves living alongside immortal Elves – creatures who, while they can be killed, do not die of age or sickness, and, if killed, can be reincarnated or remain as spirits within the world.

Tolkien’s Elves are fundamentally the same species as Men – both are human in the biological sense that Men and Elves can intermarry and reproduce to have viable offspring (who are then offered the choice whether to become immortal Elves or mortal Men). Elves are also religious kin to Men in that both are ‘children’ of the one God (Elves having been created first). But Elves seem, at the time of this story, to be superior to Men, in that Elves are immortal in the sense defined above. Elves do not suffer illness; they are more intelligent (‘wise’) than men, more beautiful, more knowledgeable and more artistic; Elves also have a much more vivid, lasting and accurate memory than Men.

The question arises in the secondary world: If Elves are immortal and generally superior in abilities, what is the function of Men? Why did Eru (the ‘One’ God) create mortal Men at all, when he had already created immortal Elves? Implicitly, Tolkien is also asking the primary world question why God created mortal and imperfect Men when he could have created more perfect humans – like the immortal Elves?

Tolkien’s answer is subtle and indirect, but seems to be related to the single key area in which the greatest mortal Men are superior to Elves: courage. Most of the ‘heroes’ in Tolkien’s world, those who have changed the direction of history, are mortal Men (or indeed Hobbits, who are close kin to mortal Men); and there seems to be a kind of courage possible for mortals which is either impossible for, or at least much rarer among, Elves. Elves have (especially as they grow older) a tendency to despondency, detachment and the avoidance of confrontation. On a related note, Tolkien hints that Men are free in a way in which Elves are not, and that this freedom is integral to the ultimate purpose of Men in Tolkien’s world – and by implication also in the real world.

C.S. Lewis once stated (albeit from the pen of a fictional devil!) that courage was the fundamental human virtue, because it underpinned all other virtues: Without courage other virtues would be abandoned as soon as this became expedient:

"Courage is not simply one of the virtues, but the form of every virtue at the testing point, which means, at the point of highest reality. A chastity or honesty, or mercy, which yields to danger will be chaste or honest or merciful only on conditions. Pilate was merciful till it became risky." [3]

At any rate, courage seems to be one virtue in which the best of Tolkien’s mortal Men seem to excel.


The Fall of Men

The first question is whether Tolkien’s One God ‘Eru’ originally created immortal Men, who had been ‘marred’ and made mortal by the time of Andreth (and, by implication, our time). This is Andreth’s first view – the mortal woman suspects that Men were meant to be immortal but have been punished with mortality:

‘[T]he Wise among men say: “We were not made for death, nor born ever to die. Death was imposed upon us.” And behold! the fear of it is with us always, and we flee from it ever as the hart from the hunter’ [4].

‘We may have been mortal when first we met the Elves far away, or maybe we were not.... But already we had our lore, and needed none from the Elves: we knew that in our beginning we had been born never to die. And by that, my lord, we meant: born to life everlasting, without any shadow of an end’ [5].

Naturally, this prompts the Christian reader to think of parallels with the Fall of Man and original sin; and this analogy is clearly intended by Tolkien.

Andreth talks of a rumour she has heard from the wise men and women among her ancestors, that perhaps in the past Men committed a terrible but undefined act which was the cause of this marring. The implication, never made fully clear, is that Men in their freedom may have deviated from their original role as conceived by ‘the One’, and been corrupted or intimidated into worshipping Morgoth, or at least into doing his will and in some way serving his purposes. This, it is suggested, may be the cause of Men’s mortality as such, along with a progressive shortening of their lifespan and a permanent dissatisfaction and alienation from the world they inhabit and even their own bodies. In the dialogue, Finrod asks:

‘[W]hat did ye do, ye men, long ago in the dark? How did ye anger Eru?... Will you not say what you know or have heard?’

‘I will not’, said Andreth. ‘We do not speak of this to those of other race. But indeed the Wise are uncertain and speak with contrary voices; for whatever happened long ago, we have fled from it; we have tried to forget, and so long we have tried that now we cannot remember any time when we were not as we are’ [6].


Men’s Lifespan

By contrast to their uncertainty about the origin of mortality, the decline in mortal lifespan caused by Morgoth’s corruption of the world seems certain to both Andreth and Finrod. Later in Tolkien’s history, those Men who help defeat Morgoth are rewarded with a lifespan of about three times Men’s usual maximum, i.e. about 300 years; greater strength, intelligence and height; and a safe island off the coast of Middle-earth on which to dwell (Numenor, Tolkien’s Atlantis).

It seems possible that the enhancements of ‘Numenorean’ Men are simply a restoration of the original condition of Men. Or it may be that these enhancements are a bestowal of Elvenness, rendering Men more Elven (though still mortal), perhaps with the ultimate aim of a unification of Elves and Men. At any rate, the majority of Numenoreans eventually succumb to corruption and evil, and are destroyed by Eru in a massive reshaping of the world, which drowns both the island and the vast Numenorean navy then landing on the shores of the undying lands.

For Tolkien, it is a characteristic sin of Men to cling to life, and it is this clinging which corrupts the mortal but long-lived Numenoreans who try to invade the undying lands – either in the mistaken belief that they will become immortal by dwelling there, or with the intention to compel the Valar to grant them immortal life.

While Men are characteristically tempted to elude mortality – to stop change in themselves – the almost-unchanging Elves are tempted to try to stop change in the world – to embalm beauty in perfection. This Elven sin is related to the first tragedy of the Silmarillion, when ultimate beauty – the light of the primordial trees – is captured in three jewels; and it later leads to the creation of the Rings of Power, which are able to slow time almost to a stop, and thereby to arrest the pollution and wearing-down of Middle-earth.

As well as having an increased lifespan, Numenoreans surrender their lives voluntarily at the appropriate time, and before suffering the extreme degenerative changes of age. This voluntary death (or transition) at the end of a long life is described in the most moving of the appendices to The Lord of the Rings, when Aragorn (the last true Numenorean) yields his life at will to move on to another world. His wife Arwen pleads with him to hold on to life for a while longer to keep her company in this world; however Aragorn kindly but firmly refuses her request:

‘Let us not be overthrown at the final test, who of old renounced the Shadow and the Ring. In sorrow we must go, but not in despair. Behold! we are not bound forever to the circles of the world, and beyond them is more than memory. Farewell!’ [7]

Arwen’s fate is tragic, because she is one of the ‘half-elven’ who may choose whether to become Man or Elf; she chooses to become mortal in order to marry Aragorn and share his fate. However, her resolve to accept mortality at the proper time is undermined by her ‘lack of faith’ in Man’s destiny of life after death. In the appendix, she is portrayed as regretting becoming a mortal instead of an Elf; and as having succumbed to the sin of clinging to mortal life rather than accepting mortality and trusting that there is life after death.

"…and the light of her eyes was quenched, and it seemed to her people that she had become cold and grey as nightfall in winter that comes without a star." [8]

The half-elven Arwen has failed to embrace the mortal need for courage to underpin all other virtues; and one possible interpretation of this passage is that this has consequences for her fate in the next world.


At Home in the World, or Exiled?

For Tolkien (and Lewis), the sense of exile is a ‘desire’ which implies the possibility of its gratification; in other words, it reflects the fact that Men have indeed been ‘exiled’ from somewhere other than this world.

Finrod makes clear that Elves, by contrast, feel fully at home in the world to which they are tied:

‘Each of our kindreds perceives Arda differently, and appraises its beauties in different mode and degree. How shall I say it? To me the difference seems like that between one who visits a strange country, and abides there a while (but need not), and one who has lived in that land always (and must)’. [9]

‘Were you and I to go together to your ancient homes east away I should recognize the things there as part of my home, but I should see in your eyes the same wonder and comparison as I see in the eyes of Men in Beleriand who were born here’. [10]

Elves therefore care for the world more than Men, and do not exploit nature as Men do, but nurture and enhance the world. And indeed Elves are not truly immortal, since when the world eventually ends, they will die; and to Finrod it seems likely that this death will mean utter annihilation:

"You see us...still in the first ages of being, and the end is far off.... But the end will come. That we all know. And then we must die; we must perish utterly, it seems, because we belong to Arda (in [body] and in [spirit]). And beyond that what? The going out to no return, as you say; the uttermost end, the irremediable loss?" [11]

Partly because of this prospect, the almost-unchanging Elves become increasingly grieved by the ravages of time upon the world, and cumulatively overcome by weariness with their extended lives. Hence the characteristically Elven temptation to try to stop time, to arrest change.

By contrast, Men seem to Finrod like ‘guests’, always comparing the actual world of Middle-earth to some other situation. This opens up the question of Tolkien’s version of ‘the argument from desire’. Finrod thinks that Men have an inborn, instinctive knowledge of another and better world. Hence, he thinks that they never were immortal, but have always known death as a transition to another, more perfect world – not as the prospect of annihilation which Elves face. Thus, he considers the possibility that Men’s ‘mortality’ is ultimately preferable to Elven ‘immortality’.

But even in this world Finrod suspects that the destiny of Men may eventually be higher than that of Elves. He acknowledges that at the time of his debate with Andreth the Elves are the superior race in most respects; but he can envisage a time when mortal Men will attain leadership, and the Elves will be valued mainly for the scholarly and artistic abilities fostered by their more accurate and vivid memories. This projected role of Men will be related to the healing of the world from the evil that was permeated through it by Morgoth.

One possible interpretation of this is that Elves cannot heal the marred world because they are tied to, part of, that world; but that mortal Men may be able to heal it because, although they themselves share the marring of the world, they are ultimately free from that world through death.


Tolkien’s Vision of Heaven

Building on hints by Andreth, Finrod intuits that if things had gone according to Eru’s original plan, there would have been no need for Men. The first-born, immortal Elves would have been the best inhabitants and custodians of an unmarred world, because their very existence was tied to it.

But since the demiurgic Morgoth infused creation with evil at a very early stage, Eru made a second race of mortals – Men – who lived in the world for a while, then passed on to another condition. Because mortals were not tied to the world, they had the freedom to act upon the world in a way that Elves did not. This freedom of Men could be misused to exploit the world short-sightedly; but it could also be used to heal the world, to the benefit of both mortals and immortals alike.

[Finrod]: ‘This then, I propound, was the errand of Men, not the followers but the heirs and fulfillers of all: to heal the marring of Arda’.

Indeed, Finrod perceives that to clarify this insight may be the main reason for their discussion: so that Andreth may learn the meaning of mortality from Finrod, and pass this knowledge on to other Men, to save them from despair and encourage them in hope.

[Finrod]: ‘Maybe it was ordained that we [Elves], and you [Men], ere the world grows old, should meet and bring news to one another, and so we should learn of the Hope from you; ordained, indeed, that thou and I, Andreth, should sit here and speak together, across the gulf that divides our kindreds’. [12]

Andreth suggests that Eru himself may intervene for this hope.

[Andreth]: ‘How or when shall healing come?…To such questions only those of the Old Hope (as they call themselves) have any guess of an answer.… [T]hey say that the One will himself enter into Arda, and heal Men and all the Marring from the beginning to the end’. [13]

Finrod cannot at first understand how this could be, and Andreth herself seems to regard it as highly implausible – a wishful dream. But on reflection, Finrod argues:

‘Eru will surely not suffer [Morgoth] to turn the world to his own will and to triumph in the end. Yet there is no power conceivable greater than [Morgoth] save Eru only. Therefore, Eru, if he will not relinquish His work to [Morgoth], who must else proceed to mastery, then Eru must come in to conquer him’. [14]

The Christian parallels are obvious. Indeed, ‘The Marring of Men’ can be seen as part of Tolkien’s lifelong endeavour to make his legendarium (originally conceptualized as a ‘mythology for England’) broadly compatible with known human history, particularly Christian history [15].

Andreth’s hints inspire Finrod to a vision which offers ultimate hope to the immortal but finite Elves as well as to mortal Men:

‘Suddenly I beheld a vision of Arda Remade; and there the [High Elves] completed but not ended could abide in the present forever, and there could walk, maybe, with the Children of Men, their deliverers, and sing to them such songs as, even in the Bliss beyond bliss, should make the green valleys ring and the everlasting mountain-tops to throb like harps’.

‘We should tell you tales of the Past and of Arda that was Before, of the perils and great deeds and the making of the Silmarils. We were the lordly ones then! But ye, ye would then be at home, looking at all things intently, as your own. Ye would be the lordly ones’. [16]

This, then, is Tolkien’s vision of Heaven, pictured in the context of Arda, his sub-created world.


Myth and reality

The conversation of Andreth and Finrod occurs during a lull before the storm of war breaks upon Middle-earth; and Finrod foresees that the next stage of war will claim the life of his brother Elf Aegnor, whom the mortal woman Andreth loved in her youth and loves still. The fragment ends with Finrod bidding Andreth farewell by reaffirming, ‘you are not for Arda. Whither you go you may find light. Await us there, my brother – and me’. Andreth’s destiny lies beyond the world, and Finrod dares to hope that this is true for the Elves also.

In Tolkien’s legendarium, loss or transmission of knowledge is always a matter of concern. The message we take away from ‘The Marring of Men’ is hopeful. We are called to infer that this conversation has ‘come down’ to us today: that it was remembered, recorded, and has survived the vicissitudes of history, possibly because we modern readers need or are meant to know this.

Just as Morgoth’s marring of the World and of Men is analogous to the Christian account of the Fall of Satan and of original sin, Finrod and Andreth’s intuitions and hopes, Tolkien implies, were vindicated in real history by the coming of Jesus Christ. And Tolkien’s sub-creative vision of heaven, as explicated by Finrod, is meant to be taken seriously as an image of true heaven, in which Tolkien believed as a Christian. It is entirely characteristic that Tolkien’s heaven should have a place for Elves as well as for Men.

Tolkien’s story ‘The Marring of Men’ – though so brief a tale – seems to me one of his most beautiful and profound: a product of deep thought and visionary inspiration. It encapsulates nothing less than Tolkien’s mature understanding of the human condition and the meaning of life. Scholars and admirers of C.S. Lewis who are unfamiliar with Tolkien’s legendarium may find a way into his magnificent fantasy by reading it as complementary to Lewis’s great idea of ‘joy’ and his characteristic ‘argument from desire’: here Tolkien develops and completes themes which underpin much of his old friend’s best and most serious work.


1. J.R.R. Tolkien, Morgoth’s Ring: The History of Middle-earth, Volume X, ed. Christopher Tolkien (London: HarperCollins, 2002 [1993]), pp. 301-366.
2. Humphrey Carpenter (ed.), Letters of J.R.R. Tolkien (London: George Allen and Unwin, 1981), p. 110.
3. C.S. Lewis, The Screwtape Letters (New York: Macmillan, 1951), p. 148.
4. Morgoth’s Ring, p. 309.
5. Ibid., p. 314.
6. Ibid., p. 313.
7. J.R.R. Tolkien, The Return of the King (London: George Allen and Unwin, 1974), p. 309.
8. Ibid., p. 309.
9. Morgoth’s Ring, p. 315.
10. Ibid., p. 316.
11. Ibid., p. 312.
12. Ibid., p. 323.
13. Ibid., p. 321.
14. Ibid., p. 322.
15. This is the subject of Verlyn Flieger’s book Interrupted Music: The Making of Tolkien’s Mythology (Kent, OH: Kent State University Press, 2005).
16. Morgoth’s Ring, p. 319.

*

Thursday 22 May 2008

Social class IQ differences and university access

Social class differences in IQ: implications for the government’s ‘fair access’ political agenda

A feature for the Times Higher Education - 23 May 2008

Also at: http://www.timeshighereducation.co.uk/Journals/THE/THE/22_May_2008/attachments/Times%20Higher%20IQ%20Social%20Class.doc

Bruce G Charlton

Since ‘the Laura Spence Affair’ in 2000, the UK government has spent a great deal of time and effort in asserting that universities, especially Oxford and Cambridge, are unfairly excluding people from low social class backgrounds and privileging those from higher social classes. Evidence to support the allegation of systematic unfairness has never been presented; nevertheless, the accusation has been used to fuel a populist ‘class war’ agenda.

Yet in all this debate a simple and vital fact has been missed: higher social classes have a significantly higher average IQ than lower social classes.

The exact size of the measured IQ difference varies according to the precision with which social class is defined – but in all the studies I have seen, the measured social class IQ difference is substantial, and relevant to the issue of university admissions.

The existence of substantial class differences in average IQ has been uncontroversial, and widely accepted for many decades, among those who have studied the scientific literature. And IQ is highly predictive of a wide range of positive outcomes, including educational duration and attainment, income, and social status (see Deary – Intelligence, 2001).

This means that in a meritocratic university admissions system there will be a greater proportion of higher class students than lower class students admitted to university.

What is less widely understood is that – on simple mathematical grounds – it is inevitable that the differential between upper and lower classes admitted to university will become greater the more selective the university.

***

There have been numerous studies of IQ according to occupational social class, stretching back over many decades. In the UK, IQ is normally distributed with an average of 100 and a standard deviation of 15.

Social class is not an absolute measure, and the size of class differences in biological variables (such as health or life expectancy) varies according to how socio-economic status is defined (e.g. by job, income or education) and how precisely it is measured. For example, the number of class categories and the exactness of the measurement method both matter: years of education or annual salary will generate bigger differentials than cruder measures such as job classification, postcode deprivation ratings, or state versus private education.

In general, the more precise the definition of social class, the larger will be the measured social class differences in IQ and other biological variables.

Typically, the average IQ of the highest occupational Social Class (SC) - mainly professional and senior managerial workers such as professors, doctors and bank managers - is 115 or more when social class is measured precisely, and about 110 when social class is measured less precisely (eg. mixing-in lower status groups such as teachers and middle managers).

By comparison, the average IQ of the lowest social class of unskilled workers is about 90 when measured precisely, or about 95 when measured less precisely (eg. mixing-in higher social classes such as foremen and supervisors or jobs requiring some significant formal qualification or training).

The non-symmetrical distribution of the class averages around the population mean of 100 is probably because some of the highest-IQ people can be found doing unskilled jobs (such as catering or labouring), but the lowest-IQ people are very unlikely to be found doing professional jobs requiring selective education (such as medicine, architecture, science or law).

In round numbers, there are differences of nearly two standard deviations (or 25 IQ points) between the highest and lowest occupational social classes when class is measured precisely; and about one standard deviation (or 15 IQ points) difference when SC is measured less precisely.

I will use these measured social class IQ differences of either one or nearly two standard deviations to give upper and lower bounds to estimates of the differential or ratio of upper and lower social classes we would expect to see at universities of varying degrees of selectivity.

We can assume that there are three types of universities of differing selectivity roughly corresponding to some post-1992 ex-polytechnic universities; some of the pre-1992 Redbrick or Plateglass universities (eg. the less selective members of the Russell Group and 1994 Group), and Oxbridge.

The ‘ex-poly’ university has a threshold minimum IQ of 100 for admissions (ie. the top half of the age cohort of 18 year olds in the population – given that about half the UK population now attend a higher education institution), the ‘Redbrick’ university has a minimum IQ of 115 (ie. the top 16 percent of the age cohort); while ‘Oxbridge’ is assumed to have a minimum IQ of about 130 (ie. the top 2 percent of the age cohort).

***


Table 1: Precise measurement of Social Class (SC) – approximate proportion of 18-year-old students eligible for admission to three universities of differing minimum-IQ selectivity

                          Ex-poly (IQ 100)  Redbrick (IQ 115)  Oxbridge (IQ 130)
Highest SC (av. IQ 115):  84 percent        50 percent         16 percent
Lowest SC (av. IQ 90):    25 percent        5 percent          ½ percent
Expected SC differential: 3.3-fold          10-fold            32-fold



Table 2: Imprecise measurement of Social Class (SC) – approximate proportion of 18-year-old students eligible for admission to three universities of differing minimum-IQ selectivity

                          Ex-poly (IQ 100)  Redbrick (IQ 115)  Oxbridge (IQ 130)
Highest SC (av. IQ 110):  75 percent        37 percent         9 percent
Lowest SC (av. IQ 95):    37 percent        9 percent          1 percent
Expected SC differential: 2-fold            4-fold             9-fold
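The eligible proportions in the two tables follow directly from the normal distribution of IQ. As a minimal sketch (standard library only; the class means and university thresholds are those assumed above), the Table 1 figures can be reproduced like this:

```python
from statistics import NormalDist

SD = 15  # standard deviation of IQ in the UK population

def eligible_fraction(class_mean_iq, university_min_iq):
    """Fraction of a social class (IQ normally distributed around the
    class mean, SD 15) at or above a university's minimum-IQ threshold."""
    return 1 - NormalDist(mu=class_mean_iq, sigma=SD).cdf(university_min_iq)

# Table 1: precise measurement of social class (class means 115 and 90)
for name, min_iq in [("Ex-poly", 100), ("Redbrick", 115), ("Oxbridge", 130)]:
    hi = eligible_fraction(115, min_iq)  # highest SC
    lo = eligible_fraction(90, min_iq)   # lowest SC
    print(f"{name}: highest SC {hi:.0%}, lowest SC {lo:.1%}, "
          f"differential {hi / lo:.0f}-fold")
```

Note that the unrounded Oxbridge differential comes out at about 41-fold; the 32-fold figure in Table 1 results from rounding the lowest-class eligibility to ½ percent.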

***

When social class is measured precisely, the expected Highest-SC to Lowest-SC differential increases from about three-fold at relatively unselective universities to more than thirty-fold at highly selective universities (comparing the percentages eligible with the proportions in the national population).

In other words, if this social class IQ difference is accurate, the average child from the highest social class is approximately thirty times more likely to qualify for admission to a highly selective university than the average child from the lowest social class.

When using a more conservative assumption of just one standard deviation in average IQ between upper (IQ 110) and lower (IQ 95) social classes there will be significant differentials between Highest and Lowest social classes, increasing from two-fold at the ‘ex-poly’ through four-fold at the ‘Redbrick’ university to ninefold at ‘Oxbridge’.

Naturally, this simple analysis is based on several assumptions, each of which could be challenged and adjusted; and further factors could be introduced. However, the take-home-message is simple. When admissions are assumed to be absolutely meritocratic, social class IQ differences of plausible magnitude lead to highly significant effects on the social class ratios of students at university when compared with the general population.

Furthermore, the social class differentials inevitably become highly amplified at the most selective universities such as Oxbridge.

Indeed, it can be predicted that around half of a random selection of children whose parents are among the IQ-130 ‘cognitive elite’ (e.g. with both parents and all grandparents successful in professions requiring high levels of selective education) would probably be eligible for admission to the most selective universities, or to the most selective professional courses such as medicine, law and veterinary medicine. By contrast, only about one in two hundred children from the lowest social stratum would be eligible for admission on meritocratic grounds.

In other words, with a fully-meritocratic admissions policy we should expect to see a differential in favour of the highest social classes relative to the lowest social classes at all universities, and this differential would become very large at a highly-selective university such as Oxford or Cambridge.

The highly unequal class distributions seen in elite universities compared to the general population are unlikely to be due to prejudice or corruption in the admissions process. On the contrary, the observed pattern is a natural outcome of meritocracy. Indeed, anything other than very unequal outcomes would need to be a consequence of non-merit-based selection methods.


Selected references for social class and IQ:

Argyle, M. The Psychology of Social Class. London: Routledge, 1994. (Page 153 contains tabulated summaries of several studies, with Social Class I IQs estimated from 115-132 and lowest social class IQs from 94-97.)

Hart, C.L. et al. The Scottish Mental Survey 1932 linked to the Midspan studies: a prospective investigation of childhood intelligence and future health. Public Health 2003; 117: 187-195. (Social Class I IQ 115, Social Class V IQ 90; deprivation category 1 IQ 110, deprivation category 7 IQ 92.)

Nettle, D. Intelligence and class mobility in the British population. British Journal of Psychology 2003; 94: 551-561. (Estimates approximately one standard deviation between lowest and highest social classes.)

Validity of IQ – see Deary, I.J. Intelligence: A Very Short Introduction. Oxford: Oxford University Press, 2001.

Note - It is very likely that IQ is _mostly_ hereditary (I would favour the upper bound of the estimates of heritability, with a correlation of around 0.8). But because IQ is not _fully_ hereditary there is ‘regression towards the mean’, such that the children of high-IQ parents will average a lower IQ than their parents (and vice versa). The degree of regression varies according to the genetic population from which the people are drawn: high-IQ individuals from a high-IQ population will exhibit less regression towards the mean, because the ancestral population mean IQ is higher. Because reproduction in modern societies is ‘assortative’ with respect to IQ (i.e. people tend to have children with other people of similar IQ), and because this assortative mating has been going on for several generations, the expected regression towards the mean will differ according to specific ancestry. Due to this complexity, I have omitted any discussion of regression to the mean from the above article, which was written for a non-specialist audience.
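The regression effect described in this note reduces to a one-line formula. The sketch below is purely illustrative: the 0.8 correlation and the population means are assumed values, as in the note, not measurements.

```python
def expected_child_iq(midparent_iq, population_mean=100.0, correlation=0.8):
    """Expected child IQ regresses toward the mean of the ancestral
    population from which the parents are drawn (assumed parent-child
    correlation ~0.8, per the note above)."""
    return population_mean + correlation * (midparent_iq - population_mean)

# Parents averaging IQ 130, drawn from the general population (mean 100):
print(expected_child_iq(130))                        # 124.0
# The same parents drawn from a high-IQ subpopulation (mean 115):
print(expected_child_iq(130, population_mean=115))   # 127.0
```

The second call shows the point about ancestry: the higher the ancestral population mean, the less the children regress.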

Friday 9 May 2008

Class Sizes in UK universities

What has happened to class sizes in Russell Group universities? The need for national data.

Oxford Magazine - 2007

Bruce Charlton

In a decade my final-year class size has gone from around 16-24 to 84 and 123. First and second year classes are around 200 students. In other words, aside from a handful of tutorials and supervisions of dissertations or projects, it seems as if students now go through the whole degree in very large classes.

What I would like to know is whether this massive decline in teaching quality is typical of the top 50 (ie. roughly pre-1992) UK universities in general, and of the large Russell Group research universities in particular.

Anecdotally, the answer would seem to be yes, such increases in class size are typical. In the past, introductory lectures were big, but as students progressed groups became smaller. But the remarkable fact is that no one really knows what's going on, because information on university class sizes is not collected nationally – or if it is, it is not publicized or distributed.

In particular, although the national university "teaching inspectorate", the Quality Assurance Agency (QAA), examined a great deal of paperwork (a whole roomful of box files in the case of my department), and indirectly generated vastly more, it failed to measure class size.

Just think about that for a moment. The QAA neglected to measure the single most important, best understood, most widely-discussed measure of teaching quality: class size.

It is no mystery why class sizes have expanded. Over 25 years, funding per student has declined by more than half, and the average number of students per member of staff increased from about 8 to 18. In the face of long-term cuts, a decline in teaching quality was inevitable. Indeed, it was anticipated: the QAA was created in order to monitor and control this decline.

But is class size important? Of course it is! In the first place, the public regard class size as the single most significant measure of teaching quality. Every parent with a child at school knows their class size. Parents who pay for their children to attend private schools are often explicitly paying for smaller classes.

It is not just in schools that size matters. US universities publish class-size statistics that are closely scrutinised by applicants. For instance, US News gives data on the percentage of classes with fewer than 20 students and the percentage with 50 or more. Around 70 per cent of classes at top research universities such as Princeton currently have fewer than 20 students, and fewer than 15 per cent have 50 or more. The expensive and prestigious undergraduate liberal arts colleges offer about the same proportion of small classes and an even smaller proportion of large classes, and in addition guarantee that classes are always taught by professors (rather than teaching assistants).

A way of measuring the importance of class size is to see what people are prepared to pay. In a small comparison of public and private universities in America, Peter Andras and I found that students at the private institutions paid on average 80 per cent more in tuition fees, for which they got 80 per cent more time in small classes.

Given the usefulness of a valid and objective measure of university teaching quality, and the overwhelming evidence of public demand for small classes, the case for publishing national data on university class sizes seems unanswerable. I would guess that class size data is already available within the central administration of many UK universities, because they record the number of students registered for each course for their own internal administrative purposes. It is just a matter of collecting and summarizing the information, and publishing it nationally.

However, I doubt that universities will publish class-size data unless they are made to do so. University bosses probably feel too embarrassed to admit the real situation: nobody wants to be the first to put their head above the parapet with shocking statistics. Alternatively, if or when the three-thousand-pound cap is taken off university fees, some universities with small classes may start to publish this data in order to justify charging higher fees, and eventually all universities may be forced to follow suit.

But why wasn’t the QAA interested in collecting and publishing data on UK university class sizes? I cannot think of any good educational reason. It has managed to spend well over £53m on data collection and auditing since being set up, plus many times that amount in opportunity costs incurred by UK universities, yet amazingly failed to provide a valid measure of teaching quality.

Incompetence and inefficiency on this scale would beggar belief if the QAA really was concerned with teaching quality - but of course it never has been.

Friday 25 April 2008

UK Elite Universities

Which are the elite universities in the UK? And why is the number declining?

Bruce Charlton

Oxford Magazine. 2008; 275: 22-23.

***

How many elite universities are there currently in the UK? And which are they?

If ‘elite’ is defined in terms of the intellectual quality of their students, then the number of elite UK universities has declined very substantially from about 35 to about 12.

I suggest that the main reason for this decline is the expansion of the undergraduate intake in the most-selective universities.

My suggestion is that the current elite UK undergraduate universities are: Oxford, Cambridge, LSE, Imperial, Warwick, St Andrew's, UCL, York, Bristol, Edinburgh, Bath and Durham.

***

Introduction

There were about 50 UK universities pre-1992 (when the former polytechnics were re-christened). The current ‘elite’ of these pre-1992 institutions are usually considered to be the thirty-eight research-orientated universities that are members of either the Russell Group (larger institutions) or the 1994 Group (smaller institutions).

Among the Russell and 1994 Groups, according to the Sunday Times University Guide, the top-twenty most-selective UK universities are, in order: Oxford, Cambridge, LSE, Imperial, Warwick, St Andrew's, UCL, York, Bristol, Edinburgh, Bath, Durham, Nottingham, Manchester, King's, Glasgow, Birmingham, Sheffield, Southampton and Newcastle.

But how many UK universities are elite? Are all of the Russell and 1994 Group universities elite, or just the Sunday Times top-20, or more, or fewer? The answer depends on how terms are defined.


Defining the cognitive elite of students

I will define elite universities as those recruiting mostly from the top 10 percent of the population in terms of IQ. Since IQ in the UK has an average of 100 with a standard deviation of 15, the top 10 percent of the UK population have an IQ of about 120 plus.

IQ mainly measures rapidity of learning and ability at abstract logical thinking. It is highly predictive of a wide range of successful outcomes in modern societies such as educational attainment, salary, life expectancy and social class. But IQ does not measure all valuable attributes – for example a ‘conscientious’ personality capable of sustained and methodical work also predicts success in many domains. (For a clear and balanced discussion of IQ see Intelligence: a very short introduction, by Ian J Deary from OUP.)

My definition of the cognitive elite derives from the work of IQ scholars such as Linda Gottfredson or Richard Herrnstein and Charles Murray (authors of The Bell Curve). They note that US data suggest that a relatively small number of ‘high-IQ’ professions, with an average entry standard of IQ 120-plus, absorb about half of the cognitive elite.

These professions include accountants, architects, scientists, computer scientists, social scientists, university teachers, mathematicians, engineers, lawyers, dentists and physicians. Leading Chief Executives and senior managers make up the other main high-IQ group.

The suggestion is that the great majority of the national elite in societies such as the US and the UK are drawn from the top ten percent of people, with an IQ of at least 120; since in modern developed societies (though less so in less complex societies) almost all leadership positions require a high level of the attributes, such as rapid learning and abstract thinking, which are measured by IQ.


Defining an elite university – a majority of elite students

Using the IQ 120 threshold, I will define an elite university as an institution that has a majority of students in the top ten percent, with an IQ at or above 120.

There are currently approximately 800,000 eighteen year olds in the UK population in any given year. This means there are about 80,000 potential undergraduates per year in the cognitive elite group having an IQ above 120 (ignoring undergraduates from abroad).
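The cohort arithmetic can be checked in a few lines; this sketch uses the stated assumptions (800,000 18-year-olds per year; IQ normal with mean 100 and SD 15) and nothing else:

```python
from statistics import NormalDist

cohort = 800_000  # UK 18-year-olds per year, as assumed in the text
iq = NormalDist(mu=100, sigma=15)

# IQ cut-off for the top ten percent of the population
top_decile_cut = iq.inv_cdf(0.90)   # ~119.2, i.e. "about 120"

# Size of the annual cognitive-elite pool
elite_pool = round(cohort * 0.10)   # 80,000 potential elite undergraduates

print(f"Top-decile IQ cut-off: {top_decile_cut:.1f}")
print(f"Cognitive-elite 18-year-olds per year: {elite_pool:,}")
```

The exact decile cut-off (about 119.2) is slightly below the round figure of 120 used in the article; the conclusions are unaffected by this rounding.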

I roughly estimated the numbers of first-year undergraduates at the Sunday Times guide’s top-20 most selective UK universities by taking the number of undergraduates listed in Wikipedia and dividing by three (this will somewhat overestimate the number of first-years, because some undergraduate degrees last longer than three years – for example MAs at the Scottish universities, and several professional and vocational degrees).

In round numbers it turns out that there are around 80,000 undergraduate first-year places at the top-20 most selective UK universities – i.e. about the same number of first-year places at top-20 universities as there are IQ-120 18-year-olds. I will assume that virtually all of the top ten percent of 18-year-olds by IQ will go into higher education – and this seems to be largely correct.

So, if there was a perfect system and assortment of students by IQ, there would be enough 120-IQ students completely to fill the top twenty universities with none left over, or else to provide between 20 and 40 universities with a slim majority of cognitive elite students.

However, this cannot be the case; because in practice cognitive elite students are spread across a much larger number of institutions. This happens due to personal choice (students who choose to attend a less-selective institution than their qualifications would allow), constraints on personal mobility (eg. students’ need to attend a local institution), centres of excellence located in lower-ranked and less-selective institutions (such as medical schools and law schools – which may be attracting 120-IQ students to institutions that are considerably less selective than this on average) – and of course the inevitable imperfections of national examinations and institutional selection procedures.

My guesstimate, therefore, is that fewer than half of the age cohort of 80,000 elite students – not more than 35,000 per year – will find their way into the top-20 most selective UK universities.

It is worth focussing on this number for a moment. My proposition is that there are at most just 35,000 IQ-120 university students for whom all the best universities are competing. It does not take very many universities to absorb 35,000 UK students per year.

This analysis implies that at most twenty UK universities can be regarded as truly elite, in the defined sense that it is possible for them to have a majority of students from the top 10 percent by IQ.


Fewer than twenty elite UK universities

However, twenty elite UK universities is an upper limit, and in practice the number of elite universities must be lower than twenty.

A further down-grading of this estimate is required because there will be large differences in the proportion of the cognitive elite even among elite universities defined in this fashion.

If US data on the Ivy League are taken as a guide, a university such as Oxford or Cambridge will probably have students with an average IQ of more like 145 – three standard deviations above average, or roughly the top 0.1 percent (the top thousandth) of the UK population. We should therefore assume that virtually all Oxbridge students have an IQ above 120. This would mean that more than six thousand of the best of the top-ten-percent students in each year’s cohort go to Oxbridge alone.

Recall that there are only about 35,000 potential elite undergraduates per year. If the top two universities pretty much fill up with elite students, then the same applies – to a decreasing extent – as we descend the selectivity league table. Each step down in university selectivity will take a lower proportion of the elite among its few-thousand first-year entrants; nonetheless, the threshold at which less than a majority of an institution’s undergraduates have an IQ of 120 will be reached considerably before the twentieth university.

The conclusion is that there are currently somewhere between ten and fifteen elite universities in the UK.

But if we go back forty-something years, the average intake of a UK university was less than half – more like a third – of what it is today. In those days, even the largest of the most selective universities took just a few thousand new undergraduates per year, and some took fewer than a thousand. Inevitably this meant that the cognitive elite was spread thickly across a much larger number of institutions.

My hunch is that forty years ago, instead of about ten to fifteen elite universities there would have been more like thirty to forty elite universities. In other words, a couple of generations ago most UK institutions with the title of ‘university’ could legitimately have been considered ‘elite’.

This means that twenty-something previously elite UK universities have declined to non-elite status over a fairly short period of time – mostly during the past twenty or so years of rapid university expansion.


Who are the current elite among UK universities?

This analysis suggests that there has been a rapid decline from elite status in more than half of the less-selective pre-1992 universities as the most-selective universities have expanded their intake; because relatively few top universities can now hoover-up almost all of the top ten percent of students available for selection.

My point is that a major but neglected cause of the decline in average student cognitive ability, which has been noticed in many of the UK’s most prestigious universities, must surely be the several-fold expansion in the size of the most selective universities. As the annual undergraduate intake of the top UK universities doubled and then trebled, they became able to mop up almost all of the limited supply of circa 35,000 students per year who constitute the UK cognitive elite.

There must therefore have been a very significant decline in the average cognitive ability of undergraduate students at most (but not all) of the Russell and 1994 Group universities – especially in IQ-related abilities such as rapidity of learning and capacity for abstract logical thinking.

The outcome is that the student intake at the minority of most-selective Russell/1994 Group universities is bigger in numbers but has largely retained the same high average IQ as before the massive UK university expansion; whereas at the lower-ranked majority of Russell/1994 universities the post-expansion intakes are not only bigger in numbers but also significantly lower in average IQ. Thus most of the Russell and 1994 Group universities are now non-elite.

In conclusion, I suggest that there are now likely to be only between ten and fifteen elite universities in the UK; where an elite university is defined as one in which the majority of the undergraduates have an IQ in the top ten percent of the population.

Assuming that the Sunday Times data are correct, my tentative suggestion is that the only current elite UK undergraduate universities are: Oxford, Cambridge, LSE, Imperial, Warwick, St Andrew's, UCL, York, Bristol, Edinburgh, Bath and Durham.

***

Second thoughts: 18 March 2009

I would now consider that in the modern educational system, the personality trait of Conscientiousness counts for as much, or more than, IQ in determining examination results.

http://medicalhypotheses.blogspot.com/2009/02/why-are-modern-scientists-so-dull.html.

So I would now refer to these dozen elite universities as having students in the top 10 percent of examination results, instead of the top 10 percent of IQ.