Tuesday 12 March 2013

The climate has cooled - for sure...

*

Since climate researchers (I refuse to call them scientists) are very obviously incompetent liars, and the weather 'forecasters' are political propaganda agencies (as well as being very clearly incompetent - they cannot even describe the current weather, let alone predict it); the only way to decide what is going on is by direct personal experience.

*

Luckily, with 'global' climate, that kind of thing is very easy - because (at the first level of approximation, which is about as far as we can really understand in something as mega-complex as planetary climate) if the 'global' temperature is rising (or falling), then the temperature of the whole earth surface, thus any specific spot on the globe, will also be rising (or falling). ^

(Unless there is some known reason why this might be subverted, such as - for example - when the thermometer measuring temperature has a furnace built next to it.)

*

The only limitation on this inference is concerned with random measurement error - which would be substantial for one small patch of ground and one year, but can be reduced substantially by taking larger areas and observing over several years.

On this basis, the global climate has cooled - with a high degree of probability. I've made the observations.

*

Several years ago I decided that the global climate is either warming or cooling (because it is always doing one or the other, except for a few years when at an apex or trough and the trend is changing direction).

All I had to do was see whether there was a new pattern of extremes - and the number of repetitions and annual frequency of these extremes would determine my degree of certainty. My area was Newcastle upon Tyne.

(But the process is retrospective - we know what has happened up to now, but cannot know whether it is valid to extrapolate past trends into the future. At least not until we have made and tested several predictions of future trends - and then we must recognize the limits of this kind of inductive - non causal - reasoning.)

*

When we had an exceptionally sustained period of exceptional cold in the winter of 2009-10 I noticed this and began to suspect global cooling; but decided that it would need two more confirmations, close together, before I would be sufficiently sure that the climate was cooling.

The next year - 2010-11 - there was another exceptionally sustained period of exceptional cold which this time began in early November - nobody I spoke with could remember such an early winter in this area ever before.

For 2011-12 the winter was not exceptional - indeed it was quite mild.

But this year (2012-13) we have had another severe winter, although with intermittent rather than continuous snow - culminating in severe cold and snow just two nights ago - exceptionally late for snow.

*

With three out of four exceptionally cold winters (an unprecedented thing in my lifetime), I regard the observation as sufficiently replicated to state that the Newcastle upon Tyne climate, hence the global climate, has cooled.

If the same pattern of bunched-together sustained temperature extremes could be described by specific trustworthy persons at a couple of other reasonably sized areas of the earth's surface, then the hypothesis would be clinched; but in the meantime, and in the absence of such evidence - I know what to believe.

*


^NOTE: If, on the other hand, it is assumed that (once random fluctuations have been dealt with) the temperature of one part of the earth's surface can rise while that of another part of the earth's surface is cooling, then the whole concept of a global temperature or climate is challenged.

Indeed if contradictory trends can occur over sustained periods, then there is no such thing as global temperature or climate - unless it is rescued by an auxiliary hypothesis which explains how contradictory sustained trends can occur in the context of a true underlying overall unidirectional trend - and this auxiliary hypothesis would need to be tested separately.

However, I strongly suspect that this kind of auxiliary hypothesis is de facto untestable in a context as complex and uncertain as global climate. Since confirmation would necessarily be prospective (not retrospective), the precision and period of future observation required to discriminate between rival complex hypotheses would make an auxiliary hypothesis designed to explain contradictory climate trends in practice undisprovable.

Only the simple theory of climate change being qualitatively reflected everywhere (absent specific known locally distorting factors, like a furnace or a new-grown city or the like) can be tested in a reasonable time frame. 

(ie. If climate is warming, everywhere warms, and vice versa - the sign of direction of change should be the same, although the quantitative change need not necessarily be to exactly the same extent.) 

*

Note added 19 March - I changed the title of this post from 'is cooling' to 'has cooled' because it is my thesis that humans cannot predict climate beyond saying something like 'next year will probably be similar to this year'. So, I can say, from three out of four exceptionally cold winters, that the climate has cooled; but I can't say whether or not this retrospective trend will continue - because neither I nor anybody else understands the cause of these (small) trends.

Monday 11 March 2013

Attitude to the sexual revolution is the single most decisive litmus test of Leftism

*

A positive attitude to the sexual revolution is the hallmark of Leftism, which trumps all other themes and unites disparate (and hostile) factions.

To be pro-the sexual revolution is not only the cornerstone of Marxists, Communists, Fascists, Socialists, Labour parties and Democrats; but is shared by mainstream Conservatives, Neo-Conservatives, Republicans; and by Anarchists and Libertarians; and by sex-worshipping neo-Nietzschean pseudo-reactionaries - such as those of the 'manosphere'.

To be pro-the sexual revolution is the nearest thing to a core value of the mass media; and of art both high-brow and low. 

This vast conglomeration is the Left alliance; it is modern local, national and international politics - united only by being pro-the sexual revolution: but this is enough.  

*

What is the sexual revolution?

Simply the divorce of sex from marriage and family.

Marriage and family are social institutions; but sex cut-off from ('liberated' from) marriage and family is (sooner or later) a monstrous, insatiable and self-stimulating greed for pleasure and distraction.

*

Attitude to the sexual revolution therefore marks the difference between those who are ultimately in favour of human society; and those who delight in its destruction (aka Leftists) who see social collapse as primarily an opportunity to feed their personal addictions; to use other people to make themselves feel good about themselves; to distract themselves with pleasure, and pleasure themselves with distraction.

*

 

Mega randomized clinical trials and their intrinsic flaws

*

Fundamental deficiencies in the megatrial methodology

Bruce G Charlton

Current Controlled Trials in Cardiovascular Medicine. 2001, 2: 2-7 

Abstract

The fundamental methodological deficiency of megatrials is deliberate reduction of experimental control in order to maximize recruitment and compliance of subjects. Hence, typical megatrials recruit pathologically and prognostically heterogeneous subjects, and protocols typically fail to exclude significant confounders. Therefore, most megatrials do not test a scientific hypothesis, nor are they informative about individual patients. The proper function of a megatrial is precise measurement of effect size for a therapeutic intervention. Valid megatrials can be designed only when simplification can be achieved without significantly affecting experimental control. Megatrials should be conducted only at the end of a long process of therapeutic development, and must always be designed and interpreted in the context of relevant scientific and clinical information.

Keywords: epidemiology; history; megatrial; methodology; randomized trial

Introduction

Megatrials are very large randomized controlled trials (RCTs) - usually recruiting thousands of subjects and usually multicentred - and their methodological hallmark is that recruitment criteria are highly inclusive, protocols are maximally simplified, and end points are unambiguous (eg mortality). Megatrials have been put forward - especially by the 'evidence-based-medicine movement' - as the criterion reference source of evidence, superior to any other method for measuring the effectiveness or effect size of medical interventions.

This aggrandizement of megatrials to a position of superiority is an error. I explore how it was that such a transparently ludicrous idea gained such wide currency and explicate some of the fundamental deficiencies of the megatrial methodology which mean that - in most cases - megatrials are highly prone to mislead. Properly understood, the results of large, simplified, randomized trials can be understood only against a background of a great deal of other information, especially information derived from more scientifically rigorous research methods.

 

Reasons for the supposed superiority of megatrials

How did the illusion of the superiority of megatrials come about? There are probably three main reasons - historical, managerial, and methodological.

1. Historical

When large randomized controlled trials emerged from the middle 1960s, it was as a methodology intended to come at the end of a long process of drug development [1]. For instance, tricyclic and monoamine-oxidase-inhibitor antidepressants were synthesized in the 1950s, and their toxicity, dosage, clinical properties, and side effects were elucidated almost wholly by means of clinical observations, animal studies, 'open' uncontrolled studies, and small, highly controlled trials [2]. Only after about a decade of worldwide clinical use was a large (by contemporary standards), placebo-controlled, comparison, randomized trial executed by the UK Medical Research Council (MRC), in 1965 - and even then, the dose of the monoamine-oxidase inhibitor chosen was too low. So, a great deal was already known about antidepressants before a large RCT was planned. It was already known that antidepressants worked - and the function of the trial was merely to estimate the magnitude of the effect size.

Nowadays, because of the widespread overvaluation of megatrials, the process of drug development has almost been turned upon its head. Instead of megatrials coming at the end of a long process of drug development, after a great deal of scientific information and clinical experience has accumulated, it is sometimes argued that drugs should not even be made available to patients until after megatrials have been completed. For instance, 1999 saw the National Institute for Clinical Excellence (NICE) delay the introduction of the anti-influenza agent Relenza® (zanamivir) with the excuse that there had been insufficient evidence from RCTs to justify clinical use, thus preventing the kind of detailed, practical, clinical evaluation that is actually a prerequisite to rigorous trial design.

It is not sufficiently appreciated that one cannot design an appropriate megatrial until one already knows a great deal about the drug. This prior knowledge is required to be able to select the right subjects, choose an optimal dose, and create a protocol that controls for distorting variables. If a megatrial is executed without such knowledge, then it will simplify where it ought to be controlling: eg patients will be recruited who are actually unsuitable for treatment, they will be given the trial drug in incorrect doses, patients taking interfering drugs will not be excluded, etc. Consequently, such premature megatrials will usually tend systematically to underestimate the effect size of a new drug.

2. Managerial - changes in research personnel

Before megatrials could become so widely and profoundly misunderstood, it was necessary that the statistical aspects of research should become wildly overvalued. Properly, statistics is a means to the end of scientific understanding [3] - and when studying medical interventions, the nature of scientific understanding could be termed 'clinical science' - an enterprise for which the qualifications would include knowledge of disease and experience of patients [1]. People with such qualifications would provide the basis for a leadership role in research into the effectiveness of drugs and other technologies.

Instead, recent decades have seen biostatisticians and epidemiologists rise to a position of primacy in the organization, funding, and refereeing of medical research - in other words, people whose knowledge of disease and patients in relation to any particular medical treatment is second-hand at best and nonexistent at worst.

The reason for this hegemony of the number-crunchers is not, of course, anything to do with their possessing scientific superiority, nor even a track record of achievement; but has a great deal to do with the needs of managerialism - a topic that lies beyond the scope of this essay [4].

3. Methodological - masking of clinical inapplicability by statistical precision

There are also methodological reasons behind the aggrandizement of megatrials. As therapy has advanced, clinicians have come to expect incremental, quantitative improvements in already effective interventions, rather than qualitative 'breakthroughs' and the development of wholly new treatment methods. This has led to demands for ever-increasing precision in the measurement of therapeutic effectiveness, as the concern has been expressed that the modest benefits of new treatment could be obscured by random error. Furthermore, when expected effect sizes are relatively small, it becomes increasingly difficult to disentangle primary therapeutic effects from confounding factors. Of course, where confounders (such as age, sex, severity of illness) are known, they can be controlled by selective recruitment. But selective recruitment tends to make trials small.

Megatrials appear to offer the ability to deal with these problems. Instead of controlling confounders by rigorous selection of subjects and tight protocols, confounding is dealt with by randomly allocating subjects between the comparison groups, and using sufficiently large numbers of subjects so that any confounders (including unknown ones) may be expected to balance each other out [5]. The large numbers of subjects also offer unprecedented discriminative power to obtain statistically precise measurements of the outcomes of treatment [6]. Even modest, stepwise increments of therapeutic progress could, in principle, be resolved by sufficiently large studies.

Resolving power, in a strictly statistical sense, is apparently limited only by the numbers of subjects in the trial - and very large numbers of patients can be recruited by using simple protocols in multiple research centres [6]. Analysis of megatrials requires comparison of the average outcome in each allocation group (ie by 'intention to treat') rather than by treatment received. This is necessitated by the absolute dependence upon randomization rather than rigorous protocols to deal with confounding [5]. So, in pursuit of precision, randomized trials have grown ever larger and simpler. More recently, there has been a fashion for pooling data from such trials to expand the number of subjects still further in a process called meta-analysis [7] - this can be considered an extension of the megatrial idea, with all its problems multiplied [8]. For instance, the results of meta-analyses differ among themselves and from RCT evidence, and may diverge from scientific and clinical knowledge of pharmacology and physiology [9].

The problem is that 'simplification' of protocol translates into scientific terms as deliberate reduction in the level of experimental control. This is employed with good intentions - in order to increase recruitment, consistency, and compliance [5], and is vital to the creation of huge databases from randomized subjects. However, as I have argued elsewhere, the strategy of expanding size by diminishing control is a methodological mistake [10]. Reduced experimental control inevitably means less informational content in a trial. At the absurd extreme, the ultimate megatrial would recruit an unselected population of anybody at all, and randomize subjects to a protocol that would not, however, necessarily bear any relation to what actually happened to the subject from then on. So long as the outcomes were analysed according to the protocol to which the subject had originally been randomized, then this would be statistically acceptable. The apparent basis for the mistake of deliberately reducing experimental rigour in megatrials seems to be an imagined, but unreal, tradeoff between rigour and size - perhaps resulting from the observation that small, rigorous trials and large, simple trials may have similar 'confidence interval' statistics [10]. Yet these methodologies are not equivalent: in science the protocol defines the experiment, and different protocols imply different studies examining different questions in different populations [5].

 

Assumptions behind the megatrial methodology

Megatrials could be defined as RCTs in which recruitment is the primary methodological imperative. The common assumption has been that with the advent of megatrials, clinicians now have an instrument that can provide estimates and comparisons of therapeutic effectiveness that are both clinically applicable and statistically precise. Widespread adoption of megatrials has been based upon the assumption that their results could be extrapolated beyond the immediate circumstances of the trial and used to determine, or at least substantially influence, clinical practice.

However, this question of generalizing from the average result of megatrials to individual patients has never been satisfactorily resolved. Many clinicians are aware of serious problems [11,12], and yet these problems have been largely ignored by the advocates of a trial-led approach to practice.

Extrapolation from megatrials to practice has been justified on the basis of several assertions. It has been assumed (if not argued) that high levels of experimental rigour are not important in RCTs because the randomization of large numbers of subjects compensates (in some undefined way) for lower levels of control. This is a mistaken argument based on a statistical confusion: large, poorly controlled trials may have a similar confidence interval to that in a small, well controlled trial (a large scatter divided by the square root of large numbers may be numerically equal to a smaller scatter divided by the square root of smaller numbers) - but this does not mean that the studies are equivalent [5]. The smaller, better-controlled study is superior. Different protocols mean a different experiment, and low control means less information. After all, if poor control were better than good control, scientists would never need to do experiments - control is of the essence of experiment.

Furthermore, it is routinely assumed that the average effect measured among the many thousands of patients in a megatrial group is also a measure of the probability of an intervention producing this same effect in an individual patient. In other words, it is assumed that the megatrial result and its confidence interval can serve as an estimate of the probability of a given outcome in an individual patient to whom the trial result might be applied.

This is not the case. Even when a megatrial population is representative of a clinical population (something very rarely achieved), when trial populations are heterogeneous average outcomes do not necessarily reflect probabilities in individuals. To take a fictional example: supposing a drug called 'Fluzap' shortens an illness by 5 days if that illness is influenza and if patients actually take the drug. Then suppose that the trial population also contains patients who do not have influenza (because of non-rigorous recruitment criteria) and also patients who (despite being randomized to 'Fluzap') do not take the drug - suppose that in such subjects, the drug 'Fluzap' has no effect. Then the average effect size for 'Fluzap' according to intention-to-treat analysis would be a value intermediate between zero and five - eg that 'Fluzap' shortened the episode of influenza by about a day. This trial result may be statistically acceptable, but it does not apply to any individual patient. The value of such a randomized trial as a guide to treatment is therefore somewhat questionable, and the mass dissemination of such a summary statistic through the professional and lay press would seem to be politically, rather than scientifically, motivated.
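The dilution arithmetic in this fictional 'Fluzap' example can be sketched in a few lines of Python (all counts and effect sizes below are assumed purely for illustration):

```python
# Intention-to-treat dilution, per the article's fictional 'Fluzap' example:
# the drug shortens influenza by 5 days, but only in subjects who truly
# have influenza AND actually take the drug.

def itt_average_effect(n_total, n_true_flu_compliers, effect_days):
    """Intention-to-treat average: total benefit spread over everyone randomized."""
    return n_true_flu_compliers * effect_days / n_total

# Suppose (hypothetically) 1000 subjects are randomized to 'Fluzap', but only
# 220 of them both have influenza and comply with treatment.
avg = itt_average_effect(1000, 220, 5.0)
print(f"ITT average shortening: {avg:.1f} days")  # 1.1 days

# The trial reports roughly a day of benefit - yet no individual patient
# experiences a 1-day effect: true responders gain 5 days, the rest gain 0.
```

The summary statistic is thus an average over a heterogeneous mixture, describing nobody in particular - which is the article's point.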

 

Confidence intervals - confidence trick?

The decline in scientific rigour associated with the mega-trial methodology has been disguised by the standard statistical displays used to express the outcome of megatrials. Megatrials typically quote the statistic called the 'confidence interval' (CI) as their summary estimate of therapeutic outcome; or else quote the average outcome for each protocol and a measure of the 'statistical significance' of any measured difference between averages.

But although the confidence interval has been promoted as an improvement on significance tests [13], it has serious problems when used for clinical purposes, and is not a useful summary statistic for determining practical applications of a trial. The confidence interval describes the limits within which the 'true' mean of a therapeutic trial can be considered to lie - with a quoted degree of probability and given certain rigorous (and seldom-met) statistical assumptions [14].

Clinicians need measures of outcome among individual patients in a trial, especially the nature and degree of variation in the outcome. The confidence interval simply does not tell the clinician what he or she needs to know in order to decide how useful the results of a megatrial would be for implementation in clinical practice. Average results and confidence intervals from megatrials conceal an enormous diversity among the results for individual subjects - for example, an average effect size for a drug is uninformative when there is huge variation between individuals.

When used to summarize large data sets, the confidence-interval statistic gives no readily apprehended indication of the scatter of patient outcomes, because it includes the square root of the number of patients as denominator (confidence interval equals standard deviation divided by square root of n) [15]. This creates the misleading impression that big studies are better, because simply increasing the number of patients will increase the divisor of the fraction, which will powerfully tend to reduce the size of the confidence interval when trials become 'mega' in size.

Consequently, the confidence interval will usually reduce as studies enlarge, although the scatter of outcomes (eg the standard deviation) may remain the same, or more probably will increase as a result of simplified protocols and poorer control.
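The way a confidence interval narrows with n while the scatter of individual outcomes stays fixed can be shown with a short sketch (the SD and sample sizes here are assumed for illustration):

```python
import math

def ci95_halfwidth(sd, n):
    """Approximate 95% CI half-width for a mean: ~1.96 * SD / sqrt(n)."""
    return 1.96 * sd / math.sqrt(n)

sd = 10.0  # scatter of individual patient outcomes - held constant (assumed)
for n in (100, 10_000, 1_000_000):
    print(f"n = {n:>9}: SD = {sd}, 95% CI half-width = {ci95_halfwidth(sd, n):.4f}")

# The CI half-width shrinks from ~2 to ~0.02 as n grows ten-thousand-fold,
# even though the patient-to-patient variation has not changed at all.
```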

The exceptionally narrow 'confidence intervals' generated by megatrials (and even more so by meta-analyses) are often misunderstood to mean that doctors can be very 'confident' that the trial estimates of therapeutic effectiveness are valid and accurate. This is untrue both in narrowly statistical and broadly clinical senses. In fact, the confidence interval per se gives no indication whatsoever of the precision of an estimate with regard to the individual subjects in a trial. Furthermore, the narrowness of a confidence interval does not have any necessary relation to the reality of a proposed causal relation, nor does it give any indication of the applicability of a trial result to another population. Indeed, since the confidence interval gives no guide to the equivalence of the populations under comparison, differences between trial results may be due to bias rather than causation [16].

So, narrow, nonoverlapping confidence intervals, which discriminate sharply between protocols in a statistical sense, may nevertheless be associated with qualitative variation between subjects such that a minority of patients are probably actively harmed by a treatment that benefits the majority [17].
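A minimal numerical sketch of this point, using an assumed effect distribution (80% of patients improve by 2 units, 20% worsen by 3 units - figures chosen only for illustration), shows a narrow CI around a positive average coexisting with an actively harmed minority:

```python
import math
import statistics

n = 10_000
# Assumed individual treatment effects: 8000 patients improve, 2000 worsen.
effects = [2.0] * 8_000 + [-3.0] * 2_000

mean = statistics.fmean(effects)          # average effect = +1.0
sd = statistics.pstdev(effects)           # individual scatter = 2.0
ci_halfwidth = 1.96 * sd / math.sqrt(n)   # ~0.039 - a very 'tight' interval

print(f"average effect = {mean:+.2f}, 95% CI ≈ ±{ci_halfwidth:.3f}")
print(f"patients harmed: {sum(e < 0 for e in effects)} of {n}")

# The trial reports a confidently positive average (+1.00 ± 0.04),
# yet one patient in five is made worse by the treatment.
```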

 

Measures of scatter needed for clinical interpretation

It would be more useful to the clinician if randomized trials were to display their results in terms of the scatter of patient outcomes, rather than averages. This may be approximated by a scattergram display of trial results, with each individual patient outcome represented as a dot. Such a display allows an estimate of experimental control as well as statistical precision, since poorly controlled studies will have very wide scatters of results with substantial overlaps between alternative protocols. The fact that such displays are almost never seen for megatrials suggests that they would be highly revealing of the scientifically slipshod methods routinely employed by such studies.

If this graphic display of all results is too unwieldy even for modern computerized graphics, a reasonable numerical approximation that gives the average outcome with a measure of scatter is also useful - for example, the mean and standard deviation, or the median with interquartile range [14]. These types of presentation allow the clinician to see at a glance, or at least swiftly calculate, what range of outcomes followed a given intervention in the trial, and therefore (all else being equal, and when proper standards of rigour and representativeness apply) the probability of a given outcome in an individual patient.

While the confidence-interval statistic will usually give a misleadingly clear-cut impression of any difference between the averages of two interventions being compared, a mean and standard deviation reveal the degree of overlap in results. When the confidence interval relates to an interval scale, it may indeed be possible to use the confidence interval to generate an approximate standard-deviation statistic. This is done on the basis that the 95% CI is (roughly) two 'standard-error-of-the-mean' (SEM) values above and below the mean [15]. The SEM is the standard deviation divided by the square root of n. Therefore, if the difference between the mean and the confidence limit is halved to give the SEM, and if the SEM is multiplied by the square root of n, this will yield the approximate standard deviation. The above calculation may be a worthwhile exercise, because it is often surprising to discover the enormous scatter of outcomes that lie hidden within a tight-looking confidence interval. However, most megatrials use proportional measures of outcome (eg percentage mortality rate, or 5-year survival), and these measures cannot be converted to standard deviations by the above method, or by any other convenient means.

Confidence intervals therefore have no readily comprehensible relation to confidence concerning outcomes - which is the variable of interest to clinicians. What is required instead of confidence intervals is a display, or numerical measure, of scatter that assists the practitioner in deciding the clinical importance that should be attached to 'statistically significant' differences between average results.

 

A false hierarchy of research methods leads to an uncritical attitude to RCTs

There is a widespread perception that RCTs are the 'gold standard' of clinical research (a hackneyed phrase). It is routinely stated that randomized trials are 'the best' evidence, followed by cohort studies, case-control studies, surveys, case series, and finally single case studies (quoted by Olkin [7]). This hierarchy of methods seems to have attained the status of unquestioned dogma. In other words, the belief is that RCTs are intrinsically superior to other forms of epidemiological or scientific study, and therefore offer results of greater validity than the alternatives.

To anyone with a scientific background, this idea of a hierarchy of methods is amazing nonsense, and belief in such a hierarchy constitutes conclusive evidence of scientific illiteracy. The validity of a piece of science is not determined by its method - as if gene sequencing were 'better than' electron microscopy! For example, contrary to the hierarchical dogma, individual case studies are not intrinsically inferior to group studies - they merely have different uses [18]. The great physiologist Claude Bernard pointed out many years ago that the averaging involved in group studies is a potentially misleading procedure that must be justified in each specific instance [19]. When case studies are performed as qualitative tests of a pre-existing explicit and detailed hypothetical model, they exemplify the highest standards of scientific rigour - each case serving as an independent test of the hypothesis [20,21]. Individual human case studies are frequently published in top scientific journals such as Nature and Science.

Validity is conferred not by the application of a method or technique, nor by the size of a study, nor even by the difficulty and expense of the study, but only by the degree of rigour (ie the level of experimental control) with which a given study is able to test a research question. Since mega-trials deliberately reduce the level of experimental control in order to maximize recruitment, this means that megatrial results invariably require very careful interpretation.

 

NNT - not necessarily true

The assumption just mentioned is embodied in that cherished evidence-based medicine (EBM) tool, the comparison of two interventions in terms of the 'number needed to treat', or NNT [22]. The NNT expresses the difference between the outcomes of two rival trial protocols in terms of how many patients must be treated for how long in order to prevent one adverse event. For instance, comparing beta-blocker with placebo in hypertension may yield an NNT of 13 patients treated for 5 years to prevent one stroke.
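The NNT arithmetic is simply the reciprocal of the absolute risk reduction between the two arms. A sketch with hypothetical 5-year stroke rates chosen to be consistent with the article's figure of 13:

```python
def nnt(control_event_rate, treated_event_rate):
    """Number needed to treat = 1 / absolute risk reduction."""
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        raise ValueError("no benefit at these event rates")
    return 1.0 / arr

# Assumed 5-year stroke rates: 15.7% on placebo vs 8.0% on beta-blocker.
print(f"NNT ≈ {nnt(0.157, 0.080):.0f}")  # ≈ 13

# The same relative effect applied to a lower-risk population gives a very
# different NNT - eg a baseline risk of 4% reduced to 2%:
print(f"NNT ≈ {nnt(0.040, 0.020):.0f}")  # ≈ 50
```

This sensitivity to the baseline risk of the population is exactly why an NNT from an unrepresentative trial population can mislead about the target population.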

However, the apparent simplicity and clarity of this information depends upon the clinical target population having the same risk-benefit profile as the randomized trial population. When the trial population is unrepresentative of the target population, the NNT will be an inaccurate estimate of effect size for the actual patients whose treatment is being considered. For instance, an elderly population may be more vulnerable to the adverse effects of a drug and less responsive to its therapeutic effect, to the point where an intervention that produces an average benefit in the young may be harmful in the old.

On top of this, the patients in a megatrial population are always prognostically heterogeneous, because the methodology uses deliberately simplified protocols designed to optimize recruitment rather than control - and meta-analyses are even more heterogeneous [3,8]. In a megatrial that shows an overall benefit, it is very probable that while the outcome for some patients will be improved by treatment, other patients will be made worse, and others will be unaffected. What this means is that even a representative megatrial (and such trials are exceedingly uncommon) cannot provide a risk estimate of what will happen to individual patients who are allocated the same protocol. Trials on unrepresentative populations may, of course, be actively misleading. The NNT generated by a megatrial does not in itself, therefore, provide guidance for clinical management. The NNT is Not Necessarily True! [22].

 

Conclusion

Megatrials, like other kinds of epidemiological study, should be considered primarily as methods for precise measurement, rather than as a scientific method for generating or testing hypotheses [10]. Precise measurement of the effect size of medical interventions such as drugs should be attempted only when a great deal is already known about the drug and its clinical actions. When megatrials are conducted without sufficient background scientific and clinical knowledge, they will be measuring mainly artefacts. Unless - for instance - a trial is performed on pathologically and prognostically homogeneous populations, and uses well-controlled management protocols, the apparent precision of the result is more spurious than real.

Megatrials have become an unassailable 'gold standard' in some quarters. And this situation has become self-perpetuating, since the results of megatrials have become de facto untestable. Because megatrials merely measure the magnitude of an effect rather than test hypotheses, the result of a megatrial is not itself a hypothesis, and cannot be tested using other methods. A megatrial of, say, an antihypertensive drug measures the comparative effect of that drug under the circumstances of the trial. Assuming that no calculation mistakes have been made, the result of a megatrial is neither right nor wrong: it is just a measurement.

People often talk of megatrials as if they proved or disproved the hypothesis that a drug 'works'. But far from being the final word on the effectiveness of a therapy, a megatrial is inherently incapable of answering that question. And once the error has been made of assuming that a statistical measurement can test a hypothesis, the mistake becomes uncorrectable, because the level of statistical precision in a megatrial is greater than that attainable by other methods.

In such an environment of compounded error, it should not really be a source of surprise that statistical considerations utterly overwhelm scientific knowledge and clinical understanding, and we end up with the lunacy of regarding statisticians and epidemiologists as the final arbiters of medical decision-making. Health care becomes merely a matter of managers providing systems to 'implement' whatever the number-crunching technocrats tell them is supported by 'the best evidence' [4]. The methodological deficiencies of megatrials make them ideally suited to providing an intellectual underpinning for that world of join-the-dots medicine which seems just around the corner.

 

References

  1. Charlton BG: Clinical research methods for the new millennium. J Eval Clin Pract 1999, 5:251-263.
  2. Healy D: The Antidepressant Era. Cambridge, MA: Harvard University Press, 1998.
  3. Charlton BG: Statistical malpractice. J Roy Coll Physicians London 1996, 30:112-114.
  4. Charlton BG: The new management of scientific knowledge: a change in direction with profound implications. In NICE, CHI and the NHS Reforms: Enabling Excellence or Imposing Control? Edited by Miles A, Hampton JR, Hurwitz B. London: Aesculapius Medical Press, 2000, 13-32.
  5. Charlton BG: Mega-trials: methodological issues and clinical implications. J Roy Coll Physicians London 2000, 29:96-100.
  6. Yusuf S, Collins R, Peto R: Why do we need some large, simple randomized trials? Statistics Med 1984, 3:409-420.
  7. Olkin I: Meta-analysis: reconciling the results of independent studies. Statistics Med 1995, 14:457-472.
  8. Charlton BG: The uses and abuses of meta-analysis. Fam Pract 1996, 13:397-401.
  9. Robertson JIS: Which antihypertensive classes have been shown to be beneficial? What are their benefits? A critique of hypertension treatment trials. Cardiovasc Drugs Ther 14:357-366.
  10. Charlton BG: Megatrials are based on a methodological mistake. Brit J Gen Pract 1996, 46:429-431.
  11. Julian D: Trials and tribulations. Cardiovasc Res 1994, 28:598-603.
  12. Hampton JR: Evidence-based medicine, practice variations and clinical freedom. J Eval Clin Pract 1997, 3:123-131.
  13. Gardner MJ: Statistics with Confidence: Confidence Intervals and Statistical Guidelines. London: British Medical Association, 1989.
  14. Bradford Hill AB, Hill ID: Bradford Hill's Principles of Medical Statistics. London: Edward Arnold, 1991.
  15. Kirkwood BR: Essentials of Medical Statistics. Oxford: Blackwell, 1988.
  16. Charlton BG: The scope and nature of epidemiology. J Clin Epidemiol 1996, 49:623-626.
  17. Horwitz RI, Singer BH, Makuch, Viscoli CM: Can treatment that is helpful on average be harmful to some patients? A study of the conflicting information-needs of clinical inquiry and drug regulation. J Clin Epidemiol 1996, 49:395-400.
  18. Charlton BG, Walston F: Individual case studies in clinical research. J Eval Clin Pract 1998, 4:147-155.
  19. Bernard C: An Introduction to the Study of Experimental Medicine. New York: Dover, 1957 (first published 1865).
  20. Marshall JC, Newcombe F: Putative problems and pure progress in neuropsychological single-case studies. J Clin Neuropsychol 1984, 6:65-70.
  21. Shallice T: From Neuropsychology to Mental Structure. Cambridge: Cambridge University Press, 1988.
  22. Charlton BG: The future of clinical research: from megatrials towards methodological rigour and representative sampling. J Eval Clin Pract 1996, 2:159-169.


Sunday 10 March 2013

Implications of the reality of Man's free agency


*

The following is adapted from a comment I made on the blog of WmJas

https://wmjas.wordpress.com/2013/03/10/philosophically-anarchic-vs-dysfunctional/#comment-992 

*

Once the decision has been made that free agency is necessary and real, then various consequences are implied which I think do not usually tend to be followed up.

In fact, one of the things I find most impressive about Joseph Smith's Restored Christianity, is the way in which he - step by step, and not without faltering, but with great determination and completeness - follows up the implications of human free agency for our fundamental status in the Christian world.

(In what follows I use God to refer to the one God the Father, creator of Heaven and Earth; and lower case god to refer to the many 'Sons of God' of the same 'kind' as Jesus Christ - to which status Christians believe humans will be resurrected. This use of lower case god is mainstream Christian and occurs frequently in the Bible - perhaps sometimes also referring to the angels, whose status in relation to God and Man is scripturally ambiguous.)

*


It is hard to make sense of free agency without also acknowledging that humans are of the same 'kind' as God - are minor or flawed/ corrupted gods, but of the same general kind.

Free agency is such an astonishing thing, implying such qualitatively superior powers on the part of humans, that something of this sort seems to be implied (I'm not saying it is entailed, but it is at least potentially implied).

*


Because free agency cannot work in a void: it goes together with knowledge/ intelligence and reason, which both enable learning from experience and provide the basis for free agency.

And for the 'triad' of free agency, intelligence and reason to be able to operate under widely varied and often hostile mortal conditions, and for learning to occur; seems to imply an autonomy from these mortal conditions. 

It seems to imply the autonomy of the soul (or unique personal spirit).

*


And, in turn, such autonomy seems to imply 'eternal' existence - in the sense of pre-existence of the soul (before mortal life) as well as its persistence after death - otherwise (it seems!) the free agent soul would be subject-to the conditions of mortal life, and therefore unfree.

*


But while mainstream Christian thought has tended, often, to regard incarnation of the soul and the added factor of the body as yet another disadvantage which limits agency (the body's needs and weaknesses are seen as a constraint on agency) - JS saw the body as an enhancement of agency, by (as it were) concentrating the diffuse matter of the soul/ spirit into a form that is capable of controlling matter in a proto-god-like manner (en route to full godhood).

*


I have extrapolated, but the main point was the first - that the reality of free agency is not just god-like, but evidence of god-status - and not just potentially, but here and now, actually, in mortal life.

Which implies that we are already Sons of God here and now on earth, that is our status - but at a developmental stage which is yet incomplete and un-perfected, at least partially-corrupted, and indeed preliminary.

(And, because of free agency: capable of rejecting further development or indeed denying our Son of God status; we can freely choose to sell ourselves into slavery, and thereby to ally with the other spirits, the fallen Sons of God, who have already done so.)

*


A full recognition of the reality/ necessity of free agency at the core of Man, therefore leads onto many other plausible inferences - not compelling entailments, since they can be and are usually denied; but inferences which seem to flow naturally-enough from the structure and inclinations of the human mind.

And if the human mind is regarded as capable of free agency (and has knowledge and reason, thus can learn) then what results is a higher estimate of Man's capability and autonomy, hence mortal Man's status, role and evaluative ability - than in most versions of mainstream Christianity.


*

Saturday 9 March 2013

When words fail

*

I have found that I simply cannot discuss many things that happen nowadays - beyond describing them.

So, if I am pointing-out to somebody the latest, hourly, example of beyond-belief politically correct insanity - I mean sheer indefensible lunacy of the kind that happens everywhere and all the time - then anything more I say beyond the basic description actually detracts from the strength of the statement.

It seems that at some levels of extremity, the attempt to explain craziness serves to dilute the craziness.

Don't get drawn-in!

Say nothing, hold your face impassive - or shake the head, roll the eyes - but don't try to explain why craziness is crazy.

If you are talking to a sane person there is no need; and if you are talking to an insane person, it won't work.

Any attempt to explain to a madman why they are mad will lead them to infer that their madness is debatable.

Some things are beyond rational debate - and many such things are now mainstream public discourse.

*

Friday 8 March 2013

The Good News, and the bad news

*

The Good News is that Christ's work has won us all salvation - and all that we have to do is accept it.

*

The bad news is that even one, single, solitary unrepented sin may suffice to induce us to reject salvation.

*

Thanks to Christ, we are no longer slaves to sin, no longer doomed by our sins; and it is only unrepented sin that is the problem.

However, unrepented sin is fatal.

*

Modernity in the West has been set-up as an environment to ensure that sins are unrepented.

*

Only one is needed.

*

That sin could be a moral sin - and the sexual revolution has given us encouragement to commit a wide range of sexual sins, and - much more importantly - to deny that they are sins, to pretend that they are virtues - therefore not to repent them.

But that sin could be a sin of dishonesty - the denial of a truth, or the propagation of a lie. Only one is needed, so long as we do not repent - so long as we convince ourselves that the lie is a higher form of truthfulness.

Or a sin against beauty: an unrepented act of uglification - a mutilation not merely unrepented but advertized as pretended beauty (or truth, or virtue).

*

The Good News is that salvation is as easy as ever it was; but the bad news is that rejection of salvation is easier than it ever has been before in the whole of human history.

*

Is THIS BLOG part of the mass media?

*

On the whole, yes of course, since it could potentially be read by a billion people, which is 'mass'.

In actuality, it is read by a few hundreds a day, at most - probably (I mean actually read) - about the same number I could lecture to without technological aids (if the lecture theatre was well designed).

*

But the main determinant of whether this blog counts as 'mass' is in the behaviour of the reader.

Is reading this blog part of a large trawl through other blogs and websites; or is it a specific, deliberate and much more focused thing?

If reading this blog is just one element in trawling the web, or marinating-yourself in the internet, then it is indeed part of the mass media - even if only six other people are reading it.

*

Are you plugged-into the mass media?

*

I sometimes think that the main difference between pseudo-Right wingers (such as libertarians and neo-conservatives and 'manosphere' types) and real reactionaries is indicated by whether they are plugged-into the media, or not.

On this analysis, if your life is dominated by the mass media you are a creature of the Left - and this is not affected by the nature of the media content.

A self-identified 'reactionary' who spends several hours per day engaged with 'reactionary' news sources, magazines, blogs and the like - is actually a Leftist.

*

Thursday 7 March 2013

Real understanding versus procedural pseudo-understanding: a collage of sentences

*

The typical modern educated person - by educated I mean someone with advanced educational certification - has zero understanding of complex concepts; including the specific concepts which his educational certification purports to validate.

Modern educational certification is based on evaluations (one can hardly call them examinations) which are so procedural, and so difficult to fail except on procedural grounds, that they are incapable of evaluating understanding.

*

Only another human being, in sufficiently dense and sustained human contact, is capable of evaluating understanding.

What we have instead is the evaluation of collages of sentences - evaluated one sentence at a time, in terms of the accuracy of reproduction.

*

In multiple choice examinations, students are required to match up sentences in a very explicit way - but in extended writings such as essays and dissertations and theses, the principle is the same: these extended writings (or indeed conversations) are collages of sentences - individual factoids learned and assembled according to prescribed procedure.

*

It is not that such evaluations are easy - many people cannot do them; simply that they are grossly misleading in terms of over-estimating understanding.

These evaluations primarily test adherence to procedure; and to adhere to a procedure requires approximately one standard deviation less intelligence than to understand that procedure.

*

But if our educational evaluations were to become genuine tests of understanding, then not only would nearly all students fail nearly all examinations - but so would their 'teachers'.

All this being a consequence of the decline of general intelligence combined with the inheritance of a highly complex society and culture bequeathed by earlier (and more intelligent and much more creative) generations.

http://iqpersonalitygenius.blogspot.co.uk/2012/11/the-over-promoted-society.html 

*

Asking the right questions about the mass media: positive and negative agendas

*

People approach the question of the media and politics from the wrong end - they ask how politics - or more specifically politicians - influence, bias and control the mass media - which is a classic example of asking the opposite question from that which reflects reality.

The proper question to ask is how the media influence politics; because the mass media is the ruling social system in modernity.

*

The mass media control not just politics and civil administration, but law, religion, the military, education, science and the arts - all the major social systems.

Of course, control is not absolute - control never is; and of course there are more-or-less successful efforts by other social systems reciprocally to control the media (even a slave has some control over his master); but nonetheless the net direction of control runs from the media to act upon the other social systems.

*

What is the agenda of the mass media? What is it trying to do?

There is a positive agenda - which comes from specific persons in the media - and there is a negative one.

*

The media has multiple positive agendas - at a fine level of detail, almost as many as there are people working within the media - as well as agendas reflecting the back-influence of other social systems on the media.

In the past half century the positive agendas of those working in the media have become significantly aligned by selective recruitment, retention and promotion practices - that is by political correctness, which ideology comes-from and is enforced-by the mass media. 

*

But it is the negative, and implicit, media agenda which is primary.

The negative agenda is that insofar as the media expands its share of attention (time and effort) then the media displaces other social systems.

Already the media is the primary mode of evaluation of public communications - as the mass media increases its 'market share' of people's lives, so it enhances its domination of those lives.

*

Already the communications of the mass media dominate observation and experience (that which is observed and experienced but which is not in the mass media does not really exist; that which is in the mass media is - somehow - real and important even when we have never seen nor experienced it).

This is the authority of the mass media. Anyone who has attempted to argue against the trend of the mass media will know how powerful this is. Knowledge, direct personal experience, evidence, logic - all and any such are met by intense skepticism and moral rejection when they contradict mainstream media perspectives.

The media perspective is 'reality' for most people, on most subjects, most of the time: now that is authority.  

*

The negative agenda of the mass media is that the mass media control all agendas more-and-more - that the mass media becomes decisive in all societal, public communication; that all other social agendas be assimilated to the media: and that societal and public communications displace personal observation and experience such that we live inside the mass media.

*

Wednesday 6 March 2013

Defending a clear, strong and simple understanding of God as loving Father

*

There is a rich set of comments and my responses still running on the post from a few days ago in which I am exploring how to explain suffering (and evil) in the context of a clear, strong and simple concept of God as our loving Heavenly Father - something which would readily be understandable by a child.

http://charltonteaching.blogspot.co.uk/2013/03/explaining-eternal-goodness-speculative.html

*

Why is the mass media intrinsically anti-Good? Because it necessarily displaces religion

*

The mass media is an enigma - of the kind that happens when we ask exactly the wrong questions.

And, of course, it is exactly the mass media which specializes in getting people to ask exactly the wrong questions.

*

I have been trying to unravel this stuff for several years - for example in this piece from my pro-modernization, libertarian, pre-Christian era - http://medicalhypotheses.blogspot.co.uk/2007/07/modern-mass-media-enables-social.html.

That piece is, of course, wrong both fundamentally and superficially - but the basic insight was correct that the mass media replaces religion.

*

(Religion may be pro- or anti-Good and to varying degrees, according to the religion - but the mass media is necessarily anti-Good.)

*

So, if we accept the McLuhanite insight that the medium is the message - so that it is not the contents, but the fact of the mass media which is primary (and the fact that so many people are engaged by it for so many hours per day)...

And add to it the observation that there is a reciprocal relationship between the mass media and religion - and as the mass media grows, there is a commensurate destruction of religion...

Then we have the basis of an explanation for what the mass media is doing, and why it is intrinsically anti-Good.

*

The confusion comes because to be anti-Good is not the same as to be pro-evil.

So much of the content of the modern mass media in the West is indeed overtly pro-evil that we neglect to notice that this is mostly a phenomenon of the post mid-1960s era, growing in strength over the past several decades.

Early mass media was equally anti-Good in its effect - but the content was often anti-evil, or pro-Good - so that this was hard to discern.

*

The anti-Good effect of the mass media therefore essentially comes from the fact that it displaces religion as the social evaluation system.

The specific evaluations of the mass media may variously be pro-evil, or even pro-good - but it is the fact that the mass media has become the major societal evaluation system which is primary.

Once the mass media has become the primary system of evaluation, then a line has been crossed (this was crossed in the mid-1960s in the West).

*

So, while the specific media evaluations can and do vary, this is not the phenomenon of primary significance.

It is that the mass media necessarily displaces religion as the mechanism of societal evaluation which is primary. 

What matters essentially is that in the modern West it is the mass media which makes and communicates (or does not communicate) all significant social evaluations: and that is why the nature of the mass media is to be anti-Good.

*

Tuesday 5 March 2013

Why did mobile phones and social networking turn out to be mere extensions and amplifiers of the mass media?

*

It seems clear that the spread in usage of mobile phones and internet social networking websites of the Facebook type has been an exacerbation, a continuation of the trend, of the mass media  - and an extension and deepening of secular hedonism and alienation.

*

Yet, in principle, if we had not experienced the opposite, it might be supposed that by keeping people in touch more of the time, the influence of the mass media would be held-back - that by people-interacting-with-people for more of the time, and with more people, the ideology of the mass media would be blocked.

*

(Just as so many people - including myself - used to suppose that the internet would combat the domination by 'official' news media, to facilitate an informed society where everybody discovered the real facts behind the propaganda, and formed their own opinions. Ha! - How utterly and completely wrong can anyone be!)  

*

This is obviously not the case, and the interpersonal media are instead serving as an addiction and a distraction: an addictive distraction.

In theory, the new interpersonal media should strengthen marriage and family relations by keeping the members in-touch; in practice these media are at the heart of a society zealously engaged in the coercive destruction of families.

*

The main consequence of pervasive social communication media is that people are out of touch with their environment for more of the time, that they never self-remember, that they are prevented from experiencing the life they are in.

In the recent past, a person walking alone might be stimulated to look around, listen, smell, feel the air flowing past them - be where they are. Not now.

They almost never experience the here and the now. 

*

Once again, the prime insight of Marshall McLuhan has been confirmed - that the primary effect of media is indifferent to content.

The fact of interpersonal mass media has an effect which quite overwhelms the specifics of interpersonal information exchange via these media.

*

So, it hardly matters what is said, or heard, or seen via these media; the major consequence of the fact of the medium is vastly more powerful than the specifics of  communication.

*

This explains how it is that our society has been able to absorb such incredible changes as the internet and ubiquitous mobile phones and vast social networking websites while - at a fundamental level - having been unaffected by them.

And without any significant overall economic benefits - indeed, increasingly obvious deep damage to economic productivity in the sense that Western societies have simply given-up even trying to run an economy.

The trends in place before the internet have continued. The advent and growth of the internet was imperceptible at a mass level of analysis.  

*

We do not control these media; they control us.

*

So interpersonal communications media are part of the mass media.

And the mass media is the primary domain of evil in our society, here and now.

Not only and not mostly in the sense of being loaded with accidental and deliberately corrupting communications of evil; but in the primary sense that the addictive distraction of the mass media is anti-good, is a turning-away-from reality (and therefore God).

*

It is the fact of the medium which is the essence - and this fact is a fact: engagement can be moderated but participation is mandatory.

*

Monday 4 March 2013

More on Tolkien's niggler-status

*

http://notionclubpapers.blogspot.co.uk/2013/03/why-is-it-important-to-recognize-that.html

*

The analgesic properties of tubular elastic bandages

*

I have previously mentioned that I am somewhat plagued by osteoarthritis (mostly) in the knees - and the main problem is pain; and the main problem with this pain is that due to side effects (and side effects of the drugs used to combat these side effects) I cannot take any of the effective analgesics (NSAIDs or Aspirin).

The only effective and tolerable analgesic that I have so far discovered is to wear 'tubigrip' - tubular elasticated bandages - around the knee joints...

(and extending down to the ankle, to reduce the risk of deep vein thrombosis if I were to have only knee bandages constricting blood supply above the calf).

*

These tubigrip bandages offer substantial analgesic benefits (i.e. they significantly reduce pain both at rest and when walking), and do so instantly (as soon as the bandages are applied) - which implies that, to be so rapid, the analgesic effect must be via nerves and not (for example) by reducing-swelling or providing support to the joint.

*

The likely mechanism by which tubigrip bandages work is by counter-irritation, or the Gate Theory - in other words the tubular bandages stimulate superficial touch receptors in the skin (which have rapid-conducting, myelinated nerve fibres); and these skin sensations then block (or gate) the more slowly-conducting (unmyelinated fibres) pain stimuli from within the knee joint.

The principle is the same as rubbing a limb - quickly and lightly - which you have just bumped and which you know will start to hurt in a couple of seconds time. But if you can start rubbing the skin over the injury immediately, then the touch sensation will travel faster to the central nervous system than the pain sensation (because the myelinated touch fibres beat the unmyelinated pain fibres); and therefore the pain is, to some extent, blocked and reduced. 
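The speed difference the Gate Theory relies on is large enough to check on the back of an envelope. Taking textbook-range conduction velocities as assumptions (myelinated A-beta touch fibres roughly 50 m/s, unmyelinated C pain fibres roughly 1 m/s), and an assumed signal path of about a metre:

```python
def arrival_time_ms(distance_m, conduction_velocity_m_s):
    """Time for a nerve signal to travel a given distance, in milliseconds."""
    return 1000.0 * distance_m / conduction_velocity_m_s

# Roughly 1 metre from a bumped shin to the central nervous system:
print(arrival_time_ms(1.0, 50.0))  # touch signal: 20 ms
print(arrival_time_ms(1.0, 1.0))   # pain signal: 1000 ms - about a second later
```

On these assumed figures the touch signal arrives some fifty times sooner than the pain signal - consistent with the couple of seconds' grace in which rubbing a bumped limb can blunt the coming pain.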

*

I have in fact been using tubular bandages on sore joints, including elbows, wrists and ankles, for many years - but used to suppose that they worked by giving support and reducing swelling.

If bandages worked by increasing joint-support and reducing swelling, this would predict that stronger and more compressive bandages would be more effective than lighter and less compressive bandages.

However, I have found the lighter and less-compressive bandages to be more effective (as well as more comfortable). Hence my conclusion that the counter-irritation/ Gate Theory mechanism is most likely.

*

If I am correct about the analgesic mechanism of tubigrip bandages, then the principle could be applied to devise other methods of touch-receptor stimulation as analgesic manoeuvres to treat joint pain: ranging from topical agents (creams) that produce some kind of 'irritation'/ stimulation, through temperature change, to perhaps transcutaneous electrical stimulation.



Sunday 3 March 2013

Explaining eternal goodness - a speculative story/ analogy

*

Yesterday's post on theosis and free will elicited two accounts (in the comments) of how it is that after death and resurrection a human characterized by free will can nonetheless choose only good: the idea that the choice of good after death functions rather like a freely-subscribed oath binding us for eternity, and the idea that the last choice as we pass from time into eternity becomes eternal.

*

The fact that needs to be explained is that somehow a creature of free will can become such as to choose only good.

But the answer must be such that we can understand the purpose or necessity of mortal incarnate life in this world.  And I do not think either of the above answers help us to understand this.



I have been thinking along somewhat different lines.

What follows is a mixture of conviction and rational elaboration - regard it as a fantasy if you wish.

*

If Man has a pre-mortal spirit existence of the soul, then the choice of good or evil could have been made with full foreknowledge of the nature and consequences of the choice, such that the decision was of necessity permanent.

(This is the orthodox account of why fallen angels are irreversibly damned - because - unlike mortal men - they knew exactly what they were doing, and the consequences, and chose damnation.)

Then the spirits of those Men who chose good were enhanced with bodies - but mortal bodies, on earth. The reason was that incarnation in a mortal body provides a unique and essential experience.

Thus, mortal life and death is ultimately an experience, not a test.

*

For the human soul (or spirit), even living very briefly in a body and then dying is a necessary experience to qualify us to become Sons of God.

So that for a soul even to live very briefly as a baby, even perhaps living only as an embryo in the womb, is an experience of incarnate mortality; which is of such great value to the soul, that the difference between a spirit who has lived and died as a mortal and a spirit who has lived only as an un-incarnated spirit is a qualitative difference.

The spirit who has lived a mortal life and died, and then been resurrected, is in a qualitatively higher state than is attainable by a spirit lacking this experience.

*

But free agency continues throughout as part of the essence of being a Man.

So Men who have chosen good - a choice which is irrevocable (because made with full knowledge) - then choose to embark upon a mortal life in which there is only partial and distorted knowledge, and during which they regain the freedom to choose evil.

Because, during mortal life they are subject to temptation from those pre-mortal spirits who originally chose evil.

So mortal life is a hazard; a risk taken freely by the pre-mortal spirit in hope and expectation of attaining a higher state; but which opens the human soul to the possibility of losing everything.

*

Yet, if the benefits of mortal life and death can be attained by living briefly as an embryo or baby, then why should humans live longer and suffer the corruptions and temptations of childhood and adult life, of disease and senility?

The answer would presumably be that the benefits of mortality are qualitative with respect to the preceding spirit state, but quantitative with respect to one another - that to survive the hazards of a longer life without yielding to the temptation to choose damnation is a higher thing.

So, when a man dies (at any age: an unborn baby, an old and sick adult) then the innocence of the less experienced human has a reward that is certain - but a lower place in Heaven; while the endurance of the old and sick is rewarded by a higher place in Heaven (assuming that the offer is accepted, not rejected), because the result is a higher being - a more complex resurrected Man capable of a higher role - just as a mortal adult is potentially more complex than a child.

*

In terms of free will the sequence is:

1. Pre-mortal spirits (souls) with free agency irrevocably choose good - irrevocably so long as they remain in the spiritual state (those spirits who choose evil are damned, which means excluded from the following - they remain spirits).

2. Those spirits who choose good may become incarnate mortals with free agency.

3. The experience of being an incarnate mortal, however briefly, is of qualitative value or benefit - it enhances the pre-mortal spirit beyond his former state and in a way otherwise impossible.

4. But the experience of being an incarnate mortal re-opens the possibility of rejecting good/ God and choosing evil - so that a pre-mortal spirit who could not have chosen damnation as an incarnate mortal again becomes capable of choosing damnation.

5. However, the default of mortal life is salvation - due to the once-and-for-always (past, present, future) work of Christ's atonement - his death and resurrection. Thus Men will be saved by default, and eternal damnation is only by active rejection of salvation.

(This is the safeguard of mortal incarnate life, without which it would not just be a hazard but a hopeless gamble against overwhelming odds. It is only by Christ's work - in cleansing us of the sin from innumerable bad choices - that there is any possibility of the incarnate soul coming through an extended mortal life with a hope of salvation - because that hope has been made an assurance of salvation, unless it is rejected by choice.)

6. At the end of mortal life, those who have-not-rejected salvation (a negative definition) will return to the spiritual realm outwith the earth (Heaven) to await ultimate resurrection in the same but perfected bodies they inhabited during mortal life - at which point they have a higher state than they would otherwise have had without the experience of mortal life.

So for the good souls, the judgment is a matter of being allocated between the 'many mansions' of Heaven - by analogy their job, role, authority.

(Those good spirits who have not been through the experience of mortal life simply remained as they were, at a lower hierarchical level - remembering that all levels are blessed.)

7. At the end of mortal life, those who reject salvation, go to hell (or pre-hell, perhaps) to await ultimate resurrection and the final judgment; at which point decision for- or against-good must be made in light of its full consequences; and those who actively-choose evil instead of good are then - in resurrected form - sequestered in that permanent hell which they have chosen.

*

This may seem, and probably is, an over-elaborated, over-complex and fanciful tale - but parts of it seem to answer some of my most pressing problems in a simple and pictorial fashion.

In particular, I find the explanation of incarnate mortal life as primarily an experience (and only secondarily as a test) to be valuable - especially because it makes sense both of the fact of mortal incarnate existence, and also of the fact that so many humans have got no further than being fetuses or babies, and many others do not reach adulthood - on the basis that it is potentially of great value to have been an incarnate mortal human at all, however briefly; while on the other hand extended life is potentially of additional value.

Incarnate mortal life is thereby conceptualized as a high risk, high reward venture - freely chosen.

*

In terms of choice, the above seems to regard choice in an extended fashion - that as free agents interacting with the world, we are making choices all the time - only a very few of which (or perhaps none of which) are conscious.

Yet since Men are intrinsically free agents, we necessarily make these choices - it is intrinsic to our relation with the world.

Thus even a baby makes choices. However, these baby choices do not have the same status as conscious, deliberate choice. Because salvation is a default, damnation must be chosen - and a baby cannot choose damnation - thus a baby is Innocent.

*

The above account is tentative, conjectural, no doubt garbled and contains errors (as do all human things, but maybe more than usual!), and its theological basis will be obvious to some; but I think it is pretty-much compatible (if corrections were made) - or at least not-impossible to reconcile - with standard Christianity and Scriptural interpretation; if not currently, and in all denominations, then across the sweep of Christian history...

In other words: take it, modify it, or leave it - as you will.

*

Saturday 2 March 2013

Theosis and free agency

*

Man has free agency, and this is intrinsic and eternal.

Man uses his agency to make choices, but some of these free choices will be sinful.

The work of Christ has wiped us clean from these sins.  Now we get the benefits of our good choices, and are forgiven our evil choices. Our prospects are transformed.

*

Yet, the problem remains of how it could ever be that creatures with free agency could ever cease to make evil choices.

The slate has been wiped clean, and is wiped clean continually; but why would Man ever cease from writing sin over it again and again - even in Heaven?

That there would be no temptation in Heaven is an inadequate answer, since Man would not have fallen if there had been no temptation. The assumption is that mortal life, death and resurrection puts us into a better position than Adam...

*

The ultimate question is something like this:

How can man be transformed such that he will always choose good?

It must always be a choice, and the choice is (and must be) free; and yet the free choice must always be of good.

*

Man must become such that he will always resist temptation, will always shun sin, will always choose God instead of pride.

This matter is - I think - almost wholly neglected; and yet I believe the question is valid and vital.

Somehow we must explain how men become good gods - gods who always choose good; and how our mortal life is a necessary part of this process.

This happens; but we lack a clear, simple explanation of how it happens. 

*

The story we tell ourselves of human purpose, meaning and relations with God should tell us how this happens. Yet at present it does not. There remains work to be done...

*

Friday 1 March 2013

Christian Revival in China and Africa

*

I recently wrote about the nature and causes of religious revival

http://charltonteaching.blogspot.co.uk/2013/02/what-happens-in-religious-revival.html

*

In revival, people are sensitized to the Holy Ghost, but also to evil spiritual influences.

Even though both sides are more polarized in a revival, since humans have free agency they are able to choose, and they may choose evil instead of good.

Or, to change the metaphor - in a religious revival, the veil between this world and others becomes thinner and more permeable: good things can be glimpsed and experienced more easily, but only at the cost of doing the same for bad things.

*

One consequence of this understanding is that a Christian revival in the West now would be unlikely to lead to good outcomes - since we are so very deep in spiritual corruption and purposive destruction of the good, any revival would be most likely to amplify and strengthen the forces of evil.

(But, nonetheless, despite such a tiny chance of good eventuating; a revival may be our only chance of arresting our media-distracted sleepwalk into hell.)

*

However, there seems to have been, and perhaps may still be, Christian revival outside of the West in China and Africa.

http://charltonteaching.blogspot.co.uk/2012/07/70-million-christians-in-china-and.html 

And African Bishops have taken moral leadership within the Anglican communion:

http://charltonteaching.blogspot.co.uk/2013/01/things-are-coming-to-point-in-church-of.html

*

From what I can gather, albeit I am not an expert on the topic but have merely picked-up hints here and there, Christianity in China and Africa has the character of revival - with 'gifts of the Holy Ghost' much in evidence: miracles, healings and the like.

For Western Christians this means that good things are at present probably more likely to come from China or Africa than from our own societies; but also that these are likely to be mixed-up with, and eventually perhaps overwhelmed by, bad things.

Discernment will be needed - as always...

*