Tag Archives: Science

Random Refutations

“(1) Parmenides-Leucippus: Leucippus takes the existence of motion as a partial refutation of Parmenides’s theory that the world is full and motionless. This leads to the theory of ‘atoms and the void’. It is the foundation of atomic theory.

(2) Galileo refutes Aristotle’s theory of motion: this leads to the foundation of the theory of acceleration, and later of Newtonian forces. Also, Galileo takes the moons of Jupiter and the phases of Venus as a refutation of Ptolemy, and thus as empirical support of the rival theory of Copernicus.

(3) Torricelli (and predecessors): the refutation of ‘nature abhors a vacuum’. This prepares for a mechanistic world view.

(4) Kepler’s refutation of the hypothesis of circular motion upheld till then (even by Tycho and Galileo), leads to Kepler’s laws and so to Newton’s theory.

(5) Lavoisier’s refutation of the phlogiston theory leads to modern chemistry.

(6) The falsification of Newton’s theory of light (Young’s two-slit experiment). This leads to the Young-Fresnel theory of light. The velocity of light in moving water is another refutation. It prepares for special relativity.

(7) Oersted’s experiment is interpreted by Faraday as a refutation of the universal theory of Newtonian central forces and thus leads to the Faraday-Maxwell field theory.

(8) Atomic theory: the atomicity of the atom is refuted by the Thomson electron. This leads to the electromagnetic theory of matter, and, in time, to the rise of electronics. See Einstein’s and Weyl’s attempts at a monistic (‘unified’) theory of gravitation and electromagnetics.

(9) Michelson’s experiment (1881-1887-1902, etc.) leads to Lorentz’s Versuch einer Theorie der electrischen und optischen Erscheinungen in bewegten Körpern (1895: see §89). Lorentz’s book was crucially important to Einstein, who alluded to it twice in §9 of his relativity paper of 1905. (Einstein himself did not regard the Michelson experiment as very important.) Einstein’s special relativity theory is (a) a development of the formalism founded by Lorentz and (b) a different—that is, relativistic—interpretation of that formalism. There is no crucial experiment so far to decide between Lorentz’s and Einstein’s interpretations; but if we have to adopt action at a distance (non-locality: see Quantum Theory and the Schism in Physics, Vol. III of the Postscript, Preface 1982), then we would have to return to Lorentz.

Incidentally, it took years before physicists began to come to some agreement about the importance of Michelson’s experiments: I do not contend that falsifications are usually accepted at once (see the preceding section), nor even that they are immediately recognised as potential falsifications.

(10) The ‘chance-discoveries’ of Roentgen and of Becquerel refuted certain (unconsciously held) expectations; especially Becquerel’s expectations. They had, of course, revolutionary consequences.

(11) Wilhelm Wien’s (partially) successful theory of black body radiation conflicted with the (partially) also very successful theories of Sir James Jeans and Lord Rayleigh. The refutation by Lummer and Pringsheim of the radiation formula of Rayleigh and Jeans, together with Wien’s work, leads to Planck’s quantum theory (see L.Sc.D., p. 108). In this, Planck refutes his own theory, the absolutistic interpretation of the entropy law, as opposed to a probabilistic interpretation similar to Boltzmann’s.

(12) Philipp Lenard’s experiments concerning the photoelectric effect conflicted, as Lenard himself insisted, with what was to be expected from Maxwell’s theory. They led to Einstein’s theory of light-quanta or photons (which were of course also in conflict with Maxwell), and thus, much later, to particle-wave dualism.

(13) The refutation of the Mach-Ostwald anti-atomistic and phenomenalistic theory of matter: Einstein’s great paper on Brownian motion of 1905 suggested that Brownian motion may be interpreted as a refutation of this theory. Thus this paper did much to establish the reality of molecules and atoms.

(14) Rutherford’s refutation of the vortex model of the atom. This leads directly to Bohr’s 1913 theory of the hydrogen atom, and thus, in the end, to quantum mechanics.

(15) Rutherford’s refutation (in 1919) of the theory that chemical elements cannot be changed artificially (though they may disintegrate spontaneously).

(16) The theory of Bohr, Kramers and Slater (see L.Sc.D., pp. 250, 243): this theory was refuted by Compton and Simon. The refutation leads almost at once to the Heisenberg-Born-Jordan quantum mechanics.

(17) Schrödinger’s interpretation of his (and de Broglie’s) theory is refuted by the statistical interpretation of matter waves (experiments of Davisson and Germer, and of George Thomson, for instance). This leads to Born’s statistical interpretation.

(18) Anderson’s discovery of the positron (1932) refutes a lot: the theory of two elementary particles — protons and electrons — is refuted; conservation of particles is refuted; and Dirac’s own original interpretation of his predicted positive particles (he thought they were protons) is refuted. Some theoretical work of about 1930-31 is thereby corroborated.

(19) The electrical theory of matter elaborated by Einstein and Weyl, and held implicitly — and at any rate, pursued — by Einstein to the end of his life (since he interpreted the unified field theory as a theory of two fields, gravitation and electromagnetics), is refuted by the neutron and by Yukawa’s theory of nuclear forces: the Yukawa Meson. This gives rise to the theory of the nucleus.
(20) The refutation of parity conservation. (See Allan Franklin, Stud. Hist. Philos. Sci. 10, 1979, p. 201.)”
That is an interesting list of scientific refutations provided by Popper himself. Popper was right to suggest that the new theories highlighted above were not direct results of the refutations. The refutations merely created new problem situations which stimulated imaginative and critical thought. But this initial stage of conceiving a new theory is not susceptible to logical analysis. “The question how it happens that a new idea occurs to a man … may be of great interest to empirical psychology; but it is irrelevant to the logical analysis of scientific knowledge” (see Popper, K., The Logic of Scientific Discovery, 1934, p. 7). That is because the latter is concerned not with quid facti but with quid juris.

Testing for prostate cancer… Is that necessary?

From Dance with Chance blog:

The recent publication of two large-scale studies of prostate screening in the US and Europe attracted our attention. After all, (a) this was something we wrote about in our book, and (b) we are – all three of us – men of a certain age.

In the US study, 38,343 men received annual PSA (Prostate Specific Antigen) blood tests while 38,350 men were assigned to a control group. After 7 years of follow-up, the rate of death from prostate cancer was 2.0 per 10,000 person-years in the screening group – that is, 50 deaths. In the control group it was 1.7 per 10,000 person-years, or 44 deaths. Not a particularly significant difference, then.

The European study involved some 180,000 men. This time, however, the screening group was billed as having a 20.7% improvement in the survival rate over the control group. This seems impressive and a good reason for continuing with PSA screening – in Europe at least.

So are European doctors better at screening than their US colleagues? Or is it just that numbers can be deceptive?

There were 214 deaths from prostate cancer in the screening group and 326 in the control group. This means 112 fewer deaths in the screening group, or a relative improvement of 20.7% over the control group – found by dividing 112 by 540 (540 = 214 + 326).

In absolute terms, however, the improvement is much less impressive, since there were 72,890 subjects in the screening group and 89,353 in the control group. This means 7.1 fewer deaths in the screening group than in the control group, for every 10,000 people. If we consider that the study lasted some nine years, this is a tiny improvement of less than one person per year.
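The contrast between the two ways of presenting the same trial can be checked with a few lines of arithmetic. A minimal sketch – the death counts and group sizes are the ones quoted above; the variable names are mine:

```python
# Deaths and group sizes from the European screening study, as quoted above.
screen_deaths, screen_n = 214, 72_890
control_deaths, control_n = 326, 89_353

# Relative improvement as the post computes it: fewer deaths / total deaths.
fewer = control_deaths - screen_deaths                    # 112
relative = fewer / (screen_deaths + control_deaths)       # 112 / 540
print(f"relative improvement: {relative:.1%}")            # 20.7%

# Absolute difference: deaths per 10,000 subjects in each group.
screen_rate = screen_deaths / screen_n * 10_000
control_rate = control_deaths / control_n * 10_000
absolute = control_rate - screen_rate
print(f"absolute difference: {absolute:.1f} per 10,000")  # 7.1 per 10,000
```

The same 112 deaths look dramatic as a 20.7% relative figure and modest as 7.1 per 10,000 in absolute terms, which is precisely the post’s point about deceptive numbers.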


Popper II: The attributes of bad theories (and some examples)

“After the collapse of the Austrian Empire there had been a revolution in Austria: the air was full of revolutionary slogans and ideas, and new and often wild theories. Among the theories which interested me Einstein’s theory of relativity was no doubt by far the most important. Three others were Marx’s theory of history, Freud’s psycho-analysis, and Alfred Adler’s so-called ‘individual psychology’.

The three other theories I have mentioned were also widely discussed among students at that time. I myself happened to come into personal contact with Alfred Adler, and even to co-operate with him in his social work among the children and young people in the working-class districts of Vienna where he had established social guidance clinics. It was during the summer of 1919 that I began to feel more and more dissatisfied with these three theories – the Marxist theory of history, psycho-analysis, and individual psychology; and I began to feel dubious about their claims to scientific status. My problem perhaps first took the simple form, ‘What is wrong with Marxism, psycho-analysis, and individual psychology? Why are they so different from physical theories, from Newton’s theory, and especially from the theory of relativity?


A new way to explain explanation (David Deutsch)

David Deutsch is a physicist at the University of Oxford and the author of the very interesting book The Fabric of Reality: Towards a Theory of Everything, which I highly recommend. David Deutsch is a member of the Quantum Computation and Cryptography Research Group at the Clarendon Laboratory and is considered an authority on the theory of parallel universes.


Why Most Published Medical Research Findings Are False


Most medical researchers blindly adhere to the popular dogma of p-values. According to this dogma, statistical significance is declared on the basis of a p-value alone (often a p-value below 0.05). To the practitioner of this religion, statistics refers solely to the investigation of such values. However, the probability that an association is true, given a statistically significant finding, depends not only on the estimated p-value but also on the prior probability of the association being real, on the research bias (the combination of various design, data, analysis, and presentation factors that tend to produce research findings when they should not be produced), and on the statistical power of the test. More specifically, it can be seen that the positive predictive value (PPV) of a test (i.e. the post-study probability that the association is true) equals*:

PPV(\alpha, \beta, R, u)= \frac{(1-\beta)R + u \beta R}{R- \beta R + \alpha + u - u \alpha + u \beta R}

where R is the ratio of the number of “true relationships” to “no relationships” among those tested in the field, α is the Type I error rate, β is the Type II error rate (and hence 1-β is the “power” of the test) and u the research bias. Hence, according to the equation above (assuming at this point insignificant bias), a research finding is more likely to be true than false iff (1 – β)R > α.
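The formula is straightforward to turn into code. A minimal sketch – the function name `ppv` and the example parameter values are mine; the expression is the one given above:

```python
def ppv(alpha, beta, R, u=0.0):
    """Post-study probability that a claimed association is true.

    alpha: Type I error rate; beta: Type II error rate (power = 1 - beta);
    R: pre-study odds of a true relationship in the field; u: research bias.
    """
    numerator = (1 - beta) * R + u * beta * R
    denominator = R - beta * R + alpha + u - u * alpha + u * beta * R
    return numerator / denominator

# No bias, conventional alpha = 0.05, power = 0.8, even pre-study odds (R = 1):
print(ppv(0.05, 0.2, 1.0))     # ~0.94: such a finding is very likely true
# Long pre-study odds (R = 0.01): the same significant test is mostly wrong.
print(ppv(0.05, 0.2, 0.01))    # ~0.14
# PPV crosses 0.5 exactly where (1 - beta) * R == alpha:
print(ppv(0.05, 0.2, 0.0625))  # 0.5, since 0.8 * 0.0625 == 0.05
```

The third call illustrates the cut-off stated above: with no bias, a finding is more likely true than false exactly when (1 – β)R exceeds α.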

The graphs below highlight the relationship between the variables. As we can easily observe (click graphs to zoom in), the higher the R and the lower the Type II error, the higher the PPV. The red surface corresponds to the zero research bias case, while the green and the yellow correspond to u=0.2 and u=0.6. The blue plane corresponds to PPV = 0.5, i.e. the cut-off positive predictive value. The multicoloured floor of the graph indicates the levels of β and R (for u=0, 0.2, 0.6) for which research findings are more probable than not.


Mental tunnels in Medical Practice: Their implicit effect on clinical judgement

An excerpt from Massimo Piattelli-Palmarini’s book (which I strongly recommend) “Inevitable Illusions: How Mistakes of Reason Rule Our Mind”:

The seeds of doubt about this “ideal” and the legitimacy of classical theory were sown when it began to be seen that the supposedly rational subjects – those who would react rationally to decision making – were in fact not only quantitatively but also qualitatively different from real subjects. What brought this home in striking fashion was the massive development of computational equipment in the decision-making process. A single famous example makes this plain.

In 1957, L. B. Lusted, a clinical researcher at the National Institute of Health, and R. S. Ledley, a dentist at the National Bureau of Standards, sought to automate on computers, and thus improve, the decision-making process by which doctors, with clinical data at their disposal, made their diagnoses. Their approach was about as classical as you could find. They based themselves on classical logic in the strictest of ways (that is, on the so-called tables of logical functions, such as negation, conjunction, disjunction, and the conditional “if . . . then”) and on a hierarchy of hypotheses and subhypotheses depicted in flow charts, the arrows or “directions” of which logically linked, in a connected graph of increasing details, the probabilistically weighted hypotheses. They almost immediately realized that these automated diagnoses of theirs gave rather different results from those that the best clinicians might have made from the same data fed to the computer. The discrepancy between real subjects and ideal subjects in this classical case of ‘judgment under uncertainty’ emerged with dramatic force. The dilemma that surfaced was whether it would be more rational to follow the conclusions of the best clinical minds or those of the computer.

A few years earlier, in 1954, the University of Minnesota psychologist Paul E. Meehl had published an explosive article, “Clinical versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence.” Meehl’s thesis was that statistical prediction was more reliable than intuitive prediction, even that of the best doctors. According to Meehl’s data (which were based on an impressive body of research), the results of psychological tests analyzed by the computer were better able to predict outcomes (e.g., who might give up his studies or quit college, who might make a good pilot, who might fall back into crime, who might attempt suicide) than the personal judgments of eminent professional psychologists.
