
What's wrong with relativism?

Forum: April 1998

Physicists have criticized sociologists of science for the way that they analyse science. Harry Collins - one of their victims - defends the sociology of scientific knowledge


In his article last year, the Belgian physicist Jean Bricmont separated out some of the distinct strands in "science studies" and tried to provide a considered critique of sociology and history of scientific knowledge. He saw that there is a difference between "philosophical relativism" and "methodological relativism". He grasped that the former view, which says that the truth of a proposition depends on who interprets it, is a perfectly tenable philosophical position, even though it has little leverage on the world. And he saw that methodological relativism - impartial assessment of how knowledge develops - is the key idea for sociology of scientific knowledge (SSK).

It was also good to see that Bricmont - unlike so many others - did not treat an accusation of "relativism" as an argument in itself. In contrast, he tried to explore what he sees as the faults of methodological relativism. In the context of the "science wars", this is good progress, which makes it worthwhile to go over some of the arguments on which SSK was founded. In doing so, I will point out that scientific truth is somewhat more complicated than it is usually taken to be - particularly in the short term.

How much science should one know?

Bricmont's complaint was that historians and sociologists of science do not know enough science. He said that they need to know more. Practitioners of SSK believe that it is important to know as much science as possible about the cases they study. Sometimes one cannot learn enough. For example, after carrying out some thirteen tape-recorded interviews on the topic of the theory of amorphous semiconductors, I concluded that I could not understand enough of the science to do the sociology; I abandoned the study. On the other hand, in fields in which I feel more at home, I check my writings carefully with respondents to make sure there are no serious scientific errors. This is normal practice within SSK.

If sociologists of scientific knowledge cannot learn to talk science at some kind of interesting level with their respondents, they should choose another field. But there is another way of looking at this. Methodological relativism means the sociologist puts on hold all appeals to terms like "truth" and "the facts". This is because what counts as "the truth" or "the facts" in the case under study is typically contested. Prescribing methodological relativism is another way of telling the analyst to avoid being wiser than the scientists themselves. In historical case studies, this means that we have to try to forget how things turned out. To know more than the scientists who were involved is to know too much science.

For example, because I know that special relativity is right, I know that the Michelson-Morley experiment in 1887 should have given a result of zero ether-drift. Similarly, I know that when, in the 1930s, Dayton Miller - using what was then thought to be the most sensitive interferometer yet built - won a prize from the American Association for the Advancement of Science (AAAS) for finding an ether-drift of 11 km s⁻¹, he was doing something wrong. The principle of methodological relativism says: "When you ask the question 'Why did most people choose to believe the Michelson-Morley result rather than the Miller result?', you must not include as any part of your answer 'because it was true' or 'because special relativity is true'."

Michelson, Morley and Miller did not know these things. Michelson and Morley knew nothing about relativity, and Michelson, it seems, thought that there might be something wrong with his experiment because it had failed to detect the Earth's movement through the ether. Miller thought that special relativity was wrong. Furthermore, those other scientists who were making up their minds about whom to believe did not know these things either - at least not in the way we know them now. Had they known what we know now, Miller would have thrown out his own result, and the AAAS would not have given him a prize. In sum, to explain the outcome of an argument you must not include the outcome in the explanation, because this leads to circularity. And this is what methodological relativism is about. To repeat, in the history of science it is sometimes a matter of trying to know less science rather than more.

To go back to Bricmont's argument, he says that to answer this kind of question properly one needs to know enough science to understand the scientific reasons for believing in Michelson-Morley rather than Miller. How much science is this? It is certainly more science than Miller knew, because Miller got it wrong! So, according to Bricmont, it is not that the sociologist has to know as much physics as a research physicist before doing sociological analyses of physics, or as much microbiology as a research microbiologist before doing sociological analyses of microbiology, and so forth: the sociologist has to know more. This is absurd. In passing, it is worth noting that if experiment is the key, then Miller was in a much better position to reach a conclusion than anyone who had not actually done an experiment for themselves. Miller had better knowledge of what these experiments involved than any commentator. It is sociologically interesting that, nevertheless, the opinions of scientists who watch from the sidelines tend to be much stronger than the views of those involved at the research front of a difficult area of science. The relationship between the views of insiders and outsiders is often the opposite of what one might expect.

Now let me go over the argument again for a more contemporary example. In 1989 it was claimed that room-temperature, resonant-bar, gravitational-wave detectors saw events that correlated with supernova 1987A. But in the very paper that announced this finding, M Aglietta and co-workers said that if our current understanding was correct, the energy seen by the detectors was equivalent to the complete conversion of 2400 solar masses into gravitational waves (1989 Il Nuovo Cimento 12C 1 75). The authors agreed that this was incredible, but nevertheless thought they should report what they had found in print in case something odd was going on. Nearly everyone else thought that the result was wrong, and a critical paper was published that tried to show that it was the outcome of inadvertent statistical massage (1995 Phys. Rev. D 51 2644). Last year, in an internal report from the University of Rome La Sapienza, the original authors rejected the criticism.
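To see why the authors themselves called the figure incredible, a back-of-envelope check of my own may help (the 2400 solar masses comes from the paper; the solar-mass value and the roughly 20 solar-mass progenitor are standard figures, not from the original article):

$$E \approx 2400\,M_\odot c^2 \approx 2400 \times (2 \times 10^{30}\,\mathrm{kg}) \times (3 \times 10^{8}\,\mathrm{m\,s^{-1}})^2 \approx 4 \times 10^{50}\,\mathrm{J}$$

Since the progenitor star of SN1987A had a mass of only about 20 solar masses, the implied output is more than a hundred times the total rest energy of the entire star - which is why the claim strained belief even for those who made it.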

The easy solution for the sociologist studying a dispute like this (in this case, me) is to side with the "big guns", and say that the SN1987A findings were wrong because too much energy was involved, and that the accidental misuse of statistics was the culprit. There is nothing terribly difficult to understand here, but the scientists who put the claims forward continue to stand by them. Thus for me to take the easy option would imply that either I am morally bankrupt, possess the aptly named gift of prescience or am a better physicist than many of the physicists I am studying. Now, that would justify a science war! Once more, the crucial thing for the sociologist is to avoid claiming to know too much science. What the sociologist must do is ignore mainstream opinion and minority opinion alike, and stick to the question of how most people were persuaded to go one way rather than another. Setting the science aside, one is led to ask a different kind of question. For example, how did the particular pattern of publication come about and what was its influence likely to be? Physical Review refused to publish the original paper before it was accepted by Il Nuovo Cimento. But Physical Review still published the criticism, then refused the rebuttal. This is noteworthy because, often, criticisms of disputed claims in recent gravitational radiation research have never been seen in print but have been confined to the informal networks.

Short term, long term and history

To the superficial glance this kind of history may look like gossip, but it is the wider patterns that are important. As a very first step, one can see from cases like this that the peer-review system is segmented, and that not all claims are treated in the same way. What follows is that various groups of researchers and observers are exposed to different cross-sections of debate. Furthermore, there is a systematic difference between what insiders and outsiders get to see. In this case one might guess that outsiders were considered a more important group than usual, and that is why the criticisms were published. In a full case study, the work of relating the pattern of argument to the wider context would now begin. How much difference these patterns make in the very long term is sometimes hard to see; all human activity is deeply social, but, that said, it is the details that are interesting and important. (And it is vital to take into account the deeply social nature of human activity if the limitations of non-social entities, such as intelligent machines, are to be understood.) In the short term, the relationship between the details of the debate and the conclusions people reach is more evident than it is in the long term. The "short term" is important and can be remarkably long. In the case of the experiments relating to the constancy of the speed of light, the short term was about half a century; for the direct detection of gravitational waves it is 30 years and counting.

In saying that the way scientists come to agree on scientific truth is somewhat more complicated than it is usually taken to be, one is not saying that science is flawed, or shoddy, or should be replaced by something better. The only thing that science cannot live up to is the idealized notion of the scientific method. Experienced research scientists need little convincing of this. Even Lewis Wolpert, the biologist, has said: "...scientists must make an assessment of the reliability of experiments. One of the reasons for going to meetings is to meet the scientists in one's field so that one can form an opinion of them and judge their work". The idea that SSK attacks science may result from mistaking one kind of history for another. Most history of science is written for the scientific profession and it is meant, quite properly, to attribute credit for scientific success: this kind of "professional history" gives a sense of what and who is deserving. But this kind of history has limited use when it comes to understanding scientific knowledge-making because it concentrates exclusively on success. To use it for deeper purposes would be like trying to understand the economy by concentrating exclusively on the activities of millionaires. "Interpretative history and sociology of science", on the other hand, does not use success as a sorting rule. Some of the critics of SSK have, perhaps, taken interpretative history to be an attack on the way honour is distributed within professional history - it is not. Bricmont began his piece with a quotation from my and Trevor Pinch's book The Golem: What Everyone Should Know About Science (1993 Cambridge University Press pp 144-145): "Scientists at the research front cannot settle their disagreements through better experimentation, more knowledge, more advanced theories, or clearer thinking."

In the new edition of The Golem, which will be published later this year, the word "disagreements" has been changed to "deep disagreements", and this quotation is discussed at length.

Consider the deep disagreement about SN1987A discussed above. Observations, better experimentation, more knowledge, more advanced theories and clearer thinking have not settled the argument - at least, not to the satisfaction of all parties. What happens in deep disputes like this is summed up in the grim Planck dictum: scientists do not give up their disputed ideas; they only die. The quotation also expresses the philosophical truism that all scientific claims require interpretation - their meaning is never self-evident. Interpretation does not follow automatically from the data, the theories or the logic. Interpretation is the prerogative of the scientific community. Finally, setting philosophical and intellectual issues aside, the problems of day-to-day life make it more important to understand the "short term" life of science than its long-term life. In more public forums - such as courtrooms - it is hard to cope with the "short term" disagreements among scientists and technologists to which we are continually exposed. The trouble is that the popular image of science is of a kind of conveyor belt for agreement; disagreement is taken to imply incompetence, or bias, or political interference. If one can show that disagreement is found within the best of the hardest sciences, it will cease to be seen as a symptom of a pathology. Experts who disagree are not to be distrusted; disagreement often accompanies virtuosity. On the other hand, disagreement will continue to be seen as damaging to science's credibility so long as we continue to live with the dangerous, idealized models of science and technology that treat the short term as an aberrant phase within a perfectible activity.

About the author

Harry Collins is head of the Centre for the Study of Knowledge, Expertise and Science at the University of Wales, Cardiff, UK