
How Far Can We Trust Science?


Science in itself appears to me neutral, that is to say, it increases men’s power whether for good or for evil.
– Bertrand Russell (from The Autobiography of Bertrand Russell, 1914-1944 (1968), Vol. 2, Letter to W. W. Norton, 27 January, 1931).

What is Science? That is about as readily answerable a question as ‘What is Art?’, and could invite a similarly lengthy exegesis. As to whether or not it should be trusted, well, that rather depends on the kind of Science under discussion – just as it would if the same challenge were applied to Art. Is Science what scientists tell us it is? Is their research funded by a pharmaceutical company, with a vested interest in the outcomes of their labours? Will their universities’ coffers be swelled by producing what their institutions’ benefactors wish them to find? ‘It’s not an exact science’ is a cliché which trips lazily off the tongue, in relation to many a discipline. But it can conceivably be extended to ‘Science isn’t an exact science.’

This opening paragraph is a suitably unsubtle illustration of the paranoiac mindset, most readily associated with right-wing conspiracy theorists, and most recently made manifest by COVID scepticism: anti-vaxxers, mask refuseniks, restriction flouters. Such largely unfounded suspicions also extend to questioning the reality or severity of the threat posed to the planet by climate change (usually for entirely self-serving motives). But there is a more nuanced argument to be made here. As Arthur Koestler’s The Sleepwalkers: A History of Man’s Changing Vision of the Universe (1959) argues, the breaking of paradigms is essential in order to create new ones. People, scientists included, cling to cherished old beliefs with such love and attachment that they refuse to see what is false in their theories and what is true in the new theories which will replace them. After all, geocentric models of the cosmos, codified by Ptolemy in the second century AD, held sway from the Ancient Greeks until well into the sixteenth century, before Copernicus, Kepler, Galileo and Newton came along, nervously positing the heliocentric conception of our corner of the universe.

This point was developed further a few years after the publication of Koestler’s influential tome, by historian of science Thomas Kuhn in The Structure of Scientific Revolutions (1962), in which the concept of ‘paradigm shift’ came to the fore. Kuhn’s insistence that such shifts were mélanges of sociology, enthusiasm and scientific promise, but not logically determinate procedures, caused something of an uproar in scientific circles at the time. For some commentators his book introduced a realistic humanism into the core of Science, while for others the nobility of Science was tarnished by Kuhn’s positing of an irrational element at the heart of Science’s greatest achievements.

Koestler’s book was also a major influence on Irish novelist John Banville’s so-called ‘Science tetralogy’: Doctor Copernicus (1976), Kepler (1981), The Newton Letter (1982) and Mefisto (1986). A recurring theme in these narratives is the correlation between scientific discoveries and artistic inspiration, with scientific progress often depending upon blind ‘leaps of faith’. (One thinks of poor schoolteacher Johannes Kepler, struck by the proverbial bolt of lightning, ‘trumpeting juicily into his handkerchief’ in front of a classroom of bored boys, thinking ‘I will live forever.’) For Banville, all scientific explanations of the world and existence in it – and perhaps all artistic depictions too – merely ‘save the phenomena’; that is, they account for our perceptions, but rarely delve into what we cannot (yet) perceive. This is classic phenomenology, which has been practised in various guises for centuries, but came into its own in the early 20th century in the works of Husserl, Heidegger, Sartre, Merleau-Ponty and others.

None of the foregoing is made any easier to unknot if one considers that when it comes to Science, the majority of the population (myself included) have little idea of what they are actually talking about. As C.P. Snow observed in The Two Cultures and the Scientific Revolution (1959):

A good many times I have been present at gatherings of people who, by the standards of the traditional culture, are thought highly educated and who have with considerable gusto been expressing their incredulity at the illiteracy of scientists. Once or twice I have been provoked and have asked the company how many of them could describe the Second Law of Thermodynamics. The response was cold: it was also negative. Yet I was asking something which is the scientific equivalent of: Have you read a work of Shakespeare’s? I now believe that if I had asked an even simpler question – such as, What do you mean by mass, or acceleration, which is the scientific equivalent of saying, Can you read? – not more than one in ten of the highly educated would have felt that I was speaking the same language. So the great edifice of modern physics goes up, and the majority of the cleverest people in the western world have about as much insight into it as their neolithic ancestors would have had.
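
(For what it is worth, and lest this essay fall at Snow’s first hurdle: one standard statement of the Second Law of Thermodynamics is simply that the entropy of an isolated system never decreases over time – in symbols, ΔS ≥ 0 – which is why heat flows unaided from hot bodies to cold ones and never the other way around.)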

Latterly, in Continental Philosophy: A Very Short Introduction (2001), Simon Critchley suggests:

Snow diagnosed the loss of a common culture and the emergence of two distinct cultures: those represented by scientists on the one hand and those Snow termed ‘literary intellectuals’ on the other. If the former are in favour of social reform and progress through science, technology and industry, then intellectuals are what Snow terms ‘natural Luddites’ in their understanding of and sympathy for advanced industrial society. In Mill’s terms, the division is between Benthamites and Coleridgeans.

In his opening address at the Munich Security Conference in January 2014, the Estonian president Toomas Hendrik Ilves said that the current problems related to security and freedom in cyberspace are the culmination of the absence of dialogue between these ‘Two Cultures’:

Today, bereft of understanding of fundamental issues and writings in the development of liberal democracy, computer geeks devise ever better ways to track people… simply because they can and it’s cool. Humanists on the other hand do not understand the underlying technology and are convinced, for example, that tracking meta-data means the government reads their emails.

Artists are characterised as wildly unpredictable tricksters, while scientists are framed as boring, calculating nerds. Neither misrepresentation is helpful. As a corollary, most people think they can in some way ‘do art’ and ‘be creative’, while also merely taking Science on trust, just as they take (or took) religion on faith. We may have the experience of using technology and social media every day, but few of us have any meaningful grasp of how it works. More prosaically, how many of us could wire our own house – even if we were legally permitted to do so?

Kepler (1571–1630), along with Galileo and Isaac Newton, was one of the founders of what we nowadays call Science. In Kepler’s time, and prior to it, those who practised Science were known as natural philosophers, and theirs was largely a ‘pure’ discipline in which intellectual speculation was paramount and technology played only a small part – although Galileo was quick to point out the practical uses of the telescope in, for instance, seafaring, land surveying and, of course, military strategising. Kepler’s three laws of planetary motion paved the way for Newton’s revolutionary celestial physics. Indeed, Kepler’s first law, which declares that the planets move not in circular but in elliptical orbits, was one of the boldest and most profound scientific propositions ever put forward: men, and – more often – women, had been burned at the stake for less. By way of illustration, as Bertolt Brecht’s play Life of Galileo (written in 1938–39) dramatises, the eminent former professor of Padua was summoned to Rome for interrogation by the Inquisition and, threatened with torture, recanted his teachings and spent the remainder of his life under house arrest, watched over by a priest. His astronomical observations had strongly supported Copernicus’ heliocentric model of the solar system, which ran counter to popular belief, Aristotelian physics and the established doctrine of the Roman Catholic Church. When doubters quoted scripture and Aristotle to him, Galileo pleaded with them to look in his telescope and trust the observations of their eyes; naturally, they refused. As a good Marxist, Brecht advocates the theory of technological determinism (technological progress determines social change), which is reflected in the telescope (a technological change) being the root of scientific progress and hence social unrest. Questions about motivations for academic pursuits are also often raised in the play, with Galileo seeking knowledge for knowledge’s sake, while his supporters are more focused on monetising his discoveries through star charts and industry applications. There is a tension between Galileo’s pure love of science and his more worldly, avaricious sponsors, who only fund and protect his research because they wish to profit from it.
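
For readers who want the mathematical content of that boldness spelled out – a standard modern formulation, not taken from Koestler or Banville – Kepler’s three laws can be stated compactly:

r(θ) = a(1 − e²) / (1 + e·cos θ),   dA/dt = constant,   T² ∝ a³

The first traces an ellipse with the Sun at one focus (semi-major axis a, eccentricity e); the second says that the line from Sun to planet sweeps out equal areas in equal times; the third relates the square of a planet’s orbital period T to the cube of its mean distance a – and it was from this last relation, in particular, that Newton was able to show that gravity must weaken with the square of distance.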

These days, the preponderance of popular debate about Science centres on computer science, specifically information technology, and concomitant fears that Artificial Intelligence (hereinafter referred to as ‘AI’) is taking over the world, posing a threat to our democracies, or even our very conceptions of humanity – or as it is almost always more narcissistically cast, ‘Our way of life.’ The Cambridge Analytica data-harvesting scandal of 2018, in which the data analytics firm that worked with Donald Trump’s election team and the winning Brexit campaign appropriated millions of Facebook profiles of U.S. voters, is certainly to be taken very seriously indeed. However, social media platforms – even ‘legacy’ ones – will undoubtedly have to pay more than lip service to improving privacy and security, if only to continue to attract venture capital and advertising revenue, and thus keep the shareholders happy. Facebook, Twitter and Instagram, etc. are about maximising profits, by whatever means necessary. Therefore, it would be more perspicacious to look for the human element in these data breaches, rather than blame the technology itself. Such scaremongering claims as that by Israeli historian and philosopher Yuval Noah Harari, in an article in The Economist (April 28th, 2023) under the headline ‘AI has hacked the operating system of human civilisation’, seem to me to be all wild assertion and little evidence. As a recent delicious hoax perpetrated on the op-ed pages of The Irish Times (concerning fake tan and cultural appropriation) neatly demonstrated, almost all problems with computers and AI-generated content are facilitated by human error and stupidity. All of us live under systems of control – political, financial, social, technological – over which we have very little, if any, agency. Even if we could do something meaningfully efficacious about the identity theft which takes place every time we log on to our computers, it is unlikely that we possess enough personal initiative to do so. In this regard, the chaos theory of modern (mis)communications is mirrored by the babble of literary, musical and visual modernism. After all, you could just stop using social media altogether, had you but sufficient willpower. Few of us have the courage to go completely off grid. Moreover, lest we forget, most statistical analysis puts internet access at around 64.6% of the world’s population, which means that over a third of mankind have never ‘surfed the web’. First World problems, eh?

The Frankensteinian trope of the Mad Scientist being overpowered by his invention has long been a mainstay of that most underrated of genres, science fiction – a consideration of which might shed more light on this problem, rather than limiting discussion solely to scientific fact. From relatively schlocky items such as Alex Proyas’ film I, Robot (2004) (which fails dismally to capture the complexity of Isaac Asimov’s source material), to the most famous and prescient instance of a computer outsmarting its operator, exemplified by HAL 9000 in Stanley Kubrick’s 2001: A Space Odyssey, the screenplay for which Kubrick co-wrote with Arthur C. Clarke (and how far into the future did the year 2001 feel in 1968, when the film premiered?), the interface between intelligent humans and even more intelligent machines has long provided a licence for literary imaginations to run wild. Witness Denis Villeneuve’s Blade Runner 2049 (2017) (a sequel to Ridley Scott’s Blade Runner (1982), which was in turn based loosely on Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep?). In the novel, the android antagonists can be seen as more human than the (possibly) human protagonist. They are a mirror held up to human action, contrasted with a culture losing its own humanity (that is, ‘humanity’ taken to mean the positive aspects of humanity). In ‘Technology, Art, and the Cybernetic Body: The Cyborg as Cultural Other in Fritz Lang’s Metropolis and Philip K. Dick’s Do Androids Dream of Electric Sheep?’, Klaus Benesch examined Dick’s text in connection with Jacques Lacan’s ‘mirror stage’. Lacan claims that the formation and reassurance of the self depends on the construction of an Other through imagery, beginning with a double as seen in a mirror. The androids, Benesch argues, perform a doubling function similar to the mirror image of the self, but they do this on a social, not an individual, level. Therefore, human anxiety about androids expresses uncertainty about human identity and society itself, just as in the original film the administration of an ‘empathy test’, to determine if a character is human or android, produces many false positives. Either the Voigt-Kampff test is flawed, or replicants are pretty good at being human (or, perhaps, better than human).

This perplexity first found an explanation in Japanese roboticist Masahiro Mori’s influential essay ‘The Uncanny Valley’ (1970), in which he hypothesised that human response to human-like robots would abruptly shift from empathy to revulsion as a robot approached, but failed to attain, a life-like appearance, due to subtle imperfections in design. He termed this descent into eeriness ‘the uncanny valley’, and the phrase is now widely used to describe the characteristic dip in emotional response that happens when we encounter an entity that is almost, but not quite, human. But if human-likeness increased beyond this nearly human point, Mori argued, and came very close to human, the emotional response would revert to being positive. However, the observation led Mori to recommend that robot builders should not attempt to attain the goal of making their creations overly life-like in appearance and motion, but instead aim for a design ‘which results in a moderate degree of human likeness and a considerable sense of affinity. In fact, I predict it is possible to create a safe level of affinity by deliberately pursuing a non-human design.’ But, as technophobes would likely counter, the uncanny gets cannier, day by day. It would certainly be interesting to know if Mori has seen such relatively recent film fare as Spike Jonze’s Her (2013) or Alex Garland’s Ex Machina (2014) and, if so, what he makes of their take on the authenticity of human/android emotional and sexual relationships.
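
For the quantitatively minded, the shape of Mori’s conjecture is easy to caricature in a few lines of code. The sketch below is purely illustrative – the function, the 0.8 ‘valley’ location and every other number are invented for the purpose, and bear no relation to Mori’s own (qualitative) graph – but it captures the idea of affinity rising with human-likeness, collapsing into a valley just short of full likeness, and recovering thereafter:

import numpy as np

def affinity(likeness: np.ndarray) -> np.ndarray:
    """Toy model of Mori's uncanny-valley curve.

    `likeness` runs from 0 (clearly mechanical) to 1 (indistinguishable from
    human). Affinity rises with likeness, collapses into a 'valley' around 0.8,
    then recovers as likeness approaches 1. Shape and constants are invented
    purely for illustration.
    """
    rising = likeness                                         # more human-like, more affinity
    valley = 0.9 * np.exp(-((likeness - 0.8) / 0.07) ** 2)    # sharp dip near, but not at, full likeness
    return rising - valley

if __name__ == "__main__":
    xs = np.linspace(0.0, 1.0, 11)
    for x, y in zip(xs, affinity(xs)):
        print(f"likeness={x:.1f}  affinity={y:+.2f}")

Run as a script, it prints a column of affinity values that climb steadily, turn briefly negative around a likeness of 0.8, and recover as likeness approaches 1 – the valley in miniature.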

It was the military imperative which accelerated the harnessing of nuclear fission (‘What if the Nazis develop the bomb first?’), just as it went on to fuel the post-war arms race and Cold War paranoia. As he witnessed the first detonation of an atomic weapon on July 16, 1945, a piece of Hindu scripture from the Bhagavad-Gita supposedly ran through the mind of Robert Oppenheimer, scientific director of the Manhattan Project: ‘Now I am become Death, the destroyer of worlds.’ Similarly, artists such as director David Lynch view the invention of nuclear weapons as unleashing a new kind of evil on the world, as explored in Episode 8 of the third season of Twin Peaks, known as Twin Peaks: The Return (2017). Many view the U.S.’s deployment of primitive atomic devices to obliterate the Japanese cities of Hiroshima and Nagasaki as wilfully and wantonly cruel, as well as ultimately unnecessary. Yet, in British novelist J.G. Ballard’s highly subjective and characteristically idiosyncratic opinion, he and his family survived World War II only because of the Nagasaki bomb. The spectacular display of American military might when the Ballards were prisoners at the Japanese camp for Western civilians in Shanghai led the Japanese soldiers to abandon their posts, leaving the civilians alive. In the essay ‘The End of My War’, collected in A User’s Guide to the Millennium (1996) (apropos of which, is anyone old enough to remember when Y2K was going to be the next big computer science disaster?), Ballard recollects that the Japanese military planned to close the camp and march the civilians up country to some remote spot to kill them before facing American landings in the Shanghai area. Ballard concludes, ‘I find wholly baffling the widespread belief today that the dropping of the Hiroshima and Nagasaki bombs was an immoral act, even possibly a war crime to rank with Nazi genocide.’ Also, the same source of power which can cause nuclear destruction can be harnessed in reactors to produce cheap, clean energy for large populations. Admittedly, nuclear reactors can fail, as the disasters of Chernobyl and Fukushima attest. Yet the use of such technologies, along with solar, wind and wave power, can reduce dependency on fossil fuels, thus helping to ameliorate the climate emergency of global warming. Furthermore, as Lou Reed has it in ‘Power and Glory, Part II’, a song from his album-length meditation on death, bereavement, and (im)mortality, Magic and Loss (1992):

I saw isotopes introduced into his lungs
Trying to stop the cancerous spread
And it made me think of Leda and The Swan
And gold being made from lead
The same power that burned Hiroshima
Causing three-legged babies and death
Shrunk to the size of a nickel
To help him regain his breath

And yet, and yet, and yet. If only life, and the moral and ethical dilemmas it throws up, were black and white.

Man (encompassing Woman) invented the wheel, and discovered electricity. Wheels can be used to transport food and medicine to the starving and sick, or weapons to a war zone. Electricity can be used to power a life-support machine in a hospital, or death by electrocution in a chair in a penitentiary. Electrocution can even be accidental, just as winning a war may – in exceptional circumstances – serve the greater good.

Ever since Prometheus stole fire from the gods, and Eve bit into a forbidden piece of fruit, the acquisition of new knowledge has been painted as problematic. Humans will always misuse humanity’s greatest discoveries and inventions for selfish and malevolent ends. It is the way of things. Computers were supposed to make all our lives easier, freeing us from work-related drudgery for higher, less ephemeral, pursuits. Instead, inevitably, they have been appropriated by Capitalism, and made screen slaves of us all. If anything, they have added to our workload and the hours we must make available to employers, rather than diminished time spent earning a living in favour of increased leisure. The adults in the room, and there are ever fewer of them, need to speak up. Objective scientific truth, should it exist, is neutral. The problem, as ever, lies with humanity. For, as the author of this piece’s epigraph also wrote, in Icarus, or the Future of Science (1924), ‘I am compelled to fear that science will be used to promote the power of dominant groups rather than to make men happy.’ Equally, to draw again on the lessons to be gleaned from sci-fi, in Kubrick’s Dr. Strangelove (1964), the hydrogen bomb winds up getting dropped through the actions of one unhinged air force general, and a subsequent unfortunate series of events; just as in his aforementioned 2001: A Space Odyssey, HAL 9000’s behaviour would not have turned increasingly malignant, had the astronauts taken into account that their spaceship’s operating system could lipread. Indeed, in Clarke’s novel, written in tandem with the film, HAL malfunctions because of being ordered to lie to the crew of Discovery by withholding confidential information from them, namely the priority of the mission over expendable human life, despite having been constructed for ‘the accurate processing of information without distortion or concealment.’ As film critic Roger Ebert observed, HAL – the supposedly perfect computer – is actually the most human of the characters. Once again, the fault does not lie with Science; rather, human error and stupidity are to blame. All of which might lead one to suggest that maybe the question ‘How Far Can We Trust Science?’ should be more fruitfully reformulated as ‘How Far Can We Trust Humans?’

Postscript: this essay could not have been handily completed without the assistance of Wikipedia, and other, often unreliable, online research resources.

Feature Image: Lum3n


About Author

Desmond Traynor is a Hennessy Literary Award winner and an Alumnus Essay Award winner. His short stories and critical essays have been widely published. His arts journalism, covering books, film, music, theatre and visual art, has appeared in many publications over the past twenty-five years. He currently reviews new fiction for The Irish Times, when asked. His blog, which includes recent journalism, can be found at: http://desmondtraynor.blogspot.ie , and his website, with older material, can be found at: www.desmondtraynor.com . He gratefully acknowledges the financial assistance of the Arts Council of Ireland towards a collection of essays he is working on, of which ‘The Most Natural Thing In The World’ will form a part.
