Probing The Inaccuracies: Nuclear Power (UPDATED 21/04/2011)

Since its inception in the 1940s and 1950s, nuclear power has been looked upon with suspicion, and for more than thirty years, it has been perceived by some as fundamentally unsafe. Between the public perception of nuclear power’s ostensible lack of safety that took hold in the late 1970s and 1980s, and the connections to nuclear weaponry that have plagued nuclear power since its invention, what was once one of the most promising sources of electrical generation has stagnated and even gone into decline, as politicians, many afraid of what actively supporting nuclear power would do to their careers, refuse to allow nuclear generating capacity to expand.

Unfortunately for those people who would protest against the continuing use of nuclear power, we’re left with a quandary. Oil prices have climbed during the last decade. The most substantial sources of oil we can cheaply access exist in states which do not have the most favourable outlook on the West. What’s more, oil is a valuable feedstock for chemical production, generating products which are immensely important for agricultural and industrial purposes.

Those sources of energy production which we would seek to replace hydrocarbon fuels with, such as wind-driven turbines and solar panels, come with their own problems. Their output is ultimately limited by uncontrollable external factors, and their ability to produce a useful baseload capacity is questionable. Such sources of energy also currently require considerably more land per watt than either hydrocarbon or nuclear plants.

We have reached a time when we seriously need to consider what path we will take with regard to energy production. It is very likely that we will require every alternative to oil that we can manage, and among those energy sources, we need one which can produce dependable, predictable amounts of electrical power, one that isn’t hamstrung by external factors. Aside from hydrocarbon power stations, which we have already dismissed as a sensible option for the future because of the limited supply of fuel and their capacity for environmental damage, nuclear power is perhaps the most sensible of the remaining options.

Yet, confronted by this impending scenario, there remains strong opposition to nuclear power. Some opponents have legitimate concerns, the disposal of nuclear waste among them, while others simply use pure misconceptions to argue their case. I intend to address some of these misconceptions and help prove the case for nuclear power.

“Nuclear power is inherently unsafe!”

This would be one of the main problems that pro-nuclear campaigners have: explaining away the PR disasters that resulted from the 1979 partial meltdown at the Three Mile Island nuclear power station in Pennsylvania and the famous and tragic disaster at the Chernobyl Nuclear Power Plant in 1986. Somehow, almost everybody, including anti-nuclear campaigners, seems to take it for granted that these two disasters demonstrate an inherent lack of safety in nuclear power generation. Recently, the disastrous consequences of the 2011 Japanese earthquake and tsunami for the Fukushima Daiichi power station have reignited the nuclear safety debate. Unfortunately for the opposition to nuclear power, this most damaging of all misconceptions is also one of the least rooted in actual fact. Statistically, nuclear power is one of the safest sources of energy available1, 2, 3.

What causes this misconception to thrive? One reason could be the aforementioned connection of nuclear power to nuclear weaponry, with some people making the false connection between nuclear weaponry and some ostensible ability of a nuclear power plant to explode like a nuclear weapon. A quick look at the principles of both nuclear power generation and fissile nuclear weaponry soon leads to the truth: a nuclear power plant, by virtue of the way it’s constructed, cannot explode like a nuclear weapon. Civilian reactor fuel is typically enriched to only a few percent uranium-235, nowhere near the ninety-odd percent needed for weapons-grade material, and a reactor core’s geometry is entirely wrong for producing the supercritical assembly a bomb requires.

So, what about power plant failures that can actually happen? They’re still not as common as some people would like to think. What’s more, they’re very rarely fatal. There has never been a recorded fatal nuclear accident in France, a country which generates 78% of its electricity from nuclear reactors4, 5. There hasn’t been a fatal nuclear accident in the United States since 19616, and none in the nuclear-powered vessels of the United States Navy, which has been operating reactors in close proximity to human crews for more than fifty years.

Yet, the public discourse on this subject always seems to gravitate towards the failures at Three Mile Island and Chernobyl. This is perplexing, given that both scenarios have factors which make them unsuitable as arguments against current-generation nuclear reactors. The disaster at Chernobyl may have been devastating, but it occurred in a vastly outdated reactor without a concrete containment shell, and only occurred because of gross human negligence. Meanwhile, Three Mile Island would seem to argue in favour of nuclear power’s proponents: there was no significant release of radioactive material, the concrete shell of the reactor did exactly the job it was intended to do, and there were no recorded fatalities as a consequence of the incident.

The Chernobyl incident is worth discussing in greater detail. The reactors at the Chernobyl Nuclear Power Plant, which have now been decommissioned, are of the RBMK-1000 type. The RBMK range of nuclear reactors is notoriously crude7, with a design dating back to the 1950s, and is noted as one of the most dangerous reactor designs ever put into production. They were made for two main purposes: to be cheap and to produce plutonium for nuclear weapons. Electrical power was strictly a useful byproduct, and to facilitate the quick retrieval of plutonium from the reactor, the reactors at Chernobyl were built without a concrete containment shell, unlike all contemporary American designs, and indeed, just about every nuclear reactor made since.

Even with the inherent safety issues of the RBMK design, the Chernobyl disaster only occurred on the scale that it did because of catastrophic incompetence and gross human error8. A decision was made to test the emergency cooling system of the reactor during routine maintenance, specifically by trying to use the residual kinetic energy of the spinning steam turbine to power the cooling pumps. This is not a decision that any trained nuclear engineer should have made, as it left a lot to chance – far too much for the tolerances of the already unsafe RBMK reactors. More worryingly, the day-shift workers who had been briefed on the experiment had long since departed by the time the reactor was shut down, and the evening shift was also preparing to leave. This left the experiment to take place during a shift change, with the incoming workers largely unaware of what was about to happen.

Some of the safety failings of the RBMK design have been discussed above, but the reactor had another set of worrying characteristics: it was counter-intuitively and confusingly more dangerous at low power, a consequence of its positive void coefficient, and its SCRAM, or emergency shut-down procedure, took eighteen seconds to complete, which compares extremely unfavourably with the five seconds of modern reactors.

In order to maintain the unstable chain reaction in the reactor core, the manual control rods had been withdrawn, exacerbating the problem with the SCRAM procedure. When the inevitable happened and the SCRAM was finally triggered, the graphite tips of the descending control rods displaced the light-water coolant, speeding up the reaction before the boron sections could slow it down. Expanding fuel rods blocked some of the control rod channels, reactor power sharply increased, and the resulting power excursion culminated in a steam explosion that destroyed the reactor mechanism.

The catastrophic events of this disaster can be blamed directly on two sources: abominable reactor design and poor human decisions. As mentioned above, no contemporary American nuclear reactor would have been built without a sufficiently strong concrete shell, which would have contained the radioactive material within the reactor, preventing the spread of damage to the wider community. The Chernobyl reactor was also operated primarily by humans – who can easily be demonstrated to be fallible. More recent reactor designs – so-called Generation III designs, which have begun operation in Japan and are planned for France – are operated primarily by highly redundant arrays of computers, which can act far more quickly than a human at the first sign of an anomaly. The Chernobyl disaster simply couldn’t happen with a modern reactor.

It should also be noted that while the Chernobyl disaster was undoubtedly a horrifying incident and an indictment of the Soviet nuclear power programme, the recorded death toll was not as high as some sources would have you believe. In 2005, a report by the Chernobyl Forum, a group headed by the World Health Organisation and made up of members of the International Atomic Energy Agency and several other UN organisations, established that apart from the 57 deaths directly attributable to the disaster, a further 4,000 to 9,000 people were estimated to eventually die of cancers attributable to the released radiation9.

This figure must be looked at in context. Few people would suggest that even 4,000 deaths is anything but a tragedy, and yet more people die every year because of particulate air pollution from fossil fuel sources than have died in the entire history of nuclear-fuelled energy – figures that go largely unreported because the deaths do not stem from one conspicuous disaster. Hundreds of coal miners die every year, many of them in barely-noticed mine collapses. Hydroelectric power is not blameless either – dam failures can be lethal, most conspicuously in China in 1975, when the Banqiao Dam collapse caused the deaths of an estimated 26,000 people due to flooding and a further 145,000 due to epidemics and famine10, 11, 12.

The fact is that no source of energy production is entirely infallible. Even wind turbines can fail, given certain weather conditions, with dangerous results. Statistically, nuclear power proves to be one of the safest sources of energy production currently available. One could liken it to travelling in an aeroplane: people get nervous because of the conspicuous incidents which have occurred during the history of flight, and yet it is statistically safer to fly than to drive, or to take a train or a boat.

All of this leads to the recent events at Fukushima. While the long-term effects of the disaster are not yet known, a few things are already certain. On the 12th of April, 2011, the Fukushima Daiichi reactor failures were classified by Japan’s nuclear regulator as only the second INES Level 7 nuclear accident; the first, of course, being Chernobyl. Indeed, the comparisons to Chernobyl have come thick and fast from the media. It is my belief that the Fukushima reactor failures resemble Chernobyl in only two ways: they were both conspicuous nuclear accidents, and Fukushima will likely result in multiple deaths.

Yet, comparing Fukushima to Chernobyl indicates more a limited grasp of the facts than anything approaching merit. Unlike Chernobyl, Fukushima wasn’t caused by gross human negligence: despite the limits of its protection against seismic activity, the power plant had been prepared for what would normally seem like an appropriate earthquake magnitude. An earthquake of 9.0 on the moment magnitude scale is not a common occurrence, and it brings problems that go far beyond nuclear power stations.

Many thousands of people died as a direct consequence of the earthquake and subsequent tsunami, and many more have been left homeless or displaced. Meanwhile, despite the news coverage given to the nuclear power plants at Fukushima, the release of radiation from the plants is of more consequence to the people working directly at the power station than it is to the wider population. Because of the containment structures within the Fukushima power plants, it is unlikely that there will be any long-term restrictions in the vicinity of the power stations.

Recent incidents involving terrorism have caused people to think of nuclear safety in another way: a nuclear reactor could be a target for terrorists, who might wish to destroy a reactor in order to spread radiation, or to steal the byproducts of nuclear fission in order to create a radiological weapon. The first scenario can easily be debunked, thanks to that same concrete shell protecting the outside world from the effects of any safety breaches inside the reactor core. Engineers are not stupid people, and demonstrate a lot of foresight. Many reactors are shielded by more than one layer of concrete, and a reinforced containment wall will destroy an aircraft flown into it with barely a scratch to the concrete, while demolishing it deliberately would require quantities of explosives well beyond the means of the poorly-funded and not particularly well-organised terrorist organisations which might currently want to target a nuclear reactor. I don’t think the public has much to worry about regarding the structural integrity of a nuclear reactor in the case of a prospective terrorist attack.

The second scenario postulated here is a more realistic – and arguably, more worrying – one. The addition of radioactive substances to a conventional explosive could greatly exacerbate the effects of the explosion. However, and this is a point that I will return to later, nuclear reactors don’t produce the amount of hazardous waste that many people believe. Most of the left-over material from the fission of uranium-235 is uranium-238, an isotope of uranium which can only be split by fast-moving neutrons and which does not sustain a chain reaction. This is a substance with a very long half-life – about 4.5 billion years – and a correspondingly weak emission of alpha radiation, the least penetrating type of radiation. While alpha radiation can cause significant cellular damage inside the body, it is easily stopped by a sheet of paper – or by skin cells. The concentration of more dangerous radioactive isotopes, such as radioactive iodine or plutonium, is considerably lower than that of uranium-238. The difficulty of isolating these more dangerous isotopes, and of acquiring a large amount of radioactive material at all, makes this scenario less probable than the use of conventional explosives, which are far more readily acquired – or made at home. Nevertheless, despite its unlikelihood, it is a potential hazard, and one which the administrators of nuclear power stations will have to continue to guard against.
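
To put that half-life in perspective, here’s a minimal back-of-the-envelope sketch in Python – the isotope constants are standard reference values, and the script itself is purely illustrative arithmetic:

```python
import math

AVOGADRO = 6.022e23                  # atoms per mole
U238_MOLAR_MASS_G = 238.05           # grams per mole
YEAR_S = 365.25 * 24 * 3600
U238_HALF_LIFE_S = 4.468e9 * YEAR_S  # ~4.5 billion years, in seconds

# Atoms in one kilogram of uranium-238.
atoms = 1000.0 / U238_MOLAR_MASS_G * AVOGADRO

# Activity A = lambda * N, with decay constant lambda = ln(2) / half-life.
activity_bq = math.log(2) / U238_HALF_LIFE_S * atoms

print(f"Activity of 1 kg of U-238: {activity_bq / 1e6:.1f} MBq")
# ~12 MBq -- a very modest figure, which is why such a long half-life
# goes hand in hand with such a weak emission of radiation.
```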

“Nuclear fuel won’t last forever!”

No, it won’t. That’s a correct assumption as long as the nuclear power in use is fission rather than fusion, which relies on hydrogen and helium isotopes that can be found in significant abundance. However, the projected shortage of nuclear fuel may well be overestimated: there are technologies, not yet explored in full, which use rather more abundant fuels than uranium-235.

Before we explore these other technologies, we must not rule out the potential for more efficient use of uranium-235. One kilogram of uranium-235 can theoretically produce 80 terajoules of energy – around three million times as much as a kilogram of coal13. Even with the incomplete fission found in nuclear reactors, uranium-235 still produces a vast amount of energy per unit mass.
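
For the sceptical, the arithmetic behind that comparison is easy to check – a quick sketch, assuming a typical coal energy density of 24 MJ/kg (an assumption on my part; coal grades vary widely):

```python
# Energy per kilogram: fission of U-235 versus combustion of coal.
# 80 TJ is the theoretical figure quoted above; 24 MJ/kg is a typical
# energy density for coal (an assumption -- grades vary widely).
U235_J_PER_KG = 80e12
COAL_J_PER_KG = 24e6

ratio = U235_J_PER_KG / COAL_J_PER_KG
print(f"1 kg of U-235 is worth roughly {ratio:,.0f} kg of coal")
# ~3,300,000 kg -- the "three million times" figure quoted above.
```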

This more efficient use of uranium could be facilitated using nuclear breeder technology. A breeder reactor creates more fissile material than it consumes, and could be used to vastly improve the efficiency of nuclear fuel use. There are two main proposals for breeder reactors: fast breeder reactors, which would convert the common and previously mentioned uranium-238 isotope into the fissile plutonium-239, and the more economically viable thermal breeder reactors, which convert thorium into uranium-233 – both chains are sketched below.
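
On paper, the two breeding chains work the same way: a fertile nucleus captures a neutron, then two beta decays yield the fissile isotope. A sketch in standard nuclear notation:

```latex
% Fast breeder: fertile U-238 captures a neutron and, after two beta
% decays, becomes fissile Pu-239.
{}^{238}\mathrm{U} + n \rightarrow {}^{239}\mathrm{U}
  \xrightarrow{\beta^-} {}^{239}\mathrm{Np}
  \xrightarrow{\beta^-} {}^{239}\mathrm{Pu}

% Thermal breeder: fertile Th-232 captures a neutron and, after two
% beta decays, becomes fissile U-233.
{}^{232}\mathrm{Th} + n \rightarrow {}^{233}\mathrm{Th}
  \xrightarrow{\beta^-} {}^{233}\mathrm{Pa}
  \xrightarrow{\beta^-} {}^{233}\mathrm{U}
```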

While the conversion of uranium-238 into a fissile isotope is of strong theoretical interest, the thorium fuel cycle demonstrated in thermal breeder reactors is of more economic interest, and therefore, I will focus my attention on it. Thorium is an actinide found in nature as the thorium-232 isotope, estimated to exist in three to four times the quantity of uranium, and present, among other places, in the waste from the combustion of coal. Significant interest in this technology has been shown by India, which is believed to hold some of the largest thorium reserves in the world, although significant quantities are also believed to exist in the United States and Norway.

The mononuclidic nature of thorium makes it an interesting candidate for nuclear breeding. Unlike uranium-235, which makes up only a small fraction of natural uranium ore, thorium-232 constitutes essentially all naturally occurring thorium, so isotope separation does not need to be performed, allowing thorium to be used with fewer processing steps. The thorium fuel cycle also has a number of advantages which would make it an interesting candidate for the nuclear reactors of the future, among them the greater difficulty of making nuclear weapons from the uranium-233 produced in the fuel cycle.

Without the significant inefficiencies found in current-generation uranium-based reactors, thorium fuels could last significantly longer than uranium is projected to, buying a lot more time to develop alternatives such as renewable energy sources more efficient than today’s wind farms and solar panels, along with the Holy Grail of nuclear energy production: nuclear fusion.

Nuclear fusion generates large quantities of energy through the fusion of light nuclei: deuterium, an isotope of hydrogen which can be separated from seawater; tritium, which can be bred from lithium; and helium-3, which is expected to exist in significant abundance on the surface of the Moon. The principle has operated for billions of years inside stars, including our own Sun.

Nuclear fusion can claim some important advantages over nuclear fission: the fuel is unquestionably more abundant than uranium or even thorium, the waste products are negligible, and such reactors would be useful in certain futuristic scenarios, including interplanetary research stations, colonies and spacecraft, where renewable sources of energy would not prove sufficiently useful or reliable.

Currently, nuclear fusion is more of an experimental concern than anything we could use practically. Research is ongoing in France14 to demonstrate the potential of nuclear fusion as an energy source for the future. However, fusion requires a certain investment of energy before the nuclei will fuse, and we have not yet been able to generate more energy from fusion than was invested, the most favourable ratio so far being approximately 10:7.
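
For reference, this is usually expressed as the fusion energy gain factor Q – notation which is standard in the field, though not used in my sources above:

```latex
% Fusion energy gain factor: the ratio of fusion power produced to
% heating power invested. Breakeven is Q = 1.
Q = \frac{P_{\text{fusion}}}{P_{\text{input}}}
% The 10:7 ratio quoted above corresponds to Q \approx 0.7 --
% close to, but still short of, breakeven.
```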

Nevertheless, the technology has some unquestionable benefits for the future, and certainly can’t be ignored. Earth isn’t about to run out of hydrogen any time soon, and fusion power plants have the potential to be incredibly safe and produce low amounts of waste while still providing a useful baseload capacity.

“Pie-in-the-sky schemes make for good copy, but what about nuclear power right now?”

Many criticisms of nuclear power which don’t revolve around the ostensible safety problems revolve around economics. As things stand, it costs billions of euro to build a nuclear power station, and construction can take decades to complete.

I don’t believe that this is an issue with the reactors themselves. Instead, I’d level my criticism at red tape and bureaucracy, both generated by the perplexing hostility which a significant minority holds towards nuclear power. The safety issues are not a legitimate concern: most operational reactors have failure margins in the range of one damaged core per plant every 20,000 years15, and modern reactors have failure margins in the order of one per hundred thousand years – well beyond the lifespan of a nuclear reactor, and facilitated by engineering redundancy.
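
To make those margins concrete, here’s a rough sketch of what they imply over a reactor’s service life – the 40-year figure is my own assumption, a typical licensing period:

```python
# Chance of at least one core-damage event over a plant's service life,
# reading "one damaged core per plant every 20,000 years" as an annual
# probability of 1/20,000. The 40-year service life is my assumption,
# a typical licensing period.
ANNUAL_PROB_OLD = 1.0 / 20_000
ANNUAL_PROB_MODERN = 1.0 / 100_000
SERVICE_LIFE_YEARS = 40

for label, p in [("older designs", ANNUAL_PROB_OLD),
                 ("modern designs", ANNUAL_PROB_MODERN)]:
    lifetime_prob = 1.0 - (1.0 - p) ** SERVICE_LIFE_YEARS
    print(f"{label}: {lifetime_prob:.2%} over {SERVICE_LIFE_YEARS} years")
# ~0.20% and ~0.04% respectively -- small numbers for any industrial plant.
```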

Other criticisms revolve around the high cost per kilowatt-hour of nuclear production. I’m not sure whether these costs include the cost of building the reactor in the first place, but if they do, removing some of the absolutely unnecessary bureaucratic bother surrounding construction would bring them down appreciably. If not, the costs could still be decreased by adopting breeder reactor designs that burn the more abundant thorium.

Some critics of nuclear power grasp onto the point that modern nuclear power stations all seem to be monolithic, centralised power units, incapable of being decentralised. Try telling that to the US Navy, which has been putting nuclear reactors in ships and submarines since 1955. I’m not sure if you’ve ever seen a modern submarine, but despite their growth over previous generations, they aren’t all that big, especially in comparison to the huge metropolises which some nuclear stations have to power.

In that vein, some companies have been working towards scalable, modular power stations which can be embedded underground and which require little maintenance. Among these is the Toshiba 4S16, 17, a design producing 40MW of power in a footprint little bigger than an electrical sub-station. Reactors like this could make up arrays which would act together in the place of larger, single-core reactors, or else be distributed in such a way as to decentralise the generation of power.
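
As a rough illustration of the array idea, using the 40 MW figure quoted above and assuming a 1,000 MW conventional plant as the benchmark:

```python
# Sizing an array of small modular reactors against one large plant.
# 40 MW per module is the figure quoted above; the 1,000 MW reference
# plant is my assumption, typical of a large single-core reactor.
MODULE_MW = 40
LARGE_PLANT_MW = 1_000

modules = -(-LARGE_PLANT_MW // MODULE_MW)  # ceiling division
print(f"{modules} modules to match a {LARGE_PLANT_MW} MW plant")
# 25 modules -- sited together as one array, or scattered around the
# grid to decentralise generation.
```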

Meanwhile, in all of the fuss, people have forgotten some of the major advantages of nuclear power. Nuclear power has very low emissions of carbon dioxide18, considerably lower than any fossil fuel source, and as things currently stand, potentially lower even than wind and solar power19. This is one of those things that leaves me rather perplexed at the complete dismissal of nuclear power by environmentalists. While I understand their criticism on the grounds of nuclear waste, I thought that the reduction of carbon dioxide emissions was a fairly big deal. Did priorities suddenly change without me noticing?

“How about the alternatives, about renewable energy?”

I’m not going to suggest that exploring – and using – alternatives to fossil fuels, and perhaps, eventually, nuclear energy, is a bad thing. It’s clear that we need to use everything we can get our hands on to reduce our dependence on oil as a fuel source, and instead save it for chemical production. But just as we can’t dismiss wind and solar energy because we have nuclear energy, we can’t dismiss nuclear energy just because we’re tilting towards renewables.

The problem is that renewable energy sources are currently too limited to serve as baseload power generators. Wind and solar power rely on specific weather conditions to operate at optimal capacity, a limitation not found in nuclear power. While the optimal weather conditions for wind and solar tend to balance each other out, the reliability of the combined sources is not in the same league as nuclear power.

The sensible thing to do would seem to be to build excess generating capacity and to store some of the surplus energy in a form that can be reused later. One of the most promising methods is the electrolysis of water to produce hydrogen, which can later be fed to a fuel cell, generating electricity as the hydrogen and oxygen recombine into water. This method has the advantage that its only byproduct is water, which can be electrolysed all over again, creating an essentially waste-free cycle.
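
A minimal sketch of the energy bookkeeping for such a scheme – the efficiency figures here are illustrative assumptions, not measurements of any real electrolyser or fuel cell:

```python
# Round-trip bookkeeping for storing surplus electricity as hydrogen.
# Both efficiency figures are rough illustrative assumptions, not
# measured values for any particular electrolyser or fuel cell.
ELECTROLYSIS_EFF = 0.70  # electricity -> hydrogen
FUEL_CELL_EFF = 0.50     # hydrogen -> electricity

surplus_kwh = 100.0
recovered_kwh = surplus_kwh * ELECTROLYSIS_EFF * FUEL_CELL_EFF
print(f"{surplus_kwh:.0f} kWh stored -> {recovered_kwh:.0f} kWh recovered")
# ~35 kWh: a lossy process, but the losses fall on energy that would
# otherwise be discarded, and the only byproduct is water.
```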

While one could build an excess of wind and solar power plants for this purpose, the more sensible option would seem to be to use the excess capacity of a more dependable nuclear power plant for the production of hydrogen and the wind and solar plants for the generation of power-grid electricity. There are a few reasons why we’d want to do it that way around, one of the most prominent being the space requirements for excess solar and wind stations.

Nuclear power plants take up a lot less space per watt than wind turbines or solar panels, both of which occupy a lot of land. Some of this space requirement could hypothetically be mitigated by the use of turbines and solar panels on a residential scale, although the efficiency of such devices may not be sufficient to entirely run a building. Nevertheless, the occupation of a large amount of potentially productive land could make the use of wind and solar power a more dubious prospect.

Tidal power, an alternative recently rejected in Britain by the incumbent coalition government, is even less efficient in terms of space, and can cause problems for the fishing industry. Hydroelectric power, as mentioned above, can be highly unsafe in the case of catastrophic failure. Geothermal power is only an economically viable option in countries where suitable geological conditions, such as volcanic activity, exist. All of these energy sources can be useful, but few provide the baseload capacity of fossil-fuel or nuclear power, and most are limited by environmental conditions of some sort.

“And what about nuclear waste? Surely, that’s a pretty big deal.”

Yes, it is a pretty big deal. Nuclear waste is pretty much universally considered to be a Bad Thing, with environmentalists worried about the potential harmful effects on nature, and the owners of nuclear power stations concerned about profit margins – as nuclear waste is, after all, representative of wasted energy.

That would be why modern – Generation III, specifically – reactor designs are intended to minimise the amount of waste that they produce. The use of even more modern designs from the so-called Generation IV set of technologies, or of thorium breeder reactors, which consume most of the transuranic elements produced in the course of their reactions, could reduce the amount of waste even further.

The storage of the remaining waste has posed a few problems. While nuclear waste could actually be diluted a thousand-fold or more and dumped into the ocean with fewer negative results than the premise of the scheme would suggest, most people would consider this to be very irresponsible.

In fact, what seems to be the most reasonable option is the storage of waste underground after vitrification – conversion of the waste into a stable, glass-like form. This could be combined with reprocessing in order to retrieve as much of the useful nuclear material as possible, a strategy not currently pursued by the United States for policy reasons, but used by other major nuclear powers such as France, Russia and Japan.

If the storage of such materials underground still seems irresponsible, consider that there is no reason why certain nuclear isotopes contained within the waste should not become useful in the future. There’s no telling whether people in the future might develop a sort of nuclear reactor capable of efficiently using depleted uranium.

The people responsible for the safe storage of nuclear waste may seem lackadaisical, but that’s nothing on the level of the people responsible for disposing of coal waste. Research conducted by the Oak Ridge National Laboratory20 in the United States demonstrated the presence of significant quantities of radioisotopes of uranium and thorium in coal slag. You should recognise these two elements respectively as the current primary nuclear fuel source and the promising “super-fuel” discussed earlier.

In fact, these radioisotopes are present in such quantities that there is more energy contained in the nuclear waste in coal than liberated from the combustion of the coal itself. People are just throwing that potential nuclear fuel away, willy-nilly! If waste from nuclear reactors was treated in such a blasé fashion, there would be uproar.

Incidentally, the presence of thorium and uranium in coal means that coal-fired power plants generate 100 times the population-effective dose of radiation that nuclear power plants do20. Think about that the next time somebody rolls out the old NIMBY argument about nuclear reactors – they probably wouldn’t have the same problem living in the vicinity of a coal power plant. Of course, the criticism of nuclear reactors makes it seem less like “not in my back yard” and more like “build absolutely nothing anywhere near anything”.

So far, there isn’t a perfect solution to the problem of nuclear waste. A lot of that is down to the fact that nuclear waste often has the potential to be useful – it’s just that research hasn’t got to the stage where we can economically reuse nuclear waste. I’d be inclined to level some of the responsibility for this slow rate of research at the people holding back the adoption of nuclear power as a whole, who have prevented more modern, more efficient and cleaner reactors from coming into service. However, it’s certainly a problem that can be solved with the judicious application of science and engineering, and certainly not with scaremongering, political horse-trading and Luddism.

References:

1. http://www.phyast.pitt.edu/~blc/book/chapter6.html#4
2. http://www.world-nuclear.org/info/inf06.html
3. http://www-pub.iaea.org/MTCD/publications/PDF/Pub1032_web.pdf – Chapter 7
4. http://www.iaea.org/Publications/Factsheets/English/ines.pdf – the most substantial nuclear accident in France was Level 4 on the INES scale, below even Three Mile Island in severity.
5. http://www.rte-france.com/uploads/Mediatheque_docs/vie_systeme/annuelles/bilan_energetique/energie_electrique_en_france_2010.pdf
6. http://en.wikipedia.org/wiki/SL-1
7. http://www-pub.iaea.org/MTCD/publications/PDF/Pub1032_web.pdf – page 194
8. http://en.wikipedia.org/wiki/Chernobyl_disaster
9. http://www.iaea.org/Publications/Booklets/Chernobyl/chernobyl.pdf
10. http://en.wikipedia.org/wiki/Banqiao_Dam
11. http://news.sina.com.cn/o/2005-08-09/06296643805s.shtml
12. http://english.people.com.cn/200510/01/eng20051001_211892.html
13. http://en.wikipedia.org/wiki/Uranium
14. http://www.iter.org/
15. http://www.ce.nl/pdf/03_7905_11.pdf
16. http://www.nrc.gov/reactors/advanced/4s.html
17. http://www.businessweek.com/magazine/content/10_22/b4180020375312.htm
18. http://www.iaea.org/Publications/Magazines/Bulletin/Bull354/35404782026.pdf
19. http://www.world-nuclear.org/education/comparativeco2.html
20. http://www.ornl.gov/info/ornlreview/rev26-34/text/colmain.html

The Good, The Bad and The Ugly – A Cinematic Review

The Good, The Bad and The Ugly: It’s a title that almost everybody knows. With a myriad of references, pastiches and imitations, it’s almost certain that you’ll know something about the film, even if your knowledge is limited to the critically acclaimed title theme. It’s anything but a stretch to call this one of the most iconic films ever produced. Opening up a new dimension in the Western genre with its novel approach, the film is arguably the magnum opus of its director, Sergio Leone (although others would credit that specific honour to the 1968 film, Once Upon A Time In The West.)

The film, released in 1966 and starring Clint Eastwood, Lee Van Cleef and Eli Wallach as Blondie, Angeleyes and Tuco Ramirez – the titular Good, Bad and Ugly – follows the three protagonists on a journey across the American Southwest, crossing a country divided and ravaged by civil war in a race to unearth a fortune in stolen gold buried in an unmarked grave.

The plot is surprisingly deep and very complex, combining bloodshed and betrayal with a cynical look at the American Civil War, filmed to resemble the battlefields of the First World War. Cynicism is very much the order of the day throughout the film, as the protagonists occupy a world where the “Good” only ostensibly lives up to his title. Sergio Leone delights in deconstructing the tired myth of the West, swapping the clichéd, puritan and upright ethos of preceding Westerns for a seedy standpoint where morality is subjective and never interferes with the main characters’ desire for money. The frontier spirit of the early Western conquests is captured, and despite the fact that the film was shot in Spain with mostly European actors, it arguably does a better job of recreating the Old West than many American-made Westerns.

Key to this cynicism are the characters themselves. The Man With No Name is considered the “Good” only because he refrains from the blatant banditry of his fellow protagonists. Yet within the opening scenes of the movie, we see him shooting other would-be bounty hunters and obstructing the law by shooting through the hangman’s rope around the neck of a notorious bandit; then, once the money from this ploy has dried up, he abandons that same bandit in the middle of the desert, facing a fifty-mile journey to the nearest town.

If the “Good” doesn’t quite live up to his billed title, the same couldn’t be said of the “Bad”. Angeleyes, a mercenary-for-hire, has absolutely no trouble playing two sides against each other, or in torture, exploitation or murder. We see this hammered home within the first fifteen minutes, as he shoots down two paymasters opposed to each other as soon as he’s collected the money each offers him. Yet Angeleyes still works to a steadfast principle: “When I get paid for a job, I always see it through to the end.” Despite his lack of loyalties and his impartiality, he works to something at least resembling a moral code, even if the details leave much to be desired.

While the “Good” and the “Bad” represent the closest things to moral absolutes in a film which delights in its cynicism and loose morality, the “Ugly” is an altogether more complex character. With the film opening as Tuco Ramirez smashes through a window to escape a group of bounty hunters, we immediately see his volatile and unpredictable nature. Despite that, he shows considerable charisma and charm. In contrast to Blondie, the taciturn bounty hunter, or Angeleyes, the murderous mercenary who lurks behind the scenes, Tuco is a more fleshed-out character. His notoriety has earned him a list of crimes which almost seems comical in its length and variety, but we see later that Tuco’s behaviour is more selfish than abjectly “good” or “bad”, fuelled by the same money-lust that drives the other protagonists, only with a more unpredictable and more driven personality behind it.

With these characters comes a great deal of unpredictability, as such strong personalities don’t lend themselves well to teamwork, and the film is full of betrayals and shifting, uneasy alliances, which will keep viewers asking themselves questions to the very end and make the plot far less derivative and more riveting than one might expect from a Western.

Cinematically, this film is a masterpiece. Sergio Leone proves himself an able practitioner of the “show, don’t tell” principle, eschewing dialogue in favour of the cinematic approach. The film ranges from sweeping camera pans over the battlefields of the Civil War, through careful framing of the characters, to some of the most fantastic close-up shots ever committed to film, particularly during the shoot-outs, with their close shots of darting, squinting eyes and fidgeting hands (bonus points for those who notice the missing segment of Lee Van Cleef’s middle finger). Leone creates a fantastic sense of style, from the fabulous opening sequence with its abstractions all the way to the very end, with a shoot-out so stylish that it entrances.

While Leone never relies on dialogue to propel the movie, managing to do more with a single character’s expression than most film-makers manage with whole speeches, when the characters do speak, it’s usually something worth listening to. The characters fire off ripostes almost as often as they fire their guns, and the lines prove able to withstand repetition by never seeming forced. Of particular note is Tuco’s response to a man who has cornered him and launches into a monologue about how long he has waited to find him – a surprise shot, followed by four more, and the reply, “If you need to shoot, shoot! Don’t talk!” – which perfectly illustrates both the film’s emphasis on action over dialogue and its eminently quotable nature.

While the film would be notable for these preceding characteristics alone, proving itself a fantastic deconstructor of those over-used tropes in the genre, there is one element not yet covered which ensures and cements this film’s deserved iconic status.

The music in this film goes beyond good. It goes beyond brilliant. It is quite simply masterful, the work of a virtuoso at his very finest. Apart from the iconic title theme, deservedly one of the most well-known tunes ever written, Ennio Morricone builds tension throughout with his outstanding score and his unpredictable, inventive use of instruments not usually encountered in orchestral music. Out of this fabulous score come two stand-out successes: L’Estasi Dell’Oro (known in English as The Ecstasy of Gold), one of the most fitting and masterfully orchestrated pieces of music in all of cinema, a sweeping epic backed by the entrancing wordless vocals of Edda Dell’Orso, and Il Triello, a powerful, tension-building piece which backs the final shoot-out, once again featuring Dell’Orso’s vocals, this time soaring into soprano range.

With the fine mixture of cinematic art, dialogue which never grates and plainly beautiful music, there’s very little to criticise about this film. It is, however, a very long film at 156 minutes (171 for the extended version), and while it sustains its pace over its entire length, it is not a film to put on casually. It must be watched from start to finish as a cohesive package, because this film is an epic in the most traditional sense, twisting and turning enough to keep people excited throughout.

If there was one thing I would criticise, it would be some of the editing decisions made in the extended version of the film. The extended version restores several scenes which were excised from the original cinematic release for American audiences accustomed to shorter films, usually only an hour and a half long. Unfortunately, these scenes are not of uniform quality, ranging from those which add a bit of extra flavour to the film to those probably best left on the cutting-room floor, and as somebody just as accustomed to the cinematic release as to the extended version, I find the extra scenes can occasionally grate.

Despite this minor criticism, there is very little that can be said against this film. Forcefully sweeping aside the puritanism of prior Westerns, it stands as the progenitor of a new sort of Western, one that looks at the Old West as a cynical, blood-stained chapter of American history. Combined with some of the finest cinematic technique ever seen on film, sparkling and imaginative dialogue, and music which fits perfectly and remains utterly memorable more than forty years after its first release, it’s not hard to see why The Good, The Bad and The Ugly is so critically acclaimed, and why I recommend it so highly.

Bottom Line: This film is epic, this film is art, but most of all, this film is enjoyable. There’s no reason why any self-proclaimed cinephile should not have seen this film.

Recommendation: Watch it. DVD boxsets of the “Man With No Name” trilogy go for comparative pennies these days, and along with one of the finest and most artistic films ever made, you get two more films which are enjoyable in their own right and show just why Clint Eastwood’s career was suddenly elevated from lowly television actor to superstar.

On The Gaming PC Upgrade Cycle and The Future

It has been an often-perpetuated myth, with little backing in reality, that personal computer gaming is a ludicrously expensive business, with people having to spend thousands of dollars every six months to stay on top of the curve. This myth contains a lot of exaggeration; personal computer gaming is expensive, but upgrade cycles come every two to four years for most computer gamers, depending on their tolerance for lower resolutions, and the “sweet spot” for computer design right now is somewhere around the €750 mark, with a €500 budget yielding a still-reasonable machine.

Indeed, over the last three years or so, desktop computer design has reached a point where most computers on the market with any sort of discrete graphics card can play the majority of modern games acceptably. Games consoles are a major contributor to this situation, especially with Sony’s insistence that the PlayStation 3 will last for a decade (perhaps as a budget option beside the PlayStation 4, but they’re dreaming if they think that it’ll last as a vanguard in the console war).

Consoles are considerably less powerful than gaming computers. The graphics they generate are fairly impressive, but they’re not rendering most games at 1080p – the graphics are upscaled for higher-resolution displays. As consoles have become the predominant platform for graphics-intensive games, the graphical quality of multi-platform games is limited by the lowest common denominator, the Xbox 360. Because it doesn’t seem to pay to push a computer to its limits, the most graphically-intensive game in common knowledge, Crysis, dates back to 2007.

All of this has made me consider the gaming computer in this context. If consoles with hardware that would have been considered mid-range in the PC market at their release are going to limit graphical quality on personal computers, there’s little point in spending huge amounts of money on a ridiculously powerful monolith of a machine. An AMD (previously ATI) Radeon HD 5970, the most powerful graphics card available, really requires three monitors, preferably at 2560×1600, to demonstrate its power properly. This isn’t performance to turn one’s nose up at, but not many people have €8,000 or so to spend on a computer and three monitors simply to get the best out of games which already look impressive at less demanding resolutions. Even my own machine, with a Radeon HD 4890, is overkill for the native 1280×1024 resolution of my monitor.
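
The arithmetic behind that claim is straightforward – a quick sketch of the rendering workload in each case:

```python
# Pixels rendered per frame: triple 2560x1600 monitors versus a single
# 1280x1024 display.
triple = 3 * 2560 * 1600
single = 1280 * 1024

print(f"Triple 2560x1600: {triple:,} pixels per frame")
print(f"Single 1280x1024: {single:,} pixels per frame")
print(f"Ratio: {triple / single:.1f}x")
# ~9.4x the rendering work -- which is why a card sized for the former
# is overkill for the latter.
```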

That’s a pretty obvious conclusion, but there are still elements of the gaming PC which don’t make much sense in context. Whenever people ask for recommendations on the specifications their gaming PC designs should follow, there will usually be a big discussion about the power supply unit. There’s a good reason for this – the PSU is one of the components of a personal computer most likely to fail, and I advise never going cheap on it – but a lot of the suggestions on the internet recommend a 750 watt power supply.

I’ve started to wonder of late why a personal computer should need 750 watts to sustain itself – enough to light a house full of incandescent bulbs, and several houses running energy-saver bulbs. Strictly speaking, a PSU’s rating refers to its DC output rather than its wall draw, but even reading that 750 watts as wall draw and assuming 80% efficiency at maximum load, that would still be 600 watts demanded by the internals of the computer.
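
Either way you read the rating, the numbers involved are substantial – a quick sketch of the output-rating interpretation:

```python
# Wall draw versus DC output for a power supply at a given efficiency.
# The 80% figure is the assumption used above; a PSU's wattage rating
# nominally describes its DC output.
RATED_OUTPUT_W = 750
EFFICIENCY = 0.80

wall_draw_w = RATED_OUTPUT_W / EFFICIENCY
print(f"Delivering {RATED_OUTPUT_W} W to the components draws "
      f"{wall_draw_w:.0f} W from the wall")
# ~940 W at full load -- and a machine that never gets near 750 W of
# component draw is paying for headroom it will never use.
```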

Graphics cards are a major culprit in this scenario. As the power of a graphics card increases, the amount of power it requires will also increase, sometimes to slightly absurd levels. With the market for expensive, high-end GPUs closing up as specifications for games stay relatively level, perhaps it’s time for graphics card manufacturers and developers to start considering how to increase the power efficiency of their products.

AMD has taken a small step towards this goal with its Radeon HD 5xxx series of cards, with the 5770 producing about as much graphical potential as the previous-generation Radeon HD 4870, but with less demand for electrical power. Yet a discrete graphics processing unit still draws a lot of power even at idle, sometimes in the region of 100 watts. My HD 4890 would sit mostly unused if I didn’t run an instance of Folding@home on it in the background while doing less intensive tasks, drawing power merely to render Windows 7 Aero effects. Perhaps it’s time to work on graphics cards with separate units for lighter and heavier graphical workloads, with the ability to switch automatically between them. This capacity has been demonstrated on laptops, where power consumption is a big deal, but it needs to be demonstrated on desktops too.

The recent promotion of SLI and CrossFire by the two main graphics card developers has caused another problem. Yes, I understand that NVIDIA and AMD need to get rid of their stocks of mediocre graphics cards somehow. That doesn’t excuse them from trying to dump this technology on us. If they want to sell dual-graphics-card systems, at least make it so that the end-user gets graphical potential commensurate with the electrical power consumed. Fewer dual-graphics-card systems, fewer high-end graphics units which serve more to show off the potential of the company than to do any real work, and more decent, power-efficient mid-range components which make financial sense, please.