Probing The Inaccuracies: Modern Infantry Combat, Part 2

Dual-Wielded Pistols – The Sole Preserve of Action Stars and Blithering Morons: I suppose, if I shut off a small section of my brain, that I could just about make out the reason why the dual-wielding of pistols is so common within the action genre. With the limited knowledge of firearms that most people acquire over their lifetimes, it would be easy to suppose that if a single pistol allowed you a certain amount of firepower, a second pistol would allow for twice that firepower. There are significant failings with this reasoning, though. Even in the circumstance where you’re putting twice the number of rounds downrange, which isn’t always the case, there’s little reason to do so when you can’t put the bullets on target.

Firing one pistol effectively takes a lot of training, more than a rifle or sub-machine gun. A pistol has a short barrel, and apart from a few select models, lacks a stock to brace the gun against a shoulder. To “brace” a pistol, one must therefore use an awkward grip to hold the gun in two hands, and all of the aforementioned factors contribute to the low effective range of a pistol. In the hands of a trained soldier in a good stance, pistols are only expected to be accurate within 30 to 50 metres, compared to double that for a braced sub-machine gun, and in excess of 300 metres for a rifle.

The ammunition of pistols doesn’t help much either. The 9x19mm Parabellum rounds that are standard in NATO countries have a low muzzle velocity, which leads to low armour penetration and considerably lower kinetic energy than rifle rounds. The low muzzle velocity likewise means low momentum, a more curved trajectory and a shorter point-blank range than a more suitable military weapon. A pistol is therefore carried primarily by officers, rear-echelon personnel and special forces units as a backup weapon, one that is typically used only when more effective weapons cannot be found, and one that is difficult to use on its own. As the truism goes: “A pistol is a weapon you use to fight your way to your rifle.”
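
The gap is easy to put in numbers. Here’s a minimal sketch, assuming nominal round figures for a typical 9×19mm load and a 5.56×45mm NATO rifle load (actual ballistics vary by loading):

```python
# Muzzle energy for nominal pistol vs. rifle loads.
# Figures are assumed round numbers, not official ballistics data.

def kinetic_energy(mass_kg: float, velocity_ms: float) -> float:
    """Kinetic energy in joules: E = 1/2 * m * v^2."""
    return 0.5 * mass_kg * velocity_ms ** 2

# 9x19mm Parabellum: ~8.0 g bullet at ~360 m/s
pistol_energy = kinetic_energy(0.008, 360)
# 5.56x45mm NATO: ~4.0 g bullet at ~940 m/s
rifle_energy = kinetic_energy(0.004, 940)

print(f"9mm Parabellum: {pistol_energy:.0f} J")
print(f"5.56mm NATO:    {rifle_energy:.0f} J")
print(f"The rifle round carries ~{rifle_energy / pistol_energy:.1f}x the energy")
```

Note that the rifle bullet is half the weight of the pistol bullet, yet because the velocity term is squared, it delivers well over three times the energy, and the flatter trajectory follows directly from that velocity.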

If one pistol is difficult to use effectively past very short ranges, adding another leaves you with two weapons that can’t be used effectively, with an atrocious effective range, utterly useless on the battlefield and mostly useless even in close quarters. When using two pistols in tandem, one cannot brace either pistol, leaving each hand susceptible to muscular tremors. While single-handed pistol stances were common during most of history, this had more to do with the pistol’s use as a cavalry weapon, where it was useful to have a hand free for the reins. When pistols became a military backup weapon after the Second World War, it didn’t take long for the two-handed Weaver stance to be formalised. It has since been joined by the isosceles stance, a more natural two-handed pistol stance.

Another problem caused by dual-wielding is the inability to get a proper sight picture through either pistol’s sights. This means you have little way of knowing whether you’re actually pointing the barrel at the target or at something a metre or two to either side. Even at very close ranges, this can lead to a significant miss, and more hazardously, to shooting something – or someone – well away from your target. Not a good plan.

I did mention that dual-wielding doesn’t always lead to twice the rounds going downrange, and this is more significant the larger the rounds being fired are. Even light rounds like the 9mm Parabellum produce enough recoil to shake the guns wildly off target, which means that you constantly have to coordinate your movements to make sure that the pistol is on target. Coordinating movements for a single pistol isn’t too bad; doing it effectively for two pistols is extremely difficult. Trying to multitask with guns is a bad idea at the best of times, but even more so when you’re actually slowing yourself down because you’re trying to do something grossly inappropriate.

Just about the only advantage that dual-wielding seems to offer is double the ready ammunition, but even that comes with a massive disadvantage. Once you’ve emptied the pistols, you’re going to have to reload, and reloading two guns takes more than double the time of reloading one. Without a hand free, you’re going to be fumbling around with the ammunition, trying to find a place to put one of your guns while you load the other. You could put it in your holster, but then you’ve got to take it out again when you’re done. You could also put it under your arm, but that’s an incredibly uncomfortable and limiting place for it, and makes tucking away any gun that isn’t empty a rather dangerous prospect. If you wanted extra ammunition, wouldn’t it be a far more sensible idea to carry a single machine-pistol or sub-machine gun?

The inaccuracy of showing people using two pistols at once tends, to be fair, to be limited to the more absurd movies, but it has shown up at times in computer games in which it doesn’t belong. Counter-Strike comes to mind; there’s no rational reason for a terrorist to use two pistols, and while the game does place limitations on its dual-wielded pistols, their presence is still very much incongruous. Call of Duty: Modern Warfare 2 is an even more egregious offender, with not only dual-wielded pistols but dual-wielded Desert Eagles and shotguns, all in a game series which trades on its ostensible realism.

Unfortunately, I seem to be fighting a losing war against a sea of fans of utter absurdities and inefficient combat techniques. Never mind that a soldier staring down the barrel of an assault rifle, a sub-machine gun or a combat shotgun is far more imposing than a man holding two pistols in a method that can be proven in seconds to be inefficient. All I can say is that I’m waiting for a movie where somebody goes into a slow-motion jump with two pistols in his hands, and is promptly and unspectacularly taken care of with a single shot from an uninterested soldier with an assault rifle.

That’ll teach you lot to admire dual-wielding so much.


Probing The Inaccuracies: Modern Infantry Combat, Part 1

Editor’s Note: This is a somewhat-updated version of one of the sections of Probing The Inaccuracies: Modern Infantry Combat. I intend to rewrite certain sections of this over the coming months.

The Desert Eagle: Oversized, Overpowered, Overweight, Inexplicably Popular: The Desert Eagle, a weapon designed by Magnum Research in the United States and produced by Israel Weapon Industries (formerly Israel Military Industries), is a large-calibre pistol using a gas-operated mechanism more commonly found in rifles. This mechanism allows the pistol to fire Magnum rounds, more powerful than the calibres normally found in police or military pistols, and gives it great potential stopping power. That power has made the pistol very popular in the media, but the popularity is mostly undeserved – the ability to fire large-calibre rounds has brought with it a significant number of flaws, ones which would render the gun worthless in the hands of the military.

The first thing you’ll notice about the gun is its size, a result of its gas-operated mechanism. By using a mechanism more at home in a rifle, the Desert Eagle is rendered much larger than is logical for a military pistol, and with this size comes weight. At almost two kilograms, the Desert Eagle compares very badly with other military pistols, weighing twice as much as most currently-issued service pistols, and about 600 grams more than the Colt M1911.

This weight starts to look somewhat more reasonable when compared to other firearms firing .50 calibre rounds, such as the Smith & Wesson Model 500, which fires the .500 S&W Magnum round. However, there are still several caveats in comparing these guns – the .500 S&W Magnum round hasn’t been adopted by the military either, and is considered useful only as a backup weapon when hunting, for taking down bears rather than humans. One must also consider that the weight of the Desert Eagle doesn’t decrease significantly when firing more reasonably-sized cartridges, such as the .357 Magnum or the .44 Magnum, and that its weight does not compare at all favourably with other pistols firing these rounds.

Even if the weight issue isn’t supposed to be important, one must consider the heavy recoil of the .50 Action Express, and to a lesser extent, the .44 Magnum and .357 Magnum rounds. Heavy training is required to fire a pistol effectively even with smaller 9mm rounds, and the heavier recoil of the Magnum rounds only increases that difficulty. Considering that all of that time spent trying to compensate for the heavy recoil of a pistol firing Magnum rounds could be used training on a weapon more useful on the battlefield, it seems unreasonable to give a soldier a sidearm with a calibre much larger than .45 ACP, which has moderately high recoil by itself.

Recoil apparently creates other problems in the Desert Eagle design. The Desert Eagle is reportedly not particularly tolerant of limp-wristing, the tendency to hold a gun loosely in anticipation of its recoil. Without a proper, firm grip, the action fails to cycle correctly, jamming the gun and leaving it an even more useless lump of metal than it is when firing properly. All in all, not a great trait to have in a sidearm that’s supposed to be dependable.

Even getting past the disadvantages of weight and recoil, the Desert Eagle still has a significant disadvantage which renders it a sub-par choice for military use. The magazine capacity of the gun, despite its size, is woefully small, with magazines of only nine rounds for the .357 Magnum, and seven rounds for the .44 Magnum and .50 Action Express variants. Compared to the 13-round magazine capacity of the 9mm Browning High Power, and the 17-round capacity of the more modern 9mm Glock 17, it seems ridiculously small, and even though it compares favourably with the Colt M1911, one must remember that the M1911 was developed about seventy years before the Desert Eagle, and is a more reasonable military pistol to boot.
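
One way to see how badly the Desert Eagle fares is to look at how much gun you’re carrying per ready round. The weights below are approximate published empty weights and standard magazine capacities, assumed here purely for illustration:

```python
# Ready-round capacity versus carry weight for a few service pistols.
# Weights are approximate empty weights; treat all figures as illustrative.

pistols = {
    # name: (approx. weight in kg, magazine capacity)
    "Desert Eagle (.50 AE)": (2.0, 7),
    "Colt M1911 (.45 ACP)":  (1.1, 7),
    "Browning Hi-Power":     (0.9, 13),
    "Glock 17":              (0.63, 17),
}

for name, (weight_kg, capacity) in pistols.items():
    grams_per_round = weight_kg * 1000 / capacity
    print(f"{name:24s} {weight_kg:4.2f} kg, {capacity:2d} rounds "
          f"-> {grams_per_round:4.0f} g of gun per ready round")
```

On these assumed figures, the Desert Eagle costs you close to 290 grams of gun for every round in the magazine, compared with under 40 grams for a Glock 17 – a sevenfold difference before ammunition weight even enters the picture.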

The Desert Eagle doesn’t just look unreasonable compared to proper military pistols, but is also a poor choice when compared to a sub-machine gun. While the Desert Eagle is lighter than almost all current military sub-machine guns and fires larger rounds than all of them, one must bear in mind that a sub-machine gun requires less training to be effective, is more accurate with that lesser amount of training, and, with automatic fire, can get more lead to the target more quickly than the pistol.

Despite all of these disadvantages, some computer games and movies put these ineffective weapons into the hands of people meant to represent trained military personnel, who would probably be the last ones to want that paperweight hanging at their side. I point to Counter-Strike, where the Desert Eagle is a popular choice among the people who don’t realise what foolishness it would actually be to use such a weapon in a proper hostage situation.

Representing the single-wielded Desert Eagle in that way in a computer game or movie is bad enough, but somehow, there are people who just have to make it worse. Enter the dual-wielded Desert Eagle. Call of Duty: Modern Warfare 2 made the illogical decision to allow people to do this. Again, you’re playing badass military types in the game, and yet it still doesn’t make a lick of sense for them to be using a gun usually chambered in an expensive proprietary round of ammunition, or to be dual-wielding pistols when they have far more effective rifles and sub-machine guns at their disposal. Of course, this isn’t the most irritating feature of the game, which would be the dual-wielded shotguns (N.B. When Arnold Schwarzenegger single-handedly wielded a lever-action Winchester Model 1887 in Terminator 2: Judgment Day, he was firing blanks), but it still blunts the credibility of a game at least purporting to have some elements of gritty realism.

Probing The Inaccuracies: Nuclear Power (UPDATED 21/04/2011)

Since its inception in the 1940s and 1950s, nuclear power has been looked upon with suspicion, and for more than thirty years it has been perceived by some as fundamentally unsafe. Between the public perception of nuclear power’s ostensible lack of safety generated in the late 1970s and 1980s, and the connections to nuclear weaponry that have plagued nuclear power since its invention, what was once one of the most promising sources of electrical generation has stagnated and even gone into decline. Politicians, many afraid of what actively supporting nuclear power would do to their careers, refuse to allow the nuclear power grid to expand.

Unfortunately for those people who would protest against the continuing use of nuclear power, we’re left with a quandary. Oil prices have climbed during the last decade. The most substantial sources of oil we can cheaply access exist in states which do not have the most favourable outlook on the West. What’s more, oil is a valuable feedstock for chemical production, generating products which are immensely important for agricultural and industrial purposes.

Those sources of energy production which we would seek to replace hydrocarbon fuels with, such as wind-driven turbines and solar panels, come with their own problems. Their output is ultimately limited by uncontrollable external factors, and their ability to produce a useful baseload capacity is questionable. Such sources of energy also currently require considerably more land per watt than either hydrocarbon or nuclear plants.

We have reached a time when we seriously need to give consideration to what path we will take with regards to energy production. It is very likely that we will require every alternative to oil that we can manage, and among those energy sources, we need one which will be able to produce dependable, predictable amounts of electrical power, one that isn’t hamstrung by external factors. Aside from hydrocarbon power stations, which we have already dismissed as a sensible option for the future because of the limited supply of fuel and their capacity for environmental damage, nuclear power is perhaps the most sensible of the remaining options.

Yet, confronted by this impending scenario, there remains strong opposition to nuclear power. Some opponents have legitimate concerns, the disposal of nuclear waste among them, while others simply use pure misconceptions to argue their case. I intend to address some of these misconceptions and help prove the case for nuclear power.

“Nuclear power is inherently unsafe!”

This would be one of the main problems that pro-nuclear campaigners have: explaining away the PR disasters that resulted from the failure of the Three Mile Island nuclear power station in Pennsylvania in 1979 and the famous and tragic disaster at the Chernobyl Nuclear Power Plant in 1986. Somehow, almost everybody, including anti-nuclear campaigners, seems to take it for granted that these two disasters demonstrate an inherent lack of safety in nuclear power generation. Recently, the disastrous consequences of the 2011 Japanese earthquake and tsunami for the Fukushima Daiichi power station have reignited the nuclear safety debate. Unfortunately for the opposition to nuclear power, this most damaging of all misconceptions is also one of the least rooted in actual fact. Statistically, nuclear power is one of the safest sources of energy available1, 2, 3.

What causes this misconception to thrive? One reason could be the aforementioned connection of nuclear power to nuclear weaponry, with some people falsely inferring that a nuclear power plant can somehow explode like a nuclear weapon. A quick look at the principles of both nuclear power generation and fissile nuclear weaponry soon leads to the truth: a nuclear power plant, by virtue of the way it’s constructed, cannot explode like a nuclear weapon.

So, what about power plant failures that can actually happen? They’re still not as common as some people would like to think, and they’re very rarely fatal. There has never been a recorded fatal nuclear accident in France, a country which generates 78% of its power by nuclear generation4, 5. There hasn’t been a fatal nuclear accident in the United States since 19616, and none in the nuclear-powered vessels of the United States Navy, which has been operating nuclear reactors in close proximity to human operators for more than fifty years.

Yet, the public discourse on this subject always seems to gravitate towards the failures at Three Mile Island and Chernobyl. This is perplexing, given that both scenarios have certain factors which make them unsuitable as examples against current-generation nuclear reactors. The disaster at Chernobyl may have been devastating, but it occurred in a vastly outdated reactor without a concrete shell, and only occurred because of gross human negligence. Meanwhile, the Three Mile Island argument would seem to be more in favour of the proponents of nuclear power, given that there was no significant release of radioactive material, that the concrete shell of the reactor did exactly the job that it was intended to do, and that there were no recorded fatalities as a consequence of the incident.

The Chernobyl incident is worth discussing in greater detail. The reactors at the Chernobyl Nuclear Power Plant, which have now been decommissioned, are of the RBMK-1000 type. The RBMK range of nuclear reactors is notoriously crude7, with a design dating back to the 1950s, and is noted as one of the most unsafe and dangerous reactor designs ever put into production. They were made for two main purposes: to be cheap, and to produce plutonium for nuclear weapons. Electrical power was strictly a useful byproduct, and to facilitate the quick retrieval of plutonium, the reactors at Chernobyl were built without concrete shielding, unlike all contemporary American designs, and indeed just about every nuclear reactor made since.

Even with the inherent safety issues of the RBMK reactor design, the Chernobyl disaster only occurred on the scale that it did because of catastrophic incompetence and gross human error8. A decision was made to test the emergency cooling system of the reactor during routine maintenance, specifically by trying to use the residual kinetic energy of the spinning steam turbine to power the cooling pumps. This is not a decision that any trained nuclear engineer should have made, as it left far too much to chance for the tolerances of the already unsafe RBMK reactors. More worryingly, the day-shift workers who had been briefed on the experiment had long since departed by the time the reactor was shut down, and the evening shift was also preparing to leave. The experiment therefore took place during a shift change, leaving the incoming workers with little knowledge of what was about to happen.

Some of the safety failings of the RBMK design have been discussed above, but the reactor had another set of worrying characteristics – it was counter-intuitively and confusingly more dangerous at low power, and the SCRAM, or emergency shut-down procedure, took eighteen seconds to complete, which compares extremely unfavourably to the five seconds of modern reactors.

In order to maintain the unstable chain reaction in the reactor core, the manual control rods had been withdrawn, only exacerbating the problem with the SCRAM procedure. When the emergency shut-down was finally triggered, the graphite tips of the descending control rods displaced the light-water coolant, briefly speeding up the reaction before the absorbing sections could slow it. Expanding fuel rods blocked some of the control-rod channels, reactor power rose sharply, and the resulting power excursion culminated in a steam explosion that wrecked the reactor.

The catastrophic events of this disaster can be blamed directly on two sources: abominable reactor design and poor human decisions. As mentioned above, no contemporary American nuclear reactor would have been built without a sufficiently strong concrete shell, which would have contained the radiation within the reactor, preventing the spread of damage to the wider community. The Chernobyl reactor was also operated primarily by humans – who can easily be demonstrated to be fallible. More recent reactor designs – the so-called Generation III designs, which have begun operation in Japan and are planned for France – are operated primarily by highly redundant arrays of computers, which can act far more quickly than a human at the first sign of an anomaly. The Chernobyl disaster simply couldn’t happen with a modern reactor.

It should also be noted that while the Chernobyl disaster was undoubtedly a horrifying incident and an indictment of the Soviet nuclear power program, the recorded death toll was not as high as some sources would have you believe. In 2005, a report by the Chernobyl Forum, a group headed by the World Health Organisation and made up of members of the International Atomic Energy Agency and several other UN organisations, established that apart from the 57 deaths directly attributable to the disaster, a further 4,000 to 9,000 people were estimated to eventually die of cancers attributable to the release of radiation9.

This figure must be looked at in context. Few people would suggest that even 4,000 deaths is anything but a tragedy, and yet, more people die every year because of particulate air pollution caused by fossil fuel sources than have died in the entire history of nuclear-fuelled energy, a figure not reported upon because they do not occur due to a conspicuous disaster. Hundreds of coal miners die every year, many of them in barely-noticed mine collapses. Hydroelectric power is no less innocent – dam collapses can be lethal, most conspicuously seen in China in 1975, when the Banqiao Dam collapse caused the deaths of an estimated 26,000 people due to flooding and a further 145,000 due to epidemics and famine10, 11, 12.

The fact is that no source of energy production is entirely infallible. Even wind turbines can fail, given certain weather conditions, with dangerous results. Statistically, nuclear power proves to be one of the safest sources of energy production currently available. One could liken it to travelling in an aeroplane; people get nervous because of the conspicuous incidents which have occurred during the history of flight, and yet, it is statistically safer to fly than to drive, to take a train or a boat.

All of this leads to the recent events at Fukushima. While the long-term effects of the disaster are not yet known, a few things are still certain. On the 14th of April, 2011, the Fukushima Daiichi reactor failures were classified by the Japanese nuclear regulation agency as the second-ever INES Level 7 nuclear accident; the first, of course, being Chernobyl. Indeed, the comparisons to Chernobyl have come thick and fast from the media. It is my belief that the Fukushima reactor failures resemble Chernobyl in only two ways – they were both conspicuous nuclear accidents, and Fukushima will likely result in multiple deaths.

Yet, comparing Fukushima to Chernobyl seems to indicate more a limited grasp of the facts than anything approaching merit. Unlike Chernobyl, Fukushima wasn’t caused by gross human negligence – despite the limits of its protection against seismic activity, the power plant had been built to withstand what would normally seem an appropriate magnitude of earthquake. A magnitude-9.0 earthquake is not a common occurrence, and it brings problems that go far beyond nuclear power stations.

Many thousands of people died as a direct consequence of the earthquake and subsequent tsunami, and many more have been left homeless or displaced. Meanwhile, despite the news coverage given to the nuclear power plants at Fukushima, the release of radiation from the plants has been of consequence more to the people directly involved with the power station than to the wider population. Because of the containment structures within the Fukushima power plants, it is unlikely that there will be any long-term restrictions beyond the immediate vicinity of the power stations.

Recent incidents involving terrorism have caused people to think of nuclear safety in another way: a nuclear reactor could be a target for terrorists wishing to destroy it in order to spread radiation, or to steal the byproducts of nuclear fission to create a radiological weapon. The first scenario can easily be debunked, thanks to that same concrete shell protecting the outside world from the effects of any safety breach inside the reactor core. Engineers are not stupid people, and demonstrate a lot of foresight. Many reactors are shielded by more than one layer of concrete, and a containment wall will demolish an aeroplane flown into it with barely a scratch to show for the impact; breaching one would require a quantity of explosives beyond the means of the poorly-funded and not particularly well-organised terrorist organisations which might currently want to target a reactor. The public doesn’t have much to worry about regarding the structural integrity of a nuclear reactor in the case of a prospective terrorist attack.

The second scenario postulated here is a more realistic – and arguably, more worrying – one. The addition of radioactive substances to a conventional explosive could greatly exacerbate the effects of the explosion. However, and this is a point that I will return to later, the quantity of highly radioactive material produced by a reactor is far smaller than many people would believe. Most of the left-over material from fission of uranium-235 is uranium-238, an isotope of uranium which can only be split by fast-moving neutrons, and which does not sustain a chain reaction. This is a substance with a very long half-life – about 4.5 billion years – and a correspondingly weak release of alpha radiation, the least penetrating type of radiation. While this can cause significant cellular damage inside the body, it can easily be stopped by a sheet of paper – or by skin cells. The concentration of more dangerous radioactive isotopes, such as radioactive iodine or plutonium, is considerably less than the concentration of uranium-238. The difficulty of isolating these more dangerous isotopes and of acquiring a large amount of radioactive material makes this scenario less probable than the use of conventional explosives, which are far more readily acquired – or made at home. Nevertheless, despite its unlikelihood, it is a potential hazard, and one which the administrators of nuclear power stations will have to continue to look out for.
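
The link between a long half-life and a weak release of radiation can be checked directly. A quick sketch, using the accepted half-life of uranium-238 and standard physical constants:

```python
import math

# Specific activity A = lambda * N, where lambda = ln(2) / half-life.
AVOGADRO = 6.022e23           # atoms per mole
SECONDS_PER_YEAR = 3.156e7
HALF_LIFE_U238_YEARS = 4.468e9
MOLAR_MASS_U238 = 238.05      # g/mol

decay_constant = math.log(2) / (HALF_LIFE_U238_YEARS * SECONDS_PER_YEAR)
atoms_per_gram = AVOGADRO / MOLAR_MASS_U238
activity_bq_per_gram = decay_constant * atoms_per_gram

print(f"U-238 specific activity: ~{activity_bq_per_gram:.0f} Bq per gram")
```

Roughly twelve thousand decays per second per gram – and weak alpha decays at that – which is tiny next to the activity per gram of the short-lived fission products a dirty-bomb maker would actually want.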

“Nuclear fuel won’t last forever!”

No, it won’t. That’s a correct assumption as long as the nuclear power in use is fission rather than fusion, which relies on hydrogen and helium isotopes that can be found in significant abundance. However, the shortage of nuclear fuel may well be overestimated – there are technologies, not yet explored in full, which use rather more abundant fuels than uranium-235.

Before we explore these other technologies, we must not rule out the potential for a more efficient use of uranium-235. One kilogram of uranium-235 can theoretically produce 80 terajoules of energy, around three million times that of coal13. Even with the incomplete fission found in nuclear reactors, uranium-235 still produces a vast amount of energy per unit mass.
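
That figure is easy to sanity-check. Assuming a typical energy density of around 24 megajoules per kilogram for coal (a round number I’m assuming, not a quoted one):

```python
# Rough energy-density comparison: complete fission of U-235 (80 TJ/kg,
# the figure from the text) versus an assumed ~24 MJ/kg for coal.
u235_j_per_kg = 80e12
coal_j_per_kg = 24e6

ratio = u235_j_per_kg / coal_j_per_kg
print(f"Fissioning 1 kg of U-235 ~= burning {ratio:,.0f} kg of coal")
```

That works out at roughly 3.3 million kilograms of coal per kilogram of uranium-235, consistent with the “three million times” figure.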

This more efficient use of uranium could be facilitated using nuclear breeder technology. A breeder reactor creates more fissile material than it consumes, and could be used to vastly improve the efficiency of nuclear reactions. There are two main proposals for breeder reactors: fast breeder reactors, which would convert the common and previously mentioned uranium-238 isotope to the fissile plutonium-239, and the more economically viable thermal breeder reactors, which convert thorium to uranium-233.
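
The two breeding chains can be written out explicitly, along with the pace at which the thorium chain’s slow step proceeds. The half-life used here is a nominal value taken from standard nuclide tables:

```python
import math

# Fast breeder:    U-238  + n -> U-239  --beta--> Np-239 --beta--> Pu-239
# Thermal breeder: Th-232 + n -> Th-233 --beta--> Pa-233 --beta--> U-233
#
# The slow step in the thorium cycle is protactinium-233, with a
# half-life of roughly 27 days (assumed nominal value).

def fraction_decayed(elapsed_days: float, half_life_days: float) -> float:
    """Fraction of a nuclide that has decayed after a given time."""
    return 1 - math.exp(-math.log(2) * elapsed_days / half_life_days)

PA233_HALF_LIFE_DAYS = 27.0
for days in (27, 54, 90):
    done = fraction_decayed(days, PA233_HALF_LIFE_DAYS)
    print(f"After {days:3d} days, ~{done:.0%} of the bred Pa-233 "
          f"has become fissile U-233")
```

On those assumptions, about ninety per cent of the bred protactinium has become usable uranium-233 within three months, which gives a feel for the timescale on which a thermal breeder generates its own fuel.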

While the conversion of uranium-238 into a fissile isotope is of strong theoretical interest, the thorium fuel cycle demonstrated in thermal breeder reactors is of more economic interest, and I will therefore focus my attention on it. Thorium is an actinide found in nature as the thorium-232 isotope, estimated to exist in three to four times the quantity of uranium, and is also a waste product of the combustion of coal, among other sources. Significant interest in this technology has been demonstrated by India, which is believed to hold some of the largest reserves of thorium in the world, although significant quantities are also believed to exist in the United States and Norway.

The mononuclidic nature of thorium makes it an interesting candidate for nuclear breeding. Unlike uranium-235, which is not found in great abundance within uranium ore, thorium-232 is the single stable isotope of the metal; isotope separation therefore does not need to be performed, allowing thorium to be used with fewer processing steps. The thorium fuel cycle also has a number of other advantages which make it an interesting candidate for the nuclear reactors of the future, among them the greater difficulty of making nuclear weapons from the uranium-233 produced in the cycle.

Without having the significant inefficiencies found in current-generation uranium-based reactors, thorium fuels could last for a significantly longer time than uranium is projected to last, giving a lot more time to develop alternatives such as renewable energy sources more efficient than today’s wind farms and solar panels, along with the Holy Grail of nuclear energy production: Nuclear fusion.

Nuclear fusion generates large quantities of energy through the fusion of deuterium and tritium – isotopes of hydrogen, the former separable from water with some difficulty, the latter bred from lithium – or of helium-3, which is expected to exist in significant abundance on the surface of the Moon. The principle has operated for billions of years inside stars, including our own Sun.

Nuclear fusion can claim some important advantages over nuclear fission: the fuel is unquestionably more abundant than uranium or even thorium, waste products are negligible, and such reactors would be useful in certain futuristic scenarios – interplanetary research stations, colonies and spacecraft – for which renewable sources of energy would not prove sufficiently useful or reliable.

Currently, nuclear fusion is more of an experimental concern than anything we could use practically. Research is ongoing in France14 to demonstrate the potential of nuclear fusion as an energy source for the future. However, fusion requires a certain investment of energy before the nuclei will fuse, and we have not yet been able to generate more energy from fusion than was invested, the most favourable ratio achieved so far being roughly ten units of energy invested for every seven generated.
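
For perspective on why the effort continues, consider the energy density of the deuterium-tritium reaction itself, using the standard figure of 17.6 MeV released per fusion:

```python
# Energy per kilogram of D-T fusion fuel versus U-235 fission.
EV_TO_JOULES = 1.602e-19
ATOMIC_MASS_UNIT_KG = 1.6605e-27

# Each D + T -> He-4 + n reaction releases ~17.6 MeV and consumes
# one deuteron (2 u) plus one triton (3 u).
energy_per_reaction = 17.6e6 * EV_TO_JOULES
fuel_mass_per_reaction = 5 * ATOMIC_MASS_UNIT_KG

dt_j_per_kg = energy_per_reaction / fuel_mass_per_reaction
print(f"D-T fusion:     ~{dt_j_per_kg / 1e12:.0f} TJ per kg of fuel")
print(f"U-235 fission:  ~80 TJ per kg")
```

Per kilogram of fuel, D-T fusion releases around four times the energy of complete uranium-235 fission – before even counting the far greater abundance of the fuel.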

Nevertheless, the technology has some unquestionable benefits for the future, and certainly can’t be ignored. Earth isn’t about to run out of hydrogen any time soon, and fusion power plants have the potential to be incredibly safe and produce low amounts of waste while still providing a useful baseload capacity.

“Pie in the sky schemes make for good copy, but what about nuclear power right now?”

Many criticisms of nuclear power which don’t revolve around the ostensible safety problems revolve around economics. As of today, it costs billions of euro to produce a nuclear reactor, sometimes taking decades to complete.

I don’t believe that this is a problem with the reactors themselves. Instead, I’d level my criticism at red tape and bureaucracy, both generated by the perplexing hatred which a significant minority has towards nuclear power. The safety issues are not a legitimate concern: Most operational reactors have failure margins in the range of one damaged core per plant every 20,000 years15, and modern designs, with their engineering redundancy, have failure margins in the order of once every hundred thousand years – well beyond the lifespan of a nuclear plant.
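Those failure margins are easier to appreciate as a lifetime probability. A minimal sketch, using the per-reactor-year frequencies quoted above and an assumed 40-year service life – the service life is my round number for illustration, not a figure from any safety study:

```python
# Probability of at least one core-damage event over a plant's life,
# given an annual core-damage frequency. The frequencies used below are
# the ones cited in the text: 1 in 20,000 reactor-years for older
# plants, 1 in 100,000 for modern designs.
def lifetime_risk(annual_frequency, years):
    return 1.0 - (1.0 - annual_frequency) ** years

print(round(lifetime_risk(1 / 20_000, 40), 4))   # 0.002 for an older plant
print(round(lifetime_risk(1 / 100_000, 40), 4))  # 0.0004 for a modern design
```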

Other criticisms revolve around the high cost per kilowatt-hour of nuclear production. I’m not sure whether these figures include the cost of building the reactor in the first place, but if they do, removing some of the absolutely unnecessary bureaucratic bother surrounding construction would bring them down considerably. If not, the costs could be appreciably decreased by adopting breeder reactor designs which run on the more abundant thorium.

Some critics of nuclear power seize on the claim that modern nuclear power stations are all monolithic, centralised units, incapable of being decentralised. Try telling that to the US Navy, which has been using nuclear reactors on ships and submarines since 1955. I’m not sure if you’ve ever seen a modern submarine, but despite their size increase over previous generations, they aren’t all that big, especially in comparison to the huge metropolises which some nuclear stations have to power.

In that vein, some companies have been working towards scalable, modular power stations which can be embedded underground and which require little maintenance. Among these is the Toshiba 4S16, 17, a design producing 40MW of power in a footprint little bigger than an electrical sub-station. Reactors like this could make up arrays which would act together in the place of larger, single-core reactors, or else be distributed in such a way as to decentralise the generation of power.
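As a back-of-the-envelope illustration of such an array – the 40MW module rating comes from the text above, while the 1,000MW figure for a large single-core plant is simply an assumed round number:

```python
# How many small modular reactors would it take to match one large
# single-core plant? 40 MW per module is the Toshiba 4S figure cited
# above; 1,000 MW for the large plant is an assumed round number.
large_plant_mw = 1000
module_mw = 40
modules_needed = -(-large_plant_mw // module_mw)  # ceiling division
print(modules_needed)  # 25
```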

Meanwhile, in all of the fuss, people have forgotten some of the major advantages of nuclear power. Nuclear power has very low emissions of carbon dioxide18, considerably lower than any fossil fuel source and, as things currently stand, potentially lower than even wind and solar power19. This is one of those things that makes me rather perplexed at the complete dismissal of nuclear power by environmentalists. While I understand their criticism on the grounds of nuclear waste, I thought that the reduction of carbon dioxide emissions was a fairly big deal. Did priorities suddenly change without me noticing?

“How about the alternatives, about renewable energy?”

I’m not going to suggest that exploring – and using – alternatives to fossil fuels, and perhaps, eventually, nuclear energy, is a bad thing. It’s clear that we need to use everything we can get our hands on to reduce our dependence on oil as a fuel source, and instead save it for chemical production. But just as we can’t dismiss wind and solar energy because we have nuclear energy, we can’t dismiss nuclear energy just because we’re tilting towards renewables.

The problem is that renewable energy sources are currently too limited to use in the capacity of baseload power generators. Wind and solar power rely on specific weather conditions in order to operate at optimal capacity, a limitation not found in nuclear power. While the optimal weather conditions for wind and solar seem to balance themselves out, the reliability of the combined sources is not in the same region as nuclear power.

The sensible thing to do would seem to be to build excess generating capacity and to store some of the surplus energy in a form that can be reused later. One of the most promising methods is the electrolysis of water to generate hydrogen, which can later be fed to a fuel cell, generating energy and re-forming water. This method has the advantage of producing a waste product which can simply be electrolysed again, creating an entirely waste-free cycle.
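As a rough sketch of what such a cycle costs in energy terms – the electrolyser and fuel cell efficiencies below are assumptions for illustration, not measured figures:

```python
# Round-trip efficiency of the hydrogen storage cycle described above:
# surplus electricity -> electrolysis -> hydrogen -> fuel cell -> electricity.
# The 70% and 50% efficiencies are illustrative assumptions, not figures
# from the article.
electrolyser_eff = 0.70
fuel_cell_eff = 0.50

def recovered_energy(surplus_kwh):
    return surplus_kwh * electrolyser_eff * fuel_cell_eff

print(recovered_energy(100.0))  # 35.0 kWh back from 100 kWh of surplus
```

The losses are why this storage makes most sense for energy that would otherwise be wasted entirely.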

While one could build an excess of wind and solar power plants for this purpose, the more sensible option would seem to be to use the excess capacity of a more dependable nuclear power plant for the production of hydrogen and the wind and solar plants for the generation of power-grid electricity. There are a few reasons why we’d want to do it that way around, one of the most prominent being the space requirements for excess solar and wind stations.

Nuclear power plants take up a lot less space per watt than wind turbines or solar panels, both of which occupy a lot of land. Some of this space requirement could hypothetically be mitigated by the use of turbines and solar panels on a residential scale, although the efficiency of such devices may not be sufficient to entirely run a building. Nevertheless, the occupation of a large amount of potentially productive land could make the use of wind and solar power a more dubious prospect.

Tidal power, an alternative which has recently been rejected in Britain by the incumbent Conservative government, is even less efficient in terms of space, and can cause problems with the fishing industry. Hydroelectric power, as mentioned above, can be highly unsafe in the case of catastrophic failure. Geothermal power is only a particularly economically viable option in countries where volcanic vents exist. All of these energy sources can be useful, but few provide the baseload capacity of fossil-fuel or nuclear power, and most are limited by environmental conditions of some sort.

“And what about nuclear waste? Surely, that’s a pretty big deal.”

Yes, it is a pretty big deal. Nuclear waste is pretty much universally considered to be a Bad Thing, with environmentalists worried about the potential harmful effects on nature, and the owners of nuclear power stations concerned about profit margins – as nuclear waste is, after all, representative of wasted energy.

That would be why modern – Generation III, specifically – reactor designs are intended to minimise the amount of waste that they produce. Even more modern designs from the so-called Generation IV set of technologies, or thorium breeder reactors, which consume most of the transuranic elements produced in the course of their reactions, could reduce the waste further.

The storage of the remaining waste has posed a few problems. While nuclear waste could actually be diluted a thousand-fold or more and dumped into the ocean with less negative results than the premise of the scheme would suggest, most people would consider this to be very irresponsible.

In fact, what seems to be the most reasonable option is the storage of waste underground after vitrification, the conversion of the waste into a form of glass. This could be combined with reprocessing in order to retrieve as much of the useful nuclear material as possible – a strategy for some reason not used by the United States, but employed by other major nuclear powers such as France, Russia and Japan.

If the storage of such materials underground still seems irresponsible, consider that certain nuclear isotopes contained within the waste may yet become useful. There’s no telling whether people in the future might develop a sort of nuclear reactor capable of efficiently using depleted uranium.

The people responsible for the safe storage of nuclear waste may seem lackadaisical, but that’s nothing on the level of the people responsible for disposing of coal waste. Research conducted by the Oak Ridge National Laboratory20 in the United States demonstrated the presence of significant quantities of radioisotopes of uranium and thorium in coal slag. You should recognise these two elements respectively as the current primary nuclear fuel source and the promising “super-fuel” discussed earlier.

In fact, these radioisotopes are present in such quantities that there is more energy contained in the uranium and thorium within coal than is liberated by the combustion of the coal itself. People are just throwing that potential nuclear fuel away, willy-nilly! If waste from nuclear reactors were treated in such a blasé fashion, there would be uproar.

Incidentally, the presence of thorium and uranium in coal means that coal-fired power plants generate 100 times the population-effective dose of radiation that nuclear power plants do20. Think about that the next time that somebody rolls out the old NIMBY argument about nuclear reactors – they probably wouldn’t have the same problems living in the vicinity of a coal power plant. Of course, the criticism of nuclear reactors makes it seem less like “not in my back yard” and more like “build absolutely nothing anywhere near anything”.

So far, there isn’t a perfect solution to the problem of nuclear waste. A lot of that is down to the fact that nuclear waste often has the potential to be useful – it’s just that research hasn’t got to the stage where we can economically reuse nuclear waste. I’d be inclined to level some of the responsibility for this slow rate of research at the people holding back the adoption of nuclear power as a whole, who have prevented more modern, more efficient and cleaner reactors from coming into service. However, it’s certainly a problem that can be solved with the judicious application of science and engineering, and certainly not with scaremongering, political horse-trading and Luddism.





FROM THE ARCHIVE: Probing The Inaccuracies – The Automobile

Author’s Note: This article was the second in the Probing The Inaccuracies series, first written in November 2009.

More Power Doesn’t Always Mean More Speed

So, you want to make your car go faster – it would be a good idea to jack up the power, right?

Not necessarily. There are far more factors in play than the amount of power that you’re producing. The amount of torque is more important than the raw power, and that’s before you get to things like weight, transmission, suspension, chassis design, aerodynamics, et cetera.

The power generated by a car’s engine is a function of its torque and its revolution speed. An engine with either low torque or a low revolution speed isn’t going to generate much power, while a high-revving engine in the vein of the Honda Civic’s, or a high-torque engine such as the Dodge Viper’s, will produce much more.
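The relationship can be made concrete. In imperial units, horsepower works out as torque (in lb-ft) multiplied by engine speed (in rpm), divided by the constant 5,252. The torque and rpm figures below are illustrative, not manufacturer specifications:

```python
# Power is torque times rotational speed. In imperial units:
# horsepower = torque (lb-ft) * rpm / 5252.
def horsepower(torque_lbft, rpm):
    return torque_lbft * rpm / 5252

# A high-revving, low-torque engine and a low-revving, high-torque
# engine can both make serious power (illustrative figures).
print(round(horsepower(130, 8000)))  # a Civic-style engine: 198 hp
print(round(horsepower(560, 4600)))  # a Viper-style engine: 490 hp
```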

But there’s no point producing a huge amount of power if you can’t transmit it to the road. The world is full of car designs by back-shed mechanics and huge car companies alike which produce ridiculously high amounts of power and torque, yet can’t take advantage of it except when the pedals are under the feet of trained professionals. You see, in order to use that torque effectively, you have to use a sufficient transmission.

Some of the most famous cases of a car having far too much torque for its transmission to handle come from the AMG division at Mercedes-Benz, famous for its factory-modified sports models. Their most powerful cars use twin-turbocharged 6.0L V12 engines, which produce so much torque that not only has AMG had to retain a five-speed transmission – their newer seven-speed automatic can’t support the torque – but they’ve had to electronically limit the torque to rein in its tyre-shredding might.

An excess of power was also experienced with the prototype engine of the TVR Cerbera Speed 12. The Speed 12 used a 7.7L V12 developed by TVR for motorsport, produced by joining two of TVR’s straight-six engines on a common crankshaft. The motorsport engine was limited by the addition of air restrictors, and when it came to trying to produce a road-legal car with the engine, TVR removed the air restrictors and attached the unrestricted engine to a dynamometer rated for 1,000 horsepower. The engine produced so much power that it broke the shaft of the dynamometer, and TVR estimated that it had produced 940 horsepower. Undeterred, Peter Wheeler, then owner of TVR, took out a prototype car with the Speed 12 engine and declared it far too powerful for road use. It’s perhaps useful to note that this is a man who regularly competed in the Tuscan Challenge and who didn’t put airbags or ABS into his cars because he didn’t trust them, a man who owned a company whose cars had attained a reputation for ferocity and insanity – and that was what it took for him to relent. A single Cerbera Speed 12 was later sold, with the engine detuned to 800 horsepower.

Let’s say you’ve fitted your ridiculously powerful car with a transmission capable of smoothly taking the strain. Surely, you’ll be able to go fast now? Yes, but within a very narrow context. Those insanely modified 1,000 horsepower Skylines that I’m sure you’ll have seen on the covers of modified-car magazines are only fast in one direction: In a straight line. In modifying these cars for outright power, their owners have sacrificed the ability to use them effectively around a track. I don’t really care if your tuned Skyline has more power than some trains if you can’t use it effectively for anything but drag racing.

When it comes to circuit racing or road use, there are still far more pressing issues which dictate whether your car will be effective, and weight is chief among them. You see, every extra kilogram of weight on your car is an extra kilogram that the engine will have to move, and an extra kilogram that the brakes will eventually have to stop. Racing cars rarely weigh more than a tonne, and commonly much less. I’ll address this issue regarding modified motors later on, but when it comes to a racing car, the lowest possible weight is imperative.

A properly-sprung suspension is just as imperative. American car manufacturers have long been developing their fast cars for long stretches of motorway, as opposed to the European and Japanese approach of tuning them for the track and for twisting country roads. Various American manufacturers are beginning to see the importance of car dynamics – chief among them Chevrolet with the Corvette, tested at the Nürburgring and raced as a grand tourer in the Le Mans Series of endurance races – but for a long time, a less powerful car from the likes of Lotus could easily beat the most powerful muscle cars around a track.

Finally, we have the issue of aerodynamics, and this is one where laypeople often confuse matters. I’ll get into this more closely with my next point, but a car which is shaped like it’s made of Lego obviously isn’t going to go as fast as a car specifically designed for favourable aerodynamic qualities, unless there’s a gross power difference.

Every so often, a car manufacturer manages to combine these qualities to make a powerful car which can go outrageously fast in most conditions. The Koenigsegg CCX is a good example, with about 800 horsepower. The Bugatti Veyron – somewhat surprisingly, considering its protracted and difficult development – is a fabulous example, being the fastest-accelerating production car in the world. However, despite the amazing top speeds and acceleration of these cars, remember something: The Ariel Atom, a car with only 300 horsepower, can almost match them around a racing circuit by virtue of its extremely low weight. The Veyron, despite its huge amount of generated power and torque, isn’t even as astounding a track car as you’d expect – at almost two tonnes, it’s a heavy car, and it isn’t set up for track conditions.

Big Spoilers on Front-Wheel Drive Cars? That’s Just Stupid!

I noted that I’d get back to aerodynamics. This really is an issue which people seem to get wrong far, far too often, and it irritates me greatly.

You’re doing it wrong!

The spoiler is a device which could be described as an analogue to an aeroplane wing, except that instead of generating lift, it’s mounted upside down in order to produce downforce. In a rear-wheel or four-wheel drive car, a spoiler can help to avoid instability in corners by forcing the driving wheels into the road, controlling the rear end of the car as it comes out of a corner and thus reducing excessive oversteer and the possibility of a spin-out.

In a front-wheel drive car, though? Not such a great idea. In a front-wheel drive car, many of the positive benefits of having a spoiler are lost. A front-wheel drive car is inclined to understeer in any case, and adding a spoiler on the back just increases that inclination. Essentially, you’d try to turn a corner on a track and either you’d end up having to slow to a crawl, or else you’d end up unable to turn the corner and just end up crashing into a wall.

That’s not the only problem with misplaced spoilers. Something which few car modifiers take into consideration is the fact that a spoiler adds drag. It doesn’t increase acceleration and it doesn’t increase top speed – in fact, it reduces both. What this means for car modifiers is that they might add a spoiler to their front-wheel drive car and end up slowing it down. Well done, you stupid clots, you’ve just made your car worse.

But maybe I should consider something else as well. You see, there are various limitations on a front-wheel drive car which limit the amount of power that can effectively be transmitted through the front wheels alone – torque steer among them. The front-wheel drive cars that the boy racer community uses are typically lukewarm Japanese hatchbacks with no more than 200 horsepower. They don’t usually go at speeds where the spoiler will actually work effectively, and this just makes me laugh. These fools have decreased the maximum speed of their car with a device that they’ll usually never have any reason to use, and which serves few practical benefits on the car it’s been bolted to.
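The reason speed matters so much here is that aerodynamic downforce grows with the square of speed, F = ½ρv²AC_L. A quick sketch – the wing area and lift coefficient below are assumed illustrative values, not measurements of any real spoiler:

```python
# Aerodynamic downforce scales with the square of speed:
# F = 0.5 * rho * v^2 * A * C_L.
# The wing area (A) and lift coefficient (C_L) are illustrative
# assumptions; rho is the density of air at sea level.
def downforce_newtons(speed_ms, area_m2=0.5, c_l=1.0, rho=1.225):
    return 0.5 * rho * speed_ms**2 * area_m2 * c_l

print(round(downforce_newtons(13.4)))  # ~30 mph: 55 N - negligible
print(round(downforce_newtons(67.0)))  # ~150 mph: 1375 N - now it matters
```

A five-fold increase in speed gives a twenty-five-fold increase in downforce, which is why the same wing that transforms a racing car does nothing on a hatchback pottering around town.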

They’ll never learn.

Big Wheels And Spinning Rims Make A Car Slow

There’s been a popular movement recently to put the most ridiculously gigantic wheels possible onto cars, accelerated by shows like Pimp My Ride. I contend that this popular movement is spread by a series of morons.

There’s a grain of truth in the idea of changing your wheels. Aftermarket wheels are made of various alloys, typically stronger and lighter than the materials used on production cars, and a modern alloy wheel makes a lot more structural sense than the 1960s wire wheel. It’s a pity, then, that the grain of truth appears to be surrounded by a Sahara of stupidity.

Large wheels add weight to a car, which has already been suggested to be an important factor in producing a fast one. What’s more, it’s unsprung weight as well – weight not supported by the suspension – and therefore, a large set of wheels is going to ruin the driving dynamics of your car. Great work, dolts. Your massive wheels have just slowed your car down again.

What’s worse than the overly-large wheel – which at least makes some sense on a car like a Rolls-Royce Phantom, although if I see people trying to bling up that specific car any more, I will erupt with rage at their corruption of the core values of a Rolls-Royce – is the addition to a wheel of something in the vein of a spinning rim. This is probably the ultimate and most tasteless variant of the “form over function” principles which seem to guide many of the more clueless car modifiers. These devices have absolutely no practical benefits, adding unsprung weight which can’t even be justified, and frankly, I’d rather have a more subtly designed car which can actually field the performance to back up its looks.

This isn’t just a problem that exists with wheels – the stereo system is one of the most popular components to modify. Now, there’s nothing wrong at all with wanting a good stereo system in a car – within reason. But these car modifiers are rarely reasonable, and their stereo system layouts indicate this perfectly.

First of all, it’s hopelessly stupid to load a car which is ostensibly designed to be fast with a whole load of heavy electronic components. Again, every kilogram that you put into your car is one more that the engine has to pull along, and the weight of a stereo system isn’t going to improve the performance of your car in any way. Secondly, it is completely unreasonable to have a stereo system loud enough to be heard from the other side of the country. As soon as you can hear what a driver is listening to while they have the windows shut, the music is almost certainly too loud. Now, I wouldn’t mind this so much, but boy racerdom seems to come with a liking for the most horrific music ever devised by the human mind. When I’m trying to sleep, concentrate on driving, or else listen to my own music, I don’t want to be interrupted by the constant repetition of whatever “oonts, oonts, oonts” crap these tasteless individuals seem to think is appropriate.

Facepalm time, methinks.

Drifting Is For Posers And Rally Drivers

Drifting is one of those favourite sports of the car modifier, along with drag racing. Now, strictly speaking, drifting should be taken as more reasonable than just massively turbocharging your Skyline’s RB26DETT engine to the point where you have to reinforce the cylinder heads and just zooming off in a straight line. You need a different type of car for drifting, and you need to keep it under control. You don’t get the big spoilers which are endemic in the car modification scene. Yet, you don’t see most racing drivers drifting during a race. Why? Because it’s completely impractical!

Drifting around corners doesn’t increase your speed around a circuit with modern, downforce-heavy cars – it decreases it. When you’re drifting, you are deliberately allowing the car’s driving wheels to lose their grip, which means that you’re not transmitting the power of the car onto the road. Instead, you’re wasting your energy hopelessly spinning the wheels and, in the process, wearing down your tyres.

Now, there was a time when you might have seen drifting in an automobile race. During the 1950s and 1960s in Formula One, the cars had almost no downforce and rock-hard bias-ply tyres, and so, with plenty of power and precious little grip, they were inclined to oversteer very often. Indeed, the late 1960s were probably the most dangerous time for circuit racing ever, with deaths all too common on the Formula One circuit, and legendary drivers like Jackie Stewart, John Surtees and Denny Hulme attaining their reputations by stepping into fundamentally unsafe cars and giving Death the finger.

However, in the late 1960s, developments by Lotus and other teams in Formula One led to the fielding of the first spoilers, which drastically increased downforce and led to a battle of technical driving, with precision being imperative. (Note: The huge power and rear-central position of a Formula One engine makes it a very useful application of spoilers – unlike the lukewarm hatchbacks you occasionally see it on). Since then, drifting has been considered a waste of time, and some of the most spectacular racing comes from the wet when drivers try to battle the rain to maintain their technical driving in less-than-desirable conditions.

Unfortunately, nobody seems to have told film makers and computer game designers that drifting is a waste of time. It’s absolutely endemic. I know that if I sit down and watch a film like The Fast and the Furious, that I’m going to see people drifting. I know that if I play a game like the Need for Speed games, that drifting is going to be imperative to win.

Some of you will not realise how frustrating it is for somebody who has spent their time watching technical displays of driving on a circuit to suddenly see people throwing their cars around sideways – or to be expected to do it in a driving game. I suppose the next time I see somebody drifting around corners in a computer game, I’ll be inclined to say, “Go straight, not sideways, you stupid clot!”

But then, there are places where drifting actually is useful. It isn’t on the circuit, where precision cornering is the order of the day, but instead on the rally track. The title may have made you think that I was suggesting that rally drivers were just posers – not by any means. They’re extremely talented drivers who can deal with conditions that would frustrate most circuit racers. Now, real life circuit drifters can be exceptionally talented as well, but then, I feel that they’ve wasted their time going sideways instead of conforming to effective driving techniques.

Any non-tarmac rally stage will have very loose particles under wheel, and attempts at circuit-style driving will only result in very slow times. The rally driver does manage to get quicker results from a car by drifting it around corners using techniques like the Scandinavian flick. However, when they get onto tarmac, do they drift around then? No! That would hurt their times in the same way that circuit-style racing hurts their times on dirt or gravel.

Now, you’re not going to hold a game like Mario Kart to any semblance of physical accuracy, so drifting is still acceptable there. Games like Need for Speed and the Ridge Racer series, on the other hand, aren’t so lucky. I would call upon driving game makers to stop this horrible obsession with an inefficient technique, because really, I feel you can have just as much fun with a slightly more realistic model which doesn’t use something which irritates me so much.

Nitrous Oxide Is Not Magic

Here comes yet another device, probably popularised by The Fast and the Furious, that seems to have become popular with the car modifying community, even if most of them never use it. In the media, nitrous oxide is portrayed as some sort of magic device, which sends a car into “win mode”. If this is the mechanical knowledge of computer game designers, I don’t want them tinkering with my car.

Nitrous oxide is an additive, usually carried in a separate tank in the car, which is injected into the engine’s intake; under the heat of combustion it breaks down and releases oxygen, increasing the oxygen level inside the combustion chamber, allowing more fuel to burn and therefore raising the horsepower while it is being injected. It can be effectively used to increase the acceleration of a car. However, it’s far from being magical – it’s absolutely loaded with limitations.

Nitrous oxide most effectively increases the acceleration rate of a car at low gears. There’s absolutely no point, as many games would have you do, in injecting nitrous oxide into your engine when you’re already close to top speed. It would be far more effective to use it when you were just coming onto a straight, but then, arcade driving games don’t rely on the brake pedal very often, do they?

Of course, this would be presuming that nitrous oxide was actually a practical idea for racing – but it’s not. Nitrous oxide may increase horsepower, but that extra power strains a car’s components. I’m still waiting to see a show or a game where somebody puts nitrous oxide into their car with the intention of making it into a winner, and then has the engine let go immediately because it’s been fitted to a car with insufficiently strong components.

Even if you do have a car which can use nitrous oxide effectively, it adds weight to the car, making it more difficult to steer around corners, and then, you can only use it at limited intervals, because extra power around a corner is just going to increase any tendencies to understeer or oversteer.

And this guy’s ruined his engine forever.

There’s another reason why many racing cars don’t use nitrous oxide – because they already have oxygen in their fuel. Several forms of motorsport use methanol fuel, which has a high octane rating and carries oxygen within the fuel molecule itself, unlike standard unleaded petrol, and such cars therefore don’t need ridiculous contrivances like nitrous oxide. In real life, nitrous oxide is usually for losers who can’t build a car properly in the first place, with very limited applications in the world of motorsport.

Unfortunately, this inaccuracy doesn’t look like it’s going to die out any time soon, and it will likely be perpetuated in the short term by the introduction of KERS (Kinetic Energy Recovery System) technology in Formula One. This system allows manufacturers to store kinetic energy from braking in electrical or mechanical systems and release it for a short time each lap, effectively giving themselves an 80 horsepower increase for the duration of the KERS boost. Actually, it’s a very interesting system – precisely because it’s limited by the weight considerations that I talk about above. A car with KERS is going to be heavier than one without the system, and it isn’t a game-breaker either – almost all of the Formula One races of the 2009 season were decisively won by cars without KERS, and the possible re-introduction of the system in 2011 won’t necessarily change that situation.
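For the curious, the 2009 KERS limits are easy to sanity-check: as generally reported, the rules allowed roughly 60kW of extra power for about 6.6 seconds per lap, which is where the 80 horsepower figure comes from.

```python
# 2009-spec KERS as generally reported: roughly 60 kW of extra power,
# usable for about 6.6 seconds per lap.
HP_PER_KW = 1.341  # approximate conversion factor

def kers_boost_hp(power_kw=60):
    return power_kw * HP_PER_KW

def kers_energy_kj(power_kw=60, seconds=6.6):
    return power_kw * seconds  # kW * s = kJ

print(round(kers_boost_hp()))   # 80 hp - the boost quoted above
print(round(kers_energy_kj()))  # ~396 kJ of stored energy per lap
```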

If driving game manufacturers must include some sort of boost system, I’d hope they look at KERS first, making cars with a boost heavier than ones without, instead of perpetuating an inaccuracy about the massive advantages of nitrous oxide. As it stands, the boost isn’t done well in driving games at all.

The Turbocharger Isn’t Magic Either

Unfortunately, nitrous oxide isn’t the only “go faster” technology which people portray inaccurately. The turbocharger isn’t properly understood either.

The turbocharger, short for the now-antiquated term “turbo-supercharger”, is a device consisting of a turbine, driven by the engine’s exhaust gases, coupled to a compressor which forces more air into the engine. The word “turbo” isn’t just a synonym for “fast”, then; it refers to the turbine at the heart of the device.

I expect that many people treat the turbocharger as a “win more” device, basically massively improving a car’s performance. Unlike nitrous oxide, it’s actually practical to put onto a production car, but like nitrous oxide, it’s limited by various restrictions.

A turbocharged engine can be outrageously powerful, producing huge amounts of power from a very small engine. The aforementioned RB26DETT engine in the Skyline GT-R series is a 2.6L twin-turbo straight-six which can produce more than 600 horsepower with a stock engine block, and close to a megawatt (1,300 horsepower!) with specially-reinforced components. The 1980s saw a massive development of turbocharger technology in Formula One, with 1.5L engines producing 1,500 horsepower in qualifying trim. Despite these huge amounts of power, there is one distinctive limitation that a turbocharged car can have which a naturally-aspirated car doesn’t: Turbo lag.

“Turbo lag” is the delay in a turbocharged car between opening the throttle and the boost arriving: At low revs, not enough exhaust gas is flowing through the turbine to spin it up, so the engine responds sluggishly until the turbocharger catches up. Particularly endemic in turbocharged cars in the 1980s, it has improved significantly since then, often through twin-turbo designs, where a smaller turbocharger runs at low revolution speeds and a larger turbocharger takes over at higher speeds. A turbocharged diesel engine will almost always get a benefit from the turbocharger; a turbocharged petrol engine won’t always benefit.

For production cars, improper turbocharging can also lead to unreliability – even engine failure. Trying to fit a massive racing-spec turbocharger to a turbocharged hatchback is not only going to give the car so much lag that you’d have to launch it at full revs, but it’s also likely to make the engine overheat or even let go entirely. Too much turbocharger boost is not safe for an engine, and in a spark-ignition engine, maximum boost is usually limited to about 1.5 bar.

Turbochargers can be useful devices. They’re very useful at some of the higher echelons of motorsport, but they’re not some sort of amazing device that can instantly make cars go faster. Anybody who thinks that they are should gently be directed towards the books on motor vehicle technology before it’s too late.

Petrol-Electric Cars: Not The Future, And (Mostly) Ridiculous

I move on to an issue which is more in line with most people’s normal lives. The hybrid car is one of those new technologies which various car manufacturers are trying to push forward. In this age of ever-decreasing petrol reserves, any efficient alternative to the reciprocating petrol engine would be a step in the right direction. Whatever that alternative happens to be, though, I doubt it’s going to be the petrol-electric hybrid.

My feeling towards the petrol-electric hybrid is that it’s a marketing exercise, a way to make people feel better about the planet without actually having to do anything. The problem is that it doesn’t work that way. The petrol-electric car isn’t substantially more efficient than its petrol-only equivalents, and when practical tests of efficiency are used, it is often less efficient than an equivalent diesel.

A petrol-electric hybrid works on simple principles. Along with a standard, albeit usually low-power, reciprocating petrol engine, it has a secondary electric motor, which propels the car at low speeds, with the petrol engine taking over at higher speeds. The electric system is usually recharged by regenerative braking, a way to convert the kinetic energy of a car into electrical energy, or, when the batteries run out of power, by an alternator powered by the petrol engine.
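The regenerative-braking side of that system is easy to put numbers on. A rough sketch, with an assumed 1300 kg kerb weight and an assumed 60% round-trip recovery efficiency (real figures vary by car and battery state):

```python
# Rough regenerative-braking arithmetic (illustrative figures, not
# manufacturer data). Kinetic energy is E = 1/2 * m * v^2; only a
# fraction of it survives the motor/battery round trip.

def recoverable_energy_kj(mass_kg, speed_kmh, efficiency=0.6):
    v = speed_kmh / 3.6                  # km/h -> m/s
    kinetic_j = 0.5 * mass_kg * v ** 2   # energy shed in braking to a stop
    return efficiency * kinetic_j / 1000.0

# A ~1300 kg hybrid braking to a stop from 50 km/h:
energy = recoverable_energy_kj(1300, 50)   # ~75 kJ back into the battery
```

That’s why hybrids shine in stop-start town driving, where there are many such braking events per mile, and do much less well on the motorway, where there are almost none.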

Such a system is very complex, and with complexity comes weight. Weight, as anyone who remembers the first point will have realised, is one of the enemies of car design, particularly when it comes to speed. So hybrid cars aren’t particularly quick – but then, if you’re driving a hybrid, I don’t suppose speed is your main objective. Weight also hurts fuel economy, which means that at high speeds, where the electric motor can no longer keep up, the hybrid is at a decided disadvantage versus more conventional cars.

So, let’s take a closer look at the fuel economy of the hybrid. A car like the second-generation Toyota Prius records about 65mpg (imperial), according to official UK statistics. That’s pretty good, actually, but it would be a lot more impressive if the Volkswagen Polo Bluemotion (a diesel) didn’t record figures closer to 80mpg. The official statistics don’t tell the full story either – in more practical tests by What Car? magazine in the UK, the Toyota Prius only managed about 50mpg when driven in a normal fashion, which puts it down among a range of larger, more powerful diesel cars, let alone the Polo Bluemotion, which could probably manage 70-75mpg in practical use.
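Imperial mpg figures are easier to compare when converted to a common basis such as litres per 100 km. A quick sketch using the figures quoted above:

```python
# Convert imperial mpg to L/100km for comparison.
# Imperial gallon = 4.54609 L; mile = 1.60934 km.

def imperial_mpg_to_l_per_100km(mpg):
    litres_per_mile = 4.54609 / mpg
    return litres_per_mile / 1.60934 * 100.0

prius_official = imperial_mpg_to_l_per_100km(65)   # ~4.3 L/100km
prius_real     = imperial_mpg_to_l_per_100km(50)   # ~5.7 L/100km
polo_official  = imperial_mpg_to_l_per_100km(80)   # ~3.5 L/100km
```

Put that way, the gap looks even starker: the real-world Prius burns roughly 60% more fuel per kilometre than the Polo’s official figure.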

Petrol-electric cars aren’t particularly good for the environment in other ways, either. Because the Toyota Prius has components sourced from all around the world, and isn’t built in American or European factories, the cars have to be shipped around the world, probably causing more emissions than they’ll ever save by virtue of their hybrid drivetrains.

Then there are those batteries. They’ve got a limited lifespan, somewhere in the region of eight years. This doesn’t hold up well compared to conventional cars. European and Japanese cars in Europe often last for more than a decade – my own car is twelve years old – and certain collectors’ sports cars, like the MG B, can last much longer; many of them are now thirty or more years old. Because it’s not cost-effective to replace the batteries in an eight-year-old hybrid, the whole car will be scrapped, and it is for that reason that a Toyota Prius is considered by some sources to be more damaging to the environment over its lifetime than a Land Rover Discovery.

The Toyota Prius: that environmental consciousness is merely a veneer.

“But my favourite celebrity drives a Toyota Prius,” you might say. Well, obviously, there’s a problem with that sort of reasoning. When it comes to automobiles, celebrities are more often than not very ignorant of how a car actually works. Go and ask somebody like Leonardo DiCaprio how a catalytic converter or a gearbox works.

Well, maybe you’ll have a hard time getting in contact with him. Now, there are celebrities who know a lot about cars – people like Jay Leno and Rowan Atkinson – and a few very talented racers in the set, including the late Steve McQueen and Paul Newman. Do you know what sorts of cars they drive? Big V8 or V12 supercars, not poky little milk floats with silly hybrid drivetrains. The reason so many celebrities drive cars like the Prius and the horrible, abominable, disgusting REVA G-Wiz is precisely that they want to be seen to have done something to save the planet, without ever having to realise that they’d have done a lot more by buying a diesel.

At this point, it might look as if I’ve shrugged off the hybrid altogether, but surprisingly, I haven’t. There is still one hybrid technology that I think might have some practical benefits, and I’m surprised that it wasn’t developed sooner. Replace the petrol engine with a diesel engine and you get the diesel-electric hybrid. It’s more difficult to produce, but suddenly the fuel economy goes from 65mpg to somewhere over 100mpg. That sounds a lot better, doesn’t it?

(Currently, Vauxhall are looking at a concept for a diesel-electric hybrid that can theoretically get 170mpg, but of course, being attached to General Motors doesn’t help the chances of that technology being developed any time soon.)

Flying Cars: Completely Impractical And Not That Clever

I finish with a feature of cars which almost everybody associates with the future, and which is ever-present in science fiction: the flying car, soaring over the traffic jams below. Nobody has ever made a practical, efficient design, and I’m convinced that nobody will make a mass-production flying car with any real practical benefits.

The flying car, unfortunately, has the fundamental weakness of being impractical. A flying vehicle needs to defeat gravity, which means directing a lot of power downwards. We can build hovercraft now, but they have a very limited hover height and are awkward to control. To give a vehicle an effective command of the skies, it’s going to need a lot of lift, and that requires a lot of energy.
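Just how much energy can be estimated from ideal momentum theory, which gives the absolute minimum power to hover as P = sqrt(W³ / (2ρA)), where W is the weight and A the area of the lifting fan or rotor disc. The masses and areas below are my own assumptions for the sake of illustration:

```python
# Momentum-theory lower bound on hover power (idealised: no losses,
# uniform downwash). P = sqrt(W^3 / (2 * rho * A)).
import math

def hover_power_kw(mass_kg, disc_area_m2, rho=1.225, g=9.81):
    weight_n = mass_kg * g
    return math.sqrt(weight_n ** 3 / (2 * rho * disc_area_m2)) / 1000.0

# A 1500 kg "flying car" with a car-sized 4 m^2 of fan area:
p_car = hover_power_kw(1500, 4.0)                  # ~570 kW just to hover
# The same mass under a 10 m helicopter rotor (A ~ 78.5 m^2):
p_heli = hover_power_kw(1500, math.pi * 5.0 ** 2)  # ~130 kW
```

The small fan area forced by a car-shaped vehicle is the killer: squeezing the lifting area down from rotor size to car size more than quadruples the power needed before a single loss is accounted for.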

Once you get your flying car into the air, you’ve got another problem. While cars on the road can rely on friction to stop, flying cars can’t. So, in order to stop, you’ve got to spend substantial amounts of energy propelling the vehicle the other way.

By the time we develop a technology that allows us to have flying cars, we’ll likely have more efficient ways of travelling around anyway. But don’t despair! Not every flying car need become an obsolete idea – those used for racing may remain. For once, I’m going to allow the Rule of Cool to apply to an inaccuracy, and suggest that flying cars used for racing would survive precisely because they are impractical. These flying cars, such as the ones in F-Zero and Wipeout, work on a better level if you suggest that they are driven by extremely talented drivers who know that they’re impractical and dangerous, and who drive them for other people’s entertainment. For once, popular opinion has allowed an inaccuracy to survive – and I welcome it!

Probing The Inaccuracies: Mecha

There’s something about a gigantic bipedal robot that inspires the imagination. Whether it’s the return to personal, one-on-one combat that many mecha-related series seem to explore, or the idea of a huge humanoid machine kicking ass, it’s pretty easy to see the appeal of mecha. It’s also difficult to dispute that they are, in fact, rather awesome.

Unfortunately, they are also completely pointless.

The first question that needs to be asked is, “What exactly is a mecha good for?” Putting aside clearly improbable designs that can freely fly, as found in the Super Robot genre, it would seem that the mecha would be designed as an analogue to the tank – or alternately, to displace the tank completely. This, to me, seems rather improbable, as the limitations of any of the common designs of mecha – bipedal, tripodal, quadrupedal or spider-shape – far outweigh any advantages conferred upon the machine by that design.

To investigate why this is so, we must examine the general form of the mecha in order to determine its typical characteristics.

The Vincent from Code Geass, a series which I feel gets things very, very wrong.

The first, and probably foremost, problem with this design – which appears to be representative of most mecha designs – is its high centre of gravity combined with only two points of contact with the ground. As anybody who has been pushed over before they had a chance to brace themselves will know, this leads to a considerable amount of instability. A mech would require an improbable amount of flexibility and speed of movement to brace itself after an impact, which leaves the design very easy to topple, and thus incapable of taking any sort of hit without being rendered immobile and therefore useless.

Of course, an actual mecha design would be fitted with gyroscopes to prevent it from falling over whenever it moved on any surface that wasn’t completely flat, but there’s only so far that one can go with gyroscopic stabilisation, and gyroscopes add weight to the machine. It really underlines the disadvantages of bipedal movement in anything that isn’t biological; humans are only capable of walking efficiently because our knee joints lock and because we unconsciously maintain our balance with tiny, almost imperceptible movements.

It isn’t just bipedal mecha which suffer from stability problems and a high centre of gravity. Designs with more than two legs may have a more stable base, which largely negates the need for heavy and cumbersome gyroscopes, but they can be knocked over just as easily by a large enough impact. Once a leg is restrained or destroyed, instant stability problems occur: the machine is rendered immobile and will most likely fall over, because unlike a biological organism it cannot redistribute its weight. The vulnerability of their legs leaves these machines exposed to tanks, close-air-support aircraft and even men with portable missile launchers, and as it is difficult to armour the legs of a mech without making its movement cumbersome, mecha would be limited immediately by this weakness.

The AT-AT from Star Wars, a series which may not have been realistic, but which outlined the ease of knocking a big mech over.

This isn’t the only weakness of a design based on legs. Leg movement is a form of reciprocating motion, in which a piece of machinery repeats a back-and-forth (or up-and-down) movement. While this has proven to be the only successful form of ground movement in animals, reciprocating motion is not considered desirable for machinery used for propulsion. In engine design, a reciprocating engine requires far more components and usually wears out more quickly than an engine utilising rotary motion, and attempts at replacing the piston engine in cars, planes and ships have been common ever since the development of the electric motor and the gas turbine.

The gas turbine has displaced the reciprocating engine in all but the smallest aeroplanes since the 1960s, either in the form of the turboprop or the jet engine, while larger ships commonly use turbines instead of more complicated, more difficult-to-maintain piston engines. Only in cars and motorcycles has the piston engine persisted; its superior fuel consumption at that size compared to gas turbines and Wankel engines has allowed it to carve out that niche. However, electric motors produce rotary motion, and with improving battery technology and hydrogen fuel cells, the piston engine will likely be displaced in this market as well.

This has relevance to mecha, because even piston engines convert their reciprocating motion to rotary motion at the crank. If one were to directly connect a mech’s legs to its engine, one would either be converting rotary motion to reciprocating motion, if a gas turbine or an electric motor were used, or reciprocating motion to rotary motion and back to reciprocating motion if a piston engine were used. I hope you can see why that would cause apoplexy in many engineers; you’d essentially be transmitting power through another set of complex components, which adds more places for an already complicated machine to fail. If that doesn’t drive the engineers crazy, it would certainly drive the mechanics who had to work on it to drink.
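The engineers’ objection can be put in one line of arithmetic: efficiencies of chained conversion stages multiply. The stage efficiencies below are assumed round numbers purely for illustration, not measured figures:

```python
# Why extra conversion stages annoy engineers: stage efficiencies multiply,
# so every added stage eats another slice of the power.
# Illustrative stage efficiencies only -- not measured figures.

def overall_efficiency(stages):
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# Engine -> gearbox -> hydraulic pump/actuators driving a leg:
mech_chain = overall_efficiency([0.90, 0.85, 0.80])   # ~0.61
# Engine -> gearbox -> track drive on a tank, one fewer lossy stage:
tank_chain = overall_efficiency([0.90, 0.85])          # ~0.77
```

Even with these generous numbers, the mech wastes roughly a fifth more of its fuel than the tank before a single step is taken – and each stage is also one more thing to break.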

It’s unlikely that a direct mechanical linkage to the engine would be used, for not only the reasons outlined above, but also because it would limit the flexibility of the limbs and leave them as simple, crude metal struts. A far more likely system is hydraulics, similar to the hydraulic arms found on excavators and bulldozers. This would allow leg movement more closely related to that of human legs, but would still be considerably less efficient than the real thing. As discussed above, locking knee joints and the ability to quickly shift one’s balance make human bipedal movement efficient, but what really distinguishes us from any mech capable of the same feat is that human muscle works at the molecular scale, with actin and myosin filaments ratcheting past one another to produce contraction. That scale allows humans and other animals to have impressive strength for their size, using far less energy than a comparable hydraulic system would.

The Cauldron Born from the BattleTech series, a series which at least does things a little better than most mecha series. A little.

Returning to the general form of the mecha: apart from the instability of such a top-heavy design, the height of these machines leads to another obvious disadvantage – they are very noticeable. For something that purports to be an analogue to the tank, that is a significant weakness. Some people seem to forget that tanks are hardly invulnerable themselves; their tracks are targets for even outdated anti-tank launchers, and tank-on-tank combat can see one of them destroyed with a single lucky shot. Tanks therefore try to decrease their profile and the area presented to the enemy by running hull-down, using terrain to cover and disguise themselves. That is not a luxury afforded to mecha.

The weaknesses of mecha versus tanks continue with mobility. By driving its two tracks independently, a tank can turn on its own axis, which is difficult, if not impossible, for a mech. Turning the legs of a mech requires a complex series of components which far outstrips the complexity of comparable tank steering systems. As with the difficulties posed by reciprocating motion, these complex systems are useless for anything except making engineers and mechanics very angry.

Even then, the movement will be awkward, which would be especially dangerous in urban combat. To be fair, tanks are hardly the most appropriate weapon system in that sort of warfare either – they are particularly vulnerable to improvised explosive devices and anti-tank launchers fired from concealment in buildings – but mecha would fare even worse in these environments, struggling either to pursue or to retreat.

Just when you thought that there couldn’t be any more mechanical problems with mecha, physics comes and bites the idea in the arse again. Mecha are typically very large machines, and with increasing size comes an awkward scaling law: scale up every linear dimension of an object by some factor, and its surface area goes up by the square of that factor, but its volume – and therefore its mass – goes up by the cube. While a human male may be on average 70kg, scale that same humanoid shape up to several times human height and the mass increases enormously, such that mecha end up extremely heavy. A small increase in the height of a mech can necessitate far more powerful servo systems and hydraulics, which is expensive not only in energy but also in cost.
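The square-cube law makes the point stark with one worked number – here scaling my assumed 70 kg, 1.75 m human up by a factor of six:

```python
# The square-cube law: scale every linear dimension by k and surface area
# grows by k^2, while volume (and hence mass, at equal density) grows by k^3.

def scaled_mass(base_mass_kg, scale):
    return base_mass_kg * scale ** 3

# A 70 kg, ~1.75 m human scaled to a ~10.5 m mech (k = 6):
mech_mass = scaled_mass(70, 6)   # 15120 kg -- over fifteen tonnes
```

Sextupling the height multiplies the mass by 216, while the cross-sectional area of the legs carrying it all grows only 36-fold – so each square centimetre of leg bears six times the load.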

The excessive weight of these machines causes problems in other ways as well. A heavy machine resting on supports with only a limited surface area in contact with the ground exerts a high pressure on it. Tanks require wide tracks to prevent themselves from sinking into soft ground, but unless a mech had ridiculously wide feet, it would be likely to get stuck very easily in anything softer than concrete or baked soil, and to break up roads in urban terrain. Not particularly useful when you already have mobility problems.
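Ground pressure is just weight divided by contact area. A sketch with assumed figures – a fifteen-tonne mech on two roughly human-proportioned feet against a vehicle of the same weight on wide tracks:

```python
# Ground pressure = weight / contact area. All areas are assumed
# illustrative figures, not measurements of any real vehicle.

def ground_pressure_kpa(mass_kg, contact_area_m2, g=9.81):
    return mass_kg * g / contact_area_m2 / 1000.0

mech = ground_pressure_kpa(15000, 2 * 0.4)   # two feet, 0.4 m^2 each: ~184 kPa
tank = ground_pressure_kpa(15000, 2 * 2.5)   # two tracks, 2.5 m^2 each: ~29 kPa
```

The tracked vehicle presses on the ground about as hard as a person standing on one foot; the mech presses six times harder, which is exactly how you end up knee-deep in a ploughed field.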

Having discussed the weaknesses of mecha design, let me reiterate that I can still accept the inclusion of such machines in certain series, subject to some rules. I think the most important rule is that the series doesn’t take itself too seriously about the realistic use of mecha, unless there is a very good reason for their inclusion.

The second rule is that mecha in a series really have to “belong” – a criticism that I level quite heavily at Code Geass, as I don’t believe that an empire fundamentally deriving from the British would focus their efforts on huge mecha, as there is little in British tradition to suggest a significant interest in such developments. Ultimately, I think that the alternate history angle of this show, which actually could lead to a very interesting setting, is somewhat let down by the inclusion of something that doesn’t really fit. It might be said that I would need to watch the show with a certain amount of suspension of disbelief, but as the other instalments in this series of articles may suggest, that’s something I can’t always do.

On the other hand, BattleTech can be taken as an example of a mech-related series that I do enjoy. The mecha seem to fit the setting better than in some other series, and although there is a significant disparity between their portrayal and designs which would work in real life (insofar as such designs could work at all), the machines at least aren’t portrayed as invulnerable, and their heat-venting problems add a bit of extra tactical depth to the series. I think it’s a good example of how to do mecha correctly without necessarily making them realistic.

Probing The Inaccuracies: Motorsport

I’m a fan of motor vehicles, something which can easily be identified by bringing up the subject in conversation with me. There’s something about roaring engines made up of hundreds of mechanical parts moving synchronously, and the sight of a motor vehicle moving rapidly that inspires me. It should therefore not be entirely surprising that I have recently acquired a taste for motorsport. Unfortunately, motorsport is not necessarily a particularly accessible sport, and I’ve heard quite a few misconceptions about it which I need to address, like:

“Racing is just driving around in circles! I could do that!”

For obvious reasons, this is one of the most commonly repeated sentiments regarding the sport, usually recited by those who have almost no experience with it at all. Unfortunately for them, it is also the most easily debunked misconception, and deploying it just damages whatever credibility they had to make valid complaints.

First and foremost, very few tracks in the world are actual circles; most of them are at least ovals of some sort, and usually road or street circuits. A few circular tracks do exist, including the Nardò high-speed test track in Italy, but these are invariably not used for racing. Indeed, circular tracks make for very unsatisfying racing. Because there are no braking points on a circular track, the cars will eventually just travel at either the highest speed the tyres can manage without slipping, if the track has a relatively small radius, or at their maximum speed, if the track has a large radius.

This removes several of the main dynamics of motor racing and leads to two unsatisfactory conclusions – if one car can maintain a higher speed than the others, it will undoubtedly win, and if the cars are all close enough to keep them in a pack, the only way to get any overtaking is to get into the slipstream of the opponent and hope to slingshot past them. The latter sorts of races are bound to be accident-prone, as demonstrated by superficially similar NASCAR restrictor plate races, where the bunched-up grid regularly leads to multi-car pile-ups.

If this looks like a circle to you, perhaps you need your eyesight checked.

Clearly, this argument isn’t meant literally, though; it’s a way to disparage racing drivers for receiving what the people making the argument perceive as too much credit for an ostensibly easy sport. That argument is easily shot down as well. Motor racing, whether it’s autocross or single-marque racing all the way up to the fastest cars in Formula One and the Indy Racing League, is not easy.

In order to be a successful racing driver, there are several attributes which you must have – ones that don’t necessarily exist in the wider populace. You must be able to control a car or motorcycle at speeds exceeding 100 miles per hour while racing rivals try to get past you. You must be spatially aware and capable of sensing the physical behaviour of the vehicle under all conditions, and do so unconsciously. You must be able to communicate effectively with engineers and mechanics about the technical details of the vehicle and how you wish it to handle. These are not skills found in the majority of the non-racing populace, many of whom seem to think that driving goes no further than turning the steering wheel and operating the pedals and gear stick.

Motor racing is not only mentally difficult, but physically demanding as well. Depending on the characteristics of the car, the first physical difficulty can arise with simply getting the car to turn. Power steering has made it easy for any driver to turn a steering wheel at low speeds, but that ease doesn’t translate directly to high speeds, where momentum, inertia and other physical forces drastically affect how a car handles. Other physical effects on the body include the high G-forces produced by ever-increasing cornering speeds and the inevitable buckets of sweat produced by a racing driver on the edge. As for motorcyclists, the constant, almost imperceptible shifts in body position needed to race a motorcycle put them almost in a league of their own when it comes to physical strength and fitness.

It soon becomes apparent on closer examination that motor racing is a far more difficult sport than most people credit it with being, but the argument persists. Indeed, arguments along this line are often made by fans of one class of motor racing wishing to disparage another, by people who should probably know better. These include:

“NASCAR is just a bunch of people turning left for hundreds of miles. How hard can that be?”

I’m going to be fair right now and admit that I’ve disparaged NASCAR in the past for what I’ve seen as a lack of entertainment value. Most of the racing tracks in NASCAR are oval tracks which are rather far removed from the road and street tracks that my favoured classes of motorsport are usually raced on. But while I may criticise NASCAR for what the racing looks like to a detached spectator, I do know something of what it’s like inside the actual car, and I maintain respect for the drivers who manage to muscle these heavy machines around the track.

For the unfamiliar, NASCAR (National Association for Stock Car Auto Racing) is an American stock car racing series, raced using “silhouette” car models which are ostensibly based on road cars manufactured by Chevrolet, Ford, Dodge and Toyota, but which are really homologated prototype cars based around a standard specification. The tracks used by NASCAR are predominantly anti-clockwise ovals, but two clockwise road tracks are included in the top-echelon Sprint Cup Series. Due to the lack of downforce created by the car body, along with the reasonably close standard of all of the cars, the series exhibits a lot of overtaking, and dozens of lead changes can occur during a single race.

NASCAR gets a lot of criticism, both from domestic and international sources, for being an overly simplistic representation of motor racing – and indeed, for apparently being easy. Let’s get things straight immediately: NASCAR is far from easy, and the cars contribute as much to the difficulty of the sport as anything else. NASCAR was, as its name suggests, originally raced using unmodified road cars, but from the 1960s onwards the cars were increasingly standardised, to avoid the sort of runaway technical escalation with which teams like Lotus and Cooper were dominating Formula One.

Today’s NASCAR Cars of Tomorrow (that’s the official name, by the way) thus bear a reasonable resemblance to the American cars of the 1960s, with cast-iron V8 engines using pushrod valves and carburettors, in contrast to the overhead camshafts, aluminium alloy construction and fuel injection of today’s road cars. One of the most obvious characteristics of NASCAR racing cars is their considerable mass compared to other racing cars, which affects the car’s behaviour in ways that make it far more difficult to drive than it looks from afar. More mass increases inertia, increases momentum and decreases the speed at which the tyres begin to slip in a corner, hurting acceleration, braking and cornering respectively.

Most criticisms of NASCAR centre on the fact that the cars largely turn left during the oval races, and that this is therefore not legitimate motorsport. Firstly, let me counter with a few words of my own: Watkins Glen International; Infineon Raceway. Secondly, turning left in a NASCAR machine at full pace is quite different from turning left at road speeds in a road car, and uses a very different set of skills. NASCAR, unlike series which race predominantly on road or street circuits, rewards consistency and smooth driving rather than abrupt braking and acceleration.

Staying on the racing line at the longer tracks like Indianapolis requires one to take each corner at significant pace, as excessive slowing down will just open up an opportunity for somebody to overtake. Meanwhile, at shorter half-mile tracks, including the notoriously difficult Martinsville Speedway, the turns are much tighter and more closely resemble corners on road circuits than they do corners on longer ovals. Each of these corners needs to be negotiated in a car with several hundred brake horsepower transmitted through the back wheels, which makes it tail-happy under acceleration and rather reluctant to turn under braking.

All of this has to be done with a packed grid regularly consisting of more than forty cars jostling for position, which all comes to unhappy conclusions for many of the drivers when the cars begin to crash. This is a phenomenon which NASCAR is well known for, and while the rate of crashes in NASCAR is often exaggerated, when they do happen, they tend to be big. This is a natural consequence of oval racing using cars which are reluctant to stop; crashes often occur at speeds in excess of 150mph, which means a lot of momentum and kinetic energy.
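The momentum and kinetic energy involved are easy to put numbers on. A sketch, assuming a roughly 1500 kg stock car (an approximate figure, not an official specification):

```python
# Kinetic energy and momentum at NASCAR crash speeds.
# E = 1/2 * m * v^2, p = m * v; 1 mph = 0.44704 m/s.

def crash_figures(mass_kg, speed_mph):
    v = speed_mph * 0.44704              # mph -> m/s
    kinetic_mj = 0.5 * mass_kg * v ** 2 / 1e6
    momentum = mass_kg * v               # kg*m/s
    return kinetic_mj, momentum

# A ~1500 kg car at 150 mph:
ke, p = crash_figures(1500, 150)   # ~3.4 MJ of energy to dissipate
```

Because energy scales with the square of speed, a 150 mph crash carries nine times the energy of a 50 mph road crash in the same car – all of which has to go somewhere, mostly into crumpled bodywork and the wall.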

Apparently, the only reason anybody ever watches NASCAR.

The phenomenon is accentuated at Talladega and Daytona, which are very long tracks with no real braking areas. To prevent deadly accidents, the cars run with restrictor plates over the carburettors to reduce the air and fuel flow into the engine, and thus the power it produces. The cars in these races travel in characteristic bunched packs, where the only way to overtake on track under normal conditions is to use the slipstream of the car in front to reduce your car’s aerodynamic drag and slingshot past. This technique, known as drafting, is used at other NASCAR tracks and in other racing series as well, but it is taken to its extreme at Talladega and Daytona, with cars travelling bumper-to-bumper not only to increase their own speed, but also that of the car in front, in order to pull away from the pack.
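Why the slipstream matters so much comes down to the drag equation: the power eaten by aerodynamic drag grows with the cube of speed, P = ½ρC<sub>d</sub>Av³. The figures below are assumptions for the sketch – a drag area of about 1 m² and a 30% drag reduction in the draft, both plausible orders of magnitude rather than measured values:

```python
# Drag power grows with the cube of speed: P = 1/2 * rho * Cd * A * v^3.
# cd_a (drag area) and the 30% in-draft reduction are assumed figures.

def drag_power_kw(speed_mph, cd_a=1.0, rho=1.225, drag_factor=1.0):
    v = speed_mph * 0.44704          # mph -> m/s
    return 0.5 * rho * cd_a * drag_factor * v ** 3 / 1000.0

lead = drag_power_kw(190)                      # car in clean air: ~375 kW
draft = drag_power_kw(190, drag_factor=0.7)    # car tucked in behind: ~263 kW
```

With a restrictor plate capping everyone’s engine power, that hundred-kilowatt saving is the only meaningful performance difference on the track – which is why the packs form and why nobody can escape them alone.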

This rather specialised type of motorsport, which leads to a certain breed of racing driver who can succeed at it, sometimes goes wrong. As mentioned above, NASCAR machines are rather tail-happy, and don’t take all too well to being pushed off the racing line. One car spinning out of control in a pack bunched up by restrictor plates can lead to a massive crash that can involve more than twenty cars. As those of you who have been in any sort of car crash will know, that sudden stop can hurt – or even kill. When you have dozens of cars flying around you, some of them spinning out of control, it adds another element to the crash, and one that would terrify most road drivers.

NASCAR isn’t the only racing series which is criticised (unfairly) for an ostensible lack of skill needed to succeed at it. The most popular motor racing series in the world, Formula One, is another target for invective, including the following:

“Formula One cars these days just drive themselves! How difficult is that?”

You know, I could almost understand this criticism if it came from people who actually raced Formula One cars in the past. Formula One has evolved from the ultra-dangerous spectacle of the 1960s into a series where safety is paramount and the chassis is specially developed to absorb as much of the brunt of a collision as possible. Considering the balls it took to drive a Formula One car at racing speeds in the 1960s and 1970s, you could almost forgive those drivers for being less impressed with the people racing today. But the Formula One racers of the past aren’t the ones criticising the sport; they have the proper amount of reverence, realising that the things which make Formula One difficult have changed since their day.

Formula One is one of the fastest racing series in the world, with cars reaching speeds in excess of 190mph, and sometimes in excess of 215mph at the likes of Monza and Spa-Francorchamps. While top speeds are higher at the 24 Heures du Mans, the acceleration of a Formula One car is more sudden than that of any other racing car, exceeding 1G off the starting line. Think about the force of gravity pinning you to the ground, and then think about that same force pushing you back into your seat. That is the least significant force felt by a Formula One driver, which should give you an idea of where the difficulties of Formula One lie.

Cornering and braking forces are a lot more significant than the relatively puny forces felt under acceleration, reaching 3 to 5G and occurring several times a lap. Imagine the force of gravity, scale it up five times, and imagine all of that load bearing on your neck. Short of being a fighter pilot, you’re unlikely to experience such forces in everyday life, so let’s just say that it’s a lot more than you or I could reasonably sustain through even one corner, let alone several hundred. The cars’ extraordinarily stiff suspension doesn’t help the drivers either; every bump in the road is transmitted through the chassis and into the driver. Ouch.
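The neck load is simple to estimate: lateral force on the head is its mass times g times the lateral G figure. The 7 kg head-plus-helmet mass below is an assumed round number for the sketch:

```python
# Lateral load on a driver's neck: F = m * g * (lateral G).
# Head-plus-helmet mass of ~7 kg is an assumed figure.

def neck_load_n(head_mass_kg, lateral_g, g=9.81):
    return head_mass_kg * g * lateral_g

road_car = neck_load_n(7, 0.8)   # a brisk road-car corner: ~55 N
f1_car   = neck_load_n(7, 5.0)   # a 5G Formula One corner: ~343 N
```

That 343 N is like hanging a 35 kg weight sideways off your neck, several times a lap, for an hour and a half – hence all the neck training.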

Sometimes this massive force is sustained over several seconds, as at the 130R corner at Suzuka or the long, sweeping anti-clockwise Turn 8 at Istanbul Park. Sustaining these forces for the fraction of a second it takes to pass through most corners is bad enough; making them last any longer demands extraordinary neck muscles just to keep control. Add the twenty-plus cars you’re racing against, and the need to stay concentrated through every corner, and it soon becomes mind-boggling how physically and mentally demanding the sport really is.

Spa-Francorchamps – one of the most difficult and fastest racing circuits in Formula One.

It is not much of a surprise, then, that Formula One drivers are immensely fit, and could post times in athletic events that wouldn’t embarrass them. A lot of a Formula One driver’s training concentrates on the neck, building the strength needed to resist the tremendous forces exerted on the body through every lap.

The difficulties posed by Formula One are not just physical; characteristics of the car conspire to make things difficult as well. Under the current rules, Formula One cars are powered by 2.4L V8 engines producing about 750 brake horsepower at 18,000rpm. Such finely-tuned engines have a very narrow torque band, sitting at the top end of the rev range, and must be kept at high revs to extract the maximum from the engine.

The problem, though, is that pushing 750 brake horsepower through the rear wheels in a corner, where the aerodynamic components aren’t working optimally, invites a lot of wheelspin; the power has to be controlled while still being transmitted to the track as quickly as possible. This demands quick reactions and the ability to countersteer these ferociously difficult cars. Since the removal of traction control, keeping the car under control has been entirely down to the driver, which immediately goes some way towards refuting the criticism that the cars drive themselves.

When you’re in a Formula One car, you have very little room to move, surrounded as you are by the monocoque. This also means limited visibility, which can be reasonably simulated by sitting on the floor and blocking your view of anything below chin level. This restricted view makes it rather difficult to work out whether there are any cars behind or beside you, which is rather troubling when you’re trying to fend off impending overtaking attempts.

As for the technological criticisms of Formula One, these are at least somewhat warranted. Formula One cars are stuffed to the gills with transponders and accelerometers that record every facet of the car’s performance, all in an attempt to shave precious hundredths of a second off lap times. This reliance on technology comes with the sport, and yet it could be said that it detracts from the purity of racing, and makes everything rather expensive. It’s difficult to say where this should start and end, but technological development and innovation have been at the forefront of Formula One since the beginning, and nothing short of strict homologation, as in NASCAR or the IRL, is going to stop the teams from trying anything within the rules to improve speed. Homologation, though, would take away one of the big advantages of motorsport in the real world, which will be discussed below.

Motorsport doesn’t just receive criticism for the driving; it is also regularly criticised by environmentally-conscious people who perceive some sort of wastage in the sport. As I am not a hemp-sandal-wearing hippie, it gives me some pleasure to discuss the misconceptions found in this next point:

“Motorsport is just a massive waste of fuel!”

This is one of the most common criticisms levelled at motorsport, for fairly obvious reasons. There’s a germ of truth in there as well: as the consumption of a limited, non-renewable fuel has proven to be the most practical way to propel a motor vehicle, it is somewhat reasonable to assume that the sport is inherently wasteful and environmentally unfriendly. It may come as a surprise, then, to hear that motorsport was the catalyst for many of the developments, innovations and improvements in car and motorcycle design.

Engineers tend to be, as a rule, people who favour efficient solutions to problems. In motor racing, the chief problem is “How do we make this vehicle go around the set course in the least time?” There are several ways to achieve this, from minor changes in the car’s suspension or gearing, to increasing the power produced by the engine, but the most significant improvements usually come from decreasing the mass of the car. More mass in a car decreases acceleration, braking potential and cornering speeds, which are three very important characteristics in determining how quickly your car or motorcycle will go in all of the different circumstances it may be made to face.

Mass contributes to inertia, momentum and the centripetal force needed to corner, and the relations are such that a vehicle will accelerate, brake and corner more quickly when mass is decreased. Various clever innovations have been aimed at reducing the mass of a racing car, including the monocoque chassis and lightweight materials such as honeycomb aluminium and carbon-fibre. Somewhere along the line, it occurred to racing engineers that one of the things contributing to the mass of a racing car is the fuel itself; if the engine is more frugal, or can produce more power from the same mass of fuel, the car will be quicker around a track.
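For the curious, these are the basic relations the argument leans on (a minimal sketch, not a full vehicle-dynamics treatment):

```latex
a = \frac{F}{m}, \qquad
p = mv, \qquad
F_c = \frac{mv^2}{r}
```

With a fixed engine force \(F\), a lighter car accelerates harder; with less momentum \(p\), the brakes have less to kill; and a lighter car needs less centripetal force \(F_c\) to hold the same line through a corner of radius \(r\). Since a downforce car’s grip does not shrink in proportion to its mass, the lighter car comes out ahead in all three.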

Hence some of the innovations in vehicle design have included overhead camshafts and variable valve timing, along with improved fuel injection and electronic engine management. These innovations improve the frugality of engines, meaning less mass dedicated to fuel and greater distances between fuel stops. Eventually, these systems find their way into mass-produced road cars, making everyday cars better and more efficient in the process.

“That’s all well and good,” you may say at this point, “but how does that excuse the amount of fuel used to develop these systems?” Actually, motor racing consumes a surprisingly small amount of fuel compared to some common methods of transport used every day. A Formula One engine is more efficient for the speeds it produces than a road car’s, even including a sort of alternating V4 mode, in which some of the cylinders are shut off to save fuel at low speeds. Unfortunately, such technologies are too expensive to put into standard road cars, but they demonstrate how far ahead of road-car technology Formula One and other forms of motorsport can be.

If motorsport engines are frugal compared to road cars, they are especially frugal compared to aeroplanes. An entire season of Formula One can use less fuel than a single long-haul 747 flight, and as many of the people reading this will have travelled somewhere by aeroplane, they can hardly complain about a racing series which works to improve the cars driven in everyday life.