Why the Philae lander came at just the right time – a social perspective from a science enthusiast

By now, it has been more than a week since the Philae lander was released from the Rosetta space probe and began its descent to the surface of Comet 67P/Churyumov-Gerasimenko. The landing didn’t go without trouble: it began with the reported failure, before the lander was even released, of the gas thruster meant to help hold it on the surface, and finished with Philae bouncing twice and coming to rest in the shadow of a cliff, greatly reducing the amount of sunlight available to it. Nevertheless, the mission could be regarded as having succeeded in some respects already, even if conditions do not improve with regard to the sunlight falling on Philae; after all, it did retrieve some potentially useful results from its experimental apparatus before running out of battery power.

Frankly, though, as impressive as the science and engineering of Philae are, plenty of words have been written about that aspect long before this post by people far more experienced and talented in those fields than I am. What I want to talk about are some social implications of the fortuitous timing of Philae’s success. Philae’s mission came in the wake of two unfortunate accidents suffered by privately-funded aerospace ventures in the United States: one being the controlled destruction of an Antares rocket, developed by Orbital Sciences and designated to send supplies to the International Space Station, after its launch failed; the other the recent crash of the SpaceShipTwo spacecraft, VSS Enterprise, in the Mojave Desert during testing, an accident which led to the death of one of the pilots. At a time when funding for space exploration is hard to come by, these accidents looked embarrassing at best. Rosetta and Philae were launched on their course ten years ago, but arrived in time to salvage at least one reasonable success for space exploration just as some people have been quick to criticise it, especially those always willing to fight for petty political victories in matters that mean little.

In that vein, another social implication of Rosetta and Philae comes courtesy of their existence as components of a European Space Agency mission. The ESA, funded partly by contributions from each participating government and partly by the European Union, is a demonstration of the effectiveness of European cooperation at a time when several Eurosceptic groups seek to convince us that such cooperation will lead us nowhere. When some of these groups have motivations that are at best questionable, like Ukip, and others look like straight-up crypto-fascists, like France’s Front National, I think any success that shows Europe can work well given sufficient motivation to get things done is useful and desirable. That this happened because a set of scientists and engineers from different countries ignored the call of jingoism and pointless ring-fencing further reinforces my point about these people being willing to fight only for the sake of petty politics when more important things lie at stake. The Rosetta mission – and the ESA in general – shows us the potential and power of cooperation, and should be taken as a good example of what the likes of Ukip and the FN would take away from us if they were to take power in their respective countries.

Creationism is not science

To anybody of a rational, scientific mindset, the title of this article should evoke thoughts somewhat along the lines of, “No shit, Sherlock”. Evolutionary science has underpinned the efforts of biologists for more than a century and a half, providing an observable, tested mechanism for the diversity of species. Through the allied efforts of geneticists, it has given us a stronger grasp of how we can improve artificial selection. Yet, in all of this, small but vocal groups, many situated within the United States, deny evolutionary science. Instead, they wish to implant their own unscientific creationist hypotheses into the education system, subverting the scientific consensus with their theologically-driven political agenda.

Creationism appears to be driven by some sort of offence and insecurity at the idea that humans might have been derived from what creationists see as lower species, or that we might be related in some way to apes and monkeys. Christian creationism, the most vocal kind in the Western world, professes that a creator God designed humans in his own image – although I have to ask whether any creator God would actually want to claim a species with such a variety of known flaws as Homo sapiens as being in his or her image.

The most egregiously and brutally unscientific of the creationist hypotheses is that of Young Earth creationism, a ridiculously bizarre hypothesis that contravenes most of the major branches of natural science, along with many humanities disciplines and a couple of branches of mathematics to boot. Essentially, Young Earth creationism states that the world, in accordance with various calculations on figures given in the Bible, is somewhere in the region of six thousand years old. The recent, controversial debate between Bill Nye and Ken Ham was conducted at the Creation Museum, an establishment which claims Young Earth creationism to be true and accurate.

There are so many things wrong with this that it’s difficult to know where to begin, but how about beginning by stating that there are human settlements and structures which have been dated to more than five thousand years before that? I have a back-dated copy of National Geographic beside me (June 2011, if anybody’s interested in reading it) that discusses the archaeological site of Göbekli Tepe in Turkey, an elaborate and intricately designed religious site estimated to date back to 9600 BC.

That immediately puts a rather inconvenient stumbling block in front of Young Earth creationism, and I haven’t even got to the science yet. Aside from myriad fields of biology – genetics, botany, zoology, biochemistry and more – all of which must be denied in order to claim Young Earth creationism as correct, we have various branches of physics, such as astronomy and the nuclear physics behind radiometric dating, which peg the Earth at somewhere near 4.5 billion years old and the universe at 13.7 billion years old or more.

Not only are creationists willing to deny reams of scientific evidence from fields all over the scientific spectrum, but they’re also willing to try to twist actual science to fit their demands. Among the most absurd arguments for creationism is the idea that evolution somehow violates the Second Law of Thermodynamics – a claim that could only be made by somebody who either doesn’t understand the Second Law of Thermodynamics or who thinks little enough of their audience to believe that the audience won’t understand it.

The Second Law of Thermodynamics, in a paraphrased form, states that the entropy of a closed system tends to increase over time. In more practical terms, it means that heat cannot flow from a cold object to a hot object without external work being applied to the system. The Earth is not a closed system. Heat is transferred between the Earth and its surroundings; heat flows into the Earth’s atmosphere from the Sun, while heat flows out of it via radiation. As for biological organisms, they must and do perform work on their own systems to maintain local order. Much of the energy budget of a human being is expended as heat in order to stave off localised entropy, with the brain being a prime example of this use of energy. None of this works in the way the creationists explain – and their attempted perversion of science in this way demonstrates a ruthless and worrying disregard for the role of observation and experiment as they push their pet hypotheses.
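Put slightly more formally (this is the standard textbook statement, not anything the creationist argument engages with): for any process, the entropy of a system and its surroundings taken together cannot decrease,

```latex
\Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \geq 0
```

so a local decrease in entropy – an organism growing, a brain maintaining order – is entirely permitted as long as it is paid for by a greater increase elsewhere, in this case ultimately the low-grade heat the Earth radiates into space.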

Young Earth creationism is, as a scientific hypothesis, a sad joke with no observable evidence behind it whatsoever and the works of several dozen fields of science and the humanities against it. However, creationism doesn’t stop there, as it has another, more presentable face in the form of so-called “Intelligent Design” – but this face is just as odious from a scientific perspective, since unlike the patently absurd Young Earth hypotheses, Intelligent Design pays lip-service to science while simultaneously ignoring many of its core tenets.

Intelligent Design, just as with any other form of creationism, posits the idea of a creator entity. The word “intelligent” in the name appears to refer to an intelligent entity rather than the design itself being intelligent – for, as I’ve intimated above, it would be pretty difficult to suggest that human anatomy, for example, is particularly intelligent. You know, with the backwards eye where light shines in through the wiring, the hip design which causes labouring mothers to experience a lot of pain, so on, so forth. The hypothesis appears on the surface to provide answers that other forms of creationism simply can’t, such as accounting for the actual, observed microevolution occurring in bacteria at this very moment – probably including some of the bacteria living on the bodies of its readers. Yet Intelligent Design still contravenes scientific consensus – largely because it is not falsifiable.

Falsifiability is a very important concept in science and plays a major role in the scientific method which underpins research in the physical sciences. The scientific method involves a chain of steps, taking the rough form of observation–hypothesis–prediction–experimentation–reproduction, in order to test a hypothesis and attempt to produce observable, testable results which can then be reproduced by other scientists to eliminate any bias or contamination that may have affected the original experimental procedure. A hypothesis with a sufficiently large body of observed evidence for its correctness may then become a theory (a word which has become rather loaded when it comes to reporting science to non-practitioners, often being confused with a hypothesis in the sense described above). The principle of falsifiability plays deep into this process, since for an experiment to be useful, there must be a chance for the hypothesis that it tests to be invalidated by the experiment.

This is not the case with Intelligent Design. An advocate for Intelligent Design could claim, if an experiment were ever undertaken to attempt to disprove the hypothesis, that the experimental conditions were themselves incorrect – whatever those conditions happened to be. As a result, Intelligent Design, just as with any other form of creationism, is of no scientific value, and its teaching in a scientific curriculum would be not merely useless but deleterious to other scientific disciplines.

Unfortunately, creationism is being peddled by a mixture of slick operators who play on a perceived public distrust of science and religiously motivated preachers who decry any perceived attack on their religion – or at least on the way in which they interpret their religion, since evolution does not inherently discount the existence of a god – even when that perceived attack relates to issues which should not have religious motivations behind them anyway.

This isn’t helped by the difficulty scientists face when confronting creationists: by debating them face to face, evolutionary scientists give creationists an air of scientific respectability that their beliefs do not deserve, while those who openly decry creationist teaching are often vocal atheists as well, creating the perception that evolution marches in lockstep with atheism. Yet ignoring creationists might well magnify the erroneous idea of an ivory-tower scientific elite. In my eyes, the best thing to do would be to contest the principles of any school where creationist teachings are being given scientific credence as an alternative or replacement for evolutionary theory, while keeping the vocal attacks on religion away from the subject. I may be an atheist myself, but I see the conflation of evolutionary science with atheism as a problem waiting to happen – the science should come first.

More Raspberry Pi Electronics Experiments – Gertboard and Potentiometer-Controlled LED Cluster

One of the things which alerted me to the potential of the Raspberry Pi as an electronics control system was the announcement of the Gertboard before the Raspberry Pi itself was released onto the market. When the Gertboard went on sale in November 2012, I fully intended to buy one, but a lack of money kept me from doing so at that point. The unassembled Gertboard kits soon sold out, leading element14, who distribute the Gertboard, to release a pre-assembled version. This went on sale very recently, and shortly after release, I bought one from Premier Farnell’s Irish subsidiary.

The Gertboard, for the uninitiated, is an I/O expansion board that plugs into the GPIO header on the Raspberry Pi. Designed by Gert van Loo, designer of the Raspberry Pi alpha hardware, the Gertboard not only protects the GPIO pins from physical and electrical damage, but also provides a set of additional features. These include input/output buffers, a motor controller, an open collector driver, an MCP3002 ADC and MCP4802 DAC and an Atmel ATMega328P microcontroller which is compatible with the Arduino IDE and programming environment.

I was very impressed by the quick response from element14 after my purchase; my delivery came only two days after ordering and would have come even sooner if I hadn’t missed the 20.00 deadline on the day I had ordered it. The Gertboard was packaged with a number of female-to-female jumper wires, a set of jumpers, plastic feet for the board and a CD-ROM with a set of development tools for the ARM Cortex-M platform.


So far, I’ve only had occasion to test the buffered I/O, the ADC and DAC and the microcontroller; I still don’t have parts to test the motor controller or open collector driver. Aside from some documented peculiarities regarding the input buffers when at a floating voltage, including the so-called “proximity sensor” effect, things seem to have been going rather well.

The acquisition of the Gertboard gave me the impetus to really get down to trying to test my own expansions to the simple test circuits I had implemented before. One interesting application that I considered was to use a potentiometer to control a bank of LEDs in order to provide some sort of status indication.

The following Fritzing circuit diagram shows the layout of this circuit without the Gertboard; the onboard LEDs and the row of GPIO pins on the Gertboard make the wiring slightly less messy.

[Fritzing diagram: Potentiometer Controlled LEDs]

In this diagram, GPIO pins 0, 1, 4, 17, 18, 21, 22 and 23 are used to control the LEDs, although you could also use pins 24 or 25 without conflicting with either the SPI bus – which is necessary for the MCP3002 ADC – or the serial UART on pins 14 and 15. However, this is a lot of GPIO pins taken up by one application, which may warrant the use of a shift register or an I2C I/O expander such as the MCP23008 or MCP23017 in order to control more LEDs with fewer pins.

In order to control this circuit, I took the sample Gertboard test software and modified it slightly. As the potentiometer is turned to the right, the ADC value increases to a maximum of 1023; therefore, the spacing between each LED’s activation point should be 1023 divided by 8 – very close to 128. The LEDs light from left to right as the potentiometer is turned, with one LED lit at an ADC reading of 0, two at 128, all the way up to all eight at 1023.

// Gertboard Demo
// SPI (ADC/DAC) control code
// This code is part of the Gertboard test suite
// Copyright (C) Gert Jan van Loo & Myra VanInwegen 2012
// No rights reserved
// You may treat this program as if it was in the public domain

#include <stdio.h>

#include "gb_common.h"
#include "gb_spi.h"

void setup_gpio(void);

int leds[] = {1 << 23, 1 << 22, 1 << 21, 1 << 18, 1 << 17, 1 << 4, 1 << 1,
	      1 << 0};

int main(void)
{
    int r, v, i, chan, nleds;

    do {
        printf("Which channel do you want to test? Type 0 or 1.\n");
        chan = (int) getchar();
        (void) getchar();        /* eat the newline */
    } while (chan != '0' && chan != '1');
    chan -= '0';                 /* convert the typed digit to 0 or 1 */

    printf("When ready, press Enter.");
    (void) getchar();

    setup_io();                  /* map the GPIO peripheral */
    setup_gpio();
    setup_spi();

    for (r = 0; r < 1000000; r++) {
        v = read_adc(chan);
        for (i = 0; i < 8; i++)
            GPIO_CLR0 = leds[i]; /* turn all LEDs off */
        nleds = v / 128 + 1;     /* one LED at 0, two at 128, eight at 1023 */
        for (i = 0; i < nleds; i++)
            GPIO_SET0 = leds[i];
    }

    restore_io();
    return 0;
}

void setup_gpio(void)
{
    /* Set the alternate functions of the SPI bus pins and SPI chip select A */
    INP_GPIO(8);  SET_GPIO_ALT(8, 0);
    INP_GPIO(9);  SET_GPIO_ALT(9, 0);
    INP_GPIO(10); SET_GPIO_ALT(10, 0);
    INP_GPIO(11); SET_GPIO_ALT(11, 0);
    /* Set up the LED GPIO pins as outputs */
    INP_GPIO(23); OUT_GPIO(23);
    INP_GPIO(22); OUT_GPIO(22);
    INP_GPIO(21); OUT_GPIO(21);
    INP_GPIO(18); OUT_GPIO(18);
    INP_GPIO(17); OUT_GPIO(17);
    INP_GPIO(4);  OUT_GPIO(4);
    INP_GPIO(1);  OUT_GPIO(1);
    INP_GPIO(0);  OUT_GPIO(0);
}

A Raspberry Pi Electronics Experiment – TMP36 Temperature Sensor Trials and Failures

I mentioned in my last post that I had received a Raspberry Pi electronics starter kit from SK Pang Electronics as a gift for Christmas, and between studying for exams, I have been experimenting with the components in the kit. Apart from ensuring that the components I received actually work, I still haven’t got much past the “flashing LEDs in sequence” experiments. I think I need a few more components to really experiment properly – transistors, capacitors, et cetera – but I have had a bit of fun with the components that I did receive.

Not everything has been entirely fun, though. One component, the TMP36 temperature sensor which came with the kit, led me into a struggle to work out how it worked. On the face of it, this should be – and, if you test it outside the full circuit I first tried it with, is – one of the easier components to deduce the operation of. My temperature sensor comes in a three-pin TO-92 package, with one pin accepting an input voltage between 2.7 and 5.5V, another connecting to ground and a third giving a linear output voltage, with 500mV representing 0°C and a difference of 10mV for every degree Celsius above or below 0°C. So far, so simple. The problem is that I made things rather difficult for myself.

The Raspberry Pi, unlike a dedicated electronics-kit microcontroller like the Arduino platform, doesn’t have any analogue input or output. In order to get analogue input or output on a Raspberry Pi, you need either an analogue-to-digital converter for input or a digital-to-analogue converter for output. This wasn’t a big deal; both the MCP3002 ADC and the MCP4802 DAC came with the SK Pang starter kit and I had just successfully tested the 10kΩ Trimpot that came with the kit with the ADC. My self-inflicted problems occurred when I thought (ultimately correctly) that the three-pin package of the temperature sensor looked like it would be an adequate drop-in replacement for the Trimpot. So, I plugged in the temperature sensor based on the schematics in front of me and tried running the program to read in and translate the readings from the ADC.

As I started the program, I noted that I was getting a reading. So far, so good, I thought. Then I decided to press on the temperature sensor to try adjusting the reading – at which point I noticed that the sensor was alarmingly hot. Disconnecting it as quickly as I could, I thought to myself, “Oh crud, I’ve just ruined the sensor before I could even try it properly!” Following the directions given for TMP36 use on the internet, I allowed the sensor to cool before plugging it back in – the right way around, this time – and tried the ADC translation program again.

I was still getting a reading, but this time I was more wary; I did not know whether it signified a correct measurement or not. With the aid of an Adafruit tutorial written precisely to help people use the TMP36 with the Raspberry Pi, I modified the ADC translation program to convert its values into temperature readings. Another problem ensued – the readings I was getting were far too low for the room I was in. I attempted to find a solution on the internet, reading forum posts, tutorials and datasheets, but little of it made sense to me.

Eventually, though, at least one of those sources gave me the idea of using a multimeter to test whether the output voltage on the sensor’s middle pin was reasonable. I plugged the TMP36 directly into the 3.3V supply on the Raspberry Pi and measured the voltage across the supply and ground pins. It showed approximately 3.3V, so there wasn’t a short circuit across the temperature sensor itself. I then measured the output voltage on the middle pin, and this gave a reading corresponding to something much closer to the 20-22°C I was expecting of my room at the time. As far as I could tell, the sensor hadn’t been damaged by the brief overheating it had experienced. However, at this point, I had other things to do and had to leave my experimentation.

Eventually I got back to experimenting with the TMP36 and tried plugging it into the ADC again. It was still giving the same low readings, and I still didn’t know whether the sensor, the ADC or the program I was running was at fault. At a loss to understand what was going on, I shelved the temperature sensor experiments and tried to understand the code for the other components so that I could attempt my own experiments.

Some more looking around on the internet pointed me towards the answer, though. The datasheet for the TMP36 suggests a 0.1μF bypass capacitor on the input to smooth out the supply voltage, but that didn’t really sound like my issue – rather, it seemed as though too low a voltage was reaching the ADC. A forum post gave me an idea: use a multimeter to test the voltage across the TMP36 while it was connected to the ADC, and the output voltage from the sensor with the full circuit running. So I did, and again the temperature sensor had 3.3V across it and about 740mV on the middle pin. Perplexed, I tried testing the voltages across the ADC itself.

It was at this moment that one little sentence from the forum post gave me the answer: the problems with using the MCP3002 to read the voltage from the temperature sensor were down to the input impedance of the ADC rather than any fault in the sensor. The ADC was reading its input correctly, and the temperature sensor was also working correctly, but because of the impedance of the ADC’s input – the supply across my MCP3002 is 3.3V, but the effective voltage between the input pin and the channel read pin is, at least on my board, 2.58V – the converted readings came out wrong. A bit of modification to the ADC translation program, and I had the sort of readings I expected, both on the output voltage of the temperature sensor and on the screen where the results were being printed.

Rather a long-winded set of tests for a simple problem, eh? I suppose much of the problem lies in my putting the cart before the horse and trying experiments with my only knowledge of electronics being my long-faded memories of secondary school physics. In any case, the problem was found, and a problem in my own lack of experience was also found, which I can start rectifying soon enough.

A Showcase of the Internal Construction of the Game Boy Advance Cartridge


Recently, after completing The Legend of Zelda: A Link to the Past, I was struggling to think of what game to play next. A session of gaming with some of my friends pointed me towards the Pokémon games, which I had not played in quite a while. Instead of jumping straight into the newest game in the series that I own, Pokémon Diamond, I decided that I’d play through the Game Boy Advance games first. Taking Pokémon Sapphire out of storage revealed that the internal battery had died, and that I would have to open up the cartridge case in order to replace it. Having opened the case and identified the battery type I’ll need to buy, I got to thinking about what the internal structure of the other Game Boy Advance cartridges I own might look like.

My (admittedly small) collection of Game Boy Advance games includes games with about five different internal patterns, ranging from simple patterns with a ROM chip and a few surface-mount capacitors to the complicated pattern found in the Pokémon Ruby and Sapphire cartridges, which contains a ROM chip, a large Flash memory chip, a real-time clock and a large set of surface-mount components.

The simplest internal structure was found in my copy of MotoGP, among others.

The most substantial component in this pattern is the large Mask ROM chip that dominates the centre of the cartridge. The circuit board is marked with the lettering, “U1 32M MROM”, suggesting that this chip has a capacity of 32 megabits, or 4 megabytes. This connects to the Game Boy Advance via the traces coming from the chip, which lead to the bottom of the cartridge, where several copper-coated contacts meet the cartridge connector of the Game Boy Advance. To the left of the ROM chip, we can see three surface-mount capacitors, marked “C1”, “C2” and “C3”. Aside from these components, there is little to talk about in the remainder of the cartridge case. The construction of this cartridge is very simple, and it’s easy to see how it might work. One thing absent from this design, which we’ll see on other cartridges, is a save-memory chip – this game uses the rather archaic technique of providing passwords to the player in lieu of saving progress.

A more advanced pattern can be seen inside of the cartridge for Doom. Doom on the Game Boy Advance was a port of the version found on the Atari Jaguar console, a low-resolution version which was missing the Cyberdemon and Spider Mastermind enemies, along with the Spectres (invisible variants of the mêlée-based Demon enemies). Nevertheless, despite the limitations of the port, it proved to be one of the more accomplished first-person shooters of the Game Boy Advance.

The Mask ROM chip in this cartridge has been offset to the right to make room for the memory chip, taking the form of an EEPROM chip, presumably of the larger 64-kilobit capacity. This EEPROM chip provides non-volatile memory which is an improvement over the battery-backed RAM found in games for the Game Boy and Game Boy Color. The life span of a Game Boy Advance save state is effectively limited to the life of the EEPROM or Flash chip, a far greater time than the life of the CR2025 or CR2032 batteries of the Game Boy cartridges.

Unlike the PC version of Doom, where you can save at any point in a level, the Game Boy Advance version only allows you to save at the end of a level, and needs only to store the current level along with the health, armour and ammunition state of the player. Save slots are also limited to four, rather than the eight found in the PC version. Aside from the additional EEPROM chip, there is another surface-mount capacitor on this board which was not found on the MotoGP cartridge.

Despite the addition of save states to this game, it does not have a particularly complex pattern by the standards of other Game Boy Advance games. A design more typical of Nintendo first- and second-party cartridges can be seen in the cartridge for Golden Sun.

Golden Sun plays very much to the sensibilities of the SNES era of Japanese RPGs, despite being released about five years after the likes of Chrono Trigger and Final Fantasy VI. Yet it was precisely that quality which made it one of my favourite games on the Game Boy Advance, and me an avid follower of the series up to the recent Golden Sun: Dark Dawn for the Nintendo DS. The Mask ROM chip on this circuit board has been moved to the left-hand side to make room for the memory chip, a 512 kilobit Flash chip used to store up to three save files, each containing such details as the player’s position in the game environment, the health status of the characters, how the Djinn are set on each character, and so on. The other surface-mount components are four capacitors, as found on the Doom cartridge, arranged differently to suit the left-mounted position of the Mask ROM chip.

A similar pattern can be found on the cartridge for Golden Sun’s sequel, Golden Sun: The Lost Age, as well as the cartridge for Pokémon FireRed, as can be seen below.

The general layout of this cartridge is similar to that of the cartridge for Golden Sun, and while both games are RPGs, a layout precisely like the cartridge layout for Golden Sun can also be found in Mario Kart: Super Circuit. The FireRed cartridge differs from the Golden Sun cartridge in some subtle ways, using a different Flash memory chip, and with a few more surface-mount components, this time including resistors as well as capacitors. The Flash memory chip still does not extend across the entire space allocated to it, which could suggest that it is a 512 kilobit chip rather than a 1 megabit chip.

The final type of cartridge pattern that I have found is also the most complex one, belonging to Pokémon Sapphire. Unlike any other Game Boy Advance game that I have examined, Pokémon Sapphire possesses a type of time-linked game mechanic which, despite not being as advanced as the similar features in the series’ Game Boy Color predecessors, still necessitates the use of a real-time clock.

The real-time clock is found above the Flash memory chip, which is found on the left-hand side of this cartridge. To the right, above the Mask ROM chip, is a CR1616 button cell which powers the real-time clock. This is the component that will have to be replaced in order for all of the features of this game to work correctly, as certain events are linked to the time of day and the progression of time. None of them are critical to the completion of the game, but it still annoys me to have an incomplete game for the sake of a cheap button cell.

The Flash memory chip on this circuit board is substantially larger than those on the other cartridges with Flash memory, which suggests that it is of the 1 megabit capacity rather than the 512 kilobits found in the others. There are also more surface-mount components on this board than on the others, with a larger set of resistors and plenty of capacitors. Another component, marked “X1”, is prominent in the top left-hand corner, beside the RTC chip, although its use is a mystery to me. Judging by the reference designator, it may be some sort of transducer, but beyond that, I have no real idea what the component could be used for.

UPDATE: I must be some sort of electronics dunce for only realising this now, but the component marked “X1” in the last picture is probably the crystal oscillator for the real-time clock IC.

Revolutionary Technology in Formula One: Downforce-Generating Wings

As the lessons demonstrated by Colin Chapman’s use of the monocoque chassis filtered down through the rest of the Formula One grid, the cars changed shape towards the cigar-like form typified by the bodywork of the 1966 and 1967 seasons. In 1966, there was another change in the regulations, once again allowing three-litre engines, which produced in the order of 350 to 400 bhp – about twice the power of the engines used from 1961 to 1965. With such a surfeit of power, the cars were unpredictable and wild, and a bit of extravagant cornering wouldn’t sacrifice too much time around a lap. Within a few years, though, both the bodywork of the cars and the driving styles had begun to change, as the cars began to be pushed down into the track by aerodynamic effects and driving became more precise to match.

As with other revolutionary developments in the world of Formula One, the changes in this period were derived from the world of aeronautics. It had been known for a very long time that an aerofoil could generate lift in accordance with Bernoulli’s principle, and aeronautical engineering had progressed in leaps and bounds during the years of the Second World War. Ideas had been hopping around the Formula One paddock for years about the effect of a reversed aerofoil, which would work in the opposite way to a typical aeroplane wing, and indeed, a few minor experiments had been tried with this idea in motor racing, including Jim Hall’s experiments with the Chaparral racing cars in the mid-1960s. Where an aeroplane wing generates lift by creating a pressure differential, with the air flowing over its curved upper surface moving faster and so exerting less pressure than the air beneath, an automotive wing is mounted the other way up, so that the low-pressure surface faces the track and the pressure differential pushes the car down.
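That pressure-differential argument can be put into rough numbers with the standard lift equation, F = ½ρv²AC_L, applied upside-down. The wing area, lift coefficient and speed below are round illustrative guesses of mine, not figures for any actual car:

```python
# Illustrative downforce estimate using the standard lift equation,
# F = 0.5 * rho * v**2 * A * C_L, with the aerofoil inverted so the
# force points downwards. All values here are assumed round numbers.

RHO_AIR = 1.225  # air density at sea level, kg/m^3

def downforce(speed_ms, area_m2, lift_coefficient):
    """Force in newtons generated by an inverted aerofoil."""
    return 0.5 * RHO_AIR * speed_ms**2 * area_m2 * lift_coefficient

# A hypothetical late-1960s-style rear wing: about 1 m^2 of area with
# a lift coefficient around 1.0, at 180 km/h (50 m/s).
force = downforce(50.0, 1.0, 1.0)
print(round(force))  # roughly 1500 N pressing the car into the track
```

The v² term is the important part: doubling the speed quadruples the downforce, which is why wings paid off most at fast circuits like Spa-Francorchamps.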

It took until 1968 for a downforce-generating wing to find its way into Formula One. Ferrari, having apparently got over the period of conservatism which had cost it development time against the early garagiste teams, and Brabham were the first teams to try the idea of placing an aerofoil onto their cars. In the 1968 Belgian Grand Prix, raced at the fast, flowing Spa-Francorchamps circuit, Ferrari used a high-strutted rear aerofoil balanced off with little tabs mounted to the front of the nosecone, while Brabham used a lower-mounted rear wing, but balanced it off with larger front winglets. While neither Brabham driver affected the race much, both retiring with reliability issues, the Ferrari of Chris Amon easily snatched pole, four seconds ahead of Jackie Stewart in his Matra.

Amon was challenging for the lead when his radiator gave up, ending an interesting experiment. To be fair, the Ferrari was already a quick car, with the wingless car of Amon’s teammate, Jacky Ickx, finishing third, but the proof was there that wings were a useful addition to Formula One cars. Meanwhile, Bruce McLaren took a maiden victory for his eponymous team, while other teams looked on and wondered what they could do with the new aerodynamic aids.

Lotus was, unsurprisingly, one of these teams. With Colin Chapman having an interest in aeronautical developments, and having introduced an idea found in aeroplane design into his racing cars before, it had not escaped Chapman’s attention that a reversed aerofoil could be used in this fashion, even before Ferrari and Brabham tried their own experiments. The Lotus Formula One cars soon sprouted wings, which were bolted onto the suspension and towered up into the air on thin struts in a decidedly ungainly fashion. The highly-mounted wings suffered less from turbulence than wings mounted lower down, but were, as several incidents the following year would demonstrate, highly dangerous.

By the end of 1968, Graham Hill had taken his second World Drivers’ Championship driving for Lotus, which took the Constructors’ Championship along with it. The Lotus team, with an exceptional car powered by a refined Cosworth V8 engine and using the nascent technology of its aerodynamic aids to its advantage, salvaged a year in which their top driver, Jim Clark, had been killed early on in a Formula Two race, leaving Graham Hill to step up and lead the team. More teams throughout the year had seen the advantages of downforce-generating wings, and they spread throughout almost the entire grid.

By 1969, the high-strutted rear wing of the Lotus 49B had been joined by an equally tall front wing which towered over the front suspension. Other teams, including McLaren, had similar wing layouts, but these proved problematic. The tall struts that the wings were mounted to proved fragile, as demonstrated in the practice session at the first Grand Prix of the season, held at Kyalami in South Africa, and the practice of mounting the wings to the suspension also proved troublesome. When both Lotus cars crashed out of the Spanish Grand Prix a couple of months later, downforce-generating wings were temporarily banned, only brought back when the rules were rewritten to permit low-mounted wings bolted to the chassis. The wings of today’s Formula One cars roughly resemble the layout of the later Formula One cars of the 1969 season, although they are far more evolved.

The aerodynamic expertise of the Matra team helped them win both the Drivers’ Championship, with Jackie Stewart at the wheel, and the Constructors’ Championship by significant margins. Lotus only reached third in the Constructors’ Championship, as a season of unreliability for Jochen Rindt and several finishes out of the points for Graham Hill left them floundering. Wasted development on the unsuccessful four-wheel drive Lotus 63 kept them from focusing their full attention on the car with more potential, although Matra and McLaren tried their own four-wheel drive systems with little more success. The aerofoil was clearly the way forward and the best way to maintain grip in a Formula One car.

Since the late 1960s, aerodynamic wings have been an omnipresent sight on Formula One cars, and have evolved from simple aerofoils to sophisticated items designed to channel the air as precisely as possible to the most efficient places to create downforce with a minimum of drag. The wings have changed shape considerably through the years, with the development of the Gurney flap, among other things. During the 1970s, large table-shaped rear wings were the norm, with some peculiar front wing designs throughout the years, while some of the cars in the early 1980s shed their front wings in the era of ground effect.

The cars of the early 1990s had noses mounted close to the ground, but by the middle of the decade, most of the front-runners had changed to a more highly-placed nose more reminiscent of today’s cars. Sculpted front wings, designed to push the air towards various critical places on the chassis, have been a notable part of recent Formula One cars. Whatever their configuration, though, the aerodynamic effects of the wings have been critical for success in Formula One almost since their first development, and they not only changed the dynamics of Formula One cars permanently, but also the appearance, as the large wings of today’s Formula One cars are their most obvious element, even to an unfamiliar spectator.

Revolutionary Technology in Formula One: Composite Materials

Since the first development of racing cars, engineers have sought out ways of making them quicker. Physics dictates that one of the most crucial elements in the design of an automobile which is to be quick in all areas of racing is to reduce its mass as much as possible. Steel bodies were therefore superseded by aluminium alloys, which left the cars with less inertia. The monocoque chassis, previously discussed in Revolutionary Technology in Formula One: The Monocoque Chassis, further decreased mass, leaving cars in the order of 450 kilograms without fuel and driver. By 1966, though, with the return of the 3-litre formula and the corresponding increase in mass, the developments in conventional aluminium construction had reached a plateau. One team, new to the sport, would take the lead in introducing a method of construction which would develop into a fundamental part of all Formula One cars in the future.
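To put a rough number on why mass matters so much, Newton’s second law, a = F/m, is enough: with the tractive force fixed, every kilogram removed buys acceleration everywhere on the lap. The force and masses below are illustrative assumptions, not measured figures:

```python
# Back-of-the-envelope: acceleration gain from mass reduction,
# a = F / m, with an assumed constant tractive force.
# All figures are illustrative, not data for any particular car.

def acceleration(force_n, mass_kg):
    """Acceleration in m/s^2 from Newton's second law."""
    return force_n / mass_kg

FORCE = 4000.0  # assumed tractive force in newtons

light = acceleration(FORCE, 450.0)   # monocoque-era mass, minus fuel/driver
heavy = acceleration(FORCE, 550.0)   # a hypothetical heavier rival

print(round(light / heavy - 1, 3))  # 0.222: about 22% more acceleration
```

The same saving pays off again under braking and in the corners, which is why the weight race never really stopped.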

Bruce McLaren had already impressed people in the sport with several podiums and a few wins in the mid-engined Cooper cars which dominated the 1959 and 1960 seasons and had proved reasonably successful throughout the early 1960s. When John Cooper insisted that McLaren run 1.5-litre Formula One engines in the Australasian Tasman Series instead of the 2.5-litre engines the series permitted, McLaren set up his own racing team, competing with custom-built Cooper cars. With a championship win in the series, McLaren set his sights on Formula One, judging the Cooper team to be slipping down the ranks from their once-dominant position.

Bruce McLaren contracted Robin Herd, a former engineer on the Concorde project, to design a car. Herd produced the M2B, a car designed with the use of a material named Mallite. Mallite was composed of a sheet of balsa wood covered on both sides with aluminium alloy, making the material stiffer than the conventional duralumin alloy used in other cars of the time. Another composite material, fibreglass, was used for some of the ancillary parts of the bodywork, such as the nose and engine cover.

All of these materials made for a light, yet stiff chassis which might have had some success were it not for the unreliable engines that the McLaren team used in a season which emphasised reliability. However, Mallite, being an inherently inflexible material, was difficult to use in car designs in which curves and aerodynamic shapes were important. The cars of the following seasons would therefore remain relatively conventional in construction, with the exception of the titanium-incorporating Eagle Mk1 and the disastrous magnesium-skinned design of the Honda RA302. Nevertheless, composites would not remain a niche material in Formula One forever, and McLaren would once again be the team to bring the new developments to the table. Unfortunately, Bruce McLaren’s death in 1970 meant that he would not see the success that his team would attain.

By 1981, McLaren had won two World Drivers’ Championships and one World Constructors’ Championship with their long-lasting M23 design, and had been competitive throughout most of the 1970s. In the midst of a downturn for the team, McLaren merged with a Formula Two team called Project Four, owned by Ron Dennis. The merger gave engineer John Barnard the resources to put his revolutionary new MP4/1 design onto the race track. The monocoque of the MP4/1, for Marlboro Project Four/1, was composed entirely of carbon-fibre reinforced plastic, a light, stiff composite material then used primarily in aerospace design, making it the first automotive monocoque chassis built from the material.

The decision to use carbon fibre would prove to be an inspired one. Not only would the MP4/1 bring McLaren their first victory since 1977, but it would arguably contribute to the relative lack of injury suffered by John Watson after a horrifying crash at the Lesmo curves during the Italian Grand Prix. The material had truly experienced a trial by fire, and despite its expense, it was demonstrably useful in the field of motor racing.

The 1982 season was, by most accounts, a disastrous one, and definitely an annus horribilis for the sport. Two drivers died, several escaped life-threatening injury and the eventual winner of the championship managed the feat by sheer consistency and reliability rather than the blazing speed of his car. For McLaren, however, the year wasn’t all bad. The return of Niki Lauda to the cockpit after a sabbatical lent some additional experience and a still-competitive driver to the McLaren team.

The Ferrari team, whose 126 C car proved the best of the field in 1982, also incorporated carbon fibre into their car design, although not to the extent of the McLaren team. Unfortunately, they suffered an early tragedy in the death of Gilles Villeneuve, killed in qualifying shortly after a dispute with his teammate, Didier Pironi, over the results of the farcical San Marino Grand Prix. Pironi’s success later in the season made him favourite for the championship until he suffered a career-ending crash in qualifying for the German Grand Prix. This left the championship open for several competitors, including Alain Prost, Keke Rosberg and John Watson. Watson came close to winning the championship, but was held back by Rosberg’s superior consistency, even in an inferior car without the turbocharged engines of the front-runners. As this would prove to be the last championship for a naturally-aspirated car until turbocharged engines were banned in 1989, the predicted form of the year was further shaken up.

The 1983 season would not prove as successful for the McLaren team, and by then, many of their competitors had caught up with McLaren in the incorporation of carbon-fibre monocoques. Lotus, Alfa Romeo, Renault and Brabham had all taken cues from McLaren, and Brabham’s innovative, arrow-shaped BT52 model was the best suited to take advantage of the banning of ground effect from the rules in response to the tragedies of 1982. McLaren suffered a series of retirements which put them well out of contention for either the Drivers’ or Constructors’ Championship, while their competitors were taking advantage of a technology introduced by McLaren.

However, the 1984 season would allow McLaren to reap the rewards of their development with the new McLaren MP4/2. The mixture of a refined chassis with a powerful, yet reliable and fuel-efficient TAG-Porsche engine allowed McLaren to dominate the season, in a straight-up competition between Niki Lauda and Alain Prost, the latter having moved to McLaren after missing out on the 1983 Drivers’ Championship by only two points. In the end, Lauda won his third World Championship by half a point over Prost, while McLaren easily won the Constructors’ Championship, a just reward for their efforts. Carbon fibre was in the sport to stay, and while some less well-funded teams still incorporated the older aluminium alloy design features into their cars for a few years afterwards, they would eventually have to follow suit as the power outputs of the turbocharged engines demonstrated that the old aluminium monocoques were no longer stiff enough for the job.

Revolutionary Technology in Formula One: The Monocoque Chassis

For a long time in the development of the automobile, it was common to build a car by fitting a separate body to an underlying rigid frame, a method called body-on-frame construction. While this technology had some advantages, such as the easy development of custom bodywork by coachbuilders, it left much to be desired. With all of the torsional and bending strain placed on the rigid frame, this component of the car had to be heavy and cumbersome in order to be strong enough to resist those forces.

The earliest Formula One cars, such as the pre-war voiturettes raced in 1950 and 1951, were built on ladder frames, where the rigid frame made up only part of the bottom of the car and the rest of the bodywork provided no structural support at all. This was quickly replaced with a more advanced chassis type when the sport moved to Formula Two rules in 1952. The spaceframe chassis, used on cars from the dominant Ferrari 500 of 1952 and 1953 to the Ferraris, Coopers, Lotuses and BRMs of the early 1960s, was based around interlocking struts placed in a geometric pattern around the body so as to distribute torsional and bending loads across the whole structure.

Surprisingly for the top echelon of formula racing, though, spaceframes were old technology. The Second World War had necessitated a lot of technological and mechanical development in aircraft, and the improvements made in that field filtered down to car design in the post-war years. In the run-up to the war, the British, French and German air forces had independently designed fast interceptor fighter aircraft built on monocoque fuselages. The monocoque itself was not a particularly new idea, dating back to some reconnaissance aircraft of the First World War, and had been trialled in several pre-war car designs including the Lancia Lambda and the Citroën Traction Avant.

By the time Colin Chapman, owner of Team Lotus, designed his own monocoque Formula One cars, the technology had found its way into several commercial road-going vehicles, such as the Morris Minor and Chapman’s own Lotus Elite. It was therefore well-known that monocoque design provided substantial advantages over a ladder frame chassis in road cars, but it was yet to be demonstrated that these advantages would be worth the effort in Formula One racing.

Colin Chapman was known for his famous statements on car design, apocryphally rendered as “Simplify, then add lightness”. Chapman, who had briefly been a pilot in the Royal Air Force, maintained a deep interest in aeronautical engineering techniques for the rest of his career in car design. A monocoque uses the skin of the car itself as a load-bearing member, greatly increasing rigidity for a given mass and therefore structural integrity. When Team Lotus introduced their Lotus 25 in 1962, the car was significantly lighter than its competitors and became the first of Lotus’ many successful Formula One models.

In fact, the Lotus 25 almost won both the Drivers’ Championship and the International Cup for F1 Manufacturers on its debut in 1962, but an engine failure for Jim Clark in the last race gave the championship victory to Graham Hill, driving for BRM in a more conventional spaceframe car. The results of the 1962 season underlined a weakness in Team Lotus’ designs; while Graham Hill was classified at the end of every race that season, Jim Clark, despite his smooth driving style, retired four times. The Lotus 25 was mechanically unreliable, and under a more leaden foot than that of the preternaturally talented Clark, the car might not even have come that close to the championship that year.

That said, the car was clearly quick, and after the disappointment of the year before, 1963 would prove a change of fortune for Clark and Team Lotus, taking both championships convincingly against competition which persisted in using their spaceframe cars from the previous year. The advantages of the monocoque design had been clearly demonstrated, and another garagiste team joined Cooper in the annals of Formula One history.

Unfortunately, Jim Clark and Team Lotus were unable to repeat the feat in 1964. Ferrari and BRM, Lotus’ biggest competitors, had caught up technologically and built their own monocoque cars, and this, combined with the Lotus 25’s recurrent unreliability, left Lotus floundering at the end of the year. They would finish only third, while John Surtees secured the title for Ferrari after a year-long battle with Graham Hill in his BRM. Yet, by the time the Lotus 25 was replaced by the Lotus 33 a few races into the 1965 season, it had revolutionised the way that Formula One cars would be built in the future. The monocoque chassis is universal among modern Formula One cars, along with its near-universal presence in road cars.

However, there would be a few last significant demonstrations of spaceframe design in Formula One, taking place during the 1966 and 1967 seasons, where the increase of engine power from the new 3-litre engine formula gave the advantage to those who could acquire a reliable engine. Jack Brabham, having won the 1959 and 1960 Drivers’ Championships in mid-engined Coopers, had set up Motor Racing Developments, better known as Brabham, and chose a high-torque, lightweight aluminium engine from the Australian engineering company, Repco. This engine, which would prove the class of the field even with a deficit of power versus the Ferrari and Honda V12s, was allied to a conservative series of designs by Ron Tauranac, who still preferred the spaceframe.

There was one advantage of the spaceframe design which was of significance to Jack Brabham: it was easier to repair than a monocoque, as a piece of tubular frame could be cut out and replaced far more quickly than a whole section of stressed bodywork could be removed and a replacement welded in so as to retain structural integrity and strength. Yet, in a season of unreliability, this was a secondary issue compared to the temperamental new engines used by most teams. Despite the dated design of his Brabham BT19, Jack Brabham would win the 1966 Drivers’ Championship with superior reliability and a run of four consecutive wins.

This was followed up by another Drivers’ Championship victory in 1967 for Brabham’s team, this time for Denny Hulme, but by the time 1968 came, other teams had caught up in the engine race, most significantly Team Lotus. Their assistance in the development of the Cosworth DFV had given them access to the engine which would dominate the next two decades, while the Repco engine didn’t cope well with additional power. By 1970, even Tauranac had conceded to the monocoque. Meanwhile, this would merely be the first in a series of innovative solutions from the mind of Colin Chapman.

Revolutionary Technology in Formula One: The Mid-Engine Configuration

In 1950, when Grand Prix motor racing acquired the Drivers’ World Championship, run under the recently-formulated Formula One rules, the cars were distinctive for their long noses, grille-protected air intakes and decidedly rear-mounted driver position. By the start of the next decade, the cars of the leading teams had changed utterly, with sleek, cigar-like aerodynamic bodies, spaceframe chassis and mid-rear-mounted engines. Anybody who wasn’t willing to conform to the mid-engined revolution was left in the dust, and 1960 would see the last win for a front-engined car in Formula One.

Like forced induction, the history of the mid-engine configuration in Grand Prix racing goes back before the Second World War. Germany’s Silver Arrows were easily the dominant Grand Prix cars of their time, using the technological might of Mercedes-Benz and Auto Union to their advantage. These ferociously powerful cars would eventually produce almost 600 bhp at their peak, with which they managed to dominate every year of racing from 1935 to the outbreak of the Second World War in 1939, losing only a single Grande Épreuve during these five years. While Mercedes-Benz used the traditional front-engined layout for their W25, W125 and W154 cars, Auto Union took a different strategy, placing their engines behind the driver. With a swing-axle suspension system at the rear, the Auto Union cars acquired a reputation for evil handling even by the standards of the time, but they were powerful, fast and won many races.

Auto Union never ventured into the voiturette category that would form the basis of the post-war Formula One rules, and so the cars of 1950 stuck exclusively to the conventional front-engined layout that had been common among the other competitors in pre-war Grand Prix racing. However, with the success that Auto Union had attained with the mid-engine layout, it was only a matter of time before somebody attempted to make a car with the engine behind the driver again.

The party responsible for reviving the mid-engine design was the Cooper Car Company. This constructor of racing cars, founded by Charles and John Cooper in 1946, had started out with motorcycle-engined Formula Three cars in the early 1950s and worked their way up to Formula Two cars by 1957. According to John Cooper, matters of expediency led to their first Formula Three cars being developed with a mid-engined layout, as a motorcycle engine could drive the rear wheels more effectively through a chain than through a propeller shaft. Nevertheless, this proved to be a matter of serendipity.

With the exception of a few flirtations with four-wheel drive by various constructors, Formula One cars have always been driven exclusively through the rear wheels. Placing the heavy metal block of the engine nearest the driven wheels brings benefits in traction, useful in the uncompromising world of Formula One. The mid-engine design philosophy also allowed for better weight distribution, meaning a lower polar moment of inertia and less inclination towards understeer, which was a problem for the front-engined cars that had taken over Formula One. By the time Cooper introduced their first rear-engined model into Formula One, the cars were using considerably more sophisticated suspension than the Auto Unions of the 1930s, and the double wishbone suspension fitted at both front and rear went a long way towards curing the snap oversteer apparently common to the Auto Unions.
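The traction argument boils down to a static moment balance: the rear axle carries the total mass multiplied by the fraction of the wheelbase lying between the front axle and the centre of gravity, so moving the engine rearwards loads the driven wheels. The masses, wheelbase and CG positions below are purely illustrative assumptions:

```python
# Static axle loads from the position of the centre of gravity (CG).
# Moving the engine mass rearwards shifts the CG towards the driven
# rear axle, increasing the load (and hence traction) on it.
# All numbers here are illustrative assumptions.

def axle_loads(mass_kg, wheelbase_m, cg_from_front_m):
    """Return (front_load, rear_load) in kg for a stationary car."""
    rear = mass_kg * cg_from_front_m / wheelbase_m
    front = mass_kg - rear
    return front, rear

# Hypothetical front-engined car: CG ahead of the wheelbase midpoint.
print(axle_loads(600.0, 2.3, 1.0))   # roughly (339, 261): front-heavy
# Hypothetical mid-engined car: CG behind the midpoint, near the rear axle.
print(axle_loads(600.0, 2.3, 1.4))   # roughly (235, 365): rear-heavy
```

With more of the static load on the driven axle, the rear tyres can transmit more force before spinning, which is exactly the advantage Cooper stumbled into.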

Cooper introduced its first works Formula One car in 1957, the Cooper T43. A few cars were built for the works effort, and a few were sold to privateer racers who ran them to Formula Two rules. The car’s first race was the 1957 Monaco Grand Prix, where the cars in the hands of Jack Brabham and Les Leston ran with 2-litre Coventry Climax engines, 500cc smaller than those of the front-running Maserati and Vanwall cars. Nevertheless, in this most attritional of races, Jack Brabham managed to finish in sixth place, just one place off a point under the 1950s scoring system. Later in the season, Roy Salvadori bettered this with a point at the 1957 British Grand Prix at Aintree, the first ever point for the Cooper Car Company. It was to be the first of several.

1958 brought greater fortunes for the Cooper team. A considerably longer championship season, coinciding with the introduction of the new International Cup for F1 Manufacturers, gave Cooper cars more opportunities to score, and with several privateer entries running the Cooper T43 and newer T45 under both the Formula One and Formula Two engine rules, there was some opportunity to compete against the more powerful Vanwall and Ferrari cars which would end up contending for the first International Cup for F1 Manufacturers.

The season started well for Cooper, not as a consequence of their works effort, but through the Cooper-running privateers of the R.R.C. Walker Racing Team. The first two races of the season were won by the privateer team, the first-ever wins for a rear-engined car in Formula One, one at the hands of Stirling Moss and the other by Maurice Trintignant. Very quickly, Cooper had earned vindication for its peculiar design philosophy, and they would continue to compete for points and podiums throughout the rest of the season. The team finished third, even with a significant power deficit versus the top constructors.

The 1959 season was to prove more successful still. Cooper introduced its T51 model for its works effort and the Rob Walker Racing Team, now fitted with a full 2.5-litre Climax straight-four. With this engine fitted, Cooper cars managed to win five of the eight championship Formula One races that year, along with three of the five non-championship events. Jack Brabham took his first title after winning two races and scoring points in all of the races he finished. The only team that managed to compete with the superior Coopers and their more even weight distribution was Ferrari, but their more powerful V6 engine only won them the German Grand Prix, held for the first and last time at the simplistic AVUS circuit, which comprised two extremely long straights and a set of hairpin turns, and the French Grand Prix, held at the long, fast Reims-Gueux circuit.

It was interesting that it should be Scuderia Ferrari challenging Cooper in the 1959 season. The team had demonstrated a conservative streak, catalysed by Enzo Ferrari, who only reluctantly pursued technological improvements that weren’t applicable to the engine. The Ferrari team therefore produced some very powerful engines, but tended not to apply as much care to the chassis. In the battle between Italian power and British ingenuity, the British were proving that power wasn’t much good without control.

By the time the 1960 season rolled around, other teams had begun to take notice of just how much potential there was in the mid-engine layout, and some of them had followed suit. Team Lotus, run by Colin Chapman, who would himself prove to be an innovative engineer later on, had adopted the layout for their Lotus 18 model, while BRM followed with their P48 model. Ferrari remained steadfast with their Ferrari 246, despite its increasing irrelevance. The 1960 season would show the error of their ways, as they slipped to third in the International Cup for F1 Manufacturers, far behind the victorious Cooper, who managed six victories out of nine events and another Drivers’ World Championship for Jack Brabham.

Team Lotus also used their new mid-engined car to their advantage, taking two of the remaining wins in the season. In comparison, Ferrari managed one win, aptly at their home Grand Prix at Monza, but it was a somewhat hollow victory, as the leading British teams had boycotted the event, which was run on the alarmingly quick ten-kilometre Monza layout incorporating the banked oval. Their other results during the year would prove disappointing, and Phil Hill’s victory at Monza would be the final championship race victory for a front-engined car. By 1961, even Ferrari had conceded; their dominant 1960 Formula Two car was a useful development tool for a Formula One series which had greatly decreased the maximum engine displacement. Amusingly, Ferrari would demonstrate greater success with 1961’s Ferrari 156 than they had during the last two years of running their obsolete front-engined car.

There was to be one more moment of glory for a front-engined car, a consequence of one of the flirtations with four-wheel drive alluded to above. The Ferguson P99 was a demonstration project using Harry Ferguson’s novel four-wheel drive system, and was made front-engined by necessity. The 1961 International Gold Cup was a non-championship event held at Oulton Park, and while the P99 had not proved especially successful in the other races it had contested, carrying a significant weight disadvantage, the superior traction of its four-wheel drive system aided it in the wet conditions that prevailed that day, taking victory at the hands of Stirling Moss.

Thus ended the era of front-engined Formula One cars; every car on the grid was mid-engined by 1962. Cooper would not find much success in the 1.5-litre formula in Formula One, while two other British teams, Team Lotus and BRM, proved more suited to such rules. Team Lotus would go on to become a great innovator in its own right at the hands of Colin Chapman, while BRM would have its moment of glory in 1962, remaining competitive throughout the three following years, before disastrously introducing the overweight H16 engine which powered its cars in 1966 and 1967. Ferrari had mixed success during the early 1960s, easily taking the championship in 1961, but faltering for the next two years before taking another International Cup for F1 Manufacturers and a Drivers’ World Championship in 1964.

Revolutionary Technology in Formula One: The Turbocharger

This week, the FIA confirmed that Formula One will be adopting 1.6-litre V6 turbocharged engines with energy recovery systems from 2014 onwards, replacing the current 2.4-litre naturally-aspirated V8 engines. This, of course, will not be the first time that Formula One has adopted turbochargers, nor even the first time that turbochargers were mandatory. The last time that turbochargers were adopted in Formula One, they began as a joke and ended up as essential kit for any competitive team.

The history of forced induction in Formula One dates back to the very start of the World Drivers’ Championship, when the newly-formalised Formula One rules allowed pre-war 1.5-litre supercharged “voiturettes” to compete against 4.5-litre naturally-aspirated engines. The equivalence formula, supposed to provide a bit of competition between the two different forms of induction, ended up with Alfa Romeo’s supercharged 158 and 159 models dominating Formula One for the first two years, before the World Drivers’ Championship switched to Formula Two rules.

The Alfettas, as they were known, produced a staggering 425 bhp at their peak in 1951; less than the pre-war supercharged engines of Mercedes-Benz or Auto Union, but still far ahead of the naturally-aspirated engines of the time. Even the higher fuel consumption of the Alfettas couldn’t keep them from taking a near clean sweep of the championships in 1950 and 1951. However, as legal engine sizes dropped drastically during 1952 and 1953, a 750cc supercharged engine proved uncompetitive against the 2-litre naturally-aspirated engines of Ferrari. Enzo Ferrari had identified that supercharging would be a dead end, as supercharged engines were held back by the power consumed in driving the supercharger from the crankshaft. When the World Drivers’ Championship returned to Formula One rules in 1954, allowing 2.5-litre naturally-aspirated engines but keeping forced-induction engines at 750cc, forced induction remained a mere formality in the rules for several years.
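The shifting balance between the two limits is easiest to see as a displacement ratio, naturally-aspirated limit divided by forced-induction limit. A quick sketch using the rule figures given above:

```python
# Equivalence expressed as the ratio of the naturally-aspirated
# displacement limit to the forced-induction limit, per the rules
# quoted above.

def equivalence_ratio(na_litres, forced_litres):
    return na_litres / forced_litres

# Original post-war Formula One: 4.5 l NA vs 1.5 l supercharged.
print(round(equivalence_ratio(4.5, 1.5), 2))   # 3.0
# From 1954: 2.5 l NA vs 0.75 l supercharged.
print(round(equivalence_ratio(2.5, 0.75), 2))  # 3.33

# The stiffer post-1954 ratio made the forced-induction clause
# even less attractive than it already was.
```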

This opened up the stage for Renault more than twenty years later. Renault had recently begun experimenting with turbochargers on their sports car engines and were winning races by 1975. This gave them the idea of using the clause allowing forced induction engines in Formula One, which had been largely ignored since the domination of the Alfettas. Unlike a supercharger, which is driven by the engine itself, a turbocharger is spun by exhaust gas, and therefore does not inhibit the engine’s ability to rev. In 1977, Renault entered Formula One with the RS01.

It was not an instant success. The RS01, most famously driven by Jean-Pierre Jabouille, started off as a slow, overweight car with frightful turbo-lag, and notably made little use of the other revolutionary technology being demonstrated in Formula One at the same time, ground effect. The RS01 was mocked by other teams in the paddock, who had seen how difficult the car was to drive around the tight street circuit of Monaco and how unreliable the new engine was, and who referred to the car by the derisory nickname of “the yellow teapot”.

Evolution was quick, however. In 1978, Renault won the most prestigious race in sports car racing, the 24 Hours of Le Mans, with the Renault-Alpine A442, powered by the turbocharged 2 litre sports car engine which had previously won at Mugello; in Formula One, though, a win eluded the French team for quite some time. Jabouille scored the team’s first points with a fourth-place finish in a refined RS01 near the end of the 1978 season, and took the team’s first pole position at the fast, sweeping Kyalami circuit in South Africa in 1979.

Renault entered 1979 with a further-refined RS01 and a team-mate for Jean-Pierre Jabouille in René Arnoux, but it was the mid-season introduction of the RS10 which cemented Renault’s place as a competitive team. The 1979 French Grand Prix at Dijon-Prenois brought the team’s first victory, with Jabouille winning in a French car in front of a French crowd. René Arnoux almost made it a Renault 1-2 after duelling with Gilles Villeneuve in perhaps the best and most famous battle for position ever captured on camera, a wheel-banging contest that lasted almost two laps.

The turbocharger had proven its point with a storming performance which made everybody in the paddock take notice. The turbo-lag problem had been largely solved by the introduction of twin turbochargers, one feeding each cylinder bank, although reliability still plagued the engines. Soon, Ferrari, Brabham and Alfa Romeo were researching turbocharged engines of their own. Ferrari were the next team to introduce a turbocharger, opting for a smaller 1.5 litre V6 in 1981 in order to best exploit ground effect, which was difficult with the large flat-12 that had stormed to victory in 1979 but faltered in 1980 after years of success. However, Ferrari were never to win a Drivers’ Championship with the technology; their best results were the Constructors’ Championships of 1982 and 1983.

Brabham, Alfa Romeo and Toleman (running a turbocharged Hart engine) were the next to experiment with the technology in the unpredictable and controversial 1982 season. These engines proved fragile, even with outstanding power which commonly put the Renaults and Brabhams among the front rows, and it was the naturally-aspirated Cosworth DFV in Keke Rosberg’s car which granted him the 1982 Drivers’ Championship, the last that a naturally-aspirated car would win until turbochargers were banned. Brabham’s BMW M12 and Renault’s EF1 engines seemed especially prone to embarrassing failures, often letting their drivers down and allowing the tried-and-tested Cosworth runners to capitalise, insofar as that was possible in a season with no clearly dominant driver.

1983 would be the first dominant year for turbocharged cars, just as ground effect was banned. Twelve of the season’s fifteen races were won by turbocharged cars, with the remaining victories going to Cosworth-powered cars, usually at twisty street circuits where the turbos’ additional power counted for less. Despite their efforts in introducing and developing turbochargers, Renault failed to take either championship that year: they lost the Drivers’ Championship when Alain Prost’s turbocharger failed him at the final race of the season at Kyalami, handing a second World Championship to Brabham’s Nelson Piquet, and lost the Constructors’ Championship to Ferrari and its more reliable pairing of Patrick Tambay and René Arnoux. The loss of both championships led Renault to sack Alain Prost, leaving the driver to join a very un-French team in McLaren, running a very un-French engine from Porsche, badged by the Luxembourg-based TAG concern.

This proved a fortuitous move for the Frenchman, who ended up at a team just at the start of a dominant period in which it would take all but one Drivers’ Championship between 1984 and 1991. The TAG-Porsche engine proved outstanding, with a blend of reliability and power which allowed Prost and the recently-returned World Champion, Niki Lauda, to fight for the championship in a season in which few other teams attained victories, and none of them in naturally-aspirated cars. The Cosworth DFV family which had carried so many drivers and constructors to their championships was now completely overshadowed by the far more powerful turbocharged engines, present in all but two teams’ cars in 1984.

Niki Lauda would go on to take the Drivers’ Championship by the smallest margin ever, half a point, Prost’s victory at a wet Monaco having been worth only half-points once the race was stopped early, in conditions where an unfancied Toleman driven by the rookie Ayrton Senna had almost taken victory for a team which had been among the first to experiment with turbochargers. In any case, it was a resounding success for forced induction. Tyrrell, the only Cosworth-running team that seemed capable of fighting for race victories, would later be disqualified from the championship for a technical infringement, sweeping excellently-fought podiums for both Martin Brundle and Stefan Bellof from the records.

By 1984, the gulf in power between the turbocharged engines and the Cosworths was extreme. The Cosworths produced somewhere in the order of 520 bhp; the turbos could exceed 700 bhp in race trim, and 1,000 bhp in qualifying. In a vain attempt to produce some sort of equivalence formula between the two, FISA had introduced a fuel limit of 220 litres, measured at room temperature, for the turbocharged cars, which were far thirstier than the Cosworths, just as the supercharged 1.5 litre engines had been less frugal than the 4.5 litre naturally-aspirated engines of 1950 and 1951.

Nevertheless, 1985 proved just as dominant for turbochargers, if not necessarily for McLaren, who secured the Drivers’ Championship for Alain Prost through greater consistency and reliability than Ferrari, whose engines let them down in the last four races of the season. Once again, only two teams, Tyrrell and Minardi, were using naturally-aspirated engines, and both had secured turbocharged engines by the end of the year. Power outputs were creeping up to absurd levels; the BMW engine in the Benettons of 1986, a derivative of the unit that had won the 1983 championship for Brabham, was claimed to produce 1,350 bhp in qualifying trim. Other engines were producing close to 900 bhp in race trim, a figure which wouldn’t be equalled until the early 2000s, by which time the engines were being held in check by traction control systems.

The Williams team, the privateers who had claimed the last title for a Cosworth-engined car, had begun to show striking performance with a Honda engine producing more power than the hitherto supreme TAG-Porsche in the back of the McLaren. Honda, which had previously competed with a factory effort in the 1960s, clearly saw the chance for glory in an engine formula which rewarded small engines, a particular strength of a manufacturer renowned for its dominant motorcycles and small cars. Even with fuel tanks further restricted, this time to 195 litres per race, power outputs remained high.

With no way to enforce a reasonable equivalence formula, FISA, for the first and so far only time in the history of Formula One, banned naturally-aspirated engines outright for 1986. This left the privateer teams scrambling to find suitable engines. Some, such as Lotus and Tyrrell, were lucky enough to secure proven Renault designs. Others were left with unreliable Alfa Romeo and Motori Moderni units which lacked the backing of a powerful manufacturer and broke down far more often than not. The class of the field was clearly the Honda engine in the back of the Williams FW11, and Alain Prost only won the second of his four Drivers’ Championships by capitalising on the squabbling between Nigel Mansell and Nelson Piquet. The sport had become a competition between the haves and the have-nots, and the power outputs of the turbocharged engines had become too high for comfort.

Naturally-aspirated engines were reintroduced for 1987, with a larger 3.5 litre capacity in order to increase their power. FISA’s plan was to allow turbocharged engines for a further two years before forbidding them entirely, but, knowing that pitting the turbos directly against the naturally-aspirated engines was an exercise in futility, it also created two additional championships for 1987: the Jim Clark Trophy and the Colin Chapman Trophy, for drivers and constructors of naturally-aspirated cars respectively.

Again, Honda power proved dominant, with McLaren no longer able to compete effectively using the TAG-Porsche engine, and the Honda-powered Williams and Lotus teams fighting for the Drivers’ Championship. However, greater consistency from Alain Prost and Stefan Johansson at least gave McLaren second place in the Constructors’ Championship, with Williams, fielding Nigel Mansell and Nelson Piquet, now a three-time champion, easily taking first. Tyrrell’s use of the Cosworth DFZ gave them both the Jim Clark and Colin Chapman Trophies; Jonathan Palmer’s performances were strong enough to earn seven of the team’s eleven points in the overall Constructors’ Championship, with Philippe Streiff adding a further four as the team took sixth place among the rest of the constructors.

1988 was the final year in which forced induction engines were allowed. Some constructors, including Williams, who had lost Honda power to McLaren, ran naturally-aspirated engines in preparation for 1989, and with a more stringent 155 litre fuel limit and turbocharger boost capped at 2.5 bar, it was hoped that there would finally be some sort of equivalence between the turbocharged and naturally-aspirated cars. It wasn’t even close to a fair competition. Despite all of these restrictions, the McLaren MP4/4 went on to dominate the season in a fashion that no car had managed since 1952.

With two of the best drivers in the sport, the most powerful engine, an extraordinary chassis from Gordon Murray and Steve Nichols and a team with plenty of recent championship experience, McLaren won 15 of the 16 races that season, and perhaps only lost the Italian Grand Prix because of Ayrton Senna’s overambitious move to lap Jean-Louis Schlesser at the Rettifilo chicane. Gerhard Berger’s win at Ferrari’s home track, in the first Italian Grand Prix since Enzo Ferrari’s death, was hugely popular, yet it did little to disguise the fact that McLaren had been so dominant that, without the intriguing battle between Ayrton Senna and Alain Prost, the season would have had little to commend it. For many teams, the return to naturally-aspirated engines couldn’t come soon enough.

In the years that followed, as constructors became accustomed to their new engines, McLaren remained strong, but never as dominant as in 1988. Competition came from Ferrari, who returned to the V12 design that had become their trademark and used a semi-automatic gearbox which, though initially unreliable, would itself become a revolutionary technology in Formula One, and from Williams with their new Renault V10. While engine power increased as manufacturers learned to make their engines rev higher, and the early 2000s brought 900 bhp 3 litre V10s which matched the race output of the turbocharged engines, nothing would ever rival the ferocious qualifying engines of the 1985 and 1986 seasons.

Amusingly, considering that FISA once tried to rein in turbocharged engines by limiting their fuel, turbocharger technology has now improved to the point where turbo engines are being reintroduced precisely to save fuel compared with the high-revving V8 engines currently in use. The 1.6 litre V6 turbos could easily match today’s engine power, although it remains to be seen how the new engines will be restricted. It will be interesting to see whether the turbocharger can once again prove revolutionary in Formula One for a new generation, this time for a very different purpose than originally intended.