Why the Philae lander came at just the right time – a social perspective from a science enthusiast

By now, it has been more than a week since the Philae lander was released from the Rosetta space probe and began its descent to the surface of Comet 67P/Churyumov-Gerasimenko. The landing didn’t go without trouble, starting with the reported failure, before the lander was even released, of the gas thruster meant to help hold it on the surface, and ending with Philae bouncing twice off the comet and coming to rest in the shadow of a cliff, greatly reducing its solar exposure. Nevertheless, the mission can already be regarded as a success in some respects, even if the sunlight falling on Philae does not improve; after all, the lander returned potentially useful results from its experimental apparatus before running out of battery power.

Frankly, though, as impressive as the science and engineering of Philae are, plenty of words have already been written about that aspect by people far more experienced and talented in those fields than I am. What I want to talk about are some social implications of the fortuitous timing of Philae’s success. Philae’s mission came in the wake of two unfortunate accidents involving privately-funded American aerospace ventures: one being the destruction, shortly after a failed launch, of an Antares rocket developed by Orbital Sciences and intended to carry supplies to the International Space Station; the other being the recent crash of the SpaceShipTwo spacecraft, VSS Enterprise, in the Mojave Desert during testing, an accident which killed one of the pilots. At a time when funding for space exploration is hard to come by, these accidents looked embarrassing at best. Rosetta and Philae were launched on their course ten years ago, but arrived in time to salvage at least one reasonable success for space exploration at a moment when some people have been quick to criticise it – especially those always willing to fight for petty political victories in matters that mean little.

In that vein, another social implication of Rosetta and Philae comes courtesy of their existence as components of a mission of the European Space Agency. The ESA, funded partly by contributions from each participating government and partly by the European Union, is a demonstration of the effectiveness of European cooperation at a time when several Eurosceptic groups seek to convince us that such cooperation will lead us nowhere. At a time when some of these groups have motivations that are at best questionable, like Ukip, while others, like France’s Front National, look like straight-up crypto-fascists, I think any success showing that Europe can work better when there is sufficient motivation to get things done is useful and desirable. That this happened because a set of scientists and engineers from different countries ignored the call of jingoism and pointless ring-fencing further reinforces my point about these people being willing to fight only for the sake of petty politics when more important things are at stake. The Rosetta mission – and the ESA in general – shows us the potential and power of cooperation, and should be taken as a good example of what the likes of Ukip and the FN would take away from us if they came to power in their respective countries.

Creationism is not science

To anybody of a rational, scientific mindset, the title of this article should evoke thoughts somewhat along the lines of, “No shit, Sherlock”. Evolutionary science has underpinned the efforts of biologists for decades – arguably for more than a century and a half – providing an observable, tested mechanism for the diversity of species. Through the allied efforts of geneticists, it has given us a stronger grasp of how to improve artificial selection. Yet, in all of this, small but vocal groups, many situated within the United States, deny evolutionary science. Instead, they wish to implant their own unscientific creationist hypotheses into the education system, subverting the scientific consensus with theologically-driven political campaigns.

Creationism appears to be driven by some sort of offence and insecurity at the idea that humans might have been derived from what creationists see as lower species, or that we might be related in some way to apes and monkeys. Christian creationism, the most vocal kind in the Western world, professes that a creator God designed humans in his own image – although I have to ask whether any creator God would actually want to claim a species with such a variety of known flaws as Homo sapiens as being in his or her image.

The most egregiously and brutally unscientific of the creationist hypotheses is that of Young Earth creationism, a ridiculously bizarre hypothesis that contravenes most of the major branches of natural science, along with many humanities disciplines and a couple of branches of mathematics to boot. Essentially, Young Earth creationism states that the world, in accordance with various calculations on figures given in the Bible, is somewhere in the region of six thousand years old. The recent, controversial debate between Bill Nye and Ken Ham was conducted at the Creation Museum, an establishment which claims Young Earth creationism to be true and accurate.

There are so many things wrong with this that it’s difficult to know where to begin, but how about with the fact that there are human-built sites which have been dated to several thousand years before that supposed creation date? I have a back-dated copy of National Geographic beside me (June 2011, if anybody’s interested in reading it) that discusses the archaeological site of Göbekli Tepe in Turkey, an elaborate and intricately designed religious site estimated to date back to 9600 BC.

That immediately puts a rather inconvenient stumbling block in front of Young Earth creationism, and I haven’t even got to the science yet. Aside from myriad fields of biology – genetics, botany, zoology, biochemistry and more – all of which must be denied in order to claim Young Earth creationism as correct, we have various branches of physics, among them astronomy and the nuclear physics behind radiometric dating, which peg the Earth at somewhere near 4.5 billion years old and the universe at roughly 13.7 billion years old.
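The arithmetic behind radiometric dating is worth sketching, because it relies on nothing more exotic than exponential decay. If a mineral forms with $P_0$ atoms of a parent isotope with decay constant $\lambda$, and each decay leaves one atom of a daughter isotope trapped in the mineral, then measuring the present-day parent ($P$) and daughter ($D$) abundances gives the age directly:

$$ P = P_0 e^{-\lambda t}, \qquad D = P_0 - P \quad\Longrightarrow\quad t = \frac{1}{\lambda}\ln\!\left(1 + \frac{D}{P}\right) $$

This simple form assumes no daughter isotope was present at formation; real dating methods, such as isochron dating, are built to correct for that assumption.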

Not only are creationists willing to deny reams of scientific evidence from fields all over the scientific spectrum, but they’re also willing to try to twist actual science to fit their demands. Among the most absurd arguments for creationism is the idea that evolution somehow violates the Second Law of Thermodynamics – a claim that could only be made by somebody who either doesn’t understand the Second Law of Thermodynamics or who thinks little enough of their audience to believe that the audience won’t understand it.

The Second Law of Thermodynamics, in a paraphrased form, states that the total entropy of a closed system tends to increase over time. In more practical terms, it implies that heat cannot flow from a cold object to a hot object without external work being applied to the system. The Earth is not a closed system. Energy is exchanged between the Earth and its surroundings; energy flows into the Earth’s atmosphere from the Sun, while heat is radiated back out into space. As for biological organisms, they must and do perform work on their own systems to maintain local order. Much of a human being’s energy budget is expended as heat in the process of staving off localised entropy, with the brain being the prime example of this use of energy. None of this works in any way like the creationists claim – and their attempted perversion of science in this way demonstrates a ruthless and worrying disregard for the role of observation and experiment in the push for their pet hypotheses.
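Put in symbols – and this is just standard thermodynamics, nothing specific to this debate – the Second Law constrains only the total entropy of a system plus its surroundings:

$$ \Delta S_{\text{total}} = \Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \;\ge\; 0 $$

A local decrease in $\Delta S_{\text{system}}$ – a growing organism, say – is perfectly lawful provided the entropy of the surroundings increases by at least as much, which is exactly what happens when organisms consume energy and radiate waste heat.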

Young Earth creationism is, as a scientific hypothesis, a sad joke with no observable evidence behind it whatsoever and the works of several dozen fields of science and the humanities against it. However, creationism doesn’t stop there, as it has another, more presentable face in the form of so-called “Intelligent Design” – but this face is just as odious from a scientific perspective, since unlike the patently absurd Young Earth hypotheses, Intelligent Design pays lip-service to science while simultaneously ignoring many of its core tenets.

Intelligent Design, just as with any other form of creationism, posits the idea of a creator entity. The word “intelligent” in the name appears to refer to an intelligent entity rather than to the design itself being intelligent – for, as I’ve intimated above, it would be pretty difficult to suggest that human anatomy, for example, is particularly intelligent. You know, with the backwards eye where light shines in through the wiring, the hip design which causes labouring mothers to experience a lot of pain, so on, so forth. The hypothesis appears on the surface to provide answers that other forms of creationism just can’t provide, such as accounting for the actual, observed microevolution occurring in bacteria at this very moment – probably including some of the bacteria living on the bodies of the readers. Yet Intelligent Design still contravenes scientific consensus, largely because it is not falsifiable.

Falsifiability is a very important concept in science and plays a major role in the scientific method which underpins research in the physical sciences. The scientific method involves a chain of steps, taking the rough form of observation-hypothesis-prediction-experimentation-reproduction, in order to test a hypothesis and produce observable, testable results which can then be reproduced by other scientists, eliminating any bias or contamination that may have affected the original experimental procedure. A hypothesis with sufficient observed evidence for its correctness may then become a theory (a word which has become rather loaded when it comes to reporting science to non-practitioners, often being confused with a hypothesis in the sense described above). The principle of falsifiability plays deep into this process, since for an experiment to be useful, there must be a chance for the hypothesis that it tests to be invalidated by the experiment.

This is not the case with Intelligent Design. If an experiment were ever undertaken to attempt to disprove the hypothesis, an advocate for Intelligent Design could simply claim that the experimental conditions were wrong – whatever those conditions happened to be. As a result, Intelligent Design, just as with any other form of creationism, is of no scientific value, and its teaching in a scientific curriculum would be not merely useless but deleterious to other scientific disciplines.

Unfortunately, creationism is being peddled by a mixture of slick operators who play on a perceived public distrust of science and religiously motivated preachers who decry any attack on their religion – or at least the way in which they interpret their religion, since evolution does not inherently discount the idea of the existence of a god – even when that perceived attack relates to issues which should not have religious motivations behind them anyway.

This isn’t helped by the difficulty for scientists facing off against creationists; by debating them face to face, evolution scientists give creationists an air of scientific respectability that their beliefs do not deserve, while those who openly decry creationist teaching are often vocal atheists as well, creating a perspective that evolution marches in lockstep with atheism. Ignoring creationists might well magnify the erroneous idea of an ivory-tower scientific elite. In my eyes, the best thing to do would be to contest the principles of any school where creationist teachings are being given scientific credence either as an alternative or replacement for evolutionary theory, while trying to keep the vocal attacks on religion away from the subject while doing so. I may be an atheist myself, but I see having people conflating evolutionary science with atheism as a problem waiting to happen – the science should come first.

More Raspberry Pi Electronics Experiments – Gertboard and Potentiometer-Controlled LED Cluster

One of the things which alerted me to the potential of the Raspberry Pi as an electronics control system was the announcement of the Gertboard before the Raspberry Pi was released onto the market. When the Gertboard went on sale in November 2012, I fully intended to buy one, but a lack of money kept me from purchasing it at that point. The unassembled Gertboard kits soon sold out, prompting element14, who distribute the Gertboard, to release an assembled version. This went on sale very recently, and shortly after release, I bought one from Premier Farnell’s Irish subsidiary.

The Gertboard, for the uninitiated, is an I/O expansion board that plugs into the GPIO header on the Raspberry Pi. Designed by Gert van Loo, designer of the Raspberry Pi alpha hardware, the Gertboard not only protects the GPIO pins from physical and electrical damage, but also provides a set of additional features. These include input/output buffers, a motor controller, an open collector driver, an MCP3002 ADC, an MCP4802 DAC and an Atmel ATmega328P microcontroller which is compatible with the Arduino IDE and programming environment.

I was very impressed by the quick response from element14 after my purchase; my delivery came only two days after ordering and would have come even sooner if I hadn’t missed the 20.00 deadline on the day I had ordered it. The Gertboard was packaged with a number of female-to-female jumper wires, a set of jumpers, plastic feet for the board and a CD-ROM with a set of development tools for the ARM Cortex-M platform.


So far, I’ve only had occasion to test the buffered I/O, the ADC and DAC and the microcontroller; I still don’t have parts to test the motor controller or open collector driver. Aside from some documented peculiarities regarding the input buffers when left floating, including the so-called “proximity sensor” effect, things seem to have been going rather well.

The acquisition of the Gertboard gave me the impetus to really get down to trying to test my own expansions to the simple test circuits I had implemented before. One interesting application that I considered was to use a potentiometer to control a bank of LEDs in order to provide some sort of status indication.

The following Fritzing circuit diagram shows the layout of this circuit without the use of the Gertboard; the onboard LEDs and GPIO pins lined up in a row on the Gertboard make it slightly less messy in terms of wiring.

[Fritzing diagram: Potentiometer-Controlled LEDs]

In this diagram, GPIO pins 0, 1, 4, 17, 18, 21, 22 and 23 are used to control the LEDs, although you could also use pins 24 or 25 without conflicting with either the SPI bus – which is necessary for the MCP3002 ADC – or the serial UART on pins 14 and 15. However, this is a lot of GPIO pins taken up for one application, which may warrant the use of a shift register or an I2C I/O expander such as the MCP23008 or MCP23017 in order to control more LEDs with fewer pins.

In order to control this circuit, I took the sample Gertboard test software and modified it slightly. As the potentiometer is turned to the right, the ADC value increases to a maximum of 1023; therefore, the spacing between each LED’s activation point should be 1023 divided by 8, which works out at 127 in integer arithmetic. The LEDs light from left to right as the potentiometer’s resistance decreases, with the first LED lighting once the ADC reading reaches 127, the second at 254, and so on up to all eight LEDs as the reading approaches 1023.

// Gertboard Demo
// SPI (ADC/DAC) control code
// This code is part of the Gertboard test suite
// Copyright (C) Gert Jan van Loo & Myra VanInwegen 2012
// No rights reserved
// You may treat this program as if it was in the public domain

#include "gb_common.h"
#include "gb_spi.h"

void setup_gpio(void);

int leds[] = {1 << 23, 1 << 22, 1 << 21, 1 << 18, 1 << 17, 1 << 4, 1 << 1,
	      1 << 0};

int main(void)
{
    int r, v, i, chan, nleds;

    do {
	printf("Which channel do you want to test? Type 0 or 1.\n");
	chan = (int) getchar();
	(void) getchar(); /* eat the newline */
    } while (chan != '0' && chan != '1');
    chan = chan - '0'; /* convert the typed character to a channel number */

    printf("When ready, press Enter.");
    (void) getchar();

    setup_io();   /* map the GPIO peripheral (from gb_common) */
    setup_gpio();

    for (r = 0; r < 1000000; r++) {
	v = read_adc(chan);
	/* turn all of the LEDs off */
	for (i = 0; i < 8; i++)
	    GPIO_CLR0 = leds[i];
	nleds = v / (1023 / 8); /* number of LEDs to turn on */
	for (i = 0; i < nleds; i++)
	    GPIO_SET0 = leds[i];
    }

    restore_io();
    return 0;
}

void setup_gpio(void)
{
    /* Set up alternate functions of the SPI bus pins and SPI chip select A */
    INP_GPIO(8);  SET_GPIO_ALT(8, 0);
    INP_GPIO(9);  SET_GPIO_ALT(9, 0);
    INP_GPIO(10); SET_GPIO_ALT(10, 0);
    INP_GPIO(11); SET_GPIO_ALT(11, 0);
    /* Set up the LED GPIO pins as outputs */
    INP_GPIO(23); OUT_GPIO(23);
    INP_GPIO(22); OUT_GPIO(22);
    INP_GPIO(21); OUT_GPIO(21);
    INP_GPIO(18); OUT_GPIO(18);
    INP_GPIO(17); OUT_GPIO(17);
    INP_GPIO(4);  OUT_GPIO(4);
    INP_GPIO(1);  OUT_GPIO(1);
    INP_GPIO(0);  OUT_GPIO(0);
}

A Raspberry Pi Electronics Experiment – TMP36 Temperature Sensor Trials and Failures

I mentioned in my last post that I had received a Raspberry Pi electronics starter kit from SK Pang Electronics as a gift for Christmas, and between studying for exams, I have been experimenting with the components in the kit. Apart from ensuring that the components I received actually work, I still haven’t got much past the “flashing LEDs in sequence” experiments. I think I need a few more components to really experiment properly – transistors, capacitors, et cetera – but I have had a bit of fun with the components that I did receive.

Not everything has been entirely fun, though. One component, the TMP36 temperature sensor which I received with the kit, led to a struggle to work out how it operated. On the face of it, this should be – and if you test it without the full circuit that I first tried it with, is – one of the easier components to deduce the operation of. My temperature sensor is in a three-pin TO-92 package, with one pin accepting an input voltage between 2.7 and 5.5V, another connecting to ground and a third giving a linear output voltage, with 500mV representing 0°C and a change of 10mV for every degree Celsius above or below 0°C. So far, so simple. The problem is that I made things rather difficult for myself.

The Raspberry Pi, unlike a dedicated electronics-kit microcontroller like the Arduino platform, doesn’t have any analogue input or output. In order to get analogue input or output on a Raspberry Pi, you need either an analogue-to-digital converter for input or a digital-to-analogue converter for output. This wasn’t a big deal; both the MCP3002 ADC and the MCP4802 DAC came with the SK Pang starter kit and I had just successfully tested the 10kΩ Trimpot that came with the kit with the ADC. My self-inflicted problems occurred when I thought (ultimately correctly) that the three-pin package of the temperature sensor looked like it would be an adequate drop-in replacement for the Trimpot. So, I plugged in the temperature sensor based on the schematics in front of me and tried running the program to read in and translate the readings from the ADC.

As I started the program, I noted that I was getting a reading. So far, so good, I thought. Then I decided to press on the temperature sensor to try adjusting the reading – at which moment I noticed that the sensor was alarmingly hot. Disconnecting the sensor as quickly as I could, I thought to myself, “Oh crud, I’ve just ruined the sensor before I could even try it properly!” Following the directions for TMP36 use that I found on the internet, I allowed the sensor to cool before plugging it back in – the right way around, this time – and tried the ADC translation program again.

I was still getting a reading, but this time, I was more wary; I did not know whether it signified a correct measurement or not. With the aid of an Adafruit tutorial written precisely to help people use the TMP36 with the Raspberry Pi, I modified the ADC translation program to convert the values into temperature readings. Another problem ensued: the readings I was getting were far too low for the room I was in. I tried to find a solution on the internet by reading forum posts, tutorials and datasheets, but little of it made sense to me.

Eventually, though, at least one of the sources gave me the idea of using a multimeter to test whether the output voltage on the sensor’s middle pin was reasonable. I plugged the TMP36 directly into the 3.3V supply on the Raspberry Pi and tested the voltage across the input and output pins. It showed approximately 3.3V, so there was no short circuit in the temperature sensor itself. I then tested the output voltage on the middle pin, and this showed a value corresponding to something much closer to the 20-22°C I was expecting from my room at the time. As far as I could tell, the temperature sensor hadn’t been damaged by the brief overheating it had experienced. However, at this point, I had other things to do and had to leave my experimentation.

Eventually, though, I got back to experimenting with the TMP36 and tried plugging it into the ADC again. It was still giving the same low readings, and I still didn’t completely understand whether the sensor, the ADC or the program I was running was at fault. At a loss to explain what was going on, I shelved the temperature sensor experiments and tried understanding the code for the other components so that I could attempt my own experiments.

Some more looking around on the internet pointed me towards the answer, though. The datasheet for the TMP36 suggests the use of a 0.1μF bypass capacitor on the input to smooth out the input voltage, but that didn’t really sound like the issue I was having – it seemed more as if a low voltage was reaching the ADC. A forum post gave me an idea: use a multimeter to test the voltage across the TMP36 while it was connected to the ADC, along with the sensor’s output voltage with the full circuit running. So I did, and again, the temperature sensor had 3.3V across it and about 740mV of output voltage on the middle pin. Perplexed, I tried testing the voltages across the ADC itself.

It was at this moment that one little sentence from the forum post gave me the answer: the problems with using the MCP3002 to read the temperature sensor’s voltage were down to the input impedance of the ADC rather than any fault in the sensor. Both the ADC and the sensor were working correctly, but because of that impedance – the supply across the ADC is 3.3V, yet the voltage between the input pin and the channel read pin is, at least on my MCP3002, 2.58V – the readings came out wrong. A bit of modification to the ADC translation program, and I had the sort of readings I expected, both from the output voltage of the temperature sensor and on the screen where the results were being printed.

Rather a long-winded set of tests for a simple problem, eh? I suppose much of the problem lies in my putting the cart before the horse and trying experiments with my only knowledge of electronics being my long-faded memories of secondary school physics. In any case, the problem was found, and a problem in my own lack of experience was also found, which I can start rectifying soon enough.

A Showcase of the Internal Construction of the Game Boy Advance Cartridge


Recently, after completing The Legend of Zelda: A Link to the Past, I was struggling to think of what game to play next. A gaming session with some of my friends pointed me towards the Pokémon games, which I had not played in quite a while. Instead of jumping straight into the newest game in the series that I own, Pokémon Diamond, I decided that I’d play through the Game Boy Advance games first. Taking Pokémon Sapphire out of storage revealed that the internal battery had died, and that I would have to open up the cartridge case in order to replace it. Having opened the case and identified the battery type I’ll need to purchase, I got to wondering what the internal structure of the other Game Boy Advance cartridges I own looked like.

My (admittedly small) collection of Game Boy Advance games includes games with about five different internal patterns, ranging from simple patterns with a ROM chip and a few surface-mount capacitors to the complicated pattern found in the Pokémon Ruby and Sapphire cartridges, which contains a ROM chip, a large Flash memory chip, a real-time clock and a large set of surface-mount components.

The simplest internal structure was found in my copy of MotoGP among others.

The most substantial component in this pattern is the large Mask ROM chip that dominates the centre of the cartridge. The circuit board is marked with the lettering “U1 32M MROM”, suggesting that this chip has a capacity of 32 megabits, or 4 megabytes. The chip connects to the Game Boy Advance via traces leading to the bottom of the cartridge, where several copper-coated contacts meet the console’s cartridge connector. To the left of the ROM chip, we can see three surface-mount capacitors, marked “C1”, “C2” and “C3”. Aside from these components, there is little to talk about in the remainder of the cartridge case. The construction of this cartridge is very simple, and it’s easy to see how it might work. One thing absent from this design, which we’ll see on other cartridges, is a memory chip for saved games – this game uses the rather archaic technique of providing passwords to the player in lieu of saving progress.

A more advanced pattern can be seen inside of the cartridge for Doom. Doom on the Game Boy Advance was a port of the version found on the Atari Jaguar console, a low-resolution version which was missing the Cyberdemon and Spider Mastermind enemies, along with the Spectres (invisible variants of the mêlée-based Demon enemies). Nevertheless, despite the limitations of the port, it proved to be one of the more accomplished first-person shooters of the Game Boy Advance.

The Mask ROM chip in this cartridge has been offset to the right to make room for the memory chip, taking the form of an EEPROM chip, presumably of the larger 64-kilobit capacity. This EEPROM chip provides non-volatile memory which is an improvement over the battery-backed RAM found in games for the Game Boy and Game Boy Color. The life span of a Game Boy Advance save state is effectively limited to the life of the EEPROM or Flash chip, a far greater time than the life of the CR2025 or CR2032 batteries of the Game Boy cartridges.

Unlike the PC version of Doom, where the game can be saved at any point in a level, the Game Boy Advance version only allows you to save at the end of a level, and so needs only to store the current level along with the player’s health, armour and ammunition. Save games are also limited to four, rather than the eight found in the PC version. Aside from the additional EEPROM chip, there is another surface-mount capacitor on this board which was not found on the MotoGP cartridge.

Despite the addition of save states to this game, it does not have a particularly complex pattern by the standards of other Game Boy Advance games. A design more typical of Nintendo first- and second-party cartridges can be seen in the cartridge for Golden Sun.

Golden Sun plays very much to the sensibilities of the SNES era of Japanese RPGs, despite being released about five years after the likes of Chrono Trigger and Final Fantasy VI. Yet it was exactly that quality which made it one of my favourite games on the Game Boy Advance, and made me an avid follower of the series up to the recent Golden Sun: Dark Dawn for the Nintendo DS. The Mask ROM chip on this circuit board has been moved to the left-hand side to accommodate the memory chip, a 512-kilobit Flash chip used to store up to three save files, containing such details as the player’s position in the game environment, the health status of the characters, how the Djinn are set on each character, and so on and so forth. The other surface-mount components are four capacitors, as found on the Doom cartridge, arranged differently to suit the left-mounted position of the Mask ROM chip.

A similar pattern can be found on the cartridge for Golden Sun’s sequel, Golden Sun: The Lost Age, along with the cartridge for Pokémon FireRed, as can be seen below.

The general layout of this cartridge is similar to that of the cartridge for Golden Sun, and while both games are RPGs, a layout precisely like the cartridge layout for Golden Sun can also be found in Mario Kart: Super Circuit. The FireRed cartridge differs from the Golden Sun cartridge in some subtle ways, using a different Flash memory chip, and with a few more surface-mount components, this time including resistors as well as capacitors. The Flash memory chip still does not extend across the entire space allocated to it, which could suggest that it is a 512 kilobit chip rather than a 1 megabit chip.

The final type of cartridge pattern that I have found is also the most complex one, belonging to Pokémon Sapphire. Unlike any other Game Boy Advance game that I have examined, Pokémon Sapphire possesses time-linked game mechanics which, despite not being as advanced as the similar features of the series’ Game Boy Color predecessors, still necessitate the use of a real-time clock.

The real-time clock is found above the Flash memory chip, which is found on the left-hand side of this cartridge. To the right, above the Mask ROM chip, is a CR1616 button cell which powers the real-time clock. This is the component that will have to be replaced in order for all of the features of this game to work correctly, as certain events are linked to the time of day and the progression of time. None of them are critical to the completion of the game, but it still annoys me to have an incomplete game for the sake of a cheap button cell.

The Flash memory chip on this circuit board is substantially larger than those on the other cartridges with Flash memory, which suggests that this is of the 1 megabit capacity rather than the smaller 512 kilobits found in the other cartridges. As well as that, there are more surface-mount components on this board than on the others, with a larger set of resistors and plenty of capacitors. Another component, marked “X1”, is prominent in the top left-hand corner, beside the RTC chip, although its use is a mystery to me. It may be some sort of transducer, based on the decoding of the reference designator, but aside from this, I have no real idea what the component could be used for.

UPDATE: I must be some sort of electronics dunce for only realising this now, but the component marked “X1” on the last picture is probably a quartz crystal oscillator for the real-time clock IC.

Revolutionary Technology in Formula One: Downforce-Generating Wings

As the lessons demonstrated by Colin Chapman’s use of the monocoque chassis filtered down through the rest of the Formula One grid, the cars changed shape towards the cigar-like form typified by the bodywork of the 1966 and 1967 seasons. In 1966, there was another change in the regulations, this time allowing three-litre engines which produced in the order of 350 to 400 bhp, about twice the power of the engines used from 1961 to 1965. With such a surfeit of power, the cars were unpredictable and wild, and a bit of extravagant cornering wouldn’t sacrifice too much time around a lap. Within a few years, though, both the bodywork of the cars and the driving styles had begun to change, as the cars began to be pushed down into the track by aerodynamic effects and driving became more precise to compensate.

As with other revolutionary developments in the world of Formula One, the changes in this period were derived from the world of aeronautics. It had been known for a very long time that an aerofoil could generate lift in accordance with Bernoulli’s principle, and aeronautical engineering had progressed in leaps and bounds during the years of the Second World War. Ideas had been circulating around the Formula One paddock for years about the effect of an inverted aerofoil, which would work in the opposite way to a typical aeroplane wing, and indeed, a few minor experiments had been tried with this idea in motor racing, including Jim Hall’s experiments with the Chaparral racing cars in the mid-1960s. Where an aeroplane wing generates lift by creating a region of lower pressure above the wing than below it, an automotive wing reverses the pressure differential, creating lower pressure beneath the wing and pushing the car down onto the track.
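The effect can be summarised with the standard lift equation from aerodynamics; for an inverted wing the force acts downwards rather than upwards:

```latex
F_{\text{down}} = \tfrac{1}{2}\,\rho\,v^{2}\,S\,C_{L}
```

where $\rho$ is the air density, $v$ the speed of the airflow over the wing, $S$ the wing’s planform area and $C_{L}$ its lift coefficient. Because the force grows with the square of speed, doubling the speed quadruples the downforce, which goes some way to explaining why the gains first proved decisive at fast circuits.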

It took until 1968 for a downforce-generating wing to find its way into Formula One. Ferrari, having apparently got over the period of conservatism which had cost it development time against the early garagiste teams, and Brabham were the first teams to try placing an aerofoil onto their cars. At the 1968 Belgian Grand Prix, held at the fast, flowing Spa-Francorchamps circuit, Ferrari used a high-strutted rear aerofoil balanced by little tabs mounted on the front of the nosecone, while Brabham used a lower-mounted rear wing balanced by larger front winglets. Neither Brabham affected the race much, both retiring with reliability issues, but the Ferrari of Chris Amon easily snatched pole, four seconds ahead of Jackie Stewart in his Matra.

Amon was challenging for the lead when his radiator gave up, ending an interesting experiment. To be fair, the Ferrari was already a quick car, with the wingless car of Amon’s teammate, Jacky Ickx, finishing third, but the proof was there that wings were a useful addition to Formula One cars. Meanwhile, Bruce McLaren took a maiden victory for his eponymous team, while other teams looked on and wondered what they could do with the new aerodynamic aids.

Lotus was, unsurprisingly, one of these teams. With Colin Chapman having an interest in aeronautical developments, and having introduced an idea found in aeroplane design into his racing cars before, it had not escaped Chapman’s attention that a reversed aerofoil could be used in this fashion, even before Ferrari and Brabham tried their own experiments. The Lotus Formula One cars soon sprouted wings, which were bolted onto the suspension and towered up into the air on thin struts in a decidedly ungainly fashion. The highly-mounted wings suffered less from turbulence than wings mounted lower down, but were, as several incidents the following year would demonstrate, highly dangerous.

By the end of 1968, Graham Hill had taken his second World Drivers’ Championship driving for Lotus, which took the Constructors’ Championship along with it. The Lotus team, with an exceptional car powered by a refined Cosworth V8 engine and using the nascent technology of its aerodynamic aids to its advantage, prevailed in a year in which their top driver, Jim Clark, was killed early on in a Formula Two race and Graham Hill had to step up to lead the team. As the year went on, more teams saw the advantages of downforce-generating wings, which spread throughout almost the entire grid.

By 1969, the high-strutted rear wing of the Lotus 49B had been joined by an equally tall front wing which towered over the front suspension. Other teams, including McLaren, had similar wing layouts, but these proved problematic. The tall struts on which the wings were mounted proved fragile, as demonstrated in the practice session at the first Grand Prix of the season, held at Kyalami in South Africa, and the practice of mounting the wings to the suspension also proved troublesome. When both Lotus cars crashed out of the Spanish Grand Prix a couple of months later, downforce-generating wings were temporarily banned, only brought back when the rules were rewritten to permit low-mounted wings bolted to the chassis. The wings of today’s Formula One cars roughly resemble the layout of the later Formula One cars of the 1969 season, although they are far more refined.

The aerodynamic expertise of the Matra team helped them win both the Drivers’ Championship, with Jackie Stewart at the wheel, and the Constructors’ Championship by significant margins. Lotus only reached third in the Constructors’ Championship, as a season of unreliability for Jochen Rindt and several finishes out of the points for Graham Hill left them floundering. Development wasted on the four-wheel-drive Lotus 63 kept them from focusing their full attention on the car with more potential, although Matra and McLaren tried their own four-wheel-drive systems with little more success. The aerofoil was clearly the way forward and the best way to maintain grip in a Formula One car.

Since the late 1960s, aerodynamic wings have been an omnipresent sight on Formula One cars, and have evolved from simple aerofoils to sophisticated items designed to channel the air as precisely as possible to the most efficient places to create downforce with a minimum of drag. The wings have changed shape considerably through the years, with the development of the Gurney flap, among other things. During the 1970s, large table-shaped rear wings were the norm, with some peculiar front wing designs throughout the years, while some of the cars in the early 1980s shed their front wings in the era of ground effect.

The cars of the early 1990s had noses mounted close to the ground, but by the middle of the decade, most of the front-runners had changed to a more highly-placed nose more reminiscent of today’s cars. Sculpted front wings, designed to push the air towards various critical places on the chassis, have been a notable part of recent Formula One cars. Whatever their configuration, though, the aerodynamic effects of the wings have been critical for success in Formula One almost since their first development, and they not only changed the dynamics of Formula One cars permanently, but also the appearance, as the large wings of today’s Formula One cars are their most obvious element, even to an unfamiliar spectator.

Revolutionary Technology in Formula One: Composite Materials

Since the first development of racing cars, engineers have sought ways of making them quicker. Physics dictates that one of the most crucial elements in a design which is to be quick in all areas of racing is to reduce the mass of the car as much as possible. Steel bodies were therefore superseded by aluminium alloys, which left the cars with less inertia. The monocoque chassis, previously discussed in Revolutionary Technology in Formula One: The Monocoque Chassis, decreased mass further, bringing cars down to the order of 450 kilograms without fuel and driver. By 1966, though, with the arrival of the 3-litre formula and the corresponding increase in mass, the developments in conventional aluminium construction had reached a plateau. One team, new to the sport, would take the lead in introducing a method of construction which would develop into a fundamental part of all future Formula One cars.
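To put rough numbers on the mass argument, here is a minimal sketch, using purely illustrative round figures rather than data from any real car: at a given speed, the instantaneous acceleration available from a fixed engine power scales inversely with mass, so every kilogram shaved from the chassis pays off all around the circuit.

```python
# Illustrative sketch: how chassis mass affects acceleration at a fixed
# engine power. All figures are hypothetical round numbers, not data
# from any actual Formula One car.

def acceleration(power_w: float, mass_kg: float, speed_ms: float) -> float:
    """Instantaneous acceleration a = P / (m * v), ignoring drag and losses."""
    return power_w / (mass_kg * speed_ms)

POWER = 300_000.0   # roughly 400 bhp, in watts
SPEED = 50.0        # 50 m/s, about 180 km/h

# A ~450 kg chassis plus driver and fuel, versus a heavier construction
light = acceleration(POWER, 600.0, SPEED)
heavy = acceleration(POWER, 750.0, SPEED)

print(f"lighter car: {light:.1f} m/s^2, heavier car: {heavy:.1f} m/s^2")
```

The 25% difference in mass translates directly into a 25% difference in available acceleration, which is why the pursuit of stiff yet light construction never stopped.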

Bruce McLaren had already impressed people in the sport with several podiums and a few wins in the mid-engined Cooper cars which dominated the 1959 and 1960 seasons and had proved reasonably successful throughout the early 1960s. When John Cooper insisted that McLaren run 1.5-litre Formula One engines in the Australasian Tasman Series rather than the permitted 2.5-litre engines, McLaren set up his own racing team, competing with custom-built Cooper cars. With a championship win in the series, McLaren set his sights on Formula One, judging the Cooper team to be slipping down the ranks from their once-dominant position.

Bruce McLaren contracted Robin Herd, a former engineer on the Concorde project, to design a car. Herd produced the M2B, a car designed around a material named Mallite. Mallite was composed of a sheet of balsa wood covered on both sides with aluminium alloy, making the material stiffer than the conventional duralumin alloy used in other cars of the time. Another composite material, fibreglass, was used for some of the ancillary parts of the bodywork, such as the nose and engine cover.

All of these materials made for a light yet stiff chassis which might have enjoyed some success had it not been for the unreliable engines that the McLaren team used in a season which emphasised reliability. However, Mallite, being an inherently inflexible material, was difficult to use in car designs in which curves and aerodynamic shapes were important. Formula One cars would therefore remain relatively conventional in construction for several seasons, with the exceptions of the titanium-incorporating Eagle Mk1 and the disastrous magnesium-skinned Honda RA302. Composites would not remain a niche material in Formula One forever, though, and McLaren would once again be the team to bring the new developments to the table. Unfortunately, Bruce McLaren’s death in 1970 guaranteed that he would not see the success his team would attain.

By 1981, McLaren had won two World Drivers’ Championships and one World Constructors’ Championship with their long-lasting M23 design, and had been competitive throughout most of the 1970s. In the midst of a downturn for the team, McLaren merged with a Formula Two team called Project Four, owned by Ron Dennis. The merger gave engineer John Barnard the resources to put his revolutionary new MP4/1 design onto the race track. The MP4/1, for Marlboro Project Four/1, had a monocoque composed entirely of carbon-fibre reinforced plastic, a light, stiff composite material then used primarily in aerospace design, and it was the first monocoque automotive chassis built from the material.

The decision to use carbon fibre would prove to be an inspired one. Not only did the MP4/1 bring McLaren their first victory since 1977, but it arguably contributed to the relative lack of injury suffered by John Watson in a horrifying crash at the Lesmo curves during the Italian Grand Prix. The material had truly been given a trial by fire, and despite its expense, it was demonstrably suited to motor racing.

The 1982 season was, by most accounts, a disastrous one, and definitely an annus horribilis for the sport. Two drivers died, others suffered serious injuries, and the eventual champion won by sheer consistency and reliability rather than the blazing speed of his car. For McLaren, however, the year wasn’t all bad: the return of Niki Lauda to the cockpit after a sabbatical lent the team additional experience and a still-competitive driver.

The Ferrari team, whose 126 C car proved the best of the field in 1982, also incorporated carbon fibre into their car design, although not to the extent of the McLaren team. Unfortunately, they suffered an early tragedy in the death of Gilles Villeneuve after a dispute with his teammate, Didier Pironi, over the results of the farcical San Marino Grand Prix. Didier Pironi’s success later in the season made him the championship favourite until he suffered a career-ending crash in qualifying for the German Grand Prix. This left the championship open to several competitors, including Alain Prost, Keke Rosberg and John Watson. Watson came close to winning the championship, but was beaten by Rosberg’s superior consistency, achieved in an inferior car without the turbocharged engines of the front-runners. As this would prove to be the last championship for a naturally-aspirated car until turbocharged engines were banned in 1989, the predicted form of the year was further shaken up.

The 1983 season would not prove as successful for the McLaren team, and by then, many of their competitors had caught up with McLaren in the incorporation of carbon-fibre monocoques. Lotus, Alfa Romeo, Renault and Brabham had all taken cues from McLaren, and Brabham’s innovative, arrow-shaped BT52 model was the best suited to take advantage of the banning of ground effect from the rules in response to the tragedies of 1982. McLaren suffered a series of retirements which put them well out of contention for the Drivers’ or Constructors’ Championships, while their competitors took advantage of a technology that McLaren had introduced.

However, the 1984 season would allow McLaren to reap the rewards of their development with the new McLaren MP4/2. The mixture of a refined chassis with a powerful, yet reliable and fuel-efficient TAG-Porsche engine allowed McLaren to dominate the season, with a straight fight between Niki Lauda and Alain Prost, the latter having moved to McLaren after missing out on the Drivers’ Championship by only two points. In the end, Lauda won his third World Championship by half a point over Prost, while McLaren easily won the Constructors’ Championship, a just reward for their efforts. Carbon fibre was in the sport to stay, and while some less well-funded teams retained the older aluminium-alloy construction in their cars for a few years afterwards, they would eventually have to follow suit as the power outputs of the turbocharged engines demonstrated that the old aluminium monocoques were no longer stiff enough for the job.