Probing The Inaccuracies: Motorsport

I’m a fan of motor vehicles, something which can easily be identified by bringing up the subject in conversation with me. There’s something about roaring engines made up of hundreds of mechanical parts moving synchronously, and the sight of a motor vehicle moving rapidly that inspires me. It should therefore not be entirely surprising that I have recently acquired a taste for motorsport. Unfortunately, motorsport is not necessarily a particularly accessible sport, and I’ve heard quite a few misconceptions about it which I need to address, like:

“Racing is just driving around in circles! I could do that!”

For obvious reasons, this is one of the most commonly repeated sentiments regarding the sport, usually recited by those who have almost no experience with it at all. Unfortunately for them, it is also the most easily debunked misconception, and repeating it just damages whatever credibility they might have when raising valid complaints.

First and foremost, very few tracks in the world are actual circles; most are ovals at the very least, and the majority are road or street circuits. A few circular tracks do exist, including the Nardò high-speed test track in Italy, but these tracks are invariably not used for racing. Indeed, circular tracks make for very unsatisfying racing. Because there are no braking points on a circular track, the cars will eventually just travel at either the highest speed that the tyres can manage without slipping, if the track has a relatively small radius, or at their maximum speed if the track has a large radius.
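To put a rough number on that tyre-limited speed: on a flat circle, the fastest a car can go is set by balancing the centripetal force needed against the grip available, which works out as v = sqrt(μ × g × r). The little sketch below uses made-up figures for the friction coefficient and the radii – it ignores banking and downforce entirely – but it shows why a tight circle caps everyone at the same modest speed, while a huge one simply becomes a top-speed contest.

import math

def max_corner_speed(radius_m, mu=1.0, g=9.81):
    # Tyre-limited speed on a flat circle: mu*m*g = m*v^2/r, so v = sqrt(mu*g*r).
    # mu = 1.0 is a rough road-tyre figure; racing slicks manage rather more.
    return math.sqrt(mu * g * radius_m)

for radius in (200, 500, 1000):  # metres; illustrative radii, not any real track
    v = max_corner_speed(radius)
    print(f"radius {radius} m -> about {v * 3.6:.0f} km/h ({v * 2.237:.0f} mph)")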

The absence of braking points removes several of the main dynamics of motor racing and leads to two unsatisfactory conclusions – if one car can maintain a higher speed than the others, it will undoubtedly win, and if the cars are all close enough to stay in a pack, the only way to get any overtaking is to get into the slipstream of an opponent and hope to slingshot past them. The latter sort of race is bound to be accident-prone, as demonstrated by the superficially similar NASCAR restrictor plate races, where the bunched-up field regularly leads to multi-car pile-ups.

If this looks like a circle to you, perhaps you need your eyesight checked.

Clearly, this argument isn’t meant literally, though; it’s more a way of disparaging racing drivers for receiving what the people making the argument perceive as too much credit for an ostensibly easy sport. Of course, this argument is easily shot down as well. Motor racing, whether it’s autocross or single-marque racing up to the fastest cars in Formula One and the Indy Racing League, is not easy.

In order to be a successful racing driver, there are several attributes which you must have – ones that don’t necessarily exist in the wider populace. You must be able to control a car or motorcycle at speeds exceeding 100 miles per hour, while racing rivals try to get past you. You must be spatially aware and capable of figuring out the physical characteristics of the vehicle under all conditions, and do so unconsciously. You must be able to communicate effectively with engineers and mechanics on the technical details of the vehicle and how you wish it to drive. These are not skills which exist within the majority of the non-racing populace, many of whom seem to think that driving goes no further than turning the steering wheel and operating the pedals and gear stick.

Motor racing is not only mentally difficult, but can be physically demanding as well. Depending on the characteristics of the car, the first physical difficulty can arise with actually getting the car to turn. While power steering systems have made it easy for any sort of driver to turn a steering wheel at lower speeds, this ease of turning doesn’t necessarily translate directly to high speeds, where momentum and inertia, along with other physical forces, can drastically affect how a car handles. Other physical effects on the body include the high G-forces resulting from increasing cornering speeds and the inevitable buckets of sweat produced by a racing driver on the edge. As for motorcyclists, the constant, almost imperceptible shifts in body mass needed to race a motorcycle put them almost in a league of their own when it comes to physical strength and fitness.

It soon becomes apparent upon closer examination that motor racing is a far more difficult sport than most people would give it credit for, but the argument persists. Indeed, arguments along this line are often made by fans of specific types of motor racing wishing to disparage other classes of racing, by people who should probably know better. These include:

“NASCAR is just a bunch of people turning left for hundreds of miles. How hard can that be?”

I’m going to be fair right now and admit that I’ve disparaged NASCAR in the past for what I’ve seen as a lack of entertainment value. Most of the circuits in NASCAR are ovals, rather far removed from the road and street courses that my favoured classes of motorsport usually race on. But while I may criticise NASCAR for what the racing looks like to a detached spectator, I do know something of what it’s like inside the actual car, and I maintain respect for the drivers who manage to muscle these heavy machines around the track.

For the unfamiliar, NASCAR (National Association for Stock Car Auto Racing) is an American stock car racing series, raced using “silhouette” car models which are ostensibly based on road cars manufactured by Chevrolet, Ford, Dodge and Toyota, but which are really homologated prototype cars built around a standard specification. The tracks used by NASCAR are predominantly anti-clockwise ovals, but two clockwise road courses are included in the top-echelon Sprint Cup Series. Due to the lack of downforce created by the car body, along with the closely matched specification of all of the cars, the series exhibits a lot of overtaking, and dozens of lead changes can occur during a single race.

NASCAR gets a lot of criticism, both from domestic and international sources, for being an overly simplistic representation of motor racing, and indeed, for apparently being easy. Let’s get things straight immediately: NASCAR is far from easy, and the cars are as much of a contributor to the difficulty of the sport as anything else. NASCAR was, as its name suggests, originally raced using unmodified road cars, but in the 1960s the cars moved to a homologated standard, partly to avoid the sort of runaway technical development with which teams like Lotus and Cooper were then dominating Formula One.

Today’s NASCAR Cars of Tomorrow (that’s the official name, by the way) thus bear a reasonable resemblance to the American cars of the 1960s, with cast-iron V8 engines using pushrod valves and carburettors, in contrast to the overhead camshafts, aluminium alloy construction and fuel injection systems of today. One of the most obvious characteristics of NASCAR racing cars is their considerable mass compared to other racing cars, which makes them more difficult to drive than they appear to a distant observer. As I’ll discuss below, more mass increases inertia and momentum and decreases the speed at which a car can turn before the tyres begin to slip, affecting acceleration, braking and cornering respectively.

Most of the criticisms of NASCAR seem to centre around the fact that the cars largely turn left during the oval races, and that this is therefore not legitimate motorsport. Firstly, let me counter with a few words of my own: Watkins Glen International, Infineon Raceway. Secondly, turning left in a NASCAR racing machine at full pace is quite a bit different to turning left at road speeds in a road car, and uses a very different set of skills. NASCAR, unlike series raced predominantly on road or street circuits, rewards consistency and smooth driving rather than abrupt braking and acceleration.

Staying on the racing line at the longer tracks like Indianapolis requires one to take each corner at significant pace, as excessive slowing down will just open up an opportunity for somebody to overtake. Meanwhile, at shorter half-mile tracks, including the notoriously difficult Martinsville Speedway, the turns are much tighter and more closely resemble corners on road circuits than they do corners on longer ovals. Each of these corners needs to be negotiated in a car with several hundred brake horsepower transmitted through the back wheels, which makes it tail-happy under acceleration and rather reluctant to turn under braking.

All of this has to be done with a packed grid regularly consisting of more than forty cars jostling for position, which all comes to unhappy conclusions for many of the drivers when the cars begin to crash. This is a phenomenon which NASCAR is well known for, and while the rate of crashes in NASCAR is often exaggerated, when they do happen, they tend to be big. This is a natural consequence of oval racing using cars which are reluctant to stop; crashes often occur at speeds in excess of 150mph, which means a lot of momentum and kinetic energy.

Apparently, the only reason anybody ever watches NASCAR.

The phenomenon is accentuated at Talladega and Daytona, which are very long tracks with no real braking areas. To prevent deadly accidents, the cars run with restrictor plates over the carburettors to reduce the air and fuel flow to the engine, and thus the power it produces. The cars in these races travel in characteristic bunched packs, where the only way to overtake on track under normal conditions is to use the slipstream of the car in front of you to reduce the aerodynamic drag on your car and thus slingshot past. This technique, known as drafting, is commonly used at other NASCAR tracks, and in other racing series as well, but it is taken to its extreme at Talladega and Daytona, with cars travelling bumper-to-bumper in order to increase not only their own speed, but the speed of the car in front of them, in order to pull away from the pack.
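The physics behind drafting is straightforward enough: aerodynamic drag power grows with the cube of speed, so at 190mph most of the engine’s output goes into punching a hole in the air, and tucking into the hole behind another car saves a great deal of it. The figures in the sketch below are guesses on my part (the drag area and the size of the slipstream reduction vary enormously), but they give a feel for the effect.

def drag_power_kw(v_mph, cd_a=0.9, rho=1.2, draft_factor=1.0):
    # Aerodynamic drag power: P = 0.5 * rho * CdA * v^3.
    # cd_a is a rough guess at a stock car's drag area; draft_factor < 1
    # models running in another car's slipstream (the real reduction varies a lot).
    v = v_mph * 0.447  # mph -> m/s
    return 0.5 * rho * cd_a * draft_factor * v ** 3 / 1000.0

print(f"alone at 190 mph: roughly {drag_power_kw(190):.0f} kW spent fighting drag")
print(f"drafting (say 30% less drag): roughly {drag_power_kw(190, draft_factor=0.7):.0f} kW")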

This rather specialised type of motorsport, which leads to a certain breed of racing driver who can succeed at it, sometimes goes wrong. As mentioned above, NASCAR machines are rather tail-happy, and don’t take all too well to being pushed off the racing line. One car spinning out of control in a pack bunched up by restrictor plates can lead to a massive crash that can involve more than twenty cars. As those of you who have been in any sort of car crash will know, that sudden stop can hurt – or even kill. When you have dozens of cars flying around you, some of them spinning out of control, it adds another element to the crash, and one that would terrify most road drivers.

NASCAR isn’t the only racing series which is criticised (unfairly) for an ostensible lack of skill needed to succeed at it. The most popular motor racing series in the world, Formula One, is another target for invective, including the following:

“Formula One cars these days just drive themselves! How difficult is that?”

You know, I could almost understand this criticism coming from people who actually raced Formula One cars in the past. Formula One has evolved from the ultra-dangerous spectacle of the 1960s into a series where safety is paramount and the chassis is specially developed to absorb as much of the brunt of a collision as possible. Considering the balls it took to drive a Formula One car at racing speeds in the 1960s and 1970s, it would almost be understandable if they were less impressed with the people racing in the series today. However, the Formula One racers of the past aren’t the ones criticising the sport now; they have the proper amount of reverence, realising that the things which make Formula One difficult have simply changed since their day.

Formula One is one of the fastest racing series in the world, with cars reaching speeds in excess of 190mph, and sometimes in excess of 215mph at the likes of Monza and Spa-Francorchamps. While the top speeds are higher at the 24 Heures du Mans, the acceleration of a Formula One car is considerably more sudden than that of almost any other racing car, exceeding 1G off the starting line. Think about the force of gravity pinning you to the ground, and then think about that force pushing you back into your seat. That is the least significant force felt by a Formula One driver, which should give you an idea of what the difficulties of Formula One often revolve around.

Cornering forces are a lot more significant than the relatively puny forces felt under acceleration, reaching 3 to 5G under braking and cornering, and occurring several times a lap. Imagine that force of gravity, scale it up five times and imagine all of that load on your neck. Short of being a fighter pilot, you’re unlikely to experience these forces in everyday life, so let’s just say that it’s a lot more than you or I could reasonably sustain through even one corner, let alone several hundred. The cars are also sprung extraordinarily stiffly, which doesn’t help the drivers either; every bump on the road is transmitted through the chassis and into the driver. Ouch.
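To make those numbers a little more concrete, here’s a back-of-the-envelope sum. Assuming a head plus helmet weighing somewhere around 6.5kg – a rough guess on my part, not a measured figure – the neck ends up supporting the equivalent of a small child at 5G.

g = 9.81  # m/s^2
head_and_helmet_kg = 6.5  # rough assumption: ~5 kg of head plus ~1.5 kg of helmet

for load_g in (1, 3, 5):
    force_n = head_and_helmet_kg * load_g * g
    # express the sideways load as the equivalent mass hanging off the neck
    print(f"{load_g}G: about {force_n:.0f} N on the neck, "
          f"the same as holding up {head_and_helmet_kg * load_g:.0f} kg")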

Sometimes, this massive force is sustained over several seconds, as at the 130R corner at Suzuka, or the long, sweeping anti-clockwise Turn 8 at Istanbul Park. Sustaining these forces for the fraction of a second that it takes to get through most corners is bad enough; making that last any longer requires extraordinary neck muscles in order not to lose control. Add in the twenty-plus cars that you’re racing against, and the need to stay focused through every corner, and it soon becomes mind-boggling just how physically and mentally taxing the sport really is.

Spa-Francorchamps – one of the most difficult and fastest racing circuits in Formula One.

It is not much of a surprise, then, that Formula One drivers are immensely fit, and could post times in athletic events that wouldn’t be embarrassing. A lot of the exercise done by Formula One drivers concentrates on the neck, building the strength needed to resist the tremendous forces on their bodies through every lap.

The difficulties posed by Formula One do not just involve physical strains. Characteristics of the car conspire to make things difficult as well. Under the current rules, Formula One cars are powered by 2.4L V8 engines, producing about 750 brake horsepower at 18,000rpm. Such finely-tuned engines have a very narrow torque band, at the higher end of the rev range, and need to be kept at high revs in order to get the maximum out of them.

The problem, though, is that pushing 750 brake horsepower through the rear wheels out of a corner, where the aerodynamic components aren’t yet working optimally, invites a lot of wheelspin, and the power has to be controlled while still being transmitted to the track as quickly as possible. This demands quick reactions and the ability to countersteer these ferociously difficult cars. Since the removal of traction control, it has been entirely up to the driver to control the car, which immediately goes some way towards eliminating the criticism that the cars drive themselves.
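A rough sketch of why corner exits are so treacherous: the force the engine can push through the rear tyres is roughly power divided by speed, while the grip available is roughly the friction coefficient times the load on those tyres, and at low speed there is very little downforce adding to that load. All of the figures below are invented for illustration – they are not real team data – but the imbalance they show is the point.

def drive_force_n(power_kw, speed_kmh):
    # Force the engine can put to the road at a given speed: F = P / v.
    return power_kw * 1000.0 / (speed_kmh / 3.6)

def rear_grip_n(mass_kg=620, rear_share=0.55, downforce_n=0.0, mu=1.7):
    # Grip at the driven wheels: mu * (share of static load + share of downforce).
    # Every figure here is a rough guess, not real team data.
    return mu * rear_share * (mass_kg * 9.81 + downforce_n)

# slow corner exit: barely any downforce, far more force than grip -> wheelspin
print(f"{drive_force_n(560, 120):.0f} N available vs {rear_grip_n(downforce_n=2000):.0f} N of grip")
# fast sweep: plenty of downforce, and less force available at speed -> traction
print(f"{drive_force_n(560, 250):.0f} N available vs {rear_grip_n(downforce_n=15000):.0f} N of grip")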

When you’re in a Formula One car, you have a very limited amount of space to move in, surrounded as you are by the monocoque. This also means limited visibility, which can be reasonably simulated by sitting on the floor and blocking your view of anything below chin level. That restricted view makes it rather difficult to work out whether there are any cars behind or beside you, which is rather troubling when you’re trying to fend off impending overtaking attempts.

As for the technological criticisms of Formula One, these are at least somewhat warranted. Formula One cars are stuffed to the gills with telemetry sensors and accelerometers, detailing every facet of the car’s performance in an attempt to shave precious hundredths of a second off lap times. This reliance on technology comes with the sport, and yet it could be said that it takes away from the purity of the sport, and it certainly makes everything rather expensive. It’s difficult to say where this should start and end, but technological development and innovation have been at the forefront of Formula One since the beginning, and nothing short of strict homologation, as in NASCAR or the IRL, will stop the teams from trying anything within the rules to improve speed – and that would take away one of the big advantages of motorsport in the real world, which will be discussed below.

Motorsport doesn’t just receive criticism for the driving; it regularly receives criticism from environmentally-conscious people who perceive some sort of wastage involved in the sport. As I am not a hemp sandal-wearing hippie, it gives me some pleasure to discuss the misconceptions found in this next point:

“Motorsport is just a massive waste of fuel!”

This is one of the most common criticisms levelled at motorsport, for fairly obvious reasons. There’s a germ of truth in there as well: as the consumption of a limited, non-renewable fuel has proven to be the most practical way to propel a motor vehicle, it is somewhat reasonable to assume that the sport is inherently wasteful and environmentally unfriendly. It may come as a surprise, then, to hear that motorsport was the catalyst for many of the developments, innovations and improvements in car and motorcycle design.

Engineers tend to be, as a rule, people who favour efficient solutions to problems. In motor racing, the chief problem is “How do we make this vehicle go around the set course in the least time?” There are several ways to achieve this, from minor changes in the car’s suspension or gearing, to increasing the power produced by the engine, but the most significant improvements usually come from decreasing the mass of the car. More mass in a car decreases acceleration, braking potential and cornering speeds, which are three very important characteristics in determining how quickly your car or motorcycle will go in all of the different circumstances it may be made to face.

Mass determines inertia, momentum and the centripetal force needed to corner; reduce it, and a vehicle will accelerate, brake and corner more quickly. There have been various clever innovations aimed at reducing the mass of a racing car, including the monocoque chassis and the use of lightweight materials such as honeycomb aluminium and carbon-fibre. Somewhere along the line, it occurred to racing engineers that one of the things contributing to the mass of a racing car is the fuel, and therefore, if the engine is more frugal or can produce more power from the same mass of fuel, the car will be quicker around a track.
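As a rough sketch of why the weight matters so much – using invented figures, and a deliberately simplified model in which the drive force, the downforce and the tyres’ friction coefficient stay fixed while only the mass changes – a lighter car accelerates harder for the same thrust, and gets more cornering benefit out of the same downforce.

g = 9.81

def accel_ms2(drive_force_n, mass_kg):
    # Straight-line acceleration for a fixed drive force: a = F / m.
    return drive_force_n / mass_kg

def corner_speed_kmh(radius_m, mass_kg, downforce_n, mu=1.5):
    # Grip-limited corner speed with fixed downforce D:
    # m*v^2/r <= mu*(m*g + D)  ->  v = sqrt(mu * r * (g + D/m)).
    # Extra mass dilutes the benefit of the downforce, so the heavier car is slower.
    return 3.6 * (mu * radius_m * (g + downforce_n / mass_kg)) ** 0.5

for mass in (620, 900):  # illustrative masses in kg, not any particular cars
    print(f"{mass} kg: {accel_ms2(8000, mass):.1f} m/s^2 from 8 kN of thrust, "
          f"{corner_speed_kmh(100, mass, 6000):.0f} km/h through a 100 m radius corner")

Fuel is simply one more contributor to that mass, which is where the next set of innovations comes in.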

Therefore, some of the innovations in vehicle design have included overhead camshafts and variable valve timing, along with improved fuel injection systems and electronic engine management. These innovations have improved the frugality of engines, meaning less mass has to be dedicated to fuel, and greater distances can be covered between pit stops. Eventually, these systems can be mass-produced for road cars, making the cars we drive every day better and more efficient in the process.

“That’s all well and good,” you may say at this point, “but how does that excuse the amount of fuel that racing uses to develop these systems?” Actually, motor racing consumes a surprisingly small amount of fuel compared to some common methods of transport used every day. A Formula One car has an engine which is more efficient for the speed it produces than your road car’s, including a sort of alternating V4 mode, where some of the cylinders are shut off to save fuel at low speeds. Unfortunately, such technologies are too expensive to put into standard road cars, but they demonstrate how far ahead of road car technology Formula One and other forms of motorsport can be.

If motorsport engines can be frugal compared to road cars, they are especially frugal compared to aeroplanes. Even a full season of Formula One can use less fuel than a single long-haul 747 journey, and as many of the people reading this will have travelled somewhere on an aeroplane, they can hardly complain about a racing series which works to improve the cars driven in everyday life.

 


NetHack – A Retrospective Gaming Review

As computer gaming becomes increasingly mainstream and accessible, and flashy graphics and polished presentation become the norm, it becomes difficult to believe that a game which sticks to the graphical standards of text-based terminals, has an idiosyncratic interface and is hardly instantly accessible would still be in development, let alone be a cult favourite. NetHack, a turn-based roguelike fantasy RPG first released in 1987 and still in development today, defies the industry’s conventional logic, retaining a dedicated fan base while sticking to much the same gameplay and graphical standards it launched with.

The plot of NetHack, insofar as it exists, is pretty much boilerplate fantasy: the player takes the role of an adventurer of one of many classes and mythical species, summoned by their god to make their way through the Dungeons of Doom and secure an artifact named the Amulet of Yendor. A lot of the setting elements take inspiration from conventional fantasy as well, with orcs, trolls, goblins, dragons and the rest of your cornucopia of mythical creatures. It’s when you get to the gameplay that things really start to become distinctive.

The gameplay of NetHack takes place on a tile-based map with randomly-generated levels, in the same vein as other roguelike games. As NetHack was originally developed for Unix minicomputers linked to dumb terminals, each tile is represented by a single ASCII character, which can represent anything from blank space to a creature to a piece of equipment. Immediately, the game conspires to make things more difficult than most people have come to expect from a computer game, and while graphical rendering packages for the game exist, most players stick to the ASCII graphics. The movement controls are based around the H, J, K and L keys, which is unintuitive to most modern computer users, but makes sense to those familiar with Unix, the same keys being used in the vi text editor.
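For those who have never met the vi-style keys, the mapping is easy to show. The snippet below is purely illustrative – it is not NetHack’s own code – but it captures how the eight movement keys translate into steps on the tile grid (the diagonals use Y, U, B and N).

# Illustrative only -- not NetHack's actual source code.
MOVES = {
    'h': (-1, 0),   # west
    'l': (1, 0),    # east
    'k': (0, -1),   # north
    'j': (0, 1),    # south
    'y': (-1, -1), 'u': (1, -1), 'b': (-1, 1), 'n': (1, 1),  # diagonals
}

def step(pos, key):
    dx, dy = MOVES[key]
    return (pos[0] + dx, pos[1] + dy)

print(step((10, 5), 'h'))  # one tile west of (10, 5) -> (9, 5)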

This picture, while looking completely incomprehensible to starting players, can easily be read by NetHack veterans.

At this point, the obvious question is: why would anyone want to learn such an idiosyncratic interface for a simple game? The answer is simple: NetHack is hardly a simple game. Indeed, it is one of the most expansive, detailed and logically-developed games ever made, with something new to learn almost every time you play. There is so much to do in the game and so much freedom of choice in how to play that every game can be a different experience.

Internal logic is consistent and surprisingly realistic for a fantasy game, and with over twenty years of development, just about everything you might choose to do has consequences. Some tools, including the pick-axe, can also be equipped as reasonably effective weapons in addition to their more conventional applications. Carrying too much weight will slow you down and can cause you to fall down stairs, but over time it can also increase your strength.

Even very complex interactions have been thought out; the cockatrice, a creature which can turn other creatures – or the character – to stone, is a prime example. A cockatrice corpse can be used as a weapon, instantly turning any creature it touches to stone, but it must be wielded while wearing gloves. If the character holding the cockatrice trips over, they will be instantly turned to stone – the consequence of falling onto the corpse. The character can also be transformed into a cockatrice through various magical means, and in that state lay eggs, which can later be thrown as weapons with the same petrifying effect.

This internal logic looks all the more impressive when one considers the sheer amount of content in the game. More than fifty levels are generated every time a full game of NetHack is played, and the game ably fills each of these levels with progressively more difficult creatures as one descends. Each of the thirteen classes which a character can take has its own distinctive set of specialties and characteristics. This leads to very different styles of play, some dominated by melee skills, others stronger with bows and other ranged weapons, and others with magic. All of this adds up to a game where completion can take a few days at least, and learning all of its hidden secrets can take years.

That is, if you ever get to complete it. NetHack is not only a difficult game to get into, but an even more difficult game as one progresses. Complacency is not an option when playing NetHack; almost everything in the dungeon can kill the unprepared player – or the unlucky one. It is entirely possible to die on the first level of the dungeon to animals as unassuming as kittens. In fact, it’s possible, as a knight, to die on the first turn by failing to correctly mount your horse, slipping off and cracking your skull. What’s more, the game gives you a single save file, which is erased as soon as the character dies, and is useful only as a way of saving progress if the game has to be stopped part-way through. The game is cruel to the unassuming, but to the aware, it makes for a potentially thrilling challenge.

For such a difficult game, though, it never really takes itself all too seriously, with plenty of whimsy and craziness throughout. This is a game where a remarkable share of experienced players’ deaths come at the mandibles of ants, and where shoplifting players get attacked by bumbling police creatures wielding cream pies and rubber hoses. As such, the game avoids that po-faced seriousness that occasionally pops up in the fantasy genre.

NetHack isn’t a game for everyone. Its interface and its difficulty leave it with limited appeal among the wider set of computer gamers, but the variety and challenge are what endear it to those willing to learn. What is more, you can be assured that the game will continue to be developed and to evolve; the source code is available to all who want it under the game’s own open-source licence, allowing any programmer with sufficient skill and time to contribute to the project.

Bottom Line: NetHack is very much a game with niche appeal, but to that niche it represents one of the most impressive bits of game design ever, with a real emphasis on substance over style.

Historical Operating Systems – AmigaOS

With the 1980s came the microcomputer revolution, and with it a chance for a wide variety of manufacturers to try their hand at producing a machine to compete in the rapidly expanding home computer market. Some machines proved very successful indeed, such as the IBM PC and the Sinclair ZX Spectrum, while others were destined to become cult classics, such as Acorn Computers’ BBC Micro, an educational computer built in conjunction with the BBC Computer Literacy Project, and the Microsoft-backed MSX standard, designed to tap into the potentially massive Japanese market. Yet others, finding that the market could not sustain such variety indefinitely, remained obscure even in their own time.

Most of these early home computers followed the same basic layout: a cheap 8-bit processor, often a MOS 6502 or a Zilog Z80, and a small amount of RAM, usually somewhere between 2 and 128KB depending on the specification. Such computers regularly plugged into televisions and used a command-line interface based around a simple, crude variant of BASIC carried on a ROM chip, many of those variants programmed by Microsoft. Then, in 1984, Apple released its Macintosh, and things started to change rapidly in the personal computer market.

With a graphical user interface based on the work of Apple’s previous, more expensive workstation, the Lisa – which in turn took design cues from the Alto and Star machines of Xerox PARC – the Macintosh was arguably too short of RAM, and too held back by its single-tasking model, for its earliest variants to be particularly useful. It did, however, introduce a far more user-friendly interface to the fray than the older command lines.

Commodore Business Machines was one of the lucky companies during the early 1980s, creating one of the iconic computers of the time: the Commodore 64. Relatively affordable, and with a generous amount of RAM, the Commodore 64 would go on to become the single best-selling computer model of all time. However, by 1985, the machine was beginning to look a bit long in the tooth to be sold as the flagship model for the company.

The original Amiga, later dubbed the Amiga 1000, was not originally designed by Commodore; it was developed by a group of discontented former Atari staff who formed a company named Amiga Corporation. Through several complicated deals involving Amiga Corporation, Atari and Jack Tramiel, the recently departed head of Commodore, Amiga Corporation was bought out by Commodore Business Machines, and the first Amiga was released in 1985.

Looking closely at the image on the screen, it looks like something that my second PC could produce – in 1996.

With a 16/32-bit Motorola 68000 processor and 256KB of RAM as standard, it was an amazingly quick machine for the time. As the machine had originally been intended as a games console, it featured impressive graphical and sound capabilities, which put it far ahead of most of its contemporaries. It also featured a very impressive operating system, known as AmigaOS, offering full pre-emptive multitasking when the standard operating systems of its competitors were limited to single-tasking or co-operative multitasking.

It’s sometimes difficult to contemplate just how much more flexible and powerful pre-emptive multitasking can be over the co-operative sort, especially if you’ve never used an operating system with co-operative multitasking. Pre-emptive multitasking is a development in operating systems which essentially underpins all modern personal computer operating systems, and allows for multimedia applications and for appropriate background processing.

Imagine that you’re playing a music or video file alongside another program. With a pre-emptive system, the operating system itself divides processor time between the programs. In contrast, with a co-operative system, it is up to the programs themselves to cede control of the processor to the other applications, and all it takes is one poorly-programmed application, or one which is a bit too selfish with its processor cycles, for your music to start skipping – or, even worse, stop playing altogether. As I think you’ll agree, this can get rather annoying.
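A toy sketch makes the difference concrete. Below is a deliberately crude co-operative “scheduler” written with Python generators – nothing to do with AmigaOS’s actual internals, purely an illustration – in which each task is trusted to hand the processor back. The moment one task refuses to yield, the music stops; a pre-emptive kernel would simply interrupt it on a timer, whether it liked it or not.

# A toy co-operative scheduler: each task must yield to hand the CPU back.
def music_player():
    while True:
        print("playing the next chunk of audio")
        yield                 # politely hands control back to the scheduler

def selfish_task():
    while True:
        pass                  # never yields: under co-operation, everything else stalls
        yield                 # unreachable

def run(tasks, rounds=4):
    for _ in range(rounds):
        for task in tasks:
            next(task)        # hangs forever once selfish_task gets its turn

run([music_player(), selfish_task()])  # prints one chunk of audio, then silence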

By providing full pre-emptive multitasking in 1985, AmigaOS was even further ahead of its contemporaries than it had been with its lauded graphical and sound capabilities. Mac OS wouldn’t even gain co-operative multitasking until MultiFinder arrived in 1987, and it took the development of Mac OS X at the turn of the millennium for it to finally gain pre-emptive multitasking. The IBM PC platform didn’t get a pre-emptive system until the development of OS/2 and Windows 95, and while some earlier computers had support for varying forms of UNIX, this was of limited utility, had no GUI (the X Window System being notoriously bloated at the time), and ran slowly on the hardware.

AmigaOS consists of two parts: the Kickstart ROM, which contains most of the necessary base components of the operating system in a stunningly small amount of space, and Workbench, the GUI layer for the OS, originally supplied on a series of floppy discs. Such a dual-layer system may seem odd to more recent adopters of computer technology, but in the days of limited permanent storage it proved an ideal way to allow for a complex operating system without compromise. It also allowed games to use all of the Amiga’s RAM without the GUI sitting resident and taking up precious memory; such games booted directly from floppy on top of the Kickstart ROM, bypassing Workbench entirely.

Aesthetically, the Workbench GUI of AmigaOS was arguably not as clean or attractive as Apple’s Mac OS to begin with, but it had the major advantage of colour output, which was not available on the Macintosh platform until 1987, and then only on the high-end Macintosh II machines with separate monitors. The Amiga’s ability to display 4096 colours was a major advantage in the gaming field that the machine had originally been designed for, and only the Atari ST, a similar sort of computer also built around a Motorola 68000, could really come close to the Amiga in terms of graphical power.

The Mac OS interface may have been more elegant, but the Amiga had the decided power advantage.

Unfortunately for Commodore, though, a focus on computer gaming and multimedia power gave the machine a “toy-like” reputation which was not to serve them well at a time when computers were only just making their way into businesses. The original IBM PC could hardly be described as a graphical powerhouse, but it was developed by a company which had up to then absolutely dominated the computer market. IBM’s reputation for business machines meant that the IBM PC became a de facto standard in the workplace despite not being as powerful as some of its competitors, and at a time when the computer market was homogenising, IBM managed to secure a healthy share of the high-price end of the market. As such, at this early stage, the Amiga did not manage to attain the success that its powerful hardware and advanced operating system would suggest it deserved.

By 1987, the Amiga computer line-up had diversified with the introduction of the low-end Amiga 500 and the high-end Amiga 2000, and with them came a new market for the Amiga. Capable of properly taking the fight to the Atari ST, the Amiga began to pull away from its less powerful competition at the low end of its market segment. The AmigaOS updates accompanying these early machines were of limited scope, but given how advanced the underlying code already was, the OS hardly needed updating.

People were beginning to discover the potential of the Amiga as well, with the powerful graphics hardware for an inexpensive price allowing for the editing of television shows by broadcasters who could not afford more expensive workstations for the job. With applications outside the gaming market, the Amiga managed to carve out its own niche, although this was still relatively insubstantial compared to the office and desktop publishing markets dominated by the IBM PC and the Apple Macintosh respectively.

On the home market front, the Amiga may have had the legs on the Atari ST, but there was another competitor holding it back. Just as the IBM PC had secured the office market, inexpensive IBM-compatible computers had acquired a significant share of the home market. The use of a relatively cheap Intel 8088 processor and an easily reverse-engineered BIOS in the IBM PC 5150 had led other companies to quickly sell their own cheaper variants of the PC architecture.

As the cross-compatibility between these machines and the IBM machines that occupied offices allowed people to bring their work home, the IBM architecture quickly got a foothold in the home market as well. Computer gaming, the forte of the Amiga, was never as big a priority at the time. By the time it was, IBM-compatible machines with more powerful graphics hardware had bridged the gap between their previously slow efforts and the advanced Amigas.

In 1990, the first significant change to AmigaOS came in conjunction with the release of the Amiga 3000, a complete upgrade to the Amiga architecture. Workbench 2.0 presented users with a more fluid and unified interface, in comparison to the somewhat messy and chaotic presentation of Workbench 1.x. The improved hardware in the Amiga 3000 gave the platform a new lease of life – if a short one – and some of the most technically advanced games of the time originally appeared on the Amiga, including the incredible technical achievement that was Frontier: Elite II, a space simulator designed by David Braben of Frontier Developments fame, which really made the most of the hardware.

This might not look like much now, but when I started using PCs in 1994, this was state-of-the-art.

To be honest, the demise of Commodore four years later looked inevitable given the increasing domination of the IBM-compatible architecture and its rapidly improving graphics technology. Commodore hardly helped things with some of their later products, though. The expensive CDTV, developed in 1990 and aimed more squarely at the living room than Commodore’s previous machines, failed utterly when slotted into the market beside the far less expensive Nintendo and Sega consoles of the time, both of which had a far greater variety of games. The later CD32 was less expensive, but the SNES and Sega Mega Drive made a complete mockery of Commodore’s efforts.

Commodore didn’t seem to do any better marketing their computers than their games consoles. The replacements for the Amiga 500 were intended to give Commodore something to contest the low-end market, but their sales were blunted by a marketing disaster which gave the public the impression that new “super-Amigas” would soon be on the market. Customers held back, creating further problems for the struggling company.

Finally, in 1994, Commodore was finished, going bankrupt and selling off the Amiga’s intellectual property in order to pay its tremendous debts. Along with the Amiga died the Commodore 64, which had amazingly lasted twelve years in a market which had accelerated considerably since its launch. Soon after came the release of Windows 95 and the earliest 3D graphics accelerators, which would have nailed Commodore’s coffin shut if their poor decisions hadn’t already done so. The Amiga had some final moments of glory after Commodore was gone, though – it was used to produce the visual effects for the acclaimed science-fiction series Babylon 5, for one thing.

Commodore may have been dead, but AmigaOS lived on – to some extent. Passing from company to company like its British contemporary, RISC OS, AmigaOS maintained a niche market of enthusiasts who were either unwilling to make the shift to the PC platform, or else wished to continue using their programs and games. The OS survives today, now at version 4.1 and marketed by a Belgian company named Hyperion Entertainment. The nostalgic sorts can indulge themselves using UAE (the Universal Amiga Emulator), which allows one to emulate a wide variety of Amiga hardware, from the earliest A1000s to the A4000s produced in 1994. UAE, as befits an open-source emulator, is available on several operating systems, including Windows, Mac OS X and Linux.

Like the Acorn Archimedes, a British contemporary of the Amiga which was itself ahead of the IBM PCs and Apple Macintoshes of the time, the Amiga was a computer which deserved to do well. Poor marketing on the part of Commodore may have had its role, but perhaps a more likely explanation for its failure was that the market wasn’t quite ready for a multimedia computer – or one that was dominant at computer gaming.

What is perplexing, though, is that such an advanced operating system didn’t provide more inspiration to its competitors. Its mixture of efficient programming (today, the Kickstart ROM is only 1MB!) and advanced multitasking could probably have lent more power to the PCs which took over, which only gained pre-emptive multitasking with Windows 95, a notoriously unreliable and bloated operating system. The relative homogeneity of the operating system market may have largely eliminated the problems of software compatibility, but at the cost of computing efficiency, and with mobile platforms becoming more prevalent, perhaps that’s something programmers should be taking a closer look at.

Why Google Chrome Is Not My Favourite Browser

As the sort of computer nerd that others could use as a yardstick, I’ve taken sides in my fair share of “religious wars” – Emacs over vi, KDE over GNOME, anything over iPhone OS. As such, it is to be expected that I would have picked sides in the constant battle over which web browser to use. This was, for a reasonably long time, mostly a two-horse race, with Internet Explorer in the lead by virtue of its presence on most people’s desktops, Mozilla Firefox following as the regular choice of the discerning internet user, and other browsers such as Opera and Safari remaining niche choices.

In September 2008, a new browser entered the fray, with the intention of taking as much market share as possible from the two major competitors. One of those periodic free programs released by Google, Chrome attempted to do things quite differently from its rivals by using a minimalistic interface devoid of the menus present in Firefox and, at the time, in Internet Explorer.

Google Chrome – a minimalist’s dream, my nightmare.

I suppose therein lies my major objection to Google Chrome. Being a relatively old-school computer user, despite my age, I’ve grown accustomed to the menu as a way of contextually picking commands. In fact, having seen the alternatives presented by Google, and Microsoft’s earlier attempt at replacing the menu in Office 2007, I’ve come to the conclusion that I much prefer menus. Since neither Internet Explorer nor Google Chrome sees fit to present me with the old menus-across-the-top structure that I’ve become acclimatised to, this leaves me with a single alternative among the “big players” – Mozilla Firefox.

Of course, I probably would have chosen Firefox anyway; it’s open-source, unlike Internet Explorer, Opera or Safari. The Chromium base of Google Chrome is open-source under the BSD licence as well, but that doesn’t apply to the executable installer, which comes under a rather restrictive set of licensing conditions. In comparison, Firefox uses the strongly copyleft GPL (GNU General Public Licence), which has a few interesting conditions: the end-user’s freedom to redistribute the source code can never be taken away, and any program incorporating GPL-licensed code must itself be licensed under the GPL. Using Firefox thus continues a trend where much of my most commonly-used software is open-source, from my browser to my office software to my media player and image manipulation program.

To be perfectly fair to Google Chrome, though, it does have some nice features which make up somewhat for what I see as a poor interface. Compared to its main competitors, Internet Explorer and Firefox, it’s blindingly quick. It also has the advantage of giving each tab its own process, but while you would think that this would constitute a major advantage for somebody who regularly has eight tabs open, and has had a record in excess of thirty open in Firefox at once, I’ve never really had any major problems keeping that many tabs open.

The main advantage of Google Chrome over its competitors would be its light memory footprint, not being subject to the absurd memory leak problems of Firefox. This would be a major criticism I’d level at Firefox, were it not for the fact that I’ve run it on a system with no more than 64MB of RAM (although I wasn’t doing anything else at the time), and that I’ve readily used it on systems with 128MB and 512MB of RAM in the past without major slowdown. So even this main advantage doesn’t really constitute enough of an improvement over Firefox to get me to switch.

Ultimately, like in all computer-related religious wars, your choice of web browser really comes down to preference, and in that vein, I’d like to say that I simply prefer the menu interface and strong copyleft licence of Firefox over the speed and simplicity of Google Chrome.

 


 

 

Google Chrome, thanks, but no thanks.