Back again after a long wait…

I recognise that it’s been a while since I last wrote anything here, but I’ve been keeping myself busy and haven’t been inclined towards writing. However, I have been working on a few things which seem rather relevant to the scope of this blog: I’ve got plenty of use out of the NAS that I mentioned in my last post, I’ve been working on electronics experiments and I’ve built up my Raspberry Pi collection with a new addition.

Synology DS416 – Performance Benchmarking

To get some understanding of the performance of my Synology NAS, which is connected through its two 1GbE ports to an 8-port Netgear GS108E switch, I decided to use IOmeter to pit it against the data storage drive in my computer, a 3TB 7,200rpm Seagate SATA disk. As artificial as IOmeter results can be, they still give a good idea of how the NAS compares against the SATA disk on which I put most of my non-OS files. Using a maximum disk size of 1,048,576 sectors (512MiB at 512 bytes per sector) and running each test for 20 minutes, I got the following results, where Z: corresponds to an iSCSI volume on the NAS and D: to the SATA disk:

4KiB; 50% read, 50% write; 50% random
Z: 692 IOPS, 2.84 MB/s, 23 ms latency
D: 185 IOPS, 0.76 MB/s, 86 ms latency

64KiB; 0% read, 100% write; 0% random
Z: 552 IOPS, 36.15 MB/s, 29 ms latency
D: 849 IOPS, 55.62 MB/s, 18 ms latency

64KiB; 100% read, 0% write; 0% random
Z: 1787 IOPS, 112.33 MB/s, 8.9 ms latency
D: 939 IOPS, 62.91 MB/s, 16.6 ms latency

In the 4KiB test, which would appear to be the most representative of real-world use of the storage as a standard drive (although I would expect more reads and more random accesses in that scenario), the NAS clearly had the advantage, with three times the IOPS, three times the throughput and a third of the latency. The NAS fares worse in the 64KiB write-only test, possibly as a consequence of the write penalty of RAID 5, in which the four disks in the array are configured, but it takes the lead again in the 64KiB read-only test. These two tests might represent me writing or reading large files in a manner befitting the drives’ purpose as archival storage. The read-only performance of the NAS also lines up with the maximum network read throughput that I’ve seen.
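As a quick sanity check on the figures above, throughput should roughly equal IOPS multiplied by the block size (IOmeter reports decimal megabytes per second, and its averages fluctuate a little, so the match is approximate). A short Python sketch using my results:

```python
# Sanity check: throughput (MB/s) ~= IOPS * block size in bytes / 1e6.
# IOmeter averages fluctuate over a run, so allow ~10% slack.
KiB = 1024

results = [
    # (IOPS, block size in bytes, reported MB/s)
    (692, 4 * KiB, 2.84),      # Z: 4KiB mixed
    (185, 4 * KiB, 0.76),      # D: 4KiB mixed
    (552, 64 * KiB, 36.15),    # Z: 64KiB write
    (849, 64 * KiB, 55.62),    # D: 64KiB write
    (1787, 64 * KiB, 112.33),  # Z: 64KiB read
    (939, 64 * KiB, 62.91),    # D: 64KiB read
]

for iops, block, reported in results:
    computed = iops * block / 1e6
    assert abs(computed - reported) / reported < 0.10
    print(f"{iops} IOPS x {block} B = {computed:.2f} MB/s (reported {reported})")
```

All six rows agree to within a few percent, which suggests the IOPS and throughput columns were measured consistently rather than being independent artefacts of the test.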

I’ve also installed a Windows 7 VM on a separate 1TB iSCSI volume on the NAS, which I’ve used with VMware Workstation to give me some ability to play Windows games without having to reboot my system. There has been some stuttering, which would be a deal-breaker in action games but has been tolerable in the strategy and WRPG titles I’ve been using it for, like Hearts of Iron III and Planescape: Torment.

Electronics experiments – a new collection of gear to try out!

The electronics experiments that I had been doing with my Raspberry Pi had been on the backburner for a long time. However, with Robot Wars back on the BBC recently and my younger brother interested in building a robot, I’ve decided to get back into electronics with the aim of learning enough to build a basic robot, at which point I can get my brother involved with various tasks. To that end, I’ve stocked up on a whole new set of components. For now, most of these are limited to things I can stick on a breadboard rather than motors and the like: MCP23017 I/O expanders to complement the MCP23008 that I already have, an alphanumeric LCD, a few AY-3-8910 sound chips, plus piezoelectric speakers, LM386 amplifiers, audio jacks and a few crystal oscillators for the sound chips. I’m also expecting a delivery of SN76489 sound chips within the month.

So far, I haven’t got very far; I’ve done some experiments to confirm that the MCP23017 chips and LM386 amplifiers work, but I’ll need to learn how to solder before I can test the alphanumeric LCD, and the AY-3-8910 will take some time to understand before I can get it tested.

Anyway, here’s a basic schematic using the LM386 amplifier:

The LM386 amplifier circuit, laid out on a breadboard.

Pin 6 of the LM386 is connected to a voltage source between 4V and 12V, while pin 4 is connected to ground. Pins 2 and 3 are hooked up to the ground and the positive line of the input from the audio jack respectively. A capacitor (the datasheet suggests 10µF) can be connected between pins 1 and 8 to raise the gain from the default of 20 up to 200, but I decided to go without and use the internal gain of 20 just to test that the IC worked. Pin 5, the output, is connected to the positive terminal of a piezoelectric speaker, which produces the sound, although the datasheet recommends adding various capacitors to smooth the output and reduce noise. Pin 7 is the bypass pin, to which a decoupling capacitor can be connected to reduce supply noise, but it goes unused in this circuit.
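The gain-of-20 and gain-of-200 figures come from the LM386’s internal feedback network: per the internal schematic in the datasheet, there are 15kΩ feedback resistors and a 150Ω + 1.35kΩ resistor pair in the gain path, with pins 1 and 8 exposing the 1.35kΩ resistor so an external capacitor can short it out at audio frequencies. Treat this as a rough model rather than a precise analysis:

```python
# Rough model of LM386 voltage gain, based on the internal schematic
# in the datasheet: gain ~= 2 * 15k / (150 + R), where R is the internal
# 1.35k resistor, optionally bypassed (at audio frequencies) by an
# external capacitor across pins 1 and 8.
def lm386_gain(internal_r_bypassed: bool) -> float:
    feedback = 15_000  # internal feedback resistance (ohms)
    emitter = 150      # fixed internal emitter resistance (ohms)
    r = 0 if internal_r_bypassed else 1_350
    return 2 * feedback / (emitter + r)

print(lm386_gain(False))  # pins 1 and 8 left open -> gain of 20
print(lm386_gain(True))   # capacitor across pins 1 and 8 -> gain of 200
```

A resistor in series with the capacitor gives any gain between the two extremes, which is why the datasheet describes the part as having gain adjustable from 20 to 200.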

A new Raspberry Pi and plenty of toys for it

The Raspberry Pi Zero has, since its launch, been one of the most desirable and hardest-to-find models of the single-board computer. I’ve been interested in having one for the novelty of such a small yet capable computer, but its propensity to be sold out has put me off. However, the Raspberry Pi Foundation recently announced the Pi Zero W, a variant of the Pi Zero with on-board Wi-Fi and Bluetooth in the same form factor, albeit at a price of $10 rather than $5. Since adding Wi-Fi and Bluetooth to a standard Pi Zero would generally cost more than the $5 premium of the Pi Zero W and require awkward USB adapters, the new model appealed to me and I decided to pick one up along with the official case.

Along with the new Pi Zero, I’ve also picked up the standard and NoIR camera modules, a Sense HAT and a Gertduino add-on board to provide the capacity of an Arduino Uno to my Raspberry Pis, especially the older Model B boards that I rarely use any more. As with the electronic components, I’ve only tested these enough to verify that they work, but I’m looking forward to using them when I find the time.

With respect to the Raspberry Pis that I already have, I’ve recently installed RetroPie on a spare microSD card for my Pi 3 Model B. I’ve been impressed by the Pi’s emulation performance; even PlayStation games have run smoothly, while older systems like the SNES and Mega Drive have worked excellently. I took the system for a spin with some of my friends on a retro gaming night; let’s just say that my Tekken skills could do with a bit of improvement! Installation of RetroPie is simple and accessible too; after copying the OS image onto the microSD card, everything else is straightforward, and ROMs and disk/disc images can be copied over with a USB drive without any particular effort.

A Brief Update

Since my last update, I’ve found myself rather busy with family affairs, but nevertheless have still been working on some technologically-focused aims of my own. I’m looking towards studying and becoming a Red Hat Certified System Administrator by the end of the year, a certification which I hope will serve as a good stepping stone for some of my career ambitions in the future.

In my home environment, I’ve finally got the AMDGPU open-source drivers working with my Radeon R9 290 on my openSUSE system; I haven’t had much of an opportunity to push the drivers hard enough to check for any performance increase over the FGLRX drivers I had been using, but it at least opens the door to using Vulkan in the future. I’ve also started playing a few games, but haven’t been able to finish them; these include Planescape: Torment and Doom (2016). Maybe when I have a bit more time, I’ll be able to get a bit further with them.

But the biggest news of the last three months on the technological side of things is my purchase of a Synology DS416 NAS for home storage. I’ve loaded it with four 4TB WD Red hard drives and set it up for RAID 5, giving a total advertised capacity of 10.90TB. At present, I’ve only allocated 4TB of this space for backups, which I generally access through NFS on Linux but can also share over SMB for Windows. With iSCSI capabilities on the system, and my own experience with iSCSI from my job doing technical support for a certain brand of enterprise iSCSI SAN arrays, I will likely have ample opportunities to set the system up as VM storage for either KVM or a practice VMware ESXi environment. I’ll report more on this when I’ve had a chance to do some performance benchmarking and to set up more environments on the system.
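That 10.90TB figure follows from RAID 5 keeping one disk’s worth of parity, combined with the usual unit mismatch between the decimal terabytes on the drive label and the binary units the NAS reports. A quick check in Python (assuming “4TB” means exactly 4 × 10¹² bytes, as drive vendors use):

```python
# RAID 5 usable space = (n - 1) disks; one disk's worth goes to parity.
# Drive vendors use decimal TB (10^12 bytes) while NAS UIs usually report
# binary TiB (2^40 bytes), which is why "16TB of disks" shows ~10.9.
disks = 4
disk_bytes = 4 * 10**12          # one "4TB" drive
usable_bytes = (disks - 1) * disk_bytes
usable_tib = usable_bytes / 2**40

print(f"{usable_tib:.2f} TiB usable")
```

This comes out at roughly 10.91, in line with the advertised capacity once filesystem overhead is accounted for.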

The First “PC Master Race” – Part 2: The Golden Age of the 8-Bit Home Computer (up to 1990)

At the end of Part 1 of this series, I discussed two releases in 1985 that would shape the video game markets of Europe and North America respectively in the coming years: The Commodore Amiga and the Nintendo Entertainment System. However, to understand how these systems fit historically into this divide, I must first fill a few gaps that were not addressed in Part 1, including the events that led to the development of both systems and the marketplace into which they emerged.

The state of the European 8-bit home computer market in 1985

The home computer as a market segment emerged in 1977 in the United States and by 1982, there were many home computer developers in the US competing for a piece of the pie. Among these were Apple, Tandy Radio Shack, Commodore Business Machines, Atari, Texas Instruments, Exidy and Timex Sinclair. Furthermore, there were several consoles with pretensions of becoming home computers with add-ons, including the ColecoVision console and Coleco Adam computer, the Intellivision with its Keyboard Component (delays to which would make the Intellivision a running joke in the media) and the Atari 5200 which, under the shell, was a stripped-down Atari 8-bit computer. By 1984, many of these companies had been marginalised or were in the process of leaving the market completely.

A major catalyst for this was Commodore’s release in late 1982 of the Commodore 64, which even at launch was comparatively affordable next to its competition, but which soon dropped even further in price due to a price war instigated by Commodore’s ruthless founder, Jack Tramiel. This was partially revenge for having been driven out of the electronic calculator market by Texas Instruments, and partially a defence against the Japanese manufacturers who were expected to try to take over the computer market the way they had taken over the calculator market. By June 1983, the list price of the Commodore 64 had been halved, from $595 at release to $300, while rebates and bundle deals drove the price down even further. A contributing factor in Commodore’s ability to cut prices so far was its ownership of MOS Technology, which produced the Commodore 64’s 6510 processor and custom chips, while other computer manufacturers had to buy their processors and chips from outside suppliers.

In any case, the combination of a low price and the technical sophistication of the Commodore 64’s custom chips meant that few other computer manufacturers could compete. Texas Instruments, against whom Tramiel held a particular grudge, had a torrid time with the TI-99/4A, which soon dropped to an unsustainable price of $99. Manufacturers that had entered the market in search of a quick buck balked at competing with a company so willing to drive down prices in search of market share. Some companies managed to sustain themselves, such as Apple, who were already trying to transition to sophisticated GUI-driven systems and managed to keep the Apple ][ line going as a cash cow through their involvement in business and education; but in general, the Commodore 64 proved a difficult machine to compete against in the American market.

Naturally, there were plenty of objections to Commodore’s market practices: from retailers who saw them as predatory, from other companies whose systems matched up badly against the Commodore 64 – and even from inside the company itself. Irving Gould, chairman of the company and an investor who had provided money to keep Commodore afloat in the past, clashed badly with Tramiel over his race to the bottom in search of market share. This led to a power struggle that saw Tramiel kicked out of the company he had founded in January 1984. With many Commodore personnel leaving to join Tramiel in his next venture, this would have serious knock-on effects, discussed in more detail below; but with Tramiel gone, Commodore looked to diversify and move on from the Commodore 64.

Several systems were released throughout 1984, but none would achieve anywhere near the success of the Commodore 64. The Commodore 128, a more sophisticated model with more memory and full backward compatibility with the Commodore 64, came the closest, with approximately 4.5 million sales over its lifetime, but it was hamstrung by the fact that few developers saw a point in developing software specifically for it rather than for the older, more popular system with which it was compatible. The Commodore 16 and Plus/4 fared worse; designed as a range of computers to replace the Commodore 64, they were completely incompatible with it and, despite some success in Europe, a complete flop in the US market.

While Commodore was struggling, however, the Commodore 64 was still going strong. The worldwide sales leader until 1985 (when the IBM PC and its clones started to take off, catalysed largely by the platform’s attractiveness to businesses) and dominant in the low-end computer market in the US, it was also in a strong position in the European market, which had, as mentioned in Part 1 of the series, already adopted the home computer as the gaming platform of choice. The Sinclair ZX Spectrum, also mentioned before, was one of its main competitors in this role. Another system, not previously mentioned, would complete the trifecta of the most popular 8-bit gaming computers in Europe throughout the 1980s and early 1990s.

Amstrad, a British company founded by Sir Alan Sugar and then in the field of low-end consumer electronics, decided to join the home computer market in 1984 with the release of the Amstrad CPC. The Amstrad CPC was somewhere between the ZX Spectrum and Commodore 64 in terms of graphical and sound capabilities, came with an integrated tape drive and, unlike the ZX Spectrum and Commodore 64, was specifically designed around having a separate monitor rather than plugging into a television set. At a time when households were likely to only have a single television set, this was a novel feature freeing the TV for people to watch while the computer was used on the separate monitor. Released at a price of £199 with a green-screen monitor or £299 with a colour monitor, it was a reasonable prospect against the Commodore 64 which was then £195.95 on its own without the C2N Datasette tape deck or the ZX Spectrum at £129 with 48KB of memory versus 64KB on the CPC 464 and again without a tape deck included.

These three systems would end up trading blows right through to the early 1990s, with a lot of multiplatform releases which would target all three to different extents. The success of these systems caused other computer manufacturers to diminish, similar to the situation in the United States, with the Oric and Dragon systems mentioned in Part 1 soon going by the wayside and France’s Thomson systems being pushed out by foreign competitors.

A few other systems managed to carve out a slice of the pie. The BBC Micro was the platform of genesis for several important games, including the aforementioned Chuckie Egg and the seminal Elite. The MSX series, a Japanese-developed range of computers, was peculiar among 8-bit systems for not being designed and manufactured by a single company; instead, it was a standard based around off-the-shelf components and a Microsoft-designed BASIC ROM, which any manufacturer could licence to build their own system. (In this way, it was similar to the IBM PC, which was readily cloned, but with the explicit permission of the standard’s designers.) Different countries had different preferences: the Commodore 64 was particularly embraced in Germany, for instance, while the Amstrad CPC became the most popular system in France and the MSX range was especially popular in the Netherlands thanks to Philips’ production of several MSX systems.


The Amstrad CPC 464, here branded with French-language markings.

This set of competitors represented the more affordable end of systems at the time. But 1985 was important for other reasons, as a new generation of systems entered the market that would later become the source of focus for the game developers of Europe.

The Commodore Amiga and Atari ST: Beginnings of the 16-bit war

Despite being ousted from his own company, Jack Tramiel didn’t call it a day. He soon established Tramel Technology, with several former Commodore employees joining him, and by April 1984 was planning a new computer based around the Motorola 68000 CPU. Soon afterwards, he learnt that Warner Communications was looking to sell Atari. Atari had been the market leader in the console market prior to the North American video game crash of 1983 (as well as being one of the crash’s instigators) and therefore had the most to lose. By 1984, Atari was haemorrhaging money, losing an estimated $1 million per day and becoming a major drain on Warner’s resources.

In July 1984, Tramiel purchased Atari’s Consumer Division, including its home computer and console assets, and immediately got to work moulding the newly formed Atari Corporation into his own company. Using Atari’s stock of video game consoles as a means to stay afloat, Tramiel’s engineers continued to work on their new computer design. But a couple of months later, Tramiel’s son Leonard found a contract negotiated with Atari Inc. which was of particular relevance to a company looking for a new computer design.

Jay Miner, a designer of the custom chips used in the Atari 2600 and the Atari 8-bit family, had tried to convince Atari to invest in a design for a new computer and console architecture. When he was rebuffed, he left Atari in 1982 along with some other Atari staffers and founded a new company, initially called Hi-Toro but later renamed Amiga. Similar to Tramel Technology, they staked their future on the Motorola 68000 CPU as well. However, they had exhausted their venture capital by 1983 and were looking for a way to keep going, which led them back to the door of Atari. Atari agreed to fund Amiga for ongoing development work in exchange for a one-year exclusive deal to produce and sell the machine. However, before Amiga could complete the design, Atari went into freefall with the video game crash in 1983, leaving the future of the company in limbo.

While Tramiel was negotiating with Warner to buy Atari, Amiga was looking for alternate sources of funding and ended up going to Commodore. Commodore had already suffered heavily from brain drain after the departure of its staff to Tramel Technology and were planning an injunction to stop Tramiel from releasing his computer given their belief that the former Commodore employees had stolen trade secrets. Desperate for a new computer design after the failure of the Commodore 16 and Plus/4 and relative failure of the Commodore 128, Commodore looked to buy Amiga outright and cancel Amiga’s contract with Atari. Things didn’t end simply, as Tramiel returned the favour by seeking an injunction himself against Amiga, but Commodore did manage to successfully buy Amiga.

It would not be an exaggeration to call Amiga’s first product, the Amiga 1000 released in July 1985, revolutionary. A multimedia PC before the term was even coined, the Amiga represented a huge jump over the previous generation: class-leading graphics that could display 32 colours from a palette of 4,096 in normal use (and more in special modes), one of the best sound chips ever made in the four-channel, 8-bit PCM Paula, and a sophisticated pre-emptive multitasking operating system close to ten years ahead of its time. (I have discussed AmigaOS in a previous article.) Yet despite all that, it was not absurdly expensive; at an introductory price of $1,295 (plus $300 for a monitor), it decidedly undercut the $2,795 Macintosh 512K (which had more memory than the Amiga’s 256KB, but a monochrome screen and a single-tasking OS).

Atari might not have secured the rights to the Amiga, but they did manage to finish their own computer first, releasing the Atari ST in June 1985. The ST was not as sophisticated as the Amiga; while it used the same 68000 processor, clocked about one megahertz faster than the Amiga’s chip, its graphics and sound were less impressive. It could display a maximum of 16 colours on screen from a palette of 512, and used an off-the-shelf Yamaha derivative of the General Instrument AY-3-8910, with three channels producing square waves or white noise (the same family of chip used in the Amstrad CPC and MSX, along with later ZX Spectrum models). The computer retailed with 512KB of RAM at $799 with a monochrome monitor or $999 with a colour monitor.

In retrospect, the default sound capabilities of the Atari ST were disappointing, given that Atari had been developing an 8-channel additive synthesis chip known as the AMY, which would have been inexpensive yet provided a good counter to the sophistication of Amiga’s Paula chip. Instead, they went with a chip that wasn’t even as good as the MOS Technology SID released three years before the ST. (To be fair, Atari did include built-in MIDI in/out ports, which made the machine popular with musicians, although using them required external MIDI hardware.) The operating system wasn’t anywhere near as impressive as AmigaOS either, with a single-tasking GUI paradigm that was on par with other systems of the time but far outstripped by the pre-emptive multitasking of the Amiga. (I have also discussed Atari TOS previously.)

Nevertheless, the Atari ST became the bigger sales success early on, with a more approachable price for the families in Europe who bought many of the early units. Commodore’s woeful marketing didn’t hurt either, with the company apparently having no idea how – and no funding in any case – to market its sophisticated machine. (Interestingly, an Easter egg found in an early release of AmigaOS illustrated the Amiga engineers’ discontent with Commodore, with a message reading, “We made Amiga, they fucked it up”.) However, Atari wasn’t exactly in the healthiest of states either, with Tramiel’s ruthlessness with the Commodore 64 coming back to haunt him to some extent. Both systems would, over their lifetimes, appeal more to European audiences than American ones, who were instead focusing on other platforms that would shape the future of computers and of video gaming.

The NES: Saviour of the American video game market, but a “cult classic” in Europe

Japan, like Europe, had not suffered heavily in the wake of the North American video game crash of 1983. With a strong domestic market of arcade games and its own set of personal computer platforms ranging from the NEC PC-8801, the Fujitsu FM-7 and the Sharp MZ and X1 systems, Japan were able to sustain their own market during the contraction in the market in the US.

Nintendo were one of the notable successes of the Japanese market at the time. Having started developing video games in about 1975, with several arcade games, a few Pong clones and the Game & Watch series of handheld games, they had struck gold with Shigeru Miyamoto’s Donkey Kong in 1981. The first game to feature Mario (then a carpenter named Jumpman), Donkey Kong was a smash hit in the arcade and made it onto several home computers and consoles as well. With this success, Nintendo decided to develop their own video game console.

On the 23rd of July, 1983, Nintendo released the Famicom (or Family Computer) in Japan. Designed around a clone of the same MOS 6502 CPU architecture used in the Atari 8-bit systems, the Commodore 64 and the BBC Micro, the Famicom was more of an evolution than a revolution. It did have better graphics than anything else on the console market at the time, displaying 25 colours on screen at once from a 54-colour palette along with a sophisticated sprite engine. However, the CPU was comparable to an Atari 8-bit system’s, and the sound chip, whose five channels comprised two pulse waves, a triangle wave, a noise generator and a delta-modulation PCM channel, was roughly on par with the Atari POKEY, General Instrument AY-3-8910 and Texas Instruments SN76489 chips found in other consoles and home computers, and couldn’t match the MOS Technology SID in the Commodore 64.

After a slightly slow start, the Famicom soon picked up momentum to become the best-selling console in Japan by 1984, and plans were drawn up with Atari to distribute the system in the US in modified form. The name “Famicom” was a bit of a misnomer for a system designed first and foremost for video games, although the Family BASIC add-on package, with its cartridge and keyboard peripheral, did allow it to be used as a somewhat limited computer through BASIC programming. The planned US version would have made the name look far more appropriate: the Nintendo Advanced Video System was to come with an integrated keyboard, a cassette drive, a wireless joystick and a BASIC cartridge, which would have made it as much a home computer as a video game console.


The Nintendo Advanced Video System, complete with peripherals.

Of course, these plans never came to pass. Atari delayed an initial deal in 1983 to distribute the Famicom in North America after finding that Coleco was illegally bundling its Adam computer with Donkey Kong; despite this being an unauthorised port, Atari took it as a sign that Nintendo was working with a major competitor in the video game market. The deal was cancelled when Atari’s CEO, Ray Kassar, was fired shortly afterwards for insider trading. When the later attempt to market the AVS, mentioned above, also fizzled out, Nintendo decided to distribute the system themselves.

This would end up being a fortuitous decision. Nintendo modified the Famicom further, adding a front-loading zero insertion force cartridge slot meant to obfuscate the system’s purpose and evoke images of VCRs rather than video game consoles (although this would prove inferior to the card-edge connector design of the Famicom and most other cartridge-based consoles), along with the R.O.B. (Robotic Operating Buddy) accessory designed to earn the system a place on toy shelves. The resulting Nintendo Entertainment System was released on the 18th of October, 1985 in limited test markets in the US, before being distributed across the whole United States throughout 1986.

The NES would prove, after a shaky start similar to that of the Famicom, to be a massive sales success in the United States, reigniting American passions with video games. Of the 61.9 million NES/Famicom systems sold worldwide, more than half of these were sold in the Americas and “Nintendo” would become a byword for video gaming in the US in the years to come.

The NES would not be so popular in Europe. It was released in two batches: continental Europe (apart from Italy) received the system on the 1st of September, 1986, while the UK, Ireland, Italy, Canada, Australia and New Zealand followed in 1987. The console entered a more challenging market, often with a baked-in preference for home computers. The official sales figure for the NES in regions other than Japan and the Americas is 8.5 million, and while it’s difficult to get a solid figure for anything more specific, it is clear that not all of that 8.5 million came from Western Europe.

While the NES had some degree of success in countries like France and Germany, video gamers in the United Kingdom were especially dismissive of the system on its release and sales always remained lukewarm even near the end of its lifespan. Nintendo had implemented some practices when developing the NES for the United States that were particularly inappropriate for the UK audience. Nintendo had deliberately targeted the system in the West more towards younger children, with a harsh policy towards the depiction of violence, profanity or sexuality, which made it look a bit “kiddy” when it was introduced in the UK.

Furthermore, in an attempt to prevent the flood of low-budget shovelware, advergames and pornographic titles that had infested the Atari 2600, Nintendo instituted very tight control over publishing for NES games, mandating the use of a lockout chip that required Nintendo’s approval to produce. This, however, was antithetical to the British video game industry, which revolved so much around the bedroom coder and small teams of independent developers who lacked the financial backing to produce cartridges in the first place, let alone ones with a lockout chip. NES games were considerably more expensive than home computer games on cassette, and it was a difficult task to sell British audiences a device that could only be used for video games when, for the price of a single game, you could buy half a dozen games for a computer instead.

By virtue of sales late in its life, the NES would not be a total flop even in the UK, but it was hardly the saviour of the video game industry that it had become in the United States. Its success in the US will become more important later in this series, but for the time being, it served to illustrate the growing divergence in the video game industry between the US and Europe.

Sinclair – pulling defeat from the jaws of victory

The ZX Spectrum had proven to be a big success, with its aim of providing the cheapest possible colour computer resonating well with British buyers who appreciated its “cheap and cheerful” nature. Despite its limitations, such as the rubber-keyed chiclet keyboard and frequent attribute clash due to colour restrictions per on-screen tile, the Spectrum certainly did the trick as an affordable system for learning how to program and play games. However, not all of Sinclair’s ventures were so successful.

I find it interesting that despite the limited utility of early computers apart from entertainment, there were several computer manufacturers that dismissed video gaming as an inappropriate use of their systems. Apple, despite Steve Jobs’ and Steve Wozniak’s history with Atari, actively sought to discourage video game developers early on. IBM didn’t even consider the possibility of video gaming on their systems until the development of the IBM PCjr, which unsuccessfully tried to straddle the ground between the low-end systems like the Commodore 64 and the high-end business market it was already catering for. Sir Clive Sinclair was also famously dismissive of video gaming, having designed the ZX Spectrum to provide people with a platform for programming, but failing to see at the time that what a lot of buyers wanted to program were games.

Sinclair Research sought to follow up the ZX Spectrum in 1984 with the Sinclair QL (or Quantum Leap). It was based around the Motorola 68008, a version of the 68000 somewhat analogous to the IBM PC’s Intel 8088 in that its 8-bit data bus effectively halved the CPU’s memory bandwidth. The QL did improve on the Spectrum in some respects, including the pre-emptive multitasking QDOS operating system released a year before AmigaOS, but it was hardly the great leap forward that its name suggested.

The QL was rushed to its announcement in January 1984 but was far from ready for production, lacking even a working prototype. Even when the first customer deliveries arrived in April, the machines were found to be unreliable, with multiple bugs in the firmware and numerous issues with the proprietary Microdrive storage system, which aimed to provide a cheaper alternative to the floppy disk by using an endless loop of magnetic tape inside a cartridge case. These issues were later resolved, but the poor early impression stuck with the system until it was discontinued in 1986. Today, the QL is arguably most notable because Linus Torvalds owned one and, owing to the poor software support the system received, had to write much of his own software for it.

The QL wasn’t Sinclair Research’s only failure either. The portable Sinclair TV80 employed a flat-screen CRT with a side-mounted electron gun and a Fresnel lens to make the picture look larger than it was, but failed to sell enough units to recoup its development costs. Even so, this was relatively low-key compared to the biggest flop in Sinclair’s history: the infamous C5.

Sir Clive had held a long-standing interest in electric vehicles since the 1950s, and by 1983 the success of the ZX Spectrum gave him the capital to set up his own electric vehicle company, Sinclair Vehicles Ltd. After in-depth research into the matter from the late 1970s onwards, Sinclair Vehicles released the C5 in 1985. It was a notorious flop: underpowered, slow and unsafe, with no weatherproofing – a big mistake in the frequently rainy climate of the UK. With both electric and pedal power, it was meant to bridge the gap between bicycles and cars, but it ended up alienating both sets of buyers and only sold 5,000 of the 14,000 units produced.

All of these financial failures came at the expense of the device that had made Sinclair’s reputation. While the Spectrum did receive an update in 1984 in the form of the Spectrum+, with a new injection-moulded keyboard to replace the original chiclet keyboard, it took Sinclair’s Spanish distributor to really push for an improved model. The ZX Spectrum 128 added extra memory to the tune of 128 KB overall (as the name implied), along with extra features such as an actual sound chip in the form of the AY-3-8912, an RS-232 serial port, an RGB monitor port and a better BASIC editor. Launched in Spain in September 1985, it wasn’t released in its major market of the UK until January 1986.

Faced with financial problems, Sir Clive sold the Sinclair brand and computer technology rights to Amstrad in April 1986. Amstrad continued not only to sell but also to improve the Spectrum over the years, though its improvements introduced some incompatibilities with the older models, and the Spectrum would never again sell in the numbers it had in the period up to 1985. There was enough of an installed base to keep it relevant in the market, and it had been enough of a success for Clive Sinclair to be knighted in 1983, but momentum was shifting to the Commodore 64, the Amstrad CPC and later the Atari ST and Commodore Amiga.

Now, back to the games!

By 1985, there had already been several smash hits on the 8-bit home computers, both in Europe and in the United States. The European games like Elite, Chuckie Egg and Manic Miner have already been discussed in Part 1, but American games like Epyx’s Games series, Lode Runner and Impossible Mission deserve a mention as they started to make their way over to Europe and began to be ported to the ZX Spectrum, Amstrad CPC and BBC Micro.

For the most part, European game development continued in a similar vein to previous years, with a distinctly “bedroom coder” indie approach to many titles: one or two programmers working on a game in their own time. With programming tools available as soon as users turned on their computers, a steady flow of BASIC and assembly language listings reprinted in computer magazines, and books discussing the dialects of assembly language on the different systems, the home computers made developing and publishing a successful game a far less daunting prospect than the video game consoles did.

As mentioned briefly in Part 1, and as with any creative field where the barrier to entry is low, this led to a lot of mediocre and poor-quality games, many of which aped what their developers saw other games doing. This included a large number of platformers and shoot-’em-ups trying – and failing – to emulate the arcade output of Japanese companies such as Sega, Konami, Capcom and Irem. However, the same low barrier to entry also allowed genuinely novel games to make their mark. Examples include 1985’s Paradroid, first developed by Andrew Braybrook for the Commodore 64 and blending the shoot-’em-up and puzzle genres; 1986’s The Sentinel, an esoteric and very original first-person puzzle game first developed by Geoff Crammond (later known for his series of Formula One racing simulators) for the BBC Micro; the bizarre isometric 1987 action-adventure/puzzle/platform game Head over Heels, designed first for the ZX Spectrum; and the early fighting game International Karate, first developed for the ZX Spectrum in 1985 by System 3 and followed by its even more successful sequel International Karate + (also known as IK+) in 1987.

Original game series were beginning to emerge as well, often in the platform genre. The Miner Willy series – comprising Manic Miner, two official sequels in the similarly popular Jet Set Willy (1984) and the less successful Jet Set Willy II (1985), along with a couple of spin-off titles – had become a smash hit for the ZX Spectrum early on. The Monty Mole series started in 1984, also on the ZX Spectrum, and received sequels throughout the rest of the 1980s, including Monty on the Run and Auf Wiedersehen Monty, which combined rather expansive multi-screen platforming worlds with a quirky sense of British humour. Similarly, the Dizzy series, first emerging in 1987, received a whole host of sequels up until 1992 and combined platforming with action-adventure elements.

Speaking of arcade games, the original titles on the market began to be joined by a host of ports of popular arcade titles, including Commando, Ghosts ‘n Goblins and its sequel Ghouls ‘n Ghosts, Green Beret, Bubble Bobble, OutRun and R-Type. These ports invariably did not match up particularly well to the arcade versions, not only because home computers were less powerful than the specialised, custom-built hardware of arcade machines, but also because the porting developers had little to no official support from the original developers, were given no overview of the games’ internal workings, and often had to spend their own money watching the arcade game being played in order to deduce how it worked. This was at least one area in which contemporary consoles had an advantage, given that the original developers of the arcade games were also responsible for their console ports. However, some of the Commodore 64 ports of these games are notable for their astounding soundtracks, among the best on any 8-bit system.

As a matter of fact, very strong music was rapidly becoming a character trait of the Commodore 64. While early soundtracks on the system had tended to use the chip similarly to other sound chips of the time, with a single waveform for each of the three voices on the SID, a trick was devised a few years after the Commodore 64’s release to rapidly change waveforms dynamically on each voice, giving the impression of more channels being available at a time. An early exponent of this technique was Rob Hubbard, who popularised the style with the soundtrack to 1985’s Monty on the Run. This particular piece was heavily influenced by Charles Williams’ “Devil’s Galop”, the theme tune to the popular 1950s BBC radio serial Dick Barton, but it takes its own original approach, with a sophisticated sound that was simply not possible in the same way on any other contemporary platform, and it lasts an unprecedented six minutes without looping – a veritable lifetime in an era when nearly all game music looped after a minute at most.

There were plenty of highly acclaimed British composers, such as the aforementioned Rob Hubbard, who continued to push the limits of the SID after Monty on the Run, Martin Galway, Ben Daglish, Matt Gray and Tim Follin, but other great composers came from elsewhere, like Chris Hülsbeck and Ramiro Vaca from Germany and Jeroen Tel from the Netherlands. Between them, they explored the limits of the SID, often making a game worth buying for the music alone. Several of these musicians would continue to compose for later platforms, particularly Follin and Hülsbeck.

The Commodore 64 wasn’t the only 8-bit platform that could receive good music, though, as good composers could sometimes modify their pieces to work on the less sophisticated AY-3-8910 and SN76489. Tim Follin, an incredible musician who would manage to compose brilliant pieces on every platform he ever touched, even managed to make the primitive 1-bit beeper of the ZX Spectrum produce surprisingly sophisticated polyphonic music resembling (very buzzy) rock and orchestral pieces.

As the 1980s progressed towards their end, new platforms began to emerge and older ones became more affordable. 1987 saw the release of the Acorn Archimedes, the successor to the BBC Micro, incorporating a sophisticated 32-bit RISC CPU from the ARM architecture as well as a co-operative multitasking OS named RISC OS. (I discuss RISC OS here as installed on the more recent Raspberry Pi.) The Archimedes was not itself particularly popular; while it didn’t lack for hardware grunt, with its 8 MHz ARM2 processor producing 4 MIPS – approximately three times the figure for the comparably clocked 68000s in the Amiga and Atari ST – it suffered from its reputation as an educational computer, where the BBC Micro had instead benefitted from one. However, the Archimedes would have an impact that outlasted the computer series itself, as its power-efficient ARM processors would become the de facto standard in mobile devices such as PDAs, handheld games consoles and smartphones.

Also released in 1987 was the Amiga 500, a redesigned system based on the hardware of the Amiga 1000, but with more onboard memory and a form factor more in keeping with the 8-bit computers, with the keyboard integrated into the case. With a reduced price to make it more enticing to home users, the Amiga 500 would end up becoming the most popular system in the history of the platform, opening it up in particular to British and German audiences who would embrace the system in the years to come.

Finally, 1987 saw the European release of the Sega Master System. While it was not as popular as the home computers of the time and finished a distant second in its generation of consoles behind the NES, the Master System’s sales figures were very bizarre: it was far more popular in Europe than in either the United States or its home market of Japan, and many of its remaining sales came from Brazil, where Sega had shrewdly negotiated a contract for the system to be built and marketed by the Brazilian company Tec Toy in order to avoid huge import duties on foreign-made electronics. Selling approximately 6.8 million units in Western Europe, it actually outsold the NES there, playing off a more mature image derived from the arcade games that were Sega’s bread and butter at the time, as well as a more lax licensing policy than Nintendo’s. Sega had even licensed several of its arcade games for ports to the home computers, including After Burner, Space Harrier and the aforementioned OutRun.

Around the time of the Amiga and Atari ST releases in 1985, games were being made first and foremost for the 8-bit computers, later receiving graphical polish in ports to the more advanced 16/32-bit systems. Near the end of the decade, this situation began to reverse: games would be released on the Amiga or Atari ST first and later filter down to the 8-bit computers if they were successful and simple enough to port appropriately. For example, 1989’s Shadow of the Beast by Psygnosis was developed with the Amiga in mind, using complex parallax scrolling and high-quality music to push the system to its limits, but still found its way onto the 8-bit systems soon after its release. Another 1989 title, Populous by Bullfrog Productions, was also first released on the Amiga and later ported to multiple other systems – which in this case did not include the 8-bit machines.

Big-name titles also started coming from countries other than Britain, which had seemed to dominate proceedings in Europe when it came to popular and retrospectively acclaimed titles on the 8-bit platforms. One developer from Germany, Manfred Trenz, is of particular note here. One of Trenz’s early projects was graphical work on a Commodore 64 game called The Great Giana Sisters, published by Rainbow Arts. The game was a polished but very obvious clone of Super Mario Bros., with a soundtrack composed by Chris Hülsbeck, but its similarities to Nintendo’s game brought the risk of legal action against Rainbow Arts and it disappeared from shelves. Nevertheless, Trenz was planning his own game, and in 1988 he developed Katakis, first released on the Commodore 64 and soon after ported to the Amiga by Factor 5, a group formed by five former employees of Rainbow Arts.

Katakis was again a pretty obvious clone of a pre-existing game – this time Irem’s arcade classic R-Type – and again the threat of legal action loomed over it. However, in a bizarre turn of events, Activision Europe, who held the legal rights to port R-Type to the Amiga, found themselves without programmers to do so and delivered an ultimatum to Factor 5: develop the Amiga port of R-Type or receive a lawsuit. Katakis, for what it was worth, was later retooled and re-released as Denaris in 1989.

It was Trenz’s next idea, however, that would really kickstart things for Rainbow Arts and Factor 5. Trenz turned his attention to the run-’n’-gun platforming genre and in 1990 developed Turrican for the Commodore 64, with a prompt port to the Amiga by Factor 5. Best described as “Contra meets Metroid” (although apparently heavily influenced by the obscure Japanese arcade game Psycho-Nics Oscar), Turrican mixed “don’t stop shooting” side-scrolling action with relatively open game worlds containing multiple secrets, power-ups and extra lives. Somewhat limited by the one-button Atari-compatible joystick interface on the Commodore 64, which forced the “up” direction to be bound to jump, Trenz solved the problem of aiming upwards by adding a secondary fire mode: holding the fire button while standing still generated a lightning whip that could be rotated through a full 360 degrees, a novel answer to the limitations of the Atari joystick interface.

While the Commodore 64 version of the game was one of the most solid and technically advanced games on the platform, it was when it made the move to the Amiga that Turrican would really shine. With better graphics and an astounding retooled soundtrack by Chris Hülsbeck – probably his best work up to that point, illustrating just how well he had made the transition from the Commodore 64 to the Amiga – Turrican stood out on the Amiga as one of the best run-’n’-gun games ever to come out of anywhere other than Japan. The game would receive multiple ports to the other 8-bit systems, to the Atari ST and to a few consoles which will receive more attention in the next part of this series, along with multiple sequels throughout the early 1990s.

By 1990, the Amiga had really been embraced by European software developers, having reached an affordable price for a system that, despite not receiving any significant upgrades – as will be elaborated on in Part 3 – still managed to impress. But there were new platforms on the horizon posing a major threat to the Amiga, while horrible mismanagement had already threatened to kill the platform and would remain a constant threat in the years to come. Meanwhile, the Americans couldn’t be ignored forever; having licked their wounds following the video game crash, they were ready to come back in a big way, and they favoured consoles over computers – and even then much preferred the IBM PC clones to the Amiga. The home computer market was on shaky ground and was close to meeting its doom.

Part 3 will discuss the Amiga in more detail in the period between 1987, with the release of the Amiga 500 and 1994, with the demise of Commodore, along with the other platforms which would soon take the lustre away from the Amiga’s revolutionary design.

The First “PC Master Race” – Part 1: The Start of the European Microcomputer Market (up to 1985)

While I have had my fair share of consoles, both home and handheld, through the years, I have always found myself predominantly drawn to my PCs as gaming platforms. From my first computer, with a 486SX running at 25MHz, a basic VGA graphics card and 8MB of RAM, to my current computer with an overclocked Core i5-4690K, an AMD Radeon R9 290 GPU and 16GB of RAM, each of my desktops has been used heavily for playing video games, even when they were not particularly suited to the games of the time.

Something I’ve noticed throughout the progression of PC specifications over the last fifteen or so years is that PCs have steadily become more compelling options against the dedicated video game consoles of their time. When I got my first computer in 1996, you could specify a computer that would outstrip the consoles of the day, but it came at a considerably higher price and took considerably more effort to get games running than plug-and-play consoles like the PlayStation. Meanwhile, my PC with its standard VGA graphics card was more akin to the previous generation of consoles, like the SNES, in terms of graphical capability.

In 2016, not only does my current computer, which isn’t even at the pinnacle of PC graphics performance, far outstrip both the Sony PlayStation 4 and Microsoft Xbox One, it is possible in the United States to specify a computer that beats both consoles in graphical capability yet costs in the same region as them (well, OK, that’s if you don’t want a Windows OS). Despite the low price, such a PC would also have flexibility and adaptability beyond the consoles, including the ability to use it for general-purpose computing, and tens of thousands of commercial games available in just about every genre under the sun. At the same time, the current generation of consoles has been losing some of the traditional advantages of console platforms: split-screen multiplayer, which allowed multiple players to compete using a single console and a single screen, has been disappearing, while the plug-and-play advantage of putting a disc or cartridge straight into the console and starting to play has been eroded by the necessity for multi-gigabyte bug-fixing patches.

Despite the improvements to the PC platform, which have made it easy to specify a computer that will beat the consoles while also having the capacity to do things other than video gaming and media consumption, PCs are still lumbered with a reputation from the days when they genuinely were expensive, temperamental and difficult to set up. Furthermore, certain game developers, lured by the easy money of the console market, have allowed these misconceptions to be treated as gospel by their customers, focusing their games on the consoles and then following up with lazy PC ports which fail to take advantage of the superior graphical potential of the platform and which frequently feature control schemes and user interfaces that assume players are using console-style control pads.

The “PC Master Race” movement – named for a sly jab at PC enthusiasts’ perceived elitism by the reviewer Yahtzee Croshaw of Zero Punctuation, later adopted as a term of endearment – seeks to spread awareness of the merits of PC gaming at what its members see as the first time in gaming history when PCs have surpassed consoles in every conceivable way for less money. But what if I were to tell you that there was another period when personal computers represented a very compelling alternative to consoles, when they became the preferred gaming platform for most of a continent and when the comparatively high prices of consoles were considered detrimental? The story starts in 1982…

The 8-bit micros take off in Europe

The period between 1981 and 1982 represents a turning point in the history of personal computers. Commercially viable computers had first gone on sale in 1977 in the United States, but none – not even the long-lived Apple ][ – would have the market impact of the IBM PC, which would later form the standard for the modern personal computer; the Commodore 64, which would become the best-selling computer model of all time; and the Sinclair ZX Spectrum, one of the few platforms that stood toe-to-toe against the Commodore 64 and managed to hold its own. Each of these computers was created in a turbulent market where dozens of manufacturers worldwide were already jostling for position, and each managed not only to survive but to thrive as many other models of computer dropped off the radar in later years.

The IBM PC was the first step into the personal computer market from the company that was then the largest computer company in the world, but it was at that point irrelevant to the gaming market and will only be discussed in passing in this section. Both Commodore and Sinclair, on the other hand, had form in the personal computer market, each having had previous sales successes. Commodore had been one of the pioneers of personal computing in 1977 with its PET 2001, which competed with the Apple ][ and Atari 400/800, and followed it up with the first million-selling computer, the VIC-20, in 1980. Sinclair’s first releases, the ZX80 in 1980 and the ZX81 in 1981, were very limited even by the standards of the time, with a scant 1 KB of RAM by default, but at release prices of £99.95 and £69.95 respectively, they represented an affordable entry into hobbyist computing.


The Commodore 64 and 48K Sinclair ZX Spectrum: Two of the fiercest competitors in the 8-bit home computer market.

A notable characteristic of both the Commodore 64 and the ZX Spectrum was that both computers were particularly inexpensive. The Commodore 64 was released at a price of $595, which compared very well with the Apple ][+ at $1,330, the Atari 800 at $899.95 and the entry-level IBM PC at $1,265. Yet it was surprisingly sophisticated, with the 64 KB of RAM from which it got its name (compared to 16 or 32 KB in most contemporaries), a very sophisticated graphics chip that was better than almost anything else on the market, and arguably the best sound chip of any 8-bit computer in the MOS Technology SID, with three voices, each capable of generating four different waveforms and each with its own ADSR (attack, decay, sustain, release) envelope to further shape its output. Its only notable weakness was a comparatively slow processor, a MOS 6510 at 1.023 MHz (or 0.985 MHz in PAL regions), which might have matched the Apple ][ range but did not compare well to the 1.78 MHz processor in the Atari 8-bit computers.

The ZX Spectrum was not as sophisticated: its graphics hardware lacked hardware sprites and had a more limited colour palette than the Commodore 64’s, and its simple one-channel beeper was significantly more limited than the SID on the Commodore machine. On the other hand, at release it was significantly cheaper, at £125 (approximately $220 in 1982) for the 16 KB model and £175 (approximately $310) for the 48 KB model. Both computers would soon become cheaper still, with Commodore engaging in a price war against its competitors in the United States that saw the Commodore 64 drop to $200 by 1983, and Sinclair decreasing the Spectrum’s prices in response.

The low price of both computers is significant in the economic context of the time. The economic recession of the early 1980s had a greater effect on Europe than on the likes of the United States and Japan, and it hit the United Kingdom especially hard, the country having experienced a string of crises throughout the 1970s. In particular, the exchange rate of the pound sterling dropped significantly between 1980 and 1985, from an average of $2.33 in 1980 to $1.29 in 1985. While adoption of computers was slow between 1982 and 1983, with an estimated 600,000 microcomputers in the UK by the end of 1983, sales picked up significantly in 1984, by which time the state of the UK economy dictated that less expensive computers were the most likely to succeed. The situation was similar across Western Europe, and as other European countries lacked a strong indigenous home computer market, consumers there were inclined to buy American or British models.

The low price of the Commodore 64 is also significant – and has been cited as a cause – for an event that had a huge impact on the video game market in the United States but little effect in other markets. The North American video game crash of 1983 has become legendary: the glut of consoles on the market succumbed to the arrogance of the marketing executives pushing them, who seemed to believe that customers would eat up whatever shovelware the game developers could push out and come back for more, while the rapid decrease in the price of the Commodore 64 made it a compelling alternative.

As unsold copies of the overproduced E.T. The Extra-Terrestrial for the Atari 2600 were being buried in a New Mexico landfill, causing a contraction in the North American video game market that would last until the 1985 release of the Nintendo Entertainment System, anyone in Europe would be forgiven for not realising that the crash had happened at all. The games market in Europe was already based around personal computers – most notably the Commodore 64 and ZX Spectrum, but also several other predominantly British home computers such as the Acorn-designed BBC Micro, the Dragon 32/64 systems from Dragon Data and the Oric systems from Tangerine Computer Systems. The NES wouldn’t be released in Europe until 1986, and not in the UK until 1987, by which time the personal computer market had well and truly taken hold. Even by 1983, games like Manic Miner and Chuckie Egg, which would become known as some of the best games available on 8-bit platforms (and which, incidentally, exemplified the “one man programming in his bedroom” sensibilities of European game development), had already been released – and things were only just getting started.

The home computer market picked up significantly in the UK in 1984, when more than one million home computers were sold, more than doubling the number of PCs in the country. Exposure was helped by the BBC’s Computer Literacy Project and television shows like The Computer Programme and Micro Live. For the former, the BBC had put its name to the BBC Micro, an expensive yet sophisticated computer designed and produced by Acorn Computers. While, at a release price of £400 in 1981 (approximately $2,000), the top-of-the-line 32 KB BBC Micro Model B was too expensive for most households at the time, it did find its way into many schools and eventually sold a respectable 1.5 million units.

The BBC Micro is not just significant for its role in the BBC’s efforts to spread computer literacy, however; it also plays a large role in computer gaming history. In 1984, a pair of students at the University of Cambridge, David Braben and Ian Bell, worked together to release the seminal game Elite, creating a legacy that lives on to this day. Elite is one of the earliest sandbox games: a space simulator in which the player is given the freedom to play in any of a multitude of ways and in which there is no true victory condition. This contrasted heavily with the general pattern of games of the era, which were still generally simple, arcade-style affairs. Yet despite this, Elite was very successful, soon spreading from the BBC Micro and the similar, game-focused Acorn Electron to platforms ranging from the Commodore 64 and Spectrum to the Apple II, the Japanese MSX range and even the Taiwanese Tatung Einstein, and then to the next generation of home computers in the mid-1980s.


Elite on the BBC Micro: Wireframe 3D graphics and a universe to explore on less than 32 KB of memory.

Elite was arguably the most sophisticated game of its time and, being designed for a home computer with more memory than consoles would have until the Sega Mega Drive in 1988, was very much a PC-focused game. While it was eventually ported to a console – the NES – in 1991, this required additional hardware and memory mappers to make up for the limitations of the console. In any case, until that point, if you wanted to play Elite, you needed a PC of some sort.

While discussing the efforts of British coders during this period, I do not intend to ignore the fact that American development studios were also making sophisticated games for home computers at the time, including Richard Garriott’s Ultima series of role-playing games; and while the ZX Spectrum design only reached America in the form of the largely incompatible Timex Sinclair 2068, the Commodore 64 was wildly popular in the United States too. However, few of the American games became sales successes in the UK or the rest of Europe, for various reasons largely linked, again, to the economic downturn in Europe. European audiences predominantly bought their software on cassette tapes, which had excruciatingly long loading times even by the standards of the day but were cheap and could use a standard cassette player that was likely already in the home. American games, by contrast, were written for floppy disks, which offered greater capacities (as a consequence of not having to load the whole game into memory at once) and significantly improved loading times, but were more expensive and required the purchase of an additional floppy drive on top of the base package.

On the other hand, not scared away from the games industry by the collapse of the predominant game market as the Americans were, European coders felt free to exploit their home computers to the limit. Several factors made the home computers much friendlier for hobbyist coders making the step to commercial game development, including the use of rewritable media like cassettes and floppy disks rather than the cartridges of the consoles (although several home computers did have the capacity for cartridge-based games). As a result, a huge number of one-man projects were started and had the capacity to become commercially viable. This did, predictably, lead to a lot of dross mixed in with the good games, but it created a crucible for innovation and diversity that would rarely be seen again in the industry.

1985 saw the Western release of two systems that would, in the coming years, very much illustrate the differences between the American and European game markets. The Commodore Amiga 1000 was the most sophisticated home computer of its time, and while that model itself would not become particularly successful, successor machines such as the Amiga 500 would find far more success in Europe than in the United States, where they originated. On the other hand, the Nintendo Entertainment System, derived from the Japanese Famicom (or Family Computer), would be seen as the saviour of the games industry in the United States but was far less successful in Europe. In the meantime, though, the 8-bit home computers had a lot more to offer… and the Germans had not yet shown their best.

Part 2 of this series will discuss the years leading up to 1990, a golden age for the home computer in Europe, and how complacency, bad business decisions and the growing threat of the IBM PC would soon afterwards bring that supremacy to an end for several years.

A General’s View – Complete Game

Author’s Note: I mentioned in my last post that I had been working on my final year project for college. I just wanted to share what I came up with in the end. The project is a two-player game based on A General’s View, the tabletop strategy game that I designed in 2015, and is written using the Allegro 4 game library. The game is not particularly sophisticated and there are a few interface bugs that need to be ironed out, but it works and could serve as the basis of a more sophisticated game in the future.

The game is played by two human players with the following controls:

Map keys

Up, Down, Left, Right – move cursor

Enter – enter menu

Menu keys

Up, Down – move cursor

Enter – select menu option highlighted by cursor

For full rules and objectives of the game, please see A General’s View: Rules (Alpha Version). Please note that in this version of the game, at most one unit from each player can occupy a tile at once.

The game (in both Windows executable and source code forms) can be downloaded from the following link:



The C Preprocessor

One of the peculiar things about the C programming language is that so many commonly occurring elements are not actually part of the language, per se. All of the functions in the standard library are actually extensions to C, additional parts which give us the input/output, mathematical and utility features which make C powerful. All of these are contained in a set of header files and binaries which are added to programs during the compilation process.

Another extension to C is the C preprocessor, and it is this that gives us the ability to extend the language. The C preprocessor is a small language in its own right, and while it is not Turing-complete, it is sufficient for the purposes for which it is used. The C preprocessor reads through a C source file, replacing statements which are significant to the preprocessor with ones which are significant to the C compiler.

It is somewhat difficult to explain why the C preprocessor is important, but I will attempt to do so with a brief segue into the history of computer languages. Early high-level programming languages, such as Fortran and COBOL, were notable for being able to do one set of tasks very well and most others not so well at all. In some cases, this led to deficiencies which would be considered ghastly today; ALGOL 58 and 60, for instance, did not define any input/output operations, and any I/O routines would be completely machine-dependent.

In the later 1960s, language designers attempted to create new languages which would be suitable for multi-purpose applications. However, these languages, which included PL/I and ALGOL 68, were designed by committees made up of conflicting personalities, many of whom were desperate to see their pet features included. As the complexity of the languages grew, so did the complexity of developing an efficient compiler. As computing resources were vastly smaller than they are now, these languages were only suitable for mainframe computers, and even then they ran inefficiently.

Therefore, these language experiments tended to fail. PL/I retains some residual support from IBM, but it is moribund outside the confines of IBM machines; ALGOL 68 is dead and buried. When C came around, Dennis Ritchie was aiming to create a language which implemented enough features to build an operating system and its applications, while being able to run efficiently on a much less powerful computer than those for which PL/I was designed.

The solution was to create a system in which only the subset of functions required by a specific program would be included, rather than the full set. This made compilation of C more efficient, as the compiler generally only had to be concerned with a small number of functions at once. The method chosen was to use the C preprocessor to keep the declarations of most functions outside the base language; when C was standardised by ANSI in 1989, and by ISO in 1990, all functions were taken out of the base language and put into header files.

Now that the history lesson is over, we can continue on to the operations of the preprocessor. As mentioned above, the preprocessor scans a C source file – or, in some circumstances, another source file; Brian Kernighan famously developed RATFOR to add similar features to Fortran as in C – and looks for statements that are important to it. It then replaces them with statements that are important to the C compiler or whatever other system the preprocessor is being used for.

The most fundamental operation of the preprocessor is #include. This operation looks for a file which is defined at a path included in the #include directive, then inserts its entire contents into the source file in place of the #include directive. The file’s contents might themselves contain C preprocessor statements, as is common in C header files, so the preprocessor goes through those and acts upon them appropriately.

One of the most common invocations of the #include directive is the following:

#include <stdio.h>

This directive locates the file, stdio.h, and places its contents into a source file. The use of angle brackets around the filename indicates that it is stored in a directory whose path is known to the C compiler, and which is defined as the standard storage path for header files. stdio.h itself contains several preprocessor statements, including #define and #include statements, which are resolved by the preprocessor appropriately.

Let’s define a simple program which can be used to test this. The program will be the standard “hello, world” program as defined in The C Programming Language (Brian Kernighan & Dennis Ritchie, 2nd Edition).

#include <stdio.h>

int main(void)
{
    printf("hello, world\n");
    return 0;
}

Now, we can see some of the results when this is passed through the C preprocessor:

typedef long unsigned int size_t;
typedef unsigned char __u_char;
typedef unsigned short int __u_short;
typedef unsigned int __u_int;
typedef unsigned long int __u_long;
typedef signed char __int8_t;
typedef unsigned char __uint8_t;
typedef signed short int __int16_t;
typedef unsigned short int __uint16_t;


struct _IO_FILE {
  int _flags;
  char* _IO_read_ptr;
  char* _IO_read_end;
  char* _IO_read_base;
  char* _IO_write_base;
  char* _IO_write_ptr;
  char* _IO_write_end;
  char* _IO_buf_base;
  char* _IO_buf_end;
  char *_IO_save_base;
  char *_IO_backup_base;
  char *_IO_save_end;
  struct _IO_marker *_markers;
  struct _IO_FILE *_chain;
  int _fileno;
  int _flags2;
  __off_t _old_offset;
  unsigned short _cur_column;
  signed char _vtable_offset;
  char _shortbuf[1];
  _IO_lock_t *_lock;
  __off64_t _offset;
  void *__pad1;
  void *__pad2;
  void *__pad3;
  void *__pad4;
  size_t __pad5;
  int _mode;
  char _unused2[15 * sizeof (int) - 4 * sizeof (void *) - sizeof (size_t)];
};


extern int fprintf (FILE *__restrict __stream,
      __const char *__restrict __format, ...);
extern int printf (__const char *__restrict __format, ...);
extern int sprintf (char *__restrict __s,
      __const char *__restrict __format, ...) __attribute__ ((__nothrow__));
extern int vfprintf (FILE *__restrict __s, __const char *__restrict __format,
       __gnuc_va_list __arg);
extern int vprintf (__const char *__restrict __format, __gnuc_va_list __arg);
extern int vsprintf (char *__restrict __s, __const char *__restrict __format,
       __gnuc_va_list __arg) __attribute__ ((__nothrow__));


int main(void)
{
    printf("hello, world\n");
    return 0;
}

Most of the file has been truncated, but as we can see, the stdio.h header contains typedef declarations for various types, structure definitions including the above one for the FILE type as used in the file input/output routines, and function declarations. By being able to include this file, we save ourselves a great deal of time and work that would otherwise be spent copying all of these declarations into our program manually.

While the above form works for the standard header files, the directory holding the standard headers is typically read-only for non-administrative users. There is, therefore, another way to specify the location of a header file, which may be an absolute path or a path relative to the working directory. A pair of #include directives of this type, using a relative and then an absolute path, is shown below.

#include "foo.h"
#include "/home/jrandom/bar.h"

The operation of these preprocessor statements is similar to that of the one used for stdio.h; the major difference is in where the files are located. Instead of checking the standard directory for header files, the first definition checks the same directory as the source file for a header file named foo.h, while the second checks the absolute path leading to the /home/jrandom directory for a file named bar.h.

As it is common practice in C programming to leave #define statements, function prototypes and structure definitions in separate header files, this allows us to create our own header files without having to access the standard directory for header files.

The other particularly common preprocessor statement is #define. The #define statement has two parts, an identifier and a token sequence; the preprocessor replaces all instances of the identifier with the token sequence. This is useful for giving legible names to values throughout the source code, particularly to so-called “magic numbers” whose purpose is not obvious from observation. A few examples of how this may be used are shown below:

#define MAX_FILENAME 255 /* Defines the maximum length of a filename path */
#define DIB_HEADER_SIZE 40 /* Defines the size of a BMP DIB header in bytes */
#define FOO_STRING "foobarbazquux"

In most cases, the #define directive is simply used to provide readable macros for obscure or complex definitions, but there is another sort of functionality for which the #define statement can be used. The #define statement can define a macro with arguments, which is an effective way of creating shorthand for a piece of simple code which one doesn’t want to constantly repeat, but for which one doesn’t want the overhead of a function call. An example of this is shown below:

#define SQUARE(x) (x) * (x)

We might see this definition invoked in a program like so:

#include <stdio.h>

#define SQUARE(x) (x) * (x)

int main(void)
{
    int a;

    printf("Enter an integer: ");
    scanf("%d", &a);
    printf("The square of %d is %d\n", a, SQUARE(a));
    return 0;
}
When this program is compiled, the SQUARE(a) invocation is replaced by (a) * (a). Note the parentheses around the arguments in the macro body; these are essential for preserving the appropriate order of operations. Let’s say that we were to define SQUARE(x) as the following:

#define SQUARE(x) x * x

and then call it with the following code:

SQUARE(5 + 4)

This would expand out to the following:

5 + 4 * 5 + 4

As multiplication precedes addition, the multiplication in the middle would be performed first, multiplying 4 and 5 to give 20, and then the flanking additions would be performed, giving an answer of 29. This falls well short of the 81 that we would expect from the square of 9. Therefore, it is important to define your macros in accordance with the expected order of operations.

Macros can have more than one argument, such as the following definition for a macro to find the larger of two numbers:

#define max(a, b) ((a) > (b) ? (a) : (b))

Having defined something, we may want to undefine it further down the source file, possibly to prevent interference with certain operations, or to ensure that something is a function rather than a macro. For instance, in the standard libraries for low-power embedded platforms, getchar() and putchar() may be defined as macros in order to prevent the overhead of a function. In order to undefine something, we use #undef. The following code would undefine the SQUARE and max operation which we defined above:

#undef SQUARE
#undef max

Beyond the realms of #include and #define lie the conditional preprocessor directives. The first set of these are used to check whether something has already been defined, while the other set are used to check whether a C statement is true or false. We’ll discuss the definition-related directives first.

#ifdef is used to check if something has already been defined, while #ifndef is used to check whether something has not been defined. In professional code, this is regularly used to check the operating system and other details about the system which the program is to be compiled for, as the elementary operations which make up basic routines differ on different systems. We can also check if something is defined using the “defined” operator; this is useful if we want to continue checking after an #ifdef or #ifndef statement which was not satisfied.

Let’s say that we had a piece of source code which we needed to maintain on Windows, Mac OS X and Linux. Various bits of the source code might not apply to one or more of those operating systems. We could therefore hide the bits of source code that don’t apply to the current operating system using the following:

#ifdef _WIN32
/* Windows-specific code */
#elif defined MACOSX
/* Mac OS X-specific code */
#elif defined LINUX
/* Linux-specific code */
#endif
Note the use of #endif to close our set of conditional directives. The remaining conditional directives work as follows: #if checks whether a constant expression is true and proceeds if it is; #elif checks an alternative if the preceding condition was not satisfied; #else is a universal alternative if none of the preceding conditions were satisfied; and #endif closes a block of conditional preprocessor statements. These operations work very similarly to the if…else if…else statements of C itself. The following example checks whether we are compiling for a 32-bit or 64-bit system:

#if !(defined __LP64__ || defined __LLP64__) || \
    (defined _WIN32 && !defined _WIN64)
/* we are compiling for a 32-bit system */
#else
/* we are compiling for a 64-bit system */
#endif

In this code, we’re looking for a definition of __LP64__ or __LLP64__, which define data models for 64-bit processors, to be false, or a definition of _WIN32, which defines a Windows software platform, to be true without a corresponding definition of _WIN64, which defines a 64-bit version of Windows, to be true. If this is true, the program is compiled for a 32-bit system, which will have different machine instructions to the 64-bit system.

While there are some other details of the preprocessor to discuss, they are best left to external reading. To conclude, there are a number of predefined macros in the C preprocessor, such as __LINE__, which expands to the current line number, and __FILE__, which expands to the name of the current source file. The C preprocessor can be somewhat obscure, but it gives the C language a great deal of flexibility – the sort of flexibility that sees its use on everything from microcontrollers to supercomputers.

A Project With Source Code: A Snake Clone in Allegro

#include <allegro.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define TILE_SIZE 20
#define TILES_HORIZ 32
#define TILES_VERT 24
#define MAX_ENTITIES 768

/* Length of snake */
int length = 5;
/* Grid reference for food */
int food_location[2] = {-1, -1};
/* Grid reference array for segments of snake */
int snake_segment[MAX_ENTITIES + 1][2];
/* Direction of snake - 0 for up, 1 for right, 2 for down, 3 for left */
int snake_direction = 3;

BITMAP *back, *snake_body, *food, *game_over;

/* Function prototypes */
void setup_screen();
void create_snake();
void draw_snake(int mode);
void set_food_location();
void move_snake();
int collision_check();
void get_input();
void cleanup();

int main(void)
{
    int check;
    clock_t last_cycle;

    /* Set up Allegro */
    allegro_init();
    install_keyboard();
    srand(time(NULL));

    /* Establish beginning conditions */
    setup_screen();
    create_snake();
    draw_snake(0);
    set_food_location();
    last_cycle = clock();

    while (!key[KEY_ESC]) {
	if ((clock() - last_cycle) / (double) CLOCKS_PER_SEC >= 0.1) {
	    last_cycle = clock();
	    check = collision_check();
	    /* If snake collided with walls or itself, end the game */
	    if (check == 1) {
		break;
	    } else if (check == 2) {
		/* If snake coincided with food, extend snake and reset food
		   location */
		length++;
		set_food_location();
	    }
	    get_input();
	    move_snake();
	}
    }

    game_over = load_bitmap("game_over.bmp", NULL);

    /* Display game over message when collision detected */
    while (!key[KEY_ESC]) {
	blit(game_over, screen, 0, 0, 0, 0, SCREEN_W, SCREEN_H);
    }

    cleanup();
    return 0;
}
END_OF_MAIN()


void setup_screen()
{
    int i;

    set_gfx_mode(GFX_AUTODETECT_WINDOWED, 640, 480, 0, 0);
    back = create_bitmap(SCREEN_W, SCREEN_H);

    /* Create white grid on background bitmap, blit to screen */
    for (i = 0; i < SCREEN_W; i += TILE_SIZE) {
	vline(back, i, 0, SCREEN_H, makecol(255, 255, 255));
    }
    for (i = 0; i < SCREEN_H; i += TILE_SIZE) {
	hline(back, 0, i, SCREEN_W, makecol(255, 255, 255));
    }

    blit(back, screen, 0, 0, 0, 0, SCREEN_W, SCREEN_H);
}

void create_snake()
{
    int i, j;

    for (i = 0, j = 15; i < length; j++, i++) {
	snake_segment[i][0] = j;
	snake_segment[i][1] = 12;
    }
}

void draw_snake(int mode)
{
    int i;

    if (snake_body == NULL) {
	snake_body = load_bitmap("snake_body.bmp", NULL);
    }
    for (i = 0; i < length; i++) {
	draw_sprite(screen, snake_body, snake_segment[i][0] * TILE_SIZE + 1,
		    snake_segment[i][1] * TILE_SIZE + 1);
    }

    /* If function called from move_snake(), remove final segment of snake
       from the screen */
    if (mode == 1) {
	blit(back, screen, snake_segment[length][0] * TILE_SIZE,
	     snake_segment[length][1] * TILE_SIZE,
	     snake_segment[length][0] * TILE_SIZE,
	     snake_segment[length][1] * TILE_SIZE,
	     TILE_SIZE, TILE_SIZE);
    }
}

void set_food_location()
{
    int i, valid;

    if (food == NULL) {
	food = load_bitmap("food.bmp", NULL);
    }
    /* Ensure food is not positioned on a snake segment */
    do {
	valid = 1;
	food_location[0] = rand() % TILES_HORIZ;
	food_location[1] = rand() % TILES_VERT;
	for (i = 0; i < length; i++) {
	    if (food_location[0] == snake_segment[i][0] &&
		food_location[1] == snake_segment[i][1])
		valid = 0;
	}
    } while (!valid);

    draw_sprite(screen, food, food_location[0] * TILE_SIZE + 1,
		food_location[1] * TILE_SIZE + 1);
}

void move_snake()
{
    int i;

    /* Move all grid references for snake segments up one position */
    for (i = length - 1; i >= 0; i--) {
	snake_segment[i + 1][0] = snake_segment[i][0];
	snake_segment[i + 1][1] = snake_segment[i][1];
    }

    /* Then, change the appropriate reference point depending on the snake's
       direction */
    if (snake_direction == 0) {
	snake_segment[0][1]--;
    } else if (snake_direction == 1) {
	snake_segment[0][0]++;
    } else if (snake_direction == 2) {
	snake_segment[0][1]++;
    } else if (snake_direction == 3) {
	snake_segment[0][0]--;
    }

    draw_snake(1);
}


int collision_check()
{
    int i;

    /* Snake collided with walls - end game */
    if (snake_segment[0][0] < 0 || snake_segment[0][0] >= TILES_HORIZ ||
	snake_segment[0][1] < 0 || snake_segment[0][1] >= TILES_VERT) {
	return 1;
    }

    /* Snake collided with itself - end game */
    for (i = 1; i < length; i++) {
	if (snake_segment[0][0] == snake_segment[i][0] &&
	    snake_segment[0][1] == snake_segment[i][1]) {
	    return 1;
	}
    }

    /* Snake coincided with food - extend snake and reset food position */
    if (snake_segment[0][0] == food_location[0] && snake_segment[0][1] ==
	food_location[1]) {
	return 2;
    }

    return 0;
}

void get_input()
{
    if (key[KEY_UP] && snake_direction != 2) {
	snake_direction = 0;
    }
    if (key[KEY_RIGHT] && snake_direction != 3) {
	snake_direction = 1;
    }
    if (key[KEY_DOWN] && snake_direction != 0) {
	snake_direction = 2;
    }
    if (key[KEY_LEFT] && snake_direction != 1) {
	snake_direction = 3;
    }
}

void cleanup()
{
    destroy_bitmap(back);
    destroy_bitmap(snake_body);
    destroy_bitmap(food);
    destroy_bitmap(game_over);
}

Net Neutrality And The Fight Against The Tea Party Movement

This week, the Federal Communications Commission made the monumental decision to classify internet access as a utility, enshrining net neutrality (i.e. the equitable distribution of internet resources to all legal services, no matter what the service is or who owns it) in the United States and striking a decisive blow against the cable companies of the US. I welcome this decision, working as it does in favour of both the common internet user and those companies providing true innovation on the internet – such as Microsoft, Google, Facebook, Netflix, et cetera. Of course, Comcast, Time Warner Cable and so on have protested this decision, but I think it’s time for them to be cut down to size, given their distinct lack of innovation, their oligopolistic greed and the fact that they have consistently been among the most unfriendly and unaccommodating companies around, notorious for their dismal customer service and their disregard for any sort of customer satisfaction.

The protests of Comcast, Time Warner Cable and so on aren’t surprising; after all, they have reasons for wanting to protect their oligopoly on the provision of internet connections, even if these work against their customers. Not surprising either are the protests of Ted Cruz, one of the more insipid members of the Tea Party movement of the Republican Party of the United States. Let’s get this straight off the bat: Ted Cruz is an ignoramus, ready to fight any sort of sensible decision as long as he can get one up on the Democratic Party – you know, like the rest of the Tea Party. He’s also a dangerous ignoramus, being the chairman of the Senate Commerce Subcommittee on Science and Space despite having next to no knowledge of science – he’s not only a climate change denier, but more terrifyingly, a creationist. What’s more, he’s very clearly in the pocket of the big cable companies of the US. However, the very fact that he’s a crooked, science-denying ignoramus makes him predictable, and we shouldn’t be surprised that he’s fighting on the side of the people who pay him to do so.

What is surprising and more than a little worrying, though, is the fact that anybody has been able to take him seriously. More than a few have, nevertheless, claiming that governmental ‘interference’ will cause the downfall of the internet. The people saying this appear to be the same selfish individualists who have caused the recent outbreaks of measles in the United States due to their strident disregard for public safety by refusing to vaccinate their children. Their thought process seems to be that anything that they can’t perceive as directly helping them and which has the smell of government about it harms their freedom, in a sort of “gubmint bad” sense of the term. This applies even when the end result of the process will actually help them, by not having companies run roughshod over the concept of competition and not having them straitjacketing any company which doesn’t pay a king’s ransom to have their services provided at full speeds.

I’ll be fair here and state that my politics have traditionally been at least centre-left, in the European social democratic tradition, so I’m inherently going to be somewhat opposed to the principles of the Republican Party (and more recently, to the Democrats as well). That said, the trouble here isn’t capitalism, since on many occasions, the competition of a well-regulated market can benefit innovation and lead to new opportunities which improve our lives. However, the oligopoly of the American internet provider market does nothing to benefit innovation and without net neutrality, will actually harm it. Don’t find yourselves roped in by the selfish words of crooked politicians, paid to take a stand and ignorant of the true details behind the issue and if you’re in the US, don’t give the Tea Party any of your credence or support; they’re not on your side.

A new job and a dead GPU: An excuse for a new gaming PC

Something quite notable in my life has happened that I forgot to mention in my last post. After seven years in third-level education and just as much time spent in my previous job as a shop assistant in a petrol station, I’ve finally got a job that is relevant to what I’m studying and am most proficient at. I’m now working in enterprise technical support for Dell, which is quite a change, but one that draws on both the technical skills learned at DIT and over almost twenty years of playing around with computers in my own time, and the customer service skills I picked up in my last job. Notably, the new job comes with a considerable increase in pay; while the two-and-a-half-times increase per annum comes mostly from the fact that I now work five days a week, I am still making more than I would have working full time in my previous job.

Coincidentally, very recently, I experienced some bizarre glitches on my primary desktop computer, where the X Window System server on Linux appeared to freeze every so often, necessitating a reboot. Resolving the cause of the problem took some time, from using SSH to look at the Xorg logs when the crash occurred to discovering that the issue later manifested itself occasionally as graphical glitches rather than a complete freeze of the X Window System, then later experiencing severe artifacting in games on both Linux and Windows. In the end, the diagnosis led to one conclusion – my five-year-old ATI Radeon HD 4890 graphics card was dead on its feet.

Fortunately, I had retained the NVIDIA GeForce 8800 GTS that the computer had originally been built with, so I was able to keep my primary desktop going for everyday tasks by swapping the old GPU in for the newer, dead one. However, considering the seven years that I’ve got out of this computer so far, I had already been considering building a new gaming desktop during the summer to upgrade from a dated dual-core AMD Athlon 64 X2 to something considerably more modern. The death of my GPU, while not ultimately a critical situation – after all, I did have a replacement, a further three computers that I could reasonably fall back on and five other computers besides – did give me the impetus to speed up the process, though.

After looking into the price of cases, I decided that I would reuse an old full-tower case that currently holds my secondary x86 desktop (with a single-core AMD Athlon 64 and a GeForce 6600 GT), adapting it for the task by cutting holes to accommodate some 120mm case fans and spray-painting it black to cover up the discoloured beige on the front panel. Ultimately, this step will likely cost me almost as much as buying a new full-tower case from Cooler Master, but will at least allow me to keep my current desktop in reserve without having to worry where to find the space to put it. A lot of the cost comes from purchasing the fans, adapters to put 2.5” and 3.5” drives in 5.25” bays and selecting a card reader to replace the floppy drive that will be incompatible with my new motherboard. Nevertheless, the case is huge, has plenty of space for placing new components and should be much better for cooling than my current midi-tower case, even considering the jerry-rigged nature of it.

I had considered quite some time ago that I would go for a reasonably fast, overclock-friendly Core i5 processor, and found that the Core i5-4690K represents the best value for money in that respect – the extra features of the Core i7 are unnecessary for what I’ll be doing with the computer. To get the most out of the processor, I considered the Intel Z97 platform a necessity and was originally considering the Asus Z97-P, before I realised that it had no support for multi-GPU configurations. To be fair, I haven’t actually used either SLI or CrossFireX at any point, but I do like having the option of using them later, so I eventually settled on the much more expensive but more appropriate Asus Z97-A, which supports both SLI and CrossFireX, provides the one PS/2 port I need to accommodate my Unicomp Classic keyboard without using up a USB port, and seems to have sufficient headroom for overclocking the i5-4690K.

To facilitate overclocking, I have also chosen to purchase 16GB of Kingston 1866MHz DDR3 RAM and an aftermarket Cooler Master Hyper 212 Evo CPU cooler to replace the stock Intel cooler. I’m not looking for speed records here, but would like to have the capacity to moderately overclock the CPU to pull out the extra operations-per-second that might give me an edge in older, less GPU-intensive games. I’ve also gone for some Arctic Silver 5 cooling paste, since cooling has been a concern for me with previous builds and I’d like to make the most of the aftermarket cooler.

Obviously, being a gaming desktop, the GPU will be a big deal. I had originally looked at the AMD Radeon R9 280X as an option, but the retailer that I have purchased the majority of my parts from had run out of stock. As a consequence, I’ve gone a step further and bought a factory-overclocked Asus Radeon R9 290, hoping that the extra graphical oomph will be useful when it comes to playing games like Arma 3, where I experienced just about adequate performance with my HD 4890 at a diminished resolution. The Arma series has been key in making me upgrade my PCs before, so I’m not surprised that Arma 3 is just as hungry for GPU power as its predecessors.

I’ve also gone for a solid-state drive for the first time in order to speed up both my most resource-intensive games and the speed of Windows. I’ve purchased a Crucial MX100 128GB 2.5” SSD, which should be adequate for the most intensive games, while secondary storage will be accommodated by a 1TB Western Digital drive for NTFS and a 320GB Hitachi drive to accommodate everything to do with Linux. I also bought a separate 1TB Western Digital hard drive to replace the broken drive in my external hard drive enclosure, which experienced a head crash when I stupidly let it drop to the floor. Oops. Furthermore, I’ve also gone for a Blu-Ray writer for my optical drive – I’m not sure whether I’ll ever use the Blu-Ray writing capabilities, but for €15 more than the Blu-Ray reader, I decided to take the plunge. After all, I’m spending enough already.

Last but not least is the PSU. “Don’t skimp on the power supply”, I have told several of my friends through the years and this was no exception. Taking in mind the online tier lists for PSUs, I considered myself quite fortunate to find a Seasonic M12II 750W power supply available for under €100, with fully-modular design and enough capacity to easily keep going with the parts that I selected. The benefits for cable management from a modular power supply can’t be overstated, which will be useful even with the generous space in my case.

Overall, this bundle will cost me a whopping €1,500 – almost double what I spent on my current gaming desktop originally. Of course, any readers in the United States will scoff at this price, benefited by the likes of Newegg, but in Ireland, my choices are somewhat more limited, with Irish-based retailers being very expensive and continental European retailers not being as reliable when it comes to RMA procedures if something does go wrong. Nevertheless, I hope the new computer will be worth the money and provide the sort of performance gain that I haven’t had since I replaced my (again, seven-year-old) Pentium III system with the aforementioned single-core Athlon 64 system.

I’ll be looking forward to getting to grips once again with another PC build. Here’s hoping that the process will be a smooth one!

Historical Operating Systems: Xerox GlobalView

Author’s Note: The demonstrations in this article are based on Xerox GlobalView 2.1, the final release of the operating system and used a software collection available from among the links here:

Xerox is not a name which one would usually associate with computing, being far better known for their photocopying enterprise. For this reason, it is somewhat bizarre to look at the history of Xerox and realise that through their PARC (Palo Alto Research Center), Xerox were one of the most revolutionary computer designers of all time. Their first design, the Alto minicomputer, was released in 1973 and introduced a functioning GUI, complete with WYSIWYG word processing and graphical features, more than ten years before the first developments by any other company. Indeed, the Alto represented the concept of the personal computer several years before even the Apple II, the Atari 8-bit family and the Radio Shack TRS-80 arrived in that sector, and at a time when most computers still had switches and blinkenlights on their front panels.

The Alto was never sold as a commercial product, instead being distributed throughout Xerox itself and to various universities and research facilities. Xerox released their first commercial GUI workstation, the Xerox 8010 (later known as the Star), in 1981, but by that stage, they had presented their work to many other people, including Apple’s Steve Jobs and Microsoft’s Bill Gates. Microsoft and Apple would soon release their own GUI operating systems, based heavily on the research of Xerox PARC, and would ultimately compete to dominate the market for personal computer operating systems while Xerox’s work remained a footnote in their success.

The Xerox Star was relatively unsuccessful, selling in the tens of thousands. Part of the reason for its lack of success, despite its technical advantages, was that a single Star workstation cost approximately $16,000 in 1981 – $6,000 more than the similarly unsuccessful Apple Lisa, and over $10,000 more than the Macintosh 128K when it was released in 1984. Consequently, the people who could have made most immediate use of a GUI operating system, including graphic designers, typically couldn’t afford it, while those who could afford it were more likely in the market for computers suited to data processing, like VAX minicomputers or IBM System/3 midrange computers.

Nevertheless, Xerox continued to market the Star throughout the early 1980s. In 1985, the expensive 8010 workstation was replaced with the less expensive and more powerful 6085 PCS, built on a different hardware platform. The operating system and application software were rewritten as well for better performance, and renamed ViewPoint. By this stage, though, the Apple Macintosh was severely undercutting even its own stablemate, the Lisa, let alone Xerox’s competing offering. Meanwhile, GUI operating environments were beginning to pop up elsewhere, with the influential Visi On office suite already on IBM-compatible PCs and Microsoft Windows due to arrive at the end of the year, not to mention the release of the Commodore Amiga and the Atari ST.

Eventually, Xerox stopped producing specialised hardware for their software and rewrote it for IBM PC-compatible computers – along with Sun Microsystems’ Solaris – in a form called GlobalView. Since the Xerox Star and ViewPoint software was written in a language called Mesa – later an influence on Java and Niklaus Wirth’s Modula-2 language – GlobalView originally required an add-on card to provide the Mesa environment, but in its final release it ran as a layer on top of Windows 3.1, 95 or 98 via an emulator.

As a consequence of running in this emulated environment, Xerox GlobalView 2.1 is not a fast operating system. It takes several minutes to boot on the VirtualBox installation of Windows 3.1 which I used for the process, most of which seems to be I/O-bound, since the host operating system runs about as fast as Windows 3.1 can on any computer. The booting process is also rather sparse and cryptic, with the cursor temporarily replaced by a set of four digits, the meaning of which is explained only in difficult-to-find literature on GlobalView’s predecessors.

Once the booting process is complete, one of the first things that you may notice is that the login screen doesn’t hide the fact that Xerox fully intended this system to be networked among several computers. This was a design decision that persisted from the original Star in 1981 and even further back with the Alto. Since I don’t have a network to use the system with, I simply entered an appropriate username and password and continued on, whereupon the system booted up like any other single-user GUI operating system.

Looking at screenshots of the Xerox Star and comparing it with the other early GUI systems that I have used, I can imagine how amazing something like the Xerox Star looked in 1981 when it was released. It makes the Apple Lisa look vaguely dismal in comparison, competes very well with the Apple Macintosh in elegance and blows the likes of Visi On and Microsoft Windows 1.0 out of the water. Xerox GlobalView retains that same look, but by 1996, the lustre had faded and GlobalView looks rather dated and archaic in comparison to Apple’s System 7 or Windows 95. Nevertheless, GlobalView still has a well-designed and consistent GUI.


Astounding in 1981, but definitely old-fashioned by 1996.

GlobalView’s method of creating files is substantially different to that used by modern operating systems and bizarrely resembles the method used by the Apple Lisa. Instead of opening an application, creating a file and saving it, there is a directory containing a set of “Basic Icons”, which comprise blank documents for the various types of documents available, including word processor documents, paint “canvases” and new folders. This is similar to the “stationery paper” model used by the Lisa Office System, although GlobalView doesn’t extend the office metaphor that far.

Creating a new document involves chording (pressing both left and right mouse buttons at the same time) on a blank icon in the Basic Icons folder, selecting the Copy option and clicking the left mouse button over the spot where you wish to place the new icon. Once the icon has been placed, the document can be opened in much the same way as on any newer PC operating system. By default, documents open in display mode, and you need to actually click a button before they can be edited.

GlobalView can be installed as an environment by itself, but is far more useful when you install the series of office applications that come with it. As with any good office suite, there is a word processor and a spreadsheet application, although since the Xerox Star pre-dated the concept of computerised presentations, there is no equivalent to Microsoft’s PowerPoint. There is also a raster paint program, a database application and an email system, among others.

It’s difficult to talk about GlobalView without considering its historical line of descent, and it’s clear that while the Xerox Star presented a variety of remarkable advances in GUI design, by 1996, GlobalView was being developed to placate the few remaining organisations who had staked their IT solutions on Xerox’s offerings in the past. The applications no longer offered any advance over the competition. In many cases, they feel clunky – the heavy reliance on the keyboard in the word processor is one example, made more unfriendly to the uninitiated by not following the standard controls that had arisen on IBM PC-compatibles and Macintoshes. Still, considering the historical context once again, these decisions feel idiosyncratic rather than clearly wrong.


The paint program isn’t too bad, though.

Using GlobalView makes me wonder what might have become of personal computing if Xerox had marketed their products better – if, in fact, they could have marketed them better. Of course, even by the standards of the operating systems that were around by the time of the last version of GlobalView, the interface and applications had dated, but that interface had once represented the zenith of graphical user interface design. Like the Apple Lisa, the Xerox Star and its successors represent a dead end in GUI design, and one that might have led to some very interesting things had it been pursued further.