World War II Grabbag: Hearts of Iron III & Il-2 Sturmovik

Over the past couple of weeks, I’ve been indulging in a few of my recent purchases from the Steam and GOG.com Summer Sales. Among these have been two games set in World War II, namely Hearts of Iron III and Il-2 Sturmovik. While I haven’t played either game enough to fulfil my criteria when it comes to reviewing them (both games have a campaign mode, which I haven’t completed in either case), I’ll give you my first impressions of the games. While the two games are in very different genres – Hearts of Iron III is a “real-time-with-pause” grand strategy game, while Il-2 Sturmovik is a combat flight simulator – the games share at least one element aside from their historical setting: they are both very involving and extremely complex.

To start off, Hearts of Iron III, developed by Paradox Interactive and part of their collection of grand strategy games, including the Crusader Kings and Europa Universalis series, places you in the role of leader of a country between 1936 and 1948, encompassing the years between Nazi Germany’s reoccupation of the Rhineland and the start of the Cold War. World War II is an inevitability, but it doesn’t need to turn out as it did in reality and the game allows you to explore possibilities like France never falling to Germany, an expansionist United States joining the Axis – or, if you want to go really bizarre, the Comintern – and using their industrial might to take Central America, or Germany forming its Greater German Reich and holding Europe firmly in its grasp.

The game is based around three factions, the Allies, Axis and Comintern, who fight for victory points, which are based around the world map and correspond to important cities and regions. The Allies naturally attract democratic nations, the Axis naturally attract nations under authoritarian governments such as fascism and national socialism, while the Comintern attract socialist and communist nations. Through a combination of military might and diplomatic influence, the three factions attempt to attract new nations to their cause or to subsume them into their own structure, bolstering their claim upon the world. However, a conquered nation may choose to resist, forming a government-in-exile, awaiting assistance from their allies.

The world map encompasses most of the world’s surface, with the exception of Antarctica and the Arctic Circle, both of which are militarily useless at that point in time. The map is subdivided into regions, some of which are more important than others due to their population, resources, industrial capacity and so on. The most important regions are denoted by the aforementioned victory points, which, when conquered, add those victory points to the total of the faction of which the conquering nation is a member (if any) and also bring the nation whose region has been captured closer to surrender.

However, you can’t just declare war on whoever you like, as your ability to wage war is limited by the will of your population, which is represented in three ways: your national unity, which represents how closely the people of the nation identify with the nation as a cause; your neutrality, which represents how reluctant your population is to go to war; and the threat posed by various other nations. If your neutrality is too high compared to the threat posed by another nation, or the threat you pose to that nation, you will be unable to declare war, while if your national unity is too low, you will be unable to follow political policies aimed towards military mobilisation. However, if your country is already at war, some of these policies can be put into place despite low national unity or high neutrality – and typically, when two factions go to war, all of the constituent nations of those factions will wage war against each other.

To progress in the game, you have to balance multiple different facets of your country’s policies, including the deployment and movement of your country’s military forces; looking after the industrial elements of your country, including balancing military production, reinforcement, production of war supplies and consumer goods; diplomatic engagement with other countries; and espionage and counter-espionage. As mentioned above, the game is very involving with all of its various facets to be managed. This is harder for nations which have to fight on more than one front at a time, in particular the United Kingdom, whose territories overseas are at just as much risk of invasion as their domestic territories, and the Soviet Union, whose expansive territories are thinly reinforced to begin with and who will have to pick their battles intelligently. I recommend starting off with either a nation that will play a small but important role as part of a faction, such as Canada or South Africa, or a small, neutral nation that can join a faction at will, such as Ireland or one of the Central American nations.

While the game is rather abstract in various ways, with entire military divisions being represented by a NATO-style symbol on the map, there is plenty of complexity even at that level of abstraction. Industrial capability is represented by a figure called Industrial Capacity, which affects how many military units can be produced, upgraded or reinforced at any one time, along with how many supplies can be produced to feed and arm your troops. Some of that industrial capacity has to be used to produce consumer goods to keep your citizens happy and productive. Industrial capacity can only be maintained with sufficient levels of various resources, like Energy (representing fuels like coal and peat), Metal (steel, aluminium, etc.) and Rare Materials (such as gold, rubber and phosphates). Often, your country will not produce enough of these resources by itself, necessitating trading with other nations. Trading requires the Money resource, a certain amount of which is produced in the country itself, but which can be attained more quickly by trading your surplus resources to other countries.

On the battlefield, troops require supplies and fuel in order to fight in enemy territory and to fight at their optimal capacity. This requires sufficient infrastructure to be built along the supply train so that supplies can be delivered in a timely fashion, while enemy encirclement can cut you off from all supplies apart from those that can be foraged from the region in which your units reside. Supplies can be airlifted in using transport planes, but these are vulnerable to enemy interception. Battles are waged on land, at sea and in the air between different units, which are strong in various areas (and, in the case of land units, in different terrains) but weak in others. All of this is before the construction of fortifications, radar stations, additional factories, et cetera, or the development of hierarchical military structures from divisions to corps to armies and army groups. Needless to say, after the above summation, there are a lot of things to be taken care of, requiring a great deal of attention.

While you can choose to have various elements of the gameplay controlled by the game’s AI, which does help with the complexity when you’re starting off, the AI can be inclined to make decisions that are at the least slightly boneheaded. This style of game appeals most to the sort of person who will find that anathema in any case – the sort of person who would be known as a “grognard” in tabletop wargaming circles – and it is that sort of person to whom the game will appeal the most.

Il-2 Sturmovik, while in a very different genre, also displays a level of complexity and detail which can be breathtaking in both the positive and negative connotations of the word. Designed by 1C:Maddox, a Russian developer which was a constituent part of 1C Company, one of Russia’s largest independent game developers and publishers, Il-2 Sturmovik focused originally on the Eastern Front of World War II around the eponymous Soviet ground attack aircraft, but over the course of time has amassed several expansion packs which have taken its scope way beyond its original premise, to the Finnish Continuation War of 1941 to 1944, the war in the Pacific Ocean and even to the Western Front in the speculative 1946 expansion pack, which simulates various late-war experimental aircraft that never made it to production.

While, as with Hearts of Iron III, Il-2 Sturmovik can be made easier by adjusting the options to your liking, the ultimate aim of the game is to be an uncompromisingly hardcore combat flight simulator, feeling as close to the real deal as possible with the technology available, and it feels a little like cheating to deny the game that chance by turning off the simulation elements. The game is set at a time when, unlike today’s combat aircraft, whose computerised fly-by-wire systems make them relatively easy to fly and shift the challenge to mastering the avionics, even the best planes had vices, and few aircraft approached the legendary performance of a Spitfire or an Fw 190. In this game, a lot of the challenge is in getting the aeroplane to behave itself even in normal flight, let alone when you’re in a tight dogfight with an enemy plane on your tail.

The flight model in Il-2 Sturmovik is very impressive, capturing the little details which make various planes different, including the tendency for early-model Spitfires to cut out under negative G, the poor low-altitude performance of high-flyers like the P-51 Mustang and MiG-3 and the poor manoeuvrability of several of the heavier aircraft. You also have to manage the state of your plane during flight, with engines that can overheat when they’re kept on full power for too long and excessive stresses on the frame leading to handling difficulties. The planes are all modelled accurately inside as well, with cockpit visibility sometimes becoming a concern with some fighters including the Bf 109 and Hurricane variants.

While the general flight model is a treat to behold, it is in combat where the game really excels. The game captures the challenge of taking down even the slowest aircraft, like early-war bombers and transport planes, especially when you have a stream of tracer rounds coming at you from multiple angles. Different parts of the plane react differently when hit, with aileron, elevator and rudder controls that can be damaged, fuel tanks that can be set on fire or even made to explode and engines which can end up splattering oil over your windscreen or having their cooling systems damaged. An engine that’s been hit doesn’t always fail catastrophically either; you can often feel the gradual loss of power and hear the whining of a failing engine as it slowly succumbs to its damage, necessitating a good deal of care if you want to get back to base in one piece. The pilot can also take damage, with injured legs and arms affecting flight performance and the possibility of bleeding to death.

While I’ve been very impressed with what I’ve seen of the game, I do have one particular complaint about Il-2 Sturmovik, in that the number of expansion packs and the dated UI make it difficult to figure out where to begin. There are numerous missions and campaigns available for multiple air forces, along with a quick mission creator and a comprehensive mission editor, but the game doesn’t really direct you to any one of them – well, aside, perhaps, from the title and original premise of the game.

Another minor niggle is that while everything else in the game is depicted with astounding accuracy, starting up your plane involves nothing more than a single button press, which, to anybody who knows planes, doesn’t hold true for even the simplest general aviation aircraft, let alone World War II warbirds. I’m a little more inclined to let that slide than the UI problems, though, since given the number of different planes and the differences in starting all of them up, most people would just get exasperated trying to follow the complex procedures needed to get the various planes going. Il-2 Sturmovik isn’t a study sim, after all.

I’ve got about 60 hours played in Hearts of Iron III and just over 10 hours in Il-2 Sturmovik, but I predict that I’ll get plenty more hours out of both games. The complexity in both games means that I’ve got a lot to learn and a lot of potential left to exploit.

Tropico – A Retrospective Review

Developed by PopTop Software and released in 2001, Tropico is the first of the eponymous series of construction and management simulation games in which the player takes the role of leader of a Caribbean island, building its economy up from humble beginnings, all while trying to keep the population happy – or at least happy enough not to revolt. Set during the Cold War, Tropico combines its construction and management game mechanics with a tongue-in-cheek sense of humour and perspective on banana republics, where the United States and Russia act as mostly unseen forces who will invade if they are suitably dissatisfied and where a certain level of corruption is not only tolerated but expected – including funnelling money to your own secret Swiss bank account as a nest-egg for your retirement, whether that retirement comes by choice or by force.

There are two different types of game in Tropico: pre-determined scenarios, in which you have particular constraints on your activities, and a random map generator, where you can set various characteristics of the island and the conditions in the game, like how strong the economy is, the political stability of the island and so on, with a corresponding bonus multiplier to your end-of-game score based on the difficulty. With the expansion pack, Tropico: Paradise Island, there are about forty different scenarios, with conditions ranging from an island of ex-convicts with little immigration and a poor reputation, to an island at the whim of a massive fruit conglomerate, to an island where you play the “third cousin, once removed” of Fidel Castro with the objective of amassing as much cash as possible. There’s plenty of diversity in the missions, but the random map generator has plenty of mileage in it as well.

While scenarios will typically start you off with a pre-constructed island, the random map game type starts you off with just about enough infrastructure to start making money: a few farms, a dock, a teamsters’ office and a construction office, along with your palace and a population living in shacks. The farms begin by growing corn, which is good for feeding your hungry population but is not particularly lucrative; they can be set to grow other products instead, including pineapples, tobacco, sugar and bananas. Some of this produce takes a long time to grow but is particularly lucrative once it is being sold, while other crops have particularly harsh requirements for where they can grow. Once the crops have been grown and harvested by your farmers, they’ll be picked up by your teamsters and brought to the dock, where your dockworkers will load the produce onto incoming freighters, which carry away the fruits of your population’s labour and bring in immigrants to expand your workforce. Other basic resource-gathering activities include mining and logging.

Once your activities start making a profit, you can start to diversify your economy by building factories which will take the produce from your farms, mines and logging camps and reprocess it further into a more valuable commodity, or start building hotels and tourist attractions to make your island into a tourist paradise. However, factories require more educated workers and can take quite a long time to become profitable, while Tropico’s tourists prefer locations away from your farmers’ and labourers’ activities.

While you’re busy building up the economy of your island, you also have to keep the population happy by providing them with various facilities and satisfying their needs. Different members of the population have different needs, but in general, your citizens want better housing, sufficient entertainment, a nice environment to live in, their religious and healthcare needs met and so on. Meanwhile, there are various factions on the island who favour different approaches to how the island is run; for instance, militarists favour having many soldiers employed on the island, while environmentalists favour an environmentally friendly approach to economic activities and the religious prefer to have plenty of churches and fewer pubs, cabarets or casinos among the entertainment facilities on the island. Your character also has various traits which can increase or decrease your favour with some of these factions, as well as setting the democratic expectations for your rule based on the way you were installed into power. In a scenario, these are already pre-selected for you, while in a random map game, you have a choice, with several pre-prepared templates representing real-world dictators and revolutionaries – as well as, bizarrely, the mambo singer Lou Bega, who was then particularly popular for his version of “Mambo No. 5”.

Unfortunately, while the concept is very good at creating a challenge for the player in balancing the needs of the citizens with the desire to make money, most of the frustrations in the game come from dealing with the population. There is very little in the way of micromanagement in the game, with your interactions mostly coming from choosing which buildings to place and where, along with the pay for the workers or the price of services at various buildings, which I quite like, but this can sometimes lead to boneheaded decisions by the AI which add artificial difficulty to the game. Construction of new buildings can be mildly annoying, as the pathfinding AI of your workers is poor, and this can keep them from constructing a necessary building as quickly as you might need it. Furthermore, before a building is constructed, the ground on which it will stand needs to be flattened and cleared of obstacles, which becomes more difficult as you move away from the relatively flat coasts and head inland. The frustration comes from the fact that it is often difficult to determine the gradient of a given building plot, since it isn’t very obvious from the graphical style of the game.

Considerably more frustrating are the requirements for keeping a good standard of healthcare and religion on your island. While the other needs might be expensive and time-consuming to keep up, they are at least sensible once you get the buildings constructed. Religion and healthcare, on the other hand, require a lot of buildings for the population, require educated workers – who are at a premium at the start of the game and don’t become much more common later on – and provide no economic benefit once those needs are fulfilled.

What’s more, even when you have got appropriately educated workers, there’s no guarantee that they’ll work in the religious or healthcare facilities, even when the pay for the roles is generous. In one game, I spent more than $30,000 – or, in other words, enough to buy four or five apartment complexes which would satisfy housing needs for up to 60 citizens – trying to entice workers with college education to become doctors in my clinics, only to find that when they arrived, they immediately decided not to become doctors after all, but instead to go into farming or construction, despite the fact that my healthcare provision was sorely lacking due to the lack of staff and that the doctor jobs were set to pay more than three times as much as the jobs they were taking.

Nevertheless, putting aside these concerns, the rest of the game works very well and there is certainly a satisfaction to be derived from seeing profits rolling in from your farms as your teamsters draw the crops out to the docks to be loaded onto the freighters, or from seeing tourists flooding into your hotels as your tourism market expands.

At the same time as dealing with your own population, you must deal with the concerns of the United States and the Soviet Union, both of which take an interest in your activities from afar. The US favours a capitalistic economy, with free elections, while the Soviet Union prefers communism, with little income disparity. Much of your early-game income will come as foreign aid from these superpowers, with the amount increasing as the countries’ favour increases. However, if you have a particularly bad relationship with one country, they may send a military force to depose you – and as their favour is tied to some extent to the happiness of the capitalist or communist factions on Tropico, you can’t afford to ignore either of these factions. You can also slowly improve your reputation with either or both countries by building a diplomatic ministry.

As you play, you will also have the option to pass various edicts which influence policy on the island and your standing with the superpowers looking over your shoulder. As you build more buildings, you gain options like enticing tourism with a Mardi Gras festival, funnelling a bit of the building cost of every new structure to your Swiss bank account or holding a book burning at the behest of your religious faction. On a more personal level, if you identify somebody who may be particularly troublesome, you can bribe or imprison them, or, to the horror of your population, even have them eliminated by your own soldiers. This provides a bit of extra control over the game without sacrificing its aforementioned light touch on micromanagement.

Graphically, Tropico was never that impressive, with isometric sprite-based graphics which weren’t a tour de force, even at the time. Nevertheless, aside from the previously mentioned issues with determining gradient, the graphics are good enough for the job, although the age of the game does rule out any options for widescreen resolutions. On the other hand, the music is a particular highlight of the game, with catchy Latin-style tunes which suit the game very well.

The Tropico series is now up to five entries, with most of the entries building on the setting and gameplay of the original. As a consequence, it’s tempting to skip the first game and just go on to play one of the sequels, but at the same time, the first Tropico did build a very good foundation for the games to come. Despite the occasional frustrations with construction, religion and healthcare, the game is built around a very strong concept and executes it very well. At present, Tropico is available on both Steam and GOG.com along with its pirate-themed sequel, Tropico 2: Pirate Cove, for less than Tropico 3 costs on its own and since the games in the series are frequently on sale through both platforms, if you’re looking for an inexpensive entry-point to the series, the original isn’t a bad place to start.

Bottom Line: Tropico combines strong construction and management fundamentals with a subtle, tongue-in-cheek sense of humour and a very catchy soundtrack, but does have some frustrating elements in managing the population in-game.

Recommendation: Given that the series is frequently on sale at several online distributors, I’d wait for a sale and then snatch it up in the Tropico Reloaded package which includes the sequel.

The C Preprocessor

One of the peculiar things about the C programming language is that so many commonly occurring elements are not actually part of the language, per se. All of the functions in the standard library are actually extensions to C, additional parts which give us the input/output, mathematical and utility features which make C powerful. All of these are contained in a set of header files and binaries which are added to programs during the compilation process.

Another extension to C is the C preprocessor, and it is this that gives us the ability to extend the language and pull those additional features into our programs. The C preprocessor is a small language in its own right, and while it is not Turing-complete, it is useful enough for the purposes for which it is called upon. The preprocessor reads through a C source file, replacing the statements which are important to it with ones which are important to the C compiler.

It is somewhat difficult to explain why the C preprocessor is important, but I will attempt to do so with a brief segue into the history of computer languages. Early high-level programming languages, such as Fortran and COBOL, were notable for being able to do one set of tasks very well and most others not so well at all. In some cases, this led to deficiencies which would be considered ghastly today; ALGOL 58 and 60, for instance, did not define any input/output operations, and any I/O routines would be completely machine-dependent.

In the later 1960s, language designers attempted to create new languages which would be suitable for multi-purpose applications. However, these languages, which included PL/I and ALGOL 68, were designed by committees made up of conflicting personalities, many of whom were desperate to see their pet features included in the languages. As the complexity of the languages grew, so did the complexity of developing an efficient compiler. As computing resources were vastly smaller than they are now, these languages were only suitable for mainframe computers, and even then they did not run efficiently.

Therefore, these language experiments tended to fail. PL/I retains some residual support thanks to IBM, but it is moribund outside the confines of IBM machines; ALGOL 68 is dead and buried. When C came around, Dennis Ritchie was aiming to create a language which implemented enough features to build an operating system and its applications, while still being able to run efficiently on a much less powerful computer than those for which PL/I was designed.

The solution was to create a system in which only the subset of functions required for a specific program would be included, rather than the full set. This made compilation of C more efficient, as the compiler generally only had to be concerned with a small number of functions at once. The method chosen was to use the C preprocessor to keep the declarations of most functions outside of the base language; when C was standardised in 1989 by the ANSI committee, and in 1990 by the ISO, the standard library functions were kept outside the base language proper, with their declarations placed in header files.

Now that the history lesson is over, we can continue on to the operations of the preprocessor. As mentioned above, the preprocessor scans a C source file – or, in some circumstances, another kind of source file; Brian Kernighan famously developed RATFOR, a preprocessor which added C-like control structures to Fortran – and looks for statements that are important to it. It then replaces them with statements that are important to the C compiler or whatever other system the preprocessor is being used for.

The most fundamental operation of the preprocessor is #include. This operation looks for a file which is defined at a path included in the #include directive, then inserts its entire contents into the source file in place of the #include directive. The file’s contents might themselves contain C preprocessor statements, as is common in C header files, so the preprocessor goes through those and acts upon them appropriately.

One of the most common invocations of the #include directive is the following:

#include <stdio.h>

This directive locates the file, stdio.h, and places its contents into a source file. The use of angle brackets around the filename indicates that it is stored in a directory whose path is known to the C compiler, and which is defined as the standard storage path for header files. stdio.h itself contains several preprocessor statements, including #define and #include statements, which are resolved by the preprocessor appropriately.

Let’s define a simple program which can be used to test this. The program will be the standard “hello, world” program as defined in The C Programming Language (Brian Kernighan & Dennis Ritchie, 2nd Edition).

#include <stdio.h>

main()
{
    printf("hello, world\n");
}

Now, we can see some of the results when this is passed through the C preprocessor:

typedef long unsigned int size_t;
typedef unsigned char __u_char;
typedef unsigned short int __u_short;
typedef unsigned int __u_int;
typedef unsigned long int __u_long;
typedef signed char __int8_t;
typedef unsigned char __uint8_t;
typedef signed short int __int16_t;
typedef unsigned short int __uint16_t;

...

struct _IO_FILE {
  int _flags;
  char* _IO_read_ptr;
  char* _IO_read_end;
  char* _IO_read_base;
  char* _IO_write_base;
  char* _IO_write_ptr;
  char* _IO_write_end;
  char* _IO_buf_base;
  char* _IO_buf_end;
  char *_IO_save_base;
  char *_IO_backup_base;
  char *_IO_save_end;
  struct _IO_marker *_markers;
  struct _IO_FILE *_chain;
  int _fileno;
  int _flags2;
  __off_t _old_offset;
  unsigned short _cur_column;
  signed char _vtable_offset;
  char _shortbuf[1];
  _IO_lock_t *_lock;
  __off64_t _offset;
  void *__pad1;
  void *__pad2;
  void *__pad3;
  void *__pad4;
  size_t __pad5;
  int _mode;
  char _unused2[15 * sizeof (int) - 4 * sizeof (void *) - sizeof (size_t)];
};

...

extern int fprintf (FILE *__restrict __stream,
      __const char *__restrict __format, ...);
extern int printf (__const char *__restrict __format, ...);
extern int sprintf (char *__restrict __s,
      __const char *__restrict __format, ...) __attribute__ ((__nothrow__));
extern int vfprintf (FILE *__restrict __s, __const char *__restrict __format,
       __gnuc_va_list __arg);
extern int vprintf (__const char *__restrict __format, __gnuc_va_list __arg);
extern int vsprintf (char *__restrict __s, __const char *__restrict __format,
       __gnuc_va_list __arg) __attribute__ ((__nothrow__));

...

main()
{
    printf("hello, world\n");
}

Most of the file has been truncated, but as we can see, the stdio.h header contains typedef declarations for various types, structure definitions – including the one above for the FILE type used in the file input/output routines – and function declarations. By being able to include this file from elsewhere, we save ourselves a great deal of time and work that would otherwise be spent copying all of these definitions into our program manually.

While the above form works for the standard header files, the directory holding the standard header files is read-only for non-administrative users on many operating systems. There is, therefore, another way to specify the location of an included file, which may be an absolute path or a path relative to the directory of the including source file. A pair of definitions of this type, one relative and one absolute, is shown below.

#include "foo.h"
#include "/home/jrandom/bar.h"

The operation of these preprocessor statements is similar to that of the one used for stdio.h; the major difference is in where the files are located. Instead of checking the standard directory for header files, the first definition checks the same directory as the source file for a header file named foo.h, while the second checks the absolute path leading to the /home/jrandom directory for a file named bar.h.

As it is common practice in C programming to leave #define statements, function prototypes and structure definitions in separate header files, this allows us to create our own header files without having to access the standard directory for header files.
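
As an illustration of what such a header might contain, here is a minimal, purely hypothetical sketch of the foo.h mentioned above, together with a source file that uses it; the struct point type and the distance() function are my own examples rather than anything from the standard library.

/* foo.h - a small project header shared between source files */
struct point {
    int x;
    int y;
};

double distance(struct point a, struct point b);

/* main.c - a source file which uses the header */
#include <stdio.h>
#include <math.h>
#include "foo.h"

double distance(struct point a, struct point b)
{
    /* Pythagorean distance between the two points */
    return sqrt((double) ((a.x - b.x) * (a.x - b.x) +
			  (a.y - b.y) * (a.y - b.y)));
}

int main(void)
{
    struct point origin = {0, 0}, p = {3, 4};

    printf("The distance between the points is %f\n", distance(origin, p));
    return 0;
}

Compiling main.c (with the maths library linked in, typically via -lm on Unix-like systems) then works exactly as if the structure definition and prototype had been typed into the file by hand.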

The other particularly common preprocessor directive is the #define statement. The #define statement has two parts, an identifier and a token sequence; the preprocessor replaces all instances of the identifier with the token sequence. This is useful for defining more legible names throughout the source code, particularly for so-called “magic numbers” whose purpose is not obvious from observation. A few examples of how this may be used are shown below:

#define MAX_FILENAME 255 /* Defines the maximum length of a filename path */
#define DIB_HEADER_SIZE 40 /* Defines the size of a BMP DIB header in bytes */
#define FOO_STRING "foobarbazquux"
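
To make the substitution concrete, a buffer declared using the first of these constants (the variable name here is arbitrary):

char filename[MAX_FILENAME + 1];

reaches the compiler as:

char filename[255 + 1];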

In most cases, the #define directive is simply used to provide readable names for obscure or complex definitions, but there is another sort of functionality which the #define statement offers. It can be used to define a macro with arguments, which is an effective way of creating shorthand for a piece of simple code which one doesn’t want to constantly repeat, but for which one doesn’t want the overhead of a function call. An example of this is shown below:

#define SQUARE(x) (x) * (x)

We might see this definition invoked in a program like so:

#include <stdio.h>
#define SQUARE(x) (x) * (x)

int main(void)
{
    int a;

    printf("Enter an integer: ");
    scanf("%d", &a);
    printf("The square of %d is %d\n", a, SQUARE(a));
    return 0;
}

When this program is preprocessed, the SQUARE(a) invocation is replaced by (a) * (a). Note the brackets around the arguments in the macro; these are essential for preserving the appropriate order of operations. Let’s say that we were to define SQUARE(x) as the following:

#define SQUARE(x) x * x

and then call it with the following code:

SQUARE(5 + 4)

This would expand out to the following:

5 + 4 * 5 + 4

As multiplication precedes addition, the multiplication in the middle would be performed first, multiplying 4 by 5 to give 20, and then the flanking additions would be performed, giving an answer of 29. This falls well short of the 81 that we would expect from the square of 9. Therefore, it is important to define your macros with the expected order of operations in mind.
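
With the bracketed definition we used earlier, the same invocation expands correctly:

(5 + 4) * (5 + 4)

which evaluates as 9 * 9, giving the expected 81. For extra safety, the entire expansion is often wrapped in one more pair of parentheses – #define SQUARE(x) ((x) * (x)) – so that the macro also behaves predictably when it is embedded in a larger expression.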

Macros can have more than one argument, such as the following definition for a macro to find the larger of two numbers:

#define max(a, b) (a) > (b) ? (a) : (b)
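
As a quick sketch of how this might be invoked (the variable name larger is just for illustration):

int larger = max(3, 7);  /* expands to (3) > (7) ? (3) : (7), which evaluates to 7 */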

Having defined something, we may want to undefine it further down the source file, possibly to prevent interference with certain operations, or to ensure that something is treated as a function rather than a macro. For instance, in the standard libraries for low-power embedded platforms, getchar() and putchar() may be defined as macros in order to avoid the overhead of a function call. In order to undefine something, we use #undef. The following code would undefine the SQUARE and max macros which we defined above:

#undef SQUARE
#undef max

Beyond the realms of #include and #define lie the conditional preprocessor directives. The first set of these are used to check whether something has already been defined, while the other set are used to check whether a constant expression is true or false. We’ll discuss the definition-related directives first.

#ifdef is used to check if something has already been defined, while #ifndef is used to check whether something has not been defined. In professional code, this is regularly used to check the operating system and other details about the system which the program is to be compiled for, as the elementary operations which make up basic routines differ on different systems. We can also check if something is defined using the “defined” operator; this is useful if we want to continue checking after an #ifdef or #ifndef statement which was not satisfied.

Let’s say that we had a piece of source code which we needed to maintain on Windows, Mac OS X and Linux. Various bits of the source code might not apply to one or more of those operating systems. We could therefore hide the bits of source code that don’t apply to the current operating system using the following:

#ifdef _WIN32
/* Windows-only headers, for example: */
#include <windows.h>
#include <direct.h>

#elif defined MACOSX
/* OS X-only headers, for example: */
#include <unistd.h>
#include <sys/param.h>

#elif defined LINUX
/* Linux-only headers, for example: */
#include <unistd.h>
#include <linux/limits.h>

#endif

Note the use of #endif to close our set of conditional directives; it belongs to the remaining set of conditional directives. #if checks whether a constant expression is true and proceeds if it is; #elif is used to check another alternative if the preceding condition was not satisfied; #else is a universal alternative if none of the preceding conditions were satisfied; and #endif closes a block of conditional preprocessor statements. These directives work very similarly to the if…else if…else statements in C. The following example checks whether we are compiling for a 32-bit or 64-bit system:

#if !(defined __LP64__ || defined __LLP64__) || defined _WIN32 && \
    !defined _WIN64
/* we are compiling for a 32-bit system */
#else
/* we are compiling for a 64-bit system */
#endif

In this code, we’re looking for a definition of __LP64__ or __LLP64__, which define data models for 64-bit processors, to be false, or a definition of _WIN32, which defines a Windows software platform, to be true without a corresponding definition of _WIN64, which defines a 64-bit version of Windows, to be true. If this is true, the program is compiled for a 32-bit system, which will have different machine instructions to the 64-bit system.
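
As an aside, one of the most common everyday uses of #ifndef is the “include guard” which wraps most header files, ensuring that a header pulled in twice – once directly and once through another header, say – only has its contents inserted the first time around. Guarding the hypothetical foo.h sketched earlier might look like this:

#ifndef FOO_H
#define FOO_H

struct point {
    int x;
    int y;
};

double distance(struct point a, struct point b);

#endif /* FOO_H */

The first time the file is included, FOO_H has not yet been defined, so the contents are kept and FOO_H is defined; on any later inclusion, the #ifndef test fails and everything up to the matching #endif is skipped.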

While there are some other details of the preprocessor to discuss, they are best left to external reading. To conclude, there are a number of predefined macros in the C preprocessor, such as __LINE__, which expands to the current line number, and __FILE__, which expands to the name of the current source file. The C preprocessor can be somewhat obscure, but it gives the C language a great deal of flexibility – the sort of flexibility that sees its use on everything from microcontrollers to supercomputers.
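
As a parting sketch of how those two predefined macros might be used (the DEBUG_PRINT macro here is my own invention, not a standard facility), a simple diagnostic macro could combine them with printf:

#include <stdio.h>

/* Prints the current file name and line number along with a message */
#define DEBUG_PRINT(msg) printf("%s:%d: %s\n", __FILE__, __LINE__, (msg))

int main(void)
{
    DEBUG_PRINT("reached main");
    return 0;
}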

A Project With Source Code: A Snake Clone in Allegro

#include <allegro.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define TILE_SIZE 20
#define TILES_HORIZ 32
#define TILES_VERT 24
#define MAX_ENTITIES 768

/* Length of snake */
int length = 5;
/* Grid reference for food */
int food_location[2] = {-1, -1};
/* Grid reference array for segments of snake */
int snake_segment[MAX_ENTITIES + 1][2];
/* Direction of snake - 0 for up, 1 for right, 2 for down, 3 for left */
int snake_direction = 3;

BITMAP *back, *snake_body, *food, *game_over;

/* Function prototypes */
void setup_screen();
void create_snake();
void draw_snake(int mode);
void set_food_location();
void move_snake();
int collision_check();
void get_input();
void cleanup();

int main(void)
{
    int check;
    clock_t last_cycle;
    
    /* Set up Allegro */
    allegro_init();
    setup_screen();
    srand(time(NULL));
    install_keyboard();

    /* Establish beginning conditions */
    create_snake();     
    draw_snake(0);
    set_food_location();
    last_cycle = clock();

    while(!key[KEY_ESC]) {
	if ((clock() - last_cycle) / (double) CLOCKS_PER_SEC >= 0.1) {
	    last_cycle = clock();
	    move_snake();
	    check = collision_check();
	    /* If snake collided with walls or itself, end the game */
	    if (check == 1) {
		break;
	    } else if (check == 2) {
		/* If snake coincided with food, extend snake and reset food
		   location */
		length++;
		set_food_location();
	    }
	    get_input();
	}
    }

    game_over = load_bitmap("game_over.bmp", NULL);

    /* Display game over message when collision detected */
    while (!key[KEY_ESC]) {
	blit(game_over, screen, 0, 0, 0, 0, SCREEN_W, SCREEN_H);
    }

    cleanup();
    allegro_exit();
}
END_OF_MAIN()

void setup_screen()
{
    int i;

    set_color_depth(24);
    set_gfx_mode(GFX_AUTODETECT_WINDOWED, 640, 480, 0, 0);
    
    back = create_bitmap(SCREEN_W, SCREEN_H);
    clear_bitmap(back);

    /* Create white grid on background bitmap, blit to screen */
    for (i = 0; i < SCREEN_W; i += TILE_SIZE) {
	vline(back, i, 0, SCREEN_H, makecol(255, 255, 255));
    }

    for (i = 0; i < SCREEN_H; i += TILE_SIZE) {
	hline(back, 0, i, SCREEN_W, makecol(255, 255, 255));
    }

    blit(back, screen, 0, 0, 0, 0, SCREEN_W, SCREEN_H);
}

void create_snake()
{
    int i, j;

    for (i = 0, j = 15; i < length; j++, i++) {
	snake_segment[i][0] = j;
	snake_segment[i][1] = 12;
    }
}

void draw_snake(int mode)
{
    int i;
    if (snake_body == NULL) {
	snake_body = load_bitmap("snake_body.bmp", NULL);
    }
    
    for (i = 0; i < length; i++) {
	draw_sprite(screen, snake_body, snake_segment[i][0] * TILE_SIZE + 1,
				snake_segment[i][1] * TILE_SIZE + 1);
    }

    /* If function called from move_snake(), remove final segment of snake
       from the screen */
    if (mode == 1) {
	blit(back, screen, snake_segment[length][0] * TILE_SIZE,
	     snake_segment[length][1] * TILE_SIZE, 
	     snake_segment[length][0] * TILE_SIZE,
	     snake_segment[length][1] * TILE_SIZE,
	     TILE_SIZE, TILE_SIZE);
    }
}

void set_food_location()
{
    int i, valid = 1;
    
    if (food == NULL) {
	food = load_bitmap("food.bmp", NULL);
    }
    
    /* Ensure food is not positioned on a snake segment */
    do {
	valid = 1;
	food_location[0] = rand() % TILES_HORIZ;
	food_location[1] = rand() % TILES_VERT;
	
	for (i = 0; i < length; i++) {
	    if (food_location[0] == snake_segment[i][0] && food_location[1]
		== snake_segment[i][1])
		valid = 0;
	}
    } while (!valid);

    draw_sprite(screen, food, food_location[0] * TILE_SIZE + 1,
		food_location[1] * TILE_SIZE + 1);
}

void move_snake()
{
    int i;

    /* Move all grid references for snake segments up one position */
    for (i = length - 1; i >= 0; i--) {
	snake_segment[i + 1][0] = snake_segment[i][0];
	snake_segment[i + 1][1] = snake_segment[i][1];
    }

    /* Then, change the appropriate reference point depending on the snake's
       direction */
    if (snake_direction == 0) {
	--snake_segment[0][1];
    }
    else if (snake_direction == 1) {
	++snake_segment[0][0];
    }
    else if (snake_direction == 2) {
	++snake_segment[0][1];
    }
    else if (snake_direction == 3) {
	--snake_segment[0][0];
    }

    draw_snake(1);
}

int collision_check()
{
    int i;

    /* Snake collided with walls - end game */
    if (snake_segment[0][0] < 0 || snake_segment[0][0] >= TILES_HORIZ ||
	snake_segment[0][1] < 0 || snake_segment[0][1] >= TILES_VERT) {
	return 1;
    }

    /* Snake collided with itself - end game */
    for (i = 1; i < length; i++) {
	if (snake_segment[0][0] == snake_segment[i][0] &&
	    snake_segment[0][1] == snake_segment[i][1]) {
	    return 1;
	}
    }

    /* Snake coincided with food - extend snake and reset food position */
    if (snake_segment[0][0] == food_location[0] && snake_segment[0][1] ==
	food_location[1]) {
	return 2;
    }

    return 0;
}

void get_input()
{
    if (key[KEY_UP] && snake_direction != 2) {
	snake_direction = 0;
    }

    if (key[KEY_RIGHT] && snake_direction != 3) {
	snake_direction = 1;
    }

    if (key[KEY_DOWN] && snake_direction != 0) {
	snake_direction = 2;
    }

    if (key[KEY_LEFT] && snake_direction != 1) {
	snake_direction = 3;
    }
}

void cleanup()
{
    destroy_bitmap(back);
    destroy_bitmap(snake_body);
    destroy_bitmap(food);
    destroy_bitmap(game_over);
}

The 2015 Formula One Season and Other Thoughts

After the return of Formula One two weeks ago, in which we saw Mercedes take an imperious one-two and look unassailable this year, we’ve had a more surprising result today in Malaysia, where Ferrari took the fight to Mercedes, with Sebastian Vettel exploiting what appears to be a slippery chassis and an improved engine to win decisively against Hamilton and Rosberg. Kimi Raikkonen compounded Ferrari’s success, taking a solid fourth place despite his misfortunes in qualifying and a puncture during the race. After seeing Hamilton romp home to take victory a fortnight ago in Australia, I was concerned that we would see a dominant season from a single driver, with Rosberg, possibly chastened by falling short at the end of last season, perhaps left to pick up the scraps. However, if Ferrari can maintain some degree of consistency in their performances, it might bode better in terms of intrigue throughout the season. At this point, I still expect Mercedes to win the World Constructors’ Championship on the back of greater consistency from both their drivers, but if Vettel and Raikkonen can deliver performances at tracks that don’t have such a focus on top speed, they may present themselves as at least dark horses for the World Drivers’ Championship.

After Ricciardo’s spectacular performances last season, taking three victories in a season where barely anybody else even came close to snatching glory from the Mercedes, he has become team leader at Red Bull with the move of Vettel to Ferrari. Daniil Kvyat, formerly of Toro Rosso, joins him and has acquitted himself well so far, despite reliability problems which prevented him from taking the grid in Australia. After so many years at the top of Formula One in the previous naturally-aspirated formula, Red Bull have struggled to regain their pace with the turbocharged Renault engines. Reliability gremlins struck both cars in Australia, and the Renault engine, even when it is working, still appears to be down on power versus the Mercedes and an improved Ferrari. Unlike last season, when Ricciardo achieved victories, I think that this season will see Red Bull lucky to battle for podiums, more regularly scoring in the middle of the points.

Red Bull’s sister team, Toro Rosso, shares the Renault engines and also suffered some mechanical problems in Australia, in the hands of Max Verstappen. Verstappen has drawn a considerable amount of press for his age, being only 17 years old and by a long way the youngest ever Formula One driver. The son of former Formula One journeyman Jos Verstappen, Max has a notable lack of experience in single-seater racing, with only a single season of Formula 3 under his belt, and joins Formula One after a year of test driving for Toro Rosso in 2014. However, on current evidence in the Formula One races so far, he has quite a bit of natural pace, matching his substantially more experienced team-mate, Carlos Sainz Jr., another new entrant and also the son of a famous racing driver. Despite the limited experience of both drivers, they have quickly brought the fight to the other teams, with Sainz scoring in both of his finishes and Verstappen only denied a points finish by an engine failure.

Williams, regularly best of the rest in 2014 and unlucky not to score a victory on occasion, might have to temper their expectations in 2015. They still have the proven Mercedes engine, have retained both Felipe Massa and Valtteri Bottas from last year and still appear to have a fair degree of pace, but with Ferrari looking stronger than last year, Williams will more likely be caught up in a scrap with the likes of Red Bull, Toro Rosso and Lotus – when their car works properly – for the middle points positions. This is slightly disappointing for Bottas, who scored several well-deserved podiums last season and looks like a likely race winner in the future, but the team may be able to take some solace in the fact that they are likely to be at the front of the battle between the teams that aren’t Mercedes or Ferrari.

Closer to the back of the points positions, Sauber appear to have a quicker car than last year, although they are embroiled in a legal battle with Giedo van der Garde over contract issues that looks like it’ll be a slow burner. Considering one of the drivers that they did choose, I would question their decision not to give van der Garde one of the seats this year; Marcus Ericsson, whose results last year were underwhelming even by the standards of the Caterham team and who didn’t cover himself in glory in the lower single-seater formulae, was signed up in his place. The other choice of driver for the team, Felipe Nasr, is more sensible, despite Nasr being a rookie; he did win a championship at Formula Three level and came third in last year’s GP2 series. Nevertheless, given the prominent change in livery for Sauber, now proudly displaying the colours of Banco do Brasil, one strongly suspects that both drivers were picked for their ability to bring in sponsorship dollars, since Sauber is suspected to be in a weak position financially.

Another team rumoured to be weak financially, and who will also be scrapping for the lower points positions this season, is Force India. Their driver line-up, with the podium-scoring Sergio Perez and the pole position-attaining Nico Hulkenberg, is more experienced than that of Sauber, but their car, despite having a Mercedes engine, does not look especially fast. Helped in Australia by reliability where others lacked it, Force India managed a double points finish, but I suspect they will struggle to keep that up during the rest of the season.

At least, though, for all their financial woes, Sauber and Force India are performing better than McLaren, who look like they’re going to have an annus horribilis. With the conclusion of McLaren’s contract with Mercedes, McLaren have gone back to a partner with whom they enjoyed considerable success in the past, putting Honda engines in the back of their car. Unfortunately, though, the Honda engine is suffering from a distinct lack of development versus Mercedes, Ferrari and even Renault, and is by far the least powerful engine on the grid right now. Trundling around at the back is not a place where we have often seen McLaren, and the car, while reportedly nice to drive, is unbefitting of the most experienced line-up on the grid, with double World Champion Fernando Alonso and Jenson Button, also a World Champion. McLaren will be lucky to score points this season and have already struggled to complete races.

One of the feel-good stories of the pre-season was Manor Marussia’s phoenix-like rise from the ashes to present two cars at Australia. Unfortunately, having completed no testing and with all software wiped from their computers in preparation for auction, neither car turned a wheel in Australia, and it was not until Malaysia that we had a full grid of cars ready to take the start. Will Stevens, who competed in one race last season, and Roberto Merhi, another rookie driver, have both been signed up to drive for the team, but it remains to be seen whether the position is a poisoned chalice or not. The car, a derivative of the 2014 Marussia, was not on the pace in Malaysia, barely scraping through the 107% rule in free practice, although Merhi’s completion of the race shows that the car may well have reliability on its side. Even as a fan of the plucky underdog, I have to admit that the pace of the car looks prohibitively slow, and with the exit of Caterham, who had gone from underdogs in their early seasons to perennial underachievers by the time of their demise, Manor will largely be in a lonely race with themselves. Things are not looking good for the smaller teams.

In terms of tracks for this season, we have gained another classic venue in the Mexican Grand Prix, to be held at the Autodromo Hermanos Rodriguez, but have temporarily lost the German Grand Prix for the first time since 1960. The loss of the German Grand Prix marks another blow for the classic European tracks where so much of Formula One’s history lies. While the move to new markets has occasionally given us gems like Sepang or the Circuit of the Americas, I think it’s terrible that Germany doesn’t have a Grand Prix this year over financial concerns, despite having three successful German drivers on the grid, while Abu Dhabi – a city in a desert notable only for its oil reserves and the obvious artifice of its settlements – maintains its end-of-season place at a dull, largely featureless track that has been the site of some of the most boring races of the last five years, where not even a season coming down to the wire can improve the racing itself.

***

In other news, the BBC finally bit the bullet and sacked Jeremy Clarkson after a career of controversy. To be fair, even as a Top Gear aficionado, from what has been reported of the incident between Clarkson and the BBC producer Oisin Tymon, Clarkson deserved his sacking; assault on a co-worker is very difficult to condone. Nevertheless, it looks like it’s the end of Top Gear as we know it; the ribald, politically incorrect humour of Jeremy Clarkson, Richard Hammond and James May is unlikely to be continued on the BBC. Plenty of names have already been mooted for a completely new set of presenters, several of whom would be good choices for an informative car show, but few of whom would present anything like what we have seen since Clarkson took the reins in 2002.

The bookies’ favourite at the moment is Guy Martin, perennial Isle of Man TT competitor, lorry mechanic and occasional TV presenter. To be fair, Guy Martin would be one of the best choices the BBC could make; not only does Guy have a quirky personality that is interesting to watch, he is genuinely knowledgeable and enthusiastic about motor vehicles and has exceptional mechanical sympathy. This would make him a great choice for the informative car show that I suspect the BBC would try to retool Top Gear into, but I’m not sure that Guy would actually bite – after all, it could affect his ability to race successfully at several of the motorcycle road races that take place during the year in Northern Ireland, some of which provide a lead-up to the TT.

My fear is that the BBC will bow to pressure from outspoken minorities and take the politically correct route unnecessarily. This includes the lobby to have a woman back on the show – several women did present the show during the original run of Top Gear, but the show was retooled precisely because the original formula had poor ratings, and apart from Sabine Schmitz, who is already busy with D-Motor on German television, I can’t think of many female candidates who wouldn’t just be there to tick diversity boxes. Meanwhile, Clarkson will likely find himself a home somewhere on Sky, given his already comfy relationship with several organs of the Murdoch empire, possibly with Richard Hammond and James May in tow, drawing viewers away from the BBC and causing a crisis in an already battered broadcaster.

Finally, I see that Ted Cruz has announced his candidacy for the Republican nomination for President of the United States. I already made my views on Ted Cruz very clear earlier this month, but I hate the man even more now – he was dangerous enough as the head of the Senate Commerce Subcommittee on Science and Space without going for the Presidency as well. While the other Republican candidates look more appealing than Ted Cruz, that isn’t exactly a difficult feat, since lighting my pubes on fire would be more appealing to me than voting for Ted Cruz.

From an objective point of view, it looks like the Republicans will present their third terrible candidate in a row in presidential elections; unfortunately, I don’t have enough confidence in the Democrats to present anything better than a mediocre candidate (because perish the thought that they’d actually be sensible and pick Elizabeth Warren) and I don’t have enough confidence in the American populace not to go for the Republican candidate out of spite. Prove me wrong, America; I’m begging for you to prove me wrong.

Net Neutrality And The Fight Against The Tea Party Movement

This week, the Federal Communications Commission made the monumental decision to classify internet access as a utility, enshrining net neutrality (i.e. the equitable treatment of all legal internet traffic, no matter what the service is or who owns it) in the United States and striking a decisive blow against the cable companies of the US. I welcome this decision, working as it does in favour of both the common internet user and those companies providing true innovation on the internet – such as Microsoft, Google, Facebook, Netflix, et cetera. Of course, Comcast, Time Warner Cable and so on have protested this decision, but I think it’s time for them to be cut down to size, given their distinct lack of innovation, their oligopolistic greed and the fact that they have consistently been among the most unfriendly and unaccommodating companies around, notorious for their dismal customer service and their disregard for any sort of customer satisfaction.

The protests of Comcast, Time Warner Cable and so on aren’t surprising; after all, they have reasons for wanting to protect their oligopoly on the provision of internet connection, even if these work against their customers. Not surprising either are the protests of Ted Cruz, one of the more insipid members of the Tea Party movement of the Republican Party of the United States. Let’s get this straight off the bat: Ted Cruz is an ignoramus, ready to fight any sort of sensible decision as long as he can get one up on the Democratic Party – you know, like the rest of the Tea Party. He’s also a dangerous ignoramus, being the chairman of the Senate Commerce Subcommittee on Science and Space, despite having next to no knowledge of science – he’s not only a climate change denier, but more terrifyingly, a creationist. What’s more, he’s very clearly in the pocket of the big cable companies of the US. However, the very fact that he’s a known crooked, science-denying ignoramus makes him predictable and we shouldn’t be surprised that he’s fighting on the side of the people who pay him to.

What is surprising, and more than a little worrying, though, is that anybody has been able to take him seriously. More than a few have, nevertheless, claiming that governmental ‘interference’ will cause the downfall of the internet. The people saying this appear to be the same selfish individualists who caused the recent outbreaks of measles in the United States through their strident disregard for public safety in refusing to vaccinate their children. Their thought process seems to be that anything they can’t perceive as directly helping them, and which has the smell of government about it, harms their freedom, in a sort of “gubmint bad” sense of the term. This applies even when the end result will actually help them, by preventing companies from running roughshod over the concept of competition and from straitjacketing any company that doesn’t pay a king’s ransom to have its services provided at full speed.

I’ll be fair here and state that my politics have traditionally been at least centre-left, in the European social democratic tradition, so I’m inherently going to be somewhat opposed to the principles of the Republican Party (and more recently, to the Democrats as well). That said, the trouble here isn’t capitalism, since on many occasions the competition of a well-regulated market can benefit innovation and lead to new opportunities which improve our lives. However, the oligopoly of the American internet provider market does nothing to benefit innovation and, without net neutrality, will actively harm it. Don’t find yourselves roped in by the selfish words of crooked politicians, paid to take a stand and ignorant of the true details behind the issue; if you’re in the US, don’t give the Tea Party any of your credence or support – they’re not on your side.

A new job and a dead GPU: An excuse for a new gaming PC

Something quite notable has happened in my life that I forgot to mention in my last post. After seven years in third-level education and just as much time spent in my previous job as a shop assistant in a petrol station, I’ve finally got a job that is relevant to what I’m studying and am most proficient at. I’m now working in enterprise technical support for Dell, which is quite a change, but one that makes use both of the technical skills I learned at DIT and have honed over almost twenty years of playing around with computers in my own time, and of the customer service skills I picked up in my last job. Notably, the new job comes with a considerable increase in my pay; while the two-and-a-half-fold increase per annum comes mostly from the fact that I now work five days a week, I’m still making more than I would have done working full-time in my previous job.

Coincidentally, I very recently experienced some bizarre glitches on my primary desktop computer, where the X Window System server on Linux appeared to freeze every so often, necessitating a reboot. Tracking down the cause took some time: I used SSH to look at the Xorg logs when the crashes occurred, then found that the issue occasionally manifested itself as graphical glitches rather than a complete freeze of the X Window System, and finally began experiencing severe artifacting in games on both Linux and Windows. In the end, the diagnosis led to one conclusion – my five-year-old ATI Radeon HD 4890 graphics card was dead on its feet.
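
For anyone chasing a similar X server freeze, the quickest clues tend to come from the error and warning markers that the X server writes into its log. The snippet below is only a minimal sketch of that idea in Python – it pulls out the (EE) and (WW) lines – and it assumes the traditional /var/log/Xorg.0.log location, so adjust the path if your distribution logs elsewhere.

#!/usr/bin/env python3
# Minimal sketch: pull error (EE) and warning (WW) lines out of an Xorg log.
# Assumes the traditional /var/log/Xorg.0.log location; some distributions
# write to ~/.local/share/xorg/Xorg.0.log instead.
from pathlib import Path

LOG_PATH = Path("/var/log/Xorg.0.log")

def interesting_lines(path):
    """Yield lines that the X server has flagged as errors or warnings."""
    with path.open(errors="replace") as log:
        for line in log:
            if "(EE)" in line or "(WW)" in line:
                yield line.rstrip()

if __name__ == "__main__":
    for line in interesting_lines(LOG_PATH):
        print(line)

Run over SSH after a freeze, something like this quickly shows whether the driver is complaining before the crash, which is roughly how I narrowed things down to the graphics card.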

Fortunately, I had retained the NVIDIA GeForce 8800 GTS that the computer was originally built with, so I was able to keep my primary desktop going for everyday tasks by swapping the old GPU back in to replace the newer, dead one. However, considering the seven years I’ve got out of this computer so far, I had already been planning to build a new gaming desktop during the summer to upgrade from a dated dual-core AMD Athlon 64 X2 to something considerably more modern. The death of my GPU, while not ultimately a critical situation – after all, I did have a replacement, a further three computers that I could reasonably fall back on and five other computers besides – did give me the impetus to speed up the process.

After looking into the price of cases, I decided that I would reuse an old full-tower case that currently holds my secondary x86 desktop (with a single-core AMD Athlon 64 and a GeForce 6600 GT), adapting it for the task by cutting holes to accommodate some 120mm case fans and spray-painting it black to cover up the discoloured beige on the front panel. Ultimately, this step will likely cost me almost as much as buying a new full-tower case from Cooler Master, but it will at least allow me to keep my current desktop in reserve without having to worry about where to find the space to put it. A lot of the cost comes from purchasing the fans, adapters to put 2.5” and 3.5” drives in 5.25” bays, and a card reader to replace the floppy drive, which will be incompatible with my new motherboard. Nevertheless, the case is huge, has plenty of space for new components and should be much better for cooling than my current midi-tower case, even considering its jerry-rigged nature.

I had decided quite some time ago that I would go for a reasonably fast, overclock-friendly Core i5 processor, and I’ve found that the Core i5-4690K represents the best value for money in that respect – the extra features of the Core i7 are unnecessary for what I’ll be doing with the computer. To get the most out of the processor, I considered the Intel Z97 platform to be a necessity and was originally looking at the Asus Z97-P before I realised that it had no support for multi-GPU configurations. To be fair, I haven’t actually used either SLI or CrossFireX at any point, but I do like having the ability to use them later if I wish, so I eventually settled on the considerably more expensive but more appropriate Asus Z97-A, which supports both SLI and CrossFireX, provides the one PS/2 port I need to accommodate my Unicomp Classic keyboard without using up a USB slot, and seems to offer sufficient headroom for overclocking the i5-4690K.

To facilitate overclocking, I have also chosen to purchase 16GB of Kingston 1866MHz DDR3 RAM and an aftermarket Cooler Master Hyper 212 Evo CPU cooler to replace the stock Intel cooler. I’m not looking for speed records here, but I would like the capacity to moderately overclock the CPU and pull out the extra operations per second that might give me an edge in older, less GPU-intensive games. I’ve also gone for some Arctic Silver 5 thermal paste, since cooling has been a concern for me with previous builds and I’d like to make the most of the aftermarket cooler.

Obviously, being a gaming desktop, the GPU is a big deal. I had originally looked at the AMD Radeon R9 280X as an option, but the retailer from which I purchased the majority of my parts had run out of stock. As a consequence, I’ve gone a step further and bought a factory-overclocked Asus Radeon R9 290, hoping that the extra graphical oomph will be useful when it comes to playing games like Arma 3, where I experienced only just adequate performance with my HD 4890 at a reduced resolution. The Arma series has pushed me to upgrade my PCs before, so I’m not surprised that Arma 3 is just as hungry for GPU power as its predecessors.

I’ve also gone for a solid-state drive for the first time, in order to speed up both my most resource-intensive games and Windows itself. I’ve purchased a Crucial MX100 128GB 2.5” SSD, which should be adequate for the most intensive games, while secondary storage will be handled by a 1TB Western Digital drive for NTFS and a 320GB Hitachi drive for everything to do with Linux. I also bought a separate 1TB Western Digital hard drive to replace the broken drive in my external hard drive enclosure, which suffered a head crash when I stupidly let it drop to the floor. Oops. Furthermore, I’ve gone for a Blu-ray writer for my optical drive – I’m not sure whether I’ll ever use the Blu-ray writing capabilities, but for €15 more than the Blu-ray reader, I decided to take the plunge. After all, I’m spending enough already.

Last but not least is the PSU. “Don’t skimp on the power supply”, I have told several of my friends over the years, and this was no exception. Bearing in mind the online tier lists for PSUs, I considered myself quite fortunate to find a Seasonic M12II 750W power supply available for under €100, with a fully-modular design and enough capacity to easily keep up with the parts I’ve selected. The cable-management benefits of a modular power supply can’t be overstated, and they’ll be welcome even with the generous space in my case.
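
As a rough sanity check on that choice of wattage, a back-of-the-envelope estimate of peak draw suggests the 750W unit has plenty of headroom. The figures below are my own approximate assumptions (padded, vendor-quoted TDP and board-power numbers), not measurements, but they illustrate the arithmetic:

# Back-of-the-envelope estimate of peak system power draw.
# All wattages are rough assumptions (padded vendor figures), not measurements.
components = {
    "Core i5-4690K, with overclocking headroom": 120,  # ~88W TDP at stock
    "Asus Radeon R9 290, factory overclocked": 300,    # reference board power ~275W
    "Motherboard, RAM and case fans": 60,
    "SSD, hard drives and optical drive": 40,
}

total_watts = sum(components.values())
headroom = 750 - total_watts
print(f"Estimated peak draw: ~{total_watts}W; headroom on a 750W PSU: ~{headroom}W")

Even with generous padding, that works out at roughly 520W of peak draw, leaving well over 200W spare for transient spikes and any future upgrades.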

Overall, this bundle will cost me a whopping €1,500 – almost double what I spent on my current gaming desktop originally. Of course, any readers in the United States will scoff at this price, spoiled as they are by the likes of Newegg, but in Ireland my choices are somewhat more limited, with Irish-based retailers being very expensive and continental European retailers not being as reliable when it comes to RMA procedures if something does go wrong. Nevertheless, I hope the new computer will be worth the money and provide the sort of performance gain that I haven’t had since I replaced my (again, seven-year-old) Pentium III system with the aforementioned single-core Athlon 64 system.

I’ll be looking forward to getting to grips once again with another PC build. Here’s hoping that the process will be a smooth one!
