Net Neutrality And The Fight Against The Tea Party Movement

This week, the Federal Communications Commission made the monumental decision to classify internet access as a utility, enshrining net neutrality (i.e. the equitable treatment of all legal internet traffic, no matter what the service is or who owns it) in the United States and striking a decisive blow against the American cable companies. I welcome this decision, working as it does in favour of both the common internet user and those companies providing true innovation on the internet – Microsoft, Google, Facebook, Netflix, et cetera. Of course, Comcast, Time Warner Cable and their ilk have protested this decision, but I think it’s time for them to be cut down to size, given their distinct lack of innovation, their oligopolistic greed and the fact that they have consistently been among the most unfriendly and unaccommodating companies around, notorious for their dismal customer service and their disregard for any sort of customer satisfaction.

The protests of Comcast, Time Warner Cable and their fellows aren’t surprising; after all, they have reasons for wanting to protect their oligopoly on the provision of internet connections, even if those reasons work against their customers. Not surprising either are the protests of Ted Cruz, one of the more insipid members of the Tea Party wing of the Republican Party. Let’s get this straight off the bat: Ted Cruz is an ignoramus, ready to fight any sensible decision as long as he can get one up on the Democratic Party – you know, like the rest of the Tea Party. He’s also a dangerous ignoramus, being the chairman of the Senate Commerce Subcommittee on Science and Space despite having next to no knowledge of science – he’s not only a climate change denier but, more terrifyingly, a creationist. What’s more, he’s very clearly in the pocket of the big American cable companies. However, the very fact that he’s a known crook and science-denying ignoramus makes him predictable, and we shouldn’t be surprised that he’s fighting on the side of the people who pay him to.

What is surprising and more than a little worrying, though, is that anybody has been able to take him seriously. More than a few have, nevertheless, claiming that governmental ‘interference’ will cause the downfall of the internet. The people saying this appear to be the same selfish individualists who caused the recent outbreaks of measles in the United States through their strident disregard for public safety in refusing to vaccinate their children. Their thought process seems to be that anything they can’t perceive as directly helping them, and which has the smell of government about it, harms their freedom, in a sort of “gubmint bad” sense of the term. This applies even when the end result will actually help them – by preventing companies from running roughshod over the concept of competition and from straitjacketing any company which doesn’t pay a king’s ransom to have its services delivered at full speed.

I’ll be fair here and state that my politics have traditionally been at least centre-left, in the European social democratic tradition, so I’m inherently going to be somewhat opposed to the principles of the Republican Party (and more recently, to those of the Democrats as well). That said, the trouble here isn’t capitalism; on many occasions, the competition of a well-regulated market can benefit innovation and lead to new opportunities which improve our lives. However, the oligopoly of the American internet provider market does nothing to benefit innovation and, without net neutrality, will actually harm it. Don’t find yourselves roped in by the selfish words of crooked politicians, paid to take a stand and ignorant of the true details behind the issue; and if you’re in the US, don’t give the Tea Party any of your credence or support – they’re not on your side.

A new job and a dead GPU: An excuse for a new gaming PC

Something quite notable in my life has happened that I forgot to mention in my last post. After seven years in third-level education and just as much time spent in my previous job as a shop assistant in a petrol station, I’ve finally got a job that is relevant to what I’ve studied and am most proficient at. I’m now working in enterprise technical support for Dell, which is quite a change, but one that makes use of both the technical skills I learned at DIT and honed over the almost twenty years that I’ve spent playing around with computers in my own time, and the customer service skills that I picked up in my last job. Notably, the new job comes with a considerable increase in my pay; while the two-and-a-half-times increase per annum comes mostly from the fact that I work five days a week now, I am still making more than I would have been working full-time in my previous job.

Coincidentally, very recently, I experienced some bizarre glitches on my primary desktop computer, where the X Window System server on Linux appeared to freeze every so often, necessitating a reboot. Resolving the cause of the problem took some time, from using SSH to look at the Xorg logs when the crash occurred to discovering that the issue later manifested itself occasionally as graphical glitches rather than a complete freeze of the X Window System, then later experiencing severe artifacting in games on both Linux and Windows. In the end, the diagnosis led to one conclusion – my five-year-old ATI Radeon HD 4890 graphics card was dead on its feet.
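Chasing down a fault like this mostly means sifting the X server’s log for lines flagged as errors or warnings. As a rough sketch of that process – the log lines below are invented for illustration, and a real log would be read from a path like /var/log/Xorg.0.log – something like this pulls out the flagged lines:

```python
import re

def find_xorg_flags(log_text):
    """Collect lines marked as errors ("(EE)") or warnings ("(WW)") in an Xorg log."""
    return [line.strip()
            for line in log_text.splitlines()
            if re.search(r"\((?:EE|WW)\)", line)]

# Invented sample lines in the general shape of an Xorg log
sample = """\
[    23.456] (II) RADEON(0): Output DVI-0 connected
[   812.004] (WW) RADEON(0): flip queue failed: Invalid argument
[   812.005] (EE) RADEON(0): Failed to submit command buffer
"""

for line in find_xorg_flags(sample):
    print(line)
```

In practice, running something like this over SSH after each freeze is exactly the kind of drudgery that eventually narrowed the culprit down to the GPU.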

Fortunately, I had retained the NVIDIA GeForce 8800 GTS that the computer had originally been built with, so I was able to keep my primary desktop going for everyday tasks by swapping the old GPU in for the newer, dead one. However, considering the seven years that I’ve got out of this computer so far, I had already been considering building a new gaming desktop during the summer to upgrade from a dated dual-core AMD Athlon 64 X2 to something considerably more modern. The death of my GPU, while not ultimately a critical situation – after all, I did have a replacement, a further three computers that I could reasonably fall back on and five other computers besides – did give me the impetus to speed up the process, though.

After looking into the price of cases, I decided that I would reuse an old full-tower case that currently holds my secondary x86 desktop (with a single-core AMD Athlon 64 and a GeForce 6600 GT), adapting it for the task by cutting holes to accommodate some 120mm case fans and spray-painting it black to cover up the discoloured beige on the front panel. Ultimately, this step will likely cost me almost as much as buying a new full-tower case from Cooler Master, but will at least allow me to keep my current desktop in reserve without having to worry about where to find the space to put it. A lot of the cost comes from purchasing the fans, adapters to mount 2.5” and 3.5” drives in 5.25” bays and a card reader to replace the floppy drive, which will be incompatible with my new motherboard. Nevertheless, the case is huge, has plenty of space for new components and should be much better for cooling than my current midi-tower case, even considering its jerry-rigged nature.

I had decided quite some time ago that I would go for a reasonably fast, overclock-friendly Core i5 processor and have found that the Core i5-4690K represents the best value for money in that respect – the extra features of the Core i7 are unnecessary for what I’ll be doing with the computer. To get the most out of the processor, I considered the Intel Z97 platform to be a necessity and was originally looking at the Asus Z97-P before I realised that it had no support for multi-GPU configurations. To be fair, I haven’t actually used either SLI or CrossFireX at any point, but I do like having the option to use them later if I wish. Eventually, I settled on the much more expensive but more appropriate Asus Z97-A, which supports both SLI and CrossFireX, provides the one PS/2 port I need to accommodate my Unicomp Classic keyboard without using up a USB slot and seems to have sufficient headroom for overclocking the i5-4690K.

To facilitate overclocking, I have also chosen to purchase 16GB of Kingston 1866MHz DDR3 RAM and an aftermarket Cooler Master Hyper 212 Evo CPU cooler to replace the stock Intel cooler. I’m not looking for speed records here, but would like to have the capacity to moderately overclock the CPU to pull out the extra operations-per-second that might give me an edge in older, less GPU-intensive games. I’ve also gone for some Arctic Silver 5 cooling paste, since cooling has been a concern for me with previous builds and I’d like to make the most of the aftermarket cooler.

Obviously, for a gaming desktop, the GPU is a big deal. I had originally looked at the AMD Radeon R9 280X as an option, but the retailer from which I purchased the majority of my parts had run out of stock. As a consequence, I’ve gone a step further and bought a factory-overclocked Asus Radeon R9 290, hoping that the extra graphical oomph will be useful when it comes to playing games like Arma 3, where my HD 4890 managed just about adequate performance at a reduced resolution. The Arma series has driven me to upgrade my PCs before, so I’m not surprised that Arma 3 is just as hungry for GPU power as its predecessors.

I’ve also gone for a solid-state drive for the first time, in order to speed up both my most resource-intensive games and Windows itself. I’ve purchased a Crucial MX100 128GB 2.5” SSD, which should be adequate for the most intensive games, while secondary storage will be provided by a 1TB Western Digital drive for NTFS and a 320GB Hitachi drive for everything to do with Linux. I also bought a separate 1TB Western Digital hard drive to replace the broken drive in my external hard drive enclosure, which suffered a head crash when I stupidly let it drop to the floor. Oops. Furthermore, I’ve gone for a Blu-ray writer as my optical drive – I’m not sure whether I’ll ever use the Blu-ray writing capabilities, but for €15 more than the Blu-ray reader, I decided to take the plunge. After all, I’m spending enough already.

Last but not least is the PSU. “Don’t skimp on the power supply”, I have told several of my friends through the years, and this was no exception. Bearing in mind the online tier lists for PSUs, I considered myself quite fortunate to find a Seasonic M12II 750W power supply available for under €100, with a fully modular design and enough capacity to easily keep going with the parts that I selected. The benefits of a modular power supply for cable management can’t be overstated, and they will be welcome even with the generous space in my case.
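For what it’s worth, the headroom arithmetic is simple enough to sanity-check. The wattage figures below are my own rough, round guesses rather than measured values, but they illustrate why 750W leaves comfortable margin for this build:

```python
# Rough per-component draw estimates in watts (round guesses, not measurements)
estimated_draw = {
    "Core i5-4690K, overclocked": 120,
    "Radeon R9 290, factory OC": 300,
    "motherboard, RAM and fans": 60,
    "drives (SSD, HDDs, Blu-ray)": 40,
}

psu_rating = 750  # watts
total = sum(estimated_draw.values())
headroom = psu_rating - total
print(f"Estimated draw: {total} W; headroom: {headroom} W")
```

Even with generous estimates, the build sits well below the PSU’s rating, which also leaves room for a second GPU later should SLI or CrossFireX ever tempt me.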

Overall, this bundle will cost me a whopping €1,500 – almost double what I spent on my current gaming desktop originally. Of course, any readers in the United States, spoiled by the likes of Newegg, will scoff at this price, but in Ireland, my choices are somewhat more limited, with Irish-based retailers being very expensive and continental European retailers not being as reliable when it comes to RMA procedures if something does go wrong. Nevertheless, I hope the new computer will be worth the money and provide the sort of performance gain that I haven’t had since I replaced my (again, seven-year-old) Pentium III system with the aforementioned single-core Athlon 64 system.

I’ll be looking forward to getting to grips once again with another PC build. Here’s hoping that the process will be a smooth one!

SimTower – A Retrospective Gaming Review

Back when I started playing video games on my first PCs, my interests leant more towards simulation and strategy games than any other genre. One of the first titles that I really got involved with was SimCity 2000 and many of my earliest games came from broadly similar genres, like Sid Meier’s Civilization II and Command & Conquer. Another game I remember playing at a relatively young age was another title published by Maxis, SimTower. SimTower was not, in fact, designed or developed by the core team at Maxis, but instead by the Japanese designer Yoot Saito, director of OpenBook Co., Ltd (now known as Vivarium). Nevertheless, SimTower encompassed the same constructive rather than destructive gameplay, where the player builds up from simple roots to create something potentially majestic in scale.

The core gameplay of SimTower is very simple – starting with a plot of land, the player builds up from a ground-floor lobby to create a tower block composed of offices, condominiums, restaurants, hotel rooms and other tenant facilities, ensuring that there are sufficient elevators for everybody to move around the tower. There are a few caveats to consider, though – an elevator can only span a maximum of 30 storeys out of a maximum tower size of 100 above-ground and 10 underground storeys, each elevator can only accommodate a certain amount of traffic and certain types of tenant will require the use of elevators more regularly than others. Much of the game, therefore, becomes an exercise in planning the layout of the building and of the elevators in order to optimise traffic flow. This sounds tedious to begin with, but can actually be rather rewarding.

The player starts out able to build only a small range of facilities, including basic elevators, stairs, offices, condominiums and fast food restaurants, but as the tower expands and the player meets expansion goals, the range grows to include hotel rooms, restaurants, cinemas and more sophisticated elevators, among others. Progress is measured in star ratings contingent on the tower’s permanent population; there are five to achieve altogether, with the later ones also requiring certain features to be added to the tower to satisfy tenant demands. The ultimate goal is to build a tower with 100 above-ground storeys and the requisite population and then place a cathedral on the top where visitors can get married.

A few limitations are present on tower design, including the restriction that lobbies (which serve as hubs for elevator travel) can only be placed every 15 floors and the practical problems of placing busy fast food restaurants or shops directly beside condos, offices or hotel rooms. None of these limitations are too challenging to work around, though, and most of a player’s concern will revolve around keeping the tenants and residents of their tower satisfied.

Satisfaction levels rise and fall based on the conditions in the tower; mostly, satisfaction will be contingent on how well the transportation system is laid out. As mentioned above, standard elevators can only span a maximum of 30 storeys and it is not always sensible to even go this far with them; express elevators can carry many more people than standard elevators and have no height restrictions, but only stop at lobbies and underground floors, thus necessitating standard elevators to get to their destination floor. Satisfaction levels for shops and restaurants are contingent on how many customers visit them per day; fast food restaurants thrive during the day, especially with a large number of office workers, while more sophisticated restaurants depend on condominium residents and outside visitors. Shops also depend on outside visitors, but more of these can be attracted with the presence of cinemas.

Another factor that plays into the construction of the tower is the player’s ability to maintain a steady cash flow. Tenant facilities bring income, while various other elements, such as elevators, stairs and a variety of later necessities like security offices, cost money to maintain. Different tenant facilities have various trade-offs against one another. Offices pay rent once a week – an in-game week consisting of two weekdays and a weekend – and hold a large population proportionate to their size, but make heavy use of elevators and are difficult to keep satisfied. The tenants of condominiums are easy to keep satisfied, but make only a one-time payment to purchase the condo as opposed to the weekly rent of offices, and a condo holds a considerably smaller population for its size than an office. Hotel rooms do not hold a permanent population at all, but offer the potential for payment every day, which can be useful to ensure that maintenance costs don’t run you into the red. Restaurants and shops have their own criteria determining their profitability and are largely contingent on other tenant facilities. Therefore, to ensure the smooth running of a tower, it is important to plan ahead.

A few special events happen during the game as well to keep the player a little bit more on their toes. Occasionally, when your tower is big enough, you will receive messages saying that a bomb has been planted in your tower by a terrorist group; you then receive a choice to pay a considerable amount of money as a ransom or to try to find the bomb before it explodes. To be able to find the bomb, you require an adequate number of security personnel who will then travel through the building via the emergency stairs on either side of your tower. A security office can hold six personnel who can cover a floor each and with a sufficiently narrow tower, a single security office can reasonably cover fifteen floors, but an office every six floors may be sensible in a wider tower. Similarly, fires can break out in your tower that can only be put out by security personnel.
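That coverage rule reduces to a bit of back-of-envelope arithmetic. Assuming one guard per floor and six guards per office – the figures given above, with a narrow tower letting one office stretch to roughly fifteen floors – the number of offices needed is a simple ceiling division:

```python
import math

def security_offices_needed(floors, floors_per_office):
    """Number of security offices needed to cover every floor of the tower."""
    return math.ceil(floors / floors_per_office)

# Narrow tower: one office's six guards stretched across ~15 floors
print(security_offices_needed(100, 15))  # 7
# Wider tower: an office every six floors, one guard per floor
print(security_offices_needed(100, 6))   # 17
```

Seventeen offices in a full-height, wide tower is a non-trivial chunk of floor space and maintenance cost, which is exactly the kind of trade-off the game wants you to weigh.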

Graphically, SimTower was never especially impressive, but its simplicity suits the gameplay. The player views the tower from a side-on two-dimensional view with simple sprites making up the various elements of the tower, including the facilities, the elevators, the stairs and so on. Tenants and residents are represented by sprites taking the form of silhouettes. These silhouettes are most regularly seen waiting for elevators and change colour from black to pink and then to red based on how long they have been waiting and how stressed out they are. The graphics are simple, but effective enough and while they were designed for the likes of 640×480 displays on computers running Windows 3.1 or 95 or Macintosh System 7, they are at least not ugly on bigger displays.

The sound is very simple as well, with no music, but instead a constant sequence of background noises, like the movement of elevators, office chatter and so on. I think your mileage may vary as to whether you find these effective in a minimalistic way or just annoying; I tend towards the former. There isn’t really any time where these sounds become critical to playing the game, so if they do annoy you, it’s not a big deal to turn them off, but they do enough of a job of giving you some feedback as to the state of your tower that they aren’t obstructive to gameplay.

Thinking about the game as a whole, I don’t think there’s anything in SimTower that really stands out. The tower management aspect is novel, but titles such as the SimCity series offer similar management mechanics with a different presentation. The aesthetic elements of the game are not and never were spectacular, but they do the job. However, there isn’t anything bad about SimTower that stands out either. The game is well designed and does what it sets out to do. The difficulty of progressing past the third star towards a complete tower may make the game unsuitable as an entry point into construction and management simulations, but it has a novel perspective to offer people who already enjoy simulators.

Bottom Line: SimTower is an unspectacular but decent simulation game that offers a novel perspective to construction and management simulation.

Recommendation: SimTower will offer the most fun to experienced simulation gamers. For others: the genre is not action-packed and rewards careful planning; if that sounds like your thing, SimTower may offer you a fair bit of fun.

Why the Philae lander came at just the right time – a social perspective from a science enthusiast

By now, it has been more than a week since the Philae lander was released from the Rosetta space probe and began its journey onto the surface of Comet 67P/Churyumov-Gerasimenko. The landing didn’t go without trouble, starting with the reported failure of the gas thruster meant to help keep it on the surface before the lander was even released and ending with Philae bouncing twice on the surface of the comet and ending up in the shadow of a cliff, greatly reducing the amount of solar exposure available to the lander. Nevertheless, the mission could be regarded as having succeeded in some respect already, even if conditions do not improve with regard to the sunlight falling on Philae; after all, it did retrieve some potentially useful results from its experimental apparatus before running out of battery power.

Frankly, though, as impressive as the science and engineering of Philae are, plenty of words have been written about that aspect long before this post by people far more experienced and talented in those fields than I am. What I want to talk about are some social implications of the fortuitous timing of Philae’s success. Philae’s mission came in the wake of two unfortunate accidents involving privately funded American aerospace ventures: one, the controlled explosion following the failed launch of an Antares rocket developed by Orbital Sciences, intended to send supplies to the International Space Station; the other, the recent crash of the SpaceShipTwo spacecraft, VSS Enterprise, in the Mojave Desert during testing, an accident which led to the death of one of the pilots. At a time when funding for space exploration is hard to come by, these accidents looked embarrassing at best. Rosetta and Philae were launched on their course ten years ago, but arrived in time to salvage at least one reasonable success for space exploration at a time when some people have been quick to criticise it, especially those always willing to fight for petty political victories in matters that mean little.

In that vein, another social implication of Rosetta and Philae comes courtesy of their existence as components of a mission of the European Space Agency. The ESA, funded partly by contributions from each participating government and partly by the European Union, is a demonstration of the effectiveness of European cooperation at a time when several Eurosceptic groups seek to convince us that such cooperation will lead us nowhere. At a time when some of these groups have motivations that are at best questionable, like Ukip, while others, like France’s Front National, look like straight-up crypto-fascists, I think any success that shows Europe can work when there is sufficient motivation to get things done is useful and desirable. That this happened because a set of scientists and engineers from different countries ignored the call of jingoism and pointless ring-fencing further reinforces my point about these people being willing to fight only for the sake of petty politics when more important things lie at stake. The Rosetta mission – and the ESA in general – shows us the potential and power of cooperation and should be taken as a good example of what the likes of Ukip and FN would take away from us if they took power in their respective countries.

Historical Operating Systems: Xerox GlobalView

Author’s Note: The demonstrations in this article are based on Xerox GlobalView 2.1, the final release of the operating system and used a software collection available from among the links here: http://toastytech.com/guis/indexlinks.html

Xerox is not a name one would usually associate with computing, the company being far better known for its photocopying business. For this reason, it is somewhat bizarre to look at Xerox’s history and realise that through PARC (the Palo Alto Research Center), Xerox was one of the most revolutionary computer designers of all time. Their first design, the Alto minicomputer, appeared in 1973 and introduced a functioning GUI, complete with WYSIWYG word processing and graphical features, more than ten years before the first comparable developments by any other company. Indeed, the Alto embodied the concept of the personal computer several years before even the Apple II, the Atari 8-bit family and the Radio Shack TRS-80 arrived in that sector, at a time when most computers still had switches and blinkenlights on their front panels.

The Alto was never sold as a commercial product, instead being distributed throughout Xerox itself and to various universities and research facilities. Xerox released their first commercial product, the Xerox 8010 workstation (later known as the Star), in 1981, but by that stage, they had demonstrated their work to many outsiders, including Apple’s Steve Jobs and Microsoft’s Bill Gates. Microsoft and Apple would soon release their own GUI operating systems, based heavily on Xerox PARC’s research, and would ultimately compete to dominate the market for personal computer operating systems while Xerox’s work remained a footnote in their success.

The Xerox Star was relatively unsuccessful, selling in the tens of thousands. Part of the reason for this lack of success, despite the machine’s technical advantages, was that a single Star workstation cost approximately $16,000 in 1981 – $6,000 more than the similarly unsuccessful Apple Lisa and more than $10,000 more than the Macintosh 128k when that was released in 1984. Consequently, the people who could have made most immediate use of a GUI operating system, including graphic designers, typically couldn’t afford it, while those who could afford it were more likely to be in the market for computers suited to data processing, like VAX minicomputers or IBM System/3 midrange computers.

Nevertheless, Xerox continued to market the Star throughout the early 1980s. In 1985, the expensive 8010 workstation was replaced with the less expensive and more powerful 6085 PCS on a different hardware platform. The operating system and application software were rewritten as well for better performance and renamed ViewPoint. By this stage, though, the Apple Macintosh was severely undercutting even its own stablemate, the Lisa, let alone Xerox’s competing offering. Meanwhile, GUI operating environments were beginning to pop up elsewhere, with the influential Visi On office suite already on IBM-compatible PCs and Microsoft Windows due to arrive at the end of the year, not to mention the release of the Commodore Amiga and the Atari ST.

Eventually, Xerox stopped producing specialised hardware for their software and rewrote it for IBM PC-compatible computers – and for Sun Microsystems’ Solaris – in a form called GlobalView. Since the Xerox Star and ViewPoint software was written in a language called Mesa – later an influence on Java and Niklaus Wirth’s Modula-2 language – GlobalView originally required an add-on card to provide the Mesa environment, but in its final release ran as a layer on top of Windows 3.1, 95 or 98 via an emulator.

As a consequence of running in this emulated environment, Xerox GlobalView 2.1 is not a fast operating system. It takes several minutes to boot on the VirtualBox installation of Windows 3.1 which I used for the process, most of which seems to be I/O-bound, since the host operating system otherwise runs about as fast as Windows 3.1 can on any computer. The booting process is also rather sparse and cryptic, with the cursor temporarily replaced by a set of four digits whose meaning is only elucidated in difficult-to-find literature on GlobalView’s predecessors.

Once the booting process is complete, one of the first things that you may notice is that the login screen doesn’t hide the fact that Xerox fully intended this system to be networked among several computers. This was a design decision that persisted from the original Star all the way back in 1981 and even further back with the Alto. Since I don’t have a network to use the system with, I simply entered an appropriate username and password and continued on, whereby the system booted up like any other single-user GUI operating system.

Looking at screenshots of the Xerox Star and comparing it with the other early GUI systems that I have used, I can imagine how amazing something like the Xerox Star looked in 1981 when it was released. It makes the Apple Lisa look vaguely dismal in comparison, competes very well with the Apple Macintosh in elegance and blows the likes of Visi On and Microsoft Windows 1.0 out of the water. Xerox GlobalView retains that same look, but by 1996, the lustre had faded and GlobalView looks rather dated and archaic in comparison to Apple’s System 7 or Windows 95. Nevertheless, GlobalView still has a well-designed and consistent GUI.

globalview1

Astounding in 1981, but definitely old-fashioned by 1996.

GlobalView’s method of creating files is substantially different to that used by modern operating systems and bizarrely resembles the method used by the Apple Lisa. Instead of opening an application, creating a file and saving it, there is a directory containing a set of “Basic Icons”, which comprise blank documents for the various types of documents available, including word processor documents, paint “canvases” and new folders. This is similar to the “stationery paper” model used by the Lisa Office System, although GlobalView doesn’t extend the office metaphor that far.

Creating a new document involves chording (pressing both left and right mouse buttons at the same time) a blank icon in the Basic Icons folder, selecting the Copy option and clicking the left mouse button over the place where you wish to place the icon. Once the icon has been placed, the new document can be opened in much the same way that it may be opened on any newer PC operating system. By default, documents are set to display mode and you need to actually click a button to allow them to be edited.

GlobalView can be installed as an environment by itself, but is far more useful when you install the series of office applications that come with it. As with any good office suite, there is a word processor and a spreadsheet application, although since the Xerox Star pre-dated the concept of computerised presentations, there is no equivalent to Microsoft’s PowerPoint included. There is also a raster paint program, a database application and an email system, among others.

It’s difficult to talk about GlobalView without considering its historical line of descent, and it’s clear that while the Xerox Star presented a variety of remarkable advances in GUI design, by 1996, GlobalView was being developed to placate the few remaining organisations who had staked their IT solutions on Xerox’s offerings in the past. The applications no longer held any sort of advantage over the competition. In many cases, they feel clunky – the word processor’s heavy reliance on the keyboard is one example, made more unfriendly to the uninitiated by not following the standard controls that had arisen on IBM PC-compatibles and Macintoshes. Still, considering the historical context once again, these decisions feel idiosyncratic rather than clearly wrong.

globalview2

The paint program isn’t too bad, though.

Using GlobalView makes me wonder what might have become of personal computing if Xerox had marketed their products better – if in fact they could have marketed them better. Of course, even by the standards of the operating systems that were around by the last version of GlobalView, the interface and applications had dated, but that interface had once represented the zenith of graphical user interface design. Like the Apple Lisa, the Xerox Star and its successors represent a dead-end in GUI design and one that might have led to some very interesting things if pursued further.

Half-Life 2 – A Retrospective Review

“Rise and shine, Mister Freeman, rise and… shine. Not that I wish… to imply that you have been sleeping on… the job. No one is more deserving of a rest, and all the effort in the world would have gone to waste until… well… let’s just say your hour has come again. The right man in the wrong place can make all the difference in the world. So wake up, Mister Freeman…wake up and… smell the ashes.” – The G-Man, during the introduction to Half-Life 2.

When Valve Software released Half-Life in 1998, they came straight out of the gate with a game that is now regarded as one of the best and most important computer games ever released. Half-Life not only brought a stronger sense of storytelling and atmosphere into the mainstream of first-person shooters, but also served as the launch point for a huge variety of mods, including Counter-Strike, Day of Defeat and Team Fortress Classic. With this pedigree, Half-Life 2 became one of the most hyped titles of the early 2000s – and managed to live up to the hype. Half-Life 2 revolutionised computer game physics, represented the best in a generation of increasingly realistic graphics and used some of the most intelligent AI code seen to that point.

Half-Life 2 continues the adventures of Gordon Freeman, the protagonist of the original Half-Life. At the time of the original game, Gordon Freeman was a theoretical physicist, recently awarded his doctorate and working at the Black Mesa Research Facility, a military installation controlled by the United States government. Against the odds, Gordon Freeman managed to survive the alien invasion of the facility after an experimental disaster and was employed by the enigmatic G-Man, who kept him in suspended animation until his services were required again.

Twenty years later, at the beginning of Half-Life 2, Gordon Freeman is brought out of his suspended animation, ending up on a train entering City 17, a mega-city located somewhere in Eastern Europe. The game wastes no time in presenting the consequences of the invasion at Black Mesa, as Gordon Freeman returns to a world where the people of Earth have been enslaved under the administration of Doctor Breen, former administrator of Black Mesa and quisling to the invading forces of the interstellar empire of the Combine. Floating camera drones buzz around, constantly observing and photographing the citizens of Earth; armed, uniformed and masked guards of Civil Protection stand as sentinels around the city, never hesitating to beat and humiliate citizens for any hint of defiance.

The Vortigaunts who had proved so hostile to Gordon Freeman in the original game have been reduced to an even lower status than the humans, abjectly relegated to janitorial roles under the supervision of the brutish Civil Protection, while huge war machines resembling the tripods from The War of the Worlds march through the streets of City 17. Unarmed and given little indication of where to go, Gordon soon meets with Barney Calhoun, a security guard from Black Mesa and a friend of Gordon’s who has been working undercover as a Civil Protection guard.

Directed towards the hidden lab of Dr. Isaac Kleiner, another old friend who had worked with Gordon at the time of the Black Mesa incident, Gordon sets off and before long is being chased through the streets of City 17 by Civil Protection guards and APCs. With the assistance of Alyx Vance, the daughter of another former Black Mesa scientist, Gordon reaches Dr. Kleiner’s lab, where the revelation is made that the surviving scientists from Black Mesa have covertly been doing their own research into teleportation.

With the return of Gordon Freeman – who, through his improbable survival of the events at Black Mesa and his part in stopping the initial alien invasion, has inadvertently become a prophetic figure and a standard to rally behind – the seeds are sown for rebellion and insurrection. However, the teleportation technology of the resistance is untested. The failure of a component during an initial teleportation run alerts the Combine to Gordon’s presence and leaves him in a situation where he must run and fight for his life – and eventually for the lives of humanity.

The game presents this narrative to the player through a strong and distinctive cinematic technique in which the camera perspective never leaves Gordon Freeman’s eyes. Half-Life 2 uses the visual medium superbly, with a distinctive architectural arrangement which evokes the crumbling concrete apartment blocks of Soviet-era Eastern Europe. This contrasts with the futuristic, industrial, metallic aesthetic of the buildings of the Combine, especially the colossal Citadel at the centre of the city, reaching far into the clouds and dominating the skyline. Gigantic screens dot the city, presenting propaganda broadcasts from Doctor Breen and the Combine. The citizens of Earth have all been outfitted with the same overall-style clothing, which evokes a sense of the citizens as both unskilled workers and prisoners on their own planet.

Importantly, the game doesn’t become overbearing with these details, presenting just enough of them at a time to create a realistic impression of the world after the Black Mesa incident and the Combine invasion. Indeed, Valve’s attention to detail seems to be extremely professional, with a polish which shows the artistry that went into the game.

The gameplay demonstrates similar polish. At its core, it continues the same sort of linear first-person shooter action of its predecessor, but brings a set of important improvements which help update the game and make it feel more immersive and visceral. Chief among these was the introduction of realistic physics through the use of the Havok middleware package. The use of realistic physics not only helps immersion through relatively realistic interactions of objects, such as the scattering of objects with explosions or the ragdoll physics of dead enemies, but also plays a big part in the game itself.

One of the biggest and most touted features in Half-Life 2 was the Zero Point Energy Manipulator (also known as the Gravity Gun), a device allowing the player to pick up, move and violently hurl the objects around them. This comes in handy at several points in the game, where it can be used to move obstacles out of one’s path, shield one’s self with other objects, build impromptu stacks to climb to out-of-the-way places or hurl objects into enemies as weapons. It does seem appropriate that a game named after a physics concept, with a physicist as its main character, was one of the first to use realistic physics in such a way.

However, there are a few instances where the game turns into a showcase for the physics engine and the Gravity Gun – points where you must manipulate certain objects in a certain way to proceed and the game seems to all but shout, “This is a physics puzzle!”, which doesn’t help with immersion. Luckily, such occasions are few and far between. By and large, the physics manipulations are integrated very well into the game and really help to make it feel like a more authentic experience.

Another place in which Half-Life 2 feels distinctive is in the vehicular sections. At certain parts of the game, you are required to use various vehicles in order to progress – an airboat used for getting through the canals of City 17 and a stripped-out scout buggy for roaming the countryside outside of the city itself. While vehicular sections in first-person shooters weren’t new by that stage, most contemporary games rendered their vehicle sections in either third-person, in imitation of Halo, or in a modified first-person perspective, such as through gun sights. Half-Life 2, on the other hand, steadfastly sticks to its “eyes of Gordon Freeman” first-person perspective throughout.

The vehicular sections in Half-Life 2 are a bit of a love-or-hate beast, since they are quite a divergence from the core gameplay, but I personally love them. They present a sense of speed and exhilaration as you make your way through obstacles, enemies and the scenery around you. There are plenty of stunning set-pieces, such as being chased through the canals and tunnels by an attack helicopter, culminating in a duel to the death near a large dam. There are opportunities to experience the potential of the vehicles as weapons in their own right as you use them to plough through the infantry forces of the Combine. Between that and the use of realistic physics with the vehicle handling, I think that these sections represent some of the best vehicular action in any first-person shooter.

Speaking of set-piece battles, there are some spectacular ones outside of the vehicle sections as well. Alien gunships periodically attack, forcing the player to shoot them down with rockets, steering the rockets past the defences of the gunship as it seeks to shoot down the player’s rockets in mid-flight. Even the standard infantry of the Combine can offer some impressive battles, with AI that was at that point very impressive, even if you don’t get to see their full potential in the tight corridors of the city.

Half-Life 2 was a graphical masterpiece when it was released, even managing to look distinctly better than its best contemporaries. Surprisingly, the game still looks good ten years after its first release, especially with the addition of HDR lighting in conjunction with the release of Half-Life 2: Episode One. While later games have improved on texturing, especially at close ranges, Half-Life 2 certainly does not look embarrassing, especially given that its architectural aesthetic was so distinctive.

The sound design of the game is similarly impressive. There are realistic sounds for all interactions with the environment, including the meaty sounds of the guns in the game. The sounds of the enemies are all distinctive and impressive, from the muffled radio reports of the Combine soldiers to the screeches of the headcrabs and the groaning of the zombies. The game’s music is a peculiar mix of various genres, from rock to techno to ambient, but it is set up very well to create atmosphere and is a credit to Kelly Bailey, long-time composer for the series.

Given the polish of Half-Life 2 and the way it shines in gameplay and presentation, there are few flaws which I can point to in the game. Some of the physics puzzles are a bit blatant, while there is a short period after you are forced to abandon the scout buggy where I feel the game slows down a lot, in a jarring change from the fast-paced action and set-piece battles. This section takes place on the coastline outside City 17, where alien creatures known as antlions burrow out of the ground whenever you touch the sand on the beach. Cue frustration as you either fend off enemies as they persistently attack you or try desperately to stack objects in front of you in what feels like an extended game of “keep off the lava”. The achievement for getting through this section without touching the sand adds to the frustration; I have it, out of sheer bloody-minded completionism more than anything else, but I won’t be going for it again any time soon.

Despite those occasional flaws, Half-Life 2 is a triumph of first-person shooter design. The polished professionalism shines out as an example of how to do a cinematic game without bogging down the action with overly long cutscenes. The gameplay is tight and intuitive, while the game physics and the strong AI work well to improve immersion. Half-Life 2 is a masterpiece of modern game design and should stand as an example for any developers hoping to develop in the genre.

Bottom Line: Half-Life 2 is a masterpiece, combining excellently polished gameplay and design with graphics and sound that are still impressive. The cinematic presentation works exceptionally well and creates immersion in a way that should be an example to other developers even now.

Why I hope that SteamOS will be successful

I’m a Linux user. Linux has, for several years, been my primary operating system on nearly every computer that I own – I have run openSUSE on my desktops since before it was called openSUSE, I run various versions of Ubuntu on my laptops, Raspbian on my two Raspberry Pis and I even have Debian derivatives running on my Wii and PlayStation 2.

I am also a PC gamer, something which really shouldn’t come as a surprise given my history of video game reviews. I have been playing PC games since the mid-1990s, starting with various MS-DOS games such as SimEarth and Indiana Jones and the Fate of Atlantis, proceeding to Windows 95 with Command & Conquer: Tiberian Dawn, Sid Meier’s Civilization II and SimCity 2000 and continuing to the present day with my most recent acquisitions including The Elder Scrolls V: Skyrim, the entire Tropico series and most recently, Arma 3.

Unfortunately for me, these two facts do not reconcile very well. While gaming of some variety has been possible on Linux since before I started, many of the games available have been open-source projects, ranging from casual puzzle or card games up to the likes of NetHack and Battle for Wesnoth. Most commercial video games on Linux have been from indie developers whose audiences are committed enough to their titles to deal with any hiccups they might experience when dealing with Linux, while a few older FPS titles come courtesy either of id Software’s policy of releasing their engines under open-source licences a few years after their release or of extensive reverse-engineering of the game engines to allow the games to run under Linux.

A lot of the games in question are very impressive in terms of gameplay and are to be lauded for that, but being a Linux user has often meant some sort of compromise in gaming terms. In order to experience the same games as the mainstream audience, one has either to run Windows as a secondary operating system, with the commensurate use of disc space on a separate partition; to faff around with Windows emulation, which falls short on the most recent games and on several older titles; or simply to buy a console at significant expense. I have traditionally opted for the first of these, but consider it somewhat irritating in the face of the very disadvantages of Windows that led me towards using Linux primarily in the first place.

Until recently, the highest-profile company whose games were available under Linux was id Software, largely because of John Carmack’s insistence on the open-source availability of their engines. That has changed over the last year or so, with Valve Software announcing SteamOS, a custom distribution of Linux designed for playing games. Valve Software have been one of the poster children for PC gaming for several years. After coming straight off the starting blocks with Half-Life in 1998, they have barely put a foot wrong since. The highlight of this streak of strong titles has been the groundbreaking Half-Life 2 in 2004, a game which proved that Valve’s original title wasn’t just a flash in the pan.

What’s important to note here, though, is that Valve have also been a strong force for promoting independent game design. Steam, released in late 2003, has been the most notable example of a content delivery system done right. Among Steam’s features are the automated installation of patches; community features allowing coordination of gameplay with friends and the publication of screenshots and videos; and a cloud storage system allowing save files and achievement progress to be synchronised quickly across several different systems.

Valve are also known for their strong commercial and distribution advantages. The Steam store frequently has sales on various game titles, occasionally offering extensive discounts – my purchase of The Elder Scrolls V: Skyrim for €3.74 in June this year has been a particular highlight for me. They also promote independent game studios and offer a strong alternative to traditional publishers. Recent additions to Steam include the Steam Workshop and Greenlight, the former a way of quickly distributing user-created content, thus promoting one of the biggest advantages of PC gaming, and the latter a way for independent game developers to promote titles they want made available on Steam. A number of “Early Access” titles in alpha or beta form are also available through the Steam store, as some forms of PC game development proceed towards a more audience-oriented method of bug-testing.

Valve have already done a great deal of work in promoting Linux as a gaming platform, having ported all of their own titles to Linux and selling or distributing hundreds more through the Steam store. Valve may well be the vanguard for making Linux gaming a viable alternative to Windows, offering the strong selection of their own titles along with the notable advantages of Steam as a game distribution platform. The proof of the pudding, however, will be when more mainstream developers see fit to release Linux titles as a consequence of Valve’s own orientation towards the platform.

As for the statement posited in the title, I prefer Linux as an operating system environment. I prefer the way that, even with a hefty desktop environment like KDE, my computer will feel quicker and less prone to hiccups in utility software when running Linux versus Windows. I prefer the flexibility to change parts of Linux as one sees fit, running different desktop environments or window managers as desired. I prefer the free and open-source nature of Linux and while Steam won’t offer much in the way of software that is “free as in freedom”, most of my utility software will remain free for me to modify as necessary – or even as desired. I’ve also grown used to the idiosyncrasies of a Unix-like environment, from the file system to the command line – I can use Windows perfectly fine, along with a host of other graphical user interfaces, but my growing experience with Unix-like systems gives me a sense of familiarity that I find more pleasing.

PC gaming has, by and large, required me to use Windows. I find Windows works perfectly fine when I run games on it – they run smoothly right up to the point where the graphics card or processor cries uncle. I don’t find that sort of smoothness with utility software. Mozilla Firefox hiccups and splutters, frequently becoming unresponsive. Windows Explorer isn’t much better and in any case lacks some neat features from Konqueror on KDE or even Thunar on XFCE, including tabbed browsing. These sorts of hiccups may be down to the fact that my installation of Windows 7 really needs reinstallation, but there are too many idiosyncratic fixes I’ve had to make to get modded games running for me to do that. Then there’s the fact that I have to pretty much install a Unix-like environment through Cygwin if I want a programming environment like the one I’m used to. None of this software has regular updates through a package manager like I’ve become accustomed to on Linux either.

As a consequence, having to switch between the two operating systems between playing games and running utility and programming software is awkward and in any case, running Windows feels like a chore. I’ve said in the past elsewhere and I’ll say it again: Find me a way to get my game library running on Linux without much more effort than it takes to run the games on Windows and it will be difficult to find a reason for me to use anything else.

To close this article, I’ve recently reinstalled Steam on Linux with the aim of experimenting with how well games actually work. Installing Steam wasn’t too difficult – I just had to find a separate package for openSUSE, as the package on the Steam website is designed for Ubuntu, Debian and other Debian derivatives with an APT package manager. I tested the original Half-Life, which ran pretty much perfectly – not a surprise, as I already knew that Quake III worked properly. I then installed Half-Life 2: Deathmatch, the smallest Source engine game in my collection.

After a bit of searching on Google to find the workaround to a problem involving a certain proprietary texture compression format known as S3TC – one of those patent-encumbered technologies which would be unpatentable under the superior European patent system – I was able to get the game running. While I shouldn’t have been surprised, given the age of the Source engine, I got almost top settings straight off the bat with reasonably smooth performance using the open-source ATI drivers. This was a pleasant surprise, as I had expected stuttering, even given that my Radeon HD 4890 is easily capable of running Half-Life 2. Valve have clearly put effort into making sure that their Linux ports work, which is good to see. If Valve can succeed with Linux and convince other mainstream game companies to follow in their wake, we could see a viable alternative to Windows in yet another way.
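For the curious, the workaround looked something like the sketch below. This assumes a Mesa build without S3TC support compiled in; the usual alternatives at the time were installing the external libtxc_dxtn library or setting Mesa’s override variable, and the exact variable name and Steam app ID shown here should be double-checked against current documentation.

```shell
# Sketch: forcing S3TC texture support on Mesa's open-source radeon
# driver. Mesa could not ship S3TC decompression due to the patent,
# so the driver had to be told to advertise support anyway.
export force_s3tc_enable=true

# The game would then be launched with the variable in Steam's
# environment, e.g. (320 is Half-Life 2: Deathmatch's app ID):
#   steam steam://rungameid/320

echo "force_s3tc_enable is set to: $force_s3tc_enable"
```

Since the patent on S3TC has now expired, modern Mesa releases decode the format out of the box and none of this is needed any more.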
