The 2015 Formula One Season and Other Thoughts

After the return of Formula One a fortnight ago, when Mercedes took an imperious one-two and looked unassailable for the year, we had a more surprising result today in Malaysia, where Ferrari took the fight to Mercedes: Sebastian Vettel exploited what appears to be a slippery chassis and an improved engine to win decisively against Hamilton and Rosberg. Kimi Raikkonen compounded Ferrari’s success, taking a solid fourth place despite misfortune in qualifying and a puncture during the race. After seeing Hamilton romp home to victory in Australia, I was concerned that we would see a season dominated by a single driver, with Rosberg, perhaps chastened by falling short at the end of last season, left to pick up the scraps. However, if Ferrari can bring some consistency to their performances, the season may hold rather more intrigue. At this point, I still expect Mercedes to win the World Constructors’ Championship on the strength of greater consistency from both their drivers, but if Vettel and Raikkonen can deliver at tracks that place less of a premium on top speed, they may present themselves as at least dark horses for the World Drivers’ Championship.

After Ricciardo’s spectacular performances last season, in which he took three victories in a year when barely anybody else came close to snatching glory from the Mercedes drivers, he has become team leader at Red Bull following Vettel’s move to Ferrari. Daniil Kvyat, formerly of Toro Rosso, joins him and has acquitted himself well so far, despite reliability problems that prevented him from taking to the grid in Australia. After so many years at the top of Formula One under the previous naturally-aspirated formula, Red Bull have struggled to regain their pace with the turbocharged Renault engines. Reliability gremlins struck both cars in Australia, and the Renault engine, even when it works, still appears to be down on power against the Mercedes and an improved Ferrari. Unlike last season, when Ricciardo achieved victories, I think Red Bull will be lucky to battle for podiums this year, more regularly scoring in the middle of the points.

Red Bull’s sister team, Toro Rosso, shares the Renault engines and also suffered mechanical problems in Australia, in this case striking Max Verstappen. Verstappen has drawn a considerable amount of press for his age: at only 17, he is by a long way the youngest Formula One driver ever. The son of former Formula One journeyman Jos Verstappen, Max has notably little experience in single-seater racing, with only a single season of Formula 3 under his belt, and joins Formula One after a year of test driving for Toro Rosso in 2014. On current evidence in the races so far, though, he has quite a bit of natural pace, matching his substantially more experienced team-mate, Carlos Sainz Jr., another new entrant and also the son of a famous racing driver. Despite the limited experience of both drivers, they have quickly brought the fight to the other teams, with Sainz scoring in both of his finishes and Verstappen denied a points finish only by an engine failure.

Williams, regularly best of the rest in 2014 and unlucky not to score a victory on occasion, might have to temper their expectations in 2015. They still have the proven Mercedes engine, have retained both Felipe Massa and Valtteri Bottas from last year and still appear to have a fair degree of pace, but with Ferrari looking stronger than last year, Williams will more likely be caught up in a scrap with the likes of Red Bull, Toro Rosso and Lotus – when their car works properly – for the middle points positions. This is slightly disappointing for Bottas, who scored several well-deserved podiums last season and looks like a race winner in waiting, but the team may take some solace in being likely to lead the battle between the teams that aren’t Mercedes or Ferrari.

Closer to the back of the points positions, Sauber appear to have a quicker car than last year, although they are embroiled in a legal battle with Giedo van der Garde over contract issues that looks like it will be a slow burner. Given one of the drivers they did choose, I would question their decision not to give van der Garde a seat this year: Marcus Ericsson, signed in his place, produced results last year that were underwhelming even by the standards of the Caterham team, and didn’t cover himself in glory in the lower single-seater formulae either. The team’s other choice, Felipe Nasr, is more sensible, despite being a rookie; he won a Formula Three championship and came third in last year’s GP2 series. Nevertheless, given Sauber’s prominent change in livery, now proudly displaying the colours of Banco do Brasil, one strongly suspects that both drivers were picked for their ability to bring in sponsorship money, since Sauber are suspected to be in a weak position financially.

Another team rumoured to be financially weak, and who will also be scrapping for the lower points positions this season, is Force India. Their driver line-up, with the podium-scoring Sergio Perez and the pole-position-attaining Nico Hulkenberg, is more experienced than Sauber’s, but their car, despite its Mercedes engine, does not look especially fast. Benefiting in Australia from reliability where others lacked it, Force India managed a double points finish, but I suspect they will struggle to keep that up over the rest of the season.

At least, for all their financial woes, Sauber and Force India are performing better than McLaren, who look like they’re going to have an annus horribilis. With the conclusion of their contract with Mercedes, McLaren have gone back to a partner that brought them considerable success in the past, putting Honda engines in the back of their car. Unfortunately, the Honda engine suffers from a distinct lack of development compared with Mercedes, Ferrari and even Renault, and is by far the least powerful engine on the grid right now. Trundling around at the back is not where we are used to seeing McLaren, and the car, while reportedly nice to drive, is unbefitting of the most experienced line-up on the grid: double World Champion Fernando Alonso and Jenson Button, also a World Champion. McLaren will be lucky to score points this season and have already struggled just to complete races.

One of the feel-good stories of the pre-season was Manor Marussia’s phoenix-like rise from the ashes to present two cars in Australia. Unfortunately, having completed no testing, and with all software wiped from their computers in preparation for auction, neither car turned a wheel in Australia, and we had to wait until Malaysia for a full grid of cars ready to take the start. Will Stevens, who competed in one race last season, and Roberto Merhi, another rookie, have been signed to drive for the team, but it remains to be seen whether the position is a poisoned chalice. The car, a derivative of the 2014 Marussia, was not on the pace in Malaysia, barely scraping through the 107% rule in free practice, although Merhi’s completion of the race suggests that the car may at least have reliability on its side. Even as a fan of the plucky underdog, I find the pace of the car prohibitively slow, and with the exit of Caterham – who had gone from underdogs in their early seasons to perennial underachievers by the time of their demise – Manor will largely be in a lonely race with themselves. Things are not looking good for the smaller teams.

In terms of tracks for this season, we have gained another classic venue in the Mexican Grand Prix, to be held at the Autódromo Hermanos Rodríguez, but temporarily lost the German Grand Prix for the first time since 1960. The loss of the German Grand Prix marks another struggle for the classic European tracks where so much of Formula One’s history lies. While the move to new markets has occasionally given us gems like Sepang or the Circuit of the Americas, I think it’s terrible that Germany has no Grand Prix this year over financial concerns, despite three successful German drivers being on the grid, while Abu Dhabi – a desert city notable only for its oil reserves and the obvious artifice of its settlements – maintains its end-of-season place at a dull, largely featureless track that has hosted some of the most boring races of the last five years, where not even a championship going down to the wire can improve the racing itself.

***

In other news, the BBC finally bit the bullet and sacked Jeremy Clarkson after a career of controversy. To be fair, even as a Top Gear aficionado, going by the reports of the incident between Clarkson and the BBC producer Oisin Tymon, I’d say Clarkson deserved his sacking; assault on a co-worker is very difficult to condone. Nevertheless, it looks like the end of Top Gear as we know it; the ribald, politically incorrect humour of Jeremy Clarkson, Richard Hammond and James May is unlikely to be continued on the BBC. Plenty of names have already been mooted as a completely new set of presenters, several of whom would be good choices for an informative car show, but few of whom would present anything like what we have seen since Clarkson took the reins in 2002.

The bookies’ favourite at the moment is Guy Martin, perennial Isle of Man TT competitor, lorry mechanic and occasional TV presenter. To be fair, Guy Martin would be one of the best choices the BBC could make; not only does he have a quirky personality that is interesting to watch, he is genuinely knowledgeable and enthusiastic about motor vehicles and has exceptional mechanical sympathy. That would make him a great choice for the informative car show that I suspect the BBC will try to retool Top Gear into, but I’m not sure Guy would actually bite – after all, it could affect his ability to race successfully at the motorcycle road races that take place in Northern Ireland during the year, some of which provide a lead-up to the TT.

My fear is that the BBC will bow to pressure from outspoken minorities and take the politically correct route unnecessarily. This includes the lobby to have a woman back on the show – several women did present the show during the original run of Top Gear, but the show was retooled precisely because the original formula had poor ratings, and apart from Sabine Schmitz, who is already busy with D Motor on German television, I can’t think of many female candidates who wouldn’t just be there to tick diversity boxes. Meanwhile, Clarkson will likely find himself a home somewhere on Sky, given his already comfy relationship with several organs of the Murdoch empire, possibly with Richard Hammond and James May in tow, drawing viewers away from the BBC and deepening the crisis at an already battered broadcaster.

Finally, I see that Ted Cruz has announced his candidacy for the Republican nomination for President of the United States. I already made my views on Ted Cruz very clear earlier this month, but I hate the man even more now – he was dangerous enough as the head of the Senate Commerce Subcommittee on Science and Space without going for the Presidency as well. While the other Republican candidates look more appealing than Ted Cruz, that isn’t exactly a difficult feat, since lighting my pubes on fire would be more appealing to me than voting for Ted Cruz.

From an objective point of view, it looks like the Republicans will present their third terrible presidential candidate in a row; unfortunately, I don’t have enough confidence in the Democrats to present anything better than a mediocre candidate (perish the thought that they’d actually be sensible and pick Elizabeth Warren), and I don’t have enough confidence in the American populace not to go for the Republican candidate out of spite. Prove me wrong, America; I’m begging you to prove me wrong.

Net Neutrality And The Fight Against The Tea Party Movement

This week, the Federal Communications Commission made the monumental decision to classify internet access as a utility, enshrining net neutrality – the principle that all lawful traffic is treated equally, no matter what the service is or who owns it – in the United States and striking a decisive blow against the American cable companies. I welcome this decision, working as it does in favour of both the common internet user and the companies providing true innovation on the internet, such as Microsoft, Google, Facebook and Netflix. Of course, Comcast, Time Warner Cable and their ilk have protested the decision, but I think it’s time they were cut down to size, given their distinct lack of innovation, their oligopolistic greed and the fact that they have consistently been among the most unfriendly and unaccommodating companies around, distinguished by dismal customer service and a disregard for any sort of customer satisfaction.

The protests of Comcast, Time Warner Cable and company aren’t surprising; after all, they have reasons to want to protect their oligopoly on the provision of internet connections, even if those reasons work against their customers. Not surprising either are the protests of Ted Cruz, one of the more insipid members of the Republican Party’s Tea Party movement. Let’s get this straight off the bat: Ted Cruz is an ignoramus, ready to fight any sensible decision as long as he can get one up on the Democratic Party – you know, like the rest of the Tea Party. He’s also a dangerous ignoramus, being chairman of the Senate Commerce Subcommittee on Science and Space despite having next to no knowledge of science: he is not only a climate change denier but, more terrifyingly, a creationist. What’s more, he is very clearly in the pocket of the big American cable companies. However, the very fact that he’s a known crooked, science-denying ignoramus makes him predictable, and we shouldn’t be surprised that he’s fighting on the side of the people who pay him to.

What is surprising, and more than a little worrying, is that anybody has been able to take him seriously. More than a few have, nevertheless, claiming that governmental ‘interference’ will cause the downfall of the internet. The people saying this appear to be the same selfish individualists who caused the recent outbreaks of measles in the United States through their strident disregard for public safety in refusing to vaccinate their children. Their thought process seems to be that anything they can’t perceive as directly helping them, and which has the smell of government about it, harms their freedom, in a sort of “gubmint bad” sense of the term. This applies even when the end result will actually help them, by stopping companies from running roughshod over the concept of competition and from straitjacketing any service that doesn’t pay a king’s ransom to be provided at full speed.

I’ll be fair here and state that my politics have traditionally been at least centre-left, in the European social democratic tradition, so I’m inherently going to be somewhat opposed to the principles of the Republican Party (and, more recently, to the Democrats as well). That said, the trouble here isn’t capitalism, since on many occasions the competition of a well-regulated market can benefit innovation and lead to new opportunities that improve our lives. However, the oligopoly of the American internet provider market does nothing to benefit innovation and, without net neutrality, will actually harm it. Don’t find yourselves roped in by the selfish words of crooked politicians, paid to take a stand and ignorant of the true details behind the issue, and if you’re in the US, don’t give the Tea Party any of your credence or support; they’re not on your side.

A new job and a dead GPU: An excuse for a new gaming PC

Something quite notable has happened in my life that I forgot to mention in my last post. After seven years in third-level education, and just as long in my previous job as a shop assistant in a petrol station, I’ve finally got a job relevant to what I’m studying and am most proficient at. I’m now working in enterprise technical support for Dell, which is quite a change, but one that makes use of both the technical skills I learned at DIT and over almost twenty years of playing around with computers in my own time, and the customer service skills I picked up in my last job. Notably, the new job comes with a considerable increase in pay; while the roughly two-and-a-half-fold increase per annum comes mostly from the fact that I now work five days a week, I am still making more than I would have working full time in the old job.

Coincidentally, I very recently experienced some bizarre glitches on my primary desktop, where the X Window System server on Linux appeared to freeze every so often, necessitating a reboot. Tracking down the cause took some time: I used SSH to look at the Xorg logs when the crash occurred, discovered that the issue later manifested occasionally as graphical glitches rather than a complete freeze of the X server, and then experienced severe artifacting in games on both Linux and Windows. In the end, the diagnosis led to one conclusion – my five-year-old ATI Radeon HD 4890 graphics card was dead on its feet.
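
For anyone curious what that log triage looks like, here’s a minimal Python sketch of the idea, assuming the conventional Xorg severity markers ((EE) for errors, (WW) for warnings); the sample log lines below are made up for illustration, not taken from my actual machine.

```python
# Xorg prefixes log lines with severity markers: (EE) for errors and
# (WW) for warnings. A crashing X server usually leaves (EE) lines
# (often with a backtrace) near the end of /var/log/Xorg.0.log.
MARKERS = {"(EE)": "error", "(WW)": "warning"}

def triage(lines):
    """Return (severity, line) pairs for log lines carrying a marker."""
    hits = []
    for line in lines:
        for marker, severity in MARKERS.items():
            if marker in line:
                hits.append((severity, line.rstrip()))
                break
    return hits

# Hypothetical sample lines in the Xorg log format:
sample = [
    "[  30.618] (II) RADEON(0): Output DVI-0 connected",
    "[ 312.002] (WW) RADEON(0): flip queue failed: Invalid argument",
    "[ 312.101] (EE) RADEON(0): failed to set mode: Invalid argument",
]

for severity, line in triage(sample):
    print(severity.upper(), "-", line)
```

In practice you’d feed it the real log over SSH and read the tail, but the principle is the same: informational (II) lines are noise, while (WW) and (EE) lines point you at the failing driver.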

Fortunately, I had retained the NVIDIA GeForce 8800 GTS that the computer had originally been built with, so I was able to keep my primary desktop going for everyday tasks by swapping the old GPU in for the newer, dead one. However, considering the seven years that I’ve got out of this computer so far, I had already been considering building a new gaming desktop during the summer to upgrade from a dated dual-core AMD Athlon 64 X2 to something considerably more modern. The death of my GPU, while not ultimately a critical situation – after all, I did have a replacement, a further three computers that I could reasonably fall back on and five other computers besides – did give me the impetus to speed up the process, though.

After looking into the price of cases, I decided to reuse an old full-tower case that currently holds my secondary x86 desktop (a single-core AMD Athlon 64 with a GeForce 6600 GT), adapting it by cutting holes for some 120mm case fans and spray-painting it black to cover up the discoloured beige of the front panel. Ultimately, this will cost me almost as much as buying a new full-tower case from Cooler Master, but it will at least let me keep my current desktop in reserve without my having to worry about where to put it. A lot of the cost comes from the fans, adapters to mount 2.5” and 3.5” drives in 5.25” bays, and a card reader to replace the floppy drive, which will be incompatible with my new motherboard. Nevertheless, the case is huge, has plenty of room for new components and should cool much better than my current midi-tower case, even considering its jerry-rigged nature.

I had decided quite some time ago that I would go for a reasonably fast, overclock-friendly Core i5 processor, and found that the Core i5-4690K represents the best value for money in that respect – the extra features of the Core i7 are unnecessary for what I’ll be doing with the computer. To get the most out of the processor, I considered the Intel Z97 platform a necessity, and was originally looking at the Asus Z97-P before I realised that it had no support for multi-GPU configurations. To be fair, I have never actually used SLI or CrossFireX, but I like having the option, so I eventually settled on the much more expensive but more appropriate Asus Z97-A. It supports both SLI and CrossFireX, provides the one PS/2 port I need for my Unicomp Classic keyboard without using up a USB slot, and seems to have sufficient headroom for overclocking the i5-4690K.

To facilitate overclocking, I have also chosen to purchase 16GB of Kingston 1866MHz DDR3 RAM and an aftermarket Cooler Master Hyper 212 Evo CPU cooler to replace the stock Intel cooler. I’m not looking for speed records here, but would like to have the capacity to moderately overclock the CPU to pull out the extra operations-per-second that might give me an edge in older, less GPU-intensive games. I’ve also gone for some Arctic Silver 5 cooling paste, since cooling has been a concern for me with previous builds and I’d like to make the most of the aftermarket cooler.

Obviously, being a gaming desktop, the GPU will be a big deal. I had originally looked at the AMD Radeon R9 280X as an option, but the retailer that I have purchased the majority of my parts from had run out of stock. As a consequence, I’ve gone a step further and bought a factory-overclocked Asus Radeon R9 290, hoping that the extra graphical oomph will be useful when it comes to playing games like Arma 3, where I experienced just about adequate performance with my HD 4890 at a diminished resolution. The Arma series has been key in making me upgrade my PCs before, so I’m not surprised that Arma 3 is just as hungry for GPU power as its predecessors.

I’ve also gone for a solid-state drive for the first time, to speed up both my most resource-intensive games and Windows itself. I’ve purchased a Crucial MX100 128GB 2.5” SSD, which should be adequate for the most demanding games, while secondary storage will be handled by a 1TB Western Digital drive for NTFS and a 320GB Hitachi drive for everything to do with Linux. I also bought a separate 1TB Western Digital hard drive to replace the broken drive in my external enclosure, which suffered a head crash when I stupidly let it drop to the floor. Oops. Finally, I’ve gone for a Blu-Ray writer as my optical drive – I’m not sure I’ll ever use the Blu-Ray writing capabilities, but for €15 more than the reader, I decided to take the plunge. After all, I’m spending enough already.

Last but not least is the PSU. “Don’t skimp on the power supply”, I have told several of my friends over the years, and this build was no exception. Keeping in mind the online tier lists for PSUs, I considered myself quite fortunate to find a Seasonic M12II 750W power supply for under €100, with a fully modular design and enough capacity to comfortably power the parts I selected. The cable-management benefits of a modular power supply can’t be overstated, and will be welcome even with the generous space in my case.
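
To show why 750W is comfortable rather than marginal, here’s a rough power-budget sanity check in Python. All the wattages are ballpark TDP-style estimates I’ve assumed for illustration, not measured figures for these exact parts.

```python
# Back-of-envelope power budget for the build. Figures are rough
# estimates (with some overclocking margin baked in), not datasheet
# or measured values.
parts = {
    "Core i5-4690K (with OC headroom)": 120,
    "Asus Radeon R9 290 (factory OC)": 300,
    "Motherboard, RAM, fans": 75,
    "Drives (SSD, HDDs, optical)": 40,
}

total = sum(parts.values())
psu_watts = 750
headroom = psu_watts - total

print(f"Estimated peak draw: {total} W")
print(f"Headroom on a {psu_watts} W PSU: {headroom} W")
```

Even with generous estimates, the build sits well under the PSU’s rating, which also keeps the supply in its more efficient mid-load range.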

Overall, this bundle will cost me a whopping €1,500 – almost double what I originally spent on my current gaming desktop. Of course, readers in the United States, spoiled by the likes of Newegg, will scoff at this price, but in Ireland my choices are somewhat more limited, with Irish-based retailers being very expensive and continental European retailers less reliable when it comes to RMA procedures if something does go wrong. Nevertheless, I hope the new computer will be worth the money and provide the sort of performance gain I haven’t felt since I replaced my (again, seven-year-old) Pentium III system with the aforementioned single-core Athlon 64 system.

I’ll be looking forward to getting to grips once again with another PC build. Here’s hoping that the process will be a smooth one!

SimTower – A Retrospective Gaming Review

Back when I started playing video games on my first PCs, my interests leant more towards simulation and strategy games than any other genre. One of the first titles I really got involved with was SimCity 2000, and many of my earliest games came from broadly similar genres, like Sid Meier’s Civilization II and Command & Conquer. Another game I remember playing at a relatively young age was a further title published by Maxis: SimTower. SimTower was not, in fact, designed or developed by the core team at Maxis, but by the Japanese designer Yoot Saito, director of OpenBook Co., Ltd. (now known as Vivarium). Nevertheless, SimTower encompassed the same constructive rather than destructive gameplay, where the player builds up from simple roots to create something potentially majestic in scale.

The core gameplay of SimTower is very simple: starting with a plot of land, the player builds up from a ground-floor lobby to create a tower block composed of offices, condominiums, restaurants, hotel rooms and other tenant facilities, ensuring that there are sufficient elevators for everybody to move around the tower. There are a few caveats to consider, though: an elevator can span a maximum of 30 storeys out of a maximum tower size of 100 above-ground and 10 underground storeys, each elevator can only accommodate a certain amount of traffic, and certain types of tenant require elevators more regularly than others. Much of the game therefore becomes an exercise in planning the layout of the building and its elevators to optimise traffic flow. This sounds tedious, but can actually be rather rewarding.

The player starts out able to build only a small range of facilities, including basic elevators, stairs, offices, condominiums and fast food restaurants, but as the tower expands and the player meets expansion goals, the range grows to include hotel rooms, restaurants, cinemas and more sophisticated elevators, among others. Star ratings are contingent on the tower’s permanent population; there are five to achieve altogether, with the later ones also requiring certain features to be added to the tower to satisfy tenant demands. The ultimate goal is to build a tower with 100 above-ground storeys and the requisite population, then place a cathedral on top where visitors can get married.

A few limitations apply to tower design, including the restriction that lobbies (which serve as hubs for elevator travel) can only be placed every 15 floors, and the practical problems of placing busy fast food restaurants or shops directly beside condos, offices or hotel rooms. None of these limitations is too challenging to work around, though, and most of a player’s attention will go towards keeping the tenants and residents of their tower satisfied.

Satisfaction levels rise and fall based on conditions in the tower; mostly, satisfaction is contingent on how well the transportation system is laid out. As mentioned above, standard elevators can only span a maximum of 30 storeys, and it is not always sensible to go even that far with them; express elevators can carry many more people and have no height restrictions, but only stop at lobbies and underground floors, necessitating standard elevators to reach the destination floor. Satisfaction for shops and restaurants is contingent on how many customers visit per day; fast food restaurants thrive during the day, especially with a large number of office workers, while more sophisticated restaurants depend on condominium residents and outside visitors. Shops also depend on outside visitors, more of whom can be attracted by the presence of cinemas.

Another factor in the construction of the tower is the player’s ability to maintain a steady cash flow. Tenant buildings bring income, while various other elements, such as elevators, stairs and the necessities that arrive later in a tower’s development, like security offices, cost money to maintain. Different tenant facilities trade off against one another. Offices pay rent once a week – an in-game week consisting of two weekdays and a weekend – and hold a large population for their size, but make heavy use of elevators and are difficult to keep satisfied. The tenants of condominiums are easy to keep satisfied, but pay only a one-time purchase price rather than weekly rent, and a condo holds a considerably smaller population for its size than an office. Hotel rooms keep no permanent population at all, but offer potential payment every day, which can be useful to ensure that maintenance costs don’t run you into the red. Restaurants and shops have their own criteria determining their profitability, largely contingent on the other tenant facilities. To ensure the smooth running of a tower, therefore, it is important to plan ahead.
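
The office-versus-condo trade-off can be sketched as a toy income model in Python. The rent and purchase figures here are entirely made up for illustration – SimTower’s actual numbers vary with tower conditions – but the shape of the trade-off is the point: condos pay more up front, offices overtake them over time.

```python
# Toy model of SimTower's office-vs-condo income trade-off:
# offices pay rent every in-game week, condos pay once on purchase.
def cumulative_income(weeks, weekly_rent=0, one_off=0):
    """Total income from one unit after the given number of weeks."""
    return one_off + weekly_rent * weeks

OFFICE_RENT = 10_000   # hypothetical weekly rent per office unit
CONDO_PRICE = 80_000   # hypothetical one-time purchase per condo

for weeks in (1, 8, 16):
    office = cumulative_income(weeks, weekly_rent=OFFICE_RENT)
    condo = cumulative_income(weeks, one_off=CONDO_PRICE)
    print(f"week {weeks:2d}: office {office:>7,} vs condo {condo:>7,}")
```

With these made-up numbers, a condo out-earns an office for the first eight in-game weeks, after which the office’s recurring rent pulls ahead – which matches the feel of the game, where condos are quick money early and offices sustain the tower long-term.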

A few special events happen during the game to keep the player on their toes. Occasionally, once your tower is big enough, you will receive a message saying that a terrorist group has planted a bomb in your tower; you then have the choice of paying a considerable ransom or trying to find the bomb before it explodes. To find the bomb, you need an adequate number of security personnel, who travel through the building via the emergency stairs on either side of your tower. A security office holds six personnel who can cover a floor each; with a sufficiently narrow tower, a single security office can reasonably cover fifteen floors, but an office every six floors may be sensible in a wider tower. Similarly, fires can break out in your tower that can only be put out by security personnel.
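
The coverage arithmetic above is simple enough to sketch in a few lines of Python; the 15-floor and 6-floor spacings are the rules of thumb from the text, not values from the game itself.

```python
import math

# How many security offices a tower needs, given a floors-per-office
# rule of thumb: ~15 floors per office in a narrow tower (six guards
# stretch further when each floor is quick to sweep), ~6 in a wide one.
def offices_needed(floors, floors_per_office):
    return math.ceil(floors / floors_per_office)

print("narrow 100-storey tower:", offices_needed(100, 15), "offices")
print("wide 100-storey tower:  ", offices_needed(100, 6), "offices")
```

So a full-height tower needs somewhere between seven and seventeen security offices depending on its width – a real chunk of floor space and maintenance cost, which is why the fire and bomb events feed back into the cash-flow planning described earlier.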

Graphically, SimTower was never especially impressive, but its simplicity suits the gameplay. The player views the tower from a side-on two-dimensional view with simple sprites making up the various elements of the tower, including the facilities, the elevators, the stairs and so on. Tenants and residents are represented by sprites taking the form of silhouettes. These silhouettes are most regularly seen waiting for elevators and change colour from black to pink and then to red based on how long they have been waiting and how stressed out they are. The graphics are simple, but effective enough and while they were designed for the likes of 640×480 displays on computers running Windows 3.1 or 95 or Macintosh System 7, they are at least not ugly on bigger displays.

The sound is very simple as well, with no music, but instead a constant sequence of background noises, like the movement of elevators, office chatter and so on. I think your mileage may vary as to whether you find these effective in a minimalistic way or just annoying; I tend towards the former. There isn’t really any time where these sounds become critical to playing the game, so if they do annoy you, it’s not a big deal to turn them off, but they do enough of a job of giving you some feedback as to the state of your tower that they aren’t obstructive to gameplay.

Thinking about the game as a whole, I don’t think anything in SimTower really stands out. The tower management aspect is novel, but titles such as the SimCity series offer comparable management gameplay with a different presentation. The aesthetic elements of the game are not, and never were, spectacular, but they do the job. Then again, there isn’t anything bad about SimTower that stands out either. The game is well designed and does what it sets out to do. The difficulty of progressing past the third star towards a complete tower may make the game unsuitable as an entry point into construction and management simulations, but it has a novel perspective to offer people who already enjoy them.

Bottom Line: SimTower is an unspectacular but decent simulation game that offers a novel perspective to construction and management simulation.

Recommendation: SimTower will offer the most fun to already experienced simulation gamers. To others, the genre is not action-packed and rewards planning; if that sounds like your thing, SimTower may offer you a fair bit of fun.

Why the Philae lander came at just the right time – a social perspective from a science enthusiast

By now, it has been more than a week since the Philae lander was released from the Rosetta space probe and began its journey to the surface of Comet 67P/Churyumov-Gerasimenko. The landing was not without trouble, starting with the reported failure, before the lander was even released, of the gas thruster meant to help keep it on the surface, and ending with Philae bouncing twice on the surface of the comet and coming to rest in the shadow of a cliff, greatly reducing the amount of solar exposure available to the lander. Nevertheless, the mission could be regarded as having succeeded in some respect already, even if conditions do not improve with regard to the sunlight falling on Philae; after all, it did retrieve some potentially useful results from its experimental apparatus before running out of battery power.

Frankly, though, as impressive as the science and engineering of Philae are, a lot of words have been spoken about that aspect long before this post by people far more experienced and talented in those fields than I am. What I want to talk about are some social implications of the fortuitous timing of Philae’s success. Philae’s mission came in the wake of two unfortunate accidents in the United States involving privately-funded aerospace ventures: one the controlled explosion of an Antares rocket developed by Orbital Sciences after a failed launch, intended to send supplies to the International Space Station; the other the recent crash of the SpaceShipTwo spacecraft, VSS Enterprise, in the Mojave Desert during testing, an accident which led to the death of one of the pilots. At a time when funding for space exploration is hard to come by, these accidents looked embarrassing at best. Rosetta and Philae were launched on their course ten years ago, but arrived in time to salvage at least one reasonable success for space exploration at a time when some people have been quick to criticise it, especially those always willing to fight for petty political victories in matters that mean little.

In that vein, another social implication of Rosetta and Philae comes courtesy of their existence as components of a mission of the European Space Agency. The ESA, funded partly by contributions from each participating government and partly by the European Union, is a demonstration of the effectiveness of European cooperation at a time when several Eurosceptic groups seek to convince us that such cooperation will lead us nowhere. At a time when some of these groups have motivations that are at best questionable, like Ukip, while others look like straight-up crypto-fascists, like France’s Front National, I think any sort of success that can show that Europe can work better if there is sufficient motivation to get things done is useful and desirable. That this happened because a set of scientists and engineers from different countries ignored the call of jingoism and pointless ring-fencing further reinforces my point about these people being willing to fight only for the sake of petty politics when more important things lie at stake. The Rosetta mission – and the ESA in general – shows us the potential and power of cooperation and should be taken as a good example of what the likes of Ukip and FN would take away from us if they were to take power in their respective countries.

Historical Operating Systems: Xerox GlobalView

Author’s Note: The demonstrations in this article are based on Xerox GlobalView 2.1, the final release of the operating system, and use a software collection available from among the links here: http://toastytech.com/guis/indexlinks.html

Xerox is not a name which one would usually associate with computing, being far better known for their photocopying enterprise. For this reason, it is somewhat bizarre to look at the history of Xerox and realise that through PARC (the Palo Alto Research Center), Xerox were one of the most revolutionary computer designers of all time. Their first design, the Alto minicomputer, was released in 1973 and introduced a functioning GUI, complete with WYSIWYG word processing and graphical features, roughly a decade before comparable developments from any other company. Indeed, the Alto represented the concept of the personal computer several years before even the Apple II, Atari 8-bit family and the Radio Shack TRS-80 arrived in that sector, and at a time when most computers still had switches and blinkenlights on their front panels.

The Alto was never sold as a commercial product, instead being distributed throughout Xerox itself and to various universities and research facilities. Xerox released their first commercial product, the Xerox 8010 workstation (later known as the Star) in 1981, but by that stage, they had presented their product to many other people, including Apple’s Steve Jobs and Microsoft’s Bill Gates. Microsoft and Apple would soon release their own GUI operating systems, based heavily on the work of Xerox PARC’s research and ultimately would compete to dominate the market for personal computer operating systems while Xerox’s work remained a footnote in their success.

The Xerox Star was relatively unsuccessful, selling in the tens of thousands. Part of the reason, despite the machine’s technical advantages, was that a single Star workstation cost approximately $16,000 in 1981 – $6,000 more than the similarly unsuccessful Apple Lisa and more than $10,000 more than the Macintosh 128k when that was released in 1984. Consequently, the people who could have made most immediate use of a GUI operating system, including graphic designers, typically couldn’t afford it, while those that could afford it were more likely in the market for computers more suited to data processing, like VAX minicomputers or IBM System/3 midrange computers.

Nevertheless, Xerox continued to market the Star throughout the early 1980s. In 1985, the expensive 8010 workstation was replaced with the less expensive and more powerful 6085 PCS on a different hardware platform. The operating system and application software were rewritten as well for better performance and renamed ViewPoint. By this stage, though, the Apple Macintosh was severely undercutting even its own stablemate, the Lisa, let alone Xerox’s competing offering. Meanwhile, GUI operating environments were beginning to pop up elsewhere, with the influential Visi On office suite already on IBM-compatible PCs and Microsoft Windows due to arrive at the end of the year, not to mention the release of the Commodore Amiga and the Atari ST.

Eventually, Xerox stopped producing specialised hardware for their software and rewrote it for IBM PC-compatible computers – as well as for Sun Microsystems’ Solaris – in a form called GlobalView. Since the Xerox Star and ViewPoint software was written in a language called Mesa – later an influence on Java and Niklaus Wirth’s Modula-2 language – GlobalView originally required an add-on card to provide the Mesa environment, but in its final release ran as a layer on top of Windows 3.1, 95 or 98 via an emulator.

As a consequence of running in this emulated environment, Xerox GlobalView 2.1 is not a fast operating system. It takes several minutes to boot on the VirtualBox installation of Windows 3.1 which I used for the process, most of which seems to be I/O-bound, since the Windows 3.1 system hosting it runs about as fast as Windows 3.1 actually can on any computer. The booting process is also rather sparse and cryptic, with the cursor temporarily replaced by a set of four digits, the meaning of which is only elucidated in difficult-to-find literature on GlobalView’s predecessors.

Once the booting process is complete, one of the first things that you may notice is that the login screen doesn’t hide the fact that Xerox fully intended this system to be networked among several computers. This was a design decision that persisted from the original Star all the way back in 1981, and even further back with the Alto. Since I don’t have a network to use the system with, I simply entered an appropriate username and password and continued on, whereupon the system booted up like any other single-user GUI operating system.

Looking at screenshots of the Xerox Star and comparing it with the other early GUI systems that I have used, I can imagine how amazing something like the Xerox Star looked in 1981 when it was released. It makes the Apple Lisa look vaguely dismal in comparison, competes very well with the Apple Macintosh in elegance and blows the likes of Visi On and Microsoft Windows 1.0 out of the water. Xerox GlobalView retains that same look, but by 1996, the lustre had faded and GlobalView looks rather dated and archaic in comparison to Apple’s System 7 or Windows 95. Nevertheless, GlobalView still has a well-designed and consistent GUI.

[Image: globalview1]

Astounding in 1981, but definitely old-fashioned by 1996.

GlobalView’s method of creating files is substantially different to that used by modern operating systems and bizarrely resembles the method used by the Apple Lisa. Instead of opening an application, creating a file and saving it, there is a directory containing a set of “Basic Icons”, which comprise blank documents for the various types of documents available, including word processor documents, paint “canvases” and new folders. This is similar to the “stationery paper” model used by the Lisa Office System, although GlobalView doesn’t extend the office metaphor that far.

Creating a new document involves chording (pressing both left and right mouse buttons at the same time) a blank icon in the Basic Icons folder, selecting the Copy option and clicking the left mouse button over the place where you wish to place the icon. Once the icon has been placed, the new document can be opened in much the same way that it may be opened on any newer PC operating system. By default, documents are set to display mode and you need to actually click a button to allow them to be edited.

GlobalView can be installed as an environment by itself, but is far more useful when you install the series of office applications that come with it. As with any good office suite, there is a word processor and a spreadsheet application, although since the Xerox Star pre-dated the concept of computerised presentations, there is no equivalent to Microsoft’s PowerPoint included. There is also a raster paint program, a database application and an email system, among others.

It’s difficult to talk about GlobalView without considering its historical line of descent. It’s clear that while the Xerox Star presented a variety of remarkable advances in GUI design, by 1996, GlobalView was being developed to placate the few remaining organisations who had staked their IT solutions on Xerox’s offerings in the past. The applications no longer offered any real advances over the competition. In many cases, they feel clunky – the heavy reliance on the keyboard in the word processor is one example, made more unfriendly to the uninitiated by not following the standard controls that had arisen on IBM PC-compatibles and Macintoshes. Still, considering the historical context once again, these decisions feel idiosyncratic rather than clearly wrong.

[Image: globalview2]

The paint program isn’t too bad, though.

Using GlobalView makes me wonder what might have become of personal computing if Xerox had marketed their products better – if in fact they could have marketed them better. Of course, even by the standards of the operating systems that were around by the last version of GlobalView, the interface and applications had dated, but that interface had once represented the zenith of graphical user interface design. Like the Apple Lisa, the Xerox Star and its successors represent a dead-end in GUI design and one that might have led to some very interesting things if pursued further.

Half-Life 2 – A Retrospective Review

“Rise and shine, Mister Freeman, rise and… shine. Not that I wish… to imply that you have been sleeping on… the job. No one is more deserving of a rest, and all the effort in the world would have gone to waste until… well… let’s just say your hour has come again. The right man in the wrong place can make all the difference in the world. So wake up, Mister Freeman…wake up and… smell the ashes.” – The G-Man, during the introduction to Half-Life 2.

When Valve Software released Half-Life in 1998, they came straight out of the gate with a game that is now regarded as one of the best and most important computer games ever released. Half-Life not only brought a stronger sense of storytelling and atmosphere into the mainstream of first-person shooters, but also served as the launch point for a huge variety of mods, including Counter-Strike, Day of Defeat and Team Fortress Classic. With this pedigree, Half-Life 2 became one of the most hyped titles of the early 2000s – and managed to live up to the hype. Half-Life 2 revolutionised computer game physics, represented the best in a generation of increasingly realistic graphics and used some of the most intelligent AI code seen to that point.

Half-Life 2 continues the adventures of Gordon Freeman, the protagonist of the original Half-Life. At the time of the original game, Gordon Freeman was a theoretical physicist, recently awarded his doctorate and working at the Black Mesa Research Facility, a military installation controlled by the United States government. Against the odds, Gordon Freeman managed to survive the alien invasion of the facility after an experimental disaster and was employed by the enigmatic G-Man, being kept in suspended animation until his services were required again.

Twenty years later, at the beginning of Half-Life 2, Gordon Freeman is brought out of his suspended animation, ending up on a train entering City 17, a mega-city located somewhere in Eastern Europe. The game wastes no time in presenting the consequences of the invasion at Black Mesa, as Gordon Freeman returns to a world where the people of Earth have been enslaved under the administration of Doctor Breen, former administrator of Black Mesa and quisling to the invading forces of the interstellar empire of the Combine. Floating camera drones buzz around, constantly observing and photographing the citizens of Earth; armed, uniformed and masked guards of Civil Protection stand as sentinels around the city, with no hesitation in beating and humiliating citizens for any hint of defiance.

The Vortigaunts who had proved so hostile towards Gordon Freeman in the original game have been reduced to an even lower status than the humans, abjectly left to janitorial roles under the supervision of the brutish Civil Protection, while huge war machines resembling the tripods from The War of the Worlds march through the streets of City 17. Unarmed and given little indication of where to go, Gordon soon meets Barney Calhoun, a security guard from Black Mesa and friend of Gordon who has been working undercover as a Civil Protection guard.

Directed towards the hidden lab of Dr. Isaac Kleiner, another old friend of Gordon who had worked with him at the time of the Black Mesa incident, Gordon goes towards the laboratory and before long is being chased through the streets of City 17 by Civil Protection guards and APCs. With the assistance of Alyx Vance, the daughter of another former scientist at Black Mesa, Gordon reaches Dr. Kleiner’s lab, where the revelation is made that the surviving scientists from Black Mesa have covertly been doing their own research into teleportation.

With the return of Gordon Freeman – who, through his improbable survival of the events at Black Mesa and his part in stopping the initial alien invasion, has inadvertently become a prophetic figure and a standard to rally behind – the seeds are sown for rebellion and insurrection. However, the teleportation technology of the resistance is untested. A failure of one of the components during an initial teleportation run alerts the Combine to Gordon’s presence and leaves Gordon in a situation where he must run and fight for his life – and eventually for the lives of humanity.

The game presents this narrative to the player through a strong and distinctive cinematic technique in which the camera perspective never leaves the eyes of Gordon Freeman. Half-Life 2 uses the visual medium superbly, with a distinctive architectural arrangement which evokes the crumbling concrete apartment blocks of the Soviet era in Eastern Europe. This contrasts with the futuristic, industrial, metallic aesthetic of the buildings of the Combine, especially the colossal Citadel at the centre of the city, reaching far into the clouds and dominating the skyline. Gigantic screens dot the city, presenting propaganda broadcasts from Doctor Breen and the Combine. The citizens of Earth have been outfitted with the same overall-style clothing, which invokes a sense of the citizens being both unskilled workers and prisoners on their own planet.

Importantly, the game doesn’t become overbearing with these details, presenting just enough of them at a time to create a realistic impression of the world after the Black Mesa incident and the Combine invasion. Indeed, Valve’s attention to detail seems to be extremely professional, with a polish which shows the artistry that went into the game.

The gameplay demonstrates similar polish. At its core, it continues the same sort of linear first-person shooter action of its predecessor, but brings a set of important improvements which help update the game and make it feel more immersive and visceral. Chief among these was the introduction of realistic physics through the use of the Havok middleware package. The use of realistic physics not only helps immersion through relatively realistic interactions of objects, such as the scattering of objects with explosions or the ragdoll physics of dead enemies, but also plays a big part in the game itself.

One of the biggest and most touted features in Half-Life 2 was the Zero Point Energy Field Manipulator (also known as the Gravity Gun), a device allowing the player to pick up, move and violently hurl objects around them. This comes in handy at several points in the game, where it can be used to move obstacles out of one’s path, shield oneself with other objects, build impromptu stacks of objects to climb to out-of-the-way places, or hurl objects into enemies as weapons. It does seem appropriate that a game named after a physics concept, with a physicist as a main character, was one of the first to use realistic physics in such a way.

However, there are a few instances where the game turns into a showcase for the physics engine and the Gravity Gun – moments where you must manipulate certain objects in a particular way to proceed, and where the game seems to go almost as far as to shout, “This is a physics puzzle!”, which doesn’t help with immersion. Luckily, such occasions are few and far between. By and large, the physics manipulations are integrated very well into the game and really help to make it feel a more authentic experience.

Another place in which Half-Life 2 feels distinctive is in the vehicular sections. At certain parts of the game, you are required to use various vehicles in order to progress – an airboat used for getting through the canals of City 17 and a stripped-out scout buggy for roaming the countryside outside of the city itself. While vehicular sections in first-person shooters weren’t new by that stage, most contemporary games rendered their vehicle sections in either third-person, in imitation of Halo, or in a modified first-person perspective, such as through gun sights. Half-Life 2, on the other hand, steadfastly sticks to its “eyes of Gordon Freeman” first-person perspective throughout.

The vehicular sections in Half-Life 2 are a bit of a love-or-hate beast, since they are quite a divergence from the core gameplay, but I personally love them. They present a sense of speed and exhilaration as you make your way through obstacles, enemies and the scenery around you. There are plenty of stunning set-pieces, such as being chased through the canals and tunnels by an attack helicopter, culminating in a duel to the death near a large dam. There are opportunities to experience the potential of the vehicles as weapons in their own right as you use them to plough through the infantry forces of the Combine. Between that and the use of realistic physics with the vehicle handling, I think that these sections represent some of the best vehicular action in any first-person shooter.

Speaking of set-piece battles, there are some spectacular ones outside of the vehicle sections as well. Alien gunships periodically attack, forcing the player to shoot them down with rockets, steering the rockets past the defences of the gunship as it seeks to shoot down the player’s rockets in mid-flight. Even the standard infantry of the Combine can offer some impressive battles, with AI that was at that point very impressive, even if you don’t get to see their full potential in the tight corridors of the city.

Half-Life 2 was a graphical masterpiece when it was released, even managing to look distinctly better than its best contemporaries. Surprisingly, the game still looks good ten years after its first release, especially with the addition of HDR lighting in conjunction with the release of Half-Life 2: Episode One. While later games have improved on texturing, especially at close ranges, Half-Life 2 certainly does not look embarrassing, especially given that its architectural aesthetic was so distinctive.

The sound design of the game is similarly impressive. There are realistic sounds for all interactions with the environment, including the meaty sounds of the guns in the game. The sounds of the enemies are all distinctive and impressive, from the muffled radio reports of the Combine soldiers to the screeches of the headcrabs and the groaning of the zombies. The game’s music is a peculiar mix of various genres, from rock to techno to ambient, but it is set up very well to create atmosphere and is a credit to Kelly Bailey, long-time composer for the series.

Given the polish of Half-Life 2 and the way it shines in gameplay and presentation, there are few flaws which I can point at in the game. Some of the physics puzzles are a bit blatant, while there is a short period after you are forced to abandon the scout buggy where I feel the game slows down a lot in a jarring change from fast-paced action and set-piece battles. This section of the game takes place on the coastline outside City 17, where alien creatures known as antlions burrow out of the ground whenever you touch the sand on the beach. Cue frustration as you try to either fend off enemies as they persistently attack you or desperately stack objects in front of you in what feels like an extended game of “keep off the lava”. The addition of an achievement for getting through this section without touching the sand adds to the frustration; I have the achievement, out of sheer bloody-minded completionism more than anything else, but I won’t be going for it again any time soon.

Despite those occasional flaws, Half-Life 2 is a triumph of first-person shooter design. The polished professionalism shines out as an example of how to do a cinematic game without bogging down the action with overly long cutscenes. The gameplay is tight and intuitive, while the game physics and the strong AI work well to improve immersion. Half-Life 2 is a masterpiece of modern game design and should stand as an example for any developers hoping to develop in the genre.

Bottom Line: Half-Life 2 is a masterpiece, combining excellently polished gameplay and design with graphics and sound that are still impressive. The cinematic presentation works exceptionally well and creates immersion in a way that should be an example to other developers even now.
