Final Fantasy IV – A Retrospective Review

(This review is based on the North American SNES release, which was retitled Final Fantasy II.)

As I have stated before in my review of Final Fantasy VI, the SNES is renowned for being a strong platform for JRPGs. Of particular note is the volume of JRPGs released or published by Squaresoft during this period, including Secret of Mana, several entries in the SaGa series – although none of these were released outside of Japan on the SNES – Chrono Trigger and, of course, several entries in the Final Fantasy series. Final Fantasy IV was the first game that Squaresoft released on the SNES and was also the first game in the Final Fantasy series proper to be released in North America since the original 1987 Final Fantasy.

The game focuses on the travails of Cecil Harvey, who begins the game as a Dark Knight in the service of the kingdom of Baron and as leader of the Red Wings, Baron’s airship air force. As the game opens, however, Cecil has growing concerns, shared by his crew, about the aggression Baron has displayed in its drive to collect the Crystals scattered around the world. After a vicious attack on a largely defenceless town named Mysidia, Cecil decides to air his concerns to the King of Baron. In response, the King strips Cecil of his captaincy and sends him on an errand to deliver an object to the nearby village of Mist, a location renowned for its Callers, who tap the magical powers of monsters in the form of summons. When he arrives, though, accompanied by his friend Kain, leader of the Dragoons of Baron, the object that Cecil has delivered spawns monsters that set Mist ablaze.

Finding a young girl, Rydia, whose mother has been killed as an unforeseen side-effect of Cecil and Kain’s slaying of the summoned guardian of Mist, Cecil and Kain attempt to make amends with the girl. Rydia, however, turns out to be a Caller in her own right, summoning a powerful force that changes the face of the land around them. Waking up in a forest, Cecil finds himself cut off from Baron, separated from Kain and trying to find medical assistance for a girl who hates him. Meanwhile, he has made an enemy of Baron, has been separated from his romantic partner, Rosa – a powerful healer and archer also serving Baron – and is faced with the task of finding allies to discover exactly what is going on with the kingdom.

The setting of Final Fantasy IV is by and large typical quasi-medieval swords-and-sorcery fantasy, complete with the focus on the Crystals that was then common in Final Fantasy games, although there are enough plot twists to keep the setting from becoming completely generic. Nevertheless, this game is very much rooted in its setting and, in this respect, will provide no real surprises to those familiar with either European fantasy or with other JRPGs.

A more impressive aspect of the game is the number of playable characters involved in the plot. As the first Final Fantasy game to introduce characters with distinct, non-generic personalities, the game involves the adventures of twelve separate characters, of whom no more than five can be present in the party at one time. The game maintains this restriction by shuffling characters out as the plot proceeds, although some of the events that swap characters around happen in somewhat contrived circumstances. Regardless, the game does well to give each character their own motivations, characterisation and personality – and to give each of them distinct skills and abilities, something which hasn’t always been true in the Final Fantasy series; Final Fantasy VI and VII, where characters can end up mechanically interchangeable, come to mind.

Gameplay should also be familiar to JRPG fans, particularly players of later Final Fantasy games. The game uses a prototypical form of the Active Time Battle system found in many later Final Fantasy games, although without the bars indicating which character will be ready next and how long it will take for them to be ready. On the world map, there is the usual “not too linear” approach where players have some degree of free rein over where to travel next, although there is a relative dearth of sidequests to make some of the additional locations worthwhile to visit.

The game is reasonably challenging, especially in the early game where healing comes at a price and losing any part of your party can be catastrophic. Even at the end of the game, a bit of level grinding will ease your way through the final dungeons, giving you a better chance against some of the tougher enemies. The bosses don’t have the most advanced artificial intelligence, but have enough potential to smack the characters around to make them dangerous.

Unfortunately, the game’s translation doesn’t meet the standards of the gameplay, with sloppy mistakes and strange turns of phrase scattered throughout the game. While the translation is not the poorest of any SNES-era JRPG – the train-wreck that is the English translation in Breath of Fire II comes to mind – and is at least legible, it is neither good, nor even endearing in the way that some poor translations can be – well, apart from one famous line (“You spoony bard!”) which is oddly translated yet proper, if archaic English. Given the excellent, endearing and amusing translations in Final Fantasy VI and Chrono Trigger by Ted Woolsey later on in the SNES era, it’s a pity that Square didn’t get a good translator earlier on. (Re-releases of Final Fantasy IV have retranslated the game to a far higher standard – but they have kept the famous line described above.)

Thankfully, the graphics and sound in this game are quite a bit better than the translation. As with the later Final Fantasy VI, the graphics are not the best on the SNES or even in the genre – the fabulous Chrono Trigger comes to mind again – yet they are serviceable and use the vivid palette of the SNES rather well. The sound effects are also serviceable; there may be no stand-out sounds like Kefka’s infamous cackling laugh in Final Fantasy VI or the unearthly scream of Lavos in Chrono Trigger – but then again, there isn’t really a place in the game where such impressive effects would be appropriate.

The music, as befits a Final Fantasy game, is very good, though not as distinctive or memorable as I would like. Nevertheless, there are some very good tracks scattered throughout the soundtrack, including right from the very start with the theme of the Red Wings. Other exceptional tracks include the theme of Golbez, one of the main villains in the game, along with the music accompanying two of the final dungeons.

Final Fantasy IV has all the components of a strong JRPG, including a fairly strong plot, good characterisation, solid gameplay fundamentals and very good music. From the perspective of the genre, it is a good game. Yet, compared to other JRPGs later in the same console generation, it comes across as slightly underwhelming. It may be that the many successors to Final Fantasy IV have overshadowed the game somewhat, but there weren’t any particular moments that I considered outstanding in the way that some moments in Final Fantasy VI or Chrono Trigger were. However, it was a good enough game for me to see it to the end, and from a historical perspective, Final Fantasy IV is clearly very important for its pioneering work in gameplay mechanics and character development.

Bottom Line: Final Fantasy IV is a good game with solid gameplay fundamentals and a reasonably good plot, along with being historically important, but sometimes comes across as slightly underwhelming compared to later JRPGs.

Recommendation: If you’re going to play Final Fantasy IV, do yourself a favour and give the original SNES version a miss. Unlike Final Fantasy VI, you don’t lose an interesting, funny yet proficient translation by going to the newer versions. Other than that, this is a good game for entrenched JRPG fans and not a terrible starting point for new JRPG fans, but it won’t convert anybody who has already made their mind up about the genre.

The Cryptocurrency Conundrum: Why Bitcoin and its contemporaries have failed to convince me

It would have been pretty difficult to avoid hearing anything about Bitcoin in the past few months, given its jump from being a mere curiosity known only by technical enthusiasts to a potential investment that mainstream economists and journalists are watching avidly. Some of the advocates for Bitcoin and other cryptocurrencies say that they offer a completely different paradigm for currency transactions, while others are interested in the investment opportunities.

However, the recent bankruptcy and collapse of Mt. Gox, one of the premier Bitcoin exchanges, along with increased scrutiny of the nature of cryptocurrencies by various treasury agencies, has caused the price of Bitcoin to jump around like a hyperactive kangaroo. I am not convinced of the long-term viability of Bitcoin or other contemporary cryptocurrencies, either as an investment or as a unit of currency. I will focus on Bitcoin here, since it is the cryptocurrency with the highest market capitalisation and correspondingly the most interest, along with being the basis for most other cryptocurrencies out there.

Admittedly, as a technical enthusiast, I find some details of cryptocurrencies – and, by extension, Bitcoin – interesting. The idea that cryptographic protection is built into the protocol, thus stymieing attempts at counterfeiting, has merit, particularly from the perspective of e-commerce. Commercial activity has taken off on the internet to an astounding extent despite the decided vulnerabilities in current payment mechanisms, especially when it comes to security. Having a secure, well-supported method of payment that sits outside the commercial interests of any single party could go some way towards addressing the weaknesses that currently exist in internet commerce.

Unfortunately, the few advantages that Bitcoin can indisputably claim over conventional currencies are not enough to make up for the many things that can be held against it. These problems begin at the generation (i.e. “mining”) phase and spiral out from there, taking in both the computational side and economic factors.

The generation of Bitcoin is done by a process called “mining”. Bitcoin mining effectively involves computing SHA-2 cryptographic hashes of candidate blocks over and over, tweaking an arbitrary number each time, until one of the hashes happens to satisfy a set of criteria – an exercise in burning as much computation as possible to arrive at an otherwise meaningless result. I can already see a problem here. As far as I can tell, the only people who actually need to crunch cryptographic hashes on this scale are security organisations such as the NSA and professional cryptographers. Bitcoin doesn’t fall under the purview of “professional” cryptography – it is simply rewarding computational make-work that has no relation to the legitimate problems that distributed computing could resolve. In this regard, Bitcoin is no better than fiat currency, since you’re only trading trust in a government’s ability to pay its debts for trust in computer cycles.
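To make the make-work point concrete, here is a minimal sketch of the hash-grinding loop described above. This is not the real protocol – actual mining double-SHA-256-hashes an 80-byte binary block header against a network-set target – but the principle is the same; the header string and difficulty figure below are made up purely for illustration.

```python
# A toy version of Bitcoin-style proof-of-work: keep hashing the same "header"
# with a different nonce until the digest happens to fall below a target.
import hashlib

def mine(header: str, difficulty_bits: int) -> int:
    target = 2 ** (256 - difficulty_bits)   # smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce                    # "success" found by pure trial and error
        nonce += 1

print(mine("example block header", 20))     # roughly a million hashes on average
```

Every one of those discarded hashes is work done purely to prove that work was done – which is exactly the complaint.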

Actually, since I don’t trust computer cycles as a backing for a means of payment unless those computer cycles have been used for something useful, I have to regard Bitcoin as worse than fiat currency. It’s not as if there aren’t plenty of things that could be done with those computer cycles, either; everything from protein folding to Mersenne prime searches to crunching the data of a large-scale scientific experiment could be done with the distributed computational power of the computers currently used for Bitcoin mining, but instead, they’re being used to solve bloody cryptographic hashes. That’s one strike already for Bitcoin and we haven’t even got past the generation phase.

While we’re on the subject of mining, bitcoins used to be generated effectively by the CPUs of home computers, but as the difficulty of generating bitcoins has increased (as part of a process which I’ll talk about below), the mining process gradually transitioned towards the use of GPGPU techniques, then onto the current trend of application-specific integrated circuits (or ASICs). These ASICs, as the name implies, are not general-purpose computers, but are specialised for the purpose of Bitcoin mining. So, not only does Bitcoin mining involve the make-work job of throwing away computer cycles – and electricity, by extension – on solving cryptographic hashes, but it’s led to the creation of computers for which that is their raison d’être. Great work, Satoshi Nakamoto, whoever the hell you are.

To be fair, though, once you get past the mining stage, the nature of the Bitcoin protocol looks alright from the perspective of computer science – for a while, at least. Bitcoins are held in a digital “wallet”, which is identified by one or more addresses – long strings of letters and digits derived from cryptographic keys. Payments can be made to other Bitcoin users by knowing their addresses. However, we hit a stumbling block here when it comes to using Bitcoin as a means of payment for e-commerce. Bitcoin transfers are one-way and irreversible, which for various reasons makes them unsuitable for many purposes in the field of e-commerce. What about refunds, for instance? They aren’t covered very well within the Bitcoin protocol. Nor are transaction cancellations, which would have particularly interesting, if not especially desirable, consequences for micro-transactions within smartphone or tablet apps, where an alternative to payment by credit card would otherwise be rather welcome.

Let’s just consider the case of a parent who has just handed their child their smartphone and returns to find that the child has bought several hundred dollars of in-game purchases in some shitty freemium game. This sort of scenario can happen and has happened in several high-profile cases – and certainly, the parent isn’t going to want to keep all of those in-game purchases. Some people may say that the burden should be placed on the parent and if they didn’t want it to happen, they should have been more careful, but really, I can sympathise with the parent on this one.

I mean, let’s say your child is whining about something they want, which happens regularly. You’re busy trying to get some work done around the house and you just want a break from the moaning going on in your ear. So, you hand your smartphone to the child, hoping that they’ll find something that will shut them up for just one moment. Unfortunately, you forgot to sign your account out of the smartphone’s store and, of course, Murphy’s Law dictates that the one time you forget to sign out will be the time the child decides to work their way through the catalogue of crappy in-game purchases. By saying that you should have been more careful than to let your child mess around with your phone, you could just as well insinuate that you should have been more careful than to have children in the first place – and that argument doesn’t tend to go down well.

Bitcoin is even more vulnerable than credit cards to this sort of scenario; credit cards usually have limits, whereas somebody with access to a Bitcoin wallet could spend the lot, and all you’d have would be the records of the transactions against the address. Good luck getting your money back as well, since these transactions can’t be cancelled when it turns out that you’ve made a mistake – or that the product you ordered is late, or whatever problem you’re having. Not a very good trait for a currency, wouldn’t you think?

Returning to a point which I made above, Bitcoin mining has become more difficult as time has progressed, which has prompted the use of ASICs. Part of the reason why Bitcoin mining has become more difficult is an inherent detail of the protocol and of Bitcoin in general – there is a finite number of bitcoins that can be generated. Only 21 million bitcoins will ever exist; the protocol keeps new coins appearing at a roughly steady rate by making the hashing puzzle harder as more computational power joins the network, and the reward handed out for each new block is cut in half at regular intervals, so the flow of new bitcoins diminishes with time. Danger, Will Robinson! Talk about an economic faux-pas: what we’ve fallen into here is an inherently hyper-deflationary currency.
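If you’re wondering where the 21 million figure comes from, it falls out of that halving schedule. A rough back-of-the-envelope calculation, using the protocol constants as I understand them (an initial reward of 50 bitcoins, halved every 210,000 blocks, down to the smallest unit of one hundred-millionth of a bitcoin):

```python
# Total Bitcoin supply as a geometric series: 210,000 blocks at 50 BTC each,
# then 210,000 at 25 BTC, and so on until the reward drops below one satoshi.
reward = 50.0
blocks_per_era = 210_000
total = 0.0
while reward >= 1e-8:          # 1 satoshi = 0.00000001 BTC
    total += reward * blocks_per_era
    reward /= 2
print(f"{total:,.0f} BTC")     # prints roughly 21,000,000
```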

Inflation and deflation are not two sides of the same coin, but both are considered deleterious to some extent in an economic system. However, mainstream economists tend to consider inflation less harmful than deflation, and the fiat currency systems in use today run a small but usually controlled rate of inflation. The reason economists prefer inflation to deflation is that mild inflation encourages people to buy things, since the value of their money will decrease rather than increase with time, and it favours debtors over creditors – a debt taken out in a deflationary system continues to grow in real value, which discourages entities from taking on debt in the first place. When those debts would otherwise be used to catalyse the growth of new businesses, deflation becomes actively harmful to economic growth. The once-vaunted Japanese economy, which looked set to take over from the United States of America as the world’s biggest economy in the 1980s, has suffered from deflation since the early 1990s. Bitcoin, then, is a system which actually builds deflation – and at a huge rate – into its very design. I’ll leave you to draw the conclusions.
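If you want a crude number to chew on while drawing those conclusions, consider a made-up illustration (the 10% figure is arbitrary, not a claim about Bitcoin’s actual behaviour): prices fall by ten per cent a year and you owe 100 units of the currency at zero interest.

```python
# The real burden of a fixed nominal debt under steady deflation: the same
# 100 units buy more goods each year, so the debt grows heavier by itself.
debt = 100.0                   # nominal debt, never changes
price_level = 1.0
for year in range(1, 6):
    price_level *= 0.90        # 10% annual deflation (illustrative figure)
    print(year, round(debt / price_level, 1))   # debt in year-0 purchasing power
# After five years the debt is worth about 169 units of year-0 goods.
```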

Another problem for Bitcoin from an economic perspective is its volatility. As I’ve said above, the price of Bitcoin jumps around from day to day like a hyperactive kangaroo, and sometimes hundreds of dollars per bitcoin can ride on the decisions of speculators and of treasury agencies wary of Bitcoin’s potential effects. The recent collapse of Mt. Gox sums this up nicely; in the last month, the price of a bitcoin has swung from more than $700 to a trough of less than $480 on February 25th, when Mt. Gox went offline, before promptly climbing back above $600. Bear in mind, this was in a single month – if the dollar went haywire like that, there would be hell to pay! Would you really risk spending money that might gain another half again in value, or accept it at the risk of it losing a third of its value? If you would, you’re braver than I am – or infinitely more foolhardy.

We need to note here what Mt. Gox actually stood for – it was originally an initialism for “’Magic: The Gathering Online’ Exchange”. No, you’re not reading that wrong – it was originally a site for the exchange of cards from a fantasy collectible card game. Actually, no, I have that slightly wrong – it was originally a site for the exchange of digital, virtual cards from the online version of a fantasy collectible card game which only exist at the whims of Wizards of the Coast. This is what I find to be one of the most terrifying things about any ideas of moving to Bitcoin as a currency – putting your money into the hands of a bunch of nerds who have no real clue about anything but the mathematical tendencies of economics and have probably convinced themselves that their computer science experience gives them insights into the economic world while only understanding that small portion of it. That, to me at least, seems like a big mistake waiting to happen – and I say that as a nerd with no real clue about anything but the mathematical tendencies of economics who has convinced myself that my computer science experience gives me insights into the economic world while only understanding that small portion of it.

Another group of people who are very vocal on the issue of Bitcoin, and who are correspondingly very worrying, are the Randite libertarians who have embraced Bitcoin and its decentralised nature. Randites are particularly annoying to deal with because of their odious, selfishness-led philosophies and their propensity to believe any sort of ludicrous fantasy as long as it works against the aims of organised government. It doesn’t help that the very founder of their Objectivist ideals was a hypocrite who railed against government assistance yet seemingly felt no shame in using it herself, nor does it help that Alan Greenspan, who as Chair of the Federal Reserve presided over the conditions that led to the biggest recession since the Great Depression, is a self-confessed Randite. I think that’s the straw that broke the camel’s back on that issue, although, interestingly, even Alan Greenspan doesn’t think that Bitcoin is a good idea. You’d think he’d have first-hand experience of a financial bubble, wouldn’t you?

Creationism is not science

To anybody of a rational, scientific mindset, the title of this article should prompt thoughts somewhat along the lines of, “No shit, Sherlock”. Evolutionary science has underpinned the efforts of biologists for more than a century and a half, providing an observable, tested mechanism for the diversity of species. Through the allied efforts of geneticists, it has given us a stronger grasp on how we can improve efforts towards artificial selection. Yet, in spite of all this, small but vocal groups, many situated within the United States, deny evolutionary science. Instead, they wish to implant their own unscientific creationist hypotheses into the education system, subverting the scientific consensus with their theologically driven political campaigns.

Creationism appears to be driven by some sort of offence and insecurity at the idea that humans might have been derived from what creationists see as lower species, or that we might be related in some way to apes and monkeys. Christian creationism, the most vocal kind in the Western world, professes that a creator God designed humans in his own image – although I have to ask whether any creator God would actually want to claim a species with such a variety of known flaws as Homo sapiens as being in his or her image.

The most egregiously and brutally unscientific of the creationist hypotheses is that of Young Earth creationism, a ridiculously bizarre hypothesis that contravenes most of the major branches of natural science, along with many humanities disciplines and a couple of branches of mathematics to boot. Essentially, Young Earth creationism states that the world, in accordance with various calculations on figures given in the Bible, is somewhere in the region of six thousand years old. The recent, controversial debate between Bill Nye and Ken Ham was conducted at the Creation Museum, an establishment which claims Young Earth creationism to be true and accurate.

There are so many things wrong with this that it’s difficult to know where to begin, but how about beginning by stating that there are human settlements which have been dated to more than five thousand years before that? I have a back issue of National Geographic beside me (June 2011, if anybody’s interested in reading it) that discusses the archaeological site of Göbekli Tepe in Turkey, an elaborate and intricately designed religious site estimated to date back to 9600 BC.

That immediately puts a rather inconvenient stumbling block in front of Young Earth creationism, and I haven’t even got to the science yet. Aside from myriad fields of biology, including genetics, botany, zoology, biochemistry and more, all of which must be denied in order to claim Young Earth creationism as correct, we have various elements of physics, such as astronomy and radiometric dating which peg the Earth at somewhere near 4.5 billion years old, with the universe at least 13.7 billion years old.

Not only are creationists willing to deny reams of scientific evidence from fields all over the scientific spectrum, but they’re also willing to try to twist actual science to fit their demands. Among the most absurd arguments for creationism is the idea that evolution somehow violates the Second Law of Thermodynamics – a claim that could only be made by somebody who either doesn’t understand the Second Law of Thermodynamics or who thinks little enough of their audience to believe that the audience won’t understand it.

The Second Law of Thermodynamics, in a paraphrased form, states that the entropy of a closed system tends to increase over time. In more practical terms, it means that heat cannot flow from a cold object to a hot object without external work being applied to the system. The Earth is not a closed system. Heat is transferred between the Earth and its surroundings; heat flows into the Earth’s atmosphere from the Sun, while heat flows out of it via radiation. As for biological organisms, they must and do perform work on their own systems to maintain local order. Much of the energy consumed by a human being is expended as heat in the process of staving off localised entropy, with the brain being the prime example of this use of energy. None of this works anything like the way the creationists describe it – and their attempted perversion of science in this way demonstrates a ruthless and worrying disregard for the role of observation and experiment in their drive to push their pet hypotheses.
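For what it’s worth, the bookkeeping fits in a couple of lines. Using round blackbody figures – sunlight effectively arriving at about 5,800 K and the Earth re-radiating at about 255 K; approximate values, and the argument doesn’t depend on them being exact – the balance looks something like this:

```latex
% Rough entropy bookkeeping for the Earth (illustrative, not a rigorous derivation):
% heat Q arrives from the Sun at T_Sun and the same Q is re-radiated at T_Earth.
\Delta S_{\mathrm{total}}
  = \Delta S_{\mathrm{Earth}} + \frac{Q}{T_{\mathrm{Earth}}} - \frac{Q}{T_{\mathrm{Sun}}}
  \geq 0
\quad\Longrightarrow\quad
\Delta S_{\mathrm{Earth}} \geq -\,Q\left(\frac{1}{T_{\mathrm{Earth}}} - \frac{1}{T_{\mathrm{Sun}}}\right)
```

Since 1/255 is vastly larger than 1/5800, the right-hand side is a large negative number: the Earth exports far more entropy to space than it takes in from the Sun, leaving an enormous budget for building local order without the Second Law so much as flinching.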

Young Earth creationism is, as a scientific hypothesis, a sad joke with no observable evidence behind it whatsoever and the works of several dozen fields of science and the humanities against it. However, creationism doesn’t stop there, as it has another, more presentable face in the form of so-called “Intelligent Design” – but this face is just as odious from a scientific perspective, since unlike the patently absurd Young Earth hypotheses, Intelligent Design pays lip-service to science while simultaneously ignoring many of its core tenets.

Intelligent Design, just as with any other form of creationism, posits the idea of a creator entity. The word “intelligent” in the name appears to refer to an intelligent entity rather than the design itself being intelligent – for, as I’ve intimated above, it would be pretty difficult to suggest that human anatomy, for example, is particularly intelligent. You know, with the backwards eye where light shines in through the wiring, the hip design which causes labouring mothers to experience a lot of pain, so on, so forth. The hypothesis appears on the surface to provide answers that other forms of creationism just can’t, like accounting for the actual, observed microevolution occurring in bacteria at this very moment – probably including some of the bacteria living on the bodies of the readers. Yet Intelligent Design still contravenes the scientific consensus – largely because it is not falsifiable.

Falsifiability is a very important concept in science and plays a major role in the scientific method which underpins research in the physical sciences. The scientific method involves a chain of steps, taking the rough form of observation-hypothesis-prediction-experimentation-reproduction, in order to test a hypothesis and attempt to produce observable, testable results which can then be reproduced by other scientists to eliminate any bias or contamination that may have affected the original experimental procedure. A hypothesis with a sufficiently large body of observed evidence behind it may then become a theory (a word which has become rather loaded when it comes to reporting science to non-practitioners, often being confused with a hypothesis in the sense described above). The principle of falsifiability plays deep into this process, since for an experiment to be useful, there must be a chance for the hypothesis it tests to be invalidated by the experiment.

This is not the case with Intelligent Design. An advocate for Intelligent Design could claim, if an experiment were ever undertaken to attempt to disprove the hypothesis, that the experimental conditions were themselves at fault, for any number of reasons. As a result, Intelligent Design, just as with any other form of creationism, is of no scientific value, and therefore its teaching in a scientific curriculum would be not only useless but deleterious to other scientific disciplines.

Unfortunately, creationism is being peddled by a mixture of slick operators who play on a perceived public distrust of science and religiously motivated preachers who decry any attack on their religion – or at least the way in which they interpret their religion, since evolution does not inherently discount the idea of the existence of a god – even when that perceived attack relates to issues which should not have religious motivations behind them anyway.

This isn’t helped by the bind scientists find themselves in when facing off against creationists; by debating them face to face, evolutionary scientists lend creationists an air of scientific respectability that their beliefs do not deserve, while those who openly decry creationist teaching are often vocal atheists as well, creating the perception that evolution marches in lockstep with atheism. Ignoring creationists might well magnify the erroneous idea of an ivory-tower scientific elite. In my eyes, the best thing to do would be to contest the principles of any school where creationist teachings are being given scientific credence, either as an alternative to or a replacement for evolutionary theory, while keeping the vocal attacks on religion away from the subject. I may be an atheist myself, but I see people conflating evolutionary science with atheism as a problem waiting to happen – the science should come first.

Gran Turismo – A Retrospective Review

Back when I was a young child, I was a big car enthusiast – and I am again today. However, for a period stretching from about the time I was eight years old until I was fourteen, my enthusiasm dropped off for a while as I became more interested in computers and gaming. My enthusiasm didn’t die completely, though, and one of the reasons for that was the opportunity to play Gran Turismo on an uncle’s PlayStation. A world of realistic driving physics, of statistics like horsepower presented in meaningful context and of large collections of different car models was opened up to me. When I got my own PlayStation later on, Gran Turismo was one of the used games I bought for it. I have returned to the game several times since, including a recent re-exploration as I tested out a PlayStation emulator on my PC.

Gran Turismo was developed by a division of Sony Computer Entertainment later renamed Polyphony Digital and released in Japan at the end of 1997, reaching the West in 1998, after a long and protracted development process that stretched from the original Sony plans for a business deal with Nintendo through to Sony’s own immensely popular first attempt at the console market. At the time, racing games were frequently arcade-oriented, with the racing simulator market largely limited to Windows PCs. Gran Turismo changed that, with one of the most accurate simulations of car physics available in 1997, along with an expansive set of customisation options and a collection of available cars that was particularly large for the time.

The game consists of two modes. The Quick Arcade mode allows for quick, short races against either computer opponents or another human player via two-player split-screen. The Gran Turismo mode is rather more expansive and gives the player the role of a budding racing driver. Starting with 10,000 credits, the player is expected to purchase a used car. From there, by winning races, completing licence tests – which in turn open up more advanced races with more competitive opposition – and purchasing new cars and custom parts for existing ones, the game simulates a racing driver’s career.

The car line-up consists of cars from six Japanese, two American and two British car manufacturers. This was an impressive number in 1997, but even then showed a lack of scope; there are notably no Italian or German marques and even in the countries that are represented, there are conspicuously absent manufacturers such as Jaguar, Ford, Lotus and so on. The game claims 178 cars, but many of these are different models of various Japanese cars that are present, most notably the Nissan Skyline and the Subaru Impreza. This would not be the last time that Polyphony Digital over-represented certain car models, but most of the other games in the series had a greater range of car manufacturers to make up for it.

Regardless, it’s not bad for a first attempt, especially given that the enormous success of the Gran Turismo series was likely not expected and licences for reproducing cars would have been correspondingly more difficult to obtain. I also have to respect the developers for not succumbing completely to cultural bias: arguably the best and most well-rounded car in the game is a TVR homologation special rather than an over-tuned Skyline designed as a drag-strip special.

There is also sufficient difference between the physics representations of the cars to make the different models more than just a series of cosmetically different skins over the same physics framework. As befits a game which unironically described itself as a “driving simulator”, the physics and car modelling remain reasonably accurate, with simulation of the car’s weight shifting as it passes through corners, proper oversteer and understeer, the ability to tune some of the Japanese cars to ridiculous lengths and so on, so forth, right up to the TVRs being terrifying, unrefined beasts which you have to grab by the scruff of the neck and pound into submission. A little like real life, then.

Most of the cars that you can purchase are stock models of real road cars of the period around 1985 to 1997, but nearly all of these cars can be further modified using custom parts. These parts range from new turbochargers to a variety of tyres and from new clutches and transmissions to suspension systems. In turn, many of these sub-systems can be adjusted, including gear ratios, suspension travel and so on. Some of these systems create dilemmas, such as the choices between different turbocharger systems – will you put in that top-rated turbocharger and have to thrash it through all of the corners, or will you hold back, with less power when it comes to the straights? The game presents these options in terms which would not be unfamiliar to mechanics in real life, so real-world knowledge of automotive settings is helpful.

While the original Gran Turismo did not present any of the real-world circuits found in later iterations of the series, it did have a decent collection of original circuit designs. With eleven circuits, ten of which are also present in reverse layouts, there is a nice balance between slower street circuits and more winding dedicated tracks. Each of them is well-designed, with sufficient variety to keep them from getting dull or painfully repetitive.

The goal of the Gran Turismo mode is not only to win races, which earns money for new cars and improvements to existing cars in the player’s garage, but to win race series, the prizes for which include additional new cars. From the Sunday Cup, where your opponents are hatchbacks or small sports cars with leisurely performance, to the GT World Cup, where only the homologation specials or the fastest of player-tuned cars can race with confidence, the different race series deal out different challenges and different rewards. Some of these races can be immensely challenging, including three-plus-hour endurance races around the most difficult circuits in the game.

To race in these series, however, the player must successfully complete licence tests. These tests aim both to teach the player the skills they need to succeed and to act as a test of driving proficiency. While they serve well as an extended tutorial, they can be very frustrating, especially in the case of the final licence, the International A licence. The licence tests are arguably far more difficult than those in later iterations of the series – I was only able to obtain the International A licence in about 2009, after several years of playing. The International A licence is made more difficult by the cars used in the tests; the Dodge Viper is the most powerful stock car in the game and has lairy handling to boot, while the TVR Griffith is a TVR, and therefore designed to eat small children.

(Screenshot: “Looks great, goes like stink, but has an unfortunate habit of trying to kill its occupants.”)

The frustration doesn’t only come from the difficulty of the licence tests, but from the way the game locks out a lot of the content until you complete them. Notably, you require the International A licence to do the GT World Cup along with all of the endurance races, a limitation which would not apply as heavily in later Gran Turismo games.

That said, given the vintage of the game, Gran Turismo still has extraordinarily tight gameplay. The game is no longer a paragon of simulation physics, having been supplanted not only by its own successors but by ultra-realistic PC racing simulators, which could afford to be uncompromising to extremes by virtue of their market niche and their userbase. Nevertheless, Gran Turismo makes you work for your victories. It doesn’t condescend to the player who only wants to thrash their cars around with no consideration of racing strategy. It’ll punish the person who thinks that high power is the be-all and end-all of automotive racing, the sort of person who doesn’t take proper care over their suspension and transmission settings.

This game is not for everybody. The simulation bent of the game isn’t going to appeal to all. The licence tests may prove too much of a challenge for some players, while the in-depth tuning settings will alienate those who just want to sit down and race. The Quick Arcade mode goes some way towards giving the casual player some degree of playability, but it is still largely subject to the same sort of realistic gameplay as the Gran Turismo mode.

There are other flaws that exist as well, some of which are also present in later Gran Turismo games. The artificial intelligence is poor, sticking unswervingly to a racing line, even when you’re on that racing line. There is no attempt at making the AI look anything other than robotic, missing those little mistakes, daring or desperate overtaking manoeuvres or the occasional spinouts that would characterise a human driver. Due to licence agreements with the companies which produced the depicted cars, there is no damage modelling, meaning that a player is not heavily punished for some scenarios which would put them out of a race or have them disqualified in a real race. Given that contemporary PC racing simulators did have car damage, this is disappointing from the perspective of realism.

Surprisingly, though, the graphics haven’t held up too badly. The low-polygon nature of the game is evident, but Gran Turismo was one of the most technically ambitious projects on the PlayStation, and that ambition overstretched neither the technology nor the execution. Considering that the PlayStation only possessed 2 MB of main RAM and an additional 1 MB of video RAM, the graphics are fluid, consistent and hold up better than those of many other games of the era. The unlockable high-resolution mode makes this even more apparent, but is unfortunately limited to the night street circuits and time trial racing.

(Screenshot: “A bit grainy, a bit blocky, yes, but it held up much better than the likes of Final Fantasy VII.”)

The sounds haven’t held up quite as well, but are still serviceable. Compared to the visceral roars of engines in the contemporary PC Formula One simulation, Grand Prix Legends, the engine notes in Gran Turismo are really rather tame. However, they aren’t embarrassing and it would hardly be an easy task to go out and record every iteration of an engine note for every car in the game. The music very much places this game in the Nineties, with a mixture of alternative rock from Garbage, Ash and Feeder, along with a set of techno tracks from Cubanate.

Gran Turismo has been overtaken in many ways by its successors and by later racing simulators of other series. However, Gran Turismo offered something which was then unknown – a realistic presentation of racing on a console platform. Without the original, the Gran Turismo series would not continue to delight its fervent fans today. The driving physics may be decidedly dated by modern standards, particularly by the standards of niche-market PC racing simulators, but Gran Turismo is still fun more than fifteen years after its release.

Bottom Line: While the driving physics no longer conform to the idea of a racing simulator, Gran Turismo is still a fun game with a real challenge behind completing it. It’s held up surprisingly well considering its age, but lacks content versus most of its successors.

Recommendation: Gran Turismo is still worth playing and, with more than 10 million copies sold, it won’t be too hard to get a copy. Its primitive graphics and dated physics simulation will definitely put off some gamers, though, and its appeal lies more in it still being fun than in its accuracy or its level of content.

The Listicle – An Unfortunate Trend Towards Throwaway Journalism

2013 has had a myriad of throwaway trends, many of them spurred on by social media, ranging from Gangnam Style and the Harlem Shake (remember that?) to a series of phenomena described by increasingly cringeworthy names like Bitcoin (which, incidentally, I regard as nothing more than a tulip panic for Randite libertarians who get off to their copies of The Fountainhead), twerking and the selfie. These trends have something in common, namely their transient nature. There’s nothing necessarily wrong with transient trends, although most of them just seem extraordinarily silly to me – the sort of thing to bring up in twenty years’ time purely to embarrass you in front of your children. However, there’s another trend that has taken grip during 2013, one that has far more potential to outlast its moment and one which indicates a worrying and unfortunate decline in people’s reading standards.

This trend is, as with so many others of 2013, described by a deeply unsatisfying name – the listicle. Lists have been a component part of journalism for quite some time now, but outside of internet-focused writing, tend to be reserved for when a writer needs an easy way out near the end of the year, such as “The Top 10 Thingamajigs of 2013”. They aren’t always particularly satisfying or fulfilling reads, but they’re quickly-read, quickly-written, condensed forms of writing. The listicle attempts to shoehorn this style into an article format – and that’s when the problems start.

The listicle pre-dates the term, having long been the chief output of flashy magazines like Cosmopolitan and websites like Cracked. I once read a great deal of the contents of Cracked, which at least usually fills its articles with enough content to merit the “article” part of the listicle. Such writing is again neither particularly satisfying nor fulfilling. It’s more like the McDonald’s Extra Value Meal of the writing world – quickly made, quickly digested and leaving you craving more about half an hour afterwards. I believe it to be a lazy stop-gap in place of proper articles, but the trend has been there for quite some time without showing any signs of stopping, and in some rare cases these listicles are treated as an authority on a subject – for example, Rolling Stone‘s 500 Greatest Songs of All Time.

Unfortunately, as the listicle has obtained its name and become apparent to more and more people, standards have dropped even lower. One of the major offenders in this field is Buzzfeed. If Cracked serves out the journalistic equivalent of McDonald’s, Buzzfeed serves out the equivalent of pet food – doled out in industrial quantities with only the slightest regard for quality and, after a while, making you wonder whether it’s really acceptable to be digesting it at all. And yet there seems to be very little shame in reading Buzzfeed articles; I see them popping up in my Facebook feed all the time. I’ve read some of them. They’re almost entirely devoid of actual substance, with maybe a blurb underneath each picture to provide the sole input of the writer into the piece. The relation between the elements – the thing that defines a list in the first place – is tenuous at best in a Buzzfeed article. The number of elements often seems to have been drawn out of a hat, and there isn’t even always a match-up between what the article title indicates and what the URL indicates. I’m starting to feel dirty for venturing into them at all.

Not everybody has the appetite for writing – or reading – extensive footnoted articles. I get that. I understand why there’s an appetite for the sort of listicles that Cracked delivers. Hell, I’ve read a great deal of them. It’s a lazy sort of writing, often without a proper conclusion, but as I said, not everybody wants to – or even can – write a substantial, serious article. On the other hand, I cannot excuse Buzzfeed for its sort of writing. Shamelessly lazy, devoid of substance and cynically simplistic, a Buzzfeed listicle isn’t worth the hard disc space it’s written on.

Of course, this sort of lazy, uninspired, unfulfilling writing extends past Buzzfeed – Buzzfeed is just the lowest common denominator in all of this. My worry is that this sort of horrid writing will continue to be popular, drawing in new writers who merely wish to hop on the bandwagon, have no respect for or pride in their own work and will conspire to dole out this pigswill masquerading as proper writing. It may seem odd for me to be this conservative about a digital medium, but if this is the rubbish that we will continue to receive from internet journalism, I will continue to wish in vain for the death of internet journalism and the return of printed newspapers.

By this point, some people may be thinking to themselves, “You’re probably just envious because you’re not a good writer and your articles aren’t popular”. Actually, I already know that I’m not a particularly good writer, for a variety of reasons. That said, I consider it a point of pride that I have never written a ranked listicle – and I’m open to opinions as to whether my Probing The Inaccuracies articles count as listicles or are simply sub-headed articles.

My monitor resolution problem – or is it a problem?

About three days ago, the computer monitor attached to my main desktop died. It had been giving trouble for several weeks by that point, with the backlight resolutely refusing to illuminate the screen unless it was left for several minutes with the power on in advance. Eventually, as this wait dragged on to almost an hour, my patience ran out. Fortunately, I had a spare monitor which I have occasionally used for my other desktops, including my Power Mac G5, so after some fumbling with wires, I had the backup monitor in place and ready to go.

I had some trepidation about using my backup monitor. The monitor which I had been using wasn’t exactly top of the line; at 17 inches and a maximum resolution of 1280×1024, it seemed distinctly dated by the standards of the widescreen 1920×1080 monitors which are now commonplace. The problem, as I saw it, was that the backup monitor is even older – a 15-inch screen with a maximum resolution of 1024×768. I was concerned – what would a modern operating system look like at such a low resolution? Would I experience problems with dialogue boxes, as I have on my netbook with its somewhat similar resolution?

It turns out that I needn’t have worried that much. I’m currently sitting in front of my backup monitor, and things are going fine. I hadn’t realised how little I actually used the extra resolution of my bigger monitor, because most programs that I’ve used haven’t been overly inconvenienced by the lower resolution of this screen. OK, there are some advantages of a higher resolution that I’d like to get back as soon as possible, like being able to fit multiple Emacs or terminal windows on one screen without overlap, but these are not so critical as to make my computer unusable.

Something that I have noticed, however, is that the ostensibly superficial difference in resolution between my backup monitor (1024×768) and my netbook monitor (1024×600) actually matters much more than the difference between my dead monitor and the one I am using right now. For some time, I have questioned the advantages of the extra horizontal pixels provided by widescreen monitors, particularly those with 16:9 aspect ratios, over the more limited horizontal space provided by a monitor with a 4:3 aspect ratio. I have rarely used a computer and wished for vastly more horizontal space; it is instead vertical space that is at a premium. While I recognise that you can rotate the screen display on many modern operating systems so that the vertical axis lines up with the longest side of the monitor, it still makes me wonder why exactly the horizontal axis is given such high priority.
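A quick bit of geometry shows why this bothers me: for the same diagonal size, the wider the aspect ratio, the less physical height you get. The 15-inch diagonal below is just an example figure, not a measurement of any of my actual monitors.

```python
# For a fixed diagonal, compute the physical height of a screen at various
# aspect ratios - nothing more than Pythagoras.
import math

def height_for(diagonal_inches: float, w: int, h: int) -> float:
    return diagonal_inches * h / math.hypot(w, h)

for w, h in [(4, 3), (16, 10), (16, 9)]:
    print(f"{w}:{h} -> {height_for(15, w, h):.1f} inches tall")
# 4:3 -> 9.0, 16:10 -> 7.9, 16:9 -> 7.4: every step wider costs vertical space.
```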

I understand that some of it is to do with media consumption, including movies and television programs. Those aren’t, however, the activities which take up most of my time on a computer. Most of my activities instead involve reading or writing things in one of many text formats – formats which benefit from a substantially narrower view than movies or television programs do. What’s more, coming back to the difference between my backup monitor and my netbook, the 1024×600 resolution of my netbook, along with its small screen size, limits its usefulness as a video consumption device anyway. The extra 168 vertical pixels would have come in very handy on my netbook, but instead it’s lumbered with a widescreen aspect ratio that it neither needs nor benefits from.

Until my old monitor gave up the ghost, I had considered holding off on buying a new monitor until 16:10 aspect ratios became more affordable. Unfortunately, that now looks unlikely ever to happen; such monitors appear to have become decidedly niche devices. Instead, I will replace my monitor with a more affordable 16:9 model, though I still doubt that actually having a widescreen monitor will give me an incentive to find additional uses for the extra horizontal resolution.

Evince and the Detriments of Oversimplification

Very recently, the newest version of Ubuntu, 13.10, was released and I, running Xubuntu on my netbook, upgraded to it. As with most software updates, most of the programs which were upgraded either changed imperceptibly or improved, but one specific program which I use rather frequently was changed drastically.

The program in question was Evince, the GNOME-developed document reader for PDF, DjVu and other similar files. Clearly, somewhere between the last version of Evince that I had used on Xubuntu 13.04 and the new version on Xubuntu 13.10, somebody on the GNOME team decided that it would be a good idea to dramatically change the graphical interface. Away went the menus and, as far as I can tell, the ability to customise the icon bar; in their place came the sort of user interface familiar to users of Google Chrome, with minimalism the order of the day, a few icons across the top and a single menu button placed on the right-hand side of the window rather than the left.

I’m not a fan. My very first post for this blog criticised Google Chrome for exactly the same reasons, but at least Chrome started out like that. Evince had a serviceable interface, if not an exceptional one by my standards, but the new changes are not at all to my taste. The context that the menus provided for the options contained within them has disappeared, the remaining menu options are placed on what I perceive to be the wrong side of the screen and the removal of the icon bar customisation options makes it slower to do what I want.

At this point, some of you may be thinking that the way to get around the minimalism of the user interface is to learn the keyboard equivalents of the commands I want. To be fair, I have used some very minimalistic and very idiosyncratic programs, including the ed text editor and other programs with very little graphical indication of what is going on. On the other hand, PDF is by its nature a graphical medium, and it would therefore be fitting for a PDF reader to use graphics extensively in its interface.

On my main desktop, I use openSUSE 12.3 with the KDE interface, and my document viewer on that operating system is therefore Okular, the KDE equivalent of Evince. While I understand that KDE’s heavyweight support of features over minimalism isn’t to everybody’s taste, Okular is one of the best PDF readers that I’ve used, with an interface that I can tailor closer to my liking than those of many other document readers and an accurate depiction of the typography of the page. (Compared to this, I am not impressed by the typographic rendering done by Evince; it appears, on my netbook at least, to align typefaces to pixel boundaries rather than subpixel boundaries, giving a very disappointing image with many misaligned fonts.)

I’d be inclined to use Okular on my netbook as well, if it weren’t for the fact that installing Okular means pulling in most of the KDE desktop environment. There wouldn’t be much point in doing that without using KDE as the desktop environment – and on a netbook with an Intel Atom processor running at 1.6GHz, there isn’t much hope of it running quickly enough to satisfy my needs. This doesn’t leave me with many options: Adobe Reader and Foxit Reader are both too proprietary for my liking, Evince’s interface is poor and its font rendering is, at least in my experience, shoddy, Okular drags most of a desktop environment along with it and most other options are too obscure or too old to consider.

Perhaps the best step would be to move towards an even more minimalistic document reader in the guise of MuPDF; at least with this, the minimalism is to be expected and the keyboard shortcuts are therefore correspondingly more intuitive. All I can hope is that not everybody decides to go towards a Google Chrome-style interface – I’d rather use command-line programs all day long than have to deal with that sort of compromise.
