On The Gaming PC Upgrade Cycle and The Future

It is an often-perpetuated myth that personal computer gaming is a ludicrously expensive business, with people having to spend thousands of dollars every six months to stay on top of the curve. There is a kernel of truth buried under a lot of exaggeration: personal computer gaming is expensive, but upgrade cycles come every two to four years for most PC gamers, depending on their tolerance for lower resolutions, and the “sweet spot” for a new build right now sits somewhere around the €750 mark, with a €500 budget still yielding a reasonable machine.

Indeed, over the last three years or so, desktop computer design has reached a point where most computers on the market with any sort of discrete graphics card can play the majority of modern games acceptably. Games consoles are a major contributor to this situation, especially with Sony’s insistence that the PlayStation 3 will last for a decade (perhaps as a budget option beside the PlayStation 4, but they’re dreaming if they think it will remain at the vanguard of the console war).

Consoles are considerably less powerful than gaming computers. The graphics they produce are fairly impressive, but most games are not rendered at a native 1080p – they are upscaled for higher-resolution displays. As consoles have become the predominant platform for graphics-intensive games, the graphical quality of multi-platform titles tends to be limited by the lowest common denominator, the Xbox 360. Because there is little commercial incentive to push a computer to its limits, the most graphically intensive game most people can name, Crysis, dates back to 2007.

All of this has made me reconsider the gaming computer in this context. If consoles built from hardware that was mid-range by PC standards when they launched are going to limit graphical quality on personal computers, there’s little point in spending huge amounts of money on a ridiculously powerful monolith of a machine. An AMD (formerly ATI) Radeon HD 5970, the most powerful graphics card available, really needs three monitors, preferably at 2560×1600, to demonstrate its power properly. That isn’t performance to turn one’s nose up at, but not many people have €8,000 or so to spend on a computer and three monitors simply to get the best out of games which already look impressive at less demanding resolutions. Even my own machine, with a Radeon HD 4890, is a bit overkill for the native 1280×1024 resolution of my monitor.
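For a sense of scale, here is a quick back-of-the-envelope comparison of how many pixels each of those setups asks a card to push every frame. This is nothing more than a Python sketch of the arithmetic; the labels are only the resolutions mentioned above.

```python
# Rough pixel-count comparison for the resolutions discussed above.
resolutions = {
    "Three 2560x1600 monitors": 3 * 2560 * 1600,
    "Single 2560x1600": 2560 * 1600,
    "1920x1080 (1080p)": 1920 * 1080,
    "1280x1024 (my monitor)": 1280 * 1024,
}

baseline = resolutions["1280x1024 (my monitor)"]
for name, pixels in resolutions.items():
    # Show each setup's pixel count relative to a single 1280x1024 screen.
    print(f"{name}: {pixels:,} pixels ({pixels / baseline:.1f}x the 1280x1024 screen)")
```

A triple 2560×1600 setup works out to roughly nine times the pixels of a 1280×1024 monitor, which is the scale of workload a card like the HD 5970 is really built for.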

That’s a pretty obvious conclusion, but there are still elements of the gaming PC which don’t make much sense in context. Whenever people ask for recommendations on the specifications their own gaming PC builds should follow, there will usually be a big discussion about the power supply unit. There’s a good reason for this: the PSU is one of the components of a personal computer most likely to fail, and I advise never going cheap on it, but a lot of the suggestions on the internet recommend a 750 watt power supply.

I’ve started to wonder of late why a personal computer should need 750 watts to keep itself running – enough to light a house full of non-CFL lightbulbs, or several houses fitted with energy-saver bulbs. Even accounting for the inherent inefficiency of power supplies, and assuming the PSU runs at 80% efficiency at maximum load, that still leaves 600 watts delivered to the internals of the computer.
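To make that arithmetic explicit, here is a minimal sketch, assuming the 750-watt figure is the draw at the wall and that the 80% efficiency figure holds at full load:

```python
# Minimal sketch of the PSU arithmetic above, under the assumption that the
# 750 W figure is what the unit pulls from the wall and that it converts
# 80% of that into usable power at full load.
wall_draw_watts = 750
efficiency_at_full_load = 0.80

delivered_to_components = wall_draw_watts * efficiency_at_full_load
lost_as_heat = wall_draw_watts - delivered_to_components

print(f"Delivered to the internals: {delivered_to_components:.0f} W")  # 600 W
print(f"Lost as heat in the PSU:    {lost_as_heat:.0f} W")             # 150 W
```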

Graphics cards are a major culprit here. As the power of a graphics card increases, the electrical power it requires increases as well, sometimes to slightly absurd levels. With the market for expensive, high-end GPUs shrinking as game specifications stay relatively level, perhaps it’s time for graphics card manufacturers and developers to start considering how to improve the power efficiency of their products.

AMD has taken a small step towards this goal with its Radeon HD 5xxx series, with the 5770 delivering roughly the graphical potential of a previous-generation Radeon HD 4870 while demanding less electrical power. Yet a discrete graphics processing unit still sucks up a lot of power at idle, sometimes in the region of 100 watts. My HD 4890 would sit mostly unused, drawing power just to render Windows 7’s Aero effects, if I didn’t keep an instance of Folding@home running on it in the background while doing less computer-intensive tasks. Perhaps it’s time to work on graphics cards with separate units for slower and faster graphical workloads, with the ability to switch automatically between them. This capability has already been demonstrated on laptops, where power consumption is a big deal, but it needs to be demonstrated on desktops too.
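The switching logic itself wouldn’t need to be complicated. Something along these lines is all the decision amounts to – a toy sketch with invented names, thresholds, and wattage figures, not a model of any real driver:

```python
# Toy sketch of the automatic switching policy imagined above: hand light
# work (desktop compositing, video playback) to a low-power unit and only
# wake the discrete card for demanding 3D loads. All figures are invented
# for illustration; this does not model any real driver API.
DISCRETE_IDLE_WATTS = 90   # hypothetical idle draw of a discrete card
LOW_POWER_UNIT_WATTS = 10  # hypothetical draw of a slower, efficient unit

def choose_unit(estimated_3d_load_percent: float) -> str:
    """Pick a graphics unit based on a rough estimate of 3D workload."""
    return "discrete card" if estimated_3d_load_percent >= 20 else "low-power unit"

for load in (2, 15, 75):
    unit = choose_unit(load)
    saved = DISCRETE_IDLE_WATTS - LOW_POWER_UNIT_WATTS if unit == "low-power unit" else 0
    print(f"{load:>3}% estimated load -> {unit} (about {saved} W avoided)")
```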

The development of SLI and CrossFire by the two main graphics card developers has created another problem. Yes, I understand that NVIDIA and AMD need to offload their stock of leftover mediocre graphics cards somehow. That doesn’t excuse them for trying to dump this technology on us. If they want to sell dual-graphics-card systems, they should at least make sure the end user gets graphical performance in proportion to the electrical power consumed. Fewer dual-graphics-card systems, fewer high-end graphics units which serve more to show off the potential of the company than to do any real work, and more decent, power-efficient mid-range components which make financial sense, please.
