I’ve mentioned previously the relevance of test computers: the best, most relevant test bed is the one that matches your computer system exactly (and runs the exact same software). Obviously, that isn’t very practical – you have to settle on some sort of standard eventually; there are just too many configurations to test otherwise. For the purposes of this article, we’ll focus on two platforms:
- Intel Core i5 + 1x GTX 970
- AMD FX-8350 + 2x GTX 970
While running two GTX 970s on both platforms would bring the total system costs within range of each other, we’ll assume for the purposes of this article that a user starts with the AMD platform – the Intel system’s budget is therefore spent on switching sockets rather than on adding graphics power. The AMD system will require a bit more cooling and a quality motherboard to reach higher overclocks, but overall the two would be natural competitors.
As a bonus, we’ll break down the results for each platform and analyze the frame times as well. The intent is to discover which platform enables the better gaming experience, and Frame Times, in my opinion, are central to that experience. It is my position that a consistent 100 frames per second is preferable to an alternating 120/80 FPS – even on a display capable of showing 120+ FPS. Why do frame times matter so much? Let’s talk about that.
FPS (Frames Per Second) can only tell us so much. I’ll be referring to “Frame Times” throughout this article, and it’s worth spending a little time explaining why. In short, Frame Time refers to – simply enough – the time it takes to render a single frame. Why is this important, and why should we care? No discussion of the subject would be complete without referring to Scott Wasson’s work at techreport.com – the best article I’ve seen written about it, and possibly the reason anyone began to care about frame times in the first place. I’d highly recommend giving it a read for an even better understanding of the topic.
The problem isn’t frame times per se; it’s a limitation of the FPS measurement. A lot can happen within a fraction of a second – especially in a desktop computer, which literally runs at billions of cycles per second – and knowing what happens inside that second can tell us quite a bit. For example, a typical sequence of 100 frames might take exactly one full second to render, but each of those frames was rendered individually, and each took a varying amount of time. Ideally, we could split that second evenly across the frames, but we all know this assumption is inaccurate: our 100 FPS figure doesn’t mean each of the 100 frames took exactly .01 seconds – 10 ms – to render. It’s also quite possible that one frame took a full tenth of a second (100 ms) while the other 99 took about 9.09 ms each (milliseconds being the unit better suited to comparing slices of a second). Both scenarios average 100 FPS – but which would provide the better gaming experience?
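To make that arithmetic concrete, here’s a quick Python sketch (the numbers are the illustrative ones from above, not measured data). Both sequences total exactly one second for 100 frames, so both report the same average FPS, yet their worst-case frame times differ by a factor of ten:

```python
# Two ways to render 100 frames in exactly one second (times in seconds).
# Scenario A: perfectly consistent -- every frame takes 10 ms.
# Scenario B: one 100 ms hitch, with the other 99 frames at ~9.09 ms each.
consistent = [0.010] * 100
hitchy = [0.100] + [0.900 / 99] * 99

for name, times in [("consistent", consistent), ("hitchy", hitchy)]:
    total = sum(times)                    # both total 1.0 second
    avg_fps = len(times) / total          # both report 100 FPS
    worst_ms = max(times) * 1000          # this is where they differ
    print(f"{name}: avg {avg_fps:.0f} FPS, worst frame {worst_ms:.2f} ms")
```

An FPS counter shows 100 in both cases; only the per-frame times reveal the 100 ms hitch.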
Perhaps an analogy will illustrate: two cars drive the same distance at the same average speed. Let’s say each is tasked with driving a 60-mile section of road, with the goal of completing the route in one hour. Obviously, the average speed needed to complete that section is 60 miles per hour, right? Now suppose the two drivers have different driving styles. One drives at 100 MPH for a minute, then slows to 20 MPH, accelerates back to 100 MPH after another minute, and so on – while the other maintains a steady 60 MPH, varying only 2–3 MPH. Each finishes the section in the same amount of time, but which experience is better? If top speed were the only thing that mattered, you’d clearly choose to ride with the first driver. Most would have to agree – however exhilarating it might be – that the first experience would be much more jarring. Notice that it’s the consistency that matters most to the experience: if that same driver spent the first half of the trip at 100 MPH and the entire second half at 20 MPH, the average speed would again be the same, yet the experience much different.
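The analogy’s arithmetic checks out, and a short sketch shows why: tallied minute by minute, the alternating driver and the steady driver cover exactly the same distance, so their average speeds are identical even though their speed profiles are wildly different.

```python
# The two driving styles from the analogy, tallied minute by minute.
# Driver A alternates: one minute at 100 MPH, then one minute at 20 MPH.
# Driver B holds a steady 60 MPH for the whole hour.
minutes = 60
speeds_a = [100 if m % 2 == 0 else 20 for m in range(minutes)]
speeds_b = [60] * minutes

# Each minute is 1/60 of an hour, so distance = speed / 60 per minute.
dist_a = sum(s / 60 for s in speeds_a)   # 30*100/60 + 30*20/60 = 60 miles
dist_b = sum(s / 60 for s in speeds_b)   # 60*60/60 = 60 miles

print(f"Driver A: {dist_a:.1f} miles, Driver B: {dist_b:.1f} miles")
```

Average speed, like average FPS, hides the variance entirely – which is the whole point.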
Why does this matter to gamers – or to anyone, for that matter? To completely understand the relevance, we’d probably have to discuss human vision as well, but I’m definitely not a neuroscientist, and I’m not that knowledgeable about the subject in the first place. I do have my own experiences, though, and it’s those I’ll draw on in the following theory.
First – what matters to computer gamers? Most would answer something like “smooth gameplay” or “high FPS.” But why? I contend that it’s the link between your hand movements and the action happening on screen that matters most – FPS being just one aspect of that chain, and in my view the most overrated one. For hand-eye coordination and sub-half-second reaction times to develop in the first place, our brain seems to need something consistent to “anticipate” – consistent being the key term. If a single motion happens the exact same way every time, the brain appears able to take a “shortcut”: the reaction is expected, so less brainpower is needed to perform the action. As soon as an anomaly is detected, the brain has to compensate, however slightly – and this becomes vastly more difficult when the expected behavior varies widely from action to action. So I personally feel that keeping the link between the motion your hands perform and the action on screen as consistent as possible – the chain unbroken – is what really matters to gamers. Obviously there are other factors as well: a human watching an image at 100 frames per second has twice the information to work with – whether they can make use of it or not – as one watching at 50 FPS. And although we don’t really “see” in frames per second (apparently the exact mechanics aren’t even known for sure – far beyond the scope of this article anyway), it’s probably the most useful measurement we have for desktop computers that generate images frame by frame. Until now, anyway – hence the discussion of Frame Times.
So we begin to arrive at the reason I recommended upgrading to an Intel system first: I’ve noticed that frame times are much more consistent on the Intel systems I’ve benchmarked than on the AMD-based platforms. Note that previous work on frame times focused on GPUs – I propose that, on most desktop systems, the greater contribution to frame time lies with the CPU. Obviously, when comparing GPUs you’d want to test them on the same computer to eliminate that variable; when choosing what to upgrade on an AMD system, however, the task becomes a little more complicated. The rest of this article will hopefully illustrate in more concrete terms how much of a difference there really is between the two platforms, and whether it’s worth switching from one to the other.
Note that all of this is moot if you use VSync on a 60 Hz display: as long as the frame rate stays above 60 FPS (each frame takes less than 16.6 ms to render), no frame is displayed until the monitor is “ready” for it, making what happens inside each refresh interval a non-issue. Without VSync, however – or above/below 60 FPS, or on higher-refresh-rate displays – it can drastically impact the “experience,” which is what we’re trying to discover.
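The refresh-budget math here is simple enough to sketch. At 60 Hz the display refreshes every 1000/60 ≈ 16.67 ms, so with VSync any frame finished inside that budget waits for the next refresh – a 5 ms frame and a 15 ms frame look identical on screen, while a 20 ms frame spills into a second interval:

```python
import math

# At 60 Hz with VSync, the display refreshes every 1000/60 ~= 16.67 ms.
refresh_hz = 60
budget_ms = 1000 / refresh_hz

for render_ms in (5, 15, 20):
    # Number of refresh intervals the frame occupies (never less than 1):
    # frames faster than the budget still wait for the next refresh.
    intervals = max(1, math.ceil(render_ms / budget_ms))
    print(f"{render_ms} ms frame -> shown after {intervals} refresh interval(s)")
```

This is why frame-time variance under the 16.6 ms budget is invisible with VSync on a 60 Hz panel, and why it reappears the moment you disable VSync or raise the refresh rate.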
This is where I ran into a roadblock: SLI isn’t as simple as it appears at first glance. To be perfectly honest, I haven’t delved much into dual graphics cards. Sure, I’ve Crossfired a few cards here and there, and that process was relatively simple – but until recently I didn’t bother with duplicate Nvidia GPUs, since I was generally more interested in the capabilities of different categories of GPU. Hearing that SLI was generally a better experience, and seeing the improvement and scaling in game benchmarks, finally made me try it out – though in typical “enthusiast” fashion, I had to do it the hard way. Some shenanigans were necessary to run the following tests. It turns out NVIDIA isn’t entirely truthful when they say you can SLI any two cards with the same GPU (2x GTX 960, 2x GTX 970, etc.) – in practice, they need to be identical. Cards that use the same PCB *should* work, but I ran into issues enabling SLI with a Zotac GTX 970 and a reference Nvidia GTX 970.
From NVIDIA’s FAQ:
Can I mix and match graphics cards from different manufacturers?
Using 180 or later graphics drivers, NVIDIA graphics cards from different manufacturers can be used together in an SLI configuration. For example, a GeForce XXXGT from manufacturer ABC can be matched with a GeForce XXXGT from manufacturer XYZ.
This was not my experience – and as it turns out, I’m not alone. Enter Different SLI [AUTO]: a tool that patches the Nvidia drivers to enable SLI between cards that wouldn’t normally allow it, and the only tool that got the two GTX 970s I had on hand working together.
I’d invite anyone with access to an AMD FX/Core i5 system and two identical GTX 970s to attempt to reproduce my results. From what I’ve read, using Different SLI shouldn’t have more than a 1-2 FPS impact on performance, but without having two identical GTX 970s to test that theory, it remains just an anecdote.
While this article isn’t a review of that particular software tool, I can confirm that it did work. It was a bit of a hassle, however, and I’m not sure I would rely on it for a “production”/main computer system. Still, it did allow me to run two GTX 970s in SLI, which enabled the results we’ll look at later.