Each computer system first ran through a full demo of 3DMark Fire Strike Extreme before the rest of the benchmarks, allowing any temperature-related issues to settle (or reveal themselves) and any caching to occur. The remaining benchmarks were run in order, using FRAPS and the FRAFS Bench Viewer to capture frame times and average/minimum/maximum frame rates over a two-minute run in each game (the 3DMark tests were allowed to complete fully).
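(As an aside for anyone wanting to reproduce these numbers: here’s a minimal sketch of how a frame-time log reduces to average/min/max figures. It assumes FRAPS’s cumulative “Frame, Time (ms)” frametimes format; the file name and function here are illustrative, not part of either tool.)

import csv

def summarize_frametimes(path):
    # Assumed format: a header row, then "frame_number, cumulative_ms" rows,
    # as written by FRAPS's frametimes capture.
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        cumulative = [float(row[1]) for row in reader]

    # Per-frame render times are the differences between cumulative timestamps.
    frame_ms = [b - a for a, b in zip(cumulative, cumulative[1:])]

    total_s = (cumulative[-1] - cumulative[0]) / 1000.0
    avg_fps = len(frame_ms) / total_s
    min_fps = 1000.0 / max(frame_ms)  # slowest frame -> lowest instantaneous FPS
    max_fps = 1000.0 / min(frame_ms)  # fastest frame -> highest instantaneous FPS
    return avg_fps, min_fps, max_fps

avg, lo, hi = summarize_frametimes("frametimes.csv")
print(f"avg {avg:.1f} / min {lo:.1f} / max {hi:.1f} FPS")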
Crysis 3 and ARMA3, the two games chosen to compare and illustrate differences between each brand’s CPUs, were each run three times (the results shown are the average of the three runs).
I chose to do the SLI tests first. After verifying that SLI was indeed enabled, I ran each system through the gauntlet.
The AMD system is certainly respectable. Two GTX 970s are a lot of graphics horsepower, and this configuration doesn’t exactly struggle on average frame rate. In fact, when that Bulldozer/Piledriver architecture is allowed to stretch its legs, it can really churn out frames at those higher clocks. Unfortunately, it all comes crashing down when we look at the 99th-percentile frame times: the AMD system can’t maintain any sort of consistency between frames.
What exactly does that mean? A percentile marks the point below which a given percentage of the data falls. Essentially, the graph above says the following: 99% of the time, the AMD system generates frame rates faster than 27 FPS. This has the effect of ignoring outliers that occur less than 1% of the time – generally, those frame times are safe to ignore due to their rarity. For an excellent write-up on frame-time percentiles, check out Iain Cantlay’s article Analysing Stutter – Mining More from Percentiles on Nvidia’s developer site.
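In code, that percentile math is straightforward. Here’s a minimal sketch (reusing the per-frame times from the earlier snippet; the simple nearest-rank indexing is just one of several valid percentile definitions):

def percentile_fps(frame_ms, pct=99.0):
    # The pct-th percentile frame time: pct% of frames rendered at least this fast.
    ordered = sorted(frame_ms)
    idx = min(int(len(ordered) * pct / 100.0), len(ordered) - 1)
    slow_ms = ordered[idx]
    return 1000.0 / slow_ms  # convert a frame time in ms back to FPS

# A 99th-percentile frame time of ~37 ms works out to ~27 FPS:
print(percentile_fps([16.7] * 99 + [37.0]))  # -> ~27.0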
Here’s an example frame-time capture from both computers (running ARMA3) that illustrates my point:
These graphs show the actual frame time of each individual frame on each computer. ARMA3’s engine, with all its AI, objects, and complexity, is very CPU-intensive – and it isn’t very friendly to AMD CPUs. The extra graphics horsepower doesn’t really show up here like it might in other engines (like Crysis 3 or Frostbite-based games). An average of 40 FPS isn’t a very good showing for $700 worth of graphics hardware. Now, this is only a single example, and an extreme one – many games don’t struggle on AMD machines – but as we’ll see later, this behavior seems to reflect the nature of AMD’s “module”-based approach.
While the average frame rate differs between the two systems as well, pay attention to the “tightness” of each line. The less scatter or jitter, the “smoother” (more consistent) the experience appears on-screen. The Intel system generates significantly more consistent frame times, holding roughly a 20 FPS / 50 ms advantage over the AMD system. Remember, this is with both computers in SLI – we’ll get to the actual upgrade comparison (Intel + 1x 970 vs. AMD + 2x 970) later.
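If you want to put a number on that “tightness,” one rough approach (my own illustrative metric, not something taken from the graphs above) is the average change in frame time between consecutive frames – lower means smoother:

def jitter_ms(frame_ms):
    # Mean absolute frame-to-frame change: a crude proxy for visible stutter.
    deltas = [abs(b - a) for a, b in zip(frame_ms, frame_ms[1:])]
    return sum(deltas) / len(deltas)

# A tight ~60 FPS trace vs. one alternating between 10 ms and 30 ms frames:
print(jitter_ms([16.7, 16.6, 16.8, 16.7]))  # ~0.13 ms -> smooth
print(jitter_ms([10.0, 30.0, 10.0, 30.0]))  # 20.0 ms -> visible stutter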
Looking at the SLI results from the Intel system, the frame rates follow the same overall pattern. The extra graphics power is realized here as well, especially in CPU-constrained benchmarks like Starcraft 2 and ARMA3. In the context of the original question (to SLI or to switch to Intel), switching to Intel would ultimately realize a significant gain if the user then chose to SLI after switching platforms.
Now that we’ve seen how each platform performs with two GTX 970s, let’s remove that variable and spend some time analyzing each platform individually, bringing us closer to answering our original question.