SPECviewperf 11 Test Results
The Standard Performance Evaluation Corporation is “…a non-profit corporation formed to establish, maintain and endorse a standardized set of relevant benchmarks that can be applied to the newest generation of high-performance computers.” Their free SPECviewperf benchmark incorporates code and tests contributed by several other companies and is designed to stress computers in a reproducible way. SPECviewperf 11 was released in June 2010 and incorporates an expanded range of capabilities and tests. Note that results from previous versions of SPECviewperf cannot be compared with results from the latest version, as even benchmarks with the same name have been updated with new code and models.
SPECviewperf comprises test code from several vendors of professional graphics modeling, rendering, and visualization software. Most of the tests emphasize the graphics card over the CPU, and have between 5 and 13 sub-sections. For this review I ran the Lightwave, Maya, and Siemens Teamcenter Visualization tests. Results are reported as abstract scores, with higher being better.
The lightwave-01 viewset was created from traces of the graphics workloads generated by the SPECapc for Lightwave 9.6 benchmark. The models for this viewset range in size from 2.5 to 6 million vertices, with heavy use of vertex buffer objects (VBOs) mixed with immediate mode. GLSL shaders are used throughout the tests. Applications represented by the viewset include 3D character animation, architectural review, and industrial design.
The maya-03 viewset was created from traces of the graphics workload generated by the SPECapc for Maya 2009 benchmark. The models used in the tests range in size from 6 to 66 million vertices, and are tested with and without vertex and fragment shaders. State changes such as those executed by the application, including matrix, material, light, and line-stipple changes, are included throughout the rendering of the models. All state changes are derived from a trace of the running application.
Siemens Teamcenter Visualization Mockup
The tcvis-02 viewset is based on traces of the Siemens Teamcenter Visualization Mockup application (also known as VisMockup) used for visual simulation. Models range from 10 to 22 million vertices and incorporate vertex arrays and fixed-function lighting. State changes such as those executed by the application, including matrix, material, light, and line-stipple changes, are included throughout the rendering of the model. All state changes are derived from a trace of the running application.
Lightwave scales nicely with CPU speed, while Maya and TCVIS appear to be bound more by the video card than the CPU. In the next section I’ll try some video transcoding…
x264 HD 5.0 Test
Tech ARP’s x264 HD Benchmark comprises the Avisynth video scripting engine, an x264 encoder, a sample 1080p video file, and a script file that actually runs the benchmark. The script invokes four two-pass encoding runs and reports the average frames per second encoded. The script is a simple batch file, so you could edit the encoding parameters if you were interested, although your results would then no longer be comparable to others’.
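To make the two-pass structure concrete, here is a rough sketch of the pattern the batch file drives. The file names, bitrate, and stats path below are placeholders of my own, not the benchmark’s actual settings; the real script fixes its own parameters so everyone’s results stay comparable.

```python
# Sketch of the two-pass x264 encoding pattern the benchmark script uses.
# File names and bitrate are hypothetical, not the benchmark's real values.

def two_pass_commands(source="sample-1080p.avs", bitrate_kbps=4000):
    """Build the pass-1 and pass-2 x264 command lines for one run."""
    common = ["x264", "--bitrate", str(bitrate_kbps), "--stats", "x264.stats"]
    # Pass 1 analyzes the video and writes the stats file; output is discarded
    # (NUL is the Windows null device, matching the batch-file context).
    pass1 = common + ["--pass", "1", "-o", "NUL", source]
    # Pass 2 reads the stats file back and produces the final encode.
    pass2 = common + ["--pass", "2", "-o", "output.mkv", source]
    return pass1, pass2

def average_fps(per_run_fps):
    """The benchmark reports the mean fps across its encoding runs."""
    return sum(per_run_fps) / len(per_run_fps)

p1, p2 = two_pass_commands()
print(" ".join(p1))
print(" ".join(p2))
```

The first pass exists only to gather complexity statistics; the second pass uses them to distribute bits, which is why each run costs two full trips through the source file.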
This is another example of a useful benchmark that’s based on real-world code. I like encoding benchmarks since they’re among the few tests that measure a real-world use of modern multi-core processors. I like this particular benchmark since it’s the best “overclock killer” I’ve seen: systems that will run most stress tests all day long with a given set of overclock settings will crash on this benchmark.
Here we can really see the performance benefits of overclocking. Just flipping a switch on the motherboard raises the Run 1 performance by almost 8% and the Run 2 performance by almost 9%. Auto tuning gives a 9% and 10% boost, respectively, while my manual overclock hits 17% and 15% increases.
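For reference, those percentages come from the usual relative-speedup calculation. The fps figures below are hypothetical, chosen only to show the arithmetic on the same scale as the Run 1 result:

```python
def percent_gain(baseline_fps, overclocked_fps):
    """Relative speedup: (new - old) / old, expressed as a percentage."""
    return (overclocked_fps - baseline_fps) / baseline_fps * 100.0

# Hypothetical numbers: 50.0 fps stock vs. 54.0 fps overclocked
# works out to an 8% gain.
print(round(percent_gain(50.0, 54.0), 1))  # -> 8.0
```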
Join me in the next section for my final thoughts and conclusion.