For serious overclockers, pushing the limits of their personal computer’s processing speed is an addictive and competitive sport. These highly technical hobbyists push their PC hardware to perform far beyond its specifications. As processing speeds increase, the hardware generates more heat; too much heat, and the hardware can be damaged. Experienced overclockers know to cool down their central processing units — in some extreme cases, using liquid nitrogen coolants to bring temperatures down. And the prize for those who attain the highest speeds? A place in Futuremark’s Hall of Fame, a showcase for the world’s fastest PC hardware and most talented overclockers.
While overclockers are on the extreme end of technical capabilities, they share a common goal with comparison shoppers and product reviewers — to better understand and compare the performance of hardware. How do you compare the performance of graphics cards, processors, motherboards, and mobile devices amid the bewildering number of choices and competing claims? Many have come to rely upon Futuremark (a UL company) for its benchmark applications, such as 3DMark®, PCMark™, and Powermark®.
“Two graphics cards that cost the same amount of money can have significant performance differences, and it’s not always easy to compare the technical specs,” noted James Gallagher, Futuremark marketing manager. “We have found a willing and ready audience, both with home users and reviewers in the press, who need an objective tool to determine what they or others should buy.”
As with other UL businesses, Futuremark maintains close relationships with manufacturers, through its Benchmark Developers Program, while preserving its neutrality. Engineers with the company share specifications and target performance levels with manufacturers early in the design phase of new benchmarks. The manufacturers then provide input and guidance on new hardware in development, and a continual cycle of feedback during product development helps ensure accurate benchmarks.
Cooperation keeps everyone honest. “If one manufacturer suggests something self-serving, the others will protest and suggest a better way to implement the benchmark, helping to ensure that it is accurate,” added Gallagher. “If only one manufacturer is delighted, then there’s a problem. If all manufacturers are satisfied, then we’ve done our job well.”
Adding to the complexity, Futuremark engineers are often writing the source code for their benchmarks while the manufacturers are developing their products in parallel. The benchmark source code is shared with the manufacturers throughout development, a process that can take up to two years. Futuremark continues to take feedback right up until the benchmark launches; once it launches, however, no further changes are allowed. From that point on, consumers must be able to trust the integrity and consistency of the benchmark.
Futuremark benchmarks are very popular with the media. Hundreds of websites and magazines publish in-depth hardware performance reviews and buyer’s guides based upon Futuremark benchmarks, helping consumers make sense of an overwhelming number of fast-changing technology options.
Beyond providing third-party tools for product performance assessment, Futuremark has changed the technology pricing dynamic. When benchmarks were not commonly used for performance comparisons, products were often priced similarly despite large differences in performance. Today, thanks to Futuremark’s benchmarks, consumers are more knowledgeable about product performance, and market pricing better reflects it.
Similarly, Futuremark benchmarks have encouraged improvements in product performance. Gallagher put it this way: “Imagine if a manufacturer released a new flagship device that performed only slightly better than the previous model? Customers expect large leaps in performance with each new generation of hardware. The widespread use of benchmarks drives manufacturers to design products that deliver those leaps.”