SPEC

SPEC Blog

News and views about SPEC, our products, and the people in our community.

The State of AI/ML Benchmarking: Addressing Gaps and Driving Progress

In the rapidly evolving field of artificial intelligence (AI) and machine learning (ML), enhancing the accuracy and computational efficiency of large models remains a critical challenge. As of 2024, there are over 100 commercial large language models worldwide, with OpenAI’s ChatGPT boasting more than 500 million users. The swift development of these models is driving significant advances in AI capabilities.

SPECworkstation 4.0 Benchmark Measures Latest Workstation Hardware, Adds AI/ML Workloads

I’m extremely excited that SPEC has announced the release of the SPECworkstation 4.0 benchmark, a major update to SPEC’s comprehensive tool designed to measure all key aspects of workstation performance. This significant upgrade from version 3.1 incorporates cutting-edge features to keep pace with the latest workstation hardware and the evolving demands of professional applications, including the increasing reliance on AI and machine learning (ML).

The Latest News from the SPEC Embedded Group

The first edition of the EG Newsletter is out, giving us a regular opportunity to update the community on everything happening with SPEC’s Embedded Group.

The SPECvirt® Datacenter 2021 Benchmark — An Answer to the Datacenter Architect’s Dilemma

An IT architect designing a virtualization-focused data center faces a variety of crucial choices.

Wrapping Up ICPE 2024: A Milestone Event in Performance Engineering

We are delighted to share insights and highlights from the recently concluded 15th annual ACM/SPEC International Conference on Performance Engineering (ICPE) 2024, held May 7-11 in the historic and vibrant city of London. ICPE has long been the flagship conference in performance engineering, with a tradition of bringing together academic scholars and industrial practitioners to discuss recent advances in the field.

The SPECworkstation 3.1 Benchmark — A User's Story

Nikki is a procurement manager at a large digital media and marketing company that offers vertical solutions for a range of industries, including media and entertainment, life sciences and ecommerce. She was tasked with purchasing over 300 new workstations to satisfy the needs of power users across the company – while staying within a finite budget.

The SPEChpc 2021 Benchmark Suites – A User’s Story

The SPEChpc 2021 benchmark suites, a standard tool for evaluating High-Performance Computing (HPC) systems, prove valuable in a variety of situations. In this blog, Nick Hagerty, HPC engineer from the Oak Ridge National Laboratory (ORNL), and Dr. Junjie Li, Research Associate from the Texas Advanced Computing Center (TACC), two of the largest HPC centers in the U.S., share their experiences with the SPEChpc 2021 benchmark suites in their research and daily data center operations.

Updated PTDaemon Interface Meets Today's Industry Needs

A new version of the PTDaemon interface is now available. It includes multi-channel support for the ZES LMG 450/500/600 families of power analyzers and support for TFA Dostmann and Cleware USB temperature sensors on Linux-based systems. It also includes bug fixes and enhancements, enabling more organizations to use SPEC benchmarks and other tools for more use cases related to increasing energy efficiency, achieving a low carbon footprint, and implementing new cooling technologies.

The SPECapc Benchmark — A User's Story

Ashley, a college sophomore majoring in graphics and design, specializes in 3D modeling. Her modeling tool of choice is Autodesk Maya 2024, the latest version of the popular 3D software that both professionals and enthusiasts use to create realistic characters and blockbuster-worthy effects.

SPECaccel® 2023 Benchmark Suite — A Teaching Tool for High Performance Computing

The SPECaccel 2023 benchmark suite was developed as a collaboration among representatives from industry vendors, high-performance computing centers, and academic institutions to provide a baseline for comparing accelerator performance. While vendor-supplied languages may offer optimal performance for a particular accelerator, the use of directives showcases performance portability across many accelerators.

EEMBC Becomes SPEC Embedded Group

By combining forces, SPEC and EEMBC will significantly further both organizations’ missions to provide global, independent, and high-quality benchmarks, and will offer a single source for benchmarks covering everything from the smallest microcontrollers to the largest supercomputers.

The SPECvirt Datacenter 2021 Benchmark — A User's Story

John, an IT manager at a rapidly growing financial services company, needed to specify the hardware required for the company’s new private cloud deployment. He was pleased to find the SPECvirt Datacenter 2021 benchmark, which offered several benefits compared to the 2013 version – and to any of the other tools in his arsenal.

SPEC Delivers a Major Update to the SPECapc for Creo 9 Performance Benchmark

The benchmark has undergone a significant transformation with three major upgrades since it was first released for Creo 3, and we are particularly gratified to have worked directly with PTC on this update. In addition to new test cases that exercise features added to Creo over the last few releases, we have significantly enhanced the benchmark’s interface to make it far more user-friendly.

The SPECviewperf® Benchmark — A User's Story

Emma is a tech enthusiast who develops product designs and plays games on the same workstation, which includes a CPU and GPU that were mid-range when she bought the system in 2018. She narrowed her choice down to five possibilities based on price and the brands she favored, but she obviously couldn’t afford to purchase all five and test them. She was at a bit of a loss until she noted that in several of the reviews, GPU performance comparisons were based on the SPECviewperf benchmark.

SPEC Member Professor Lizy Kurian John Receives Joe J. King Professional Engineering Achievement Award

We are pleased to congratulate Professor Lizy Kurian John, IEEE Micro Editor-in-Chief and Truchard Foundation Chair in the Department of Electrical and Computer Engineering at the University of Texas at Austin, on receiving the Joe J. King Professional Engineering Achievement Award. Professor John is well known within SPEC for her contributions to SPEC CPU and, in turn, to new CPU design.

It's a Wrap — Successful 14th Annual ICPE 2023 Marks Return of In-Person Event

This year’s event marked an exciting return to an in-person conference, and nearly 150 attendees enjoyed three keynote speeches, 28 research presentations, seven data challenge presentations, a range of workshops and more. The presentations offered a broad range of topics, including AI, fair data sharing practices in research, and performance engineering practices at companies such as ABB, MongoDB and Redis.

SPEC Adds Benchmark Search Program for Graphics and Workstation Performance Group

SPEC believes that the most effective computing benchmarks are based on how various user communities run actual applications. To that end, Search Programs encourage users outside of SPEC to contribute applications, workloads, or models that help us build more comprehensive and more applicable benchmarks, which in turn better serve their communities. The new Benchmark Search Program is for the SPEC Graphics and Workstation Performance Group (GWPG).

SPECapc for Maya 2023 Benchmark – Measuring Performance of the Latest High Performance Workstations

The SPECapc for Maya 2023 benchmark consists of 47 tests using eleven different models and animations, comprising eight graphics tests in various modes and five CPU tests. The graphics-oriented tests use six different Maya view settings — Shaded, Shaded SSAO, wireframe on shading, wireframe on shaded SSAO, textured, and textured SSAO — and measure both animation and 3D model rotation performance. The five CPU tests perform CPU ray tracing and evaluation caching in various modes.

SPEC Search Programs — How You Can Contribute to the Future of Computing

SPEC believes that the most effective computing benchmarks are developed based on how various user communities run actual applications. To that end, SPEC regularly conducts Search Programs that encourage those outside of SPEC to contribute applications, workloads, or models that help us build more comprehensive and more applicable benchmarks that better serve their communities.

From performance rating to sustainability: SPEC tackles a key global challenge

As sustainability has become an increasingly important global issue, the SPECpower benchmark has played a critical role in enabling and encouraging vendors to improve the energy efficiency of their products. Over the last few years, the growing focus on sustainability has also led to an important new direction for SPEC.

SPEC Releases SPECapc for Solidworks 2022 Benchmark

The SPECapc for Solidworks 2022 benchmark enables workstation vendors to measure the performance of their systems designed to run Solidworks 2022, while enabling users to make more informed decisions when purchasing and configuring their workstations. The updated benchmark was designed in collaboration with Dassault Systèmes and reflects our joint commitment to ensuring the benchmark keeps pace with the evolving needs of the industry and users.

New SPECviewperf 2020 v3.1 Benchmark for Measuring Graphics Performance

Graphics requirements continue to evolve rapidly, and the SPEC® Graphics Performance Committee has responded with the release of the SPECviewperf 2020 v3.1 benchmark, a key update to the worldwide standard for measuring graphics performance. We’ve added support for Microsoft Windows 11, which is now being installed by OEMs on most new PCs. The new benchmark also offers enhanced GUI support for 4K, making it clearer when 3840x2160 resolution has been selected and indicating in results files that a benchmark was run at 4K resolution.

SPEC Machine Learning Committee to Develop Vendor-Agnostic Benchmark to Measure End-to-End Performance for Machine Learning Training and Inference Tasks

The SPEC Open Systems Group (OSG) reached an important milestone this year with the establishment of the Machine Learning Committee, which is developing practical methodologies for benchmarking artificial intelligence (AI) and machine learning (ML) performance in the context of real-world platforms and environments.