SPEC Cloud® IaaS 2016
The SPEC Cloud® IaaS 2016 benchmark is SPEC's first benchmark suite to measure cloud performance. The benchmark suite is targeted at cloud providers, cloud consumers, hardware vendors, virtualization software vendors, application software vendors, and academic researchers. It addresses the performance of infrastructure-as-a-service (IaaS) cloud platforms, which can be either public or private.
Benchmark Retirement
The SPEC Cloud® IaaS 2016 benchmark has been retired in favor of its successor, SPEC Cloud IaaS 2018. As of March 6, 2019, the product is no longer actively supported by SPEC.
Benchmark results for this suite may not be published unless they have been reviewed by SPEC. After the March 6, 2019 review cycle, SPEC will no longer review results; therefore, no claimed rule-compliant results may be published after March 20, 2019, the close of the final review cycle. See the SPEC fair use rules for more information.
The current version of the benchmark is version 1.1, released on December 20, 2016. This update includes bug fixes and usability improvements, libcloud support for easy adapter development, and OpenStack driver updates.
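To give a sense of what libcloud-based adapter development looks like, here is a minimal sketch that provisions a single instance through Apache Libcloud's OpenStack driver. The credentials, endpoint, and flavor/image choices are placeholders for illustration, not part of the benchmark kit.

```python
# Minimal sketch: provision one instance via Apache Libcloud's OpenStack
# driver. All credentials and endpoints below are placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

OpenStack = get_driver(Provider.OPENSTACK)
conn = OpenStack(
    'demo-user', 'demo-password',                      # placeholder credentials
    ex_force_auth_url='http://keystone.example.com:5000',
    ex_force_auth_version='3.x_password',
    ex_tenant_name='demo',
)

# Pick the first available flavor and image purely for illustration.
size = conn.list_sizes()[0]
image = conn.list_images()[0]
node = conn.create_node(name='spec-cloud-node', size=size, image=image)
print(node.id, node.state)
```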
The benchmark is designed to stress the provisioning as well as the runtime aspects of a cloud using I/O- and CPU-intensive cloud computing workloads. SPEC selected a social media NoSQL database transaction workload and K-Means clustering using map/reduce as two significant and representative workload types within cloud computing.
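To make the second workload type concrete, the following is a minimal, self-contained sketch of one K-Means iteration expressed in map/reduce style. It is illustrative only and is not the benchmark's actual implementation; the point set and seed centroids are invented for the example.

```python
import math
from collections import defaultdict

def assign(point, centroids):
    """Map step: emit (index of nearest centroid, point)."""
    best = min(range(len(centroids)),
               key=lambda i: math.dist(point, centroids[i]))
    return best, point

def kmeans_iteration(points, centroids):
    """One map/reduce round: assign points, then average each cluster."""
    groups = defaultdict(list)
    for p in points:                       # map phase
        idx, _ = assign(p, centroids)
        groups[idx].append(p)
    new_centroids = list(centroids)
    for idx, members in groups.items():    # reduce phase: mean per cluster
        dims = zip(*members)
        new_centroids[idx] = tuple(sum(d) / len(members) for d in dims)
    return new_centroids

# Tiny usage example with 2-D points and two seed centroids.
pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.1), (4.9, 5.0)]
print(kmeans_iteration(pts, [(0.0, 0.0), (5.0, 5.0)]))
```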
Each workload runs across multiple instances that together are referred to as an application instance. The benchmark instantiates multiple application instances during a run. The application instances and the load they generate stress the provisioning as well as the run-time aspects of a cloud. The run-time aspects include CPU, memory, disk I/O, and network I/O of these instances running in a cloud. The benchmark runs the workloads until quality of service (QoS) conditions are reached. The tester can also limit the maximum number of application instances that are instantiated during a run.
The key benchmark metrics are:
- Scalability measures the total amount of work performed by application instances running in a cloud. The aggregate work performed by one or more application instances should scale linearly in an ideal cloud. Scalability is reported for the number of compliant application instances (AIs) completed and is an aggregate of workload metrics for those AIs normalized against a set of reference metrics.
- Elasticity measures whether the work performed by application instances scales linearly in a cloud when compared to the performance of application instances during the baseline phase. Elasticity is expressed as a percentage.
- Mean Instance Provisioning Time measures the time interval between the instance provisioning request and connectivity to port 22 on the instance. This metric is an average across all instances in valid application instances; a measurement sketch follows this list.
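As an illustration of the provisioning-time measurement described in the last item, the following sketch polls TCP port 22 on an instance and reports the elapsed time. It is a simplified stand-in for the harness's own check; the host addresses and timeout are assumptions, and in the real metric the clock starts at the provisioning request rather than at the first poll.

```python
import socket
import statistics
import time

def time_until_ssh(host, port=22, timeout=600.0):
    """Poll until `port` on `host` accepts a TCP connection.

    Returns seconds elapsed from the first poll. The benchmark's metric
    starts timing at the provisioning request, so a harness would pass in
    that earlier timestamp instead.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        try:
            with socket.create_connection((host, port), timeout=5):
                return time.monotonic() - start
        except OSError:
            time.sleep(1)                  # instance not reachable yet
    raise TimeoutError(f"{host}:{port} unreachable after {timeout}s")

# Hypothetical usage: average across instances of valid application instances.
hosts = ['10.0.0.11', '10.0.0.12']         # placeholder instance addresses
times = [time_until_ssh(h) for h in hosts]
print('Mean Instance Provisioning Time:', statistics.mean(times), 's')
```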
For more detail on the SPEC Cloud® IaaS 2016 benchmark, please review the benchmark documentation listed below.
Results
- Submitted Results
- Includes all results submitted to SPEC by SPEC member companies and other licensees of the benchmark.