Abstract
Historically, benchmark tests have been one of several ways to size a computer system and measure its performance. Today, it is more important to test the system software than the machine hardware; hence the term "watermark" (as on bond paper) for software tests. Watermarks of software suffer the same limitations and risks as benchmarks of hardware: e.g., they should be supplemented with simulations, models, and the other analysis and design tools of our trade. Perhaps most significantly, watermarks, like benchmarks, can be biased by their creators.