The original version of this TODS paper appeared as "Benchmarking Simple Database Operations," by Rubenstein, Kubicar, and Cattell, at the SIGMOD conference in 1987. The long time lag between conference and journal publication isn't surprising if you think of the original version, as I do, as the paper that launched a thousand arguments; but then, isn't that true of any paper that proposes a new benchmark? The original paper proposed three simple operations that the authors thought would be typical of the applications that would want to use object-oriented databases, operations that involved traversing hierarchies of information about parts. The paper also gave results from running the benchmark on several different databases. The subsequent firestorm touched on many aspects of the OO1 benchmark, but perhaps the biggest theme in the complaints was that, in the details of how it measured performance, OO1 unfairly favored object-oriented databases over relational databases. By the time the paper made it to TODS, some of the biggest complaints had been resolved, and it had rather more of a benchmark-by-committee feel about it. Follow-on work by others, such as the OO7 benchmark, targeted other perceived faults in OO1.
I like to use the OO1 benchmark when I teach about benchmarking. It is fun to read the introductory chapters of Jim Gray's book on benchmarking, where you learn the four characteristics of a good benchmark, and then to evaluate the original OO1 benchmark against those characteristics. (OO1 is also described in that book.) Comparing the original SIGMOD proposal with what actually appeared in TODS is a fun way to contrast your own evaluation of the original OO1 with that of experts. It is also fun (and quick and easy) to implement OO1 and hold a contest to see whose code runs OO1 fastest, and who comes closest to cheating on the benchmark without actually cheating.
Copyright © 2001 by the author(s). Review published with permission.