
Database Benchmarking Made Easy

By Bert Scalzo | Edited by Jim Czuprynski

Database benchmarking is a very useful skill for any Oracle DBA who is asked to provide opinions or research-based insights on complex technical questions about the business’ current or planned information technology strategy. In fact, with all of today’s technology changes and corresponding business initiatives, most senior DBAs and data architects should expect to be asked about the potential database performance impacts of such strategic proposals. Here are just three example scenarios where database benchmarking might provide useful insights:

Virtualization

IT organizations often debate the effectiveness of transitioning their databases from a “bare metal” environment to a virtualized environment—for example, by using Oracle VM (OVM) or VMware. But that transition raises quite a few performance questions:

  • What rough performance impact can we expect from moving our production databases from dedicated physical hosts to virtual machines?
  • What rough performance impact can we expect from upgrading our production virtualization hypervisor to a new release or another vendor?
  • What rough performance impact can we expect from upgrading our production virtualization server to newer processors and/or more memory?
  • How many virtual machines running production databases can share a single physical virtualization host?

Consolidation

Today’s IT shops constantly feel the “pinch of the pocketbook” as their organizations continue to demand reductions in CAPEX and increased utilization of expensive hardware and software licenses. But consolidation for its own sake raises some crucial questions that must be answered as well:

  • What rough performance impact can we expect from consolidating our production databases from dedicated physical hosts or virtual machines onto a single multi-instance, high-powered SMP server?
  • What rough performance impact can we expect from consolidating our production databases from dedicated physical hosts or virtual machines onto a single multi-instance Exadata database machine?
  • What rough performance impact can we expect from consolidating our production databases from dedicated physical hosts or virtual machines onto a 12c multitenant (i.e., pluggable database) configuration, whether on a high-powered SMP server or an Exadata database machine?

Cloud

Whether transitioning to a private cloud environment—for example, via Oracle’s Cloud at Customer offering—taking the plunge into a completely off-premises environment such as Oracle Public Cloud’s Database as a Service (DBaaS) offering, or blending both via a hybrid cloud strategy, there are similar pressing performance questions that must be asked and answered:

  • What rough performance impact can we expect from moving our on-premises production databases from dedicated physical hosts or virtual machines to a cloud database platform?
  • What cloud instance type and size will we need to deploy in order to achieve roughly the same performance and scalability as we have today?
  • Finally – and most often overlooked – will there be any network latency issues?

Deciding What to Measure and Why

There are many more such questions where database benchmarking can help to formulate an opinion about large or radical technology changes. Some might correctly argue that workload capture and replay is a far superior “what if” mechanism for an accurate comparison, and I cannot effectively argue against that position. But sometimes the cost and availability of workload capture and replay make the better solution unrealistic. Thus, database benchmarking can quite often suffice for rough analysis and comparison. Stated differently, a tool that’s inexpensive and readily available can often be pressed into service to obtain good-enough results. Database benchmarking can often be that choice.

The first step of any successful database benchmarking effort should be to decide exactly what you want to measure. Are you trying to capture and compare raw performance or real-world performance? While this may sound like a trivial distinction, in truth it defines the key metric for success.

For example, imagine that you’re driving an automobile and then spot a police car with radar up ahead. Which gauges on the dashboard do you instinctively and immediately check?

  • The gas gauge, because it shows the current amount of fuel remaining?
  • The odometer, which shows the grand total distance traveled by the vehicle over its lifetime?
  • The tripometer, which shows your vehicle’s current trip distance?
  • The tachometer, because it displays the engine workload in revolutions per minute (RPM)?
  • The speedometer, which measures and displays the current vehicle speed?

In this situation, the speedometer is the obvious choice. But what if, instead, you’re trying to figure out whether your vehicle’s transmission is slipping, and thus forcing the engine to work harder at any given speed? The tachometer is now the correct choice.

Similarly, what if you need to assess how efficiently the vehicle’s engine is performing overall? You’d now examine both the gas gauge and the odometer or tripometer reading since the last fill-up to calculate the miles per gallon. The same pattern holds for database benchmarking – what you should measure depends entirely on what question you are trying to answer.

Benchmarks and Benchmarking Tools

Occasionally, DBAs want to test real-world performance, which commonly means measuring and comparing some meaningful database-level metric such as transactions per second (TPS). Familiar industry-standard database benchmarks such as TPC-C, TPC-E, TPC-H, and TPC-DS, created by the Transaction Processing Performance Council (TPC), are designed to approximate such “real-world performance” workloads and their corresponding performance metrics. Three very popular database benchmarking tools are HammerDB, Swingbench and Benchmark Factory. The first two are free, whereas Benchmark Factory is commercial software that also offers a freeware version limited to no more than 100 concurrent database sessions, which is often sufficient.
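For context, TPS itself is easy to sample directly from the database without any benchmark harness at all. The following is only a rough sketch, assuming SYSDBA access on the test host and a placeholder 60-second sampling interval; it reads Oracle’s own transaction counters from V$SYSSTAT twice and reports the delta:

#!/bin/bash
# Rough sketch: sample the database's transaction counters twice and report
# the delta as transactions per second.  The connection details and the
# interval are placeholders -- adjust for your environment.
INTERVAL=60

txn_count () {
  sqlplus -s "/ as sysdba" <<'EOF'
set heading off feedback off pagesize 0
select sum(value) from v$sysstat
 where name in ('user commits', 'user rollbacks');
EOF
}

T1=$(txn_count)
sleep $INTERVAL
T2=$(txn_count)

echo "Approximate TPS over ${INTERVAL}s: $(( (T2 - T1) / INTERVAL ))"

Even this simple gauge illustrates the point that follows: the number you get depends heavily on what the connected sessions were actually doing during the sampling window.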

But just how accurate and useful is a metric like TPS? These industry-standard benchmarks have mandatory built-in delays, such as keying and think times, which can significantly affect how the results should be interpreted; with those delays in place, each simulated user spends most of its time waiting rather than submitting work, so the reported TPS depends as much on the number of users as on the database’s speed. Moreover, database benchmarking tools sometimes deviate from the benchmark specification regarding these delays, and as such, results can vary from tool to tool (or even from one run to another with the same tool). So, much like our prior automobile example, maybe we’re simply looking at the wrong gauge.

More often than not, DBAs really want to test raw performance, which typically means measuring and comparing some extremely low-level metric such as I/Os per second (IOPS); a quick way to sample this at the operating-system level is sketched just after the list below. There are several reasons for this. First, IOPS is simple to understand and does not include any skewing factors like industry-standard benchmark delays. Second, storage vendors often express performance and throughput in these terms. Thus, assuming you want to measure “raw performance” metrics like IOPS, you’ll want to use a database benchmarking tool whose design and purpose match that goal. One of the most popular lower-level database performance testing tools is the Silly Little Oracle Benchmark (SLOB). Many people swear by this tool, and several have written extensions or wrappers to improve upon it. But, in my opinion, SLOB has notable limitations:

  1. The logic is hard-coded in a C program, and it also depends on a C program for semaphore management. I prefer to keep all the code in shell script, SQL and PL/SQL, which are far easier for anyone to change as needed.
  2. Setup and execution require a multi-step process. I prefer a single, simple command interface.
  3. SLOB’s core I/O design has “reads” perform lightweight block accesses and “writes” bypass index overhead. I prefer more natural database I/O patterns that are more likely to match real-world database usage.
  4. The resulting SLOB load profiles are textual and cover only a single run (hence some of the wrappers and extensions).
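As a quick point of reference for the raw metric discussed above, device-level IOPS can be sampled directly at the operating-system level with sar (from the sysstat package); no database-specific tooling is required:

# Report device I/O activity every 5 seconds, 12 times (one minute total).
# The "tps" column here is I/O transfer requests per second -- i.e., IOPS.
sar -d 5 12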

I prefer a simple tool that does iterative testing and produces one easy-to-read chart. Thus I wrote and have made freely available my own tool called DBBENCHMARK. It’s basically a SLOB-like tool that fully addresses those stated limitations.
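To make the “iterative testing” idea concrete, here is a minimal sketch of the general pattern such a tool follows. It is not the actual DBBENCHMARK source; workload.sql and the scott/tiger login are placeholders for whatever SQL workload and test schema you choose. For each concurrent-session count, it captures CPU, device I/O and run-queue statistics with sar while that many workload sessions run:

#!/bin/bash
# Minimal sketch of a SLOB-like iterative test loop (not the real DBBENCHMARK).
# workload.sql is a placeholder script that loops over your chosen SQL for the
# number of seconds passed as its first argument.
START_SESSIONS=1
END_SESSIONS=10
RUN_SECONDS=300

for N in $(seq $START_SESSIONS $END_SESSIONS); do
  # Capture CPU, device I/O and run-queue data for this iteration
  sar -u -d -q 5 $(( RUN_SECONDS / 5 )) > sar_${N}.txt &
  SAR_PID=$!

  # Launch N concurrent database sessions running the placeholder workload
  for S in $(seq 1 $N); do
    sqlplus -s scott/tiger @workload.sql $RUN_SECONDS &
  done

  wait $SAR_PID   # let the sar capture window finish
  wait            # and let every workload session complete
done

Each pass leaves behind one sar capture per session count, which is exactly the raw material the rollup and charting step described below needs.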

DBBENCHMARK: A New Benchmarking Tool

Running DBBENCHMARK is very easy – it’s a single call to one shell script, with the user specifying just a few fairly self-explanatory parameters, as shown here:

Code example one
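Purely as an illustration of how simple such a call is, an invocation supplying the parameters discussed with the results below (database size, starting and ending concurrent-session counts, and a run time limit) might look roughly like this; the option names are hypothetical, not necessarily DBBENCHMARK’s actual interface:

# Hypothetical invocation only -- the real parameter names may differ.
./dbbenchmark.sh -s 10g -b 1 -e 10 -t 30
#   -s  database (working set) size
#   -b  beginning (starting) number of concurrent sessions
#   -e  ending number of concurrent sessions
#   -t  per-iteration run time limit, in minutes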

Here’s an example of the command actually executing and its output while running:

Code example two

DBBENCHMARK will create both a directory and a single zip file of that directory’s contents, named in the format DBBENCHMARK_YYYY.MM.DD_HH.MM.SS (e.g., DBBENCHMARK_2016.06.26_21.54.50.zip). There will be three types of files in that directory: SAR files for each iteration, rollups of the key SAR data, and four gnuplot JPEG files for the key rollup metrics shown here. This provides one simple picture from which to interpret the results.
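To give a feel for what that rollup-and-charting step involves, here is a rough sketch rather than the actual DBBENCHMARK code. It assumes per-iteration text captures named sar_<N>.txt (as in the earlier loop sketch) and a gnuplot build with JPEG support, and it averages each iteration’s device transfer rate into one data file before plotting it:

#!/bin/bash
# Roll up the average device I/O rate from each iteration's sar capture into
# a single data file, then chart IOPS versus session count with gnuplot.
# Field positions in sar -d output vary by sysstat version and locale --
# adjust $3 below so it points at the "tps" column.
> iops_rollup.dat
for F in sar_*.txt; do
  N=${F//[^0-9]/}      # recover the session count from the file name
  awk -v n="$N" '/dev[0-9]/ && !/Average/ { sum += $3; cnt++ }
       END { if (cnt) print n, sum / cnt }' "$F" >> iops_rollup.dat
done
sort -n -o iops_rollup.dat iops_rollup.dat

gnuplot <<'EOF'
set terminal jpeg
set output 'iops_vs_sessions.jpg'
set xlabel 'Concurrent sessions'
set ylabel 'Average device IOPS'
plot 'iops_rollup.dat' using 1:2 with linespoints title 'IOPS'
EOF

DBBENCHMARK produces the equivalent charts for you; the sketch simply shows why per-iteration sar captures are all the raw data that step requires.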


So what do these graphed results demonstrate? DBBENCHMARK indicates that three concurrent sessions is the clear sweet spot for this 4-CPU, 16GB-RAM VM running on my quad-core notebook with an SSD. Both the IOPS and SAR I/O graphs peak at three sessions, while the CPU utilization and run queue lengths remain acceptable. Note also that the higher-level TPS metric does not fully correlate with these results, for all the reasons explained above. And while this particular test was run on a low-end machine, DBBENCHMARK works just as simply in all cases; for a larger machine, you would simply specify different parameters for database size, number of starting/ending concurrent sessions and run time limit. The pattern for finding the optimal performance point would remain exactly the same.

Conclusion

As I’ve hopefully explained and demonstrated, no single database performance benchmarking tool is necessarily the perfect choice for every situation. Whether it’s virtualization, consolidation, or cloud technology that is driving your need to benchmark, the most important thing to understand is exactly what questions you are trying to answer and why those answers are important for your organization’s unique needs. Those considerations will drive which benchmarking tool is most appropriate for your organization to employ.