In the first instance, our PL/SQL code will not use bind variables, but rather will use string concatenation to insert data:
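A minimal sketch of the idea follows, assuming a simple test table T with a single numeric column X (the table name, column name, and row count of 10,000 are illustrative assumptions, not the book's exact code). Because each literal value is concatenated into the SQL text, every INSERT is a brand-new statement that the database must hard parse:

create table t ( x int );

begin
    for i in 1 .. 10000
    loop
        -- the value of i becomes part of the SQL text, so each
        -- statement is unique and forces a hard parse
        execute immediate
            'insert into t ( x ) values ( ' || i || ' )';
    end loop;
    commit;
end;
/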

To automate this, the prior code was placed in a text file named nb.sql. Then the next bit of SQL automatically generates a shell script to call the prior PL/SQL as well as to take Statspack snapshots around the run:
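A hypothetical sketch of that generator, using SQL*Plus spooling, is shown below; the connect strings, the &1 substitution variable (the number of concurrent sessions to launch), and the file name run_test.sh are illustrative assumptions rather than the actual script from the book:

set echo off verify off feedback off heading off pagesize 0 linesize 200 trimspool on
spool run_test.sh
-- take a Statspack snapshot before the run
select 'sqlplus -s perfstat/perfstat <<EOF' || chr(10) ||
       'exec statspack.snap'                || chr(10) ||
       'EOF'
  from dual;
-- launch &1 concurrent copies of nb.sql in the background
-- (chr(38) is the shell's "&"; keeping it out of the literal stops
--  SQL*Plus from treating it as a substitution variable)
select 'sqlplus -s scott/tiger @nb.sql ' || chr(38)
  from dual
connect by level <= &1;
-- wait for the background sessions to finish, then snapshot again
select 'wait'
  from dual;
select 'sqlplus -s perfstat/perfstat <<EOF' || chr(10) ||
       'exec statspack.snap'                || chr(10) ||
       'EOF'
  from dual;
spool off

The report covering the interval between the two snapshots can then be produced with @?/rdbms/admin/spreport.sql.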

The generated shell script contains the following (the exact contents vary depending on the number of variables you pass to the prior code):

You can download the script that encapsulates the logic and code for this example from the GitHub website for this book (look in the ch06 folder, in the nobinds subfolder). The script with this code is named nobind.sql and is executed as follows:

For the first test, I ran the code in single-user mode with just one session (i.e., by itself, with no other active database sessions). The Statspack report came back with this information:

I included the SGA configuration for reference, but the relevant statistics are as follows:

•	Elapsed (DB time) time of approximately 10 seconds (0.16 of a minute)
•	1796.9 hard parses per second
•	9 CPU seconds used
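The hard parse rate is the key figure in that list. If you want to watch that counter directly for your own session while running such a test, a standard query against V$STATNAME and V$MYSTAT will show it (this is not part of the original test harness, just a way to observe the statistic):

select a.name, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name = 'parse count (hard)';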

Now, if we were to run two of these programs simultaneously, we might expect the hard parsing to jump to about 3600 per second (we have multiple CPUs available, after all) and the CPU time to double to perhaps 18 CPU seconds. Let’s take a look:

What we discover is that the hard parsing goes up a little bit, but the CPU time more than doubles. How could that be? The answer lies in Oracle’s implementation of latching. On this multi-CPU machine, when we could not immediately get a latch, we spun. The act of spinning itself consumes CPU. Process 1 attempted many times to get the latch on the shared pool, only to discover that process 2 held that latch, so process 1 had to spin and wait for it (consuming CPU). The converse would be true for process 2; many times it would find that process 1 was holding the latch to the resource it needed. So, much of our processing time was spent not doing real work, but waiting for a resource to become available. If we page down through the Statspack report to the “Latch Sleep Breakdown” report, we discover the following:

Note how the number 17,789 appears in the SLEEPS column here; that number corresponds very closely to the number of WAITS reported in the preceding “Top 5 Timed Events” report.

Note The number of sleeps corresponds closely, but not exactly, to the number of waits; this might raise an eyebrow. Why aren’t they identical? The reason is that taking a snapshot is not atomic: a Statspack snapshot executes a series of queries that gather statistics into tables, and each query is as of a slightly different point in time. So, the wait event metrics were gathered slightly before the latching details were.

Our “Latch Sleep Breakdown” report shows us the number of times we tried to get a latch and failed in the spin loop. That means the Top 5 report is showing us only the tip of the iceberg with regard to latching issues: the 131,050 misses (which means we spun trying to get the latch) are not revealed in the Top 5 report. After examining the Top 5 report, we might not be inclined to think we have a hard parse problem, even though we have a very serious one. To perform two units of work, we needed to use more than two units of CPU. This was due entirely to the fact that we needed that shared resource, the shared pool. Such is the nature of latching.

You can see that it can be very hard to diagnose a latching-related issue unless you understand the mechanics of how latches are implemented. A quick glance at the Top 5 section of a Statspack report might cause us to miss the fact that we have a fairly bad scaling issue on our hands. Only by digging deeper, into the latching section of the Statspack report, will we see the problem at hand.

Additionally, it is not normally possible to determine how much of the CPU time used by the system is due to this spinning—all we know in looking at the two-user test is that we used 21 seconds of CPU time and that we missed getting a latch on the shared pool 131,050 times. We don’t know how many times we spun trying to get the latch each time we missed, so we have no real way of gauging how much of the CPU time was spent spinning and how much was spent processing. We need multiple data points to derive that information.
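For reference, the raw counters behind that report are visible in V$LATCH; the view records how many times we missed the latch, how many misses were resolved by spinning (SPIN_GETS), and how many ended in a sleep, but not how many iterations each spin cost:

select name, gets, misses, spin_gets, sleeps
  from v$latch
 where name = 'shared pool';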

In our tests, because we have the single-user example for comparison, we can conclude that about 3 CPU seconds were spent spinning on the latch, waiting for that resource. We can come to this conclusion because we know that a single user needs only 9 seconds of CPU time, so two single users would need 18 seconds, and 21 (total CPU seconds) minus 18 is 3.

