Oracle Motorsports

February 25, 2014 Nigel Pepper

TL;DR: Troubleshooting Oracle performance issues is non-trivial. Pay attention to Oracle’s memory management settings when experiencing local slowness, particularly if running against Oracle XE (Express Edition), where the memory thresholds are quite conservative by default.

In circuit racing, there’s an old mantra,

Slow in, Fast out.

The principle is pretty simple. The driver comes in fast towards a corner, completes braking before the apex, and gets on the gas nice and early to power out of the corner, carrying as much speed as possible onto the straight. So how does all this relate to Oracle? Well, as with all things, this process requires a little tuning from time to time, vehicle to vehicle. Your database may be performing well one day, then you make a change, and something that functioned quite well no longer does so.

The other day, we encountered this very situation on a project, and I’d like to take you through what happened, our findings, and how we resolved it.

The application I’m currently working on is a mathematically intensive product that provides financial insight on potential future sales. As the application is quite data-heavy, we decided fairly early on in development that it would be appropriate to leverage the database as a compute engine, and perform many of the summation and aggregation calculations on data at-source, thereby saving us from loading a bunch of stuff into memory, iterating over it, and performing the relevant calculations. This had been serving our needs very well.

We had a database view which was reasonably straightforward, performing a number of inner joins on indexed ID columns across 7 tables. Involved, but not rocket science. The view’s performance had always been on the order of 0.1 to 0.2 seconds.
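The real schema is proprietary, but the shape of the view was roughly this — the table and column names below are hypothetical, and fewer tables are shown than the real thing for brevity:

```sql
-- Hypothetical sketch of the view: inner joins on indexed ID
-- columns, with aggregation pushed down into the database.
CREATE OR REPLACE VIEW sales_projection_v AS
SELECT p.product_id,
       r.region_id,
       SUM(s.amount)       AS total_sales,
       AVG(f.forecast_pct) AS avg_forecast
FROM   products  p
JOIN   sales     s ON s.product_id = p.product_id
JOIN   regions   r ON r.region_id  = s.region_id
JOIN   forecasts f ON f.product_id = p.product_id
GROUP BY p.product_id, r.region_id;
```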


Running our RSpec suite one afternoon, we encountered an Oracle out-of-memory error:
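Oracle reports this class of failure as ORA-04031 (shared memory exhaustion); the message looks something like the following, though the byte count and heap names vary:

```
ORA-04031: unable to allocate 4096 bytes of shared memory
           ("shared pool","unknown object","sga heap(1,0)","KGLH0")
```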


We re-ran the test suite and noticed it was considerably slower. So slow, in fact, that a number of tests were simply timing out. Assuming a simple fix, we restarted the local Oracle instance, and then the virtual machine on which it was running. Same result: painfully slow tests, with random timeouts. More investigation required.

Having verified that no changes had been made to the database view’s DDL (schema), our attention turned to the data itself. We took a look at the explain plan generated for the SQL statement that creates our view. Explain plans aren’t the most user-friendly things to decode without prior experience, but it was fairly evident that something was awry. Our plan looked something like this:

           NESTED LOOPS

It took a whopping 11.5 seconds to return 240 rows.
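For anyone following along, a plan like this can be generated with Oracle’s standard tooling (the view name below is the hypothetical one from earlier):

```sql
EXPLAIN PLAN FOR
  SELECT * FROM sales_projection_v;

-- Display the most recent plan recorded in PLAN_TABLE
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```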

As a means of comparison, we executed the same query in our integration environment and saw the expected performance (around 0.2 seconds) for query execution. The explain plan it produced was quite different.


Clearly something was wrong. We started considering the differences between the two environments. Both were running Oracle 11g, but locally we were using the Express Edition (XE). Suspecting performance might be bound by available memory, we looked into how to configure Oracle’s memory allocation.


A couple of searches later, we determined that Oracle allocates a block of memory on startup referred to as the SGA (System Global Area). Its size can be obtained from a running instance by running:
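Assuming a SQL*Plus session with DBA privileges, either of these will show the SGA sizing:

```sql
-- The configured SGA target
SHOW PARAMETER sga_target;

-- Or, for a component-by-component breakdown:
SELECT * FROM v$sga;
```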


Oracle also allows configuration of memory on a per-process basis. This is achieved by setting the PGA (Program Global Area) target. Again, this value can be obtained by running:
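Again assuming a SQL*Plus session:

```sql
SHOW PARAMETER pga_aggregate_target;
```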


at the Oracle console. Using Oracle XE, this defaults to a reasonably conservative 200MB. We also discovered that Oracle XE only supports allocation of up to 1GB of RAM in total, so we took a look at how we could increase our measly 200MB allocation.


Oracle’s configuration and initialization files are stored in a binary format (the SPFILE), which wasn’t particularly helpful when attempting to troubleshoot our issue. After some digging, we discovered that the recommended way to interact with Oracle’s binary configuration is to generate a text version of the config file (a PFILE), test your changes, and then re-generate the binary version from your new text file.

You can locate the SPFILE currently in use by running:

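In SQL*Plus:

```sql
SHOW PARAMETER spfile;
```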
NAME    TYPE    VALUE
------- ------- -------------------------------------
spfile  string  /u01/oracle/admin/conf/spfileMYDB.ora

The first step was to generate a text version of the binary config file:

CREATE PFILE='/home/oracle/mypfile.ora' FROM SPFILE;

Opening up the newly generated pfile, we saw something like this:

XE.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
*.dispatchers='(PROTOCOL=TCP) (SERVICE=XEXDB)'
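The excerpt above is truncated; the memory-related entries we cared about looked roughly like this (the values below are illustrative, not our actual ones):

```
XE.__pga_aggregate_target=209715200
XE.__sga_target=629145600
*.memory_target=838860800
```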

Of particular interest were __pga_aggregate_target, __sga_target, and memory_target. As described above, the PGA and SGA targets specify the memory to allocate for Oracle processes, while memory_target governs the total that Oracle manages across the two.

We altered these parameters so they read:
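The precise values will depend on your machine, but raising the targets towards XE’s 1GB ceiling looks something like this (illustrative values; 1073741824 bytes = 1GB):

```
XE.__pga_aggregate_target=314572800
XE.__sga_target=734003200
*.memory_target=1073741824
```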


To test our changes, we restarted our local Oracle instance using the pfile we’d just created.

STARTUP PFILE=/home/oracle/mypfile.ora

The database reported the new values had been loaded correctly.
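Once happy with a change tested this way, the binary SPFILE can be regenerated from the edited PFILE so the settings survive normal restarts:

```sql
CREATE SPFILE FROM PFILE='/home/oracle/mypfile.ora';
```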

We restarted our instance and … it was still painfully slow. Frustration growing, we did some more digging. We suspected that either our changes had made no difference, or there was some other factor in the query plan. At this point we made some useful discoveries as to how the query optimizer works.

Paraphrasing Oracle’s documentation, the query optimizer in the DB engine gathers statistics about the tables within your schema: the data distribution and storage characteristics of tables, columns, indexes, and partitions. It then uses these stats to calculate the most efficient way to execute a given query. This is actually quite cool for large-scale deployments, insofar as these statistics can be exported and imported into another instance of the database, such as a test environment where you want to replicate a production environment’s performance characteristics. Further details are in Oracle’s query optimizer documentation.

With the suspicion that these stats may have been optimized to cope with the relatively low previous memory footprint, we looked into how to clear or regenerate these stats.

Regenerate for Joy

We regenerated the table statistics for each table involved in the view:

EXEC DBMS_STATS.gather_table_stats('MYSCHEMA', 'BIG_TABLE', estimate_percent => DBMS_STATS.auto_sample_size);

Happy days. Performance was restored, brows were swept, and the world was good.
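Rather than repeating that call for every table, Oracle can also regather statistics for a whole schema in one go:

```sql
EXEC DBMS_STATS.gather_schema_stats('MYSCHEMA', estimate_percent => DBMS_STATS.auto_sample_size);
```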
