
Not cigar related - An Oracle database question

jgohlke

My other hobby
Joined
Sep 15, 2004
Messages
990
I know it's kind of a long shot posting here, but hey, you guys know everything, right? I mean, as a community, we're a cigar-smoking version of Google!

I've been trying to find this out for a couple of days now and haven't been able to figure it out. It is very Oracle-database-specific. (you've been warned!)

We have a long-running query in the database, and at the same time another process is flooding the schema with inserts. At some point the query's performance degrades badly. We have the tables set to self-monitor, but according to the Oracle documentation the monitoring sweep only runs about every 3 hours. We can run the analyze (we use the DBMS_STATS package for this), but it doesn't seem to help the running query unless we stop and restart the job. I suspect the query's execution plan isn't being refreshed from the new statistics; restarting the job, of course, forces a new execution plan. We could also flush the shared pool, but that seems kind of drastic.

I'd like to be able to leave the query job running, re-gen the table stats and then force the query to re-plan.
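For reference, our stats refresh against the hot tables looks roughly like this (the schema and table names below are placeholders, not our real ones):

```sql
-- Roughly what our stats job runs. CASCADE => TRUE also refreshes
-- the statistics on the table's indexes.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP_SCHEMA',   -- placeholder schema
    tabname => 'HOT_TABLE',    -- placeholder table
    cascade => TRUE);
END;
/
```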

 
Hi

Not to be mean or anything...but from what I have seen on this forum, I'm not sure you will get a response anytime soon :D In my previous job I worked with a web application that used an Oracle/SQL Server database. Our problem was normally not the database side of the application but the web-server side of things, so we set up a nightly recycling/refresh job to make sure the application was 100% up to speed when users came to work in the morning.
 
Appreciate the response.

After spending a ton of time on the Tom Kyte website (asktom), I finally found the answer:

"automatic dependency tracking"

Just for completeness, and to make it easier for anybody that finds this via a search engine, here's a quick brain dump of what I've learned about this.....

ADT is a mechanism in Oracle 10g (9i too) that keeps track of the parsed execution plans in the shared pool. There are several triggers for re-parsing the SQL and creating a new execution plan, including altering the table, changing the indexes, and analyzing the table. Once new table statistics are generated (whether via the analyze command or the newer, faster, more thorough DBMS_STATS job), the SQL is automatically flagged to be re-parsed. Tom Kyte states this happens on the "next execution", but in practice it seems to take a few minutes before we notice the performance improvement.
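One plausible explanation for that few-minute lag (I haven't verified this on our system yet) is the NO_INVALIDATE parameter on the gather calls: in 10g it defaults to DBMS_STATS.AUTO_INVALIDATE, which staggers cursor invalidations over a window of time rather than doing them all at once. Passing FALSE asks for immediate invalidation. Schema and table names below are placeholders:

```sql
-- Untested on our system, but per the docs NO_INVALIDATE controls when
-- dependent cursors are invalidated after new stats land. The 10g
-- default (AUTO_INVALIDATE) spreads invalidations out over a window;
-- FALSE forces them right away.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname       => 'APP_SCHEMA',   -- placeholder
    tabname       => 'HOT_TABLE',    -- placeholder
    cascade       => TRUE,
    no_invalidate => FALSE);
END;
/
```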

Additionally, DBMS_STATS supports providing table statistics as well as generating them, so it is possible to construct table statistics and have the table "grow into" them.

The common practice of stopping/starting the database or flushing the shared pool is not recommended. In fact, it is likely to degrade performance rather than enhance it. It does have the desired effect of forcing a re-parse of the target SQL, but only because it forces a re-parse of all the SQL... it's kind of like standing outside with a cigar, asking for a light, and having an F-4 drop a 500 lb napalm bomb on you. It lights your cigar, but it's a little overkill.
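For the record, the sledgehammer in question is a one-liner, which is probably why it gets reached for so often:

```sql
-- Don't do this on a busy production system: it throws away every
-- cached cursor in the shared pool, not just the one slow query,
-- so everything has to be hard-parsed again.
ALTER SYSTEM FLUSH SHARED_POOL;
```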
 
We have a long-running query in the database and at the same time another process is flooding the schema with data inserts. At some point the query performance starts to be really slow. We have the tables set to self-monitor, but the process only runs about every 3 hours according to the Oracle documentation.

I'd think that it would be a matter of concurrency - flooding the schema with inserts is probably modifying the underlying data between instantiation and completion. Oracle doesn't hold a lock through a query but if you're constantly modifying the underlying tables, you're going to force it to reparse the query (and probably multiple times if it appears to be slowing right down).

When I was at IBM, YEARS ago, we had specific overnight time windows for bulk data update and start-of-day reporting. Might be worth looking at.
 
For jollies try going to a database forum and asking a cigar question. :laugh:
 
I used to work at Oracle writing code for database internals. I was an Oracle DBA for a number of years as well.

In versions of Oracle I'm familiar with (up through Oracle 8) statistics generated by ANALYZE would be used by the database for some tuning purposes, but would NOT result in the automatic generation of indexes, primary keys, or anything like that.

Basically, unless things have changed drastically, you need to look carefully at the statistics, see which queries are taking the longest to complete, figure out which tables and columns are involved, and see if there are obvious gross inefficiencies.

The typical error is to not build indexes on the right columns (leads to slow reads) or to build indexes that are never used (leads to slow writes).

Another typical error is to have nice indexes on some columns, but because of dumb SQL query writing, you end up not using them.
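A classic example of that second error: wrapping an indexed column in a function usually stops Oracle from using the index. The table, column, and index here are made up for illustration:

```sql
-- Assume an ordinary B-tree index exists on EMPLOYEES(LAST_NAME).

-- The function call hides the indexed column from the optimizer,
-- typically forcing a full table scan:
SELECT * FROM employees WHERE UPPER(last_name) = 'SMITH';

-- Compare the indexed column directly and the index becomes usable:
SELECT * FROM employees WHERE last_name = 'Smith';
```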

A good rule of thumb is that pretty much any table should have a primary key, generally a single column, and that primary key should be referenced in queries in almost every case where appropriate.
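The rule of thumb in sketch form (table and column names invented):

```sql
-- Single-column primary key; Oracle backs it with a unique index
-- automatically, so key lookups stay fast.
CREATE TABLE orders (
  order_id   NUMBER PRIMARY KEY,
  customer   VARCHAR2(100),
  created_at DATE);

-- Queries then reference the key wherever appropriate:
SELECT customer FROM orders WHERE order_id = 42;
```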

Hope this helps.
 
Oh, and rereading your posts, I think there may be some other simple things to try.

Are you using snapshots? If it were me, and I was running a query against the database that takes a while to complete, and meanwhile there is data coming in, I'd want to do a snapshot and reference it in the query.

This will improve performance massively, or at least it should, as otherwise you aren't being clear to Oracle what it is you want... this method tells Oracle explicitly that you are only interested in records at the time the query began.

Alternatively, if you want the query to return all results, you can take a snapshot, run the query against it and put the results in a temporary table, then rerun the query against the live data and merge the two result sets. Again, this is a tactic I've used successfully when I need relatively accurate results on a big query that includes up-to-date data.
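In Oracle 8 terms a "snapshot" is what later releases call a materialized view; a minimal sketch of the idea, with all object names invented:

```sql
-- The long-running report queries the frozen copy while the insert
-- flood keeps hitting the base table.
CREATE MATERIALIZED VIEW orders_snap
  REFRESH COMPLETE ON DEMAND
  AS SELECT * FROM orders;

-- Refresh it just before the big query runs ('C' = complete refresh):
EXEC DBMS_MVIEW.REFRESH('ORDERS_SNAP', 'C');
```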

Finally, again, keep looking at the simple stuff. Do you have enough memory that all indexes and primary keys are held in memory at all times?
 
I knew somebody might have some ideas...thanks for taking the time to reply.

The process runs really well (fast) when the tables have data in them and the analyze is fresh. We have done lots of tuning on the processes, schema and SQL. It is screaming fast when it is running well.

On the occasions when we start clean (empty tables), performance is terrible for a while until the tables get some data in them and the automatic analyze catches up. This "catch-up" sometimes takes hours. We can force it to be good right away by stopping the query process, analyzing the tables and re-starting the query process.

From what I have read, the "re-analyze" should have immediate effect, even against a running process. I am not finding this to be true in my testing. We are working out some test scenarios to generate some very specific numbers....

We are having a very specific problem, not the usual "my query is slow" issue....
 
Let me go old school here and mention something fairly obvious.

The Oracle database isn't doing anything magic with the execution plan and analysis of the queries.

It is possible to hand optimize the queries by providing hints to Oracle in the SQL.

It's also possible to force specific execution plans by building indexes in advance and writing SQL queries in a way to take advantage of these indexes.
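A quick illustration of the hint approach, with hypothetical table and index names:

```sql
-- The INDEX hint asks the optimizer to use a specific index (here an
-- imagined index on ORDERS(CREATED_AT)) instead of whatever plan it
-- would otherwise choose. Treat this as a sketch, not production SQL.
SELECT /*+ INDEX(o orders_created_ix) */ *
  FROM orders o
 WHERE o.created_at > SYSDATE - 1;
```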

Basically what you need to figure out is what the system is doing when it is running well, and force that same thing right from startup by designing the database tables and queries to get Oracle to use the same execution plan right off the bat.

I haven't worked with a recent version of Oracle, so I can't speak to the details, but 99.99% of the time this kind of stuff is due to poor design. Oracle is compensating by doing some intelligent tweaking on its own, but ultimately, that's all stuff that you can tweak yourself by hand, up front.
 
Bob, puh-LEAZE come consult for my employer! Oracle is seldom spoken aloud here without a string of bad words.
 
LOL, thanks for the props. Hope my advice is helpful.

(In a former life, I did consulting work building web applications on top of Oracle... $175 per hour, adds up pretty quick...)
 
You were CHEAP! Shop around today and see what the new crop is charging.
 
Yeah, but that was 14 years ago. Pretty much was the going rate then. I've been told by friends that if I resume consulting work I should charge at least $250 per hour and that's based on jobs that run at least a few days worth of time, not an hour here an hour there...
 
The system is highly tuned, and we feel pretty comfortable it is not a "bad design". The same group of folks has been working on the system for several years; they have lots of experience, and the system is smoking fast when it's tuned. We support hundreds of users, some very large batch jobs, databases in the multi-terabyte range and two dozen installations around the world. We use table partitioning, run the databases in archive log mode, the applications use bind variables, etc. We know what we are doing, and this isn't the usual "full table scan" problem. We've been tuning the same system for years. It's a screamer.

This specific problem only happens on the first day of a brand-new database, when we start from empty tables. Going from empty to several hundred thousand rows skews the table stats so badly that the execution plan is way out of date and the queries run slow. Eventually, Oracle fixes itself (usually by the next day things are back in sync). This problem is confined to the first 4-6 hours of the system's life.

We are pretty sure it is the specific problem of flooding the database with tons of data very quickly and then using out-of-date execution plans. Oracle's documentation provides some hints about how this process works, and while we use the DBMS_STATS table monitoring, Oracle doesn't actually run the sweep very often (once every 3 hours), so the database doesn't know the stats are stale. There doesn't appear to be any way to trigger it to run, either. It regenerates statistics for any table changed by more than 10%, but it only looks for those tables every 3 hours.
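For completeness, the pieces involved: monitoring is enabled per table, and the stale-stats sweep can at least be kicked off by hand instead of waiting for the background job. Schema and table names are placeholders:

```sql
-- Flag the table so Oracle tracks DML volume against it:
ALTER TABLE hot_table MONITORING;

-- Manually regather stats, but only for tables whose tracked changes
-- exceed the ~10% staleness threshold:
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname => 'APP_SCHEMA',      -- placeholder
    options => 'GATHER STALE');
END;
/
```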

The documentation also states that a re-analyze should trigger a re-parse, but that doesn't appear to be happening, at least not right away.

We know the query that runs slow; it runs on one of the main tables that experiences the huge initial growth. On the first day this query is all CPU; by the second day it is running fine. Our systems sometimes run for months or even years after startup (growing to 1 TB+ in size), and we only have this problem on the first day. Once the system is up and running, it's fine.

We are currently investigating using generated table statistics to construct the table so that Oracle thinks the table has data in it right from the beginning. There are several discussions about this tactic on the Tom Kyte web page. This appears to be the strategy to fix this problem.
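The tactic looks roughly like this; we haven't settled on real numbers yet, so every value (and name) below is illustrative only:

```sql
-- Seed the empty table with plausible "day two" statistics so the very
-- first execution plans are built as if the table were already populated.
BEGIN
  DBMS_STATS.SET_TABLE_STATS(
    ownname => 'APP_SCHEMA',   -- placeholder schema
    tabname => 'HOT_TABLE',    -- placeholder table
    numrows => 500000,         -- invented row count
    numblks => 10000,          -- invented block count
    avgrlen => 120);           -- invented average row length (bytes)
END;
/
```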

Thanks, it's fun to talk Oracle with a fellow cigar smoker!
 