I have a report that runs overnight. Previously the report used hard-coded values in the select clause; now I want to read those values from a table of static data.
So far I have created a new table which the report uses. Before the report runs, a delete from, insert into, and commit happen against that table.
Here is the process:
@sql/regenerate_table.sql
spool data/report.csv
@sql/report.sql
spool off
quit
regenerate_table.sql looks like this:
delete from schema.static_table_used_in_report@dblink;

insert into schema.static_table_used_in_report@dblink
select * from schema.master_table@dblink;

commit;
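As a sketch of how much undo this refresh itself generates, the active transaction can be watched from another session on the remote database while the script runs (standard Oracle dynamic view; requires select privilege on v$transaction):

```sql
-- Run while regenerate_table.sql is executing: used_ublk is the number of
-- undo blocks the transaction currently holds, used_urec the undo records.
select t.xidusn, t.used_ublk, t.used_urec
  from v$transaction t;
```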
The report is then along the lines of:
select x, y, z
  from (select x, y, z
          from some_table@db1
         where x not in (select x from schema.static_table_used_in_report@dblink)
        union all
        select x, y, z
          from some_table@db2
         where x not in (select x from schema.static_table_used_in_report@dblink)
        union all
        ...)
 group by x, y, z;
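For completeness, one variant would be to snapshot the remote static set into a local table before the report starts, so the report never re-reads it over the dblink (a sketch only; local_static_copy is a hypothetical local table):

```sql
-- Hypothetical: take a one-off local copy of the static set, then point the
-- report's NOT IN subqueries at local_static_copy instead of the dblink.
create table local_static_copy as
select * from schema.static_table_used_in_report@dblink;
```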
Currently the data in schema.static_table_used_in_report is identical to the hard-coded values in the previous report. However, the previous report took around an hour, whereas the new version hits ORA-01555 "snapshot too old" around four hours after it starts. I have compared timings for the hard-coded version against the dblink version, and there is no appreciable difference between the two.
It was my understanding that undo is only required when concurrent queries / transactions are taking place. Since nothing else is using the schema.static_table_used_in_report@dblink table, it shouldn't be needed for read-consistency purposes, as the rows can be read and re-read directly from disk.
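For context, the undo settings that govern "snapshot too old" can be inspected on the remote database like this (standard SQL*Plus command and data-dictionary view; needs appropriate privileges):

```sql
-- undo_retention is in seconds; undo must be kept at least as long as the
-- longest-running query against the modified table.
show parameter undo_retention

select tablespace_name, retention
  from dba_tablespaces
 where contents = 'UNDO';
```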