Using 10g datapump and scheduler to copy schemas
Update 13-Feb-07: Setting the datapump table_exists_action to replace does not overwrite views, sequences, or plsql objects. These have to be dropped separately before datapump is called; see the gotchas and code below.
Update 02-Mar-07: dba_ views rather than all_ views have to be used to identify what objects to drop prior to datapump, for reasons explained in this post.
For a nightly process to copy small schemas from one database to another, the normal, or old style, way to do that is with export/import in a cron controlled shell script, which would include a sql-from-sql script to drop all the target schema objects before doing the import.
But that presents problems in RAC, which is what I was working on – which node does the cron script reside on? What happens if that node is down?
An option would be to have the script run on another machine, one unrelated to the RAC cluster, but that introduces more points of failure, and means sending data back and forward over sql*net.
So instead I used Oracle’s 10g scheduler and datapump, as below. This pumps the data straight over a database link, without having to write to a dump file in between, which is nice. And because it all runs inside the database, it ought to be unaffected by particular RAC nodes being down.
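The whole thing hinges on the remote_link parameter of dbms_datapump.open: with that set there is no add_file call and no dump file anywhere. A bare-bones sketch of just that part (the schema name is a placeholder; the full procedure is further down):

declare
   h number ;
begin
   -- remote_link makes this a network-mode import straight over the db link
   h := dbms_datapump.open ( operation => 'IMPORT' , job_mode => 'SCHEMA' ,
           remote_link => 'nightly_import_sourcedb' ) ;
   -- placeholder schema name; the real list goes here (note the quoting, see gotchas below)
   dbms_datapump.metadata_filter ( handle => h , name => 'SCHEMA_LIST' ,
           value => '''SCHEMA1''' ) ;
   dbms_datapump.start_job ( h ) ;
end ;
/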
Some gotchas:
- The documentation for dbms_datapump isn’t at all clear on what the syntax for SCHEMA_LIST should be. It needs this sort of format:
name => 'SCHEMA_LIST' , value => '''SCHEMA1'',''SCHEMA2'''
which is a lot of quotes. Thanks to Sunit Joshi for that tip in http://www.webservertalk.com/message1791234.html
- The dbms_datapump documentation has a bug: dbms_datapump.open does not have a parameter called “mode”; it has one called “job_mode” instead.
- When calling dbms_datapump from within a stored procedure, as opposed to an anonymous pl/sql block, the owner of the stored procedure has to have had “create table” privilege granted to them explicitly, rather than through a role. That’s detailed in metalink note 315488.1, but I wasted a lot of time trying to debug that before I went on to metalink. The error reported doesn’t help much:
ERROR at line 1:
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 911
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4330
ORA-06512: at "NIGHTLY_IMPORT", line 4
ORA-06512: at line 1
- Jobs are created disabled by default with dbms_scheduler; you have to specify “enabled => TRUE” to get them to run.
- new 13-Feb-07 Setting table_exists_action to ‘replace’ overwrites any existing tables OK, but does not replace view definitions, plsql objects, sequences, and so on. They all have to be explicitly dropped before datapump is called. That is quite a tricky, counterintuitive limitation of the table_exists_action replace functionality: why would you want new tables copied over but old sequences left in place? It is surprises like this that make old fashioned (but well understood) exp/imp still worth using.
- new 13-Feb-07 Be aware that the dropped tables will stay on in the recyclebin. There is a good recyclebin article at http://orafaq.com/node/968.
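If that recyclebin build-up is unwanted, it can be emptied after the import; a sketch of the standard purge commands (the dba_recyclebin form needs SYSDBA):

-- empty the recyclebin across all schemas (run as SYSDBA)
purge dba_recyclebin ;
-- or, connected as one of the target schema owners, just that schema's bin
purge recyclebin ;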
Good information on this is at http://orafaq.com/wiki/Datapump and http://orafaq.com/node/862.
Here’s my code. I included a check against a flag table which allows developers to switch the job on and off:
create table run_copy ( run_copy varchar2(3) ) tablespace users ;
grant select, insert, update, delete, references on run_copy to public ;
create public synonym run_copy for run_copy ;
insert into run_copy ( run_copy ) values ( 'Yes' ) ;

create database link nightly_import_sourcedb
   connect to username identified by password using 'sourcedb' ;

-- procedure owner needs explicit create table
grant create table to username ;

CREATE OR REPLACE PROCEDURE nightly_import AS
   -- Andrew Fraser 30 January 2007
   run_import NUMBER ;
   hand NUMBER ;
BEGIN
   run_import := 0 ;
   SELECT COUNT(*) INTO run_import FROM run_copy
      WHERE UPPER(run_copy) IN ('N','NO') ;
   IF run_import > 0 THEN
      -- don't run import if someone has set NO flag in run_copy table
      NULL ;
   ELSE
      -- first have to drop plsql and other objects that table_exists_action does not deal with
      -- (dba_objects rather than all_objects, as per the 02-Mar-07 update above)
      FOR d1 IN
      (
         select object_type, owner, object_name
         from dba_objects
         where object_type in ('FUNCTION','PACKAGE','PROCEDURE','SEQUENCE','SYNONYM','VIEW')
         and owner in ('BACKUP','GATEKEEPER','REFDATA','TRANSDATA','WHSUSR','WHS_VIEWER')
      )
      LOOP
         execute immediate 'drop '||d1.object_type||' '||d1.owner||'.'||d1.object_name ;
      END LOOP ;
      -- then run datapump import itself
      hand := dbms_datapump.open ( operation => 'IMPORT' , job_mode => 'SCHEMA' ,
         remote_link => 'nightly_import_sourcedb' ) ;
      dbms_datapump.metadata_filter ( handle => hand , name => 'SCHEMA_LIST' ,
         value => '''BACKUP'',''GATEKEEPER'',''REFDATA'',''TRANSDATA'',''WHSUSR'',''WHS_VIEWER''' ) ;
      dbms_datapump.set_parameter ( handle => hand , name => 'TABLE_EXISTS_ACTION' , value => 'REPLACE' ) ;
      dbms_datapump.start_job ( hand ) ;
   END IF ;
END ;
/

BEGIN
   DBMS_SCHEDULER.CREATE_JOB (
      job_name        => 'run_nightly_import' ,
      job_type        => 'STORED_PROCEDURE' ,
      job_action      => 'nightly_import' ,
      start_date      => TRUNC(sysdate+1)+4/24 ,       /* start 4am tomorrow */
      repeat_interval => 'FREQ=DAILY; INTERVAL=1' ,    /* run every night */
      enabled         => TRUE ,
      comments        => 'Andrew Fraser 30 January 2007' ) ;
END ;
/
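To test it without waiting until 4am, the job can be kicked off by hand and checked afterwards; a quick sketch against the standard scheduler and datapump dictionary views:

-- run the job once now
EXEC dbms_scheduler.run_job ( 'run_nightly_import' ) ;

-- scheduler run history for the job
select job_name, status, actual_start_date, additional_info
from   dba_scheduler_job_run_details
where  job_name = 'RUN_NIGHTLY_IMPORT'
order  by actual_start_date desc ;

-- any datapump jobs still active
select owner_name, job_name, state
from   dba_datapump_jobs ;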
Even though the code for this method is fairly short, I’m not sure I would always want to use it over cron and old fashioned export/import. But for RAC, and also for Windows servers (because they lack a decent shell scripting language), this is the way to go.