12 thoughts on “How To Import A Schema on Amazon RDS”

  1. I have the export files stored under the following tree:
    oracle/files/exports

    Any idea how to download just the files under that tree without downloading the whole bucket? My bucket has TBs of files, but my export files are only 12GB.

  2. This format will work for you:

    SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
      p_bucket_name => '',
      p_s3_prefix => 'oracle/files/exports/',
      p_directory_name => 'DATA_PUMP_DIR')
    AS TASK_ID FROM DUAL;

    Don't forget the "/" after the last directory in the path 🙂
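    You can then follow the transfer by reading the task log with the documented rdsadmin.rds_file_util.read_text_file helper. A minimal sketch; 'dbtask-<task_id>.log' is a placeholder built from the TASK_ID returned by the query above:

    -- Read the background task log for the download task
    SELECT text
    FROM table(rdsadmin.rds_file_util.read_text_file(
      'BDUMP', 'dbtask-<task_id>.log'));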

  3. For one table it worked for me and completed successfully. For another table, using exactly the same import block, I keep getting this error:

    ORA-06550: line 22, column 46:
    PLS-00312: a positional parameter association may not follow a named association
    ORA-06550: line 22, column 3:
    PL/SQL: Statement ignored
    ORA-06550: line 25, column 3:
    PLS-00306: wrong number or types of arguments in call to 'SET_PARAMETER'
    ORA-06550: line 25, column 3:
    PL/SQL: Statement ignored
    ORA-06550: line 28, column 46:
    PLS-00312: a positional parameter association may not follow a named association
    ORA-06550: line 28, column 3:
    PL/SQL: Statement ignored
    06550. 00000 - "line %s, column %s:\n%s"

  4. I'm familiar with this error, as I've faced it many times. PLS-00312 means a positional parameter may not follow a named one, so simply remove "handle =>" from the calls in the import command so it looks like this (a short illustration of the rule follows the block):

    DECLARE
      ind NUMBER;              -- Loop index
      h1 NUMBER;               -- Data Pump job handle
      percent_done NUMBER;     -- Percentage of job complete
      job_state VARCHAR2(30);  -- To keep track of job state
      le ku$_LogEntry;         -- For WIP and error messages
      js ku$_JobStatus;        -- The job status from get_status
      jd ku$_JobDesc;          -- The job description from get_status
      sts ku$_Status;          -- The status object returned by get_status
    BEGIN

      h1 := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA', job_name => null);

      -- Specify the single dump file and its directory:
      DBMS_DATAPUMP.ADD_FILE(h1, directory => 'DATA_PUMP_DIR', filename => 'EXPORT_eps_owner_STG_04-03-19.dmp');

      -- Specify the logfile for the import process [very important: read it after the import completes]:
      DBMS_DATAPUMP.ADD_FILE(h1, directory => 'DATA_PUMP_DIR', filename => 'import_eps_owner_STG_04-03-19.LOG', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

      -- Disable archivelog for the import [12c new feature]:
      DBMS_DATAPUMP.METADATA_TRANSFORM(h1, name => 'DISABLE_ARCHIVE_LOGGING', value => 1);

      -- REMAP SCHEMA:
      -- DBMS_DATAPUMP.METADATA_REMAP(h1, 'REMAP_SCHEMA', 'EPS_OWNER', 'EPS');

      -- If a table already exists: [SKIP, REPLACE, TRUNCATE]
      DBMS_DATAPUMP.SET_PARAMETER(h1, 'TABLE_EXISTS_ACTION', 'SKIP');

      -- REMAP TABLESPACE:
      -- DBMS_DATAPUMP.METADATA_REMAP(h1, 'REMAP_TABLESPACE', 'EPS', 'TBS_EPS');

      -- Start the job. An exception is raised if something is not set up properly.
      DBMS_DATAPUMP.START_JOB(h1);

      -- The following loop monitors the job until it completes; meanwhile, progress information is displayed:
      percent_done := 0;
      job_state := 'UNDEFINED';
      while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
        dbms_datapump.get_status(h1,
          dbms_datapump.ku$_status_job_error +
          dbms_datapump.ku$_status_job_status +
          dbms_datapump.ku$_status_wip, -1, job_state, sts);
        js := sts.job_status;

        -- If the percentage done changed, display the new value.
        if js.percent_done != percent_done
        then
          dbms_output.put_line('*** Job percent done = ' || to_char(js.percent_done));
          percent_done := js.percent_done;
        end if;

        -- If any work-in-progress (WIP) or error messages were received for the job, display them.
        if (bitand(sts.mask, dbms_datapump.ku$_status_wip) != 0)
        then
          le := sts.wip;
        else
          if (bitand(sts.mask, dbms_datapump.ku$_status_job_error) != 0)
          then
            le := sts.error;
          else
            le := null;
          end if;
        end if;
        if le is not null
        then
          ind := le.FIRST;
          while ind is not null loop
            dbms_output.put_line(le(ind).LogText);
            ind := le.NEXT(ind);
          end loop;
        end if;
      end loop;

      -- Indicate that the job finished and gracefully detach from it.
      dbms_output.put_line('Job has completed');
      dbms_output.put_line('Final job state = ' || job_state);
      dbms_datapump.detach(h1);
    END;
    /
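    For reference, the rule behind PLS-00312: once an argument is passed by name, every argument after it must be named too; positional arguments are only allowed before the first named one. A minimal illustration with SET_PARAMETER, where h1 is a Data Pump handle as in the block above:

    -- Illegal: positional arguments after a named one raise PLS-00312
    -- DBMS_DATAPUMP.SET_PARAMETER(handle => h1, 'TABLE_EXISTS_ACTION', 'SKIP');

    -- Legal: positional first, then named...
    DBMS_DATAPUMP.SET_PARAMETER(h1, name => 'TABLE_EXISTS_ACTION', value => 'SKIP');

    -- ...or all named
    DBMS_DATAPUMP.SET_PARAMETER(handle => h1, name => 'TABLE_EXISTS_ACTION', value => 'SKIP');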

  5. I'm getting the error below. Any idea?

    Error starting at line : 1 in command -
    DECLARE
      ind NUMBER;              -- Loop index
      h1 NUMBER;               -- Data Pump job handle
      percent_done NUMBER;     -- Percentage of job complete
      job_state VARCHAR2(30);  -- To keep track of job state
      le ku$_LogEntry;         -- For WIP and error messages
      js ku$_JobStatus;        -- The job status from get_status
      jd ku$_JobDesc;          -- The job description from get_status
      sts ku$_Status;          -- The status object returned by get_status
    BEGIN

      h1 := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA', job_name => null);

      -- Specify the single dump file and its directory:
      DBMS_DATAPUMP.ADD_FILE(handle => h1, directory => 'DATA_PUMP_DIR', filename => 'RMS_TEST.dmp');

      -- Specify the logfile for the import process [very important: read it after the import completes]:
      DBMS_DATAPUMP.ADD_FILE(handle => h1, directory => 'DATA_PUMP_DIR', filename => 'RMS_TEST.LOG', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

      -- Disable archivelog for the import [12c new feature]:
      DBMS_DATAPUMP.METADATA_TRANSFORM(handle => h1, name => 'DISABLE_ARCHIVE_LOGGING', value => 1);

      -- REMAP SCHEMA:
      -- DBMS_DATAPUMP.METADATA_REMAP(h1, 'REMAP_SCHEMA', 'EPS_OWNER', 'EPS');

      -- If a table already exists: [SKIP, REPLACE, TRUNCATE]
      DBMS_DATAPUMP.SET_PARAMETER(h1, 'TABLE_EXISTS_ACTION', 'REPLACE');

      -- REMAP TABLESPACE:
      -- DBMS_DATAPUMP.METADATA_REMAP(h1, 'REMAP_TABLESPACE', 'EPS', 'TBS_EPS');

      -- Start the job. An exception is raised if something is not set up properly.
      DBMS_DATAPUMP.START_JOB(h1);

      -- The following loop monitors the job until it completes; meanwhile, progress information is displayed:
      percent_done := 0;
      job_state := 'UNDEFINED';
      while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
        dbms_datapump.get_status(h1,
          dbms_datapump.ku$_status_job_error +
          dbms_datapump.ku$_status_job_status +
          dbms_datapump.ku$_status_wip, -1, job_state, sts);
        js := sts.job_status;

        -- If the percentage done changed, display the new value.
        if js.percent_done != percent_done
        then
          dbms_output.put_line('*** Job percent done = ' || to_char(js.percent_done));
          percent_done := js.percent_done;
        end if;

        -- If any work-in-progress (WIP) or error messages were received for the job, display them.
        if (bitand(sts.mask, dbms_datapump.ku$_status_wip) != 0)
        then
          le := sts.wip;
        else
          if (bitand(sts.mask, dbms_datapump.ku$_status_job_error) != 0)
          then
            le := sts.error;
          else
            le := null;
          end if;
        end if;
        if le is not null
        then
          ind := le.FIRST;
          while ind is not null loop
            dbms_output.put_line(le(ind).LogText);
            ind := le.NEXT(ind);
          end loop;
        end if;
      end loop;

      -- Indicate that the job finished and gracefully detach from it.
      dbms_output.put_line('Job has completed');
      dbms_output.put_line('Final job state = ' || job_state);
      dbms_datapump.detach(h1);
    END;
    Error report -
    ORA-39001: invalid argument value
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 3507
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 3756
    ORA-06512: at line 15
    39001. 00000 - "invalid argument value"
    *Cause:  The user specified API parameters were of the wrong type or
             value range. Subsequent messages supplied by
             DBMS_DATAPUMP.GET_STATUS will further describe the error.
    *Action: Correct the bad argument and retry the API.
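    The stack points at line 15, which is the ADD_FILE call for the dump file. A common first check (an assumption here, not a confirmed diagnosis) is whether RMS_TEST.dmp is actually present in DATA_PUMP_DIR; on RDS you can list the directory with the documented rdsadmin helper:

    -- List the contents of DATA_PUMP_DIR on the RDS instance
    SELECT filename, type, filesize, mtime
    FROM table(rdsadmin.rds_file_util.listdir(p_directory => 'DATA_PUMP_DIR'))
    ORDER BY mtime;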

  6. Hello, I had a quick question. If I run this from SQL Developer, it runs OK:

    SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3( p_bucket_name => '', p_prefix => '', p_s3_prefix => '', p_directory_name => 'DATA_PUMP_DIR') AS TASK_ID FROM DUAL;

    But if I put that statement inside a stored procedure, I get the error below:

    "rdsadmin"."rdsadmin_s3_tasks"."upload_to_s3" :invalid identifer. Would you know why?

  7. DBMS_DATAPUMP.METADATA_TRANSFORM(handle => hdnl, name => 'OID', 0, null);
    *
    ERROR at line 7:
    ORA-06550: line 7, column 1:
    PLS-00306: wrong number or types of arguments in call to 'METADATA_TRANSFORM'
    ORA-06550: line 7, column 1:
    PL/SQL: Statement ignored
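    The failing call above mixes named and positional notation: once handle and name are passed by name, the remaining arguments must be named as well. A minimal corrected sketch, assuming hdnl is a valid Data Pump handle and the intent is the OID transform with value 0:

    -- All-named form; VALUE is the third formal parameter of METADATA_TRANSFORM
    DBMS_DATAPUMP.METADATA_TRANSFORM(handle => hdnl, name => 'OID', value => 0);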

  8. Simply add more dbms_datapump.add_file calls, one per dump file, like this (an alternative using a wildcard follows the snippet):

    ….
    dbms_datapump.add_file(hdnl,'EXPORT_01.dmp','DATA_PUMP_DIR');
    dbms_datapump.add_file(hdnl,'EXPORT_02.dmp','DATA_PUMP_DIR');
    ….
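    Alternatively (not in the original reply, but DBMS_DATAPUMP.ADD_FILE accepts substitution variables in the filename), a single call with %U can cover a numbered set of dump files:

    -- Matches EXPORT_01.dmp, EXPORT_02.dmp, ... in DATA_PUMP_DIR
    dbms_datapump.add_file(hdnl, 'EXPORT_%U.dmp', 'DATA_PUMP_DIR');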
