Create Fast Insert Scripts
PL/SQL Export Utilities are now able to export table data as insert scripts
Some weeks ago I released a new version of my PLEX project, which improves the ability to export table data as insert scripts. Originally PLEX was only able to export CSV data. The reason was that I designed PLEX to quickstart version control in Oracle DB projects, and the CSV export was meant to track changes in master data tables.
It turned out that a colleague of mine wanted PLEX to export insert scripts for all data in all tables for the initial deployment of a new app. I implemented a first version but was not happy with the runtime of the scripts. I can hear some people argue that there are better tools than insert scripts for importing larger datasets - but sometimes this is the only accepted way, because only SQL*Plus is available to run the deployment scripts…
Then I stumbled upon an older post by Connor McDonald on how to create fast insert scripts and incorporated his performance hacks into PLEX.
You can use it like so (see last parameter):
DECLARE
  l_zip_file blob;
BEGIN
  l_zip_file := plex.to_zip(plex.backapp(
    p_app_id                   => 100,   -- parameter only available when APEX is installed
    p_include_ords_modules     => true,  -- parameter only available when ORDS is installed
    p_include_object_ddl       => true,
    p_include_data             => true,
    p_data_as_of_minutes_ago   => 0,     -- read consistent data with the resulting timestamp (SCN), defaults to 0
    p_data_max_rows            => 10000, -- maximum number of rows per table, defaults to 1000
    p_data_table_name_like     => 'CAT\_%,USERS,ROLES,RIGHTS',
    p_data_table_name_not_like => '%\_HIST,LOGGER%',
    p_data_format              => 'insert:10'
  ));
  -- Do something with the zip file.
  -- Your code here...
END;
/
The parameter p_data_format is new and can contain a comma-separated list of formats - currently supported formats are CSV and INSERT. For example, csv,insert will export a CSV file and a SQL file with insert statements for each table. For insert, you can also append the number of rows per insert all statement (defaults to 20) - for example csv,insert:10 or insert:5.
As you can see with the parameter p_data_table_name_not_like => '%\_HIST,LOGGER%', PLEX accepts lists of like filters. In our example, %\_HIST,LOGGER% is translated to where ... and (table_name not like '%\_HIST' escape '\' and table_name not like 'LOGGER%' escape '\'). For the parameter p_data_table_name_like => 'CAT\_%,USERS,ROLES,RIGHTS' the tables are filtered in this way: where ... and (table_name like 'CAT\_%' escape '\' or table_name like 'USERS' escape '\' or ... /*you get the idea*/).
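Put together, the two filter parameters above would select tables with a WHERE clause along these lines (a sketch of the generated SQL to illustrate the and/or logic, not the literal query PLEX builds internally):

```sql
-- include filters are OR-combined, exclude filters are AND-combined:
select table_name
  from user_tables
 where ( table_name like 'CAT\_%' escape '\'
      or table_name like 'USERS'  escape '\'
      or table_name like 'ROLES'  escape '\'
      or table_name like 'RIGHTS' escape '\' )
   and table_name not like '%\_HIST' escape '\'
   and table_name not like 'LOGGER%' escape '\';
```

Note the escape '\' clause: it lets the underscore in CAT\_% match a literal underscore instead of acting as the single-character wildcard.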
There is a good reason why the default of the parameter p_data_max_rows is set to 1000 - if you provide no table name filters, then all tables are exported…
The resulting insert scripts look like this:
-- Script generated by PLEX version 2.4.2 - more infos here: https://github.com/ogobrecht/plex
-- Performance Hacks by Connor McDonald: https://connor-mcdonald.com/2019/05/17/hacking-together-faster-inserts/
-- For strange line end replacements a big thank to SQL*Plus: https://support.oracle.com/epmos/faces/DocumentDisplay?id=2377701.1 (SQL Failed With ORA-1756 In Sqlplus But Works In SQL Developer)
prompt - insert 847 rows into USERS (exported 2021-01-31 14:24:30.703234000 +00:00)
set define off feedback off sqlblanklines on
alter session set cursor_sharing = force;
alter session set nls_numeric_characters = '.,';
alter session set nls_date_format = 'yyyy-mm-dd hh24:mi:ss';
alter session set nls_timestamp_format = 'yyyy-mm-dd hh24:mi:ssxff';
alter session set nls_timestamp_tz_format = 'yyyy-mm-dd hh24:mi:ssxff tzr';
begin
insert all
into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (1,'Ylcmsajbil','Fojkjryntnixzfh','qvspjgvwmtbi@ghovilkddx.mly')
into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (2,'Axify','Cofjlkwzxytdih','ajgttnqlds@minokpyfo.gu')
into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (3,'Ckyqqvuqrkuktb','Igacqwp','qpygabuhbrs@nsjxpgjlle.ze')
into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (4,'Noythhl','Gausfu','ngmgsbr@duyxqzn.hmyo')
into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (5,'Kpewbrecnfzsi','Nwbsnjh','xwlhcfaxko@uhqsibdojjp.hsm')
into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (6,'Qlzyizkl','Gwnaojlvyud','kzndqj@nsosenf.fm')
into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (7,'Xpepttcemrd','Ktaqqdnqyfvc','uhbnzezvz@buiptt.lkrm')
into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (8,'Ybyinr','Vngairocujhy','igvfzoegbh@hsepkqiwbst.evs')
into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (9,'Ctjixltj','Yvsiei','ozpspssyw@vooiyfuf.xeh')
into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (10,'Lttzwsavnozxu','Kcyjalvzrl','yvwowaqrpku@dyapdumb.fvi')
select * from dual;
/* S N I P */
insert all
into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (841,'Gpkabykoveq','Gljhlrijqop','imnhrheyr@ypccyiu.ah')
into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (842,'Mwrfazphbvmekpw','Kxirzfth','fxoatt@frlbwbn.tf')
into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (843,'Hzxomvkliaxl','Mstdrrmgfmsy','gpeidglzwfa@hwyumsansy.fet')
into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (844,'Vwvwezfsd','Xtfouojiymtlu','zgsdtowsvt@ywfngnijgts.ozd')
into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (845,'Eljntjfkxx','Sifgii','gksggat@ubfmmdopqy.ly')
into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (846,'Ewvyjeudjb','Anihlpdgeg','gietuk@ezciwejuedy.nuf')
into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (847,'Wscdamsgssmouf','Omtaofvlrjs','jrwyzftmbmo@gjylnuez.esq')
select * from dual;
end;
/
commit;
alter session set cursor_sharing = exact;
set define on
PLEX saves your NLS settings when you do the export and puts them into the insert scripts - that means you should not get any number or date conversion errors.
The performance hacks from Connor are the alter session set cursor_sharing = force; and the insert all statements - thanks for sharing, Connor :-)
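To see why these two hacks help, compare the batched script with a conventional one (a sketch for illustration, reusing the sample rows from above). With single-row inserts, the database parses one statement per row; with insert all, it parses one statement per batch, and cursor_sharing = force additionally replaces the literals with bind variables so the shared pool is not flooded with thousands of one-off cursors:

```sql
-- Conventional script: one parse and one execution per row.
insert into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (1,'Ylcmsajbil','Fojkjryntnixzfh','qvspjgvwmtbi@ghovilkddx.mly');
insert into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (2,'Axify','Cofjlkwzxytdih','ajgttnqlds@minokpyfo.gu');
-- ...one statement per row...

-- Batched script (what PLEX generates): one multi-table insert per batch.
insert all
into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (1,'Ylcmsajbil','Fojkjryntnixzfh','qvspjgvwmtbi@ghovilkddx.mly')
into USERS(U_ID,U_FIRST_NAME,U_LAST_NAME,U_EMAIL) values (2,'Axify','Cofjlkwzxytdih','ajgttnqlds@minokpyfo.gu')
select * from dual;
```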
PLEX has many parameters to configure your export - the focus of PLEX is to quickstart version control in existing projects (see also this article about PLEX in general). If you only want to use it to export table data, you should create a wrapper around it which fits your personal needs - here is an example:
create or replace function to_insert (
  p_table_name varchar2,
  p_max_rows   integer default 1000,
  p_batch_size integer default 20
) return clob is
  l_file_collection plex.tab_export_files;
begin
  l_file_collection := plex.backapp(
    p_include_data         => true,
    p_data_max_rows        => p_max_rows,
    p_data_table_name_like => p_table_name,
    p_data_format          => 'insert:' || to_char(p_batch_size),
    -- overwrite defaults to save work for PLEX:
    p_include_templates    => false,
    p_include_runtime_log  => false,
    p_include_error_log    => false
  );
  return l_file_collection(1).contents;
end;
/
Now you can have a quick export at any time like so: select to_insert('USERS', 10000) from dual;
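If you want the returned CLOB written straight to a script file, a SQL*Plus session with suitable LONG and formatting settings will do (a sketch; adjust the file name and size limits to your needs):

```sql
set long 100000000 longchunksize 32767 pagesize 0 linesize 32767
set heading off feedback off trimspool on
spool users_data.sql
select to_insert('USERS', 10000) from dual;
spool off
```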
Happy exporting and scripting
Ottmar