I usually create ‘local’ backups of the tables I modify as part of a Client data maintenance activity: if space allows, I create simple copies using CTAS (CREATE TABLE AS SELECT). This is usually just a quick way to back up the data, or to keep it for reference until the changes are tested and approved by the Client, so in most cases I ignore the indexing and partitioning of the original table. The only purpose is to have an easily accessible copy of the original data, without having to restore an RMAN backup if anything is removed by mistake during the data maintenance operations.
Typically the statements I use are as simple as:
CREATE TABLE radu.observations_bck AS SELECT * FROM radu.observations ;
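Once the copy exists, it is easy to sanity-check it and, later, to put back rows that were removed by mistake. A minimal sketch; the key column obs_id is a hypothetical name for illustration, so replace it with the table's real primary key:

```sql
-- Quick sanity check right after the copy: row counts should match.
SELECT (SELECT COUNT(*) FROM radu.observations)     AS src_rows,
       (SELECT COUNT(*) FROM radu.observations_bck) AS bck_rows
  FROM dual;

-- Restore rows that are present in the backup but missing from the original.
-- obs_id is an assumed primary-key column, not from the original post.
INSERT INTO radu.observations
SELECT *
  FROM radu.observations_bck b
 WHERE NOT EXISTS (SELECT 1
                     FROM radu.observations x
                    WHERE x.obs_id = b.obs_id);
COMMIT;
```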
But sometimes I have to deal with really large tables (multi-billion-row tables) that take a relatively long time to copy. The purpose of this post is to provide a couple of options to speed up this type of operation.
So I improved the statement above:
CREATE TABLE radu.observations_bck COMPRESS NOLOGGING PARALLEL 14 AS SELECT /*+ parallel(o 28) */ * FROM radu.observations o ;
The PARALLEL 14 clause makes the inserts into the target table run with 14 parallel processes, while NOLOGGING prevents these changes from being written to the redo logs for recovery. The parallel hint on the source table causes it to be queried with 28 parallel processes. What makes this so fast is that both the SELECT and the INSERT are done in parallel. Also, in this nologging create-table-as-select scenario, the Oracle database uses direct-path inserts, which load blocks in bulk rather than one row at a time.
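You can confirm that the statement actually gets a parallel, direct-path plan by explaining it before running it; a LOAD AS SELECT step underneath the PX operators indicates direct-path parallel inserts. A sketch, reusing the table names from the example above:

```sql
-- Explain the CTAS without executing it; Oracle supports
-- EXPLAIN PLAN FOR a CREATE TABLE ... AS SELECT statement.
EXPLAIN PLAN FOR
CREATE TABLE radu.observations_bck COMPRESS NOLOGGING PARALLEL 14
AS SELECT /*+ parallel(o 28) */ * FROM radu.observations o;

-- Look for LOAD AS SELECT and PX (parallel execution) operators.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

One caveat worth keeping in mind: if the database or tablespace is in FORCE LOGGING mode (common with Data Guard), the NOLOGGING attribute is ignored and redo is generated anyway.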