Is Catching ORA-02292 A Good Solution? It Depends…

I’ve recently realized that a technique I sometimes use is subject to an important restriction that I used to overlook.

The Requirement

We have two tables (let’s call them PARENTS and CHILDREN) with a foreign key between them (CHILDREN references PARENTS).
We need to write a procedure that deletes a given child and, if its parent has no other children, deletes the parent as well.

Demo Data

SQL> select * from parents;

        ID
----------
         1
         2

SQL> select * from children;

        ID  PARENT_ID
---------- ----------
       101          1
       201          2
       202          2

The Implementation

A simple solution for the conditional deletion of the parent is to try to delete it.
If this parent has no other children, the DELETE statement will succeed.
If it has other children, the DELETE statement will fail with ORA-02292 (integrity constraint violated – child record found), and we can catch this exception and ignore it.

I like this kind of solution mainly because Oracle automatically takes care of the necessary serialization of concurrent sessions (trying, in this case, to delete or insert children of the same parent).

SQL> create or replace package demo as
  2      procedure delete_child(i_id in children.id%type);
  3  end demo;
  4  /

Package created.

SQL> create or replace package body demo as
  2      procedure delete_parent_if_it_has_no_children(i_id in parents.id%type) is
  3          e_children_exist exception;
  4          pragma exception_init(e_children_exist, -2292);
  5      begin
  6          delete parents p
  7          where  p.id = i_id;
  8          dbms_output.put_line('parent was deleted successfully');
  9      exception
 10          when e_children_exist then
 11              dbms_output.put_line('parent was not deleted');
 12      end delete_parent_if_it_has_no_children;
 13
 14      procedure delete_child(i_id in children.id%type) is
 15          v_parent_id children.parent_id%type;
 16      begin
 17          delete children c
 18          where  c.id = i_id
 19          returning c.parent_id into v_parent_id;
 20
 21          delete_parent_if_it_has_no_children(v_parent_id);
 22
 23      end delete_child;
 24  end demo;
 25  /

Package body created.

Parent 1 has only one child – 101, so when we delete child 101 its parent is deleted as well:

SQL> exec demo.delete_child(101)
parent was deleted successfully

PL/SQL procedure successfully completed.

Parent 2 has two children – 201 and 202. When we delete one of the children, the parent is not deleted. When we delete the second child, the parent is deleted.

SQL> exec demo.delete_child(201)
parent was not deleted

PL/SQL procedure successfully completed.

SQL> exec demo.delete_child(202)
parent was deleted successfully

PL/SQL procedure successfully completed.

SQL> rollback;

Rollback complete.

The Catch

This solution is based on the fact that the foreign key constraint is enforced at the statement level. This means we can use it as long as the foreign key is not deferred. A deferred constraint is enforced at the end of the transaction, and therefore the DELETE from PARENTS will succeed without raising an exception, even if the deleted parent has children.

I executed the previous example after creating the tables as follows:

SQL> create table parents (
  2    id number not null primary key
  3  );

Table created.

SQL> create table children (
  2    id number not null primary key,
  3    parent_id not null
  4      constraint fk_children_parents
  5        references parents
  6        deferrable initially immediate
  7  );

Table created.

SQL> begin
  2    insert into parents values (1);
  3    insert into parents values (2);
  4
  5    insert into children values (101,1);
  6    insert into children values (201,2);
  7    insert into children values (202,2);
  8
  9    commit;
 10  end;
 11  /

PL/SQL procedure successfully completed.

Now let’s set the foreign key as deferred, and try to delete only one child of parent 2.

SQL> set constraint fk_children_parents deferred;

Constraint set.

SQL> exec demo.delete_child(201)
parent was deleted successfully

PL/SQL procedure successfully completed.

The parent was deleted successfully, although it still has an existing child.
But when we try to commit the transaction, the foreign key is checked, and the whole transaction is rolled back (including the deletion of the child).

SQL> commit;
commit
*
ERROR at line 1:
ORA-02091: transaction rolled back
ORA-02292: integrity constraint (TRANZMATE_DEV.FK_CHILDREN_PARENTS) violated - child record found
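
If the procedure must keep working even when the session has deferred the constraint, one possible defense (my own sketch, not part of the original solution) is to force the constraint back to statement-level checking at the beginning of delete_child. SET CONSTRAINT ... IMMEDIATE also verifies the constraint right away, raising an error if the current transaction already violates it:

-- a sketch: restore statement-level enforcement, so that relying
-- on ORA-02292 is safe again for the rest of the transaction
execute immediate 'set constraint fk_children_parents immediate';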

ENABLE NOVALIDATE – Too Polite?

Online DDL operations are much more polite than offline DDL operations. They usually wait patiently for the transactions that hold resources they need to end, and they do not block new DML statements.

As I wrote in the past, adding a constraint as Enabled and Validated (which is the default for new constraints) is an offline operation, but if we split it into two DDL statements – one for adding the constraint as Enabled and Not Validated and the second for making the constraint Validated – then each of these two separate statements is an online operation.

In this post I’d like to show that the first step – creating the constraint as Enabled and Not Validated – is even “more online” than it seems.

Let’s create some table t and insert one record into it:

One> create table t (
  2    a number,
  3    b number
  4  );

Table created.

One> insert into t(a,b) values (111,-1);

1 row created.

I did not commit or rollback this transaction, so it is still open and it’s locking the table in RX mode.

Now, from another session (note the SQL Prompts “One” and “Two”), I’ll add an Enabled and Not Validated check constraint to the table:

Two> alter table t
  2    add constraint t_b_chk
  3    check (b>0) enable novalidate;

Session Two is now blocked by session One (the wait event is “enq: TX – row lock contention”). Since it is an online operation it just waits, without throwing an ORA-00054 (resource busy) error as an offline operation would have done.

But actually, it seems that this wait is unnecessary. The operation has already happened.

Using session One, we can see that the constraint already appears in the data dictionary:

One> select constraint_name,
  2         search_condition,
  3         status,
  4         validated
  5  from user_constraints
  6  where table_name = 'T';
  
CONSTRAINT_NAME SEARCH_CONDITION STATUS     VALIDATED
--------------- ---------------- ---------- -------------
T_B_CHK         B>0              ENABLED    NOT VALIDATED

And if we now perform a DML statement that violates the constraint, we’ll get an error message, because the constraint is already enabled:

One> insert into t(a,b) values (222,-2);
insert into t(a,b) values (222,-2)
*
ERROR at line 1:
ORA-02290: check constraint (DEMO.T_B_CHK) violated

Session Two will be released as soon as session One either commits or rolls back, but it seems that it could have been released before that.
Even if we kill session Two before the transaction in session One ends, it doesn’t really matter, because the constraint has already been created and enabled.
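
To complete the picture, here is a minimal continuation of the demo (a sketch; the exact output may vary). As soon as session One ends its transaction:

One> commit;

Commit complete.

the blocked statement in session Two completes immediately:

Table altered.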

Excessive Locking when Dropping a Table

I tried to drop a table today and failed due to “ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired”.
That was weird because I knew that nobody had been using this table for months, and that the table had no enabled foreign keys.
A quick investigation revealed the cause – the DROP TABLE operation tried to lock another table (in the quite aggressive “Share” mode) that was referenced by a disabled foreign key from the table I was trying to drop. The referenced table was locked by other sessions, and therefore the DROP TABLE operation failed.

Even if the foreign key constraint is enabled, there is no good reason in my opinion to lock the referenced table; all the more so if it’s disabled.
There is a workaround (which I think proves my last sentence): it’s possible to drop the constraint first, and then to drop the table. Dropping the constraint does not lock the referenced table.

I verified this behavior in 11.2.0.4, 12.1.0.2 and 12.2.0.1.
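
A sketch of the workaround (with hypothetical names: child_t has a disabled foreign key child_t_fk referencing parent_t):

-- dropping the constraint does not lock the referenced table
alter table child_t drop constraint child_t_fk;

-- now the drop no longer needs to lock parent_t
drop table child_t;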

Constraint Optimization Summary

This is the last part of a series about Constraint Optimization.
In this post I’ll summarize the conclusions from the previous parts.

When we add a constraint to an existing table, there are two aspects that are worth taking into consideration: duration and availability.

Duration

When the table contains a significant number of rows, adding a constraint may take a lot of time. In some cases Oracle applies a very nice optimization that can reduce this time to almost zero.

In part 1 we saw that such an optimization happens when adding a column and an inline check constraint on that column in a single statement, and that it does not happen for out-of-line check constraints.
In part 2 we saw that this optimization may lead to data integrity violation in some extreme cases.
In part 3 we saw that such an optimization happens when adding a column and a foreign key constraint on that column in a single statement (for both inline and out-of-line constraints).
In part 4 we saw that unfortunately this optimization never happens for unique constraints.
In part 5 we saw that this optimization doesn’t happen if the added column is defined with a default value.
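
For example, here is the fast case from part 1 next to its out-of-line counterpart (a sketch on a hypothetical table t):

-- fast: adding the column and an inline check constraint in a single statement
alter table t add (c number constraint c_chk check (c > 0));

-- not optimized: the same constraint in its out-of-line form
alter table t add (c number, constraint c_chk check (c > 0));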

Availability

By default, adding a constraint is an offline operation. It means that it cannot start as long as the table is locked by active transactions, and that it blocks DML statements on the table for the duration of the operation. Obviously, the longer the constraint addition takes, the higher the significance of the availability issue.

In part 6 we saw how to add check constraints and foreign key constraints in an online way, by splitting the single operation into several ones.
In part 7 we saw how to add unique constraints in an online way (assuming Enterprise Edition).

Summary

The following table summarizes all of the above:

                         Column + inline          Column + out-of-line     Column + constraint in
                         constraint in a          constraint in a          separate statements
                         single statement         single statement
-----------------------  -----------------------  -----------------------  -------------------------
Check Constraint         Fast                     Duration depends on      Duration depends on
                                                  table size               table size
                         Offline                  Offline                  Online can be achieved

Foreign Key Constraint   Fast                     Fast                     Duration depends on
                                                                           table size
                         Offline                  Offline                  Online can be achieved

Unique Constraint        Duration depends on      Duration depends on      Duration depends on
                         table size               table size               table size
                         Offline                  Offline                  Online can be achieved in
                                                                           Enterprise Edition

Adding a Unique Constraint in an Online Way

Note: unlike most of my posts, this one assumes the use of Enterprise Edition.

I have a table t and I want to add a unique constraint on one of its columns – c1.

The Offline Way

The straightforward and most simple way to do it is using a single alter table statement:

SQL> alter table t add constraint c1_uk unique (c1);

Table altered.

By default, in this operation Oracle creates a unique constraint (named c1_uk) and a corresponding unique index (also named c1_uk) that enforces the constraint.
The downside is that this is an offline operation – the table is locked in Share mode.
This is true even if we specify that the creation of the index is online:

SQL> alter table t add constraint c1_uk unique (c1) using index online;

Table altered.

If the table contains many records, the creation of the index may take a significant amount of time, during which the table is locked and DML operations on the table are blocked.

The Online Way

We can create the unique constraint in an online way, by splitting the operation into three steps.
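
A sketch of the idea (my outline; the full post spells out the exact steps): build the supporting index online first, and only then attach the constraint to it.

-- Step 1: create the index online (the long part, but DML is not blocked)
create unique index c1_uk on t (c1) online;

-- Step 2: add the constraint using the existing index (a quick metadata operation)
alter table t add constraint c1_uk unique (c1)
  using index c1_uk enable novalidate;

-- Step 3: validate the constraint (DML is not blocked during the validation scan)
alter table t enable validate constraint c1_uk;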

Fast but Offline, or Online but Slow?

In the previous parts of this series I showed that Oracle does a nice optimization – one that may save plenty of time – when, in a single ALTER TABLE statement, we add a new nullable column with no default value and a corresponding:
– inline (a.k.a. “column-level”) check constraint
or
– foreign key constraint (either inline or out-of-line)

In these cases Oracle enables the constraint (for future DML statements) and marks the constraint as VALIDATED (stating that existing records respect the constraint) without actually performing any check – counting on the fact that all the existing records have NULL in the new column. This is a beautiful optimization – if the table already contains many records, then skipping the (unnecessary) validation phase can save lots of time.

So, is it always better to add the new column and the corresponding constraint in a single ALTER TABLE statement? No, it isn’t. Sorry for the cliché, but… it depends.

The advantage is clear: skipping the unnecessary validation phase can save lots of time (the bigger the table, the higher the saving).

But there is also a downside: such a combined operation is an offline one. The table is locked, albeit for a short time, in eXclusive mode.
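
For reference, the combined (fast but offline) operation is a single statement; a sketch, using the same hypothetical table and constraint as in the steps below:

alter table t add (c number constraint c_chk check (c>0));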

If we cannot afford to do it in one short offline operation, we can do it in three separate online operations, but without the aforementioned optimization:

1. Adding the column (a short operation)

alter table t add (c number);

2. Adding the constraint without validating it (a short operation)

alter table t add (constraint c_chk check (c>0) enable novalidate);

3. Validating the constraint (the bigger the table, the longer this operation takes)

alter table t enable validate constraint c_chk;

By default, when we add a constraint it is both enabled and validated, and this is done as an offline operation. This is why I split it into two steps – both are online: the first step is adding the constraint and enabling it without validation, and the second one is validating the constraint.

Implementing Arc Relationships with Virtual Columns? Or Not?

I wrote a post some time ago about implementing arc relationships using virtual columns.
Recently, Toon Koppelaars wrote a detailed and reasoned comment to that post. Since I admire Toon, getting his point of view on something that I wrote is a privilege for me, regardless of whether he agrees with me or disagrees (and just to be clear, it’s the latter this time). I think that having a public (and civilized) discussion – this time about principles of data modeling and implementation – is a good thing, even if in the end we don’t convince each other.
Therefore I thought Toon’s comment deserves a post of its own. So I’ll quote here everything that he wrote in the comment, with my response after every sentence or paragraph. And everyone is welcome to add their own points of view in the comments section.

I’d like to emphasize that basically I agree with Toon about the general guidelines. What makes life harder, as always, are the nuances – or, as we know them well, “it depends”.

And just one general observation first. The starting point of the original post was that the design included an arc. This is the given fact, and my intent was to suggest an implementation for this given fact. I don’t want to put words in your mouth (or keyboard), Toon, but I think that the basis for your arguments is that the arc, as a design concept, is wrong. I think you would prefer to see several entities in the ERD instead of the one with the arc. Am I right in this interpretation?

Toon wrote:
Oren, I just cannot, not comment on this post 😉

And I really appreciate it. Really.

Toon wrote:
Re: Multiple Tables
1) What do you mean by “maintaining another table”? And why is that a disadvantage?

I’ll refer to it after the next paragraph.

Toon wrote:
In my opinion, you should always have a software factory in which the act of introducing a new table is cheap. Why? Because otherwise you end up with database designs with few tables, into which more than few “things” are stored. Which is bad. It’s bad for understandability, it’s bad for performance, it’s bad for future maintenance.

And I completely agree with you.

Toon wrote:
Apparently adding a table is not cheap in your factory?
But, yet, dropping and recreating a CHECK constraint (to involve a new single-char value) is cheap?

As you know me and my database development principles, beliefs and guidelines (for example, here and here), I assume this is a rhetorical question. Of course adding a table is cheap. Actually it is one of the cheapest tasks.
And dropping and recreating a CHECK constraint is not expensive either. It may take some time to VALIDATE it if the table is big, but this is not a reason not to do it. And anyway, if, in order to implement a good design, something is “expensive” (but still realistic), it should not be a reason not to do it (at least, not a good reason).

But, if we have, by design, several tables that are identical – in column names, data types, indexes, constraints – everything except for the table on the other side of the “owner” foreign key, then I start to feel uncomfortable. I get the same feeling that I get when I see two procedures that do the same (or almost the same) thing.
And although many times the arc appears for simple join tables, as in my example, this is not always the case. The entity with the arc may include many columns.
In addition, by “maintaining another table” I did not mean just the CREATE TABLE statement. When a new attribute is added to the designed entity, we need to add a new column – exactly the same column – to all the tables. And the same is true for any change – renaming a column, changing data types, adding or removing constraints, etc.
In my opinion, if I have to repeat the same work over and over instead of doing it just once, I probably do something wrong.
It also reminds me of denormalization. Usually when we talk about denormalization we mean that the same data is kept more than once. I think it’s the same here – just that the duplicate data is kept in the data dictionary tables and not in our schema. And one of the disadvantages of denormalization is that it increases the risk for bugs; in this case, due to a human error we may end up with PERSON_ADDRESSES and COMPANY_ADDRESSES that are not exactly the same, although they should be.

Toon wrote:
2) Why is the implied code-difference for this solution, more difficult to maintain?

Here I was mainly thinking about lack of reusability.

Toon wrote:
Instead of having these two inserts:
insert into entity_addresses values(AID1,'P',PID1), and insert into entity_addresses values(AID2,'C',CID1).

You’d have these two:
insert into person_addresses values(AID1,PID1), and insert into company_addresses values(AID2,CID1).

Why is the former “cheaper”?

I was thinking of writing only one INSERT statement:

CREATE OR REPLACE PACKAGE BODY address_mgr AS

    PROCEDURE add_entity_address
    (
        i_address_id    IN entity_addresses.address_id%TYPE,
        i_owner_type_id IN entity_addresses.owner_type_id%TYPE,
        i_owner_id      IN entity_addresses.owner_id%TYPE
    ) IS
    BEGIN
        INSERT INTO entity_addresses
            (address_id,
             owner_type_id,
             owner_id)
        VALUES
            (i_address_id,
             i_owner_type_id,
             i_owner_id);
    END add_entity_address;
.
.
.
END address_mgr;
/

Toon wrote:
And this trickles down to all other kinds of SQL statements. Why is:

select *
from addresses a
    ,entity_addresses ea
    ,persons p
where a.id = ea.address_id
and   ea.object_type = 'P'
and   ea.object_id = p.person_id

apparently easier to write and/or maintain (i.e. cheaper), than:

select *
from addresses a
    ,person_addresses pa
    ,persons p
where a.id = pa.address_id
and   pa.person_id = p.person_id
?
I just don’t get that…

Again, I was thinking of more generic code, rather than duplicating the same code pattern. Something like this:

create type address_t as object (
  id integer,
  street varchar2(30),
  house_number varchar2(10),
  city varchar2(30),
  country varchar2(30)
)
/

create type address_tt as table of address_t 
/

CREATE OR REPLACE PACKAGE BODY address_mgr AS
.
.
.
    PROCEDURE get_addresses
    (
        i_owner_type_id IN entity_addresses.owner_type_id%TYPE,
        i_owner_id      IN entity_addresses.owner_id%TYPE,
        o_addresses     OUT address_tt
    ) IS
    BEGIN
        SELECT address_t(a.id, a.street, a.house_number, a.city, a.country)
        BULK   COLLECT
        INTO   o_addresses
        FROM   addresses        a,
               entity_addresses ea
        WHERE  ea.owner_type_id = i_owner_type_id
        AND    ea.owner_id = i_owner_id
        AND    a.id = ea.address_id;
    END get_addresses;
.
.
.
END address_mgr;
/

CREATE OR REPLACE PACKAGE BODY person_mgr AS

    PROCEDURE get_person
    (
        i_person_id  IN people.id%TYPE,
        o_first_name OUT people.first_name%TYPE,
        o_last_name  OUT people.last_name%TYPE,
        o_addresses  OUT address_tt
    ) IS
    BEGIN
        SELECT p.first_name,
               p.last_name
        INTO   o_first_name,
               o_last_name
        FROM   people p
        WHERE  p.id = i_person_id;
    
        address_mgr.get_addresses(i_owner_type_id => 'P',
                                  i_owner_id      => i_person_id,
                                  o_addresses     => o_addresses);
    END get_person;
.
.
.
END person_mgr;
/

CREATE OR REPLACE PACKAGE BODY company_mgr AS

    PROCEDURE get_company
    (
        i_company_id          IN companies.id%TYPE,
        o_name                OUT companies.name%TYPE,
        o_number_of_employees OUT companies.number_of_employees%TYPE,
        o_description         OUT companies.description%TYPE,
        o_addresses           OUT address_tt
    ) IS
    BEGIN
        SELECT c.name,
               c.number_of_employees,
               c.description
        INTO   o_name,
               o_number_of_employees,
               o_description
        FROM   companies c
        WHERE  c.id = i_company_id;
    
        address_mgr.get_addresses(i_owner_type_id => 'C',
                                  i_owner_id      => i_company_id,
                                  o_addresses     => o_addresses);
    END get_company;
.
.
.
END company_mgr;
/

By the way, we can achieve the same level of code reusability with the “multiple tables” option, by using dynamic SQL instead of static SQL, but I think both of us do not want to go there, do we?
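
Just to illustrate what I mean (a hypothetical sketch – the parameter names and the use of dbms_assert are my assumptions, and this is exactly the kind of code I would rather avoid):

    PROCEDURE get_addresses
    (
        i_owner_table  IN varchar2, -- e.g. 'PERSON_ADDRESSES' or 'COMPANY_ADDRESSES'
        i_owner_column IN varchar2, -- e.g. 'PERSON_ID' or 'COMPANY_ID'
        i_owner_id     IN integer,
        o_addresses    OUT address_tt
    ) IS
    BEGIN
        -- the table and column names cannot be bound, only concatenated;
        -- dbms_assert.simple_sql_name guards against SQL injection
        EXECUTE IMMEDIATE
            'select address_t(a.id, a.street, a.house_number, a.city, a.country)
             from   addresses a, ' ||
                    dbms_assert.simple_sql_name(i_owner_table) || ' oa
             where  oa.' || dbms_assert.simple_sql_name(i_owner_column) || ' = :owner_id
             and    a.id = oa.address_id'
            BULK COLLECT INTO o_addresses
            USING i_owner_id;
    END get_addresses;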

Toon wrote:
To me this alternative (Multiple Tables) *is* the way to deal with the information requirement.
Having entity_addresses introduces a column in my design (owner_id) whose meaning depends upon the value of another column (owner_type), which is just bad in my opinion.

Isn’t this the case with every composite key, that we need the values of all the columns that compose the key in order to uniquely identify the entity?

Toon wrote:
This alternative also offers the best chances of being able to easily cater for possible future differences in information requirements at the relationship-level. Eg. we may want to track since when a person has had a particular address, but we don’t care for that information requirement in case of companies: you then just add the ‘as-of’ date column to the person_addresses table only, and be done with it.

In my opinion, this becomes a different case. If the entities are not identical in their attributes, I would consider them different entities, and therefore each one deserves its own table. And since I’m not afraid of changing my schema and my code, I would split the generic table into multiple tables only when the need arises.

Toon wrote:
It is the most simple, and easy to understand for others that come join the maintenance team.
Imagine coming in and seeing the table with invisible generated virtual columns that have FK’s on them. Hmmm…

I would probably say: “what a cool idea” 😉

Adding a Column with a Default Value and a Constraint

In the previous parts of this series I showed that Oracle does a nice optimization – one that may save plenty of time – when, in a single ALTER TABLE statement, we add a new nullable column with no default value and a corresponding:
– inline (a.k.a. “column-level”) check constraint
or
– foreign key constraint (either inline or out-of-line)

In these cases Oracle enables the constraint (for future DML statements) and marks the constraint as VALIDATED (stating that existing records respect the constraint) without actually performing any check – counting on the fact that all the existing records have NULL in the new column. This is a beautiful optimization – if the table already contains many records, then skipping the (unnecessary) validation phase can save lots of time (during which the table is locked in a highly restrictive mode [I’ll discuss locking in the next part of this series]).

What about adding a new column with a (constant) default value?

Since all the existing records will have the same value in the new column (the default value) – then the validation phase can be very simple and short. Oracle should only check that this single value respects the constraint; there is no reason to visit each and every record in the table for repeating the same validation over and over again. Unfortunately, this kind of optimization is not done.

I added this suggestion to the OTN Database Ideas section: http://community.oracle.com/ideas/17751. Please vote up if you think it’s a good idea.

Such an optimization would be particularly valuable in the latest releases, because as of 11g adding a NOT NULL column with a default value is a metadata-only operation, and as of 12c the same is true also for adding a nullable column with a default value.

Here is an example with a check constraint:

> alter table t add (c number default 8 not null constraint c_chk check(c>0)); 

We can see in the SQL trace file the following:

=====================
PARSING IN CURSOR #407309168 len=110 dep=1 uid=194 oct=3 lid=0 tim=745434447778 hv=3957567910 ad='7ffb3434b250' sqlid='2z018fgpy7cd6'
 select /*+ all_rows ordered dynamic_sampling(2) */ A.rowid, :1, :2, :3 from "DEMO"."T" A where not ( c>0)
END OF STMT
.
.
.
FETCH #407309168:c=781250,e=3187349,p=142858,cr=142882,cu=0,mis=0,r=0,dep=1,og=1,plh=1601196873,tim=745437636908
STAT #407309168 id=1 cnt=0 pid=0 pos=1 obj=99064 op='TABLE ACCESS FULL T (cr=142882 pr=142858 pw=0 time=3187348 us cost=38895 size=1250000 card=50000)'
CLOSE #407309168:c=0,e=3,dep=1,type=0,tim=745437636980

And here is an example with a foreign key constraint:

> alter table c add (
  parent_id number default 1 not null constraint c_fk_p references p(id)
  );
PARSING IN CURSOR #386313832 len=207 dep=1 uid=0 oct=3 lid=0 tim=747182328580 hv=3249972061 ad='7ffb09624948' sqlid='83w6f8v0vd8ux'
 select /*+ all_rows ordered dynamic_sampling(2) */ A.rowid, :1, :2, :3 
 from "DEMO"."C" A , "DEMO"."P" B 
 where( "A"."PARENT_ID" is not null) and( "B"."ID" (+)= "A"."PARENT_ID") and( "B"."ID" is null)
END OF STMT
.
.
.
FETCH #386313832:c=906250,e=3396711,p=142857,cr=142883,cu=0,mis=0,r=0,dep=1,og=1,plh=1351743368,tim=747185726213
STAT #386313832 id=1 cnt=0 pid=0 pos=1 obj=0 op='HASH JOIN ANTI (cr=142883 pr=142857 pw=0 time=3396711 us cost=40650 size=38000000 card=1000000)'
STAT #386313832 id=2 cnt=1000000 pid=1 pos=1 obj=98387 op='TABLE ACCESS FULL C (cr=142882 pr=142857 pw=0 time=616429 us cost=38895 size=25000000 card=1000000)'
STAT #386313832 id=3 cnt=1 pid=1 pos=2 obj=98386 op='INDEX FULL SCAN P_PK (cr=1 pr=0 pw=0 time=8 us cost=0 size=13 card=1)'
CLOSE #386313832:c=0,e=3,dep=1,type=0,tim=747185726320

(Lack of) Optimization of Unique Constraint Creation

In the previous parts of this series I showed that Oracle does a nice optimization – one that may save plenty of time – when, in a single ALTER TABLE statement, we add a new nullable column with no default value and a corresponding:
– inline (a.k.a. “column-level”) check constraint
or
– foreign key constraint (either inline or out-of-line)

What about unique constraints?

When we create a unique constraint on some column, Oracle creates by default a corresponding unique B*Tree index on that column, in order to enforce the constraint.
When we add a new column with no default value to a non-empty table, and in the same ALTER TABLE statement we create a unique constraint on that column, then the corresponding index will be created, by definition, with no entries (as B*Tree indexes do not contain entirely-NULL keys). It would be nice if Oracle simply created an empty index in this case, without scanning all the table rows, but unfortunately this kind of optimization is not done.
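For illustration, the statement in question would look like this (a sketch, assuming a populated table t); as explained above, Oracle still scans all the table rows here:

alter table t add (c number constraint c_uk unique);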

Optimization of Foreign Key Constraint Creation

In a recent post I showed that Oracle does a nice optimization when, in a single ALTER TABLE statement, we add a new nullable column with no default value together with an inline (a.k.a. “column-level”) check constraint. This optimization does not apply to out-of-line (“table-level”) check constraints.

So, what about foreign key constraints?

Clearly, when we add a new nullable column with no default value to a table that contains records, the new column, by definition, has NULL in all the existing records. If in the same ALTER TABLE statement we also create a foreign key constraint on the new column, then, by definition, the constraint is valid for all the existing records, because NULLs always pass the foreign key check. So in this case it is unnecessary to visit each and every record of the table during the validation phase of the constraint creation, and indeed Oracle applies this optimization, for both inline and out-of-line foreign key constraints. Let’s see it.
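A minimal example (a sketch; assume c is a populated child table and p is a parent table with a primary key on id):

alter table c add (
  parent_id number constraint c_fk_p references p(id)
);

Since the new column is NULL in every existing record, the validation scan is skipped and the statement returns almost immediately.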