EBR – Part 10: Data Dictionary Views for Editioning Views

This is part 10 of a post series about Oracle Edition-Based Redefinition.

Visit the index page for all the parts of the series

Introduction

In a previous post I wrote about Editioning Views and their significant role in online application upgrades using EBR.
In this post we’ll see how editioning views are represented in the data dictionary views.

I’ll discuss only the USER_* views, but everything holds for the corresponding ALL_*, DBA_* and CDB_* views as well.

Setup

At this point, after completing three upgrades already (in part 3, part 5 and part 9), our latest edition is V3, and we have the PEOPLE$0 table and two actual instances of the PEOPLE editioning view (one in V2 and one in V3).

Let’s create a new edition – V4:

-- connected as system
ORA$BASE> create edition v4;

Edition created.

ORA$BASE> grant use on edition v4 to demo_ebr;

Grant succeeded.

In the new edition, let’s change the PEOPLE editioning view and give different aliases to some of the columns:

ORA$BASE> conn demo_ebr
Enter password:
Connected.
ORA$BASE> @set_edition v4

Session altered.

V4> create or replace editioning view people as
  2  select id,
  3         first_name   as given_name,
  4         last_name    as family_name,
  5         phone_number,
  6         eye_color
  7  from   people$0;

View created.

And let’s also create a regular view on top of PEOPLE:

V4> create view brown_eyed_people as
  2  select *
  3  from   people
  4  where  eye_color = 'Brown';

View created.

Note that the BROWN_EYED_PEOPLE regular view references the PEOPLE editioning view, not the table. As described and emphasized in the previous parts, only editioning views should reference tables directly; all the other editioned objects (including regular views) should reference only editioning views (or other regular views).
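
We can verify this chain of references by querying USER_DEPENDENCIES (a side query, not part of the original session; given the objects above, we expect output like this):

V4> select name, type, referenced_name, referenced_type
  2  from   user_dependencies
  3  where  type = 'VIEW'
  4  order  by 1;

NAME                 TYPE  REFERENCED_NAME  REFERENCED_TYPE
-------------------- ----- ---------------- ----------------
BROWN_EYED_PEOPLE    VIEW  PEOPLE           VIEW
PEOPLE               VIEW  PEOPLE$0         TABLE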

USER_OBJECTS

In USER_OBJECTS and in USER_OBJECTS_AE both regular views and editioning views appear simply as views (the OBJECT_TYPE column contains the value “VIEW”). Editioning views have no distinctive indication here.

V4>  select object_type,
  2         object_name,
  3         edition_name
  4  from   user_objects
  5  where  object_type in ('TABLE', 'VIEW')
  6  order by 1,2;

OBJECT_TYPE  OBJECT_NAME                    EDITION_NAME
------------ ------------------------------ ------------
TABLE        PEOPLE$0
VIEW         BROWN_EYED_PEOPLE              V4
VIEW         PEOPLE                         V4

V4>  select object_type,
  2         object_name,
  3         edition_name
  4  from   user_objects_ae
  5  where  object_type in ('TABLE', 'VIEW')
  6  order by 1,2,3;

OBJECT_TYPE  OBJECT_NAME                    EDITION_NAME
------------ ------------------------------ ------------
TABLE        PEOPLE$0
VIEW         BROWN_EYED_PEOPLE              V4
VIEW         PEOPLE                         V2
VIEW         PEOPLE                         V3
VIEW         PEOPLE                         V4 

USER_VIEWS

In USER_VIEWS and in USER_VIEWS_AE there is a column – EDITIONING_VIEW – that tells whether a view is an editioning one or not:

V4>  select view_name,
  2         editioning_view
  3  from   user_views
  4  order by 1;

VIEW_NAME            EDITIONING_VIEW
-------------------- -----------------
BROWN_EYED_PEOPLE    N
PEOPLE               Y

V4>  select view_name,
  2         editioning_view,
  3         edition_name
  4  from   user_views_ae
  5  order by 1,3;

VIEW_NAME            EDITIONING_VIEW   EDITION_NAME
-------------------- ----------------- ------------
BROWN_EYED_PEOPLE    N                 V4
PEOPLE               Y                 V2
PEOPLE               Y                 V3
PEOPLE               Y                 V4

USER_EDITIONING_VIEWS

There is a dedicated data dictionary view – USER_EDITIONING_VIEWS – that contains only the editioning views.
For each editioning view we can see the view name and the name of the table covered by this view. Recall that (in a specific edition) an editioning view covers one and only one table, and a table may be covered by at most one editioning view.

V4> select * from user_editioning_views;

VIEW_NAME       TABLE_NAME
--------------- ---------------
PEOPLE          PEOPLE$0

And there is a corresponding USER_EDITIONING_VIEWS_AE view:

V4> select * from user_editioning_views_ae;

VIEW_NAME       TABLE_NAME      EDITION_NAME
--------------- --------------- ------------
PEOPLE          PEOPLE$0        V2
PEOPLE          PEOPLE$0        V3
PEOPLE          PEOPLE$0        V4

USER_EDITIONING_VIEW_COLS

There is a dedicated data dictionary view – USER_EDITIONING_VIEW_COLS – that lists, for every editioning view, all of its columns along with the corresponding columns of the covered table.

V4> select *
  2  from user_editioning_view_cols
  3  order by 1,2;

VIEW_NAME       VIEW_COLUMN_ID VIEW_COLUMN_NAME     TABLE_COLUMN_ID TABLE_COLUMN_NAME
--------------- -------------- -------------------- --------------- --------------------
PEOPLE                       1 ID                                 1 ID
PEOPLE                       2 GIVEN_NAME                         2 FIRST_NAME
PEOPLE                       3 FAMILY_NAME                        3 LAST_NAME
PEOPLE                       4 PHONE_NUMBER                       4 PHONE_NUMBER
PEOPLE                       5 EYE_COLOR                          5 EYE_COLOR

There is also a corresponding USER_EDITIONING_VIEW_COLS_AE view. Let’s use it to see how the PEOPLE editioning view has evolved over the last three editions:

V4> select v2,v3,v4
  2  from   (select *
  3          from   user_editioning_view_cols_ae
  4          where  view_name = 'PEOPLE')
  5  pivot(
  6    max(view_column_name || nullif(' [' || nullif(table_column_name, view_column_name) || ']', ' []'))
  7    for edition_name in ('V2' as v2, 'V3' as v3, 'V4' as v4)
  8  )
  9  order  by view_column_id;

V2             V3             V4
-------------- -------------- -------------------------
ID             ID             ID
FIRST_NAME     FIRST_NAME     GIVEN_NAME [FIRST_NAME]
LAST_NAME      LAST_NAME      FAMILY_NAME [LAST_NAME]
PHONE_NUMBER   PHONE_NUMBER   PHONE_NUMBER
               EYE_COLOR      EYE_COLOR

For other parts of the EBR series, please visit the index page.

Wrong Results with IOT, Added Column and Secondary Index

Update: I reported this bug to Oracle, and they fixed it in version 20.1.

I found a “wrong results” bug yesterday, easily reproduced in 11g, 12c and 18c.

In short, we may get wrong results under the following circumstances:

  • We have an Index-Organized Table (IOT) with a multi-column primary key, populated with rows
  • The table has a secondary index on a subset of the primary key columns
  • We add another column to the existing IOT
  • We select from the IOT while accessing it via the secondary index

Following is a simple example (also uploaded to livesql.oracle.com).

We create an index-organized table named iot, with two columns – x and y.
The primary key is composed of both columns.

SQL> create table iot (
  2    x number,
  3    y number,
  4    constraint iot_pk primary key (x,y)
  5  ) organization index;

Table created.

We create a secondary index on y:

SQL> create index secondary_idx on iot (y);

Index created.

We populate the table with some rows:

SQL> insert into iot select rownum,mod(rownum,3)
  2  from dual
  3  connect by level<=7;

7 rows created.

SQL> commit;

Commit complete.

Now we add another column, z, to the table, and we set some value in z in one row:

SQL> alter table iot add z number;

Table altered.

SQL> update iot set z=42 where x=1;

1 row updated.

SQL> commit;

Commit complete.

So currently this is the content of the table:

SQL> select * from iot;

         X          Y          Z
---------- ---------- ----------
         1          1         42
         2          2
         3          0
         4          1
         5          2
         6          0
         7          1

7 rows selected.

Note that z contains the value 42 in the first row and null in all the other rows.
But if we access the table via the secondary index – for example getting all the records where y=1 – the value of z from the first row “spills over” into the other rows, and we get wrong results:

SQL> set autotrace on explain
SQL> select * from iot where y=1;

         X          Y          Z
---------- ---------- ----------
         1          1         42
         4          1         42
         7          1         42

3 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 177722221

-----------------------------------------------------------------------------------
| Id  | Operation         | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |               |     3 |   117 |     1   (0)| 00:00:01 |
|*  1 |  INDEX UNIQUE SCAN| IOT_PK        |     3 |   117 |     1   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN| SECONDARY_IDX |     1 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("Y"=1)
   2 - access("Y"=1)

Note
-----
   - dynamic sampling used for this statement (level=2)

It seems that the problem is not with the secondary index itself, as we do get the correct rows, but rather with the way the table is accessed via the secondary index.
If we hint the query to access the table directly, we get the correct results:

SQL> select /*+ index_ffs (iot) */ * from iot where y=1;

         X          Y          Z
---------- ---------- ----------
         1          1         42
         4          1
         7          1

3 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 3252171408

-------------------------------------------------------------------------------
| Id  | Operation            | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |        |     3 |   117 |     2   (0)| 00:00:01 |
|*  1 |  INDEX FAST FULL SCAN| IOT_PK |     3 |   117 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("Y"=1)

Note
-----
   - dynamic sampling used for this statement (level=2)

EBR – Part 9: Adding a New Column

This is part 9 of a post series about Oracle Edition-Based Redefinition.

Visit the index page for all the parts of the series

Introduction

In part 7 (“Editioning Views”) I introduced our third development use case – adding a new column that represents a new business logic.

I emphasize the fact that it’s a new business logic, because sometimes we add new columns that come instead of existing columns, to replace an existing business logic. As we’ll see in future posts, such cases require some additional handling in online upgrades.

Now, after part 8 (“The Last Planned Downtime”), in which we converted our schema to use editioning views and made it fully ready for online upgrades, we can implement the requirement in an online way.

The Task

We want to add a new attribute – Eye Color – to the Person entity.
For that we need to add a new column – EYE_COLOR – to the PEOPLE table, and make the corresponding changes in the PEOPLE_DL and APP_MGR packages (adding an input parameter – i_eye_color – to the people_dl.add procedure and changing the implementation of people_dl.add and app_mgr.do_something accordingly).
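
In outline, the upgrade will involve statements of the following form (a sketch only; the data type of the new column is an assumption here, and the package changes are elided):

-- tables are not editioned, so the column is added once, to the covered table:
alter table people$0 add (eye_color varchar2(20));

-- in the new edition, expose the new column through the editioning view:
create or replace editioning view people as
  select id, first_name, last_name, phone_number, eye_color
  from   people$0;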

The Current State

At this point, V2 is our latest edition and it is exposed to the production clients. When connected to V2, we see the following objects: …

EBR – Part 8: The Last Planned Downtime

This is part 8 of a post series about Oracle Edition-Based Redefinition.

Visit the index page for all the parts of the series

Introduction

I concluded the previous post by stating that the application code should never reference tables directly; instead, every table should be covered by an editioning view and the application code should reference the views.

Starting from Scratch?

If you start developing a new system from scratch, I strongly recommend following this practice, even if initially there is no requirement for online upgrades. That way, you will be ready for online upgrades from day one, so should such a requirement arise at a later stage you will not need to change anything – neither your data structures nor your code. The only overhead is maintaining an editioning view for every table, and I’m sure you’ll realize very quickly that this overhead is negligible. As long as downtime during your upgrades is acceptable, you only need to enable your schema owner for editions (a one-time operation, as shown in part 3), and (by default) you will simply use a single edition – the default one.
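
For reference, that one-time operation is a single statement, executed by a DBA (as shown in part 3):

-- connected as system
ORA$BASE> alter user demo_ebr enable editions;

User altered.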

Existing Systems

If you have an existing system, and you want to enjoy the benefits of online upgrades and zero downtime, you need to do a one-time conversion of your schema – covering all the tables with editioning views. Actually, even if you have an existing system and you do not want to enjoy these benefits, I would still recommend getting ready for online upgrades now. The rationale is the same as in the previous section (“Starting from Scratch?”).
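
The conversion itself follows the same pattern for every table. Here is a sketch, using the PEOPLE table from this series as the example (the column list matches the pre-upgrade table):

-- rename the table, freeing up its original name:
alter table people rename to people$0;

-- cover it with an editioning view that takes the original name:
create editioning view people as
  select id, first_name, last_name, phone_number
  from   people$0;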

Converting the Demo Schema

Let’s first handle the demo schema I’ve been using in this post series. We currently have the following objects: …

EBR – Part 7: Editioning Views

This is part 7 of a post series about EBR.

Visit the index page for all the parts of the series

Introduction

Our next use case is adding a column that represents a new business logic to the PEOPLE table, and making the corresponding changes in the PEOPLE_DL and APP_MGR packages. Of course, as we speak about EBR, the upgrade from the previous version to the new one should be online.
An online upgrade means that the application users should be able to continue working uninterruptedly. The code objects that they use should remain valid and available at all times.
In addition to the challenges raised by the first and second use cases, the current use case introduces another challenge. And unlike the previous challenges, this one cannot be overcome just by using a new edition.

The Current State

At this point, after completing two upgrades already – in part 3 and part 5 – we have three editions, with the following actual objects:

V2> select object_name,object_type,status, edition_name
  2  from   user_objects
  3  order  by object_name,object_type;

OBJECT_NAME  OBJECT_TYPE  STATUS  EDITION_NAME
------------ ------------ ------- ------------
APP_MGR      PACKAGE      VALID   ORA$BASE
APP_MGR      PACKAGE BODY VALID   V2
PEOPLE       TABLE        VALID
PEOPLE_DL    PACKAGE      VALID   V2
PEOPLE_DL    PACKAGE BODY VALID   V2
PEOPLE_PK    INDEX        VALID

6 rows selected.

As discussed in part 6, a table is a non-editioned object, and therefore we have a single instance of the PEOPLE table – referenced by all the instances of the PEOPLE_DL package body in all the editions.

The Task

We want to add a new attribute – Eye Color – to the Person entity.
For that we need to add a new column – EYE_COLOR – to the PEOPLE table, and make the corresponding changes in the PEOPLE_DL and APP_MGR packages (adding an input parameter – i_eye_color – to the people_dl.add procedure and changing the implementation of people_dl.add and app_mgr.do_something accordingly).

Our Problem

If we try to add the new column to the table, then …

A Single Query with Many Filter Combinations – Part 3

In a recent post I suggested a way to write a single SQL query that filters a table by one or more of several columns.
Here is the query from that post:

select * from employees where rowid in (
  select rid from (
    select rowid rid from employees where department_id = :department_id
    union all
    select rowid rid from employees where job_id = :job_id
    union all
    select rowid rid from employees where manager_id = :manager_id
    union all
    select rowid rid from employees where last_name = :last_name)
  group by rid
  having count(*) = nvl2(:department_id,1,0) + 
                    nvl2(:job_id,1,0) + 
                    nvl2(:manager_id,1,0) + 
                    nvl2(:last_name,1,0)
);

This is actually a simplification of a real task I handled some time ago.
In today’s post I’d like to look at a variation of the problem, which is closer to the real issue I handled. …

A Single Query with Many Filter Combinations – Part 2

In the previous post I suggested a way to write a single SQL query that filters the EMPLOYEES table (as an example) by one or more of the following columns: DEPARTMENT_ID, JOB_ID, MANAGER_ID and LAST_NAME.

Here is the query from the previous post:

select * from employees where rowid in (
  select rid from (
    select rowid rid from employees where department_id = :department_id
    union all
    select rowid rid from employees where job_id = :job_id
    union all
    select rowid rid from employees where manager_id = :manager_id
    union all
    select rowid rid from employees where last_name = :last_name)
  group by rid
  having count(*) = nvl2(:department_id,1,0) + 
                    nvl2(:job_id,1,0) + 
                    nvl2(:manager_id,1,0) + 
                    nvl2(:last_name,1,0)
);

In this query, it is assumed that at least one of the bind variables (:department_id, :job_id, :manager_id, :last_name) contains a non-null value.
If all of them contain nulls, then the result set of the query is empty. Note that in this case Oracle will not even try looking for data; as Stew Ashton commented on the previous post, an Index Range Scan does no consistent gets for a predicate such as “column_name = :bind_variable” when :bind_variable is null (because Oracle knows that no row can satisfy a “column = null” condition).
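
As a quick side demonstration (not from the original post) of why a “column = null” condition can never be satisfied:

SQL> variable department_id number
SQL> -- the bind variable is left null
SQL> select count(*) from employees where department_id = :department_id;

  COUNT(*)
----------
         0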

If we want to support filtering by zero or more of the columns, we should add (union) a subquery to handle the case when all the bind variables are nulls: …

A Single Query with Many Filter Combinations

Let’s assume the EMPLOYEES table (from the HR schema) contains many records, and we want to write an (efficient) SQL query that filters it by one or more of the following columns: DEPARTMENT_ID, JOB_ID, MANAGER_ID and LAST_NAME.
For example:

  • in one execution we may want to get all the employees with DEPARTMENT_ID = 80
  • in a second execution, all the employees whose LAST_NAME is King
  • in a third execution, all the employees whose JOB_ID is ST_CLERK and whose MANAGER_ID is 124
  • and so on
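
For instance, a dedicated query for the third combination above would simply be:

select *
from   employees
where  job_id = 'ST_CLERK'
and    manager_id = 124;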

These columns are indexed, each one in a separate index:

SQL> select index_name,
  2         listagg(column_name, ',') within group(order by column_position) index_columns
  3  from user_ind_columns
  4  where table_name = 'EMPLOYEES'
  5  group by index_name;

INDEX_NAME           INDEX_COLUMNS
-------------------- --------------------
EMP_DEPARTMENT_IX    DEPARTMENT_ID
EMP_EMAIL_UK         EMAIL
EMP_EMP_ID_PK        EMPLOYEE_ID
EMP_JOB_IX           JOB_ID
EMP_MANAGER_IX       MANAGER_ID
EMP_NAME_IX          LAST_NAME,FIRST_NAME

6 rows selected.

Many Queries, Many Indexes

We can write 15 different queries – one for every possible combination. …

Constraint Optimization Summary

This is the last part of a series about Constraint Optimization.
In this post I’ll summarize the conclusions from the previous parts.

When we add a constraint to an existing table, there are two aspects that are worth taking into consideration: duration and availability.

Duration

When the table contains a significant number of rows, adding a constraint may take a lot of time. In some cases Oracle applies a very nice optimization that can reduce this time to almost zero.

In part 1 we saw that such optimization happens when adding a column and an inline check constraint on that column in a single statement; and that this optimization does not happen for out-of-line check constraints.
In part 2 we saw that this optimization may lead to data integrity violation in some extreme cases.
In part 3 we saw that such optimization happens when adding a column and a foreign key constraint on that column in a single statement (for both inline and out-of-line constraints).
In part 4 we saw that unfortunately this optimization never happens for unique constraints.
In part 5 we saw that this optimization doesn’t happen if the added column is defined with a default value.
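
To make the distinction between the cases concrete, here is a sketch of the three statement forms for a check constraint, using a made-up table t (not taken from the original posts):

-- column + inline check constraint in a single statement (the fast case):
alter table t add (c number check (c > 0));

-- column + out-of-line check constraint in a single statement (not optimized):
alter table t add (c number, constraint c_chk check (c > 0));

-- column and constraint in separate statements (not optimized):
alter table t add (c number);
alter table t add constraint c_chk check (c > 0);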

Availability

By default, adding a constraint is an offline operation. It means that it cannot start as long as the table is locked by active transactions, and that it blocks DML statements on the table for the duration of the operation. Obviously, the longer the constraint addition takes, the higher the significance of the availability issue.

In part 6 we saw how to add check constraints and foreign key constraints in an online way, by splitting the single operation into several ones.
In part 7 we saw how to add unique constraints in an online way (assuming Enterprise Edition).
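
The essence of the online technique for check and foreign key constraints is to add the constraint without validating the existing rows, and to validate it in a separate step (a sketch; the full details and caveats are in part 6):

-- new DML is verified from now on, but existing rows are not checked yet:
alter table t add constraint c_chk check (c > 0) enable novalidate;

-- existing rows are validated without blocking concurrent DML:
alter table t modify constraint c_chk validate;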

Summary

The following table summarizes all of the above:

                        Adding a column and       Adding a column and an     Adding a column and a
                        an inline constraint      out-of-line constraint     constraint in separate
                        in a single statement     in a single statement      statements
----------------------- ------------------------- -------------------------- ---------------------------
Check Constraint        Fast                      Duration depends on        Duration depends on
                        Offline                   table size                 table size
                                                  Offline                    Online can be achieved
Foreign Key Constraint  Fast                      Fast                       Duration depends on
                        Offline                   Offline                    table size
                                                                             Online can be achieved
Unique Constraint       Duration depends on       Duration depends on        Duration depends on
                        table size                table size                 table size
                        Offline                   Offline                    Online can be achieved
                                                                             in Enterprise Edition

Implementing Arc Relationships with Virtual Columns? Or Not?

I wrote a post some time ago about implementing arc relationships using virtual columns.
Recently, Toon Koppelaars wrote a detailed and reasoned comment on that post. Since I admire Toon, getting his point of view on something that I wrote is a privilege for me, regardless of whether he agrees with me or disagrees (and just to be clear, it’s the latter this time). I think that having a public (and civilized) discussion – this time about principles of data modeling and implementation – is a good thing, even if at the end we don’t convince each other.
Therefore I thought Toon’s comment deserves a post of its own. So I’ll quote here everything that he wrote in the comment, with my response after every sentence or paragraph. And everyone is welcome to add their own points of view in the comments section.

I’d like to emphasize that basically I agree with Toon about the general guidelines. What makes life harder, as always, are the nuances – or, as we know them well, “it depends”.

And just one general observation first. The starting point of the original post was that the design included an arc. This is the given fact, and my intent was to suggest an implementation for this given fact. I don’t want to put words in your mouth (or keyboard), Toon, but I think that the basis for your arguments is that the arc, as a design concept, is wrong. I think you would prefer to see several entities in the ERD instead of the one with the arc. Am I right in this interpretation?

Toon wrote:
Oren, I just cannot, not comment on this post 😉

And I really appreciate it. Really.

Toon wrote:
Re: Multiple Tables
1) What do you mean by “maintaining another table”? And why is that a disadvantage?

I’ll refer to it after the next paragraph.

Toon wrote:
In my opinion, you should always have a software factory in which the act of introducing a new table is cheap. Why? Because otherwise you end up with database designs with few tables, into which more than few “things” are stored. Which is bad. It’s bad for understandability, it’s bad for performance, it’s bad for future maintenance.

And I completely agree with you.

Toon wrote:
Apparently adding a table is not cheap in your factory?
But, yet, dropping and recreating a CHECK constraint (to involve a new single-char value) is cheap?

As you know me and my database development principles, beliefs and guidelines (for example, here and here), I assume this is a rhetorical question. Of course adding a table is cheap. Actually it is one of the cheapest tasks.
And dropping and recreating a CHECK constraint is not expensive either. It may take some time to VALIDATE it if the table is big, but this is not a reason not to do it. And anyway, even if something needed to implement a good design is “expensive” (but still realistic), that is not a good reason to avoid doing it.

But, if we have, by design, several tables that are identical – in column names, data types, indexes, constraints – everything except for the table on the other side of the “owner” foreign key, then I start to feel uncomfortable. I get the same feeling that I get when I see two procedures that do the same (or almost the same) thing.
And although many times the arc appears for simple join tables, as in my example, this is not always the case. The entity with the arc may include many columns.
In addition, by “maintaining another table” I did not mean just the CREATE TABLE statement. When a new attribute is added to the designed entity, we need to add a new column – exactly the same column – to all the tables. And the same is true for any change – renaming a column, changing data types, adding or removing constraints, etc.
In my opinion, if I have to repeat the same work over and over instead of doing it just once, I probably do something wrong.
It also reminds me of denormalization. Usually when we talk about denormalization we mean that the same data is kept more than once. I think it’s the same here – just that the duplicate data is kept in the data dictionary tables and not in our schema. And one of the disadvantages of denormalization is that it increases the risk of bugs; in this case, due to a human error we may end up with PERSON_ADDRESSES and COMPANY_ADDRESSES that are not exactly the same, although they should be.

Toon wrote:
2) Why is the implied code-difference for this solution, more difficult to maintain?

Here I was mainly thinking about lack of reusability.

Toon wrote:
Instead of having these two inserts:
insert into entity_addresses values(AID1,'P',PID1), and insert into entity_addresses values(AID2,'C',CID1).

You’d have these two:
insert into person_addresses values(AID1,PID1), and insert into company_addresses values(AID2,CID1).

Why is the former “cheaper”?

I was thinking of writing only one INSERT statement:

CREATE OR REPLACE PACKAGE BODY address_mgr AS

    PROCEDURE add_entity_address
    (
        i_address_id    IN entity_addresses.address_id%TYPE,
        i_owner_type_id IN entity_addresses.owner_type_id%TYPE,
        i_owner_id      IN entity_addresses.owner_id%TYPE
    ) IS
    BEGIN
        INSERT INTO entity_addresses
            (address_id,
             owner_type_id,
             owner_id)
        VALUES
            (i_address_id,
             i_owner_type_id,
             i_owner_id);
    END add_entity_address;
.
.
.
END address_mgr;
/

Toon wrote:
And this trickles down to all other kinds of SQL statements. Why is:

select *
from   addresses a
      ,entity_addresses ea
      ,persons p
where  a.id = ea.address_id
and    ea.object_type = 'P'
and    ea.object_id = p.person_id

apparently easier to write and/or maintain (i.e. cheaper), than:

select *
from   addresses a
      ,person_addresses pa
      ,persons p
where  a.id = pa.address_id
and    pa.person_id = p.person_id

?
I just don’t get that…

Again, I was thinking of more generic code, rather than duplicating the same code pattern. Something like this:

create type address_t as object (
  id integer,
  street varchar2(30),
  house_number varchar2(10),
  city varchar2(30),
  country varchar2(30)
)
/

create type address_tt as table of address_t 
/

CREATE OR REPLACE PACKAGE BODY address_mgr AS
.
.
.
    PROCEDURE get_addresses
    (
        i_owner_type_id IN entity_addresses.owner_type_id%TYPE,
        i_owner_id      IN entity_addresses.owner_id%TYPE,
        o_addresses     OUT address_tt
    ) IS
    BEGIN
        SELECT address_t(a.id, a.street, a.house_number, a.city, a.country)
        BULK   COLLECT
        INTO   o_addresses
        FROM   addresses        a,
               entity_addresses ea
        WHERE  ea.owner_type_id = i_owner_type_id
        AND    ea.owner_id = i_owner_id
        AND    a.id = ea.address_id;
    END get_addresses;
.
.
.
END address_mgr;
/

CREATE OR REPLACE PACKAGE BODY person_mgr AS

    PROCEDURE get_person
    (
        i_person_id  IN people.id%TYPE,
        o_first_name OUT people.first_name%TYPE,
        o_last_name  OUT people.last_name%TYPE,
        o_addresses  OUT address_tt
    ) IS
    BEGIN
        SELECT p.first_name,
               p.last_name
        INTO   o_first_name,
               o_last_name
        FROM   people p
        WHERE  p.id = i_person_id;
    
        address_mgr.get_addresses(i_owner_type_id => 'P',
                                  i_owner_id      => i_person_id,
                                  o_addresses     => o_addresses);
    END get_person;
.
.
.
END person_mgr;
/

CREATE OR REPLACE PACKAGE BODY company_mgr AS

    PROCEDURE get_company
    (
        i_company_id          IN companies.id%TYPE,
        o_name                OUT companies.name%TYPE,
        o_number_of_employees OUT companies.number_of_employees%TYPE,
        o_description         OUT companies.description%TYPE,
        o_addresses           OUT address_tt
    ) IS
    BEGIN
        SELECT c.name,
               c.number_of_employees,
               c.description
        INTO   o_name,
               o_number_of_employees,
               o_description
        FROM   companies c
        WHERE  c.id = i_company_id;
    
        address_mgr.get_addresses(i_owner_type_id => 'C',
                                  i_owner_id      => i_company_id,
                                  o_addresses     => o_addresses);
    END get_company;
.
.
.
END company_mgr;
/

By the way, we could achieve the same level of code reusability with the “multiple tables” option by using dynamic SQL instead of static SQL, but I think neither of us wants to go there, do we?
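
Just to illustrate what that would mean, here is a hypothetical sketch (the procedure and its parameters are made up for this illustration):

create or replace procedure add_address
(
    i_owner_table  in varchar2, -- e.g. 'PERSON_ADDRESSES' or 'COMPANY_ADDRESSES'
    i_owner_column in varchar2, -- e.g. 'PERSON_ID' or 'COMPANY_ID'
    i_address_id   in integer,
    i_owner_id     in integer
) is
begin
    -- the table and column names become runtime inputs, so static SQL is no longer an option
    execute immediate 'insert into ' || dbms_assert.simple_sql_name(i_owner_table) ||
                      ' (address_id, ' || dbms_assert.simple_sql_name(i_owner_column) || ')' ||
                      ' values (:1, :2)'
        using i_address_id, i_owner_id;
end add_address;
/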

Toon wrote:
To me this alternative (Multiple Tables) *is* the way to deal with the information requirement.
Having entity_adresses, introduces a column in my design (owner_id) whose meaning depends upon the value of another column (owner_type), which is just bad in my opinion.

Isn’t this the case with every composite key, that we need the values of all the columns that compose the key in order to uniquely identify the entity?

Toon wrote:
This alternative also offers the best chances of being able to easily cater for possible future differences in information requirements at the relationship-level. Eg. we may want to track since when a person has had a particular address, but we don’t care for that information requirement in case of companies: you then just add the ‘as-of’ date column to the person_addresses table only, and be done with it.

In my opinion, this becomes a different case. If the entities are not identical in their attributes, I would consider them different entities, and therefore each one deserves its own table. And since I’m not afraid of changing my schema and my code, I would split the generic table into multiple tables only when the need arises.

Toon wrote:
It is the most simple, and easy to understand for others that come join the maintenance team.
Imagine coming in and seeing the table with invisible generated virtual columns that have FK’s on them. Hmmm…

I would probably say: “what a cool idea” 😉