I came across the concept of global temporary tables in Oracle Database. According to a tutorial on global temporary tables:
The data in a global temporary table is private, such that data inserted by a session can only be accessed by that session.
I believe this could allow for the following scenario: a client starts a stored procedure twice, filling up a global temporary table beforehand each time with data for the sp to process. The two computations don’t interfere with each other, since each sees only its own data, as opposed to sharing data, as would be the case with a regular table.
However, it all depends on what constitutes a session, and when one starts and ends.
According to the Oracle Database Concepts Glossary:
Session: A logical entity in the database instance memory that represents the state of a current user login to a database. A single connection can have 0, 1, or more sessions established on it.
Connection: Communication pathway between a client process and an Oracle database instance.
This didn’t clear up all of my confusion, so here’s my concrete question:
My application maintains a single connection to an Oracle database server throughout its lifetime, and it executes multiple SQL commands/queries on that same connection consecutively. Will each of these queries be its own session, will they share a session, or is the answer not that simple?
Consider this pseudo code:
```
Command c1 = new Command("insert into TMP_TABLE (FOO) values ('TEST')");
Command c2 = new Command("select FOO from TMP_TABLE");

c1.Execute();
foreach (var value in c2.Query().Select("FOO"))
{
    print(value);
}
```
I have run the above in my application, and as expected, the command `c2` returned zero results. I take this to mean that each query constitutes its own session, which is what I'm looking for. But can I rely on that?
Answer
What is the lifecycle of an Oracle session?
A session is a logical entity that exists from the time your application code connects to the database to the time it disconnects.
Sessions are independent of the physical (resource-based) entities that support them, such as connections, server processes, network connections, etc.
In the simplest (and, in my opinion, most common) configuration (i.e., "dedicated server"), there is a one-to-one relationship between the logical session and the physical connection and server process. In more advanced configurations that support large numbers of users, physical resources (connections, processes, network resources) may be shared by and multiplexed among multiple sessions.
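You can check this empirically. The following sketch (assuming you can issue ad-hoc queries over your application's connection) asks Oracle for the current session's identifiers; if two consecutive commands on the same connection return the same values, they are sharing one session:

```sql
-- Run this twice over the same connection; identical values
-- mean both statements executed in the same session.
SELECT sys_context('USERENV', 'SID')       AS sid,
       sys_context('USERENV', 'SESSIONID') AS audit_session_id
FROM   dual;
```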
I have run the above in my application, and as expected, the command c2 returned zero results. I take this to mean that each query constitutes its own session
Unless your application code is disconnecting from or closing/releasing its connection between calls, this is almost certainly not the case.
Global temporary tables can be created with two options: `ON COMMIT DELETE ROWS` and `ON COMMIT PRESERVE ROWS`. The first causes all GTT data to be removed upon a `COMMIT` or `ROLLBACK`. The second causes GTT data to persist for the entire session.
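As a sketch, the two variants look like this (the table and column names are invented for illustration):

```sql
-- Rows vanish at the end of each transaction (COMMIT or ROLLBACK).
CREATE GLOBAL TEMPORARY TABLE tmp_table_tx (
  foo VARCHAR2(100)
) ON COMMIT DELETE ROWS;

-- Rows survive until the session ends (or you remove them yourself).
CREATE GLOBAL TEMPORARY TABLE tmp_table_sess (
  foo VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;
```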
If you are NOT using `ON COMMIT DELETE ROWS`, then you should not assume the GTT is empty at the start of a given call. If you require the GTT to be empty at the start of a procedure, you must `DELETE FROM` your GTT (or `TRUNCATE` it) at the beginning of the procedure (or at the very end).
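For example, using the `TMP_TABLE` name from the question, the clean-up step is one of:

```sql
-- Transactional: can be rolled back along with the rest of your work.
DELETE FROM tmp_table;

-- Or: TRUNCATE on a GTT removes only the current session's rows
-- (note it is DDL, so it cannot be rolled back).
TRUNCATE TABLE tmp_table;
```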
Note that even with this restriction (i.e., having to handle emptying out the GTT yourself within a session), global temporary tables are still useful because they still protect one session from seeing another session’s data. They also write less redo than regular tables, especially in version 12.1 and later. In fact, as of Oracle 12.1, they need not write any redo at all, making them useful in read-only and standby databases.
Aside…
I have run the above in my application, and as expected, the command c2 returned zero results
For this to make sense, your GTT must have been created with `ON COMMIT DELETE ROWS`, and your application code must have some kind of "autocommit" feature enabled that automatically commits after each `Command.Execute()`. Otherwise, you are misinterpreting or misreporting your test results.