Instrumenting your PL/SQL code

Introduction

As an exceptional Oracle developer you already know the value of instrumenting your code. You have read enough Tom Kyte, Cary Millsap and Steve McConnell to understand that with the correct instrumentation, diagnosing problems, especially time-critical problems (are there any others?), becomes much easier.

So the question is not why you should instrument your code but how. In this article I will explain the installation and use of the PL/SQL Logger, which was originally developed by Tyler Muth and is now available on GitHub.

Installation

Once you have downloaded Logger from GitHub, unzip the contents to a directory that can be seen by your Oracle client (I use SQL*Plus). Navigate to the "releases" directory and expand the folder for the Logger release you wish to install. At the time of writing, version 2.1.2 was the latest, so that is the version used in this article.

Whilst you can install Logger into an existing schema, I am going to use the supplied create_user.sql script to install Logger into its own schema. So start SQL*Plus, connect to the database as SYSTEM or a user with the DBA role, and run the following script:

@create_user.sql

When prompted, enter the username, tablespace, temporary tablespace and password for the new schema. In this example I have accepted the suggested defaults of logger_user, users and temp, and supplied a password.

Now connect to the database as the newly created logger_user and run the installation script:

@logger_install.sql

If you run into problems with the installation, it is worth starting with the supplied documentation.

In this article I will be accessing the Logger objects from the standard Oracle HR user, so the following grant is required (run while connected as logger_user):

GRANT EXECUTE ON logger TO hr
/
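
To avoid prefixing every call with the schema name, you could also create a synonym in the HR schema. This is optional (and assumes HR has the CREATE SYNONYM privilege); the examples below stick with the fully qualified logger_user.logger name.

```sql
-- Optional convenience: lets HR write logger.log(...) instead of
-- logger_user.logger.log(...). Run while connected as HR.
CREATE SYNONYM logger FOR logger_user.logger
/
```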

Once installation is complete, you can view the status of Logger by running the following command:

exec logger.status;

which displays the following:


SQL> exec logger.status;
Project Home Page :

https://github.com/tmuth/Logger---A-PL-SQL-Logging-Utility/

Logger Version : 2.1.2
Debug Level : DEBUG
Capture Call Stack : TRUE
Protect Admin Procedures : TRUE
APEX Tracing : Enabled
SCN Capture : Disabled
Min. Purge Level : DEBUG
Purge Older Than : 7 days
Pref by client_id expire : 12 hours
For all client info see : logger_prefs_by_client_id

PL/SQL procedure successfully completed.

How to use

Below is a procedure that updates the salary of a given employee. Now let's pretend it is part of an overnight batch job and is just one of thousands of procedures that need to run. To the outside world this procedure is a black box. Without querying the employee after the procedure has run (and committed), you have no idea what happened when it was called.


CREATE OR REPLACE PROCEDURE raise_salary
(
   p_emp_id     IN employees.employee_id%TYPE,
   p_new_salary IN employees.salary%TYPE
)
IS
BEGIN

   UPDATE employees e
      SET e.salary = p_new_salary
    WHERE e.employee_id = p_emp_id;

END raise_salary;
/

Let's change that by adding some instrumentation.


CREATE OR REPLACE PROCEDURE raise_salary
(
   p_emp_id     IN employees.employee_id%TYPE,
   p_new_salary IN employees.salary%TYPE
)
IS
BEGIN

 logger_user.logger.log('Start of raise_salary');

 UPDATE employees e
   SET e.salary = p_new_salary
 WHERE e.employee_id = p_emp_id;

 logger_user.logger.log('Number of rows updated: {' || TO_CHAR(SQL%ROWCOUNT) || '}'); 

 logger_user.logger.log('End of raise_salary'); 

END raise_salary;
/

The revised version of the procedure captures some basic information: that the procedure started, how many rows were updated and, finally, that the procedure completed successfully.

Even in this simple case it is easy for the real code to be submerged by the instrumentation code. There is no one-size-fits-all solution; careful consideration is required to decide how much instrumentation is appropriate.

When the procedure is run, one of the Logger views can be queried and the output reviewed:


SELECT text
  FROM logger_user.logger_logs_5_min
/

Here is the output from the revised procedure after it has been run:


TEXT
----------------------------
Start of raise_salary
Number of rows updated: {1}
End of raise_salary

Level Up

The output from Logger is controlled by the level setting, which can be changed by calling the logger.set_level procedure. In the previous example I used the log procedure to instrument my routine, which means an entry will be inserted into the logger_logs table when the Logger level is set to DEBUG.

There are a number of additional log_* routines that you can use with the various logging levels. For instance, log_warning will insert an entry into the logger_logs table when the Logger level has been set to WARNING; likewise, if you are only interested in capturing errors, you can set the Logger level to ERROR and use the log_error procedure.
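
As a sketch of how the level filtering behaves, with the level set to ERROR only the error-level message is recorded; the lower-severity calls are discarded:

```sql
-- With the Logger level at ERROR, messages below that severity are not logged.
exec logger_user.logger.set_level(p_level => 'ERROR');

BEGIN
   logger_user.logger.log('debug message - not recorded');
   logger_user.logger.log_warning('warning message - not recorded');
   logger_user.logger.log_error('error message - recorded');
END;
/
```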

One of the nicest features of the level facility is the ability to turn logging on for specified clients rather than system-wide. In the following example, I have turned on debug logging for user one whilst user two continues to use the system without instrumentation messages being inserted.

Logger_user session

The following commands are run:

exec logger_user.logger.set_level(p_level => 'OFF');

This system-wide command turns Logger off, meaning no instrumentation is currently being recorded.

exec logger_user.logger.set_level(p_level => 'DEBUG', p_client_id => 'user_one');

This command turns on debug logging for the client identifier user_one (see the user_one session below for how this is set). This means any instrumentation that uses the logger.log procedure will be inserted into the log table for that client.

The following query shows the clients that logging is currently enabled for:

SELECT *
  FROM logger_prefs_by_client_id
/
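
When you have finished debugging a particular client, the client-specific setting can be removed again. A minimal sketch, assuming the unset_client_level procedure from the Logger API:

```sql
-- Turn client-specific logging off again for user_one.
exec logger_user.logger.unset_client_level(p_client_id => 'user_one');
```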

User_one session

The following command sets the client identifier (this is used by the call that enables logging for this client in the logger_user session above):


exec dbms_session.set_identifier('user_one');

I then execute the raise_salary procedure and commit.

User_two session

Almost a carbon copy of the user_one session, with the exception of the identifier.


exec dbms_session.set_identifier('user_two');

I then execute the raise_salary procedure and commit.

Now when I query the Logger view I see the following output, which shows that only user_one's actions have been captured.


SQL> SELECT client_identifier, text
2      FROM logger_user.logger_logs_5_min
3 /

CLIENT_IDENTIFIER TEXT
------------------------------------
user_one Start of raise_salary

user_one Number of rows updated: {1}

user_one End of raise_salary

But wait, there is more…

All of the main Logger procedures allow you to give context to the information you are recording. The context would normally be where the message originated from, e.g. the application, package or procedure name. This facility is provided by the p_scope parameter. Here is the raise_salary procedure used earlier, refactored to include the p_scope parameter.


CREATE OR REPLACE PROCEDURE raise_salary_scope
(
   p_emp_id     IN employees.employee_id%TYPE,
   p_new_salary IN employees.salary%TYPE
)
IS

   l_scope logger_user.logger_logs.scope%type := 'raise_salary_scope';

BEGIN

   logger_user.logger.log('Start', l_scope);

   UPDATE employees e
      SET e.salary = p_new_salary
    WHERE e.employee_id = p_emp_id;

   logger_user.logger.log('Number of rows updated: {' || TO_CHAR(SQL%ROWCOUNT) || '}', l_scope);

   logger_user.logger.log('End', l_scope);

END raise_salary_scope;
/

Each of the calls to the log procedure now receives the name of the procedure via the l_scope variable, which is passed as the p_scope parameter.

The following query shows the results of using the scope parameter:


SQL> SELECT scope, text
2      FROM logger_user.logger_logs_5_min
3    /

SCOPE                TEXT
------------------------------------------------
raise_salary_scope   Start

raise_salary_scope   Number of rows updated: {1}

raise_salary_scope   End

The logging of parameter values is handled within Logger by the p_params parameter. This is easier to show with an example rather than even more text!

Here is the final revision of the raise_salary procedure.


CREATE OR REPLACE PROCEDURE raise_salary_params
(
   p_emp_id     IN employees.employee_id%TYPE,
   p_new_salary IN employees.salary%TYPE
)
IS

   l_scope    logger_user.logger_logs.SCOPE%TYPE := 'raise_salary_params';
   l_params   logger_user.logger.tab_param;

BEGIN

   logger_user.logger.append_param(l_params, 'p_emp_id', p_emp_id);
   logger_user.logger.append_param(l_params, 'p_new_salary', p_new_salary);

   logger_user.logger.log('Start', l_scope, null, l_params);

   UPDATE employees e
      SET e.salary = p_new_salary
    WHERE e.employee_id = p_emp_id;

   logger_user.logger.log('Number of rows updated: {' || TO_CHAR(SQL%ROWCOUNT) || '}', l_scope);

   logger_user.logger.log('End', l_scope);

END raise_salary_params;
/

The parameters are added to a pre-defined Logger associative array, and that array is passed in the first call to the log procedure.

The parameters are stored in the extra column, and the following query shows how to display them.


SQL> SELECT text, extra
2      FROM logger_user.logger_logs_5_min
3    /

TEXT                         EXTRA
---------------------------------------------------------------------------------
Start                        *** Parameters *** p_emp_id: 100 p_new_salary: 4500

Number of rows updated: {1}

End

Summary

There is no getting away from the fact that instrumentation is an overhead to your application. The balance between too much and too little captured information, the extra code you need to develop, and the extra resources required to execute the larger code base all need to be carefully considered.

My opinion is that this is a price worth paying. When bad things happen and you are under pressure to track down the cause, you will find the problem far quicker with code that tells you exactly what is happening. Compare this with the alternative: peering into an enormous black box armed only with guesses.

Acknowledgements

Tyler Muth

Martin Giffy D’Souza

The second easiest way to get started with Oracle’s Application Express

Following on from my earlier post, using VirtualBox with the pre-built Database App Development Virtual Machine is, in my opinion, the second easiest method of getting up and running with Application Express. It is also my preferred method for prototyping, trying new features and providing answers to Stack Overflow APEX questions.

I particularly like the fact that once you have imported the pre-built Virtual Machine into VirtualBox you are good to go; there is no further configuration. It only takes three simple steps to start working with your Oracle environment.

Step 1 Start the Virtual Machine


Step 2 From the pre-built VM desktop select the Apex Track


Step 3 Start working with Apex


It is worth mentioning that the APEX credentials at the time of writing were:

Workspace: obe
Username: obe
Password: oracle

At the time of this post, the installation of VirtualBox can be found here and the pre-built Database App Development Virtual Machine here.

Using SQL with Associative Arrays of records in Oracle 12c

The ability to use SQL to operate on associative arrays, or PL/SQL tables as they were known when I started working as a database developer, is one of my favourite new features of 12c. I wasn't aware of it until I read the post by Connor McDonald.

Its announcement within the new features guide is easy to overlook:

“The table operator can now be used in a PL/SQL program on a collection whose data type is declared in PL/SQL. This also allows the data type to be a PL/SQL associative array. (In prior releases, the collection’s data type had to be declared at the schema level.)”

The following example was developed using the pre-built Database App Development Virtual Machine image with the versions:

SQL Developer: 4.0.0.13

Oracle Database: 12.1.0.1.0

The first step is to create a package spec which will contain the record type and the associative array type.

CREATE OR REPLACE PACKAGE cake
IS

   TYPE cake_record IS RECORD (cake_name   VARCHAR2(100),
                               cake_price  NUMBER(4));

   TYPE cake_aa IS TABLE OF cake_record
      INDEX BY PLS_INTEGER;

END cake;
/

There should be nothing surprising in this code: a record type is defined, and then an associative array type is defined based on that record.

To demonstrate the new functionality I create the following anonymous block:

DECLARE

laa_cakes   cake.cake_aa;
ln_price    NUMBER(4);

BEGIN

   laa_cakes(1).cake_name := 'Eccles';
   laa_cakes(1).cake_price := 10;

   laa_cakes(2).cake_name := 'Cinnamon Bun';
   laa_cakes(2).cake_price := 40;

  SELECT c.cake_price
    INTO ln_price
    FROM table(laa_cakes) c
   WHERE c.cake_name = 'Cinnamon Bun';

   DBMS_OUTPUT.PUT_LINE('The price of the Cinnamon Bun is: '|| TO_CHAR(ln_price));                     

END;
/

In the first part of the block, I create an associative array of the type defined in the package and populate it with a couple of cakes and their prices. I then use SQL via the TABLE pseudofunction to query the associative array to tell me the price of one of the cakes.

And once run, I see the following output:

The price of the Cinnamon Bun is: 40

Before 12c I would have had to loop through the associative array to find the item of interest.

It is important to note that in order to use the TABLE pseudofunction, the associative array type has to reside in a package spec.

Summary

In this post I have demonstrated how to use SQL with associative arrays of records.

Acknowledgements

Connor McDonald

A simple example of using the Oracle ROLLUP extension

Recently I needed to find out how many rows in a table had a certain status and display a grand total of the number of objects.

In this post, rather than create a demo table with the various statuses, I am going to base my example on the user_objects data dictionary view.

Using the following query I could see the types of objects as well as how many of each object I had.

SELECT object_type, 
       COUNT(*)
  FROM user_objects
 GROUP BY object_type
/

[Query output: each object type with its count]

All that was missing was the total (which should be 37). After various attempts to roll my own query to produce a total, which soon became unwieldy or over-complicated, I knew there must be a better way. A quick search found this excellent answer on Stack Overflow from Rob van Wijk, which used the Oracle GROUP BY extension ROLLUP.

Here is the query amended to use ROLLUP, along with the output. Note the total.

SELECT object_type, 
       COUNT(*)
  FROM user_objects
 GROUP BY ROLLUP(object_type)
/

[Query output: each object type with its count, plus the grand total of 37]

Amending my query to use ROLLUP was trivial (only the GROUP BY clause changed), and I now have the total I required, produced by a straightforward SQL query.
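
As a small refinement, the NULL that appears in the rollup row can be given a label. A sketch using the GROUPING function, which returns 1 for the superaggregate (total) row:

```sql
-- Label the rollup row 'TOTAL' instead of leaving object_type NULL.
SELECT DECODE(GROUPING(object_type), 1, 'TOTAL', object_type) AS object_type,
       COUNT(*)
  FROM user_objects
 GROUP BY ROLLUP(object_type)
/
```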

Acknowledgements

Rob van Wijk

In Oracle SQL, why doesn’t ROWNUM = 2 work?

The inspiration for this post came after recently overhearing a developer ask why his query was not returning a row. It had been working fine; the only change was that instead of ROWNUM = 1, he had changed it to ROWNUM = 2.

The query could have been doing something similar to the following:


SELECT x.*
  FROM (SELECT e.employee_id,
               e.last_name,
               e.salary
          FROM employees e
         ORDER BY salary DESC) x
 WHERE ROWNUM = 1 
/

which returns information about the top earning employee.

However, when the last line of the query is changed to ROWNUM = 2 to find the second highest earning employee, no rows are returned, and this is what was causing confusion.

I have read enough Tom Kyte to understand superficially why the change stopped the query from working, but not in enough detail to stand up in front of my peers, or even my cat for that matter, and confidently explain why. This post is my attempt to change that.

What is ROWNUM? In Oracle terms, ROWNUM is a pseudocolumn that returns a number for each row returned by a query. The ROWNUM returned is not permanently assigned to a row. A ROWNUM value is assigned after a row passes the predicate part of the query but before any sorting or aggregation. It is important to know that ROWNUM is only incremented after it has been assigned.

Stepping through our example where ROWNUM = 2: the first row fetched is assigned a ROWNUM of 1, which makes the condition false (at this point ROWNUM = 1 and we are looking for ROWNUM = 2; to put it another way, 1 = 2, which is of course false). The second row to be fetched is now treated as the first row, and ROWNUM is still 1 (it hasn't incremented because it was never assigned), so again the condition is false. This repeats for every row, so none are returned.
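
The assignment rule also explains why a top-N predicate does work: the first row satisfies ROWNUM <= 2 and is assigned 1, ROWNUM then increments, and the second row is assigned 2. A sketch against the same query:

```sql
-- Returns the top two earners; each fetched row satisfies the predicate,
-- so ROWNUM is assigned and then incremented.
SELECT x.*
  FROM (SELECT e.employee_id,
               e.last_name,
               e.salary
          FROM employees e
         ORDER BY salary DESC) x
 WHERE ROWNUM <= 2
/
```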

So how do you find the second highest earning employee? That will be the subject of a future post.

Acknowledgements

On ROWNUM and Limiting Results By Tom Kyte

Does using a sequence in a PL/SQL expression improve performance?

In versions of the Oracle Database prior to 11g Release 1, to populate a variable with the next value from a sequence you would have to call the NEXTVAL function from within a SQL statement. From Oracle Database 11g Release 1, you can use NEXTVAL directly in a PL/SQL expression.

So other than less typing, does using a sequence in this way improve performance?

The following examples are all based on the following sequence:

CREATE SEQUENCE emp_seq
/

and were developed using the Oracle Developer Days environment with the following versions:

Oracle Linux running via Oracle Virtual Box
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0
Oracle SQL Developer 3.2.20.10.

To recap, the pre-11g method was:

...
SELECT emp_seq.nextval
  INTO some_variable
  FROM dual;
...

and from 11g Release 1 you can use the following:

...
some_variable := emp_seq.nextval;
...

Better Performance?

Using the method described by Adrian Billington, you will be able to see if the 11g method improves performance.

ALTER SESSION SET SQL_TRACE = TRUE
/

DECLARE

   emp_id PLS_INTEGER;

BEGIN
   
   FOR i IN 1 .. 10000 
   LOOP
   
      emp_id := emp_seq.NEXTVAL;   
   
   END LOOP;

   FOR i IN 1 .. 10000 
   LOOP
  
      SELECT emp_seq.NEXTVAL AS nval 
        INTO emp_id 
        FROM dual;
   
   END LOOP;

END; 
/

SQL trace is enabled, and the new 11g method of using NEXTVAL via an expression is called 10,000 times, after which the pre-11g method of using a SQL statement is also called 10,000 times. Note that the alias nval is used with the SQL statement method; this will help us identify that method in the trace file.

Here is the output from the trace file. The first entry is for the new 11g method and the second entry is for the pre-11g method (as proved by the use of the alias nval):

[SQL trace file output for the two methods]

The trace file shows very little difference between the two methods. This is not surprising, as the trace file also shows that "under the covers" Oracle has not changed the implementation for the new 11g method: NEXTVAL is still obtained via a SQL statement.

Summary

In this article I have shown the two methods that are available to populate a variable with the next value from a sequence, and that there are no performance gains to be had by choosing one method over the other.

Acknowledgements

Adrian Billington (oracle-developer.net)

Writing PL/SQL in Oracle Application Express

There is an excellent article on writing PL/SQL in Oracle Application Express in the March/April 2014 edition of Oracle Magazine.

There is a lot of useful information about where your code should be located. My favourite part of the article is the comparison between a parameterless function relying on the APEX V function and a function that receives the values explicitly via parameters.

It is an excellent article that will repay the short time you will invest in reading it.

There is all the more reason to use database PL/SQL…

Oracle have published a white paper entitled "With Oracle Database 12c there is all the more reason to use database PL/SQL"; its author is Bryn Llewellyn, who is the Product Manager for PL/SQL.

In this paper Bryn lists the reasons he hears from customers for why they are not using PL/SQL in the database, and then methodically sets about demolishing each of these objections.

It is an accessible, interesting paper and well worth reading.

As Oracle have a habit of periodically changing (breaking!) their links, you can download the paper from here.