Category Archives: Database

Saving a Twitter stream to a RavenDB database using C#

In an earlier post I explained how you can use C# to access a Twitter stream. In this post I will show you how to save the tweets from a Twitter stream to RavenDB.

The goal of this post is not to perform a deep dive into NoSQL databases or the Tweetinvi API. Instead it's to get you up and running with the minimum of ceremony so you can start conducting your own experiments.

RavenDB is an open source NoSQL database for .NET which, as my first experience of a NoSQL database, I have found relatively straightforward to start experimenting with.

You can download RavenDB from here. At the time of writing the stable release was 3.5.3 and I chose the installer, which installs RavenDB via the familiar wizard process.

Once installed you should have the standard RavenDB folder structure.

If, like me, you are new to the world of NoSQL databases, it is worth working your way through the Fundamentals tutorial. I found this an excellent introduction and highly recommend it.

To start RavenDB, double-click the Start.cmd batch file in the root of the RavenDB directory. You should shortly see a new command window and a new tab in your default browser showing your databases (this list will be empty on first launch).

With RavenDB installed and running we can now start Visual Studio and create a new console application. I've called mine TrendingOnTwitterNoSQL.

Using NuGet, add the following packages:

TweetinviAPI

RavenDB.Client

Navigate to Program.cs and add the following using statements:

using System;

using Raven.Client.Document;

using Tweetinvi;

Within the Main method add the following:

Auth.SetUserCredentials("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET");

Replace CONSUMER_KEY etc. with your Twitter API credentials. If you don't yet have them, you can obtain them by going here and following the instructions.

Now add the following two lines:

  var stream = Stream.CreateFilteredStream();
  stream.AddTrack("CanadianGP");

The first line creates a filtered Twitter stream. A Twitter stream gives you, the developer, access to live information on Twitter. There are a number of different streams available. In this post we will be using one that returns information about a trending topic. More information about Twitter streams can be found in the Twitter docs and the TweetInvi docs.

At the time of writing the Canadian Grand Prix was trending on Twitter, which you can see in the second line.

The next step is to create a new class which will manage the RavenDB document store. Here is the complete code:

using System; 
using Raven.Client; 
using Raven.Client.Document; 

namespace TrendingOnTwitterNoSQL 
{ 
  class DocumentStoreHolder 
    { 
      private static readonly Lazy<IDocumentStore> LazyStore = 
          new Lazy<IDocumentStore>(() => 
          { 
            var store = new DocumentStore 
            { 
              Url = "http://localhost:8080", 
              DefaultDatabase = "CanadianGP" 
            }; 
            return store.Initialize(); 
           }); 
    
    public static IDocumentStore Store => LazyStore.Value; 
  } 
}

In the context of RavenDB, the document store holds the RavenDB URL, the default database and so on. More information about the document store can be found in the tutorial.

According to the documentation, typical applications normally need just one document store, hence the DocumentStoreHolder class is a singleton.

The important thing to note in this class is the database URL and the name of the Default Database, CanadianGP. This is the name of the database that will store Tweets about the CanadianGP.

Returning to Program.cs add the following underneath stream.AddTrack to obtain a new document store:

  var documentStore = DocumentStoreHolder.Store;

The final class that needs to be created is called TwitterModel and is shown below:

namespace TrendingOnTwitterNoSQL
{
  class TwitterModel
  {
    public long Id { get; set; }
    public string Tweet { get; set; }
  }
}

This class will be used to save the tweet information that the program is interested in: the Twitter ID and the tweet text. There is a lot of other information available, but for the sake of brevity this example is only interested in those two fields.

With this class created, the final part of the code is shown below:

using (BulkInsertOperation bulkInsert = documentStore.BulkInsert())
{
  stream.MatchingTweetReceived += (sender, theTweet) =>
  {
    Console.WriteLine(theTweet.Tweet.FullText);
    var tm = new TwitterModel
    {
      Id = theTweet.Tweet.Id,
      Tweet = theTweet.Tweet.FullText
    };

    bulkInsert.Store(tm);
  };
  stream.StartStreamMatchingAllConditions();
}

As the tweets will be arriving in clusters, the RavenDB BulkInsert method is used. You can see this on the first line.

Once a matching tweet is received (the MatchingTweetReceived event fires), it is output to the console. Next a new TwitterModel object is created and its fields are assigned the tweet id and the tweet text. This object is then saved to the database.

The complete Program.cs should now look like:

using System;
using Raven.Client.Document;
using Tweetinvi;

namespace TrendingOnTwitterNoSQL
{
  class Program
  {
    static void Main(string[] args)
    {
      Auth.SetUserCredentials("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET");

      var stream = Stream.CreateFilteredStream();
      stream.AddTrack("CanadianGP");

      var documentStore = DocumentStoreHolder.Store;

      using (BulkInsertOperation bulkInsert = documentStore.BulkInsert())
      {
        stream.MatchingTweetReceived += (sender, theTweet) =>
        {
          Console.WriteLine(theTweet.Tweet.FullText);

          var tm = new TwitterModel
          {
            Id = theTweet.Tweet.Id,
            Tweet = theTweet.Tweet.FullText
          };

          bulkInsert.Store(tm);

        };
        stream.StartStreamMatchingAllConditions();
      }
    }
  }
}

After running this program for a short while you will have a number of tweets saved. To view them, switch back to your browser (if you are not already on the RavenDB page, navigate to http://localhost:8080) and click on the database that you created.

Select the relevant database and you will then see the tweets.

Summary

In this post I have detailed the steps required to save a Twitter stream for a topic of interest to a RavenDB database.

A complete example is available on GitHub.

Acknowledgements

The genesis of this post came from the generous answers given to my question on StackOverflow.

Is there more than one reason to use PL/SQL WHERE CURRENT OF?

I can't remember using the PL/SQL construct where current of in code I have written, so along with my usual article style of explaining how to use a feature, this is also an experiment to find out if there are any other reasons to use where current of.

The code examples in this post have been developed using the Oracle Database App Development VM for Oracle VirtualBox which at the time of this post was using Oracle database 12.1.0.2. (Although nothing in this post is 12c specific)

First I create a table that can be used for the experiment:

create table t
as
select *
from all_objects
/

With the table created, I then use it in the following example, which I will explain below. I appreciate that the example could be rewritten as a single SQL statement; however, I wanted to concentrate on illustrating the use of where current of without an overly complicated example where the point of this post is lost in a forest of code.

declare

  cursor c 
  is 
  select t.object_name
    from t
   where t.object_type in ('PACKAGE', 'PACKAGE BODY')
     and t.owner = 'APEX_040200'
     for update of object_name; 

  ln_count pls_integer := 1;

begin

  for i in c
  loop
    
    update t
       set t.object_name = 'X - ' || ln_count
     where current of c;
 
    ln_count := ln_count + 1;
 
  end loop;

  commit;  

end;

In the declaration section I declare a cursor that identifies the records of interest, which are the package specifications and bodies owned by the APEX user. Note the use of the for update clause; this is mandatory when using where current of. If you remove it and try to run the example you will see the following error message:

PLS-00404: cursor 'C' must be declared with FOR UPDATE to use with CURRENT OF

Moving on to the execution section, for each iteration of the cursor for loop the object name is updated (to something not very useful!). The update statement shows what I think is the primary benefit of where current of: instead of repeating the where clause used by the cursor, I have simply written where current of c. So when the cursor has to change to return different records, the only change that needs to be made is to the cursor.
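To see the duplication that where current of removes, here is a sketch of how the same loop might look without it. This is my own illustration rather than code from the original example: the cursor must also return the rowid, and the update must repeat the logic that identifies the row.

declare

  cursor c
  is
  select t.object_name,
         t.rowid as rid
    from t
   where t.object_type in ('PACKAGE', 'PACKAGE BODY')
     and t.owner = 'APEX_040200'
     for update of object_name;

  ln_count pls_integer := 1;

begin

  for i in c
  loop

    -- without where current of, the row must be identified again,
    -- here by the rowid fetched through the cursor
    update t
       set t.object_name = 'X - ' || ln_count
     where t.rowid = i.rid;

    ln_count := ln_count + 1;

  end loop;

  commit;

end;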

Returning to the original example: once the loop completes, I release the locks by committing the changes.

The primary benefit of where current of is the reduction in duplicated code, which is a good thing, but am I missing anything else? Please use the comments to let me know.

Summary

In this post I have demonstrated how to use the PL/SQL construct where current of along with what I believe is its primary benefit, the reduction of duplicated code.

Acknowledgements

The inspiration for this post came from Steven Feuerstein's book, Oracle PL/SQL Programming, 5th Edition.

The auditing capabilities of Flashback Data Archive in Oracle 12c.

The goal of this post is to build a prototype that demonstrates the auditing capabilities of Flashback Data Archive in Oracle 12c.

Although the documentation mentions that Flashback Data Archive provides the ability to track and store transactional changes to a table, I was struggling to see how this functionality actually worked and what audit information was available, hence my need to build something that I could test drive.

This post is not a deep dive into Flashback Data Archive; for that I would refer you to the Flashback Data Archive landing page within the documentation.

The prototype was developed using the pre-built developer app VM for VirtualBox, which at the time of writing used Oracle database version 12.1.0.2.

The goal of the demo is to show how a table is audited for changes; the audit will show who changed a record and when the change was made.

Creating the prototype is split into the following steps:

  1. Create a new user (optional).
  2. Creation of a table that you wish to audit changes to.
  3. Creation of the Flashback Data Archive objects.
  4. Creation of a new context and the dependent package.
  5. Executing DML statements against the table of interest.
  6. Viewing the audit information.

Create a new user (optional).

Logging on as a user with the SYSDBA role, create a new user called fda_user:

create user fda_user identified by fda_user;
alter user fda_user quota unlimited on users;

fda_user is then given the following grants:

grant create table to fda_user;
grant alter user to fda_user;
grant alter session to fda_user;
grant create session to fda_user;
grant create any context to fda_user;
grant create procedure to fda_user;

Creation of a table that you wish to audit changes to.

For this prototype I am using a copy of the employees table owned by the HR user. The command below was run whilst logged in with the SYSDBA role.

create table fda_user.employees as select * from hr.employees;

Creation of the Flashback Data Archive objects.

Still logged in as a user with the SYSDBA role, create a new tablespace and allow fda_user to utilise it:

create tablespace fda2 datafile 'FDA2.dbf' size 1M autoextend on next 1M;

alter user fda_user quota unlimited on fda2; 

The next step is to create the Flashback Data Archive. In this example the audit information will be kept for 1 year and then automatically purged.

create flashback archive fda_one_year_data tablespace fda2 quota 1G retention 1 year;

Next the fda_user is granted permissions in order to utilise Flashback Data Archive.

grant flashback archive on fda_one_year_data to fda_user;

grant execute on dbms_flashback_archive to fda_user;

In the final step performed with the SYSDBA role, the table that is to be audited, employees, is pointed at the Flashback Data Archive created earlier:

alter table fda_user.employees flashback archive fda_one_year_data;

Creation of a new context and the dependent package.

Log in to the database as fda_user and create the following context and PL/SQL package spec and body. These will be used to record who made the changes to the employees table and what those changes were.

CREATE OR REPLACE CONTEXT employee_context USING employee_ctx_api;

CREATE OR REPLACE PACKAGE employee_ctx_api
IS
  PROCEDURE set_value (p_name  IN VARCHAR2,
                       p_value IN VARCHAR2);

END employee_ctx_api;
/

CREATE OR REPLACE PACKAGE BODY employee_ctx_api
IS
  PROCEDURE set_value (p_name IN VARCHAR2,
                       p_value IN VARCHAR2)
  IS
  BEGIN
    dbms_session.set_context('employee_context', p_name, p_value);
  END set_value;

END employee_ctx_api;
/

For the sake of keeping the example on track, the context and dependent package have been created in the same schema, but in a production environment both would live in a different schema.

With this step, the required set up and configuration of Flashback Data Archive is now complete.

Executing DML statements against the table of interest.

In the following statements, a call is made to dbms_session and the package used by our context to record who is making the change and the action they are undertaking. This is then followed by the insert, update or delete statement.

begin

  dbms_session.set_identifier('Ian');

  employee_ctx_api.set_value('action','Insert');

  insert into employees
  (
    employee_id,
    first_name,
    last_name,
    email,
    phone_number,
    hire_date,
    job_id,
    salary,
    commission_pct,
    manager_id,
    department_id
  )
  values
  (
    9999,
    'William',
    'Bodie',
    '[email protected]',
    '01-123-456',
    TO_DATE('01-JAN-1978', 'DD-MON-YYYY'),
    'SH_CLERK',
    20000,
    NULL,
    149,
    80
  );

  commit;

end;
/

begin
 
 dbms_session.set_identifier('Ian');
 
 employee_ctx_api.set_value('action','Update');

 update employees e
    set e.salary = 100009 
  where e.employee_id = 9999;
 
 commit;

end;
/

begin
 
 dbms_session.set_identifier('Ian');
 
 employee_ctx_api.set_value('action','Delete');

 delete from employees
 where employee_id = 9999;
 
 commit;

end;
/

Viewing the audit information

With the changes to the employees table committed, the audit information can be viewed using an Oracle Flashback Version Query such as the following:

SELECT employee_id, 
       salary,
       versions_starttime,        
       versions_endtime,       
       versions_operation vo,       
       DBMS_FLASHBACK_ARCHIVE.get_sys_context(versions_xid, 'USERENV','SESSION_USER') AS session_user,
       DBMS_FLASHBACK_ARCHIVE.get_sys_context(versions_xid, 'USERENV','CLIENT_IDENTIFIER') AS client_identifier,
       DBMS_FLASHBACK_ARCHIVE.get_sys_context(versions_xid, 'employee_context','action') AS action
FROM   employees
       VERSIONS BETWEEN TIMESTAMP SYSTIMESTAMP-(1/24) AND SYSTIMESTAMP
WHERE  employee_id = 9999
ORDER BY versions_startscn;
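Based on the DML above, the query returns three versions of the row along these lines (the timestamps are illustrative and will differ on each run):

EMPLOYEE_ID  SALARY  VERSIONS_STARTTIME  VERSIONS_ENDTIME  VO  SESSION_USER  CLIENT_IDENTIFIER  ACTION
-----------  ------  ------------------  ----------------  --  ------------  -----------------  ------
       9999   20000  10:15:01            10:15:05          I   FDA_USER      Ian                Insert
       9999  100009  10:15:05            10:15:09          U   FDA_USER      Ian                Update
       9999  100009  10:15:09                              D   FDA_USER      Ian                Delete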

Reviewing the query results shows the record being inserted, updated and finally deleted, along with who did it (the session_user and client_identifier columns), when it was done (the versions_starttime column) and what was done (the action column). This last column is not strictly necessary, as the same information is already provided by versions_operation (the vo column), but it is included as an example of what can be recorded.


Summary

In this post I have shown how you can build a prototype to see if the auditing capabilities of Flashback Data Archive would work with your existing application.

Acknowledgements

The inspiration for this post came from two sources: Connor McDonald's superb presentation on Flashback Data Archive at the 2015 UKOUG conference, and Tim Hall's fantastic and generous site.

Using INSERT ALL with related tables

In this article I will demonstrate a method where the INSERT ALL statement can be used to insert rows into two tables that are related by a foreign key constraint.

The following example was developed and tested using the Pre-Built Oracle Database App Development VM which at the time of this post used Oracle Database 12.1.0.2.0 & SQL Developer 4.0.3.16. The example was created within the standard HR schema.

First I create two tables:

CREATE TABLE parent_tbl (the_pk    NUMBER PRIMARY KEY,
                         object_id NUMBER);

CREATE TABLE child_tbl (the_pk      NUMBER PRIMARY KEY,
                        the_fk      NUMBER,
                        object_name VARCHAR2(30));

Next I add a foreign key to define the relationship between these tables, and two sequences used to populate the primary keys:

ALTER TABLE child_tbl
ADD CONSTRAINT child_tbl_fk1 FOREIGN KEY (the_fk)
   REFERENCES parent_tbl (the_pk);

CREATE SEQUENCE parent_tbl_seq;

CREATE SEQUENCE child_tbl_seq START WITH 500;

Next I ensure that the foreign key relationship is working as expected by trying to insert a record into the child table with a value for the_fk column that doesn't exist in parent_tbl:

INSERT INTO child_tbl
(the_pk,
 the_fk,
 object_name)
VALUES
(child_tbl_seq.nextval,
 999,
 'SomeObject');

Attempting to run this statement results in the expected error message:

SQL Error: ORA-02291: integrity constraint (HR.CHILD_TBL_FK1) violated - parent key not found

With the tables and relationship between them in place I can now demonstrate how to use INSERT ALL to insert information from user_objects into the parent and child tables.

INSERT ALL
INTO parent_tbl(the_pk, object_id) 
VALUES (parent_tbl_seq.nextval, 
        object_id)
INTO child_tbl(the_pk, the_fk, object_name) 
VALUES (child_tbl_seq.nextval,
        parent_tbl_seq.currval,
        object_name)
SELECT uo.object_id,
       uo.object_name
FROM user_objects uo
/

The first INTO clause inserts into the parent table; note the use of the sequence to populate the primary key.

The second INTO clause inserts into the child table. The important part (and the focus of this article) is to understand how the foreign key column, the_fk, is populated: it uses currval of the sequence that was used to populate the parent table.
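As a quick sanity check (my own addition rather than part of the original solution), joining the two tables confirms that every child row references the parent row created in the same pass:

SELECT p.the_pk,
       p.object_id,
       c.the_pk AS child_pk,
       c.object_name
  FROM parent_tbl p,
       child_tbl  c
 WHERE c.the_fk = p.the_pk
/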

Thanks go to Tom Kyte for coming up with this elegant solution.

Summary

In this article I have demonstrated how to use INSERT ALL to insert into two tables that are related by a foreign key.

Acknowledgements

The idea for this article came from Tom Kyte’s answer to this Ask Tom question.

PL/SQL FizzBuzz

One of the popular exercises when learning a new language is to develop a FizzBuzz program.

This is where you print out all the numbers from 1 to 100, except that if a number is a multiple of 3 you print Fizz, and if it is a multiple of 5 you print Buzz. If a number is a multiple of both 3 and 5 you print FizzBuzz.

I came across this challenge for the first time recently whilst learning C#, so here is my version of FizzBuzz written in PL/SQL.

DECLARE

   fizz BOOLEAN := FALSE;
   buzz BOOLEAN := FALSE;

BEGIN

   FOR i IN 1 .. 100
   LOOP

      fizz := MOD(i, 3) = 0;
      buzz := MOD(i, 5) = 0;

      CASE

         WHEN fizz AND buzz THEN

            dbms_output.put_line('FizzBuzz');

         WHEN fizz THEN

            dbms_output.put_line('Fizz');

         WHEN buzz THEN

            dbms_output.put_line('Buzz');

         ELSE

            dbms_output.put_line(i);

      END CASE;

   END LOOP;
 
END;

Found a problem with the Oracle Documentation? Report it!

I was recently working through Chapter 9 of the 12c Database 2 Day Developer’s Guide and found the following bug in the sample code.

Here is the original code:

FUNCTION add_department
    ( p_department_name   IN departments.department_name%TYPE,
      p_manager_id        IN departments.manager_id%TYPE )
    RETURN departments.department_id%TYPE
  IS
    l_department_id departments.department_id%TYPE;
  BEGIN
    INSERT INTO departments ( department_id, department_name, manager_id )
    VALUES ( departments_sequence.NEXTVAL, p_department_name, p_manager_id );

    RETURN l_department_id;
  END add_department;

The function creates a new department and should return the newly created department id. Except it doesn't: the variable used to return the newly created department id, l_department_id, is never set, so the function always returns null.

One solution to this bug is to use the Returning Into clause and the revised code would be:

...
    INSERT INTO departments ( department_id, department_name, manager_id )
    VALUES ( departments_sequence.NEXTVAL, p_department_name, p_manager_id )
    RETURNING department_id INTO l_department_id;
 
    RETURN l_department_id;
...

It's a trivial issue, so why am I writing about it? Seeing this problem I could easily have moved on and finished the chapter, but I noticed the Feedback button and thought that others would also encounter the problem and might waste time wondering why the department id is not being returned.


After clicking the feedback button, signing in with my OTN account (you also have the choice of remaining anonymous) and describing the issue along with the suggested fix above, I pressed submit and thought no more about it.

Within 24 hours I had an email reply from someone at Oracle (and someone not using a do_not_reply email address) thanking me for pointing out the error and assuring me it would be fixed in future editions. Whilst it remains to be seen whether this will be done, the response to my feedback left a very positive impression and made me glad I made the effort to report the issue.

So if you spot something within the masses of Oracle documentation, report it and help fix those broken windows.

No surprises with Oracle 12c Identity Column performance

Oracle 12c introduced the Identity Column. You can find out more about this feature in the new features guide and within the CREATE TABLE documentation.

In this article I will use Tom Kyte's runstats utility to compare the performance of the IDENTITY column with the explicit use of a sequence object. The script below will be used to insert 10,000, 100,000 and finally a million rows.

The tests were performed using VirtualBox running the pre-built Database App Development VM. The database version at this time was 12.1.0.2.0 and all the examples were developed using SQL Developer 4.0.3.16.

To support the tests, the following objects are required. Firstly, a table using the IDENTITY column is created; note the syntax for creating an IDENTITY column. In addition, as I know there will be some large inserts into this table, I have adjusted the cache size of the underlying sequence accordingly.

CREATE TABLE t_identity(id      NUMBER         GENERATED AS IDENTITY (CACHE 1000)
                                               CONSTRAINT t_identity_pk PRIMARY KEY,
                        details VARCHAR2(32))
/

Next, the table and sequence that will be used for the inserts via a regular Oracle sequence object are created. Again the sequence cache size has been increased from the default.

CREATE TABLE t_seq(id      NUMBER CONSTRAINT t_seq_pk PRIMARY KEY,
                   details VARCHAR2(32))
/

CREATE SEQUENCE s1 CACHE 1000
/

Below is the test script. As a brief overview: it initialises the runstats package, then inserts the required number of records into the table with the IDENTITY column.

The runstats package is called again to show that the first part of the processing has finished and the second part is about to start. The second insert is identical to the first one with the exception of the explicit call to the sequence object.

Thanks to Oracle Base for the tip about using TIMESTAMP as a seed to dbms_random. I am not advocating using row by row processing to insert volumes of data of this size in the real world!

BEGIN

   runstats_pkg.rs_start();

END;
/

DECLARE

   l_data VARCHAR2(32);
   l_seed VARCHAR2(32);

BEGIN

   l_seed := TO_CHAR(SYSTIMESTAMP,'YYYYDDMMHH24MISSFFFF');

   dbms_random.seed (val => l_seed);

   FOR i IN 1 .. 1000000
   LOOP

      l_data := dbms_random.string(opt => 'a', len => 32);

      INSERT INTO t_identity(details)
      VALUES(l_data);

   END LOOP;

   COMMIT;

END;
/

BEGIN

   runstats_pkg.rs_middle();

END;
/

DECLARE

   l_data VARCHAR2(32);
   l_seed VARCHAR2(32);

BEGIN

   l_seed := TO_CHAR(SYSTIMESTAMP,'YYYYDDMMHH24MISSFFFF');

   dbms_random.seed (val => l_seed);

   FOR i IN 1 .. 1000000
   LOOP

      l_data := dbms_random.string(opt => 'a', len => 32);

      INSERT INTO t_seq(id,
                        details)
      VALUES(s1.nextval,
             l_data);

   END LOOP;

   COMMIT;

END;
/

BEGIN

   runstats_pkg.rs_stop(1000);

END;
/

Here is the runstats output for each of the runs:

10,000 rows inserted

Run1 ran in 106 cpu hsecs
Run2 ran in 105 cpu hsecs
run 1 ran in 100.95% of the time
...
Run1 latches total versus runs -- difference and pct
Run1      Run2      Diff  Pct
171,097   171,432   335   99.80%

100,000 rows inserted

Run1 ran in 1216 cpu hsecs
Run2 ran in 1156 cpu hsecs
run 1 ran in 105.19% of the time
...
Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff      Pct
1,719,582   2,061,024   341,442   83.43%

1,000,000 rows inserted

Run1 ran in 12308 cpu hsecs
Run2 ran in 11835 cpu hsecs
run 1 ran in 104% of the time
...
Run1 latches total versus runs -- difference and pct
Run1         Run2         Diff      Pct
18,480,661   18,761,711   281,050   98.50%

As the output shows, the difference between the two methods is negligible.

Summary

In this article, using Tom Kyte's runstats utility, I have investigated the performance of the Oracle 12c IDENTITY column against the explicit use of a sequence object.

Acknowledgements

Tom Kyte for his seminal runstats package.

The most important reason why you should know how to use PL/SQL Conditional Compilation.

…is that, when you really need to, it allows you to turn your application's instrumentation up to 11.

A 30 second PL/SQL Conditional Compilation overview

PL/SQL Conditional Compilation was introduced with Oracle Database 10gR2. The official documentation explains how to use the conditional compilation constructs but doesn't give many use cases. Fortunately the Oracle white paper PL/SQL conditional compilation not only covers how to use conditional compilation, it also gives a number of use cases and finishes with an in-depth case study.

Conditional Compilation allows you to mark portions of your code so that a preprocessor can determine what will actually be sent to the compiler. Rather than bore you with more words, let's look at a simple example.

CREATE PROCEDURE cc_intro
IS
BEGIN

   $IF sys.dbms_db_version.ver_le_12_1
   $THEN

      dbms_output.put_line('Can use the 12c new features'); 

   $ELSE 

      dbms_output.put_line('Have to use the work arounds'); 

   $END

END cc_intro;

In this example, if the procedure is compiled on a database running 12.1 or earlier, the first message will appear in the compiled version of the code; otherwise the second message will appear.

The conditional compilation selection directive begins with $IF. The value tested must resolve to a static boolean expression: literals, inquiry directives or, as in this example, a package constant. Notice that the selection directive finishes with $END and not END IF; also there is no semi-colon after $END.
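As a brief aside on inquiry directives: besides flags you define yourself (used later in this post), PL/SQL supplies predefined inquiry directives such as $$plsql_unit and $$plsql_line. A minimal sketch (the procedure name is my own):

CREATE OR REPLACE PROCEDURE cc_where_am_i
IS
BEGIN

   -- $$plsql_unit and $$plsql_line are predefined inquiry directives
   -- resolved by the preprocessor at compile time
   dbms_output.put_line('In ' || $$plsql_unit || ' at line ' || $$plsql_line);

END cc_where_am_i;
/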

Once the cc_intro procedure has been compiled, you can view the code as the compiler will “see” it using the supplied dbms_preprocessor.print_post_processed_source:

BEGIN

   dbms_preprocessor.print_post_processed_source
   (
      object_type => 'PROCEDURE',
      schema_name => USER,
      object_name => 'CC_INTRO'
   );

END;
/

After running this on my Oracle 12.1 database, the following is seen:

PROCEDURE cc_intro
IS
BEGIN 

 dbms_output.put_line('Can use the 12c new features'); 

END cc_intro;

As expected, only the first message exists in the compiled code.

The most important reason

Following months (weeks, hours or minutes!!) of the various stages of testing, new changes have finally made it into production. At first everything is great and ticking along nicely. Until it isn't. Strange things start to happen; the thing that can't happen is happening. To add to the complexity, try as you might, the problem appears to happen only in production.

You trace the problem back to a routine where perhaps an XML document is created or a collection is populated. Unfortunately you are unable to see what these normally opaque data structures contain, and that information could be key to solving the issue.

A method that I have often seen to get round this problem is to wrap code with a standard IF statement as shown in Procedure B below.

CREATE PACKAGE pkg_no_cc
IS

   PROCEDURE a;

END pkg_no_cc;
/

CREATE OR REPLACE PACKAGE BODY pkg_no_cc
IS

   g_debugging BOOLEAN := FALSE;

   PROCEDURE b
   IS 

      TYPE emps_tbl IS TABLE OF employees%ROWTYPE
         INDEX BY PLS_INTEGER;

      l_xml        XMLTYPE;
      l_collection emps_tbl; 

   BEGIN

      -- Expected business logic is here...
      dbms_output.put_line('This is procedure b');

      IF g_debugging = TRUE
      THEN 

         -- see what is in the collection
         FOR i IN 1 .. l_collection.COUNT()
         LOOP

            dbms_output.put_line('see the output of the collection'); 

         END LOOP; 

         -- view the xml
         dbms_output.put_line('the xml: ' || l_xml.getstringval() );

      END IF; 

   END b; 

   PROCEDURE a
   IS
   BEGIN

      dbms_output.put_line('Calling procedure b');

      b();

   END a;

END pkg_no_cc;
/

If the debugging statement within Procedure B survived a code review and made it into production, it would cause an overhead, no matter how small (they all add up), every time Procedure B was invoked. It is also unlikely that this would be the only occurrence of this type of statement within your production code, so the overhead would be significant; hence the resistance to allowing this level of information to reach production.

Here is the same package rewritten to take advantage of PL/SQL Conditional Compilation. Other than the package name change, the only change is the replacement of the IF statement in Procedure B with a conditional compilation selection directive and an inquiry directive. At this point the inquiry directive hasn't been assigned a value.

CREATE PACKAGE pkg_with_cc
IS

   PROCEDURE a;

END pkg_with_cc;
/

CREATE OR REPLACE PACKAGE BODY pkg_with_cc
IS

   PROCEDURE b
   IS 

      TYPE emps_tbl IS TABLE OF employees%ROWTYPE
         INDEX BY PLS_INTEGER;

      l_xml XMLTYPE;
      l_collection emps_tbl; 

   BEGIN

      -- Expected business logic is here
      dbms_output.put_line('This is procedure B');

      $IF $$debugging = TRUE
      $THEN 

         -- see what is in the collection
         FOR i IN 1 .. l_collection.COUNT()
         LOOP

            dbms_output.put_line('see the output of the collection'); 

         END LOOP; 

         -- view the xml
         dbms_output.put_line('the xml: ' || l_xml.getstringval() );

      $END  

   END b; 

   PROCEDURE a
   IS
   BEGIN

      dbms_output.put_line('Calling procedure b');

      b();

   END a;

END pkg_with_cc;
/

Using dbms_preprocessor.print_post_processed_source to look at the compiled code

BEGIN

   dbms_preprocessor.print_post_processed_source
   (
      object_type => 'PACKAGE BODY',
      schema_name => USER,
      object_name => 'PKG_WITH_CC'
   );

END;
/

The output shows the debugging code has been stripped out by the PL/SQL preprocessor.

PACKAGE BODY pkg_with_cc
IS

   PROCEDURE b
   IS 

      TYPE emps_tbl IS TABLE OF employees%ROWTYPE
         INDEX BY PLS_INTEGER;

      l_xml XMLTYPE;
      l_collection emps_tbl; 

   BEGIN

      -- Expected business logic is here
      dbms_output.put_line('This is procedure B'); 

   END b; 

   PROCEDURE a
   IS
   BEGIN

      dbms_output.put_line('Calling procedure b');

      b();

   END a;

END pkg_with_cc;
/

Now let's recompile the package body, defining the debugging flag at the same time.

ALTER PACKAGE PKG_WITH_CC COMPILE BODY
   PLSQL_CCFLAGS = 'debugging:TRUE' REUSE SETTINGS
/

The debugging flag matches the $$debugging inquiry directive used by procedure B, and it is set to TRUE. I have included REUSE SETTINGS because it ensures that existing values for PL/SQL warnings or the PL/SQL optimizer level are not discarded.
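Incidentally, the flags a program unit was last compiled with are recorded in the data dictionary, so the setting can be confirmed with a query such as:

SELECT name,
       type,
       plsql_ccflags
  FROM user_plsql_object_settings
 WHERE name = 'PKG_WITH_CC'
/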

Running dbms_preprocessor.print_post_processed_source again and viewing the compiled version of the code, we now see that the debugging code is present and ready to help solve the production issue.

...no changes to the other parts of the package

   PROCEDURE b
   IS 

      TYPE emps_tbl IS TABLE OF employees%ROWTYPE
         INDEX BY PLS_INTEGER;

      l_xml XMLTYPE;
      l_collection emps_tbl; 

   BEGIN

      -- Expected business logic is here
      dbms_output.put_line('This is procedure B'); 

      -- see what is in the collection
      FOR i IN 1 .. l_collection.COUNT()
      LOOP

         dbms_output.put_line('see the output of the collection'); 

      END LOOP; 

      -- view the xml
      dbms_output.put_line('the xml: ' || l_xml.getstringval() );

 END b; 

...

Once finished, we can remove the debugging code from production by recompiling the package body, this time setting the debugging flag to FALSE:

ALTER PACKAGE PKG_WITH_CC COMPILE BODY
   PLSQL_CCFLAGS = 'debugging:FALSE' REUSE SETTINGS
/

Summary

Just to be clear, using Conditional Compilation as described in this post requires recompilation of production code, which is not something that should ever be taken lightly; great care should be exercised whenever doing so.

Having the ability to leave debugging code in production and enable it on demand is, in my opinion, reason enough to add PL/SQL Conditional Compilation to your developer toolbox. Used judiciously with an instrumented application, it further reduces the number of places that bugs can hide.

Oracle PL/SQL Compiler warning PLW-06009

The PL/SQL compiler warning “PLW-06009: procedure "string" OTHERS handler does not end in RAISE or RAISE_APPLICATION_ERROR” was introduced with 11g release 1.

Its purpose is to alert you to the fact that you have a WHEN OTHERS exception handler and the last statement within it doesn't contain a RAISE or a RAISE_APPLICATION_ERROR statement.

The first example shows what you will see if compiler warnings are turned off:

ALTER SESSION SET PLSQL_WARNINGS='DISABLE:ALL'
/

and compile the following procedure:

CREATE OR REPLACE PROCEDURE p
IS
BEGIN
   RAISE no_data_found;
EXCEPTION
   WHEN OTHERS THEN
      NULL;
END p;

The important thing to note is that the WHEN OTHERS handler does nothing with the exception, effectively suppressing it.

Creating the procedure in SQL*Plus I see a “clean” compile: SQL*Plus simply reports that the procedure was created.


Now I will enable PL/SQL Compiler warnings:

ALTER SESSION SET PLSQL_WARNINGS='ENABLE:ALL'
/

and recompile the procedure:

ALTER PROCEDURE p COMPILE
/

This time I can see I haven't had such a “clean” compile: the procedure is created, but with compilation warnings.


The show errors command can then be used to see the detail of the warning.
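An approximation of the output is shown below; the exact line and column numbers will depend on your source:

SQL> show errors
Errors for PROCEDURE P:

LINE/COL ERROR
-------- -----------------------------------------------------------------
6/7      PLW-06009: procedure "P" OTHERS handler does not end in RAISE
         or RAISE_APPLICATION_ERROR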


As you can see, the PLW-06009 warning is shown. I can then choose to ignore the warning or amend the procedure to fix the problem.

If you agree with Tom Kyte and believe that WHEN OTHERS without any kind of RAISE statement is a bug then, as with all compiler warnings, you can turn the warning into an error and stop the program unit from compiling.

I change the warning to an error (in this example I am changing the parameter at session level, but it can also be changed at instance level):

ALTER SESSION SET PLSQL_WARNINGS = 'ERROR:06009'
/

Now when I try to compile my procedure I see that it is invalid and will not compile until I fix the WHEN OTHERS issue.

Here is the revised version of the procedure that fixes the PLW-06009 warning. The only change required was to replace NULL with RAISE in the exception handler.

CREATE OR REPLACE PROCEDURE p
IS
BEGIN

   RAISE no_data_found;

EXCEPTION
   WHEN OTHERS THEN
      RAISE;
END p;

Recompiling this version of the procedure, I see that the PLW-06009 warning no longer appears.


More information about PL/SQL compile time warnings can be found in the documentation.

Summary

In this article I have explained what the PLW-06009 compiler warning is and how it can be enabled both as a warning and as an error.

Acknowledgements

Tom Kyte

Instrumenting your PL/SQL code

Introduction

As an exceptional Oracle developer you already know the value of instrumenting your code. You have read enough of Tom Kyte, Cary Millsap & Steve McConnell to understand that, with the correct instrumentation, diagnosing problems, especially time-critical problems (are there any others?), becomes much easier.

So the question is not why you should instrument your code but how. In this article I will explain the installation and use of the PL/SQL Logger which was originally developed by Tyler Muth and is now available on Github.

Installation

Once you have downloaded Logger from GitHub, unzip the contents to a directory that can be seen by your Oracle client (I use SQL*Plus), then navigate to the "releases" directory and expand the folder for the Logger release you wish to install. At the time of writing version 2.1.2 was the latest, so that is the version used in this article.

Whilst you can install Logger into an existing schema, I am going to use the supplied create_user.sql to install Logger into its own schema. So start SQL*Plus, connect to the database as system or a user with the DBA role, and run the following script:

@create_user.sql

When prompted to do so, enter the username, tablespace, temporary tablespace and password for the new schema. In this example I have accepted the suggested defaults of logger_user, users and temp respectively.

Now connect to the database as the newly created logger_user and run the installation script:

@logger_install.sql

If you run into problems with the installation it is worth starting with the supplied documentation.

In this article I will be accessing the Logger objects from the standard Oracle HR user so the following grant is required:

GRANT EXECUTE ON logger TO hr
/

Once installation is complete, you can view the status of the logger by running the following command:

exec logger.status;

which displays the following:

SQL> exec logger.status;
Project Home Page :
https://github.com/tmuth/Logger---A-PL-SQL-Logging-Utility/
Logger Version : 2.1.2
Debug Level : DEBUG
Capture Call Stack : TRUE
Protect Admin Procedures : TRUE
APEX Tracing : Enabled
SCN Capture : Disabled
Min. Purge Level : DEBUG
Purge Older Than : 7 days
Pref by client_id expire : 12 hours
For all client info see : logger_prefs_by_client_id

PL/SQL procedure successfully completed.

How to use

Below is a procedure that updates the salary of a given employee. Now let's pretend it is part of an overnight batch job and is just one of thousands of procedures that need to run. To the outside world this procedure is a black box: without querying the employee after the procedure has run (and committed) you have no idea what happened when it was called.

CREATE OR REPLACE PROCEDURE raise_salary
(
   p_emp_id     IN employees.employee_id%TYPE,
   p_new_salary IN employees.salary%TYPE
)
IS
BEGIN

   UPDATE employees e
      SET e.salary = p_new_salary
    WHERE e.employee_id = p_emp_id;

END raise_salary;
/

Let's change that by adding some instrumentation.

CREATE OR REPLACE PROCEDURE raise_salary
(
   p_emp_id     IN employees.employee_id%TYPE,
   p_new_salary IN employees.salary%TYPE
)
IS
BEGIN

 logger_user.logger.log('Start of raise_salary');

 UPDATE employees e
   SET e.salary = p_new_salary
 WHERE e.employee_id = p_emp_id;

 logger_user.logger.log('Number of rows updated: {' || TO_CHAR(SQL%ROWCOUNT) || '}'); 

 logger_user.logger.log('End of raise_salary'); 

END raise_salary;
/

The revised version of the procedure captures some basic information: the start of the procedure, how many rows were updated, and finally that the procedure completed successfully.

Even in this simple case it is easy for the real code to be submerged by the instrumentation code. There is no one size fits all solution. Careful consideration is required to decide how much instrumentation is required.

When the procedure is run, one of the Logger views can be queried and the output reviewed:

SELECT text
  FROM logger_user.logger_logs_5_min
/

Here is the output from the revised procedure after it has been run:

TEXT
----------------------------
Start of raise_salary
Number of rows updated: {1}
End of raise_salary

Level Up

The output from Logger is controlled by the logging level, which can be set by calling the logger.set_level procedure. In the previous example I used the log procedure to instrument my routine; this means an entry will be inserted into the logger_logs table when the logging level is set to debug.

There are a number of additional log_x routines that you can use with the various logging levels. For instance, log_warning will insert an entry into the logger_logs table when the logging level has been set to "warning"; likewise, if you are only interested in capturing errors, you can set the Logger level to error and use the log_error procedure.
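Here is a minimal sketch of the levels in action; it assumes you are connected as logger_user, since set_level is a protected procedure by default:

BEGIN

   -- only errors (and more severe messages) will now be recorded
   logger.set_level(p_level => 'ERROR');

   logger.log('A debug message - not recorded at ERROR level');
   logger.log_error('An error message - recorded');

   -- restore the default level
   logger.set_level(p_level => 'DEBUG');

END;
/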

One of the nicest features of the level facility is the ability to turn logging on for specified clients rather than system-wide. In the following example, I have turned on debug logging for user_one whilst user_two continues to use the system without instrumentation messages being inserted.

Logger_user session

The following commands are run:

exec logger_user.logger.set_level(p_level => 'OFF')
/

This system-wide command turns Logger off, meaning that no instrumentation is currently being recorded.

exec logger_user.logger.set_level(p_level => 'DEBUG', p_client_id => 'user_one')
/

This command turns on debug logging for the client identifier user_one (see the user_one session below for how this is set). This means any instrumentation that uses the logger.log procedure will be inserted into the log table.

The following query will show you the current clients that logging is enabled for:

SELECT *
  FROM logger_prefs_by_client_id
/

User_one session

The following command sets the client identifier (this is used by the call that enables logging for this client in the logger_user session above):

exec dbms_session.set_identifier('user_one');

I then execute the raise_salary procedure and commit.

User_two session

Almost a carbon copy of the user_one session with the exception of the identifier.

exec dbms_session.set_identifier('user_two');

I then execute the raise_salary procedure and commit.

Now when I query the logger table I see the following output which shows that only user_one’s actions have been captured.

SQL> SELECT client_identifier, text
2      FROM logger_user.logger_logs_5_min
3 /

CLIENT_IDENTIFIER TEXT
------------------------------------
user_one Start of raise_salary

user_one Number of rows updated: {1}

user_one End of raise_salary

But wait there is more…

All of the main Logger procedures allow you to give context to the information you are recording. The context would normally be where the message originated from, i.e. application, package etc. This facility is provided by the p_scope parameter. Here is the raise_salary procedure used earlier, refactored to include the p_scope parameter.

CREATE OR REPLACE PROCEDURE raise_salary_scope
(
   p_emp_id     IN employees.employee_id%TYPE,
   p_new_salary IN employees.salary%TYPE
)
IS

   l_scope logger_user.logger_logs.scope%type := 'raise_salary_scope';

BEGIN

   logger_user.logger.log('Start', l_scope);

   UPDATE employees e
      SET e.salary = p_new_salary
    WHERE e.employee_id = p_emp_id;

   logger_user.logger.log('Number of rows updated: {' || TO_CHAR(SQL%ROWCOUNT) || '}', l_scope);

   logger_user.logger.log('End', l_scope);

END raise_salary_scope;
/

Each of the calls to the log procedure is now passed the name of the procedure via the l_scope variable.

The following query shows the results of using the scope parameter:

SQL> SELECT scope, text
2      FROM logger_user.logger_logs_5_min
3    /

SCOPE                TEXT
------------------------------------------------
raise_salary_scope   Start

raise_salary_scope   Number of rows updated: {1}

raise_salary_scope   End

The logging of parameter values is handled within Logger by using the p_params object. This is easier to show with an example rather than even more text!

Here is the final revision of the raise_salary procedure.

CREATE OR REPLACE PROCEDURE raise_salary_params
(
   p_emp_id     IN employees.employee_id%TYPE,
   p_new_salary IN employees.salary%TYPE
)
IS

   l_scope    logger_user.logger_logs.SCOPE%TYPE := 'raise_salary_params';
   l_params   logger_user.logger.tab_param;

BEGIN

   logger_user.logger.append_param(l_params, 'p_emp_id', p_emp_id);
   logger_user.logger.append_param(l_params, 'p_new_salary', p_new_salary);

   logger_user.logger.log('Start', l_scope, null, l_params);

   UPDATE employees e
      SET e.salary = p_new_salary
    WHERE e.employee_id = p_emp_id;

   logger_user.logger.log('Number of rows updated: {' || TO_CHAR(SQL%ROWCOUNT) || '}', l_scope);

   logger_user.logger.log('End', l_scope);

END raise_salary_params;
/

The parameters are added to a pre-defined Logger associative array and that array is included in the first call to the log procedure.

The parameters are stored in the extra column and the following query shows how to display them.

SQL> SELECT text, extra
2      FROM logger_user.logger_logs_5_min
3    /

TEXT                         EXTRA
---------------------------------------------------------------------------------
Start                        *** Parameters *** p_emp_id: 100 p_new_salary: 4500

Number of rows updated: {1}

End

Summary

There is no getting away from the fact that instrumentation is an overhead to your application. The balance between too much and too little information to capture, the extra code you need to develop, and the extra resources required to execute the larger code base all need to be carefully considered.

My opinion is that this is a price I am willing to pay. When bad things happen and you are under pressure to track down the cause, you will find the problem far quicker with code that tells you exactly what is happening. Compare this with the alternative: peering into an enormous black box armed only with guesses.

Acknowledgements

Tyler Muth

Martin Giffy D’Souza