The Incremental Commit Anti-Pattern

This is a far more opinionated post than usual. It is not meant to be inflammatory; my goal is to have something tangible that I can point to the next time an Oracle developer thinks they want or need to “do” incremental commits.

In this post an incremental commit is defined as code which commits a transaction whilst keeping the cursor open. This is also known as a fetch across commit.

I encounter incremental commits in PL/SQL code that issues a commit inside a loop, such as the examples below:

Example

...
CURSOR customers_cur
IS
  SELECT columns
    FROM some_tables;
BEGIN
  FOR i IN customers_cur
  LOOP
    -- doing something interesting with the row
    COMMIT; -- ARGH!
  END LOOP;
END;

The commit may be decorated with variations such as:

IF rows_processed > some_arbitrary_number
THEN
  COMMIT;
END IF;

Or

IF MOD(customers_cur%ROWCOUNT, v_commit_limit) = 0
THEN
  COMMIT;
END IF;

Why are they an Anti-Pattern?

They introduce side effects that the developer is not aware of, usually a self-inflicted ORA-01555 “Snapshot too old” exception. I will come back to this in the final part of this post.

Why are incremental commits used?

Over the years I have had many conversations with other Oracle Developers regarding the problems incremental commits cause. The common explanation I have heard for the introduction of incremental commits is that the developer didn’t want to “blow” the rollback segments.

I disagree with this. You should never commit inside a loop; you should commit when your transaction is complete, and only then. If your rollback segments are too small to support your transactions, then you need to work with your DBA to get them resized.
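To make the alternative concrete, here is a minimal sketch of the example above with the commit moved to where the transaction actually completes (the cursor, columns and table are the same placeholders as before):

...
CURSOR customers_cur
IS
  SELECT columns
    FROM some_tables;
BEGIN
  FOR i IN customers_cur
  LOOP
    -- doing something interesting with the row
  END LOOP;

  COMMIT; -- the transaction is now complete, so commit once, here
END;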

ORA-01555 Snapshot too old

I am going to spend the remainder of this post explaining why you will see this error when you perform an incremental commit. It will not be a deep dive into all the nuances of this exception, just its relevance to incremental commits. The best explanation of ORA-01555 is this AskTom post, a thread which originally started some 18 years ago. Much of what follows is distilled from that thread.

An ORA-01555 occurs when the database is unable to obtain a read consistent image. In order to obtain this image the database uses the rollback segment, but if that information has been overwritten then the database cannot use it and ORA-01555 is raised. So what causes this information to be overwritten? In the context of incremental commits, the fuse is lit when a commit is issued…

Here are the steps leading to this error, taken from Oracle Support Note 40689.1:

1. Session 1 starts query at time T1 and Query Environment 50

2. Session 1 selects block B1 during this query

3. Session 1 updates the block at SCN 51

4. Session 1 does some other work that generates rollback information.

5. Session 1 commits the changes made in steps ‘3’ and ‘4’.
(Now other transactions are free to overwrite this rollback information)

6. Session 1 revisits the same block B1 (perhaps for a different row).

Now, Oracle can see from the block’s header that it has been changed and it is later than the required Query Environment (which was 50). Therefore we need to get an image of the block as of this Query Environment.

If an old enough version of the block can be found in the buffer cache then we will use this, otherwise we need to rollback the current block to generate another version of the block as at the required Query Environment.

It is under this condition that Oracle may not be able to get the required rollback information, because Session 1’s changes have generated rollback information that has overwritten it, and the ORA-01555 error is returned.

The key point is step 5. By issuing a commit you are saying “I have finished with this data; other transactions are free to reuse it”. Except you haven’t finished with it, and when you really need it, it will have been overwritten.
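Those steps map directly onto the incremental commit pattern. Here is a minimal sketch, using a hypothetical big_table with columns id and processed, of a loop that follows them and can eventually raise ORA-01555 given enough data:

BEGIN
  FOR r IN (SELECT id FROM big_table) -- step 1: the long running query
  LOOP
    UPDATE big_table
       SET processed = 'Y'
     WHERE id = r.id; -- step 3: changing blocks the query will revisit

    COMMIT; -- step 5: the undo is now free to be overwritten
  END LOOP;
END;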

Edge Cases?

I am not aware of any edge cases that require incremental commits. If you know of any please let me know via the comments.

Acknowledgements

This post would not have been possible without help from the following sources:

AskTom question: Snapshot too old

Stack Overflow question: Commit after opening cursor

Using C# to upload a file to AWS S3 Part 2: Creating the C# Console App

This post follows on from part 1.  With the AWS S3 objects in place it is now time to create a simple C# console application that will upload a text file stored locally to the AWS S3 bucket.

The first step is to create a test file that you want to upload. In my example, I have created a text file in the Downloads folder called TheFile.txt which contains some text. After creating the text file, note the name of the file and its location.

Start Visual Studio and create a new console application

Use NuGet to add the AWSSDK.S3 package. At the time of writing this was at version 3.3.16.2.
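If you prefer the Package Manager Console to the NuGet UI, the equivalent command (with the version pinned to match) should be:

Install-Package AWSSDK.S3 -Version 3.3.16.2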

Add the following to App.config

 <appSettings>
   <add key="AWSProfileName" value="dotnet"/>
   <add key="AWSAccessKey" value="YOUR ACCESS KEY"/>
   <add key="AWSSecretKey" value="YOUR SECRET KEY"/>
 </appSettings>

You will find the values for the access key and secret key in the accessKeys.csv which you downloaded in part one of the tutorial.

Create a new class called S3Uploader and paste in the following code, ensuring you change the variables bucketName, keyName and filePath as appropriate. As you can see from the comments, this code is based on this answer from Stack Overflow.

For the sake of brevity the code deliberately does not have any exception handling or unit tests, as I wanted this example to focus purely on the AWS API without any other distractions. (A minimal error handling sketch follows the listing.)

using Amazon.S3;
using Amazon.S3.Model;

namespace S3FileUploaderGeekOut
{
  /// <summary>
  /// Based upon https://stackoverflow.com/a/41382560/55640
  /// </summary>
  public class S3Uploader
  {
    private string bucketName = "myimportantfiles";
    private string keyName = "TheFile.txt";
    private string filePath = @"C:\Users\Ian\Downloads\TheFile.txt";

    public void UploadFile()
    {
      // The client picks up the credentials from App.config; the region
      // must match the one the bucket was created in.
      using (var client = new AmazonS3Client(Amazon.RegionEndpoint.EUWest2))
      {
        // Key is the name the object will have in the bucket,
        // FilePath is the local file to upload.
        PutObjectRequest putRequest = new PutObjectRequest
        {
          BucketName = bucketName,
          Key = keyName,
          FilePath = filePath,
          ContentType = "text/plain"
        };

        PutObjectResponse response = client.PutObject(putRequest);
      }
    }
  }
}
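If you do want a starting point for error handling, the PutObject call can be wrapped in a try/catch using the SDK's AmazonS3Exception type. A minimal sketch, with the rest of the class unchanged:

try
{
  PutObjectResponse response = client.PutObject(putRequest);
}
catch (AmazonS3Exception e)
{
  // AmazonS3Exception covers S3-specific failures such as a misnamed
  // bucket, invalid credentials or denied access
  System.Console.WriteLine("S3 upload failed: " + e.Message);
}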

In Program.cs add the following:

namespace S3FileUploaderGeekOut
{
  class Program
  {
    static void Main(string[] args)
    {
      S3Uploader s3 = new S3Uploader();

      s3.UploadFile();
    }
  }
}

Run the program and, once it completes, navigate to your S3 bucket via the AWS console, where you will be able to see that your file has been successfully uploaded.

Summary

In this and the previous post I have demonstrated the steps required to upload a text file from a simple C# console application to an AWS S3 bucket.

Using C# to upload a file to AWS S3 Part 1: Creating and Securing your S3 Bucket

In this, the first of a two-part post, I will show you how to upload a file to the Amazon Web Services (AWS) Simple Storage Service (S3) using a C# console application.

The goal of this post is to get a very simple example up and running with the minimum of friction. It is not a deep dive into AWS S3, but a starting point which you can take in a direction of your choosing.

This post will focus on how to set up and secure your AWS S3 bucket, whilst the next will concentrate on the C# console app that will upload the file.

Dependencies

In order to build the demo the following items were used:

An AWS account (I used the 12-month free tier)

Visual Studio 2017 Community Edition 

AWS Toolkit for Visual Studio 2017

Creating a new AWS S3 bucket

Log on to your AWS Management Console and select S3 (which can be found by using the search bar or by looking under the Storage subheading).

You should now be on the Amazon S3 page as shown below.

This page gives you the headline features of your existing buckets. In the screenshot you can see an existing bucket along with various attributes.

Click the blue Create bucket button, enter a name for your bucket and the region where you wish to store your files, and then click Next.

This screen allows you to set various bucket properties. For this demo I will not be setting any, so click Next to move on to step 3.

Leave the default permissions as they are and click Next to move on to the final page.

After reviewing the summary, click Create bucket.

IAM User, Group and Policy

In order to access the S3 bucket from the .NET application, valid AWS credentials are required. Whilst you could use the AWS account holder’s credentials, Amazon recommends creating an IAM user and using that IAM user’s credentials when invoking the AWS API.

In this section of the post I will show you how to create a new IAM user and give it just enough privileges to interact with our new S3 bucket. The information shown below has been distilled from the AWS documentation.

There are a large number of steps that follow and it is easy to get lost. My advice is to read through once before diving in. If you get stuck (or I have missed something) let me know in the comments.

Return to the AWS Home screen

Select or search for IAM and, after selecting Users in the left hand menu, click the blue Add user button, which will bring up the Set user details page.

Give the user a name and set the access type to Programmatic access only. There is no need for this user to be given access to the AWS console. Click Next: Permissions.

Rather than giving permissions directly to the IAM user, Amazon recommends that the user be placed in a group and that permissions be managed through policies attached to that group. So let’s do that now.

From the Set permissions page click on Create Group.

Give your Group a meaningful name.

The next step is to attach one or more policies to the group. A policy in this context defines the permissions for the group. The Create group page lists the available policies but, unfortunately, there isn’t an existing policy that ensures the IAM user has access only to the new S3 bucket, so click on the Create policy button.

This opens the Create policy page in a new browser tab.

Click on the JSON tab and paste in the following, changing the bucket name as appropriate. (The source of this JSON can be found here.)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::myimportantfiles"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::myimportantfiles/*"
    }
  ]
}

At this point the JSON editor should look like this.

Once done click on the Review policy button. Give your policy a meaningful name and description and then click Create policy.

You will then receive confirmation that the policy has been created.

Now click the browser tab which displays the Create group page.

To find your new policy, change the filter (located left of the search bar) to “Customer managed” and press the refresh button (located next to the Create policy button). Once you have found the newly created policy, select it and press the Create group button.

You will now be returned to the Set permissions page; ensure the new group is selected and click Next: Review.

The final page is a review after which you can then click Create user.

Once the user has been created, you will see a confirmation along with a Download .csv button. Click the button to download the credentials, as these will be needed in the C# application discussed in the next post.


Review

At this point it is worth getting a cup of your favourite beverage and recapping what has been created:

A new AWS S3 bucket.

A new IAM user. This user has been placed in a group, which has a policy attached that allows various operations only on the newly created bucket.

A .csv file containing the required access and secret keys has been downloaded.

On to part 2

With the S3 bucket created and an IAM user configured with the necessary privileges, it is time to move on to part two, which will create the .NET console application that uploads a file into this bucket.