The Incremental Commit Anti-Pattern

This is a far more opinionated post than usual. It is not meant to be inflammatory; my goal is to have something tangible that I can point to the next time an Oracle developer thinks they want or need to “do” incremental commits.

In this post, an incremental commit is defined as code that commits a transaction while keeping the cursor open. This is also known as a fetch across commit.

I encounter incremental commits in PL/SQL code that issues a commit inside a loop, such as the examples below:

Example

...
CURSOR customers_cur
IS
  SELECT columns
    FROM some_tables;
BEGIN
  FOR i in customers_cur
  LOOP
    -- doing something interesting with the row
    COMMIT; -- ARGH!
  END LOOP;
END;

The commit may be decorated with variations of:

IF rows_processed > some_arbitrary_number
THEN 
  COMMIT;

Or

IF mod(customers_cur%rowcount, v_commit_limit) = 0 
THEN 
  COMMIT;

Why are they an Anti-Pattern?

They introduce side effects that the developer is not aware of, usually a self-inflicted ORA-01555 Snapshot too old exception. I will come back to this in the final part of this post.

Why are Incremental commits used?

Over the years I have had many conversations with other Oracle Developers regarding the problems incremental commits cause. The common explanation I have heard for the introduction of incremental commits is that the developer didn’t want to “blow” the rollback segments.

I disagree with this. You should never commit inside a loop. You should commit when your transaction is complete and only then. If your rollback segments are too small to support your transactions then you need to work with your DBA and get them resized.
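To show the alternative being recommended, here is a hedged sketch of the example above with the single commit moved outside the loop; the cursor and table names are the same placeholders as before:

```sql
DECLARE
  CURSOR customers_cur
  IS
    SELECT columns
      FROM some_tables;
BEGIN
  FOR i IN customers_cur
  LOOP
    -- doing something interesting with the row
    NULL;
  END LOOP;

  COMMIT; -- the transaction is now complete, so commit once, here
END;
```

The cursor is opened, every row is processed, and only then is the work made permanent, so the rollback information the query depends on is never marked as reusable mid-fetch.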

ORA-01555 Snapshot too old

I am going to spend the remainder of this post explaining why you will see this error when you perform an incremental commit. It will not be a deep dive into all the nuances of this exception, just its relevance to incremental commits. The best explanation for ORA-01555 is this AskTom post, which originally started some 18 years ago. Much of what follows is distilled from that thread.

An ORA-01555 occurs when the database is unable to obtain a read-consistent image. To obtain this image the database uses the rollback segment, but if that information has been overwritten then the database cannot use it and ORA-01555 is raised. So what causes this information to be overwritten? In the context of incremental commits, the fuse is lit when a commit is issued…

Here are the steps leading to this error, taken from Oracle Support Note 40689.1:

1. Session 1 starts query at time T1 and Query Environment 50

2. Session 1 selects block B1 during this query

3. Session 1 updates the block at SCN 51

4. Session 1 does some other work that generates rollback information.

5. Session 1 commits the changes made in steps ‘3’ and ‘4’.
(Now other transactions are free to overwrite this rollback information)

6. Session 1 revisits the same block B1 (perhaps for a different row).

Now, Oracle can see from the block’s header that it has been changed and that the change is later than the required Query Environment (which was 50). Therefore we need to get an image of the block as of this Query Environment.

If an old enough version of the block can be found in the buffer cache then we will use this, otherwise we need to rollback the current block to generate another version of the block as at the required Query Environment.

It is under this condition that Oracle may not be able to get the required rollback information, because Session 1’s subsequent changes have generated rollback information that has overwritten it, and the ORA-01555 error is returned.

The key point is step 5. By issuing a commit you are saying, “I have finished with this data; other transactions are free to reuse it.” Except you haven’t finished with it, and when you really need it, it will have been overwritten.

Edge Cases?

I am not aware of any edge cases that require incremental commits. If you know of any please let me know via the comments.

Acknowledgements:

This post would not have been possible without the help from the following sources:

AskTom question Snapshot too old

Stackoverflow Question Commit After opening cursor

Using C# to upload a file to AWS S3 Part 2: Creating the C# Console App

This post follows on from part 1. With the AWS S3 objects in place, it is now time to create a simple C# console application that will upload a locally stored text file to the AWS S3 bucket.

The first step is to create a test file that you want to upload. In my example, I have created a text file in the Downloads folder called TheFile.txt which contains some text. After creating the text file, note the name of the file and its location.

Start Visual Studio and create a new console application

Use NuGet to add the AWSSDK.S3 package. At the time of writing this was at version 3.3.16.2.

Add the following to App.config

 <appSettings>
   <add key="AWSProfileName" value="dotnet" />
   <add key="AWSAccessKey" value="YOUR ACCESS KEY" />
   <add key="AWSSecretKey" value="YOUR SECRET KEY" />
 </appSettings>

You will find the values for the access key and secret key in the accessKeys.csv which you downloaded in part one of the tutorial.

Create a new class called S3Uploader and paste in the following code, ensuring you change the variables bucketName, keyName and filePath as appropriate. As you can see from the comments, this code is based on this answer from Stack Overflow.

For the sake of brevity, the code deliberately does not have any exception handling or unit tests, as I wanted this example to focus purely on the AWS API without any other distractions.

using Amazon.S3;
using Amazon.S3.Model;

namespace S3FileUploaderGeekOut
{

  /// <summary>
  /// Based upon https://stackoverflow.com/a/41382560/55640
  /// </summary>
  public class S3Uploader
  {
    private string bucketName = "myimportantfiles";
    private string keyName = "TheFile.txt";
    private string filePath = @"C:\Users\Ian\Downloads\TheFile.txt";

    public void UploadFile()
    {
      var client = new AmazonS3Client(Amazon.RegionEndpoint.EUWest2);
      
      PutObjectRequest putRequest = new PutObjectRequest
      {
        BucketName = bucketName,
        Key = keyName,
        FilePath = filePath,
        ContentType = "text/plain"
      };

      PutObjectResponse response = client.PutObject(putRequest);
    }
  }
} 

In Program.cs, add the following:

namespace S3FileUploaderGeekOut
{
  class Program
  {
    static void Main(string[] args)
    {
      S3Uploader s3 = new S3Uploader();

      s3.UploadFile();
    }
  }
}

Run the program and, once it completes, navigate to your S3 bucket via the AWS console; you will see that your file has been successfully uploaded.

Summary

In this and the previous post I have demonstrated the steps required to upload a text file from a simple C# console application to an AWS S3 bucket.

Using C# to upload a file to AWS S3 Part 1: Creating and Securing your S3 Bucket

In this, the first of a two-part post, I will show you how to upload a file to the Amazon Web Services (AWS) Simple Storage Service (S3) using a C# console application.

The goal of this post is to get a very simple example up and running with the minimum of friction. It is not a deep dive into AWS S3 but a starting point which you can take in a direction of your choosing.

This post will focus on how to set up and secure your AWS S3 bucket, whilst the next will concentrate on the C# console app that uploads the file.

Dependencies

In order to build the demo the following items were used:

An AWS account. (I used the 12-month free tier)

Visual Studio 2017 Community Edition 

AWS Toolkit for Visual Studio 2017

Creating a new AWS S3 bucket

Log on to your AWS Management Console and select S3 (which can be found by using the search bar or by looking under the Storage subheading).

You should now be on the Amazon S3 page as shown below.

This page gives you the headline features of your existing buckets. In the screenshot you can see an existing bucket along with various attributes.

Click the blue Create bucket button, enter a name for your bucket and the region where you wish to store your files, and then click Next.

This screen allows you to set various bucket properties. For this demo I will not be setting any, so click Next to move on to step 3.

Leave the default permissions as they are and click Next to move on to the final page.

After reviewing the summary, click Create bucket.

IAM User, Group and Policy

In order to access the S3 bucket from the .NET application, valid AWS credentials are required. Whilst you could use the AWS account holder’s credentials, Amazon recommends creating an IAM user and using that user’s credentials when invoking the AWS API.

In this section of the post I will show you how to create a new IAM user and give it just the privileges required to interact with our new S3 bucket. The information shown below has been distilled from the AWS documentation.

There are a large number of steps that follow and it is easy to get lost. My advice is to read through once before diving in. If you get stuck (or I have missed something) let me know in the comments.

Return to the AWS Home screen

Select or search for IAM and, after selecting Users in the left-hand side menu, click the blue Add user button, which will bring up the Set user details page.

Give the user a name and set the access type to Programmatic access only. There is no need for this user to be given access to the AWS console. Click Next: Permissions.

Rather than giving permissions directly to the IAM user, Amazon recommends that the user be placed in a group and that permissions be managed through policies attached to that group. So let’s do that now.

From the Set permissions page click on Create Group.

Give your Group a meaningful name.

The next step is to attach one or more policies to the group. Policies in this context define the permissions for the group. The Create group page lists the available policies, but unfortunately there isn’t an existing policy that ensures the IAM user has access only to the new S3 bucket, so click on the Create policy button.

This opens the Create policy page in a new browser tab.

Click on the JSON tab and paste in the following, changing the bucket name as appropriate. (The source of this JSON can be found here.)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::myimportantfiles"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::myimportantfiles/*"
    }
  ]
}

At this point the JSON editor should look like this.

Once done click on the Review policy button. Give your policy a meaningful name and description and then click Create policy.

You will then receive confirmation that the policy has been created.

Now click the browser tab which displays the Create group page.

To find your new policy, change the filter (located left of the search bar) to “Customer managed” and press the refresh button (located next to the Create policy button). Once you have found the newly created policy, select it and press the Create group button.

You will now be returned to the Set Permissions Page; ensure the new group is selected and click Next: Review.

The final page is a review after which you can then click Create user.

Once the user has been created, you will see a confirmation along with a Download .csv button. Click the button to download the credentials, as these will be needed in the C# application discussed in the next post.

Review

At this point it is worth getting a cup of your favourite beverage and recapping what has been created:

A new AWS S3 bucket.

A new IAM user. This user has been placed in a group. The group has a policy attached that allows it to perform various operations only on the new bucket that has been created.

A CSV file containing the required access and secret keys has been downloaded.

On to part 2

With the S3 bucket and IAM user created and configured with the necessary privileges, it is time to move on to part two, which covers the .NET console application that uploads a file into this bucket.

NDC 2018

I try to attend a developers’ conference once a year, and this year I attended NDC London 2018. I was surprised that only two of the many developers I asked before going knew of the NDC brand of conferences. I discovered NDC thanks to Carl and Richard of .NET Rocks! Thanks gents, I owe you.

Just in case you are not aware of what NDC is, here is a brief description courtesy of the NDC London site.

About NDC

Since its start-up in Oslo 2008, the Norwegian Developers Conference (NDC) quickly became one of Europe’s largest conferences for .NET & Agile development. Today NDC Conferences are 5-day events with 2 days of pre-conference workshops and 3 days of conference sessions.

NDC London

In December 2013 NDC did the first NDC London conference. The conference was a huge success and we are happy to announce that the 5th NDC London is set to happen the week of 15-19 January 2018.

I didn’t attend the pre-conference workshops, so my NDC adventure started on Wednesday. First impressions of the conference and its venue, The Queen Elizabeth II Centre, Westminster, were superb; upon arriving there were no queues to register. Another plus was that the cloakroom was free, which, although a small touch, was one I really appreciated.

Throughout the conference, continuous high-quality hot drinks and food were served, starting with cakes and pastries and moving on to a variety of hot food. I wouldn’t normally mention food at a conference, but it was of a standard I had not encountered at other conferences, so I had to write a few lines about it.

My reason for attending was to hear a number of people speak whose podcasts I listen to, whose books and blogs I have read, or who have helped me by providing answers on Stack Overflow. As a newbie to this conference I did not know what to expect from the talks, but I was not disappointed, and for me the conference experience went into the stratosphere from here on in.

The talks are scheduled to last one hour and, as you will see from the agenda, they cover a wide variety of subjects. The presenters did not disappoint. There was no death by PowerPoint, no “about me” slides, no errs/erms or the dreaded “like”. The presenters were passionate about their topics and clearly enjoyed engaging with their audiences. Some had a slightly more conversational style, whilst others used self-deprecation, and one in particular used the medium of song (thank you, Jon Skeet, that was unforgettable). One common trait I noticed is that many of the presenters are building and experimenting with “stuff” all the time.

As is the norm, after a talk or session the audience is invited to give feedback, and NDC has probably the best feedback mechanism I have so far encountered: as you leave the room after a talk, just throw a colour in the box. Brilliant.

Here are the sessions I attended:

Wednesday

Keynote: What is programming anyway?
Felienne

Sondheim Seurat and Software: finding art in code
Jon Skeet

You build it, you run it (why developers should also be on call)
Chris O’Dell

I’m Pwned. You’re Pwned. We’re All Pwned.
Troy Hunt

Refactoring to Immutability
Kevlin Henney

Adventures in teaching the web
Jasmine Greenaway

C# 7.1, and 7.2: The releases you didn’t know you had
Bill Wagner

Thursday

Building a Raspberry Pi Kubernetes Cluster and running .NET Core
Alex Ellis & Scott Hanselman

An Opinionated Approach to ASP.NET Core
Scott Allen

Who Needs Dashboards?
Jessica White

Hack Your Career
Troy Hunt

HTTP: History & Performance
Ana Balica

Going Solo: A Blueprint for Working for Yourself
Rob Conery

.NET Rocks Live with Jon Skeet and Bill Wagner – Two Nice C# People

Friday

The Modern Cloud
Scott Guthrie

Web Apps can’t really do *that*, can they?
Steve Sanderson

The Hello World Show Live with Scott Hanselman, Troy Hunt, Felienne, and Jon Skeet

Tips & Tricks with Azure
Scott Guthrie

Solving Diabetes with an Open Source Artificial Pancreas
Scott Hanselman

Why I’m Not Leaving .NET
Mark Rendle

Summary

NDC London 2018 was the best conference I have ever attended. I have returned from it motivated to do more; to experiment and try stuff that I hadn’t even thought about.

There were so many highlights for me but having my photo taken with Carl and Richard was the best. Seriously guys you rock!

Technical Books read in 2017

Looking back at the technical books I read in 2017, the biggest surprise is that I didn’t read any books on Oracle, which I think is the longest gap I have had between Oracle books. This hiatus will not last long into 2018 because of the imminent launch of Pete Finnigan’s new book.

The four books I did read took me far away from my comfort zone, and two of the four have been screaming bargains (HT to Seth Godin) for what I have learnt from them.

Microsoft C# Step by Step 8th Edition

This was the first technical book I read this year. As I continue to learn C#, I look to buy any and all introductory C# books to read different authors’ descriptions of the language fundamentals.

The book is well structured, with nice end notes that recap what each chapter has covered. In addition, the code examples were complete and easy to follow. Despite all the positives, the book didn’t really grab me, and after the first few chapters it became a bit of a slog to get through, so I didn’t finish it. Not a bad book by any means, just not one for me.

Adaptive Code 2nd Edition

This is my favourite technical book of the year. It has stretched me further than I thought possible and has taught me so much.

It is split into four parts. Part I is a good overview of agile development frameworks: Scrum and Kanban. Part II focuses on dependency management, programming to interfaces, testing and refactoring. Part III covers the SOLID principles, and Part IV covers dependency injection, finishing up with coupling.

Although not a huge book at 421 pages, it has taken the best part of six months for me to read and understand about three quarters of it. I feel I will be revisiting specific chapters for a long time to come, as I have only just scratched the surface of the valuable information this book contains.

One minor criticism is that not all the code examples can be run. You are given a fragment of code that you may wish to play with, to see the different results of changing x and y or just to get a better understanding of the topic being discussed, but this is not always possible. That aside, this is an easy book to recommend.

MongoDB The Definitive Guide 2nd Edition

This year I have been experimenting with a number of C# console applications that use NoSQL databases. Rather than endlessly Googling for information, I thought I would buy this book to get a good grounding in MongoDB, especially when it comes to security.

I bought the 2nd edition of this book, which is now out of date; I quickly lost confidence in it and returned to Googling for information and using the official MongoDB docs.

Dependency Injection in .NET

Dependency injection (DI) was a technique hitherto unknown to me. Although it is discussed in Adaptive Code 2nd Edition, I felt I needed to find out more and hear other people’s opinions. One other point which piqued my interest was that a blog post referenced by many answers on Stack Overflow describes DI in two pages of A4, yet there is a 400+ page book on the subject.

I bought Dependency Injection in .NET for two reasons: firstly, it is focused on .NET, which I am currently learning, and secondly, the overwhelmingly positive reviews on Amazon.

The book is split into four parts. Part I naturally starts with an overview of the problem that DI solves, with a simple example that is initially written without DI and then rewritten to use it. The next chapters move on to a bigger, real-world example. Part I closes with a look at DI containers.

Part II covers DI patterns, then, interestingly, anti-patterns, and then DI refactorings. Part III looks at DIY DI, and Part IV takes an in-depth look at DI containers such as Castle Windsor, StructureMap and so on.

At the time of writing I am on page 133, which is the start of the DI anti-patterns. I won’t be reading much further, as I feel I have got as much as I can from this book for the time being, but as my experience with OO languages grows I will be back to correct bad habits and learn how to get the best out of the DI containers I may be using.

One other interesting point is that the cover of this book has, for reasons I do not know, gathered more comments from people passing by my desk than any other book I have owned!

Conclusion

I have gained much from reading these books (yes, even you, MongoDB: The Definitive Guide). They have all added something to my skills as a developer and given me different ideas and solutions to problems that I currently face and am yet to face.

I can’t wait to see what the technical books I read in 2018 will be…

Visual Studio’s Live Share != Screen Share

I have been very excited about the potential of Live Share since its announcement back in November.

In addition to the content on the Live Share site, I would also recommend listening to this episode of Hanselminutes, where Scott talks to Amanda Silver about how Visual Studio’s Live Share goes far beyond “text editor sharing” to something deeply technically interesting.

Although it is still very early days, Live Share is definitely something worth keeping an eye on. A private preview is coming soon, and you can sign up for more information here.

New book: Oracle Incident Response and Forensics: Preparing for and Responding to Data Breaches

It looks like the first technical book I will read in 2018 has already been decided. Pete Finnigan has a new book coming out in early January.

More details from the publisher, Apress, can be found here. At the time of this post there was no preview available there; fortunately, the listing on Amazon has one, so you can check out the contents here.

I will add a further post once I have read it.

C# Attributes

Overview

This post gives an overview of C# attributes: what they are, the flavours they come in and when they should be used, finishing up with some examples.

What are C# Attributes?

Attributes in C# provide a mechanism for associating metadata with code entities such as types, methods and properties. Once defined, you can retrieve the metadata at run time using reflection.

What flavours do they come in?

Two flavours of attributes are available:

  1. Predefined, which are provided by the .NET base class library.
  2. Custom, which are created by the developer to fill any gaps in the required metadata.

Attributes can be applied to many different targets, such as the assembly, types (struct, class, interface), methods and so on. Please refer to the table here for the full list of available targets.

When should I use them?

Reviewing the official .NET guide on attributes for this post, the docs give a verbose introduction. I prefer the following quote, taken from a blog post by Eric Lippert, which succinctly explains the reason for using attributes:

use attributes to describe your mechanisms

Example 1: The Obsolete Predefined Attribute

This first example shows how to use the Obsolete Predefined attribute.

The class Foo has a method that is now obsolete and should no longer be used.

using System;

namespace ObsoleteAttributeGeekOut
{
  class Foo
  {
    [Obsolete(message:"Use NewBar() instead")]
    public void OldBar()
    {
      Console.WriteLine("Calling OldBar");
    }

    public void NewBar()
    {
      Console.WriteLine("Calling NewBar");
    }
  }
}

I have added the Obsolete attribute to the OldBar method along with a message to help developers identify which method should be used instead.

If I then try to use OldBar, as in the code below

namespace ObsoleteAttributeGeekOut
{
  class Program
  {
    static void Main(string[] args)
    {
      Foo f = new Foo();
      f.OldBar();
    }
  }
}

Visual Studio displays a helpful message.

When compiling the program, you will see the following warning:

ObsoleteAttributeGeekOut\Program.cs(10,13,10,23): warning CS0618: ‘Foo.OldBar()’ is obsolete: ‘Use NewBar() instead’

The code for this example can be found here.

Example 2: Creating a Custom Class Attribute & Using Reflection to examine the data at run time

The first step to create a Custom Attribute is to create a new class deriving from System.Attribute as the code below demonstrates.

using System;

namespace ClassLevelAttributes
{
  [AttributeUsage(AttributeTargets.Class)]
  public sealed class UsefulClassMessageAttribute : Attribute
  {
    private string s;

    public string S
    {
      get
      {
        return s;
      }
    }

    public UsefulClassMessageAttribute(string s)
    {
      this.s = s;
    }
  }
}

There are several items in this class that are worthy of further explanation.

  1. I use a Predefined attribute to define the level that the Custom attribute should be used at.
  2. The class is sealed. Rather than sidetrack the example if you are interested in reading more on why you should seal your attribute classes take a look at this StackOverflow question.
  3. The name of the class ends with Attribute. This is a convention rather than a rule and is recommended for readability. As you will see when the attribute is applied the word Attribute is optional.

With the Custom Attribute UsefulClassMessageAttribute defined, I can now go ahead and use it, as in the code below.

using System;

namespace ClassLevelAttributes
{
  [UsefulClassMessage(s:"This is something useful")]
  class Example
  {
    public void Foo()
    {
      Console.WriteLine("Called Foo");
    }
  }
}

Here you can see the Custom Attribute applied to the class Example. Note that the name used is UsefulClassMessage and not UsefulClassMessageAttribute.

A Custom attribute is worthless until you can read the information it holds, so for the final code in this example I use the following method to obtain it.

using System;

namespace ClassLevelAttributes
{
  class Program
  {
    static void Main(string[] args)
    {
      GetAttribute(typeof(Example));
    }

    public static void GetAttribute(Type t)
    {
      UsefulClassMessageAttribute theAttribute = (UsefulClassMessageAttribute)Attribute.GetCustomAttribute(t, typeof(UsefulClassMessageAttribute));

      if(theAttribute == null)
      {
        Console.WriteLine("The attribute was not found");
      }
      else
      {
        Console.WriteLine($"The attribute value is: {theAttribute.S}");
      }
    }
  }
}

Running this, you would see the following in the console window:

The attribute value is: This is something useful

The code for this example can be found here.

Example 3: Creating a Custom Method Attribute & Using Reflection to examine the data at run time

As shown in Example 2, the first step in creating a Custom Method Attribute is to create a class that derives from System.Attribute.

using System;

namespace MethodLevelAttributes
{
  [AttributeUsage(AttributeTargets.Method)]
  public sealed class UsefulMethodMessageAttribute : Attribute
  {
    string s;
    public string S
    {
      get
      {
        return s;
      }
    }

    public UsefulMethodMessageAttribute(string s)
    {
      this.s = s;
    }
  }
}

With the exception of its name and the AttributeUsage, which is now set to Method, this class is identical to the one shown in Example 2.

With the Custom Attribute UsefulMethodMessageAttribute defined, I can now go ahead and use it, as in the code below.

using System;
namespace MethodLevelAttributes
{
  class Example
  {
    [UsefulMethodMessage(s: "Some pertinent information about the method")]
    public void Foo()
    {
      Console.WriteLine("Calling Foo");
    }
  }
}

Here you can see that the Custom Attribute has been applied to the method Foo.

In the final code example I use reflection to obtain information about the attribute.

using System;
using System.Reflection;

namespace MethodLevelAttributes
{
  class Program
  {
    static void Main(string[] args)
    {
      GetAttribute();
    }

    public static void GetAttribute()
    {
      foreach(MethodInfo mi in typeof(Example).GetMethods())
      {
        UsefulMethodMessageAttribute theAttribute = (UsefulMethodMessageAttribute)Attribute.GetCustomAttribute(mi, typeof(UsefulMethodMessageAttribute));

        if (theAttribute != null)
        {
          Console.WriteLine($"The attribute value is: {theAttribute.S}");
        }
      }
    }
  }
}

Running this you would see the following in the console:

The attribute value is: Some pertinent information about the method

The code for this example can be found here.

Acknowledgements

Eric Lippert 

C# 6.0 and the .NET 4.6 Framework

Writing Custom Attributes

Comparing columns in Microsoft Excel

This is my first post on Excel. It is an aide-mémoire, as I know it will come in useful to a future me, and if it helps other people then that is even better.

I recently needed to compare two columns of data in Excel and identify the differences between them. For this example I will use the first 20 values of the Fibonacci sequence. Below are the two columns of data, which should be identical:

However, it doesn’t take long to see that the values differ on row 15. If this data set were twice as long, or had more inconsistencies, the chances of me missing something would increase. So over to Excel to do the heavy lifting of comparing these columns and flagging any differences.

A quick review of the available functions within Excel suggested that the IF function would be a good starting point, and after some experiments I settled on the following:

=IF(A1=C1, "Match", "ERROR")

What this formula does is: if the value in cell A1 matches the value in cell C1, I will see the text Match; if they do not, I will see ERROR.
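Because the cell references are relative, filling the formula down the comparison column makes each row check its own pair of cells; for example, assuming the data starts in row 1, the row where the values differ would be checked by:

```
=IF(A15=C15, "Match", "ERROR")
```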

After adding to a new column, I can see that for the row where the data is different, Excel reports ERROR.

This is a good start, but I could still miss the ERROR, so I really want it to stand out. One method to achieve this is to use Excel’s Conditional Formatting.

You can find the Conditional Formatting on the Home tab of the Ribbon.

First select all the values in column E (the ones with either Match or ERROR), then select Conditional Formatting, followed by Highlight Cell Rules and finally Text that Contains…, which will bring up the following dialog box:

I am most interested in cells that contain the ERROR text, so I entered that in the text box and left the default colour choices. After pressing OK, I see that the ERROR now stands out.

I hope you found this useful.

Displaying records from MongoDB in an ASP.NET MVC Application

In a previous post I explained how to save a Twitter stream to MongoDB using C#. In this post I will show you how to build an ASP.NET MVC application to display those records.

The example that follows was built using Visual Studio 2015 Community Edition and MongoDB 3.4.6.

Firstly, this is how the data looks in Mongo. I start a new Mongo shell, select the database of interest, in this case twitterstream, and then use db.auth to authenticate myself as the demouser,

and execute a find, which displays the data in a hard-to-read JSON format (redacted).
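For readers following along without the screenshots, the shell session described above looks something like this; note that the collection name tweets is a placeholder, so substitute whichever collection your stream was saved to:

```
use twitterstream
db.auth("demouser", "abcd")
db.tweets.find()
```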

The next step is to switch to Visual Studio and create a new ASP.NET Web Application (.NET Framework).

Select Ok.

Accept the defaults and again select Ok.

Now add the following packages via NuGet

  • mongocsharpdriver (Version used 2.4.4)
  • MongoDB.Driver (Version used 2.4.4)

Open the Web.config file and add the following MongoDB entries within the appSettings element.

<add key="MongoDatabaseName" value="twitterstream" />
<add key="MongoUsername" value="demouser" />
<add key="MongoPassword" value="abcd" />
<add key="MongoPort" value="27017" />
<add key="MongoHost" value="localhost" />

These values identify your MongoDB instance, so if you are using a different database, user, etc., modify them as appropriate.


Within the App_Start folder create a new class called MongoContext and add the following code:

using System;
using MongoDB.Driver;

namespace DisplayMongoData99.App_Start
{
  public class MongoContext
  {
    MongoClient _client;
    MongoServer _server;

    public MongoDatabase _database;

    public MongoContext()
    {
      // reading credentials from the Web.config file
      var MongoDatabaseName = System.Configuration.ConfigurationManager.AppSettings["MongoDatabaseName"];
      var MongoUsername = System.Configuration.ConfigurationManager.AppSettings["MongoUsername"];
      var MongoPassword = System.Configuration.ConfigurationManager.AppSettings["MongoPassword"];
      var MongoPort = System.Configuration.ConfigurationManager.AppSettings["MongoPort"];
      var MongoHost = System.Configuration.ConfigurationManager.AppSettings["MongoHost"];

      // creating the credential (MONGODB-CR authentication)
      var credential = MongoCredential.CreateMongoCRCredential(MongoDatabaseName, MongoUsername, MongoPassword);

      // creating MongoClientSettings
      var settings = new MongoClientSettings
      {
        Credentials = new[] { credential },
        Server = new MongoServerAddress(MongoHost, Convert.ToInt32(MongoPort))
      };

      _client = new MongoClient(settings);
      _server = _client.GetServer();
      _database = _server.GetDatabase(MongoDatabaseName);
    }
  }
}

This class sets up a connection to the MongoDB instance. It reads the values from the appSettings entries in Web.config and then calls the MongoClientSettings constructor, setting the Credentials and Server properties.

Within the Models folder create a new class called TweetModel.cs and add the following code:

using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;

namespace DisplayMongoData99.Models
{
  public class TweetModel
  {
    [BsonId]
    public ObjectId Id { get; set; }

    [BsonElement("thetweet")]
    public string Tweet { get; set; }
  }
}

The TweetModel class represents the data that will be returned by MongoDB; in this case, the object id and the tweet.
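For reference, each document in the collection is assumed to look something like the following (the ObjectId value and tweet text here are made up for illustration); the _id field maps onto Id and thetweet onto Tweet:

```
{
    "_id" : ObjectId("598a1d2bc1a4f12d9c000001"),
    "thetweet" : "an example tweet captured by the streaming application"
}
```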

The next step is to create a new controller. So right click on the Controllers Folder and select Add then Controller. You should then see the following dialog window:

Choose “MVC 5 Controller with read/write actions”, select Add and give it a meaningful name; in this example I have used TweetController. Click Add again and Visual Studio will create TweetController.cs.

Now amend TweetController.cs as shown below:

using System.Linq;
using System.Web.Mvc;
using DisplayMongoData99.App_Start;
using DisplayMongoData99.Models;

namespace DisplayMongoData99.Controllers
{
  public class TweetController : Controller
  {
    MongoContext _dbContext;

    public TweetController()
    {
      _dbContext = new MongoContext();
    }

    // GET: Tweet
    public ActionResult Index()
    {
      var tweetDetails = _dbContext._database.GetCollection<TweetModel>("tweets").FindAll().ToList();
      return View(tweetDetails);
    }
...
rest of existing scaffolding code here which remains unchanged... 
public ActionResult Details(int id)
...

The changes introduce a new constructor, which creates a MongoContext object, and the Index() method now queries the MongoDB database for a collection called “tweets”, which is returned to the View. The rest of the scaffolding code created by Visual Studio is unchanged.
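As an aside, the same legacy driver API can also return a filtered subset rather than every document. A sketch, assuming you only want tweets matching a keyword (the field name thetweet comes from the model above; the keyword itself is illustrative):

```csharp
using System.Linq;
using MongoDB.Bson;
using MongoDB.Driver.Builders;

// build a case-insensitive regular-expression match on the "thetweet" field
var query = Query.Matches("thetweet", new BsonRegularExpression("mongodb", "i"));

// Find(query) returns only the matching documents
var filtered = _dbContext._database
                         .GetCollection<TweetModel>("tweets")
                         .Find(query)
                         .ToList();
```

Query.Matches builds an IMongoQuery for the legacy MongoCollection API, the same API surface that FindAll comes from.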

The final step is to right click inside the Index() method and select Add View. This will bring up another dialog window, which you should complete as follows:

 

Pressing Add results in Visual Studio creating more scaffolding code, which when complete should display the following:

Pressing the “Microsoft Edge” button (or whichever browser you have previously selected) will launch the application, and after a few moments you will see your MongoDB data displayed.

 

Summary

In this post I have demonstrated how to get started displaying the contents of a MongoDB database from an ASP.NET MVC application.

I have skipped over the challenges I encountered in getting the security of my MongoDB set up correctly, and the out of the box display of the data is dreadful at best, so if you would like to see further posts on security or on making the display more production-like, sound off in the comments.

Acknowledgements

This post would not have been possible without the generosity of the following two posts:

  1. http://www.c-sharpcorner.com/article/simple-crud-operation-using-asp-net-mvc-and-mongodb/
  2. https://www.claudiokuenzler.com/blog/553/authentication-mongodb-3.x-failed-with-mechanism-mongodb-cr