Then I get a few weeks into my internship, and there’s this new idea: a webjob. Essentially, a console app that runs in the cloud for background tasks. They’re awesome…until they start doing weird things while deployed. Azure gives you a couple hundred lines of debug console for free, but that’s just not a sustainable solution. This was the point where my boss said some vague words about “App Insights” and pointed at an example the other developer had worked out a while back (modified from a guide neither of us was able to locate).
It took me quite a while to figure out how to even configure Application Insights in the webjob, and even longer to really zero in on how to use it intelligently. So I wanted to collect some of that hard-earned knowledge into one very long walkthrough. We’ll put together a toy app using the webjob template, and configure it to run and send telemetry to an Application Insights resource.
The final code from this project is in a repo you’re more than welcome to clone.
Step 1: Make The Project
Visual Studio couldn’t possibly make this any easier–there’s a template RIGHT THERE.
It opens up with some nice skeleton classes to get you rolling, which we will look at very soon…after a visit to the Azure Portal.
Step 2: Create an Application Insights Resource
You’ll need to create a resource in Azure that will actually read and organize the telemetry you send it, the Application Insights resource.
Log into your Azure account, and add a resource (I used the big green sidebar button).
Easiest option is to type “Application Insights” into the search bar and go from there.
You’ll have some things to fill out and select. A useful convention is to use “[AppName]Insights” for the resource name, and to keep the Insights resource in the same resource group and Location as the app it’s monitoring. For webjobs, the ASP.NET web application type works well.
Once you confirm your decisions, you’ll see a couple of notifications. Click “Go To Resource” when the second one pops up; it should only take a minute or two.
This is the Overview blade of the Application Insights resource. We’ll be coming back here, but for now just copy the Instrumentation Key; this is how you’re going to link the webjob to the Application Insights resource.
Step 3: Update the App.config file
Head back to your webjob in Visual Studio, and open up the App.config file. We’re going to add an appSettings section with two keys, “appName” and “appInsights”: “appName” is the name of the webjob, and “appInsights” holds the instrumentation key you just copied.
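In case it helps, that section might look roughly like this (the app name and key are placeholders; paste in your own):

```xml
<configuration>
  <appSettings>
    <!-- "appName" is whatever you call your webjob; "appInsights" is the
         Instrumentation Key copied from the Azure portal (placeholder shown) -->
    <add key="appName" value="MyToyWebjob" />
    <add key="appInsights" value="00000000-0000-0000-0000-000000000000" />
  </appSettings>
</configuration>
```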
Step 4: Install the Application Insights NuGet Package
Alrighty–now to install the NuGet package.
You may think that searching for Application Insights will yield the bounty you seek, yet you would be as wrong as I was the first time or three I did this:
Fully qualify that package name with Microsoft.ApplicationInsights and you’ll do much better–install that top one.
Step 5: Initialize Your First Telemetry Client!
In the Program class, you’ll want to add a static variable as shown below–note how it won’t work without adding the using statement.
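A minimal sketch of what that looks like (the field name is just my convention; the using statement is the part that makes it compile):

```csharp
using Microsoft.ApplicationInsights; // without this, TelemetryClient is unknown

class Program
{
    // Static so every method in the class can send telemetry through it
    private static TelemetryClient _appInsights = new TelemetryClient();
}
```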
Next, we’ll need to assign that Instrumentation Key to the _appInsights object–notice you may run into problems with ConfigurationManager being unknown.
You’ll need to add a reference to System.Configuration by adding a reference to the project, and then searching for “Configuration” under “Assemblies.”
At that point, Intellisense will help you add the using statement to make all the red squiggles go away.
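With the assembly reference and using statement in place, the assignment itself is a one-liner, something like:

```csharp
using System.Configuration; // needs the System.Configuration assembly reference

// Inside Main(), before any telemetry is sent:
_appInsights.InstrumentationKey = ConfigurationManager.AppSettings["appInsights"];
```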
Step 6: Write Your First Custom Event!
Application Insights lets you track several things; the three I actually use are Events, Traces, and Exceptions. We’ll start with Events. An event is basically a trigger: something that puts a process in motion. If you’ve done much coding you probably have a passing familiarity with them, but just in case, here’s a quick rundown. An Application Insights event simply records that such a thing took place.
I generally log a custom event when the webjob first comes to life, like below.
All we’re doing is telling Application Insights “This App Is Starting,” inserting the appName AppSetting from the App.config file. It’s really that easy.
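As a sketch, that call looks something like this:

```csharp
// Pull the webjob's name from App.config and record a custom event
_appInsights.TrackEvent($"{ConfigurationManager.AppSettings["appName"]} Is Starting");
```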
If we run this webjob right now, not much is going to happen…and in fact, you likely won’t see anything in the Application Insights portal for at least five minutes. So rather than waste time on THAT, let’s go ahead and actually write a function for this webjob.
Step 7: Write a Function
I was going to explain, in probably TMI levels of detail, what is going on in the Functions class of a webjob. But this is where I have Major Winchester pop into my head, so perhaps we’ll save that for a totally different blog post.
Open up the “Functions” class in the project. You’ll see code similar to this:
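If you started from the same Visual Studio template, the boilerplate should look roughly like this:

```csharp
public class Functions
{
    // This function will get triggered/executed when a new message is
    // written on an Azure Queue called queue.
    public static void ProcessQueueMessage([QueueTrigger("queue")] string message, TextWriter log)
    {
        log.WriteLine(message);
    }
}
```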
While this is some awesome boilerplate code, it isn’t helpful to our goal: to get some telemetry into Application Insights as easily as possible. So, we’re going to implement a goofy little toy app I threw together in about 15 minutes (so yes, I realize it’s not the most elegant piece of work):
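A sketch of what that toy app might look like (GetDateStrings() and ParseDate() are my own stand-in helper names):

```csharp
using System;
using System.Configuration;
using Microsoft.ApplicationInsights;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // Instantiated with an object initializer, just to show that we can
    private static readonly TelemetryClient _telemetry = new TelemetryClient
    {
        InstrumentationKey = ConfigurationManager.AppSettings["appInsights"]
    };

    // No queue trigger argument anymore, so mark the method NoAutomaticTrigger
    [NoAutomaticTrigger]
    public static void ProcessDates()
    {
        _telemetry.TrackEvent("ProcessDates Started");

        foreach (var dateString in GetDateStrings())
        {
            try
            {
                var parsed = ParseDate(dateString);
                _telemetry.TrackTrace($"Parsed '{dateString}' as {parsed:yyyy-MM-dd}");
            }
            catch (FormatException ex)
            {
                _telemetry.TrackException(ex);
            }
        }

        _telemetry.TrackEvent("ProcessDates Finished");
    }
}
```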
A few things to call out here:
- We use an initializer to instantiate the TelemetryClient here, for no particular reason other than to demonstrate that we can
- We added a “NoAutomaticTrigger” attribute to the public ProcessDates() method
  - This is because we removed the queue trigger argument
- We bracket the method with events tracked by the TelemetryClient
  - One to indicate when it started, one to indicate when it stopped
- We use trace and exception tracking
Wait, What’s a Trace??
Traces are what I consider the breadcrumbs that tell me what’s going on in the program. Wherever you’d put a “Console.WriteLine()” or a “Debug.WriteLine()” type statement, that’s a Trace.
I have a process that works through a batch of 1000 records at a time, and it helps me keep tabs on performance to see “Starting Batch 1…Starting Batch 2…” in my Application Insights portal. When I know there’s roughly 300,000 records in a particular table, seeing “Batch 249” lets me know all is well with the world.
I also have a few webjobs where we handle 95% of records via one process, but every now and then we find an individual who needs a secondary process. I put a trace in there indicating “Used Process B For Record Id XYZ” just so we can see that yes, the squirrelly ones were handled properly.
It IS possible to run into trace fatigue, where you can’t identify the useful ones because they all blur together, and Azure will actually limit the number of traces it shows in the portal (though it stores them all) if too many are being sent. It’s a balance I still struggle to find, so don’t be afraid to experiment and adjust.
And just in case you’re following along at home, here are the two methods being called by ProcessDates():
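A sketch of those two helpers, assuming the hypothetical names above (one deliberately bad value is included so we can see exception telemetry; List needs System.Collections.Generic):

```csharp
// A hard-coded batch of inputs, one of which will fail to parse
private static List<string> GetDateStrings()
{
    return new List<string> { "2018-01-15", "3/22/2018", "not-a-date" };
}

// Thin wrapper around DateTime.Parse; throws FormatException on bad input
private static DateTime ParseDate(string input)
{
    return DateTime.Parse(input);
}
```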
Step 8: Add The Storage Account Connection String
In the interests of brevity (too late, I know) I’m going to skip the walk-thru on how to create an Azure storage account and roll it into another post. Suffice to say, you’ll need one, and you’ll need to paste the connection strings from the Access Keys blade into your App.config file like below.
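The section you’re pasting into should end up looking something like this (account name and key are placeholders):

```xml
<connectionStrings>
  <!-- Both names are expected by the WebJobs SDK; paste the connection string
       from your storage account's Access Keys blade (placeholders shown) -->
  <add name="AzureWebJobsDashboard"
       connectionString="DefaultEndpointsProtocol=https;AccountName=YOUR_ACCOUNT;AccountKey=YOUR_KEY" />
  <add name="AzureWebJobsStorage"
       connectionString="DefaultEndpointsProtocol=https;AccountName=YOUR_ACCOUNT;AccountKey=YOUR_KEY" />
</connectionStrings>
```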
Step 9: Update Program.Main() to Call the Function
The default operation for a webjob is to run continuously, calling a Function method whenever its trigger event happens. That’s not what we’re set up for, so we need to have the webjob host call the method at runtime using reflection:
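A minimal sketch of that Main() method, assuming the _appInsights field from earlier:

```csharp
static void Main()
{
    _appInsights.InstrumentationKey = ConfigurationManager.AppSettings["appInsights"];
    _appInsights.TrackEvent($"{ConfigurationManager.AppSettings["appName"]} Is Starting");

    // Invoke the function once via reflection instead of calling RunAndBlock()
    var host = new JobHost();
    host.Call(typeof(Functions).GetMethod("ProcessDates"));

    // Flush so queued telemetry isn't lost when the process exits
    _appInsights.Flush();
}
```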
Step 10: Run the Program!
It’s missing from my code picture above (sorry!), but if you put Console.WriteLine() statements before each _appInsights.TrackTrace() and _appInsights.TrackException() call, you should end up with console output like this:
However, even without the visual representation of the console output, the telemetry will still be sent to Azure.
Step 11: View in the Application Insights portal
We really are getting towards the end here, I swear. Head back to the Azure portal, and open up the Application Insights resource you created earlier. You might even still have it open from copying the instrumentation key.
Once in the overview, click on the “Search” link in the sidebar:
Then, click on “Time Range”, select “Last 30 Minutes,” then “Update.”
If you get a screen saying there’s no data, don’t panic: there’s always at least a 5 minute delay from telemetry being tracked and it showing up in the search portal.
If you wait a bit, then refresh, (and if longer than 10 minutes has gone by, run the program again) you should see a list of items like this:
Now, that’s pretty cool: from bottom to top (neatly timestamped in local time), we see the trace where the webjob itself starts, when ProcessDates() starts, the responses to the date strings (including the exception), and when ProcessDates() finishes.
Step 12: Beef Up the Telemetry
But…it’s not as helpful as it could be. What is the string that caused the exception, for instance? How do we know if the input is generating the correct output? A quick refactoring can yield lots of benefits in the clarity department:
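One way that refactoring might look, bracketing both the input and the result, and wrapping the caught exception so the telemetry carries the offending input:

```csharp
try
{
    var parsed = ParseDate(dateString);
    // Bracket input and output so each is easy to pick out of the trace
    _telemetry.TrackTrace($"Input [{dateString}] parsed as [{parsed:yyyy-MM-dd}]");
}
catch (FormatException ex)
{
    // Wrap the original exception: the outer message names the bad input,
    // the inner exception preserves the full original details
    _telemetry.TrackException(
        new FormatException($"Could not parse input [{dateString}]", ex));
}
```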
You can add as much info to traces as you need to provide useful context. A result is great to know, but unless you can quickly determine which input is tied to which output, you can’t diagnose a thing. I like to bracket variables, just because I hate having to think about which part of a trace or exception message is the one I need to worry about.
Just like with any try-catch, you can wrap the caught exception in a custom one that gives you application-specific details. Here, I’m tracking a new exception that provides the input in the outer exception message, and the entire caught exception for the inner exception.
My advice is to err on the side of over-informed traces and exceptions because you can always remove info–but you can’t retroactively grab it after a webjob run.
This code gives us (what I feel is) much more useful telemetry, from a “What happened on that last database update” perspective.
That’s It, That’s the End
To recap, we:
- spun up a webjob
- created an Application Insights resource
- configured the webjob to use Application Insights via NuGet packages
- wrote a little toy program, tracking custom events, traces, and exceptions as the webjob runs
- explored how to get the telemetry output from that program
- made our traces and exception tracking more robust
Hopefully this saves someone time and hassle in implementing App Insights down the road!