I recently started spending a likely-unhealthy amount of time watching TikTok videos. This is only of interest (possibly) to blog readers because it led to undertaking this little piece of video art.
That’s right. Over the course of a month or so, I dedicated free time to test driving a non-trivial implementation of FizzBuzz. It involved not just FizzBuzz, but a SongPlayer, a song structure, and a console app to display the process. I also spent way too much time figuring out how long Fizz and Buzz and FizzBuzz should be played, in order to best sync to the music.
If you’re at all interested, I have the whole thing in a GitHub repo. This should at least explain why I haven’t been continuing the battleship game walking skeleton…
There are a lot of CI/CD systems out there. If you’re in .NET land, Azure DevOps has some pretty excellent tooling. Jenkins is another with extremely robust capabilities. There are oodles more I’ve never even brushed up against, including rolling your own on a virtual machine somewhere in the cloud (or even on your own machine).
But wait. What the heck is a CI/CD pipeline? Why do you need one? If you’re asking either of those questions, please take a look at this recent post of mine.
I landed on Travis-CI, for a couple of reasons:
- It integrates beautifully with GitHub (which is my repo host of choice)
- It’s platform agnostic (works equally well for a variety of tech stacks)
Step One is to get yourself a Travis account. This is absurdly, almost suspiciously easy: you sign up with your GitHub account.
This means you may need to allow Travis as an authorized connection or app in your GitHub account, because Travis will be accessing your repo in order to detect changes and to clone the repo so it can build the application.
You’ll end up at your dashboard, which eventually will display your most recent build and a quick link to your repositories–for now, you shouldn’t see anything.
Step Two is to link a repository, which means you need to access your profile settings. Click your avatar in the top right of the screen, then the “Settings” link.
Step Three is to find and switch on your repository so Travis knows to check it for builds. Your public repos are all listed under the “Repositories” tab, which should be the default view (private repos require a paid subscription with travis-ci.com, currently).
It can take several minutes for your repositories to show up in Travis the first time. You may have to click the “sync account” button. It also may take refreshing the browser window itself–I once waited almost ten minutes to see a repo list and as soon as I refreshed the browser it all showed up.
Find the one you want to activate, and toggle it on.
Now Travis is going to be scanning that repo through GitHub APIs and looking for a .travis.yml file to get build instructions. Which means…
Step Four is to create a .travis.yml file in the root of your repository. The dot-travis file is, as you might guess from the dot starting the name, a configuration file used by the Travis-CI build system to know what language you’re using, what build environment to use, what commands to run, and any deployment steps. It’s sort of a big deal, and if there’s no .travis.yml file found in the code pushed to GitHub, Travis simply won’t do anything.
Again, you want to create this empty file in the root of your repository–not necessarily the root of your code base. Find the directory that contains the .git folder, and save your .yml file right alongside it (not inside the .git folder itself). I typically do this step in VS Code or Notepad++, depending on what OS I’m on.
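To make the placement concrete, here is what that looks like from the command line. The path below is made up purely for illustration; substitute wherever you actually cloned your repo:

```shell
# Hypothetical directory standing in for your cloned repo.
# A real clone already contains the .git folder; we fake one here.
mkdir -p /tmp/BattleshipDemo/.git
cd /tmp/BattleshipDemo
touch .travis.yml   # the config file sits NEXT TO .git, not inside it
ls -a               # both .git and .travis.yml should be listed together
```

The point is just that .travis.yml and the .git folder are siblings in the same directory.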
Step Five is to define the build configuration. This will vary significantly depending on which language you’re developing in and what flavor of that tech stack you prefer, and the folks at Travis do a really great job documenting what goes in the .travis file.
Here is the .travis.yml file I will use for this project, including some helpful (to me, anyway) comments about what each line means:
```yaml
language: csharp  # Travis's language key for .NET projects, .NET Core included
mono: none        # Mono is used to build .NET on Linux--we don't need it with Core
sudo: required    # Needed to run commands in the Xenial CLI
dist: xenial      # The version of Ubuntu to run the Travis virtual machine -- needed for .NET Core
dotnet: 2.2       # Your SDK version, not your run-time version
script: # These commands are executed in the Travis VM just like you would on your local machine
  # Use a "cd" command to move the Travis command line prompt into your solution directory
  # - cd /home/travis/build/<yourTravisAccount>/<yourRepoRootDirectory>/.../<yourSolutionDirectory>/
  - cd ./BattleshipTDD/
  # Use these to build the project without tests
  # - dotnet restore
  # - dotnet build
  # If your project has tests, you can skip "restore" and "build" and just use the "test" command -- "test" will run the other two automatically
  - dotnet test
```
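For what it’s worth, the same file can carry a bit more configuration as your project grows. Here is a sketch of two common additions–branch filtering and multiple explicit test steps. The project paths are hypothetical, not from this post’s repo:

```yaml
# Optional extras in the same .travis.yml:
branches:
  only:
    - master            # only build pushes to master, not every feature branch
script:
  - cd ./BattleshipTDD/
  - dotnet test ./Battleship.Tests/       # hypothetical path: run each test project explicitly
  - dotnet test ./Battleship.Web.Tests/   # a second (hypothetical) test project
```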
Step Six is to commit the new file and push it up to your repo, where Travis should catch the change and trigger a build.
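If you’re a command-line git user, that step looks something like this. I’ve sketched it in a throwaway repo so the commands are self-contained; in your real clone you’d skip the setup lines and the commit message is just an example:

```shell
# Setup: a throwaway repo standing in for your real clone.
cd "$(mktemp -d)"
git init -q .
git config user.email "you@example.com" && git config user.name "You"

# The actual step: commit the new config file.
touch .travis.yml
git add .travis.yml
git commit -q -m "Add Travis CI build configuration"
# git push origin master   # in your real repo, this push is what triggers Travis
git log --oneline           # confirm the commit landed
```

The push is the important part–Travis only sees what lands on GitHub.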
Step Seven is to review the build in Travis. Even before I was done prepping the terminal photo above for the post, I got an e-mail from Travis saying my build passed–hooray!
When we look at the dashboard, we get some key items right away. The repository name and a build badge are at the top, and then the specific build info: what branch was built, the commit message, the commit ID, the build number, and how long it ran are all front and center.
If we keep scrolling down, you can see the actual build logs, starting with info on the build environment–handy for debugging problems, like when your build is fine locally, but fails when Travis tries it.
For instance, if you try to run a .NET Core app on the default Travis Linux image, it won’t work–hence needing to specify Xenial to make sure we were on a version of Ubuntu that supports .NET Core. Being able to compare the build environment with the suggestion in a help document was key to figuring this out.
Scrolling down further, we get to the actual build process. Each command you specify in the .travis.yml is listed out separately–you can see on line 246 that even the “cd ./BattleshipTDD/” got its own result output (and yes, I have seen this fail, especially on complicated solution structures where I was missing a directory level in the path).
The test output is particularly helpful–my one test passes here, but should it fail it outputs the exception message just like any other test runner, letting you know what failed and why.
At the very bottom, the build status gets reported–if any of the commands exited with anything except a 0 code, the entire build is marked as a failure.
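That exit-code convention is easy to see with plain shell commands. Here is a minimal demonstration (not Travis-specific, just the same rule every CI system leans on):

```shell
# Any command's exit code is available in $? immediately afterward.
true
echo "passing step exited with: $?"    # prints 0 -- Travis would keep going

cd ./DefinitelyNotARealDirectory12345 2>/dev/null
echo "failing step exited with: $?"    # prints a non-zero code -- Travis would mark the build failed
```

This is also why a bad `cd` path in your script section fails the whole build: the shell reports a non-zero exit code, and Travis stops there.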
Some examples of unhappy build logs…
Failing tests give you the output of your testing framework–no guessing what happened
Wrapping It All Up:
What we’ve done here is link your GitHub repo to Travis-CI, which will build the project and run the tests against it whenever changes are pushed to master. This by itself may not provide a lot of value (I mean, if you’re not running your tests before pushing code, we need to talk) but it does lay an important foundation.
From here, we’re able to automate deployment, headless browser testing, dockerizing, packaging, updating a badge on your repo to tell the world your code is sound, and more. And it happens without having to remember it–push the code, and Travis checks it. It’s one heck of a safety net, if nothing else. It lets you focus more on the code than on the boring devopsy* stuff.
*I mean, I don’t exactly think devops is boring stuff, but no shame if you do–most programmers get into code to write code, not manage deployments and QA etc etc etc.
I’m assuming that you know the basics on how to create a code base in your language of choice, in your IDE or text editor of choice. This series is aimed at folks who can stand up a console and/or web app on their own, with really high-level examples in C#. The goal of the series is to help you start leveraging CI/CD automation, not learn how to set up your dev environment.
I’m also assuming you’re working with GitHub. There are plenty of other ways to do git, and you may be able to use your service of choice in some places–but that’s on you; my guide will be based on GitHub. If you want a rundown on this before we get rolling, this is a pretty solid article.
Okay, so why skeleton code? What does that even mean? Let’s get into it.
If your goal is to write some code as a kata, and you don’t really intend to deploy it anywhere or configure a pipeline, you don’t really need to keep things simple. You can go hog wild, and test drive as much as you want as fast as you’re able.
But if, as this whole exercise states, the goal is to set up a project that can continuously integrate and deploy by automated means, the absolute last thing you want to do is put together a bunch of code with moving parts. The more moving parts to your code, the more things that can go wrong. Add in all the moving parts to configuring a CI/CD pipeline, and you’ll very soon wish you had fewer things to troubleshoot.
Okay, How Simple?
For the “business logic” you should have one small passing test. How that is architected is up to you–I’m doing this in C# and you can eyeball the repo for yourself. The standard convention is a class library project and a unit test project, with one test class for each class in the library project.
You’ll also want a super bare-bones web app. Some languages/frameworks (like ASP.NET Core) build quite a bit into their “default” templates, so chances are pretty good you can get a basic “This is my webpage” type app without having to write any code at all. In my case, I changed the HTML on the index page just to amuse myself.
And that’s it. The webpage should appear when you run it locally, and the tests should pass when you run them. Otherwise, you don’t actually want your code to do anything yet. The less that can go wrong with your code, the easier it will be to set up the pipeline.
Once you have that code set up, with tests passing and a webpage viewable, go ahead and push it up to your repo. Then you’ll be ready for the next step, linking your repo to Travis-CI.
Then I get a few weeks into my internship, and there’s this new idea: a webjob. Essentially, a console app that runs in the cloud to handle background tasks. They’re awesome…until they start doing weird things while deployed. Azure gives you a couple hundred lines of debug console for free, but that’s just not a sustainable solution. This was the point at which my boss said some vague words about “App Insights” and pointed at an example the other developer had worked out a while back (modified from a guide neither of us was able to locate).
It took me quite a while to figure out how to even configure Application Insights in the webjob, and even longer to really zero in on how to use it intelligently. So I wanted to compile some of that hard-earned knowledge into a very long walk-through. We’ll put together a toy app inside the webjob template, and configure it to run and send telemetry to an Application Insights resource.