Self-Organizing Teams Are Teams of Leaders

I am a huge fan of a self-organizing team. I picked up this quirk long before I ever wrote a line of code — we’re talking at 17, working the lunch rush at a Wendy’s. The organized chaos of the front counter is a thrilling thing to experience; two registers constantly loading up two orders each, a sandwich maker and grill cook working a choreographed dance to keep fresh beef and chicken flowing to warm buns, runners filling drinks as they call out for a Biggie fry. And…no manager.

The manager was there, obviously. Or more accurately, they were nearby. Observing the flow, looking for disruptions. Pulling the angry customer aside. Bringing up a sleeve of cups to make sure no one had to leave their post. Wendy’s went so far as to rename the manager’s position to “operations leader” in order to rewire the company’s thinking. The manager didn’t direct the action, and they weren’t a star player on the field — they were a coach, a strategist, a scout. They stayed on the sidelines, let the crew do their job, and jumped in to deal with problems as they arose.

Sure, if a register operator wasn’t asking anyone to “Biggie-size” their combo, a manager overhearing this would remind them of procedure. But they weren’t sitting in the middle of the kitchen, directing everyone’s movement. Each member of that team understood their job and just did it. They would react to the feedback provided by the manager — “We have another 10 cars in the drive-thru line” or “The void percent is getting high.”

Non-Manager Leadership

Looking back, the lunch crew wasn’t just a collection of A+ professionals executing as some amazing hive-mind. When I started as a runner, I theoretically knew everything I needed to do–how to fill the cups with ice, what the codes on the screen meant, where to stage trays, how to pack a to-go bag. But Jackie had been doing this job, with and without runners, for five years. She had a mind-boggling wealth of experience, and fortunately was willing to share it.

“Don’t wait to get the chili on this order — it’s the only thing you need till the sandwich comes up, and you can use that time to make the drinks for this couple on the next order.”

“Okay, see those four construction workers that just walked in? Just call out the four Biggie fries now so Juan knows he needs to drop more.”

“Didn’t you see we just refilled that iced tea? It’s too hot–see if drive thru will get you one so it won’t water down as much.”

The same thing happened with everyone else — sandwich makers who would tell me how I could help them go faster, or suggest things I could do to finish the order while they were running behind. The folks on the fry station who would direct me to scoop my own fries while they dropped more into the fryer. A hundred or more little suggestions or directions in the course of a two-and-a-half-hour rush shift. Almost none of them coming from a manager.

Gradually, I began to be the person giving those directions. “Just set up a new tray every time, don’t bother waiting till they tell you it’s to-go or not.” “Hey Juan, I see about 8 people heading inside!” “Janet, the group that loves grilled chicken is here, do we have enough up?”

Moving the manager to a more strategic role didn’t diminish the leadership presence; it increased localized leadership. People who understood the immediate needs spurred their peers in the right direction, helped to avoid mistakes, and provided praise for the little-but-crucial things done well that would never warrant attention from management with their eyes on the big picture.

The Software Tie-In

There’s really not much difference between a software team and a fast food team. In fact, I’d say the primary success metric is the same for both teams: how often can you deliver customized, completed work to the customer?

In-before-derisive-comments-about-“burgerflippers:” All jobs boil down to a series of steps to be followed. Mastery of a role boils down to understanding how those steps can and should be adapted in the face of rare or new situations. Is mastery in running a fry station an easier goal than mastery in writing data layer logic? Based on my experience, it’s impossible to tell — both roles have plenty of people who want to achieve mastery, and both have plenty who just phone it in.

An incorrect order (a burger instead of a chicken sandwich, for instance) is a lot like a bug. And just like a bad software team that treats bugs as part of the process, a bad Wendy’s has adapted to remaking orders after the customer complains. A good Wendy’s (or software) team is horrified every time they make a mistake that reaches the customer — and works together to avoid the mistake in the first place. They aren’t just winging it, assuming “If I’m doing this incorrectly the manager will stop me.” And they aren’t letting their peers do that either.

A self-organizing software team is a group of professionals who are comfortable enough in their own skills and their place on the team to provide leadership when it is appropriate. If Jane sees a security gap in some code Bob worked on, she’s able to constructively point it out and help resolve it. If there’s a pipeline problem creating a bottleneck, no one waits for management to authorize fixing it — Mia just pops open the YAML file and fixes it. If there’s missing test coverage around a critical flow, Linda doesn’t need management’s permission to address it with Tyler before it gets merged. If all of these interactions had to flow through a single point of authority, the company would just outsource the whole team within six months.

So what IS the manager’s role if everyone’s a leader? Let me get into that next time, since I’m willing to bet not many of you read even this far.

A Love Letter to Automated Testing — As a Tool for Quality

I think automated testing, particularly automated unit testing, is a misunderstood creature. Every time I run into a “Do we write tests or do we not?” discussion in the wild, the focus is on the confidence tests give you about the functionality of the code. People in favor of tests generally argue that you gain higher confidence in how well the code functions, and people counter with their existing approaches, like QA, or the fact that their management accepts bugs as part of the process.

For me, that confidence boost isn’t the point; it’s a nice result, a by-product of the real advantage. It’s like scallions on my baked potato, or the sauteed onions on my steak, or the whipped cream on my sundae. Yes, I relish those things and they give me the extra hit of dopamine that makes me feel I have to have them…but I don’t skip the potato because I only have sour cream, and I definitely don’t refuse ice cream because there are no toppings. The garnish is just the obvious benefit to the dish, the same way the “confidence it will work” is the crouton on the testing salad.

Valuable tests are more than logic validators. They enforce sane engineering practices, expose complexity, put a fan behind code smells, and provide documentation that will be up-to-date as long as people are running the test suite. In short, automated tests are a way to avoid poor Quality code.

Tests Help Avoid Poor Quality Code

Quick qualifier here — working code is good code, but we all know Quality code when we’re in it. Yes, I’m stealing and applying concepts from Pirsig again.

Poor Quality code is difficult to test. Sometimes so difficult it’s not worth it, like an abandoned copper mine that still has plenty of ore… but it’d take an investment many times over market price to extract it safely. For me, the ease of writing a test says a lot about the architecture, patterns, and configuration of the codebase. It’s the quickest way to identify the challenges I’m going to face.

  • “Oh. This method is new-ing up a dependency in-line. That’s…that’s going to be hard to mock.”
  • “So part of the constructor on this service class is… calling out to a 3rd party API for part of its configuration?”
  • “Wait. The controller is making a call directly to DatabaseA, so it can use the return to make a call to a service class that talks to DatabaseB?”

All things I’ve run into trying to write what I thought would be quick tests around legacy code, moving roughly from easiest to fix to most difficult. If you never write tests, and the app worked correctly right off the bat, and you never need to change the functionality, none of these things are problems, per se. But when was the last time you had an application go into prod without a problem? How many stakeholders have you met whose requirements are written in stone?
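
That first smell, new-ing up a dependency in-line, is worth a quick illustration. Here’s a minimal sketch of the problem and the usual fix; every name in it is invented for the example:

using System.Collections.Generic;
using System.Linq;

public interface IInvoiceRepository
{
    IEnumerable<decimal> GetLineAmounts(int invoiceId);
}

public class InvoiceRepository : IInvoiceRepository
{
    public IEnumerable<decimal> GetLineAmounts(int invoiceId)
    {
        // imagine a real database call here
        throw new System.NotImplementedException();
    }
}

// Hard to test: the dependency is created in-line, so a test has no
// seam to swap in a fake and ends up hitting the real repository
public class InvoiceService
{
    public decimal GetTotal(int invoiceId)
    {
        var repository = new InvoiceRepository();
        return repository.GetLineAmounts(invoiceId).Sum();
    }
}

// Easier to test: the dependency arrives through the constructor,
// so a test can hand in whatever fake it likes
public class TestableInvoiceService
{
    private readonly IInvoiceRepository _repository;

    public TestableInvoiceService(IInvoiceRepository repository)
    {
        _repository = repository;
    }

    public decimal GetTotal(int invoiceId)
    {
        return _repository.GetLineAmounts(invoiceId).Sum();
    }
}

The second version is a five-minute change when the class is young, and a slog once a dozen callers have grown around the first.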

When I worked at a home improvement retailer, one of my jobs was to cut wood to size for customers. I was taught by a retired carpenter-turned-retailer to measure on the saw where the piece of wood would be once it was cut, then clamp scrap wood there. This gave me a guide to know when the 2×4 was cut correctly without too much conscious effort, and also prevented me from cutting too much.

For me, writing a test is a lot like clamping that scrap into place. The test is a fixed goal to hit, and while you’re coding you know, without question and without thinking about it, whether you’ve hit it or not. If tests are failing (and that includes refusing to compile or build), the code isn’t right. Could I cut 2×4s by measuring each 18-inch length out individually? Yes, but it’s harder than just cutting till I can’t reach the scrap anymore. Could I write that class to cover all the use cases without a test for each one? Probably, but it’s definitely going to be harder.

I write tests because they force me to think about the problem, to break it down into testable pieces, and to figure out how to keep it testable. I’m not trying to combine the “what” and “how” in the same thought. Tests also force me to implement that code in a way that is testable — and testable code is (typically) easy to change, easy to diagnose, easy to plug into different use cases.

Documentation

The beauty of arrange/act/assert, especially in an xUnit-style test with a minimum of shared setup, is that each test is explicit about how the system under test behaves under different conditions. If you’re confirming that a specific result happens based on configuration, you have to put that value in the test. If the data context has to return a specific value, you have to specify it in the test, where it’s visible to anyone.

Which means that a year from now, when you need to update a switch statement, you already know what all the values correspond to, without looking up the requirements doc from two projects ago. You won’t need to spend as much time explaining the code to someone — the tests lay it all out, in all its variability. The dependencies are documented, and the expected behavior of those dependencies is laid out.
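
Here’s a hypothetical example of the pattern, using xUnit and Moq (every name is invented for illustration):

using Moq;
using Xunit;

public interface ICustomerLookup
{
    bool IsPreferred(int customerId);
}

public class DiscountCalculator
{
    private readonly ICustomerLookup _lookup;

    public DiscountCalculator(ICustomerLookup lookup)
    {
        _lookup = lookup;
    }

    public decimal GetDiscount(int customerId)
    {
        if (_lookup.IsPreferred(customerId))
        {
            return 0.10m;
        }

        return 0m;
    }
}

public class DiscountCalculatorTests
{
    [Fact]
    public void GetDiscount_ReturnsTenPercent_WhenCustomerIsPreferred()
    {
        // Arrange: every value that matters is visible right here
        var lookup = new Mock<ICustomerLookup>();
        lookup.Setup(x => x.IsPreferred(42)).Returns(true);
        var calculator = new DiscountCalculator(lookup.Object);

        // Act
        var discount = calculator.GetDiscount(42);

        // Assert: the expected behavior doubles as documentation
        Assert.Equal(0.10m, discount);
    }
}

Anyone reading that test a year from now knows the preferred-customer discount is ten percent, no requirements doc required.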

Recently (and this has happened more than a few times) I went to the tests as the first stop in a bug squash. I quickly realized that none of the tests covered the scenario where the main dependency throws a null reference exception; as a result, the code was just logging the exception and returning an inappropriate value. I was able to write a test that replicated the situation, and then put a bug fix in without ever actually debugging the app.
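
The regression test came out looking roughly like this; it’s a fragment with made-up names, assuming the same xUnit/Moq setup as the earlier example:

[Fact]
public void GetReport_ReturnsEmptyReport_WhenDataSourceThrows()
{
    // Arrange: force the dependency to throw, just like it did in the wild
    var dataSource = new Mock<IReportDataSource>();
    dataSource.Setup(x => x.Fetch()).Throws<NullReferenceException>();
    var service = new ReportService(dataSource.Object);

    // Act
    var report = service.GetReport();

    // Assert: the fixed code returns a safe, empty report
    // instead of logging and handing back a bogus value
    Assert.Empty(report.Rows);
}

Red test, fix the code, green test; no debugger ever got attached.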

Having up-to-date tests is like having up-to-date documentation. You don’t need to debug the application to figure out what it’s doing under the hood, you already know by following the story told by the tests.

And When Coupled With Test Driving…

So all of that above is primarily based on experiences I’ve had trying to wrap legacy code, and it boils down to “Tests help me understand the code so I can improve it safely.”

But…what if you were able to avoid the whole “this needs to be redesigned before we can add the feature” part? What if I told you there was a way to build that same quality into your code, right from the start?

This is the obligatory TDD plug. I don’t want to harp on it — I love TDD, and even I can’t stand most of the TDD missionaries out there — but again, I view the tests the same way I view configuring the IDE, using git aliases, customizing my PowerShell profile, or using Resharper. It’s a tool that allows me to work with the code in a way that drives Quality. Writing a test for an empty service class keeps the problem I’m trying to solve very small. And if I’m trying to solve a small problem by “using the least amount of code to make the test pass,” I’m far less likely to over-engineer the situation. This keeps my code lightweight, flexible, and simple.
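
As a sketch of what that first cycle looks like in practice, here’s a toy example (not from any real project):

using Xunit;

public class InputParserTests
{
    [Fact]
    public void Parse_ReturnsZero_WhenInputIsEmpty()
    {
        var parser = new InputParser();

        Assert.Equal(0, parser.Parse(""));
    }
}

// The least amount of code that makes the test pass; nothing speculative
public class InputParser
{
    public int Parse(string input)
    {
        return 0;
    }
}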

As the problems become more complicated, in the “Only update the database if these 3 conditions are true and also it’s Tuesday” vein, so does my code…but incrementally, and in a way that doesn’t break previous passing tests. I’m already avoiding regressions and we’ve never deployed this code. My code is only as complex as it needs to be (if I stay disciplined), and the fewer moving parts the fewer things that can spawn bugs.
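
Continuing the toy parser from above: the next requirement arrives as a new test, the old test keeps passing, and the production code grows only as much as it must.

[Fact]
public void Parse_ReturnsTheValue_WhenInputIsANumber()
{
    var parser = new InputParser();

    Assert.Equal(42, parser.Parse("42"));
}

// Parse grows just enough to satisfy both tests, and no further
public int Parse(string input)
{
    if (string.IsNullOrEmpty(input))
    {
        return 0;
    }

    return int.Parse(input);
}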

Tests, whether before or after writing your prod code, are going to drive Quality. I just prefer to be efficient and find out I’m making a mess before I commit any changes.

Wrapping It All Up

I’ve worked in shops with no automated tests. I’ve worked in TDD shops. I’ve worked in shops that half-assed testing. I’ve learned you do not need tests to write and change working code, but that tests make the job infinitely easier. And when delivering software isn’t an absolute struggle, I write far better code.

Code that can’t be tested without a lot of work is smelly. Writing tests in that case is like opening the refrigerator door — without opening that door you never smell the fact that last week’s leftovers are ready for the trash. Tests, if nothing else, tell the story of how your code is supposed to function — far better than writing a README or walking someone through the entire application.

These two items are the things I’ve come to appreciate about automated tests far more than the pat “I know the code works because tests.”

How I Stopped Worrying and Learned to Love the Bang

I recently stopped cold while writing a ReactJS component and experienced a very brief (albeit intense) existential crisis. Without giving it excessive thought, I had just used not only a ternary expression to determine what the component would render, but also a bang in my logical test.

First: My Beef With Ternaries

I learned early in my code journey about the value of legible code, of chasing the paradigm of self-documenting code, of explicitly indicating one’s intentions with a minimum of mental friction. This is largely why I resist writing ternary operations wherever possible, thinking that an item like this

var result = input > BASELINE ? "Input is above baseline" : input < BASELINE ? "Input is below baseline" : "Input matches baseline";

is a difficult-to-read unclear mess; whereas

var result = string.Empty;

if (input > BASELINE)
{
    result = "Input is above baseline";
}
else if (input < BASELINE)
{
    result = "Input is below baseline";
}
else
{
    result = "Input matches baseline";
}

puts practically zero friction on the eyeballs and takes nearly zero effort to understand.

I can understand the desire to write as few lines of code as possible, for a lot of reasons we don’t have space to get into just now. But I don’t feel brevity is a good enough reason to make something more difficult to read. Many pair partners can attest I have a tendency to start with a classic IF-ELSE block and resist refactoring it into something more brief.

Enter the “Bang”

I might have a strong distaste for ternaries, but I absolutely despise using bangs. The fact I feel compelled to take a moment and define what the hell it is for the less technical reader seems to summarize my issue with it–it’s an overly complex tool.

A “bang” is simply an exclamation mark (“!”). In most (though not all) programming languages, we use it to mean “take the opposite of this bool (true/false) value.”

For instance, if I have a bool variable “isDarkMode” that equals TRUE, then “!isDarkMode” means FALSE.

Nice and confusing, yes? Thanks; I hate it. To be fair, I have seen cases where using a bang is the most elegant (or least messy) way to get what we want…but more often, I’ve found it to be a sign of lazy naming, poor coordination across application layers, or someone trying to be super clever. It just smells.

// Dark Mode is enabled when the UI element isn't toggled because it's wired backwards
var isDarkMode = !uiToggle.IsToggled;

if (!isDarkMode)
{
    // set everything to light mode styles
}
else
{
    // set everything to dark mode styles
}

You might think this is contrived, but I’ve stared at similar blocks of production code more than twice, wondering “Why does this make my skin crawl?” A couple of quick refactors (like inverting the IF and ELSE to get rid of the need for the bang, or renaming the variable so we can use uiToggle the way it was written) will restore sanity…but suffice it to say, when I see a bang I say “Oh dear.”
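
As a sketch, the renaming route might look like this; same behavior, no double negative:

// Name the variable for what the toggle actually reports,
// and the bang disappears along with the backwards comment
var isLightMode = uiToggle.IsToggled;

if (isLightMode)
{
    // set everything to light mode styles
}
else
{
    // set everything to dark mode styles
}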

You May Ask, “What Changed?” I’ll Tell You…

I essentially learned to code in C#, and then spent the first years of my career primarily in the back end of .NET Framework applications. I relish the sense of order that comes with strongly typed objects backed by interfaces, knowing I can extend these stable items whenever I need new behavior (and that they’ll behave the same way over and over, or throw an exception trying).

More recently I’ve been adding functionality to a React web app (as you may have gathered from my opening paragraph). I’ll own it–JavaScript in general, and ReactJS in particular, were technologies that baffled me as I tried to become fluent in C#. The differences between a strongly typed language and the chaos of JS were bad enough–but React leverages the extreme flexibility of JS in ways I couldn’t comprehend.

For instance, there’s the fact that when using a child component, you can pass in one, some, all, or none of the properties that child component defines or uses, without expressly providing overloads for the component. Try that in C# and see how fast your app blows up.

// Renders an empty view in light mode
<ChildComponent />

// Displays the list data in light mode, but does not render edit options because no user
<ChildComponent
  dataList={list}
  />

// Displays the list data with all options, in dark mode
<ChildComponent
  dataList={list}
  userId={user.id}
  darkMode
  />

The magic, and my existential crisis, happen inside the child component’s render() function, thanks to how JavaScript handles resolving objects.

render() {
  const { dataList, userId, darkMode } = this.props;

  if (darkMode) {
    setDarkModeClassNames();
  }

  // (the render helpers below are stand-ins for the real markup)
  return (
    <React.Fragment>
      { !dataList
        ? this.renderEmptyDisplay()
        : this.renderList(dataList)
      }

      { userId &&
        this.renderEditOptions(userId)
      }
    </React.Fragment>
  );
}

Because neither JS nor React will balk if any of those three properties are unassigned, we can (and should) take different actions depending on whether there’s actually an object attached to that variable. React makes this easy enough with its conditional in-line rendering.

What we see here inside the logical operators is shorthand for “Is this defined?” and, when we apply the bang, “Is this NOT defined?” It’s checking both that the variable has been defined, and that it has been defined in a usable way (i.e., dataList isn’t null). We could do this manually and explicitly…but that’s not good, idiomatic, or smart JavaScript. It’d be like writing one’s own “string.IsNullOrEmpty()” method instead of using the built-in C# method.

You’ll see I open my first ternary with a bang (aren’t you glad you’re not paying for this content??). I did this because I felt, in this case, it IS the most explicit approach. I wanted to indicate what the next person in the file should pay the most attention to–the component cares if the dataList is missing. We could invert this ternary and get the same result…but I wanted to emphasize “The weird, early-return-style behavior happens when there’s no list data.”

And yes. I’m using a ternary here, two if we count the neat short circuit with the userId. For one thing, I can break it up over several lines for clarity. For another, it would be significantly more work to replicate this behavior in a different pattern. It makes sense, and (based on the existing examples in the code base plus all the reading I did) it’s idiomatic React.

TL;DR…

I had a major polyglot growth moment. I embraced the features of a language that just wouldn’t fly in my “home” tech stack, and I added a whole mess of tools to my kit to use and reference going forward. What is nonsense in a statically typed language is totally fine in a dynamically typed one, and that’s okay.

CI/CD — A Quick Aside

So what is CI/CD, what does it do for you, and do you need it?

I did a quick search on the interwebz before deciding to write this and what I found at a glance fell into 2 categories:

  • People selling a service
  • People hyping a trend

Since my target audience is, largely, a solo dev who doesn’t really have the context needed to parse either of those kinds of posts, I wanted to break it down a little more succinctly.

But First, Some Vocab:

CI (continuous integration):

What it Is: The act of integrating the code being worked on into the main code base on an on-going basis, rather than waiting until a feature is complete and you’re ready to release it. Typically, merging code or pushing local changes triggers a service (a CI system) that will pull the new code, build it, run unit tests, and let you know if you managed to break your application with the latest changes. Code repos like GitHub can be configured to block pull requests that have a failing CI system build. Often, the CI system will archive the built application so it can be used later in deployment.

Problem it Solves: At root, it helps you limit the damage you can do to your code at one time. Rather than checking out a feature branch, working on it for weeks, then trying to merge it back into master and finding out your feature code is incompatible with the master branch (231 merge conflicts, or failed tests), you can catch issues almost immediately–when they’re much easier to fix. With multiple people in a code base this gets even more important: everyone gets aligned far more often, which reduces the amount of effort needed to keep things moving in the same direction.
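
To make this concrete: with a hosted option like Travis CI and a .NET Core project, the whole setup is one small file in the repo root. This is a rough sketch, and the exact keys and versions are assumptions to verify against the Travis docs for your stack:

# .travis.yml: a minimal sketch for a .NET Core solution
language: csharp
mono: none
dotnet: 3.1.402      # SDK version; use whatever your project targets
script:
  - dotnet restore   # pull down NuGet dependencies
  - dotnet build     # fail fast if the code won't compile
  - dotnet test      # fail the build if any unit test fails

Push a commit, and a few minutes later you know whether the whole solution still builds and passes its tests on a clean machine.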

CD (continuous delivery/deployment):

Is it Delivery or Deployment? It depends partly on who you’re talking to (I personally don’t view them as different things and will use them interchangeably) and what the context is. If my code is going to some other department that will do the rest of the dev-opsy stuff, it makes more sense to call it continuous delivery–you’re delivering it to someone. But, if your team is responsible for getting it to the users via a webservice or some other process, then it’s more of a deployment of the software. Context is the key, friends.

What it Is: Either way, this is the process of continually putting finished software where it needs to go. This could be a staging environment, a testing environment, or directly to the users if you’re a company like Facebook (or you’re just trying to show off to family, like me). A common approach is for the CD system to wait for the CI process to complete successfully; once it knows the code is good, it either takes the output of the CI process (generally referred to as a “build artifact”) and deploys it based on the configured settings, or it creates a whole new artifact for the deployment.

Problem it Solves: In the not-so-distant past, once software shipped it was done, ready or not. Think about all the games you’ve played with crazy bugs and glitches in them that you just don’t see in modern games–games these days can get patched without inconveniencing player or developer. That’s continuous deployment at work. Finish some code, ship some code. It also lets you put a basic application in front of someone, say a client, to get instant feedback–and then just keep adding functionality.

Is It For You?

So if you’re new enough to all this, is CI/CD something worth setting up? I mean, it depends…but my short answer is yes.

It’s not as arduous to set up as it may sound, especially in guides put together by people trying to sell you things. If you do it as one of the first steps in your process, it’s fairly easy to get done–and it really is a velocity accelerator. You won’t get caught by surprise trying to merge in new features (“When did THOSE tests start failing?”), and you won’t have to fumble around moving files just to make sure things work or to show off a weekend of work.

Of course, there’s also the fact that these are the kinds of skills that separate someone who can write code from someone who can deliver value to an employer or client. While writing software is a challenging and rewarding pastime, the best program in the world isn’t really any good until someone can put it in front of users.

The Almighty Card Board

If you’re a professional developer and your team isn’t using a ticket system, you’re probably familiar with card boards. You may even be a Trello wizard–I personally have spent more time in Microsoft’s Team Foundation Server (now called Azure DevOps), but it’s the same idea: a way to track what needs to be done and how long it’s taking.

Since this is a (relatively) simple one-off project, I just threw together a Trello board. Here’s the 800-mile view:

[Image: a Trello card board with Backlog, Current Effort, In Progress, and Done lists]

My backlog is the total sum of what I want to get done/cover in my post series. I have code items (labeled in orange), infrastructure items (labeled in green), and posts to write (labeled in yellow). I have them roughly grouped in order by things I want to do for a specific post.

Current Effort is what might be called a sprint–it’s the group of cards I think are achievable in the next week or two. To continue my trip analogy, this is just the next leg of the trip–maybe how much ground I want to cover before stopping for dinner. It would be the point where I would feel comfortable putting in a pull request; a point where, if need be, I could say “I have to put this aside for three months” and still be able to pick it up as a complete module of the larger project. The acceptance criterion for each “effort” is to have one or more blog posts scheduled.

In Prog is just that–in progress. Stuff that I’m still working on, since I anticipate doing little bits throughout the week.

Done should be another self-explanatory item. It’s all cards I’ve completed–not just during one effort, but over the course of the whole project.


My first step was to fill the backlog with everything I needed. For instance, I knew I needed a simple code library to provide some sort of functionality. I also knew there’d need to be some sort of web project that can display that functionality…but I wouldn’t need to link them together yet in order to write a blog post about them, or to get the solution to build in a CI system like Travis.

I decided my first effort would look like this:

  1. Write about this plan, and my high-level thoughts on planning solo projects
  2. Create the Trello board to map the whole thing
  3. Write about the Trello board and my planning process
  4. Set up a simple code library I could test in a CI system
  5. Set up a simple web project just to have it ready
  6. Write about the code skeleton

My second effort will focus on:

  1. Set up Travis-CI to build and test my solution
  2. Write about setting up Travis
  3. Write some tests against the web project’s UI
  4. Get them to run in Travis
  5. Write a post about UI tests
  6. Wire the web project to the code library
  7. Update UI tests to reflect the dynamic logic
  8. Write about wiring up the two

After that, I’m not sure exactly. Logically, deploying the program is the next step, but I’m not going to think that far ahead. Sticking with the road trip mentality, I know where I’m going tonight, I know roughly where I’m going to be the night after that…anything further isn’t super useful to plot in detail. I just don’t know what will happen.

Honestly, I’m already second guessing how big that second effort looks. I’m a big fan of bite size pieces. But, since I don’t have a better idea and all of that is grouped logically together, I’m just going to roll with it until a better idea presents itself.

I’ll repeat this process till I get all the backlog cleared. If something needs to get added to the backlog, or needs to go back to the backlog, I’ll do it. If I learn after the next effort that 3 posts is way too much to try and fit into a week, I’ll trim the next one to account for it.

It feels pretty Agile to me, but as a millennial I hate to put labels on things, y’know…
