Self-Organizing Teams Need a Gardener

The position of managing a self-organized team is a tricky one, and not many individuals promoted to management are suited for it. The traditional management candidate earns their stripes by being good at the technical task they are supposed to supervise, which does make a certain amount of sense. But the catch is a self-organizing team isn’t looking for a technical expert to tell them what to do; they’re looking for someone to block problems, provide insight, provide air cover, and maintain balance in the garden.

The manager on a team where there’s technically no need for one does what good managers on traditional teams have done for decades: they provide a vision of where to go, provide insight on how the journey is progressing, and remove problems as they appear in the path.

Teams as a Garden

The ol’ command-and-control model of teams and leadership is built on the idea that any butt in a seat, combined with enough other butts in other seats, can accomplish a task well. This thinking somehow survived into the 21st century, and is somehow getting a revival of sorts as the “return to the office” debate heats up. (It’s almost like a generation of decision makers has refused to either adapt with the times or get out of the way, but that will eventually resolve itself).

These days, it very much matters who is in what role, and how they interact with their other teammates. The GM of the Wendy’s from my last post described herself as a gardener: “I have to decide who to plant, where in the garden they’ll thrive, and do a little weeding and pruning to make sure everyone gets enough sunlight.” Jim Collins in Good To Great talked about getting the right people on the bus, and then finding the right seats for those people.

A team within a larger organization that holds the budget cannot be truly self-managing. The next best thing is a manager who views the team as a garden to be tended.

Maintaining Perspective

The manager’s role in a self-organized environment is largely about pointing things out.

  • “Was everyone aware there was a velocity drop last sprint?”
  • “The client had some specific notes from last demo…”
  • “Maybe this isn’t a fair conclusion, but it seems like pull requests are taking a long time to get reviewed.”
  • “I’m seeing a lot of work in progress that doesn’t seem to align with our sprint goals…”

When I was being trained to manage a Wendy’s in the mid-2000s, the position assigned to the manager in charge was called the “Operations Leader.” (I have no idea if this approach survived the buyout that washed away so much of the Wendy’s Dave Thomas built). This was a change from the old way, where a manager worked in a position like sandwiches or fries and called the shots from there. It was like shifting from the player-manager arrangement baseball had early in the 20th century, to the modern “managing is a full time job” approach.

Not being tied to a position, being constantly on the move observing the restaurant operation, allowed the manager to see things like a slow moving front line. Or an over-full trash can. It allowed them to run to the back for more $1’s without hurting the flow of the team. I was taught that if I needed to step into a position, I needed to know how I was getting out of it — “I’m covering fries till Juan gets back from his break” or “I’m going to step in and help fill the grill with meat.”

The manager on a software team likewise isn’t heads-down writing code; they’re looking around, observing. They’re connecting with the client or the stakeholder. They’re digging into metrics. This gives them a wider perspective and more detailed context than most of the team who are writing code or similar tasks. From time to time, they dive in to pair with someone, or to take a quick look at some tricky problem. They’re uniquely positioned to reflect the performance of the team back to the team.

Maintaining Momentum

In a perfect world, this reflection should be enough to get the team started on either a solution or an explanation (as much as a senior leader might like an action plan when velocity dips, sometimes it can’t be helped). But we’ve all been in that meeting where everyone understands the problem but nothing seems to push the group towards decision mode.

This is where the manager needs to start asking some pointed questions.

  • “So I’m hearing this card isn’t needed — can we remove it?”
  • “Is the root cause of this bug in the data layer or the data itself?”
  • “I have to give George something…based on our velocity trend, can we do this in the current sprint?”
  • “Does this implementation work? Can we merge it now and add enhancements to the backlog?”

As the “ops leader,” keeping customers flowing through the lines is the primary goal. Sometimes a register operator is lingering a little too long chatting with a regular instead of getting the next order. Sometimes the grill operator is a little too focused on aligning the meat in perfect rows. Friendliness and attention to detail are excellent qualities we all want to see in people, but without the benefit of the wider perspective it can be hard to tell if we’re going overboard.

Even the most disciplined software creator will charge off into the weeds from time to time, and very few of them think deadlines or estimates are things intelligent folks deal in. By bringing their experience and perspective on the entire project to bear, and helping the team understand the big picture, a skilled manager can be the difference between success and failure on a project.

The manager is also in a unique position to be the “first follower” of a suggested action, and thus break deadlocks or violent agreement. And if all else fails–and I mean, really fails–sometimes it’s the manager who needs to make the decision.

Maintaining Balance

Building teams out of individuals with different backgrounds and experience levels is challenging. Expecting them to just work together to accomplish common goals with limited direction is an extra layer on top of that. We can’t find a better example of this than the Wendy’s lunch team: high schoolers in an occupational intervention program, immigrants, retirees, college students, and occasionally a full adult training to be an assistant or general manager. One does not simply plop this group onto a restaurant floor and watch them work together in perfect harmony. This garden needed cultivating.


So the management team:

  • Hired people who could play well with others
  • Got to know the team, and learned what different people valued
  • Observed interactions on the line and made note of good and bad matchups
  • Intervened when performance slipped

Rather than thinking about the number of full-time equivalents they needed to hire, the folks at this Wendy’s thought about the skill and personality gaps. They were hiring for someone who could keep up with Janet, for someone who could be patient with Juan’s broken English. They were careful to not upset the harmony of their team by introducing someone who was obviously not going to be able to mesh with the rest.

This didn’t always lead to happily-ever-after, because people are people. So management learned that Janet and Teresa just didn’t get along, and it’d be silly to position them to work together. They discovered that Juan and Ricky apparently could read each others’ minds, and tried not to separate them.

They also took steps to support people who were really struggling, to eliminate an excessive burden on the rest of the team. A manager pairing with someone till they got the hang of things on fries, or giving direct feedback on how to improve, prevented resentment and friction among the team and let them just work.

In a worst case scenario, when we discovered that George had certain prejudices against immigrants he wasn’t willing to rethink…well, this is a capitalist society, and most teams (even self-organizing ones) can’t just eject people. That too fell to the manager to handle.


The manager does the hard work of protecting the team: from outsiders, by handling senior management and carefully vetting new additions, and from itself, by monitoring and nudging members towards better performance. They’re also charged with supporting anyone who is struggling to sync with the team, and with being the bad guy when someone simply won’t.

The manager on a team where there’s technically no need for one does what good managers on traditional teams have done for decades: they provide a vision of where to go, provide insight on how the journey is progressing, and remove problems as they appear in the path.

Self-Organizing Teams Are Teams of Leaders

I am a huge fan of a self-organizing team. I picked up this quirk long before I ever wrote a line of code — we’re talking age 17, working the lunch rush at a Wendy’s. The organized chaos of the front counter is a thrilling thing to experience; two registers constantly loading up 2 orders each, a sandwich maker and grill cook working a choreographed dance to keep fresh beef and chicken flowing to warm buns, runners filling drinks as they call out for a Biggie fry. And…no manager.

The manager was there, obviously. Or more accurately, they were nearby. Observing the flow, looking for disruptions. Pulling the angry customer aside. Bringing up a sleeve of cups to make sure no one had to leave their post. Wendy’s went so far as to rename the position the manager had as the “operations leader” in order to rewire the company’s thinking. The manager didn’t direct the action, they weren’t a star player on the field — they were a coach, a strategist, a scout. You stayed on the sidelines, let the crew do their job, and jumped in to deal with problems as they arose.

Sure, if a register operator wasn’t asking anyone to “Biggie-size” their combo, the manager overhearing this would remind them of procedure. But they weren’t sitting in the middle of the kitchen, directing everyone’s movement. Each member of that team understood their job and just did it. They would react to the feedback provided by the manager — “We have another 10 cars in the drive thru line” or “The void percent is getting high.”

Non-Manager Leadership

Looking back, the lunch crew wasn’t just a collection of A+ professionals executing as some amazing hive-mind. When I started as a runner, I theoretically knew everything I needed to do–how to fill the cups with ice, what the codes on the screen meant, where to stage trays, how to pack a to-go bag. But Jackie had been doing this job, with and without runners, for five years. She had a mind-boggling wealth of experience, and fortunately was willing to share it.

“Don’t wait to get the chili on this order — it’s the only thing you need till the sandwich comes up, and you can use that time to make the drinks for this couple on the next order.”

“Okay, see those four construction workers that just walked in? Just call out the four Biggie fries now so Juan knows he needs to drop more.”

“Didn’t you see we just refilled that iced tea? It’s too hot–see if drive thru will get you one so it won’t water down as much.”

The same thing happened with everyone else — sandwich makers who would tell me how I could help them go faster, or suggest things I could do to finish the order while they were running behind. The folks on the fry station who would direct me to scoop my own fries while they dropped more into the fryer. A hundred or more little suggestions or directions in the course of a two and a half hour rush shift. Almost none of them coming from a manager.

Gradually, I began to be the person giving those directions. “Just set up a new tray every time, don’t bother waiting till they tell you it’s to-go or not.” “Hey Juan, I see about 8 people heading inside!” “Janet, the group that loves grilled chicken is here, do we have enough up?”

There wasn’t a diminished leadership presence because the manager was removed to a more strategic role; there was an increase in localized leadership. People who understood the immediate needs spurred their peers in the right direction, helped to avoid mistakes, and provided praise for the little-but-crucial things done well that would never warrant attention from management with their eyes on the big picture.

The Software Tie-In

There’s really not much difference between a software team and a fast food team. In fact, I’d say the primary success metric is the same for both teams: how often can you deliver customized, completed work to the customer?

In-before-derisive-comments-about-“burger-flippers”: All jobs boil down to a series of steps to be followed. Mastery of a role boils down to understanding how those steps can and should be adapted in the face of rare or new situations. Is mastery in running a fry station an easier goal than mastery in writing data layer logic? Based on my experience, it’s impossible to tell — both roles have plenty of people who want to achieve mastery, and both have plenty who just phone it in.

An incorrect order (a burger instead of a chicken sandwich, for instance) is a lot like a bug. And just like a bad software team that treats bugs as part of the process, a bad Wendy’s has adapted to remaking orders after the customer complains. A good Wendy’s (or software) team is horrified every time they make a mistake that reaches the customer — and works together to avoid the mistake in the first place. They aren’t just winging it, assuming “If I’m doing this incorrectly the manager will stop me.” And they aren’t letting their peers do that either.

A self-organizing software team is a group of professionals who are comfortable enough in their own skills and their place on the team to provide leadership when it is appropriate. If Jane sees a security gap in some code Bob worked on, she’s able to constructively point it out and help resolve it. If there’s a pipeline problem creating a bottleneck, no one waits for management to authorize fixing it — Mia just pops open the YAML file and fixes it. If there’s missing test coverage around a critical flow, Linda doesn’t need to get management’s permission to address it with Tyler before it gets merged. If all of these interactions had to flow through a single point of authority they’d just outsource the whole team within 6 months.

So what IS the manager’s role if everyone’s a leader? Let me get into that next time, since I’m willing to bet not many of you read even this far.

A Love Letter to Automated Testing — As a Tool for Quality

I think automated testing, particularly automated unit testing, is a misunderstood creature. Every time I run into a “Do we write tests or do we not?” discussion in the wild, the focus is around the confidence tests give one about the functionality of the code. People in favor of tests generally argue about how you gain higher confidence about how well the code functions, and people counter that with their existing approaches like QA or the fact that their management accepts bugs as part of the process.

For me, that confidence boost isn’t the point; it’s a nice result, a by-product of the real advantage. It’s like scallions on my baked potato, or the sauteed onions on my steak, or the whipped cream on my sundae. Yes, I relish those things and they give me the extra hit of dopamine that makes me feel I have to have them…but I don’t skip the potato because I only have sour cream, and I definitely don’t refuse ice cream because there’s no toppings. The garnish is just the obvious benefit to the dish, the same way the “Confidence it will work” is the crouton on the testing salad.

Valuable tests are more than logic validators. They enforce sane engineering practices, expose complexity, put a fan behind code smells, and provide documentation that will be up-to-date as long as people are running the test suite. In short, automated tests are a way to avoid poor Quality code.

Tests Help Avoid Poor Quality Code

Quick qualifier here -- working code is good code, but we all know Quality code when we're in it. Yes, I'm stealing and applying concepts from Pirsig again.

Poor Quality code is difficult to test. Sometimes so difficult it’s not worth it, like an abandoned copper mine that still has plenty of ore… but it’d take an investment many times over market price to extract it safely. For me, I can use the ease of writing a test to tell me a lot about the architecture, patterns, and configuration of the codebase. It’s the quickest way to identify challenges I’m going to face.

  • “Oh. This method is new-ing up a dependency in-line. That’s…that’s going to be hard to mock.”
  • “So part of the constructor on this service class is… calling out to a 3rd party API for part of its configuration?”
  • “Wait. The controller is making a call directly to DatabaseA, so it can use the return to make a call to a service class that talks to DatabaseB?”

All things I’ve run into trying to write what I thought would be quick tests around legacy code, moving roughly from easiest to fix to most difficult. If you never write tests, and the app worked correctly right off the bat, and you never need to change the functionality, none of these things are problems, per se. But when was the last time you had an application go into prod without a problem? How many stakeholders have you met whose requirements are written in stone?
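To make that first bullet concrete, here’s a sketch in plain JavaScript (every name here — ApiClient, ReportService, fetchReport — is hypothetical, not from any real codebase): when a class news up its own dependency in-line, a test can’t substitute a fake; when the dependency is injected, a plain stub object will do.

```javascript
// Hard to test: every call to getReport() builds a real ApiClient
// in-line, so a test has no way to swap in a fake.
class HardToTestReportService {
  getReport(id) {
    const client = new ApiClient(); // new-ed up in-line -- hard to mock
    return client.fetchReport(id);
  }
}

// Easier to test: the dependency arrives through the constructor,
// so a test can hand in any stand-in object it likes.
class ReportService {
  constructor(client) {
    this.client = client;
  }
  getReport(id) {
    return this.client.fetchReport(id);
  }
}

// In a test, a plain object replaces the real client entirely.
const stubClient = { fetchReport: (id) => ({ id, title: "stub report" }) };
const service = new ReportService(stubClient);
console.log(service.getReport(7).title); // "stub report"
```

The only structural difference is where the dependency is created, but it’s the difference between a five-line test and an untestable class.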

When I worked at a home improvement retailer, one of my jobs was to cut wood to size for customers. I was taught by a retired carpenter-turned-retailer to measure on the saw where the piece of wood would be once it was cut, then clamp scrap wood there. This gave me a guide to know when the 2×4 was cut correctly without too much conscious effort, and also prevented me from cutting too much.

For me, writing a test is a lot like clamping that scrap into place. The test is a fixed goal to hit, and you know without question and without thinking about it while you’re coding if you’ve hit it or not. If tests are failing (and that includes refusing to compile or build), the code isn’t right. Could I cut 2x4s by measuring each 18 inch length out individually? Yes, but it’s harder than just cutting till I can’t reach the scrap anymore. Could I write that class to cover all the use cases without a test for each one? Probably, but it’s definitely going to be harder.

I write tests because they force me to think about the problem, to break it down to testable pieces, and figure out how to keep it testable. I’m not trying to combine the “what” and “how” in the same thought. Tests also force me to implement that code in a way that is testable — and testable code is (typically) easy to change, easy to diagnose, easy to plug into different use cases.


The beauty of arrange/act/assert, especially in an xUnit style test with a minimum of shared setup, is each test is explicit about how the system under test behaves under different conditions. If you’re confirming that a specific result happens based on configuration, you have to put that value in the test. If the data context has to return a specific value, you have to specify it in the test where it’s visible to anyone.

Which means that a year from now, when you need to update a switch statement — you already know what all the values correspond to, without looking up the requirements doc from two projects ago. You won’t need to spend as much time explaining the code to someone — the tests lay it all out, in all the variability. The dependencies are documented, the expected behavior of dependencies is laid out.

Recently (and this has happened more than a few times) I went to the tests as the first stop in a bug squash. I quickly realized that none of the tests covered the scenario where the main dependency throws a null reference exception; as a result, the code was just logging the exception and returning an inappropriate value. I was able to write a test that replicated the situation, and then put a bug fix in without ever actually debugging the app.

Having up-to-date tests is like having up-to-date documentation. You don’t need to debug the application to figure out what it’s doing under the hood, you already know by following the story told by the tests.

And When Coupled With Test Driving…

So all of that above is primarily based on experiences I’ve had trying to wrap legacy code, and it boils down to “Tests help me understand the code so I can improve it safely.”

But…what if you were able to avoid the whole “this needs redesigned before we can add the feature” part? What if I told you there was a way to build that same quality into your code, right from the start?

This is the obligatory TDD plug. I don’t want to harp on it — I love TDD, and even I can’t stand most of the TDD missionaries out there — but again, I view the tests the same way I view configuring the IDE, using git aliases, customizing my PowerShell profile, using Resharper. It’s a tool that allows me to work with the code in a way that drives Quality. Writing a test for an empty service class is going to keep the problem I’m trying to solve very small. And if I’m trying to solve a small problem by “using the least amount of code to make the test pass,” I’m far less likely to over-engineer a situation. This keeps my code lightweight, flexible, and simple.

As the problems become more complicated, in the “Only update the database if these 3 conditions are true and also it’s Tuesday” vein, so does my code…but incrementally, and in a way that doesn’t break previous passing tests. I’m already avoiding regressions and we’ve never deployed this code. My code is only as complex as it needs to be (if I stay disciplined), and the fewer moving parts the fewer things that can spawn bugs.

Tests, whether before or after writing your prod code, are going to drive Quality. I just prefer to be efficient and find out I’m making a mess before I commit any changes.

Wrapping It All Up

I’ve worked in shops with no automated tests. I’ve worked in TDD shops. I’ve worked in shops that half-assed testing. I’ve learned you do not need tests to write and change working code, but that tests make the job infinitely easier. And when delivering software isn’t an absolute struggle, I write far better code.

Code that can’t be tested without a lot of work is smelly. Writing tests in that case is like opening the refrigerator door — without opening that door you never smell the fact that last week’s leftovers are ready for the trash. Tests, if nothing else, tell the story of how your code is supposed to function — far better than writing a README or walking someone through the entire application.

These two items are the things I’ve come to appreciate about automated tests far more than the pat “I know the code works because tests.”

How I Stopped Worrying and Learned to Love the Bang

I recently stopped cold while writing a ReactJS component and experienced a very brief (albeit intense) existential crisis. I had just, without giving it excessive thought, used not only a ternary expression to determine what the component would render, but also a bang in my logical test.

First: My Beef With Ternaries

I learned early in my code journey about the value of legible code, of chasing the paradigm of self-documenting code, of explicitly indicating one’s intentions with a minimum of mental friction. This is largely why I avoid writing ternary expressions wherever possible, thinking that an item like this

var result = input > BASELINE ? "Input is above baseline" : input < BASELINE ? "Input is below baseline" : "Input matches baseline";

is a difficult-to-read unclear mess; whereas

var result = string.Empty;

if (input > BASELINE)
    result = "Input is above baseline";
else if (input < BASELINE)
    result = "Input is below baseline";
else
    result = "Input matches baseline";

puts practically zero friction on the eyeballs and takes nearly zero effort to understand.

I can understand the desire to write as few lines of code as possible, for a lot of reasons we don’t have space to get into just now. But I don’t feel that brevity was a good enough reason to make something more difficult to read. Many pair partners can attest I have a tendency to start with a classic IF-ELSE block and resist refactoring it into something more brief.

Enter the “Bang”

I might have a strong distaste for ternaries, but I absolutely despise using bangs. The fact I feel compelled to take a moment and define what the hell it is for the less technical reader seems to summarize my issue with it–it’s an overly complex tool.

A “bang” is simply an exclamation mark (“!”). In most (all?) programming languages, we use it to mean “Take the opposite of this bool (true/false) value.”

For instance, if I have a bool variable “isDarkMode” that equals TRUE, then “!isDarkMode” means FALSE.

Nice and confusing, yes? Thanks; I hate it. To be fair, I have seen cases where using a bang is the most elegant (or least messy) way to get what we want…but more often, I’ve found it to be a sign of lazy naming, poor coordination across application layers, or someone trying to be super clever. It just smells.

// Dark Mode is enabled when the UI element isn't toggled because it's wired backwards
var isDarkMode = !uiToggle.IsToggled;

if (!isDarkMode)
    // set everything to light mode styles
else
    // set everything to dark mode styles

You might think this is contrived, but I’ve looked at similar blocks of production code more than twice, going “Why does this make my skin crawl?” A couple quick refactors (like inverting the IF and ELSE to get rid of the need for the bang, or renaming the variable so we can use uiToggle the way it was written) will restore sanity…but suffice it to say, when I see “bang” I say “Oh dear.”

You May Ask, “What Changed?” I’ll Tell You…

I essentially learned to code in C#, and then spent the first years of my career primarily in the back end of .NET Framework applications. I relish the sense of order that comes with strongly typed objects backed by interfaces — knowing I can extend these stable items whenever I need new behavior, and that they’ll behave the same way over and over (or throw an exception trying).

More recently I’ve been adding functionality to a React web app (as you may have gathered from my opening paragraph). I’ll own it–JavaScript in general, and ReactJS in particular, were technologies that baffled me as I tried to become fluent in C#. The differences between a strongly typed language and the chaos of JS were bad enough–but React leverages the extreme flexibility of JS in ways I couldn’t comprehend.

For instance, the fact that when using a child component you can pass in one, some, all, or none of the properties defined or used in that child component without expressly providing overloads for the component. Try that in C# and see how fast your app blows up.

// Renders an empty view in light mode
<ChildComponent />

// Displays the list data in light mode, but does not render edit options because no user
<ChildComponent dataList={list} />

// Displays the list data with all options, in dark mode
<ChildComponent dataList={list} userId={currentUserId} darkMode={true} />
The magic, and my existential crisis, happen inside the child component’s Render() function, thanks to how JavaScript handles resolving objects.

render() {
  const { dataList, userId, darkMode } = this.props;

  return (
    <div className={darkMode ? "dark-mode" : "light-mode"}>
      { !dataList
        ? <EmptyDisplay />                  // render empty display
        : <DataListView items={dataList} /> // render the list
      }

      { userId &&
        <EditOptions />                     // render the edit/add options
      }
    </div>
  );
}
Because neither JS nor React will balk if any of those three properties are unassigned, we can (and should) take different actions depending on whether there’s actually an object attached to that variable. React makes this easy enough with their conditional in-line rendering.

What we see here inside the logical operators is shorthand for “Is this defined?” and when we apply the bang “Is this NOT defined?” It’s checking both that the variable has been defined, and that it has been defined in a usable way (ie, dataList isn’t null). We could do this manually and explicitly…but that’s not good, idiomatic, or smart JavaScript. It’d be like writing one’s own “string.IsNullOrEmpty()” method instead of using the built in C# method.
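A quick sketch of what that shorthand is actually doing (describe is a hypothetical helper, not part of the component above): in JavaScript, a bare variable in a condition asks “is this truthy?”, and undefined, null, 0, "", and NaN all read as false.

```javascript
function describe(dataList) {
  if (!dataList) {
    return "no list"; // one check covers "never passed" AND "passed as null"
  }
  // note: an empty array IS truthy, so [] reaches this branch as "list of 0"
  return "list of " + dataList.length;
}

console.log(describe(undefined)); // "no list"
console.log(describe(null));      // "no list"
console.log(describe([1, 2, 3])); // "list of 3"
```

That single bang check is why the component can shrug off missing props instead of blowing up on a null reference.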

You’ll see I open my first ternary with a bang (aren’t you glad you’re not paying for this content??). I did this because I felt, in this case, it IS the most explicit approach. I wanted to indicate what the next person in the file should pay the most attention to–the component cares if the dataList is missing. We could invert this ternary and get the same result…but, I wanted to emphasize “The weird, early-return style behavior happens when there’s no list data.”

And yes. I’m using a ternary here, two if we count the neat short circuit with the userId. For one thing, I can break it up over several lines for clarity. For another, it would be significantly more work to replicate this behavior in a different pattern. It makes sense, and (based on the existing examples in the code base plus all the reading I did) it’s idiomatic React.


I had a major polyglot growth moment. I embraced the features of a language that just wouldn’t fly in my “home” tech stack, and I added a whole mess of tools to my kit to use and reference going forward. What is nonsense in a typed language is totally fine in an untyped one, and that’s okay.
