“Think big, act small, fail fast, learn rapidly” is a tenet taken from the book Lean Software Development: An Agile Toolkit, by Mary and Tom Poppendieck. It’s also the title I used for an Ignite talk I gave in 2011 at AGU on the subject of agile development. That talk was a pretty classic example of abysmal failure, where in my hubris, I decided I didn’t really need to practice for a simple 5-minute talk. The deal with Ignite talks is 20 slides in 5 minutes with slides advancing automatically every 15 seconds. I think I started getting behind my slides within the first minute, and it all went downhill from there. I was so pissed at myself over this that I had to try and make up for it with a completely different talk – one that I practiced this time – at the Ignite@AGU show in 2013 titled One foot in front of the other and other lessons from the Camino de Santiago. (You can read more about our Camino experience on my blog.)
A few years ago, I started figuring out, embracing, and implementing agile methods for development of one of the major software platforms I’ve been responsible for envisioning and creating – ScienceBase. I got into this when one of our developers sent me a short note saying something to the effect of, “You know, Sky, you’re getting way too deep into our business of developing this thing; trying to direct too much of the actual on-the-ground work instead of laying out a vision that sets things in motion. You ought to go read about Extreme Programming, Scrum, and some of these other types of development methodologies.” I had done the project management thing in the past when I was figuring out how to manage some IT projects in the networking and security arena, studying and using the PMBOK in laying out requirements and executing projects.
Much of that material was thick and cumbersome to learn and work through, so when I started digging into the agile movement and read the Agile Manifesto, it really resonated. I’d struggled in past projects with exactly what the agile writings talked about: putting together piles and piles of documentation that were essentially never used in a working project. There’s a lot of great meat in “traditional” (and still much-used) project management methods that is vital in some kinds of projects, but for the types of things I was doing at the time and continue to do today – especially software development projects with a large R&D component – agile methods are the only way to go.
Thinking about government, fear of failure, and risk aversion had me remembering the tenets of agile summed up in the title of this post and in a few other things I’ll talk about. As I’ve alluded to in a couple of past posts, I’m in a situation now in my current job where much of what I’m trying to do seems fundamentally at odds with some of the rest of my organization. A couple of conversations in the past year have me thinking that there may be a fundamental difference in personality and style between me and a few other folks that might be illuminated by examining the difference between agile and more traditional waterfall approaches to project management. I’m coming to think that it’s not just the trappings and processes of these different methods but a more fundamental difference in thinking and outlook that may be part of the root of the problems and conflicts I’m seeing.
Much of project management thinking over the last 50 years or more has focused on risk identification and management. There’s an underlying notion that if we can only identify, articulate, and document all aspects of what some particular endeavor needs to accomplish and what resources will be needed to accomplish the endeavor, then we will be successful. The entire exercise of carefully planning a project is intended to reduce the unknowns and identify potential blind spots so that they can be effectively dealt with. This way of thinking is probably quite important when doing something like planning the next NASA spaceflight mission, at least in terms of the entire project. But these methods have been found by many organizations to be much less effective when building capabilities in the constantly shifting digital world.
An interesting criticism came up recently related to the ScienceBase project that I mentioned. Someone said that the system seems to be in a perpetual beta mode, and the implication was that this is a bad thing. My reaction was, “Well, yeah; we kind of designed it to be that way, taking inspiration from the likes of Google and others who do exactly the same thing with big production systems.” We’re providing services to hundreds of users with that system and working very hard to make government data more discoverable, accessible, and usable through web services and APIs that facilitate unanticipated good uses of our information. We have a ton of unknowns and a backlog of hundreds of items that we’ve thought about but not yet figured out how to accomplish. We execute the work in two-week sprints, with a new version of the software and new features released almost every sprint. We do our best to manage the sometimes competing priorities of different stakeholders as we strive to both think big about where this platform needs to go and act small in ways that benefit individual participating projects. So, it is in perpetual beta, and we think that’s a very good thing.
But government seems to still be struggling to get its head around how to operate this way. When I look at it, almost all of our institutions, policies, conventions, and cultural dynamics are geared toward exactly the opposite approach. For instance, we have been pushed more and more toward “fixed price” contracts over the last several years. What this means, essentially, is that we have to conduct traditional project management methods to carefully lay out very specific requirements for whatever it is that we want to do, put those out on the street for competition, select a contractor (usually the lowest bid), and hope both that they deliver on the contract and that we actually thought of everything beforehand. The first Healthcare.Gov was a good example of this, with dozens of individual fixed price contracts that had to be stitched together to produce the whole shebang.
The arguments for this approach seem pretty sound on the surface. We need to budget for what something is going to cost going into it with a plan that lays out sufficient specificity to give us some assurance that we know what we’re going to get on the other end for some amount of funding. We need competition so that government is not giving unfair advantage, so we need to create enough of a level playing field to give several different groups a chance at bidding on the same requirements. Budgets and allocations change and are subject to political will and regular changes on the election cycle, so we often need to build assets (real property, software, etc.) for the long haul with sometimes larger up front cost but lower regular maintenance cost. But sometimes this approach fails to achieve a working result as we saw with Healthcare.Gov version 1 and countless other projects.
My approach, since about 1999 when I started this endeavor to figure out advanced technologies for government data about complex earth systems, has been to go after really big problems that haven’t been completely solved yet, work in small increments to get something on the ground, fail in some way regularly, and try to incorporate some lesson from every failure in future iterations. One of the biggest lessons I’ve learned over time is how to distinguish between the big vision and the on the ground reality in how I communicate what we’re doing. Early on, when I was trying to build enthusiasm for my ideas, I fell into the trap of selling vaporware – talking up the possible instead of explaining first what could be done in the here and now. That got me into trouble when people went to look for what I’d just been talking about at some meeting or conference and couldn’t find what they thought ought to be there already. I’ve learned to be real clear about what’s on the ground today vs. what I’m still dreaming up for the future and what might still be in an R&D cycle.
Over time, as technologies and our own thinking evolved, we ended up incorporating some of this lesson into the fabric of what we’ve built by taking an API-first approach. Knowing that we had more diverse needs than we could meet through our own developed applications and losing some customers along the way because of this fact, we built an application programming interface and started spinning up reference applications for others to pattern unique solutions upon. We could then tell people that the functionality for what they wanted to do is supported via the API, and if they want to build their own custom application to do just what they want, they can have at it. That’s worked for quite a few cases with now more usage of the API than we have of the central user interface. We’ve also got innovative app developers coming out of the woodwork, building all kinds of creative solutions with uses of our platform that we never anticipated.
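The API-first pattern described above can be sketched as a thin client layer that the central user interface, our reference applications, and third-party apps would all share. The endpoint URL, parameter names, and response shape below are illustrative assumptions for the sketch, not the actual ScienceBase API contract:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Hypothetical items endpoint -- illustrative only, not the real
# ScienceBase catalog URL or query contract.
BASE_URL = "https://example.gov/catalog/items"

def build_search_url(query, base_url=BASE_URL, max_items=20):
    """Build a search URL for the hypothetical items endpoint."""
    params = urlencode({"q": query, "format": "json", "max": max_items})
    return f"{base_url}?{params}"

def parse_items(payload):
    """Extract (id, title) pairs from an assumed JSON search response
    of the form {"items": [{"id": ..., "title": ...}, ...]}."""
    doc = json.loads(payload)
    return [(item["id"], item["title"]) for item in doc.get("items", [])]

def search_items(query):
    """Fetch and parse a search. This is the one shared entry point that
    the central UI, reference apps, and custom apps would all call,
    which is the essence of the API-first approach."""
    with urlopen(build_search_url(query)) as resp:
        return parse_items(resp.read())
```

The point of the design is that every application, including our own user interface, sits on the same public contract, so a project that outgrows the central interface can build on `search_items`-style calls rather than asking us to change the core system.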
But I’m beginning to see that there are a few things about this approach that don’t sit all that well with some. My willingness to sometimes go for broke and fail big while still making overall forward progress seems to rub some people the wrong way. It seems too risky for some reason, and in that aversion we’ve lost an aspect of what government science, as opposed to academic science, is supposed to be about. In government, we’re supposed to be going after the high risk, high reward ideas that are in the public interest but are too risky or too long term to entice private capital. In the world of data, this has been a driving thought behind a national strategy for big data and data science we are developing as part of a senior steering group I’m on: what are the big new things that government agencies need to invest in to help spur economic growth in the private sector?
Many of our institutional devices, such as the contract vehicles I mentioned and the constraints we seem to place on the creativity of our government employees, stifle innovation and risk taking. But at the same time, government employees are pretty difficult to fire once they’ve gotten into a full time permanent position. While salaries can be lower than those of comparable private sector jobs, the stability and benefits still make government jobs attractive for certain kinds of people in some fields. It seems like government organizations should be taking the attitude that we’re signing up to pay people for the long haul and then encourage them to take risks and move us along by leaps and bounds into new and better things. And it seems like government employees should rest secure in the fact that their organization is backing them up and should charge ahead with all the creative energy they have to help make taxpayers’ lives better in whatever the core mission happens to be.
I’ve been transitioning (kicking and screaming) in the last few years from an idea generator to an idea recognizer and promoter. I sometimes joke that what I do is point out the difference between good ideas and stupid ideas, but it’s really more about finding younger people with potential and helping to mentor and support them to take risks on their ideas and see them through to something real. I don’t know precisely how to change the risk-averse culture I see in government or if it’s even possible. Perhaps there is too much inertia, too long a history, and too many constraints built up to break through. But I still have hope that I can do a little bit to push things in a direction I think we need to go, one student or one junior employee at a time.