In my last post, I introduced a topic that I’m exploring through at least a couple of additional essays. The title of this blog is actually a motto that I’ve only somewhat jokingly suggested for my immediate organization over the last couple of years – Embrace failure!

I have failed regularly over the course of my career – sometimes spectacularly. I sometimes tell stories about all of the “career-limiting moves” I’ve made over time, from incidents in the field during my early career in environmental contaminants investigations to times I’ve tried to do something innovative with data technology only to have it go over like a lead balloon. As an example, I’ll share an incident that was pretty gut-wrenching at the time but is far enough in the past to involve some principals who have now retired.

In about 2000, I started running down an idea that has ultimately led to some powerful capability in scientific data and information management. I had the notion that the information about people and organizations involved in the work we were doing was really important to keep track of in a cohesive and long-term way. I didn’t realize until later that this was all part of what companies at the time and shortly after were calling master data management; ORCID and things like it beyond a given library system were still just dreams at the time. I thought that we should be able to use the directory we had for our email system – the Lotus Name and Address Book (NAB) – for more than just driving email. So, I set about trying to come up with some way to interface with it for other purposes. LDAP didn’t work very well because of how Lotus implemented their indexing process behind the scenes, and there were issues with data integrity over time because of policies on actually deleting people from the directory instead of just disabling their user accounts. So, I started experimenting with methods that would let me extract the directory attributes in real time into a relational database.

I was in a different organization at the time than I am now, and I was in a position where I had pretty much the highest administrative access to the entire email system for 10,000+ employees. It probably wasn’t the wisest thing to give me that access since I wasn’t an actual email system administrator, but it afforded me the ability to experiment with ways of solving this problem with a robust, long-term database of all the people and groups in the organization. So, I hooked up my software and kicked off the process that was supposed to do an initial synchronization of all the information into my database. Everything seemed to be cooking along pretty well, and I was delighted to see all the attributes start to spin up in real time from one fairly opaque and proprietary data structure into an open and usable database.

A little while later I started hearing some strange goings-on outside my cubicle from other people in the center who were responsible for all the IT across the organization. It was obvious that something was up in an operational sense. I poked my head out to see what all the fuss was about, and apparently the CIO of the organization at the time was suddenly unable to get to his email and was screaming for blood. The sys admin folks were running around with their hair on fire, because it looked like his entire account had inexplicably disappeared. They thought for sure they’d been hacked, and it was a big deal because that account held his actual encryption key, which meant that his encrypted mail might never be accessible again.

I remember thinking, “Gee, I wonder what’s going on there; it’s a good thing that’s not my problem to deal with.” I went back to check on my data extraction job and started poking around a little bit, only to have an “oh shit” moment. I was looking at my data, and for some strange reason the CIO’s directory entry was the very first one that got synced into my database. Perhaps he was the very first account set up in the new email system back when; I don’t know. In looking at the record, I saw that a bunch of the attributes in the relational database where I was pulling the Lotus NAB data were blank. Evidently, the sync process I was using was actually a two-way street: somehow those blanked-out attributes in my external database fed back into the live email directory. It took a little while to see the effect for that one (really important!) record since the various email servers took a while to synchronize their local copies, but eventually it reached the local server where the CIO got his email, and by that time it looked like his whole account had been expunged from the system. It turned out that it was just his person attributes that were missing. His security certificate was still there, and we just needed to repopulate his data to set everything back the way it was. But that was a pretty darn big failure associated with an experiment I was running to try and solve an information management challenge.

That could have been a real career-limiting move. I did get a pretty good tongue-lashing once the dust settled, but my boss and the other folks in authority in my local office protected me from the wrath of the senior executive. They didn’t serve my head up on a pike; they told the CIO that a maintenance error (they didn’t even give the “maintenance error’s” name!) had been introduced into the system and had since been discovered and corrected. They didn’t even take away my admin rights or keep me from continuing my work, but I did have to introduce a little more care and crosschecking into my methods. I learned some valuable software engineering lessons that day about process and data isolation and about loose rather than tight coupling of information systems. All of that early experimentation eventually resulted in working data capability within that organization and led to what has become a powerful underlying information system for data integration and scientific record keeping in my current organization, one that is now serving as a platform for experimentation on semantically based recommender systems for scientists.
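To make that loose-coupling lesson concrete, here is a minimal sketch of the kind of one-way, read-only extraction I should have been running in the first place. The Lotus NAB is long gone, so this is not the system from the story: it assumes a generic LDAP directory reached through Python’s ldap3 library and a local SQLite staging table, and the server URL, search base, and attribute names are hypothetical placeholders.

```python
# Minimal sketch: a one-way, read-only directory extract that pulls person
# attributes from an LDAP directory into a local relational staging table.
# The directory is only ever read; all writes go to the isolated local copy,
# so nothing can flow back into the live system.
# The server URL, search base, and attribute names are hypothetical examples.
import sqlite3
from ldap3 import Server, Connection, ALL

LDAP_URL = "ldap://directory.example.org"
SEARCH_BASE = "ou=people,dc=example,dc=org"
ATTRIBUTES = ["uid", "cn", "mail", "ou"]  # attributes to track long term


def attr(entry, name):
    """Return the first value of an attribute, or None if it is absent."""
    values = entry.entry_attributes_as_dict.get(name, [])
    return str(values[0]) if values else None


def extract_people(db_path="people_staging.db"):
    """Read person entries from the directory and upsert them into SQLite."""
    server = Server(LDAP_URL, get_info=ALL)
    # Anonymous, read-only bind; a real deployment would use a low-privilege
    # service account rather than an administrative one.
    conn = Connection(server, auto_bind=True, read_only=True)
    conn.search(SEARCH_BASE, "(objectClass=person)", attributes=ATTRIBUTES)

    db = sqlite3.connect(db_path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS person "
        "(uid TEXT PRIMARY KEY, cn TEXT, mail TEXT, ou TEXT)"
    )
    count = 0
    for entry in conn.entries:
        db.execute(
            "INSERT OR REPLACE INTO person (uid, cn, mail, ou) VALUES (?, ?, ?, ?)",
            (attr(entry, "uid"), attr(entry, "cn"),
             attr(entry, "mail"), attr(entry, "ou")),
        )
        count += 1
    db.commit()
    db.close()
    conn.unbind()
    return count


if __name__ == "__main__":
    print(f"Extracted {extract_people()} person records")
```

The design point, relative to my original mistake, is that the connection is opened read-only and the job never issues a modify or delete against the directory; the relational copy can be rebuilt or thrown away at any time without any risk of a change flowing back into the live system.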

That failure taught me some really important lessons. Yeah, on the one hand it was a pretty epic failure with some obvious mistakes made by a guy who had no formal training in technology and really shouldn’t have been given the power to do what he (I) did at such a “colossal” scale. However, I know from my own experience that I learn much more, and more poignant, lessons from my failures than from my successes. There’s a substantial body of literature on this dynamic from organizational and management researchers, such as this article that tells a pretty compelling story on learning from failure vs. learning from success – Failing to Learn? The Effects of Failure and Success on Organizational Learning in the Global Orbital Launch Vehicle Industry. There is also a fair bit of work in education, controversial in some circles, on the importance of teaching “grit” to students through organized failure. This story on NPR last month caught my attention and has been churning in my mind since then as it relates to my own organizational situation – On the Syllabus: Lessons In Grit.

One of the things in government that has really driven me nuts over time is how we give ourselves awards for things and spend quite a bit of time celebrating successes that are dubious at best. There was an effort a number of years ago that I’m not going to mention by name or even go into too much detail on because quite a few of the folks are still around. It was a big interagency thing to try and make some strides in how we were sharing and exchanging information. The leads for this were in a totally different organization than I was, but I was watching the deal play out with interest because it seemed to have some real promise in an area I was passionate about. It had a lot of promise, but it never really materialized into what it was supposed to be. Somewhere along the way, I remember catching an article published someplace about how the group implementing the solution had received a big award for the success of their work (essentially giving themselves the award, since it didn’t come from some third-party organization). I remember thinking, “wow, what are these guys getting an award for; this thing sucks!” I’ve seen it happen all over the place in government at various scales, and quite frankly, I’ve probably also been the recipient of overemphasized success vs. honest assessment of failures.

If the research in this area is correct, then we are not doing ourselves, either corporately or personally, any favors by celebrating our successes and downplaying or burying our failures. We spend a lot of energy trying to come up with highlights of our successes and no real time exploring and articulating the lowlights (or dim bulbs). Undoubtedly, there’s a tremendous weight of human nature involved in wanting to gravitate toward areas where we feel like we’re succeeding rather than dwelling on our mistakes. People don’t like to sit and ponder overly long (if at all) on the areas where they’ve messed up, either personally or professionally.

It is a fair bit different in science, and I’m trying to explore some ways I can introduce some of the formal structures from the inherent competition in science into an organization that has few trained scientists and is not yet a true research organization. I do believe that scientific practice has developed formal structures as a way to help deal with the human tendency to veer away from the inherently painful and conflict-ridden dynamics of confronting failure. In a more personal area of my life, I recently made a statement that I believe to be very true – the greater the conflict, the more structure we need to deal with it.

The vast majority of science is driven by competitive processes through proposals to funding agents of one kind or another. It happens at large scales with government science agencies and NGOs, and at local scales within a lab that is funded writ large but still competes internally for ideas. We spend a lot of energy putting together a proposal that is essentially a sales document. It tries to tell a compelling story about an idea that we want to take forward in a way that clearly meets the criteria for a given solicitation and represents enough cutting-edge knowledge across its scientific domains that a peer review panel will buy the fact that we know what we’re talking about. We talk about our current and past work and spruce up our CVs so that the panel and funders will trust us when we say we can pull off the proposed activity and get to some publishable results. When our proposals fail, we really want the details of why the panel and the funders didn’t buy what we were selling. We need to know what they were thinking, whether we scoff at it or not, so we can make the next proposal more salable.

We have a similar form and structure when we go to publish our results in peer-reviewed journals. The reputable journals, their editorial boards, and the peer reviewers they engage are all focused on making sure that the research being presented is not only scientifically sound but actually represents new thinking in the world. Fortunately, we are seeing increased reward given to work that synthesizes previously collected data in addition to all-new data collection, but successful scientific synthesis is still new, original thinking. Scientists are hard on each other because of this global competition for ideas, and even accepted papers often include harsh comments challenging specific ideas, methods, and conclusions that have to be addressed before publication. When scientists in “publish or perish” roles take a body of their work before peer review panels to seek tenure or promotion, they face collegial but frank criticism of at least a period of their life’s work. Good panelists, often senior scientists in the field, poke and prod to find out how the researcher in the hot seat has dealt with their failures.

A few years ago, when my team and I were involved in an aspect of the response to the Deepwater Horizon incident, we got into some fairly serious hot water with our scientific peers when, due to the nature of the incident, we released the results of our calculation of the oil budget (how much oil there was and where it was going over time) without all of the data and the specifics of our model. We immediately started getting calls from angry colleagues demanding that we put our thinking out in a way that could be scrutinized and argued over. We eventually put our report out for more formal peer review, and in the final version of the published government report, we included all of the reviewer comments and our responses in an appendix – something that is rarely done publicly – because we wanted people to understand our thinking and have full transparency into what was a challenging process, but one that has stood the test of time.

These formal structures have a number of key characteristics important to their processes. We attempt to self-identify conflicts of interest and deal with them by recusing ourselves in cases where those conflicts interfere with objectivity. We also try to understand and freely admit biases, both within these formal structures and throughout research. Biases themselves may be seen as a certain type of failure in some sense, as they represent areas where we are not as objective as we could or should be, and they change over time as we learn more and gain different perspectives. The scientific community at large does police itself in these and other areas and consistently opens itself for scrutiny. And in the nature of bias, I freely admit that I’m exercising some of my own here by taking a somewhat idealistic viewpoint on the process of science. There are many areas where the ideal is met and many where we fall short.

In the examination of the most innovative U.S. government agencies that I referenced in my previous post, NASA consistently comes out on top. In my view, NASA is more of a science agency than any other Federal Government organization, and this may partially explain NASA employees’ sense of being encouraged toward innovativeness. Knowing quite a few NASA colleagues, I can say that there is a general scientific mindset and ethos that I have found only in smaller pockets within my own organization and some of the other government science agencies I work with.

So, I’m pondering this notion of embracing failure that I’ve tried and failed to bring about as a motto. I’m wondering how we can be more honest about our successes, which are at times a bit overblown, and about the failures we seem to want to sweep under the rug. I’m thinking about how we can put the processes of collegial competition for ideas and honest evaluation of strengths and weaknesses into place more firmly in an organizational unit that is not yet quite ready to become a full scientific research institution. Do we need weekly “lowlights” as well as weekly highlights? Do we need to start operating our program as more of a laboratory and put in place clear internal processes for competing ideas, with both succeeding and failing proposals? We definitely need to publish more of our work, pushing our ideas out into the global marketplace for scrutiny, but how do we do that gradually in an organization without very many publishing scientists? How can we encourage a new ethos that treats honest yet collegial critique as a positive rather than a negative force? Can we learn to celebrate our failures just as much as, or more than, we celebrate our successes? Can we establish whole new measures of success as an organization based on full and honest evaluation of our progress?