
The problematic Paperwork Reduction Act

The Paperwork Reduction Act (1980, 1995) and the Government Paperwork Elimination Act (1998) together suggest that the government wants to move away from burdensome, paper-only interactions with the public toward a 21st-century approach that takes advantage of the online world. The Government Paperwork Elimination Act (GPEA) mandates that government agencies treat electronically submitted information the same as a paper version – even to the extent of recognizing electronic signatures – so that individuals can transact with the government electronically. The Paperwork Reduction Act (PRA) is intended to reduce the burden on the public resulting from information collections. Simply put, agencies should not require unnecessary information from the public and should make the best use of the information they have collected.

These goals are the right ones. As someone who has applied for visas for foreign countries and had to provide odd pieces of information that were clearly irrelevant, I am happy that the US has a mechanism to avoid such a thing. Unfortunately, the details of the legislation and its implementation are interfering with the goal, despite what are clearly the best intentions of all concerned.

One problem is process-related. The PRA sets up a process for both new forms and changes to existing forms that requires a 60-day public comment period followed by a second 30-day public comment period once feedback from the initial comment period has been incorporated. The form must then be approved by the chronically understaffed Office of Information and Regulatory Affairs (OIRA) at OMB. With the time required for preparation of the documents OIRA requires, the process can take one to two years for a change to an existing form.

The result is that agencies are discouraged from making improvements to their forms. Planning within agencies centers around how to avoid making changes that will trigger a PRA review. In an era when tech-savvy companies make continuous improvements to their user interactions, often testing two versions of the user interface at the same time (called A/B testing), this process interferes with the government’s ability to reduce burden and improve the public’s experience when transacting with the government.

A second issue is the existence of loopholes in the legislation. Government agencies are instructed to accept electronic signatures “where practicable.” In many cases the Department of Justice believes that such signatures are not “practicable” and agencies must require “wet” signatures even if a form is submitted electronically.

Perhaps the biggest issue, though, is the equating of paper and electronic versions of forms. OIRA requires parity between forms that are available both electronically and in print. This means that many of the features of electronic customer interaction are not allowed, since they would create a disparity between the channels. For example, online forms typically “validate” information as it is entered, flagging errors in the user’s input. Since paper allows the user to write anything they want, agencies are not allowed to stop an applicant from electronically submitting information that is clearly wrong. This denies agencies and the public one of the greatest benefits of electronic interactions.
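This kind of validation is routine in commercial software. A minimal sketch in Python of what agencies are effectively forbidden to do (the field names and rules here are hypothetical illustrations, not any agency’s actual form):

```python
import re
from datetime import date

def validate_application(form: dict) -> list[str]:
    """Return a list of error messages; an empty list means the input is acceptable.

    Field names and rules are hypothetical, for illustration only.
    """
    errors = []
    # A birth year in the future is clearly wrong; paper cannot catch this.
    if form.get("birth_year", 0) > date.today().year:
        errors.append("Birth year cannot be in the future.")
    # A nine-digit identifier checked against a simple format.
    if not re.fullmatch(r"\d{3}-\d{2}-\d{4}", form.get("id_number", "")):
        errors.append("Identifier must look like 123-45-6789.")
    return errors

# Flag obviously bad input before it is ever submitted.
print(validate_application({"birth_year": 2999, "id_number": "abc"}))
```

The point is not the particular rules but the timing: the error is caught while the applicant is still at the keyboard, rather than weeks later when an adjudicator rejects the paper-equivalent submission.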

There is a more subtle and insidious problem with this requirement. Electronic applications are generally – outside of the government – interactive; that is, as the user enters information the computer responds by providing related information. For example, once the applicant has been identified, the system can look up information it already has on the applicant and provide it as a “default” to reduce the burden on the applicant. But this would diverge from what is available on a paper application.

As a result, the government’s electronic applications are static, treated as mere equivalents of their paper counterparts. As with paper, the applicant is expected to fill out a static page and submit it before the government can provide any help. The paperwork burden on the public is not reduced, and the agency receives bad data, which makes its processing less efficient.

The PRA requires that an agency “to the maximum extent practicable, uses information technology to reduce burden and improve data quality, agency efficiency and responsiveness to the public.” The Open Government Directive further requires that OIRA review the PRA for impediments to the use of new technologies. In my view, that means that we cannot treat electronic forms as if they were paper forms, but rather must take advantage of all the advantages electronic interaction allows. Doing so would realize the spirit of the PRA and GPEA better than today’s process.

The legacy system modernization trap

Government agencies have often failed at efforts to modernize their legacy systems. Even when the effort is not labeled a failure, it is expensive, time-consuming, and risky. There is a good reason for this: we should not be doing legacy system modernization projects at all.

Why not? To begin with, “modernization” is not a business goal. It does not in itself represent a business outcome that adds value to the enterprise. As a result, its scope of activities can never be clear, the business cannot feel a sense of urgency (and as a result wants to hold off releasing until the system is perfect), and good prioritization and trade-off decisions cannot be made. It is a project that is all risk and no return: there is already a functioning system, so the agency is only risking failure. A modernization effort can never end, since being up-to-date is a goal that slips away as we approach it.

Legacy modernization also does not lend itself to an agile approach – that alone should be a giveaway that something is wrong. We cannot release the modernized product incrementally, beginning with a minimum viable product, because the business will not want to use something that does not at least match the functionality of the legacy system. This means that the scope of the first release is fixed and cannot be adjusted to fit within a time box. The legacy system must be replaced in a single release.

Yet there is a way to modernize a system. The first goal should be to bring contemporary technical practices to bear on the legacy system as is. And the first step in doing that is to bring the legacy system under automated test. A wonderful reference on doing so is Working Effectively with Legacy Code by Michael Feathers. Once the system is under automated test, regressions can be avoided and the code can be restructured and refactored at will. The second step is to re-architect (as little as possible) to make re-engineering pieces of the system efficient. In many cases this will involve creating elements of a service-oriented architecture – or at least finding ways to loosely couple different pieces of the system.
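Feathers calls the first tests written against an untested system “characterization tests”: they pin down what the code actually does today, so that later refactoring can be checked against that record. A minimal sketch in Python (the `legacy_fee` function is a made-up stand-in for any existing legacy routine):

```python
import unittest

def legacy_fee(amount: float, expedited: bool) -> float:
    """Stand-in for an existing legacy routine whose behavior must be preserved."""
    fee = amount * 0.02
    if expedited:
        fee += 25.0
    return round(fee, 2)

class CharacterizationTests(unittest.TestCase):
    """Record current behavior before refactoring. These are not tests of what
    the system *should* do, only of what it *does* do today."""

    def test_standard_fee(self):
        self.assertEqual(legacy_fee(1000.0, expedited=False), 20.0)

    def test_expedited_fee(self):
        self.assertEqual(legacy_fee(1000.0, expedited=True), 45.0)
```

With a safety net like this in place, the routine can be restructured or replaced, and any change in observable behavior shows up as a failing test.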

Now to the real point of the exercise. We need to work from real business needs – capabilities that the system does not have. For each capability we change the system to deliver it. We do so in an agile way, and we use our automated tests to feel comfortable that we are not breaking anything. Over time, the system becomes easier to change as components are replaced by more “modern” components. But we never do anything without a good business reason to do so.

In other words, we’ve modernized our system without doing any such thing.

Spend more on delivery, less on risk mitigation

Let’s do a simple Lean analysis of government IT system delivery projects. How much of our spend is on activities that directly create value, and how much is additional overhead? What percentage of our spend is value-creating?

The value-creating part of a software development project is primarily the actual development and testing of the software. Add to that the cost of the infrastructure on which it is run, the cost of designing and building that infrastructure, and perhaps the cost of any software components from which it is built. I mean to include in these costs the salaries of everyone who is a hands-on contributor to those activities.

The non-direct-value-creating part is primarily management overhead and risk mitigation activities. Add to these the costs of the contracting process, documentation, and a bunch of other activities. Let’s call these overhead. A great deal of this overhead is for risk mitigation – oversight to make sure the project is under control; management to ensure that developers are doing a good job; contract terms to protect the government against non-performance.

No one would claim that these overhead categories are bad things to spend money on. The real question is what a reasonable ratio would be between the two. Let’s try a few scenarios here. An overhead:value ratio of 1:1 would mean that for every $10 we spend creating our product, we are spending an additional $10 to make sure the original $10 was well-spent. Sounds wrong. How about 3:1? For every $10 we spend, we spend $30 to make sure it is well spent? Unfortunately – admittedly without much concrete evidence to base it on – I think 3:1 is actually pretty close to the truth.
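To make the arithmetic concrete, here is the fraction of total spend that actually creates value at each ratio (a back-of-envelope sketch using the hypothetical ratios above, not measured data):

```python
def value_fraction(overhead_to_value: float) -> float:
    """Share of total spend that goes to value creation at a given
    overhead:value ratio (e.g. 3.0 means $3 of overhead per $1 of value)."""
    return 1.0 / (1.0 + overhead_to_value)

for ratio in (1.0, 3.0):
    print(f"{ratio:.0f}:1 -> {value_fraction(ratio):.0%} of spend creates value")
```

At 1:1, half of every dollar creates value; at 3:1, only a quarter does.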

Why would the ratio be so lopsided? One reason is that we tend to outsource most of the value-add work. The government’s role is management overhead and the transactional costs of contracting. Management overhead is duplicative: the contractor manages the project and charges the government for it, and the government also provides program management. Another reason is the many layers of oversight and the diverse stakeholders involved. Oversight has a cost, as does all the documentation and risk mitigation activity that is tied to it. When something goes wrong, our tendency is to add more overhead to future projects.

A thought exercise. Let’s start with the amount we are currently spending on value-creating activity, and $0 for overhead. Now let’s add incremental dollars. For each marginal dollar, let’s decide whether it should be spent on overhead or on additional value creation (that is, programmers and testers). Clearly we will get benefit from directing some of those marginal dollars to overhead. But very soon we will start facing a difficult choice: investing in more programmers will allow us to produce more. Isn’t that better than adding more management or oversight?

To produce better results, we need to maintain a strong focus on the value-creating activities – delivery, delivery, delivery.

The “business value” of government

Agile delivery approaches focus on maximizing business value rather than blindly adhering to pre-determined schedule and scope milestones. On the definition of “business value” the agile literature is appropriately vague, for business value is defined differently in different types of organizations. I would even argue that it is necessarily different in every organization – each company, for example, is trying to build a unique competitive advantage, and results that contribute to that advantage can be valuable (“net” value, of course, would have to consider other factors as well). A publicly held company needs to maximize shareholder value; a closely-held private company values … well, whatever the owners value. A nonprofit values mission accomplishment. What does the government value and how does it measure value?

The answer is not obvious. Mission accomplishment is certainly valued. But different agencies have different missions and for some agencies measuring mission accomplishment is difficult (James Q. Wilson’s book Bureaucracy is great reading on the topic of agency missions). If the Department of Homeland Security values keeping Americans safe, how can it measure how many Americans were not killed because of its actions? In an agile software development project, how can we weigh cost against that sort of negative value to determine which features are important to build?

To make matters more complicated, the government values many things besides mission accomplishment. Controlling costs, obviously. Transparency to the public and to oversight bodies. Implementation of social or economic goals (small business preferences, veterans preferences, etc.). Auditability – evidence that projects are following policies. Fairness to any business that wants to bid on a project. Security, which in the government IT context can extend to keeping the entire country safe. And through appointed political agency leadership, political goals can also be a source of value. Each of these values may add cost and effort to a project.

To maximize business value, we must consider all of these sources of value. If we limit ourselves to the value of particular features of our software, we are missing the point. Rather, as IT organizations in the government, we need to self-organize to deliver the most value possible, given all of these sources of value. The government context determines what is valuable. What we must do is find the leanest, most effective way to deliver this value. This is no different from the commercial sector – only the values are different.

Government as a low-trust environment

The US government is, deliberately and structurally, a low-trust environment. Think about why we have a “system of checks and balances.” We have proudly created a government structure that is self-correcting and that incarnates our distrust of each branch of the government. Why is freedom of the press such an important value to us? Because we all want transparency into the government’s actions – not to celebrate its fine management practices, but to know when it is doing something wrong. Within the government, we have Inspectors General to investigate misbehavior, Ombudsmen to make sure we are serving the public, and a Government Accountability Office. To work in the government is to work in an environment where people are watching to make sure you do the right thing. It is a culture of mistrust.

That sounds horrible, and from the standpoint of classic agile software development thinking, it is unworkable. But take a step back – don’t we sort of like this about the government? “Distrust” has unpleasant connotations, but as a systematic way of setting up a government, there is a lot to be said for it. It is another way of saying that the government is accountable to the people. You could almost say – you might want to hold on to your hats here, agile thinkers – that mistrust is actually a value in the government context. So where does that leave us if agile thinking wants us to deliver as much value as possible, but believes that agile approaches require trust?

It might sound academic, but I think solving this dilemma is critical to finding ways to bring agile thinking into the federal government. A typical IT project experiences this structural distrust over and over: in the reams of documentation it is required to produce, in the layers of oversight and reviews it must face, and in the constraints imposed on it.

I will argue that even in a low trust environment, agile approaches are still the best way to deliver IT systems. And that certain tools – borrowed primarily from DevOps – actually help us resolve the dilemma. Waterfall approaches fit well with mistrustful environments by holding out the promise of accountability and control – but they just don’t work. So how can we bring agile, lean, team-based processes into an environment that is structurally mistrustful, and realize our goal of a lean bureaucracy?

Optimize the whole

Let’s talk about how to reduce government IT cycle times (mission need to deployed capability). As the Lean principle urges, we need to “optimize the whole.” The “whole” in this case goes well beyond system development. We can implement Continuous Delivery practices but if we don’t address the other parts of the value chain, our impact will be tiny. And we’re shooting for a big impact, right?

To go from mission need to deployed capability, here are some of the steps we may need to pass through: gathering requirements into a monolithic “program,” documenting the program and its plan, satisfying oversight bodies, securing funding, executing one or more procurements (typically for development services, hardware, and software), onboarding contractors, developing and testing the product, having it retested by independent verification and validation (IV&V) partners, configuring production hardware and networks, receiving an Authority to Operate (ATO – verification that it is secure enough to release), passing it through a change control board, and deploying the system.

I’ve probably missed a few things here. But a few things should be clear. First, the “value add” development part is tiny compared to the whole (both in time and cost). Second, there are many opportunities for leaning out the process, even if we still need to execute each of these steps. Third – and most subtly – there are real “business” needs, in the government context, driving each of these steps. We really do need to ensure that the system meets FISMA security requirements. We really do need to secure funding through an appropriations process. And so on.

So the question for me is: how can we meet these “business needs” of the government through the leanest possible process?

Why DevOps will revolutionize Federal IT

DevOps, more than any other agile way of thinking, can cause dramatic change in how the government does IT. There is a lot to talk about here, but in this post I’ll try to present a simple line of thought that will give you an idea of what I mean.

First, we need a hypothesis on what the critical problem is in government IT. I have claimed in many of my speaking engagements that the problem we should be focused on is that of cycle time (technically lead time), by which I mean the time from recognizing a mission need to deploying an IT capability to address that need. Cycle times can be as long as ten years in the government, though I can’t give a meaningful statistic because as far as I know this is not something that is measured. What is reasonable? I would say something more on the order of a week or so. There is clearly room for improvement.

You might think that a more important problem to solve is waste, or possibly overspending. I propose cycle time instead because it is measurable, actionable, and includes these other problems. Cycle times are long because of waste; if we reduce cycle time it will be by eliminating waste. Is the government overspending on IT? It is hard to say, since we don’t know what the right level of spending should be. But everyone can agree that our spending should be as little as possible for the value we receive. Reducing cycle time will help achieve that goal. And it increases the government’s responsiveness and shortens feedback cycles, which result in better products.

Good, so what does DevOps contribute to achieving shorter cycle times? To keep things simple, let’s think of DevOps as a combination of three things: Lean Software Development, Continuous Delivery, and teaming or integration across development, operations, security, and other functions. Lean Software Development provides the framework and tools for reducing cycle times. Continuous Delivery is the best way we know to structure a software lifecycle to minimize waste. And cross-functional integration further reduces waste, addressing the costly “hidden factories” created by handoffs between development, operations, and security.

There are many more reasons why I believe that DevOps is the key to reforming government IT. I will address them in future posts.