Who needs requirements?

On my other blog, I posted an entry on how agile approaches in a way dispense with the idea of requirements; instead a business need is translated directly into code (skipping the requirements step), with tests providing an objective way to see whether the result is acceptable.

This idea disturbs many government IT and procurement professionals. It shouldn’t.

Perhaps it will ease people's minds to think of an agile process as something like a procurement done with a Statement of Objectives. In place of system requirements, the government, throughout the course of development, presents the contractor with business needs, and the contractor is free to provide a solution without constraints. For the same reason that this is often good practice in contracting, it is also good practice in software development. I am not saying that agile procurements should be done through a Statement of Objectives (though in some cases that is a good idea); I am just pointing out the underlying similarity in concept.

One objection I hear is that without requirements, we cannot contract for services. Even if we could, how could we have a fair competition, since contractors bid on how they would address requirements? The trick here, I believe, is to distinguish between contractual requirements and system requirements. There is no rule that says that the contract or the RFP must include system requirements. Of course it must include some sort of requirements, and those depend on the basis for the competition. For example, if a procurement is for development services, we can state requirements for the services – required skills and experience, management approach, and so on. Or we can state requirements for the business needs to be fulfilled. Perhaps a comparison is in order: if I wanted security guard services, I could specify that the guards need to prevent people we don't trust from entering the building. The solicitation does not need to list the names of the particular people we don't trust.

A second objection is that we need the requirements to know whether the contractor or the project team has performed well. That seems to miss the point. If the requirements are satisfied but the product doesn’t meet the business need, then no one has been successful. We should gauge success by business value produced, business needs met, quality of work, customer service, and so on. Or we can judge the contractor’s success at meeting the business needs developed in the “conversations” with users. We don’t need system requirements in the solicitation to do this.

The main point to keep in mind is that better results are obtained by working directly from business needs to system development. Those better results are what we want, and we may have to change how we set up our contracts to get them. There is no conflict, as far as I can see, with the Federal Acquisition Regulation.

DevOps and FISMA, part 2

In my last post I discussed how rapid feedback cycles from production can support FISMA goals of continuous monitoring and ongoing authorization. Today I’d like to discuss FISMA compliance and DevOps from another perspective.

In order to support frequent, rapid, small deployments to production, we must ensure – no surprise – that our system is always deployable, or "potentially shippable." That means the system must always be secure, not just in production but throughout the development pipeline. With a bit of effort, the DevOps pipeline can be set up to achieve this.

I find it helpful to think of security vulnerabilities or flaws as simply a particular kind of defect. I would treat privacy flaws, accessibility flaws ("Section 508 compliance"), and other non-functional flaws the same way. I believe this is consistent with the ideas behind the Rugged DevOps movement. We want to move to a zero-defect mentality, and that includes all of these non-functional types of defects.

Clearly, then, we need to start development with a hardened system, and keep it hardened – that way it is always deployable and FISMA compliant. This, in turn, requires an automated suite of security tests (and privacy, accessibility, etc.). We can start by using a combination of automated functional tests and static code analysis that can check for typical programming errors. We can then use threat modeling and “abuser stories” to generate additional tests, perhaps adding infrastructure and network tests as well. This suite of security tests can be run as part of the build pipeline to prevent regressions and ensure deployability.
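To make this concrete, here is a minimal sketch of what a couple of "abuser story" tests might look like, written as Python tests using the requests library. The base URL, endpoint, and token are hypothetical placeholders, not anything from a real system; the point is simply that tests like these run in the build pipeline on every commit, right alongside the functional tests.

```python
# Minimal "abuser story" sketch: an unauthenticated or badly
# authenticated caller must not be able to read records.
# BASE_URL and the /cases endpoint are hypothetical placeholders.
import requests

BASE_URL = "https://test.example.gov"  # hypothetical test environment

def test_unauthenticated_request_is_rejected():
    # No Authorization header: the API should refuse the request.
    response = requests.get(f"{BASE_URL}/cases/12345", timeout=10)
    assert response.status_code in (401, 403)

def test_invalid_token_is_rejected():
    # An obviously bogus token should be refused, not silently accepted.
    headers = {"Authorization": "Bearer not-a-real-token"}
    response = requests.get(f"{BASE_URL}/cases/12345", headers=headers, timeout=10)
    assert response.status_code in (401, 403)
```

A failure here blocks the deployment just as a failing functional test would, which is exactly what keeps the system "always deployable."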

How can we start with a hardened system, when we almost always need to develop security controls, and that takes time and effort? I don't have a perfect answer, but our general strategy should be to use inherited controls – by definition, controls that are already in place when we start development. These controls may be inherited from a secure cloud environment, an ICAM (Identity, Credential, and Access Management) system that is already in place, libraries for error logging and pre-existing log analysis tools, and so on. These "plug and play" controls can be made to cover entire families of the controls described in NIST Special Publication 800-53.
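As one illustration, here is a minimal Python sketch of inheriting an access-control capability: rather than implementing authentication itself, the application delegates token validation to an ICAM service that is already in place and already authorized. The ICAM_URL and the JSON response shape are assumptions made for the sketch, not a real API.

```python
# Sketch: delegate token validation to an already-authorized ICAM
# service instead of building authentication into the application.
# ICAM_URL and the response shape are hypothetical.
from typing import Optional

import requests

ICAM_URL = "https://icam.example.gov/validate"  # hypothetical inherited service

def user_for_token(token: str) -> Optional[str]:
    """Return the user ID for a valid token, or None if it is invalid."""
    response = requests.post(ICAM_URL, json={"token": token}, timeout=5)
    if response.status_code != 200:
        return None  # invalid or expired token: deny by default
    return response.json().get("user_id")
```

The system security plan can then point to the ICAM system's existing authorization for the relevant control families instead of implementing and documenting them from scratch.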

Start hardened. Stay hardened. Build rugged.

How DevOps supports FISMA (Federal Information Security)

The DevOps model is based on rapid and constant feedback, both from the development process and from the system in production. Continuous integration, user review, and automated testing provide feedback during development; production monitoring, alerting, and user behavior provide feedback in production.

The Federal Government has been moving toward an interpretation of FISMA (the Federal Information Security Management Act) that is very much consistent with this feedback-based approach. The National Institute of Standards and Technology (NIST) publishes guidance on how agencies should implement FISMA. Its Special Publication 800-137 promotes the use of Information Security Continuous Monitoring (ISCM) and makes it the cornerstone of a new Ongoing Authorization (OA) program. A later NIST publication (June 2014), titled "Supplemental Guidance on Ongoing Authorization: Transitioning to Near Real-Time Risk Management," provides additional details. DHS and GSA have worked to create a Continuous Diagnostics and Mitigation (CDM) framework and a contract vehicle through which agencies can procure CDM services.

The core idea is that federal information systems should be continuously monitored for vulnerabilities while in production. Those vulnerabilities should be rapidly remediated, and they can also be used to "trigger" security reviews based on the agency's risk posture. In other words, we are moving from a process where security is tested and documented every few years to one based on continuous feedback from production to a team charged with remediating and improving the system. That is, in essence, a DevOps system.
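To sketch the trigger idea in Python: every finding from the production scanner is remediated, and findings above a threshold tied to the agency's risk posture also kick off a security review. The threshold, the finding format, and both actions are hypothetical stand-ins for real scanner, ticketing, and workflow integrations.

```python
# Sketch of a continuous-monitoring trigger. The two actions below are
# hypothetical stubs standing in for real ticketing and OA workflows.
RISK_THRESHOLD = 7.0  # e.g., a CVSS score reflecting the agency's risk posture

def open_remediation_ticket(finding: dict) -> None:
    print(f"remediate: {finding['id']}")  # stand-in for a ticketing integration

def trigger_security_review(finding: dict) -> None:
    print(f"review triggered by: {finding['id']}")  # stand-in for an OA workflow

def process_scan_results(findings: list) -> None:
    for finding in findings:
        open_remediation_ticket(finding)  # every finding gets remediated
        if finding["cvss_score"] >= RISK_THRESHOLD:
            trigger_security_review(finding)  # high risk triggers a reassessment

# Example run with made-up findings:
process_scan_results([
    {"id": "critical-openssl-flaw", "cvss_score": 9.8},  # triggers a review
    {"id": "minor-config-drift", "cvss_score": 3.1},     # remediated only
])
```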

The title of the NIST publication indicates that there is more here than meets the eye. The intention is to move to a “near real-time risk management” approach that is based on frequent reassessments of risks, threats, and vulnerabilities. It moves the focus of security activities from documenting that required controls have been implemented (a compliance focus) to one of responding to a changing landscape of real, emerging threats (a risk-based, dynamic focus).

DevOps provides an ideal way to implement this new security approach. Continuous monitoring for security vulnerabilities is just another type of production monitoring in the DevOps world. A rapid feedback cycle lets the DevOps team respond immediately to a newly discovered vulnerability, and since the team has already shortened cycle time and automated its deployments, the fix reaches production as quickly as possible. As an added bonus, the system in production doesn't need to be patched; instead, the source can be modified, the entire system rebuilt and deployed to a new set of VMs, and the old ones torn down.
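Here is a minimal Python sketch of that rebuild-and-redeploy sequence. Every function is a hypothetical stub standing in for a real integration (a CI build, a cloud provisioning API, a load balancer); the sketch is only meant to make the order of operations concrete.

```python
# Sketch of rebuild-and-redeploy instead of patching in place.
# All functions are hypothetical stubs for real build/cloud/LB calls.

def build_image_from_source() -> str:
    return "app-image:2014-09-15"  # stand-in for a CI build of the fixed source

def provision(image: str, count: int) -> list:
    return [f"vm-{i}-{image}" for i in range(count)]  # stand-in for a cloud API

def switch_traffic_to(vms: list) -> None:
    print(f"load balancer now targets: {vms}")  # stand-in for an LB update

def tear_down(vms: list) -> None:
    print(f"destroyed: {vms}")  # stand-in for deprovisioning

def redeploy_after_fix(old_vms: list) -> None:
    image = build_image_from_source()    # the fix lives in source control
    new_vms = provision(image, count=3)  # fresh VMs from the rebuilt image
    switch_traffic_to(new_vms)           # cut over rather than patch in place
    tear_down(old_vms)                   # the vulnerable instances are destroyed

redeploy_after_fix(old_vms=["vm-old-1", "vm-old-2", "vm-old-3"])
```

The key point of the design is that nothing is ever patched in place: the fix lives in version control, and the running instances are disposable.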

The influence can go both ways: by incorporating the ideas of triggers and business-based risk assessments, DevOps can be extended to include risk-based decision making.