The DevOps movement is all about efficiency in process – optimising the journey from idea to product, or to echo a previous Naimuri blog post, minimising the time from Asks-to-Gets.
Every company’s development and delivery structure is different, but generally there are seven stages behind the process: Build, Test, Package, Provision, Secure, Deploy, Maintain – with some blurred lines between these definitions (e.g. security should be a consideration for the whole lifecycle).
- Build – the exciting prospect of creating something new, or for existing products, developing new features, improving performance, and fixing bugs – all to improve the customer experience
- Test – normally in a tight feedback loop with the build stage, ensuring the quality of the product and verifying that it meets user requirements
- Package – usually done by the development team, taking the built and tested product and getting it ready to be deployed
- Provision – procuring and building the infrastructure that the systems run on in dev, test and of course production environments
- Secure – a big focus in recent times – ensuring your product, your data, and your customers’ data are kept safe
- Deploy – taking the packaged application and running it on the provisioned servers – this is the product launch, the culmination of weeks, months or years of work
- Maintain – monitoring, logging, alerting, and service discovery – the maintenance phase of your product
Traditionally, these stages have been grouped into two separate areas – development (Build, Test, Package) and operations (Provision, Secure, Deploy, Maintain); essentially one group that deals with building the software and one group that deals with running it. This structure is hugely problematic – it forces operational silos where, rather than working alongside each other, teams battle it out to get their portion of work done before offloading to someone else.
The “us-vs-them” mentality means there’s a lack of cohesion within the project, reducing the effectiveness of all teams involved and increasing the time taken for software to be deployed. It means that tickets are raised when anything is needed, rather than there being a natural channel of conversation. It means that artificial walls are built between stages of the project, with “it works for me” or “that’s not my job” being a common response to issues.
Cross-functional teams, where the development team is made up of experts across the entire software delivery cycle, aim to reduce (or eliminate) the negatives of this silo mentality. By having analysts, developers, database administrators, security engineers, testers, and operations working alongside each other, teams can focus on the product as a whole, instead of their small slice.
This structure also promotes a self-service environment, with teams being self-reliant and not needing other departments in order to do their jobs.
By pulling all of the seven stages (Build, Test, Package, Provision, Secure, Deploy, Maintain) into one team we can start to think of the product in a holistic way, understanding how all the pieces fit and how we can break them apart to improve efficiency.
For example – rather than having provisioning as a stage that’s done after the software is packaged up, provisioning is factored in from the start – with the software being developed on the same setup as it’s tested, which is the same (or very similar) to the production environment. This means the infrastructure is being tested at the same time as the application, and there are no surprises waiting in production.
Rather than logging and monitoring solutions being an add-on at the end of the process when the software is already in production, they can be developed into the application itself, allowing seamless integration across the development lifecycle – meaning teams become accustomed to these tools early on, so when something does go wrong in production there’s no learning curve.
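As a rough sketch of what “developed into the application itself” can mean, the snippet below bakes structured, machine-readable logging into application code from day one using Python’s standard logging module. The JSON field names and the service name are illustrative assumptions, not a prescription – the point is that the same log format flows through dev, test, and production.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each log record as a single JSON line, ready for a log aggregator."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

def build_logger(name="checkout-service"):  # service name is illustrative
    """Return a logger whose output is structured JSON rather than free text."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    logger.addHandler(handler)
    return logger

logger = build_logger()
logger.info("order placed")  # same structured output in every environment
```

Because the formatter lives in the codebase, it is peer reviewed and tested like any other feature, and the team is already fluent with its output long before a production incident.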
Cross-functional teams have other advantages too, such as team members understanding and adapting to each other’s roles: developers write code in a way that’s easier for QA engineers to test, and BAs define requirements through acceptance testing scripts, so test-driven development patterns can be used. This gives higher visibility and greater understanding to everyone involved, allowing people to be proactive rather than forcing them to be reactive.
Throwing a bunch of people from many dissimilar disciplines into one team sounds like a recipe for an unorganised disaster – so, how do we go about making cross-functional teams work? Well, honestly, the main ways are through attitude change and good leadership – but there are certainly DevOps methods and tools to help along the journey.
Everything Is Code
The first step is to codify as much as possible: writing manual steps down as machine-understandable code – shell scripts, configuration files, or better yet, languages made specifically for the task. This code can then be peer reviewed, tested, and stored in version control – which gives us confidence that our build, testing, backup, migration, and many other tasks work as intended.
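As a toy illustration, a manual runbook step like “copy the data directory to a timestamped backup folder” might be codified as below (shown in Python; the paths and timestamp format are assumptions). Once it’s a function, it can be reviewed, unit tested, and versioned like any other code.

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup(data_dir: str, backup_root: str) -> Path:
    """Copy data_dir into a timestamped folder under backup_root.

    A codified version of a manual runbook step, so it can be
    peer reviewed, tested, and kept in version control.
    """
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = Path(backup_root) / f"backup-{stamp}"
    shutil.copytree(data_dir, dest)  # fails loudly if dest already exists
    return dest
```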
Having everything as code opens up the door for…
Automation
Make common tasks (or even better – all tasks) happen automatically.
Most of the development pipeline can be automated: builds triggered on every commit; unit, integration, system and regression tests; release packaging; infrastructure creation; and more.
If all (or most) of that can happen automatically, without requiring much (or any) interaction, then not only can we reduce the amount of time and effort put into these tasks, we can also minimise mistakes at each stage of the pipeline.
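The idea of an automated pipeline can be sketched in a few lines – a minimal, illustrative runner (not any real CI tool’s API) that executes named stages in order and stops at the first failure, just as a CI server would on every commit:

```python
def run_pipeline(stages):
    """Run named stages in order; stop at the first failure.

    Each stage is a (name, callable) pair returning True on success -
    a toy stand-in for the build, test, package and deploy jobs a CI
    server would trigger automatically on every commit.
    """
    completed = []
    for name, step in stages:
        if not step():
            return completed, name  # stages that passed, plus the failing one
        completed.append(name)
    return completed, None  # everything passed

# Illustrative stages - real ones would invoke compilers, test runners, etc.
stages = [
    ("build", lambda: True),
    ("test", lambda: True),
    ("package", lambda: True),
]
```

Because a failed stage halts everything downstream, a mistake is caught at the earliest possible point rather than surfacing in production.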
Automation is relatively straightforward (conceptually, anyway) for software, but it’s a little less obvious for hardware. Most people think of hardware in physical terms – plugging in cables, hard drives, CPUs and so on – and it is. But by adding a layer on top of this, either through public cloud – where someone else looks after the hardware (such as Amazon’s AWS, Microsoft’s Azure, or Google Cloud) – or through private cloud – where you look after your own hardware (such as OpenStack) – your hardware becomes virtualised; your hardware basically becomes software.
Whether public or private, the crux of utilising the cloud in this way is that you get an API – a programmer-friendly way of interacting with hardware. This gives you a way of defining your infrastructure in code – either through scripts, utilising vendor tools such as CloudFormation, or with specialist Infrastructure-as-Code software such as Terraform.
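The essence of these tools can be shown with a toy example – a declared set of resources is compared with the current state, and the differences become create/delete actions. This deliberately imitates the plan step of tools like Terraform; the resource names and shapes are illustrative, not any real cloud API:

```python
def plan(desired, current):
    """Diff a desired infrastructure spec against current state.

    A toy imitation of an Infrastructure-as-Code plan step: compare
    the declared resources with what exists and emit the actions
    needed to reconcile them.
    """
    to_create = sorted(set(desired) - set(current))  # declared but missing
    to_delete = sorted(set(current) - set(desired))  # present but undeclared
    return {"create": to_create, "delete": to_delete}

# Illustrative resource names
desired = {"web-server-1", "web-server-2", "database"}
current = {"web-server-1", "old-cache"}
```

Running the same declaration repeatedly converges the environment to the declared state – which is what makes dev, test, and production environments reproducible rather than hand-built.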
All of this enables a self-service mentality, where teams have full internal access to everything they need to be able to do their jobs. If a team member has to go to another department or put in a support ticket then a cross-functional team will likely fail. But by giving teams the ability to adapt to what they need, create infrastructure on demand, and change tedious manual tasks into automated processes, teams can get on with the more important tasks that add real business value.