7.11.2018 Gothenburg, Sweden – The old idiom goes that fixing a bug costs $100 in development, $200 in testing and $5,000 when it has reached production. It’s still the same bug, but the later it is found, the larger its impact and the more people are involved when it’s pushed back to the beginning of the pipeline. The main thing with security bugs is that the nastiest ones can leak confidential data, ruining your reputation and business and even incurring criminal liability.
We have good practices for various situations: vetting the tooling and stack, securing the application supply chain, recognising assets, applying design patterns, and so on – all the way along the pipeline, from JUnit tests and component analysis to continuous vulnerability scans, intrusion detection and incident-response exercises. We should have them implemented, and not just as dashboard widgets that blink like Las Vegas Xmas trees while nobody cares. There must also be quality gates. There should be absolutely no reason for a crappy build to make it all the way to the other end of the pipeline.
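A quality gate can be as simple as a threshold check that fails the build. The sketch below is purely illustrative – the finding format, CVE identifiers and severity budgets are made-up assumptions, not any particular scanner's output:

```python
# Hypothetical quality gate: fail the build when scan findings exceed
# per-severity budgets. Thresholds and finding format are invented for
# illustration; map them onto your real scanner's report.

THRESHOLDS = {"critical": 0, "high": 0, "medium": 5}

def gate_passes(findings, thresholds=THRESHOLDS):
    """Return True only if every severity stays within its allowed budget."""
    counts = {}
    for finding in findings:
        counts[finding["severity"]] = counts.get(finding["severity"], 0) + 1
    return all(counts.get(sev, 0) <= limit for sev, limit in thresholds.items())

scan = [
    {"id": "CVE-2018-0001", "severity": "high"},
    {"id": "CVE-2018-0002", "severity": "medium"},
]
print("PASS" if gate_passes(scan) else "FAIL")  # one high finding -> FAIL
```

Wire the exit status of a script like this into your CI stage and the crappy build stops right there instead of reaching the other end of the pipeline.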
We’ve got various tools already in our possession. If you don’t have any automated security testing in place yet, it would be a good idea to check what’s going on in this eon. Because software quality is improving and people are becoming more aware, more and more attacks happen via side channels rather than someone knocking on the front door. One should never underestimate the insider vector, unfortunately, intentional or not (hence all those mobile devices hiding our darkest secrets that get lost every day). And when you test things, test them realistically: no method is too brutal, and the bad guys won’t care that something was out of scope or meant for the next sprint. Turning on encryption for the data at rest where your secrets reside only after the fact is like buying fire insurance after your house has already burnt to the ground.
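One cheap piece of automated security testing is scanning commits for secrets before they ever land in the repository. This is a deliberately naive sketch – real scanners use far richer rule sets and entropy analysis, and the two regex rules below are just example patterns:

```python
import re

# Minimal, hypothetical secret scanner. Real tools ship hundreds of rules;
# these two patterns are illustrative examples only.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return a list of (rule_name, matched_string) pairs found in the text."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

sample = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\n'
print(scan_text(sample))  # flags the hardcoded key
```

Run it as a pre-commit hook or pipeline step and the brutal-but-realistic test happens before an attacker gets to run it for you.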
So, in short – assuming you already have tools in place – do you follow best practices and really use those great little features that can make a difference? Say we all have some kind of version control. Do you use multi-factor authentication with it? Have you thought of branch protection, signed commits, keeping the number of owners low, vetting inactive accounts automatically, using an account-per-project protocol (to limit the impact of a compromise), and separating roles and duties so that those wonderful third-party tools are not delegated way too many privileges – perhaps even whitelisting all the third parties?
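Vetting inactive accounts automatically is a good example of a "great little feature" you can script yourself. The sketch below works on hypothetical account records; in practice you would pull the login and last-activity data from your version-control host's API:

```python
from datetime import datetime, timedelta

# Hypothetical account records; a real script would fetch these from your
# version-control host's API instead of hardcoding them.
ACCOUNTS = [
    {"login": "alice", "last_active": datetime(2018, 11, 1)},
    {"login": "bob",   "last_active": datetime(2018, 2, 3)},
]

def stale_accounts(accounts, now, max_idle_days=90):
    """Flag accounts with no activity within the allowed idle window."""
    cutoff = now - timedelta(days=max_idle_days)
    return [a["login"] for a in accounts if a["last_active"] < cutoff]

print(stale_accounts(ACCOUNTS, now=datetime(2018, 11, 7)))  # ['bob']
```

Schedule it nightly and have it open a ticket (or revoke access outright) instead of printing, and one whole class of dormant-account risk takes care of itself.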
Yes, there’s a load to consider. Add another layer of vetting the supply chain, and we’re talking about checking signatures and certificates of packages, dockerfiles and many other things to verify their integrity. Measuring baseline compliance also means weeding obsolete algorithms and ciphers out of the shipment, not just vetting configuration or dependencies. It boils down to really simple things: don’t make a great piece of software and ruin it by pulling crap into it, configuring it poorly or failing to keep up with the latest bug and vulnerability fixes – that makes the rest of your effort on great software a bit of a waste. Tackle the easy-ish ones first. Through automation you remove the repetition and become much more productive. You don’t want to be the next Equifax. Yet always have contingency plans in place; it’s not if you get hacked but when. Plan and architect as if that stuff has already hit the fan.
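The simplest integrity check in that supply-chain layer is comparing a downloaded artifact against its published checksum. A checksum alone doesn't prove who published it – that's what signatures and certificates are for – but it's the minimal gate, sketched here with the standard library:

```python
import hashlib

def sha256_matches(data: bytes, expected_hex: str) -> bool:
    """Compare the artifact's SHA-256 digest against the published checksum."""
    return hashlib.sha256(data).hexdigest() == expected_hex.lower()

# Stand-in artifact bytes; in reality you'd read the downloaded package file
# and take expected_hex from the vendor's published checksum list.
artifact = b"example package contents"
published = hashlib.sha256(artifact).hexdigest()

print(sha256_matches(artifact, published))         # True
print(sha256_matches(b"tampered bits", published)) # False
```

Fail the build on a mismatch, and a tampered or corrupted dependency never makes it into the shipment.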
A whole lot of these things can be automated. In your pipeline, orchestration and the like, always aim for proactive measures as well as maintaining a good security posture. If you’re starting from scratch, identify the secrets that should be stored in providers or vaults thousands of miles away from your code, and automate keeping those applications and their platforms up to date. Applying least privilege and a zero-trust architecture in your design and implementation also gives you a lot of benefits. Least privilege brings the obvious wins first, but it also matters when things go awry: locking down the right things once a fire has broken out is nearly impossible and likely causes collateral damage as well.
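Keeping secrets out of the code starts with never having a hardcoded fallback. In this sketch the environment stands in for a vault client, and the variable name `DB_PASSWORD` is just an example – the point is that a missing secret fails loudly instead of silently defaulting:

```python
import os

class MissingSecret(RuntimeError):
    """Raised when a required secret has not been provisioned."""

def get_secret(name: str) -> str:
    """Fetch a secret from the environment (a stand-in for a vault client).

    Failing loudly beats silently falling back to a hardcoded default.
    """
    value = os.environ.get(name)
    if not value:
        raise MissingSecret(f"secret {name!r} is not provisioned")
    return value

# Simulate the vault/orchestrator injecting the secret at deploy time.
os.environ["DB_PASSWORD"] = "injected-by-your-vault"
print(get_secret("DB_PASSWORD"))
```

Swap the `os.environ` lookup for a call to your actual secrets provider and the application code never needs to change – or to know the secret at build time.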
Zero trust as an idea has been around a long time, but it’s a valid one for modern architecture design. In the old times there was the safe LAN and the big bad WAN out there, and people (foolishly) placed implicit trust in things running behind the front firewall. Many things in life are illusions, some positive and some negative. Trusting your LAN traffic is a negative one. The same defensive principle should be applied to software design: think defensively and assume nothing. Always assume zero trust, and authenticate and audit without exception. One common mistake is not collecting everything. What you can’t see doesn’t exist – and you certainly can’t analyse, correlate or remediate it.
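"Authenticate and audit without exception" can be made concrete in a few lines. This toy handler verifies an HMAC signature on every request and logs every outcome, accepted or rejected – the shared key, request shape and in-memory log are all simplifications for illustration (real deployments use per-client keys and centralized log storage):

```python
import hashlib
import hmac
import json

AUDIT_LOG = []  # stand-in for centralized, append-only log storage

SHARED_KEY = b"demo-key"  # illustration only; use per-client keys in reality

def handle(request: dict) -> str:
    """Zero-trust style handler: verify every call, audit every outcome."""
    body = request["body"].encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    ok = hmac.compare_digest(expected, request.get("signature", ""))
    # Log rejects as well as accepts: what you can't see doesn't exist.
    AUDIT_LOG.append(json.dumps({"body": request["body"], "accepted": ok}))
    return "accepted" if ok else "rejected"

sig = hmac.new(SHARED_KEY, b"ping", hashlib.sha256).hexdigest()
print(handle({"body": "ping", "signature": sig}))    # accepted
print(handle({"body": "ping", "signature": "bad"}))  # rejected
print(len(AUDIT_LOG))                                # 2 -- rejects logged too
```

Note the constant-time comparison via `hmac.compare_digest` – a plain `==` on signatures is a textbook timing side channel, exactly the kind of door attackers knock on when the front one is locked.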
It’s also pretty important in all your development not to reinvent the wheel. Whether you need a piece of OAuth or an API gateway, there are great implementations out there. You should not try to redo them with your limited resources, because your capability to develop, patch and support such a component throughout its life cycle is highly likely to be much less than, say, Google’s. Just follow best practices to get started well: do centralised logging, keep logs as safe as your data, and actively scan your traffic for suspicious signatures of bad actors, malware and statistical anomalies. Remember that egress traffic is just as important as ingress – just as, when writing a good JUnit test, the positive test is as important as the negative one; otherwise you’ve only covered one side of the coin. Be creative.
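A statistical anomaly check on egress traffic doesn't need to be fancy to be useful. This sketch flags per-minute egress byte counts that sit well above the mean; the traffic numbers are invented, and a production detector would use a rolling baseline rather than a single batch:

```python
import statistics

def egress_anomalies(samples, k=2.0):
    """Flag egress byte counts more than k standard deviations above the mean.

    A deliberately simple batch detector; real systems track a rolling
    baseline per host or per service.
    """
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # perfectly flat traffic: nothing stands out
    return [s for s in samples if s > mean + k * stdev]

# Hypothetical per-minute egress bytes; the spike could be data exfiltration.
traffic = [1200, 1100, 1300, 1250, 1150, 90000]
print(egress_anomalies(traffic))  # [90000]
```

Feed it real flow data and alert on the result – watching what leaves your network is how exfiltration gets caught, which is exactly why egress deserves the same attention as ingress.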
One shouldn’t get overwhelmed; there seem to be a whole lot of things, but they really boil down to very straightforward ones. And last but not least, remember that compliance with framework xyz doesn’t mean security; the bad guys don’t give a dime about standards and certifications. Aim to make your security tests and controls automated, enabling you to concentrate on more important things as well as to sleep a bit better at night.