As security issues at large corporations become everyday mainstream news, it is imperative that managers of development teams have a documented process for using open source components, writing code, and managing their application platforms. Here’s a short seven-point checklist to get you started.
1) Has the operating system where any hosted services run been patched appropriately?
As we know, there are many aspects to security and near-endless attack vectors. Having said that, one of the easiest things people can do to reduce the attack surface is to ensure that the OS / platform is patched to the latest level. This allows people to then focus on some of the more esoteric security challenges and put the right resources in the right places.
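As a minimal sketch of the idea, the check below compares a reported OS or kernel version against a patched baseline. The baseline value and version strings are assumptions for illustration; a real process would pull both from your platform’s advisory feed.

```python
# Illustrative sketch: compare a dotted version string against a minimum
# patched baseline. The baseline here is made up for the example.

def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '5.4.0' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def is_patched(current: str, baseline: str) -> bool:
    """True if the running version meets or exceeds the patched baseline."""
    return parse_version(current) >= parse_version(baseline)

print(is_patched("5.15.0", "5.10.0"))  # newer than baseline
print(is_patched("4.19.2", "5.10.0"))  # older, needs patching
```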
2) Who has elevated access to the production system and how is that governed?
Having elevated access to systems when it is not required increases the chances of something damaging, whether nefarious or not, impacting your software and systems.
The best strategy is always to employ the rule of least entitlement and only provision system / application accounts and people with the lowest level of entitlement required. For people, consider having no default access, with a break-glass process or an access-brokering solution to grant elevated access when it is genuinely needed.
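The default-deny, break-glass pattern described above can be sketched roughly as follows. The function names and the in-memory grant store are hypothetical; a real implementation would log, require approval, and persist grants elsewhere.

```python
# Illustrative sketch of least entitlement: every account starts with no
# access, and elevated roles must be brokered explicitly and expire.
import time

GRANTS = {}  # account -> (role, expiry timestamp); in-memory for illustration

def break_glass(account: str, role: str, ttl_seconds: int = 3600) -> None:
    """Grant a time-boxed elevated role. In a real system this would be
    approved and audited, not called directly."""
    GRANTS[account] = (role, time.time() + ttl_seconds)

def has_role(account: str, role: str) -> bool:
    """Default deny: access exists only while an unexpired grant is present."""
    grant = GRANTS.get(account)
    return grant is not None and grant[0] == role and grant[1] > time.time()

print(has_role("alice", "prod-admin"))  # False: no default access
break_glass("alice", "prod-admin", ttl_seconds=900)
print(has_role("alice", "prod-admin"))  # True: brokered, time-boxed grant
```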
Also on this topic is where you store the credentials. You should protect the keys to your kingdom. If the answer to ‘where?’ is ‘SharePoint’ or ‘Confluence’ then you really need to think about how you got to that point and how to prevent it happening again.
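One common alternative to a wiki page or checked-in file is injecting credentials into the process environment from a secret manager at deploy time. A minimal sketch, with an assumed variable name and example value:

```python
# Illustrative sketch: read credentials from the environment (injected by a
# secret manager at deploy time) rather than a wiki page or checked-in file.
import os

def get_secret(name: str) -> str:
    """Fail loudly if a required secret is missing, rather than falling
    back to a hard-coded default."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name} not provided to this process")
    return value

os.environ["DB_PASSWORD"] = "example-only"  # normally set by the platform
print(get_secret("DB_PASSWORD"))
```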
3) Has the runtime environment been assessed for vulnerabilities and exploits?
You have assessed your operating system for security issues. You should also check your runtime environment, and by this I mean everything that is a runtime dependency in your operational system.
Here you should ensure that you are not depending on things that have security issues themselves. If you are using a framework for messaging, a common data fabric, or a common login mechanism, then you should check each to ensure it is not going to introduce unnecessary risk.
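In its simplest form this is a lookup of pinned dependency versions against a vulnerability list. The advisory data and package names below are fabricated for illustration; real checks would pull from a vulnerability database or an audit tool.

```python
# Illustrative sketch: flag runtime dependencies whose pinned versions
# appear in a known-vulnerable list. Advisory data is made up.

ADVISORIES = {
    "examplemq": {"1.2.0", "1.2.1"},  # fictional messaging framework
    "examplelogin": {"0.9.0"},        # fictional login mechanism
}

def vulnerable(dependencies: dict) -> list:
    """Return (name, version) pairs that match a known advisory."""
    return [(name, ver) for name, ver in dependencies.items()
            if ver in ADVISORIES.get(name, set())]

deps = {"examplemq": "1.2.1", "examplelogin": "1.0.0"}
print(vulnerable(deps))  # [('examplemq', '1.2.1')]
```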
4) How have you tested your code for security issues?
First of all, write clean code; it’s important. Writing code which others can read, is appropriately documented (more is not better), is structured in a way which makes change easy, and is modular will mean that securing your software is a lot easier. If you need to fix an issue in code and you cannot work out the risks of doing so, then either fewer fixes will be performed or, worse, people will not make the changes at all.
Having said all that, have a good strategy for testing your code for potential security issues and, wherever possible, automate these tests and put them early in the development lifecycle. Testing and fixing the code you have written should be second nature.
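One example of a cheap automated check that can run early in the pipeline is scanning source text for obvious hard-coded credentials. The patterns below are illustrative, not exhaustive; dedicated static analysis tools go much further.

```python
# Illustrative sketch: scan source text for lines that look like
# hard-coded credentials. Patterns are examples, not a complete set.
import re

SECRET_PATTERNS = [
    re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    re.compile(r"api[_-]?key\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
]

def find_hardcoded_secrets(source: str) -> list:
    """Return every line that matches a credential-looking pattern."""
    return [line for line in source.splitlines()
            if any(p.search(line) for p in SECRET_PATTERNS)]

snippet = 'host = "db.internal"\npassword = "hunter2"\ntimeout = 30'
print(find_hardcoded_secrets(snippet))  # ['password = "hunter2"']
```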
5) How are you securing the application and client data?
This is the stuff that gets into the news: people losing other people’s data. It’s the technology equivalent of gossiping about your best friend’s secrets. You may as well tell people to hate you and never talk to you or trust you again. It’s worth considering how long it will take for other security issues to gain this same level of social stigma…
Make sure your data is safe. Store it sensibly and only expose the data you need to at the appropriate times. Try to avoid storing and using certain combinations of data that increase the possible damaging implications of a breach.
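A simple way to apply “only expose the data you need” is to whitelist the fields a consumer actually requires before a record leaves your service. The field names below are assumptions for illustration:

```python
# Illustrative sketch of data minimisation: drop fields a consumer does
# not need before the record is returned. Field names are examples.

SAFE_FIELDS = {"id", "display_name"}  # what this endpoint actually needs

def minimise(record: dict) -> dict:
    """Expose only the whitelisted fields, dropping everything else."""
    return {k: v for k, v in record.items() if k in SAFE_FIELDS}

record = {"id": 42, "display_name": "A. Client",
          "ssn": "000-00-0000", "dob": "1970-01-01"}
print(minimise(record))  # {'id': 42, 'display_name': 'A. Client'}
```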
Also, there’s no point securing your software to protect client data if the username and password for your database is easily guessed, or if you store your password in clear text in a config file you give out to people.
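For stored passwords specifically, the standard advice is to keep a salted, slow hash rather than clear text. A minimal sketch using PBKDF2 from the Python standard library (the iteration count here is illustrative; pick parameters appropriate to your hardware):

```python
# Illustrative sketch: store a salted, slow hash instead of a clear-text
# password. PBKDF2 keeps the example dependency-free.
import hashlib, hmac, os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Return (salt, digest) using PBKDF2-HMAC-SHA256."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess", salt, digest))                         # False
```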
6) How have you assessed the external dependencies for security defects?
This is probably the biggest gap in today’s development operations in terms of managing application risk. I would like to think everyone knows they should test their code. Everyone knows they should patch their OS; in fact there’s already a lot of automation to help here. Everyone knows not to give the keys away to stuff. Now I’m not saying that people are good at dealing with all these things, but I believe people know about them.
There seems to be a real blind spot when it comes to external open source components, in that far less focus is given to them. Some people believe open source components are safe because ‘many eyes’ are on the code, but a lot of people know this simply isn’t true.
You don’t ‘trust’ your own code, you know it needs checking. You don’t ‘trust’ everyone with admin access to your systems. Why would you blindly trust the open source components you bake into your systems without performing some reasonable checks first?
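One of the most basic reasonable checks is verifying that a downloaded artefact matches its published checksum before you bake it into a build. The artefact bytes below are fabricated for illustration; in practice the expected value comes from the project’s release page.

```python
# Illustrative sketch: verify a downloaded open source artefact against
# its published SHA-256 checksum before using it in a build.
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Compare the artefact's SHA-256 digest against the published value."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

artefact = b"pretend this is a release tarball"
published = hashlib.sha256(artefact).hexdigest()  # stands in for the vendor's value
print(verify_artifact(artefact, published))           # True
print(verify_artifact(b"tampered bytes", published))  # False
```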
7) How are you managing risk?
I think this is the biggest question you should be asking your team. It’s all well and good having a book of rules and guidelines on the things people should do and not do, but you need to make people responsible for managing risk.
Ask people how they manage risk. What do they consider when thinking about possible impact and likelihood? What risks are they happy to accept and put their name to? Which risks should be mitigated, and which should be addressed?
Everyone should feel responsible for risk management and, where they are not taking any action for a known risk, be happy putting their name against it, saying ‘I am OK that this risk has been managed appropriately’. If they are not, then action should be taken.
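The impact-and-likelihood thinking above can be sketched as a minimal risk register entry: a severity score and a named owner who has either accepted the risk or must act on it. The scoring scale and threshold are assumptions for illustration.

```python
# Illustrative sketch of a minimal risk register rule: a risk needs action
# if it scores above a threshold and nobody has signed off on accepting it.
# The 1-5 scales and threshold value are assumptions.

def severity(impact: int, likelihood: int) -> int:
    """Simple impact x likelihood score on a 1-25 scale."""
    return impact * likelihood

def needs_action(impact: int, likelihood: int, accepted_by: str = None,
                 threshold: int = 12) -> bool:
    """True when the risk is severe and no named owner has accepted it."""
    return severity(impact, likelihood) >= threshold and accepted_by is None

print(needs_action(impact=4, likelihood=4))                     # True: unowned, high
print(needs_action(impact=4, likelihood=4, accepted_by="Sam"))  # False: signed off
print(needs_action(impact=2, likelihood=2))                     # False: low severity
```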
All the questions mentioned above build towards what some people refer to as “defense in depth”: protecting all the layers one can traverse to find a possible security issue in a system. Doing this makes any effort to take advantage of possible security issues a lot harder.