“100% Secure” is a term you might think had gone out of fashion, but strangely there are still plenty of people willing to make this extraordinarily bold claim about their systems. In this day and age, claiming to be 100% secure on the Internet is a bit like telling a scientist or an engineer that you have built a machine that is 100% efficient. At the very least it will be met with skepticism; at worst, with derision.
Shopping sites seem to be among the worst culprits for making this claim, presumably motivated by a need to reassure the customer. The problem is that no matter how many soothing claims a website makes, they are no guarantee that it is genuine or secure. Very often the use of SSL (Secure Sockets Layer, now superseded by TLS) is given as the justification for absolute security, but SSL does not provide complete security. It simply secures the communication channel between the client and the server with cryptography. This provides significant protection against man-in-the-middle attacks, in which an attacker who has seized control of a routing device between the client and server can intercept and modify packets of data. When data is transmitted over an unencrypted channel with no safeguards, the attacker is free to intercept sensitive information such as login credentials, or even to manipulate the client’s session and perform unauthorised actions.
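To make the limits of that protection concrete, here is a minimal sketch in Python of what a properly configured TLS client enforces: certificate verification and hostname checking, which are exactly the properties that stop an on-path attacker from impersonating the server. The commented-out connection code uses `example.com` purely as a placeholder; note that none of this says anything about whether the server behind the connection is itself secure.

```python
import socket
import ssl

# The default context enables the two checks that defeat a simple
# man-in-the-middle: the server's certificate must chain to a trusted
# CA, and it must match the hostname we asked for.
context = ssl.create_default_context()
print(context.check_hostname)                    # hostname must match the cert
print(context.verify_mode == ssl.CERT_REQUIRED)  # cert verification is mandatory

# Connecting (hostname is a placeholder) would look like this:
# with socket.create_connection(("example.com", 443)) as raw:
#     with context.wrap_socket(raw, server_hostname="example.com") as tls:
#         tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
```

All TLS guarantees is that the bytes travelling between those two endpoints are private and untampered; what the server then does with them is an entirely separate question.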
While it is very important to protect against this kind of attack, it is by no means the only threat to a website’s security. There is a whole host of other attacks that are much simpler to perform and can have a far greater impact. I will not go into great detail about these, but some of the main classes of web attack are:
- Cross-site scripting
- Cross-site request forgery
- SQL injection
- Authentication bypass
- Remote code/command execution
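As an illustration of just how simple some of these attacks are, here is a hedged sketch of SQL injection against an in-memory SQLite database. The table, the user data, and the attacker input are all made up for the example; the point is the contrast between building a query by string concatenation and using a parameterised query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Attacker-supplied input designed to rewrite the query's logic.
user_input = "' OR '1'='1"

# VULNERABLE: concatenation lets the input become part of the SQL itself,
# so the WHERE clause matches every row in the table.
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())   # returns alice's row: auth bypassed

# SAFE: a parameterised query treats the input purely as data.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)                             # [] - no user literally has that name
```

The vulnerable version hands control of the query to whoever controls the input; the safe version costs nothing extra to write, which is part of why unpatched injection flaws are so inexcusable.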
Each of these vulnerability classes comes in many different forms and can be triggered in a wide variety of ways. Each has the potential to undermine the security of multiple users, and some can undermine the security of every single user on the site. Securing communications between the client and server counts for little if the server itself suffers from devastating flaws. For example, if the client’s private data is stored in an unencrypted database on the server, that data can be exfiltrated through SQL injection, or by an attacker leveraging remote command execution to initiate a database dump.
Defending against all known technical vulnerabilities is a herculean task, especially on large and complex systems, but even if the main system is immune to every known technical problem there are still other ways it can be attacked. The server hosting a website may lack sufficient physical security, or someone who occupies a trusted position with access to the system may not really be all that trustworthy. Then of course there are the system’s external dependencies: does it update itself automatically, or otherwise depend on other systems, and if so, are those systems also secure? There is no end of possible attack scenarios, and we can even consider more exotic threats: what if someone is willing to take down the power supply to disrupt the system? What if someone launched an armed assault on the data centre to get at some juicy data? These last two scenarios are possible but extremely unlikely, and if the risk of an event is low while the measures required to prevent it are expensive, it is difficult to justify spending time and resources defending against it. So let us consider what it would take to make a system fully secure. Every single scenario would have to be taken into account, and some quantity of resources expended on reducing the risk of each event to exactly zero. Such a system would be vastly expensive and would most likely have severely degraded usability resulting from all the additional security measures. (Just imagine filling out a CAPTCHA and multiply it by, oh, let’s say 100.) It should be apparent that such a system would require an unrealistic amount of resources and would rely on its creators and users being completely infallible and absolutely trustworthy. Keeping a system secure is as much a challenge of managing human nature as it is of engineering.
If the system is never going to be totally secure, why spend time and money on security at all? Well, the best we can really hope for is to reduce the risk to acceptable levels and to have a disaster recovery plan in place in case something does go terribly wrong. Security is a risk like any other, and as with any other risk assessment we need to know at least the following:
- What are the risks?
- How likely are they to occur?
- What is the impact if they do occur?
- How expensive is it to reduce each risk to an acceptable level?
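The questions above can be turned into a rough prioritisation exercise. This is a minimal sketch with an entirely hypothetical risk register; every name and number is invented for illustration, and real figures would come from your own incident data and threat modelling. It ranks risks by expected annual loss relative to mitigation cost, which captures the "armed assault on the data centre" intuition numerically.

```python
# Hypothetical risk register: (risk, annual likelihood, impact cost,
# mitigation cost). All values are illustrative, not real estimates.
risks = [
    ("SQL injection",                 0.30,   500_000,    20_000),
    ("Insider data theft",            0.05,   800_000,    60_000),
    ("Armed raid on the data centre", 0.0001, 2_000_000, 5_000_000),
]

# Expected annual loss = likelihood x impact. Dividing by mitigation
# cost gives a crude "benefit per pound spent" ranking.
for name, likelihood, impact, cost in sorted(
        risks, key=lambda r: r[1] * r[2] / r[3], reverse=True):
    expected_loss = likelihood * impact
    print(f"{name}: expected loss {expected_loss:,.0f} "
          f"vs mitigation cost {cost:,}")
```

With these made-up numbers, fixing the injection flaw tops the list while the armed raid lands at the bottom, exactly the "most benefit for the least cost" ordering argued for below.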
Once we know the answers to these questions we can make an informed choice about where to focus our improvements. It makes sense to start with the items that give the most benefit for the least cost, and to keep making improvements as the resources become available. It would also be wise to revisit the list of risks periodically, as the worlds of security and the Internet are very dynamic and new threats can emerge overnight.
I believe that 100% security is an ideal we must strive towards within the constraints of the available resources, and without severely impeding the usability of the system. Even though perfection may be impossible to reach, the very act of trying is what keeps a system as secure as it can possibly be. The alternative is to complacently believe you are always safe while the threat landscape evolves around you; in that case you become the proverbial boiling frog, sitting and waiting for the inevitable disaster. And who really wants to do that?