What are the limits of trust in computer systems in everyday life?
In the computing world there are household names such as Bill Gates, Mark Zuckerberg and Steve Jobs. The list dwindles quickly after that, and in the security domain there are no real stars known to those outside that sphere.
However, for those working in security, one name is universally respected: Bruce Schneier. Bruce is an American cryptographer who has written best-selling books on computer security and cryptography, and he has a wide and deep knowledge of the field.
He even has a law named after him. Schneier's Law, his own pronouncement, states: "Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself cannot break. It is not even hard. What is hard is creating an algorithm that no one else can break, even after years of analysis."
So, a few years ago a user on Slashdot asked the following question: "I'm a big fan of Bruce Schneier, but just to play devil's advocate, let's say, hypothetically, that Schneier is actually in cahoots with the NSA. Who better to reinstate public trust in weakened cryptosystems? As an exercise in security that Schneier himself may find interesting, what methods are available for proving that we can trust Bruce Schneier?"
It actually is an interesting question. Could an influential cryptographer like Bruce actually be working for the government, affirming certain cryptographic algorithms and systems to be secure so that the NSA could continue to exploit them?
Like I said, it is a good question, and while I am pretty sure that Bruce is no government agent, there could indeed be lesser entities in the security world who are in cahoots with governments to validate IT products that they know to have weaknesses.
We place our trust in computer systems
If we really think about it, we place our trust in many components of modern IT systems without a moment's hesitation. How do we know the Qualcomm chip, the memory chips, the keyboard, the motherboard or the hard drive firmware in our device was not compromised before it left the factory?
The simple answer is that we can never know for sure, yet we have to be pragmatic and assume a world model in which not everyone is a nefarious actor. For the most part, those working in the IT industry are pretty straight individuals, and if mass compromise of core system components were common, we would eventually hear from a whistle-blower or uncover it in some other fashion.
That said, a good rule of thumb in security is to "trust no one". It is why MetaCompliance trains staff on issues such as tailgating and shoulder surfing, and it applies just as much to software products such as encryption, which plays such an important role today.
The only way to come close to ensuring that a piece of software contains no back doors or gaping vulnerabilities is to have independent experts audit the code. This is best practice. If you truly need to know what is going on inside a program, you have to go straight to the source code; there is simply no substitute for reading the code.
Penetration testing is the common way of probing systems, but many unintentional yet significant security problems cannot be found through pen testing alone, so source code auditing remains the technique of choice for technical testing.
Auditing code manually can be particularly effective for discovering issues such as access control problems, cryptographic weaknesses, Easter eggs, time bombs, logic bombs, backdoors, Trojans and other malicious code.
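To make that concrete, here is a deliberately contrived sketch of the kind of red flags a manual audit is meant to surface. Everything in it (names, values, the backdoor string) is invented for illustration and is not drawn from any real product:

```python
import hashlib
import hmac
from datetime import date

# Hypothetical snippet containing patterns a code auditor looks for.
BACKDOOR = "s3cret-vendor-key"          # red flag: hard-coded credential

def check_login(password: str, stored_hash: bytes) -> bool:
    if password == BACKDOOR:            # red flag: backdoor bypasses the real check
        return True
    # red flag: fast, unsalted hash for passwords (weak by design)
    candidate = hashlib.sha256(password.encode()).digest()
    return hmac.compare_digest(candidate, stored_hash)

def nightly_job(today: date) -> str:
    # red flag: date-triggered behaviour change (a classic time/logic bomb)
    if today >= date(2030, 1, 1):
        return "wipe-logs"              # malicious branch
    return "rotate-logs"                # normal behaviour
```

None of these flaws would show up in a black-box pen test that never hits the magic date or guesses the magic string; all three are obvious within seconds of reading the source.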
Of course, reputable security companies will carry out internal code audits, but to gain wider acceptance they also need to commission external code audits, which give outside validation (or not) of a product's ability to meet the expected requirements of confidentiality, integrity and availability (CIA).
It is probably unreasonable to expect vulnerability-free software, so we need to look at risk mitigation, and code audits are an excellent tool for this. Auditing popular encryption software is a must for society. A problem, however, is that there can be a lack of skilled cryptographers and programmers with the necessary background in security, mathematics and cryptographic primitives to conduct such audits.
The first lesson you learn in crypto 101 is not to roll your own crypto algorithm, yet many seasoned coders still try to do it when an industry-standard algorithm like RSA or AES would have been much more appropriate. Even when they do implement industry standards, they sometimes fail to manage memory correctly or store passwords in clear text.
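The clear-text password mistake in particular has a simple, standard remedy: a salted, iterated hash built on a vetted primitive rather than anything home-grown. A minimal sketch using Python's standard library (the iteration count is illustrative; real deployments should follow current guidance):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative only; tune to current guidance

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a storable digest with PBKDF2-HMAC-SHA256 and a random salt."""
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Nothing here is invented by the developer: the primitive, the salting and the constant-time comparison are all off-the-shelf, which is precisely the point of not rolling your own.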
The popular TrueCrypt disk encryption product had its codebase subjected to a deep, methodical review by the Open Crypto Audit Project, completed in 2015. It came through pretty well: there were no backdoors, as some had previously claimed.
A lingering debate has been over the impact of open source on security. We know that open source code can be examined by attackers and defenders alike. Some experts nevertheless believe that all security products should be open source. I agree.
The "secret" should simply never lie in the obscurity of the code. Making your code open source does not of course guarantee any higher level of security, but it does push authors to write to a higher standard, and it encourages a greater number of third-party experts to examine the code for flaws.
There is, however, the counter-argument that relying on others to look for and fix security holes can lead to a false sense of security. I do not believe that argument is strong enough to override the positives.
OWASP is an open community dedicated to enabling organizations to conceive, develop, acquire, operate and maintain applications that can be trusted. It is famous for the "OWASP Top 10", which aims to educate developers, designers, managers and organizations about the consequences of the most important web application security weaknesses.
OWASP recommends that source code be made available for testing purposes. No one is claiming that developing secure software is easy, but we in the security community would be negligent if we did not call for more robust security code audits so as to eradicate more of the vulnerabilities being exploited today.
So absolute security does not exist. Unless you go to a local beach, gather some sand, extract the silicon, build an integrated circuit fabrication plant and produce your own computer components, you will need to trust third-party hardware and software components to get any work done.
What is up to us in the industry is to constantly figure out where that degree of trust should end. If we are not pragmatic, we will simply never be able to partake in an increasingly technological world.