Caroline Wong, Chief Security Strategist at Cobalt.io, a cybersecurity company, wrote an article in Forbes about 10 cybersecurity blind spots that companies should keep an eye on. These are issues that security teams don’t always see or account for, but should.
10 Cybersecurity Blind Spots
#1: Skipping the Company All-Hands Meeting
Sometimes employees skip staff meetings and company all-hands meetings because they think they have more important work to do. However, it’s critically important for employees to understand what is going on with the company and the business. Wong points out that the job of security is to protect value, and the first step in protecting value is knowing what that value is. Employees need to be informed and in tune with their business in order to protect it.
#2: Employee Terminations
When an employee is terminated, it’s important to immediately shut down their access to all work-related accounts. Ideally, automate as much of the account-termination process as possible and ensure that it covers all accounts for all employees. This can be easier said than done, but it’s important to get a process or automated solution nailed down before that employee’s access causes an unwanted breach.
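To make this concrete, here is a minimal Python sketch of an automated offboarding routine. The account systems are hypothetical placeholders; a real deployment would call each provider’s admin API, but the core idea of revoking access everywhere and surfacing any system that wasn’t covered carries over.

```python
# Minimal offboarding sketch. The account "systems" here are hypothetical
# placeholders; a real rollout would call each provider's admin API
# (identity provider, email, VPN, source control, SaaS tools, etc.).

from dataclasses import dataclass, field


@dataclass
class AccountSystem:
    name: str
    active_users: set = field(default_factory=set)

    def revoke(self, user: str) -> bool:
        """Disable the user's account; return True if access was actually removed."""
        if user in self.active_users:
            self.active_users.remove(user)
            return True
        return False


def offboard(user: str, systems: list[AccountSystem]) -> list[str]:
    """Revoke access everywhere and report any system where nothing was revoked,
    so gaps in coverage are surfaced instead of silently ignored."""
    untouched = []
    for system in systems:
        if not system.revoke(user):
            untouched.append(system.name)
    return untouched


if __name__ == "__main__":
    systems = [
        AccountSystem("sso", {"alice", "bob"}),
        AccountSystem("vpn", {"alice"}),
        AccountSystem("code-hosting", {"bob"}),
    ]
    leftovers = offboard("alice", systems)
    print("No account found to revoke in:", leftovers)  # ['code-hosting']
```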
#3: Third-Party Dependencies
Wong explained that any application that uses third-party software components, including open-source components, takes on the risk of potential vulnerabilities in those dependencies. These vulnerabilities should be identified, tracked, and accounted for in the same way as every other software component.
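One practical way to identify and track those vulnerabilities is to check pinned dependency versions against a public vulnerability database. The sketch below queries the OSV.dev API for a couple of illustrative packages; a real pipeline would parse the project’s lockfile and run on every build.

```python
# A small sketch of checking pinned dependencies against a vulnerability
# database. It queries the public OSV.dev API; the package list and error
# handling are simplified for illustration.

import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Return the IDs of known advisories affecting one pinned dependency."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    request = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return [vuln["id"] for vuln in result.get("vulns", [])]


if __name__ == "__main__":
    # Example pinned dependencies; in practice you would parse a lockfile.
    for name, version in [("jinja2", "2.4.1"), ("requests", "2.31.0")]:
        ids = known_vulnerabilities(name, version)
        print(f"{name}=={version}: {ids or 'no known advisories'}")
```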
#4: Service Accounts
Service accounts are used by machines, and user accounts are used by humans. The trouble with service accounts is that sometimes they have access to a lot of different systems, and their passwords aren’t always managed well. Poorly managed passwords make for easy compromise by attackers.
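As an illustration, the sketch below shows one common mitigation for service-account credentials: keep them out of source code, inject them at runtime, and flag them when they outlive a rotation policy. The environment variable name and the 90-day policy are assumptions for the example.

```python
# A sketch of keeping service-account credentials out of code and config
# files, and failing loudly if they are missing. The environment variable
# name and the rotation policy are illustrative assumptions.

import os
import sys
from datetime import datetime, timedelta, timezone

MAX_CREDENTIAL_AGE = timedelta(days=90)  # example rotation policy


def load_service_credential() -> str:
    """Read the service-account secret from the environment (injected by a
    secrets manager or orchestrator), never from source control."""
    secret = os.environ.get("REPORTING_SERVICE_TOKEN")
    if not secret:
        sys.exit("REPORTING_SERVICE_TOKEN is not set; refusing to start.")
    return secret


def credential_is_stale(rotated_at: datetime) -> bool:
    """Flag credentials that have outlived the rotation policy."""
    return datetime.now(timezone.utc) - rotated_at > MAX_CREDENTIAL_AGE


if __name__ == "__main__":
    token = load_service_credential()
    last_rotation = datetime(2024, 1, 1, tzinfo=timezone.utc)  # stored alongside the secret
    if credential_is_stale(last_rotation):
        print("Warning: service-account credential is overdue for rotation.")
```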
#5: Gaps in Testing Coverage
Every organization has a limited security budget. Because resources are allocated according to risk, some applications and systems get a lot of security testing while others get none at all. Wong points out that the trouble with no security testing, of course, is that you don’t know what you don’t know. Some periodic testing, even on a three- or five-year cycle, is better than none at all.
#6: Not Testing Legacy Code
Security tools and techniques are generally easier to apply to modern technologies than to legacy ones. However, that doesn’t mean a security program should ignore legacy applications and systems just because they are hard to test.
#7: Missing Business Logic Flaws
Wong explained that there are two types of application security problems:
- Bugs
- Flaws
Bugs are code-level mistakes, while flaws happen at the design level. The most interesting and important security findings cannot be discovered through automated means alone; entire classes of issues, particularly flaws in business logic, require human intelligence and creativity to uncover.
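A small illustrative example of the difference: the function below contains nothing a scanner would flag, yet the missing business rule that a quantity must be positive lets a customer turn a purchase into a credit. The pricing scenario is made up, but it shows why these findings take human review.

```python
# An illustrative business logic flaw. The code is syntactically fine and has
# no injection or memory-safety bug for a scanner to flag, yet the design
# lets a customer submit a negative quantity and receive a credit instead of
# a charge.

def order_total(unit_price: float, quantity: int) -> float:
    # Flaw: no rule that quantity must be positive. That rule lives in the
    # business logic, so only a human reviewer or tester is likely to catch it.
    return unit_price * quantity


def order_total_fixed(unit_price: float, quantity: int) -> float:
    if quantity <= 0:
        raise ValueError("quantity must be a positive integer")
    return unit_price * quantity


if __name__ == "__main__":
    print(order_total(25.00, -4))       # -100.0: the "buyer" gets a credit
    print(order_total_fixed(25.00, 4))  # 100.0
```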
#8: Assuming Open-Source Code is Inherently Secure
There is a school of thought that the more eyeballs there are on a piece of software, the better, including for its security. Unfortunately, this idea disregards the fact that many eyeballs aren’t necessarily responsible, accountable, skilled eyeballs, nor do many eyeballs guarantee a rigorous methodology for security testing.
#9: Not Sharing Threat Intel
Wong explained that there are certain types of security professionals who like to know secrets and keep that information to themselves. Sometimes, this can be counterproductive. When a security organization learns of new threat intelligence, it can be in the best interest of that team and the company to share the information with relevant stakeholders. If something is going on, the more people who know about it, the more proactively they can avoid or address the problem.
#10: Logging Without Monitoring
Logging is about recording what happened. Application logs should include failures like access control failures and input validation failures. These types of events can be the key to detecting malicious activity and getting ahead of it before it has a chance to cause maximum damage.
It’s all well and good for an application to record all sorts of interesting data that might make it easier for a security professional to detect when an application is under attack, but unless someone’s actually looking at those logs, there is no point in collecting them in the first place.
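As a rough illustration of pairing the two, the Python sketch below logs access control failures and attaches a handler that counts them, so repeated failures trigger an alert rather than sitting unread in a file. The threshold and the alert action are placeholders for whatever paging or SIEM integration a team actually uses.

```python
# A minimal sketch connecting logging to monitoring: security-relevant
# failures are logged, and a handler counts them so repeated failures raise
# an alert instead of sitting unread in a file. The threshold and the
# alerting action (a print here) are placeholders.

import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(name)s %(levelname)s %(message)s")
security_log = logging.getLogger("app.security")


class FailureAlertHandler(logging.Handler):
    """Counts WARNING-or-higher security events and alerts past a threshold."""

    def __init__(self, threshold: int = 3):
        super().__init__(level=logging.WARNING)
        self.threshold = threshold
        self.count = 0

    def emit(self, record: logging.LogRecord) -> None:
        self.count += 1
        if self.count >= self.threshold:
            print(f"ALERT: {self.count} security failures logged; someone should look now.")


security_log.addHandler(FailureAlertHandler(threshold=3))


def check_access(user: str, allowed: set[str]) -> bool:
    if user not in allowed:
        # Access control failure: log it so monitoring can spot the pattern.
        security_log.warning("access denied for user=%s", user)
        return False
    return True


if __name__ == "__main__":
    for user in ["mallory", "mallory", "mallory"]:
        check_access(user, allowed={"alice", "bob"})
```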
There are always so many different things to look out for when managing a cybersecurity program, but Wong urges businesses not to forget to think about these key cybersecurity blind spots.
To learn more about cybersecurity blind spots, check out the additional resources attached below.