Cybersecurity in Plain English: What is the SEC doing in Cyber?

I’ve written a blog series like this for many companies I’ve worked for; now I’m doing it on my own blog for everyone to read. Please send me questions you’d like answered via Twitter/X/whatever it’s called this week (@miketalonnyc) – I’d love to get you answers explained without the jargon!

With the new regulations going into effect over the last couple of weeks, many contacts have asked me, “What is the new SEC regulation, and what is the SEC doing in cybersecurity anyway?” The answers might surprise you – this is a major step forward in regulatory control around cybersecurity – so let’s dive in.

First things first, the obligatory disclaimer. I am not a lawyer or regulatory expert – you should definitely be speaking to one or both of those to figure out how, exactly, your organization needs to get into compliance. I’m just a cybersecurity nerd who read up on things. 

Many regulatory bodies have moved into the realm of cybersecurity over the last several years. In the European Union we saw the General Data Protection Regulation (GDPR), and in the United States we saw the implementation of regulations like the Health Insurance Portability and Accountability Act (HIPAA). Several regional governments across multiple countries have also put forward, and even enacted, their own regulations. All of these center around privacy – the ability of a user of a service to control how their data is stored, protected, and shared. The new SEC regulations (which went into effect late in 2023 and into 2024) focus on disclosure of cybersecurity incidents rather than privacy concerns, which makes them significantly different from the regulations we’ve seen before this point. Similar measures are being drafted, voted on, and even ratified across the world, so this is unlikely to be the last measure we see put into effect in the near future.

But, what do these regulations do, and how do they impact organizations? Well, first let’s define two key terms: SEC Registrant and Material Impact. An SEC Registrant is any company that is required to file disclosures, reports, and other filings with the US Securities and Exchange Commission. This includes US publicly traded companies, along with any companies preparing to become publicly traded, though there are exceptions in rare cases – such as some foreign organizations having to file reports even though they are not officially traded in the United States.

Material impact is somewhat more ambiguous, but Harvard Business School defines materiality as:

“… an accounting principle which states that all items that are reasonably likely to impact investors’ decision-making must be recorded or reported in detail in a business’s financial statements using GAAP standards.”

– https://online.hbs.edu/blog/post/what-is-materiality 

This means that any event which may cause an investor or potential investor to make a specific decision (such as investing or not investing) is considered “material” in nature. The new SEC regulations make it mandatory to disclose any cybersecurity incident that has material impact – that is, any incident which, if it became known, would cause an investor or potential investor to alter their decisions regarding the organization itself.

At their heart, the new regulations create two new reporting requirements for any SEC Registrant. First, all Registrants already file annual reports with the SEC. These are not the “Annual Report” documents sent to shareholders and prospects, but rather an official federal filing (Form 10-K) done to keep the SEC and the US Government apprised of what the organization is doing, its overall health, etc. From 2024 onwards, this filing must include details about the cybersecurity resilience of the organization including, but not limited to, which member of the board is responsible for cybersecurity, what issues and incidents have occurred, what measures are being taken to avoid incidents, etc. Most notably, the 10-K filing will have to specifically name who on the board is responsible for cybersecurity resilience, making cybersecurity a board-level discussion. As most boards are made up of brilliant business people who don’t generally have deep technical backgrounds (though there are exceptions, of course), this is a massive shift in board responsibility that we haven’t seen in the past.

Second, the regulations require that, after any cybersecurity incident that has material impact, the company must file a disclosure with the SEC. This is done as an amendment to the existing form used to disclose anything that has a material impact – Form 8-K. Such filings are routinely made any time the organization makes a change or institutes a new operational policy that might impact investor opinion and decisions, but this is the first time that the 8-K will have to be filed in the event of a cybersecurity incident. The new regulations also put a specific time constraint on when the filing must occur. Registrants must file their amended 8-K within four business days of the incident being discovered, unless law enforcement and/or the US Government explicitly blocks the filing for matters of national security or the integrity of a federal investigation.
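As a concrete illustration of that four-business-day window, here is a rough sketch in Python (my own illustration, not derived from the SEC rule text). It counts forward business days by skipping weekends only; a real compliance calendar would also need to account for federal holidays:

```python
from datetime import date, timedelta

def filing_deadline(discovered: date, business_days: int = 4) -> date:
    """Rough estimate of an 8-K filing deadline: count forward the
    given number of business days from the discovery date.
    Skips weekends only - federal holidays are deliberately
    ignored in this sketch."""
    deadline = discovered
    remaining = business_days
    while remaining > 0:
        deadline += timedelta(days=1)
        if deadline.weekday() < 5:  # Monday (0) through Friday (4)
            remaining -= 1
    return deadline

# An incident discovered on a Thursday must be disclosed by the
# following Wednesday, since the weekend days don't count.
print(filing_deadline(date(2024, 3, 7)))  # Thursday -> 2024-03-13
```

The point of the sketch is simply that the clock starts at discovery, not at the attack itself, and that weekends stretch the calendar window beyond four literal days.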

Any SEC filing must be signed by stakeholders (usually high-ranking board members) who attest that the information is complete and correct to the best of their knowledge – and being purposefully ignorant of a situation is not accepted as an excuse if the signatory would have had access to the knowledge in question. Essentially, purposefully failing to file properly and/or knowingly filing a report with false information is a federal offense. This means that signatories are liable if they fail to disclose an incident, or if they report incorrect information on the state of their cybersecurity resilience. Penalties can include fines, being barred from an industry or from holding a position at a public company, or even federal charges that could result in jail time in extreme cases. In other words, there is iron in the glove when it comes to enforcement of these regulations – something business leadership has been very well aware of in other areas of SEC reporting for decades now.

The impact of these two regulations going into effect has been sweeping and even surprising overall. First and foremost, the specifics of several incidents became public knowledge due to the filing requirements – such as the gaming/casino attacks that occurred late in 2023. While organizations might otherwise downplay the impact of these incidents, or even attempt to hide an incident entirely, now the details are becoming public knowledge and impacting things like share prices and customer trust. Other incidents may come to light with the new annual report requirements, showing which companies are properly defending their organizations and which are not. Most surprisingly, ransomware groups – organized criminal operations that run coordinated attacks against high-value targets – have actually embraced the new regulations. In one now-famous incident, ALPHV/BlackCat filed a complaint with the SEC detailing MeridianLink’s failure to comply with the reporting requirements when that organization did not file an amended Form 8-K in a timely fashion ( https://www.scmagazine.com/news/hacker-group-files-sec-complaint-against-its-own-victim ). It should be noted that this complaint concerned an incident that occurred before the go-live date of the regulations, and as such did not actually trigger an SEC investigation; but it shows that threat actors will indeed weaponize this system to pressure organizations into paying ransom in order to minimize or control what information becomes public about an incident they suffer.

The new SEC regulations have been challenged, however. The United States Congress has claimed that the SEC overreached with these regulations, as such measures are in the purview of Congress and not the SEC. We’ll have to keep an eye on the ongoing debate to see if the regulations are allowed to stand, or if Congress strikes them down and renders them invalid. Even if the SEC regulations are struck down, Congress would likely pass similar measures of its own to replace them, so this story is going to stick around for a while either way.

The SEC has taken a decisive step toward mandatory reporting for cybersecurity incidents that may impact investor decisions. It is likely we will see more governments move in the same direction, given the financial fallout of the massive number of cybersecurity incidents seen in the last few years and the sheer impact those incidents have had on national and global economies. Organizations should definitely prepare for how they will meet these new regulatory requirements to remain in compliance.

Cybersecurity in Plain English: What is a Threat Actor?



A common question that I hear from both non-technical professionals and experienced cybersecurity pros is, “What’s the difference between a hacker and a threat actor?” Let’s dive into this topic and spell things out – you might be surprised that those two terms are different, though related to each other.

A hacker is simply anyone who uses a system for something other than its intended purpose. While we most commonly associate the term with people who use technology in unexpected ways, in fact just about everyone who reads this is a hacker. When you drink coffee to alter the way you transition from just waking up to fully alert, you are hacking your body by introducing a chemical that alters the way your body would naturally perform that process – one example of the phenomenon known as “bio-hacking.” Hacking – in and of itself – is not a harmful or threatening activity; it’s merely finding a different (and presumably more effective) way of doing something, using tools and techniques that aren’t explicitly designed for that purpose.

Specifically in the technology world, a hacker is someone who utilizes hardware and/or software in a way that it wasn’t designed to be used. Modern examples of hackers are researchers who attempt to subvert hardware and software defenses with the express purpose of making those systems more secure by identifying and closing security gaps. Penetration testers are also examples of hackers – using the tools and techniques which would otherwise be considered threat activity, but with the express permission and authorization of the organization being tested to identify and quantify security weaknesses. 

In short, hackers are everywhere, and primarily do what they do to prove something can be done without causing damage or disruption, to actually make systems better overall, or both. The modus operandi of a hacker is not to perform a criminal act without express permission, but rather to ensure that anyone attempting to perform a criminal act can be blocked, discovered, identified, etc.

So, if hackers break things and perform threat-like activities, how are they different from threat actors? Well, I’m not a lawyer, so I can’t speak to the legal definition, but I can speak to the practical difference: intent. Threat actors perform operations against technology with the express purpose of disrupting operations, destroying systems, stealing data, extorting an organization, etc. In other words, a threat actor differentiates themselves from a hacker by performing these actions in furtherance of a goal already considered to be a criminal act. They have no intent to make the cybersecurity resilience of an organization better. They don’t intend to advise or counsel an organization on potential or actual security gaps. They’re doing it to cause harm and/or make money illicitly – and typically for no other reason. Yes, there are threat actors who perform their operations to highlight a political issue, and there are threat actors who will falsely purport to be “helping” companies by exposing security flaws – but that is clearly and demonstrably not their goal in doing what they do. The disruption, harm, extortion, or espionage that occurs is their primary goal, and cannot be overlooked for any other factor in the threat activity itself.

Some of the earliest examples of threat actors were “phone-phreaks” who realized that by playing a specific tone into a public pay-phone, the phone would believe the user was an Operator and allow for free long-distance calls. While the tone had a legitimate purpose, that purpose was most definitely not to allow just anyone to make free calls, and therefore was being used fraudulently. This is a great way to explain the difference between a hacker and a threat actor: A hacker would recognize this could be done, then inform the phone company and provide all the evidence so the company could close the gap. A threat actor doesn’t inform the phone company, and instead performs acts of theft of services for their own benefit alone.

To sum up, threat actors are indeed a sub-set of hackers. The difference lies in intent. Hackers look to make things better – by improving a process or closing a security gap. A hacker may make money doing what they do, but they make that money as a result of services, bug bounties, or publication of research. They will also take necessary steps to ensure that intrusions are minimized, data retrieved is destroyed, etc. Threat actors have the primary goal of harming someone or something, or financially benefiting from the act alone through techniques like extortion. There will always be some gray area between these groups, as one is a sub-set of the other, but the intent of the person or group performing the operation can be used to determine which group they belong in.  

Cybersecurity in Plain English: What is Cybersecurity Resilience?



Cybersecurity resilience – the key term on just about every CIO/CISO/CSO/CTO’s mind these days. Tons of vendors say they can help with it. Regulators are beginning to demand it. Customers are expecting it. But, what is it? This is a question I’ve gotten from many readers over the last year, so let’s dive in and spell it out.

 

When we speak about resilience in the general technology world, what we’re really talking about is the ability to withstand events that would cause downtime or damage. An email server is resilient when it can continue to provide email services even if one or more servers/services go offline. SaaS technology is resilient when it can be maintained online at full or near-full capacity even if a Cloud provider has issues in one or more regions. For the most part – outside of cybersecurity – resilience is the practice that drives High Availability, Disaster Recovery, and Business Continuity operations. Stay online, or be able to get back up and online quickly.

 

In the cybersecurity world, resilience incorporates the general technical definition of the term, with the addition of the threat activity that may be encountered. This means that instead of the primary concern being uptime balanced against redundancy, we’re looking at a system’s ability to withstand an attack without allowing the attacker to gain control of the system or steal its data. As you might guess, this is a more complex operation than general technical resiliency, but the good news is that cybersecurity resilience is rated on much more of a sliding scale. Customers and regulators can easily demand that you stay within a certain level of uptime – the technology to perform that type of operation is available today at reasonable cost. Total cybersecurity resilience, however, is not possible with today’s technology (and not likely to become available in the near term), and as such it is more about being able to prove you have done what you could, rather than proving you’re bullet-proof.

 

Key components of cybersecurity resiliency are:

 

1 – Layered security methodologies: Whenever we talk about cybersecurity resilience, we’re talking about security controls that compensate for each other if one is bypassed by a novel attack. So you would perform security awareness training for employees, and implement endpoint controls (like anti-malware tools), identity solutions (like Active Directory, Okta, etc.), web gateways (firewalls, proxies, etc.), and other layers of security controls to catch and block threat activity that could slip through any one control.

 

2 – Security-by-design development protocols: If you build technology – either hardware or software – you start by building in security as a primary development metric. This is different from traditional development which primarily addressed security as part of late-stage development operations. By understanding the threat landscape and building defenses into the hardware or software being developed, the likelihood of successful attack is reduced.

 

3 – Testing regularly: For any set of security controls, the only way to know that they are working (and being able to prove that they’re working) is to test them on a regular basis. This means running controlled threat activity within the production environment, and as such you may need to leverage professionals like penetration testers who know how to do that safely. 

 

4 – Tuning regularly: No cybersecurity control is “set it and forget it.” Every tool, policy, control, etc. must be reviewed on an ongoing basis to ensure that it isn’t falling behind in its primary role of defending the organization. This can be based on your testing in part 3 above, but can also include regular review of best-practice documentation from the vendors of your hardware and software. The cybersecurity threat landscape is changing all the time, so regularly tuning systems and controls to counter those threats is a necessity.

 

5 – Monitor your environments: Cybersecurity incidents happen fast, and your organization needs to know that they happened, that your controls held, or that you need to take immediate action to counter the threat activity. This requires monitoring the organization’s systems to make sure that if something does happen, technology and cybersecurity team members know about it fast and begin to deal with it immediately. As the tools and systems used to monitor can be complex – such as SIEM solutions and security orchestration (SOAR) platforms – this may be another area where your organization can benefit from a partner who has the expertise in-house already. 
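To make the monitoring idea concrete, here is a deliberately tiny sketch (my own illustration – the log format and threshold are assumptions, and a real SIEM adds time windows, cross-source correlation, and alert routing on top of this) that counts failed logins per source address and flags any source crossing a threshold:

```python
from collections import Counter

def flag_brute_force(log_lines, threshold=5):
    """Count 'failed login' events per source IP and return the
    sources that meet or exceed the threshold. The log format
    assumed here is '<timestamp> failed login from <ip>'."""
    failures = Counter()
    for line in log_lines:
        if "failed login" in line.lower():
            # Take the last whitespace-separated token as the source IP.
            failures[line.rsplit(" ", 1)[-1]] += 1
    return [ip for ip, count in failures.items() if count >= threshold]

logs = [
    "09:01 failed login from 203.0.113.9",
    "09:02 failed login from 203.0.113.9",
    "09:02 ok login from 198.51.100.4",
] + ["09:03 failed login from 203.0.113.9"] * 4
print(flag_brute_force(logs))  # ['203.0.113.9']
```

Even this toy version shows the core loop of monitoring: collect events, aggregate them, and surface the ones that demand a human (or automated) response quickly.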

 

6 – Document everything: While it may sound like overkill, unless it is documented, it doesn’t exist. All the layered compensating controls, security-by-design operations, testing, and tuning aren’t useful to an organization unless they’re documented, and that documentation is kept up to date. This aids in satisfying auditors and regulators, but also greatly aids the cybersecurity team if something does happen: they can quickly assess the situation based on up-to-date information about the overall security of the organization, then take action.

 

Cybersecurity resilience is less a set of strict requirements, and more about knowing that your systems and data are as defended as possible, and what you will do if those defenses fail unexpectedly. Through the six areas above, you can provide a solid measure of that resilience that can be shared with auditors, regulators, and anyone else who may need you to show your work and prove that you’re taking the necessary steps to defend your systems, data, and customers.