Cybersecurity in Plain English: What Happened With LockBit?

Earlier today, a reader asked “What happened with LockBit today? They’re all over the news.” Probably a question that a lot of people have, so let’s dive in and spell it out!

First things first, who or what is LockBit? Starting life as a ransomware gang back in 2019, LockBit has been responsible for attacking the infrastructure and data systems of everything from small sole proprietorships to multinational organizations. Tactics varied, but their primary operations revolved around double-extortion ransomware, where a copy of victim data is first removed from the environment and sent to LockBit servers in the cloud, then the original data is encrypted and rendered unusable to the victim organization. This allowed LockBit to demand payment for decryption of the data, but also to threaten to make all the stolen data public if the victim org decided they didn’t want to pay for the decryption itself. In this way, LockBit had multiple avenues of extortion to bring to bear in order to get paid by the victim. More recently, LockBit branched out into Ransomware as a Service, where they would create tool-kits and host infrastructure for other criminals to use when performing ransomware attacks against victims, with LockBit getting a cut of the criminally-acquired funds.

Now on to what happened: Early in the morning of Feb 20 here in the US, a coalition of law enforcement groups led by the National Crime Agency (NCA) in the United Kingdom and the FBI in the USA struck hard at the LockBit web infrastructure. In addition to many other operations – including multiple arrests of high-ranking LockBit members in multiple countries – law enforcement took down the dark-web back-end systems and the website that drove the Ransomware as a Service platform, effectively rendering the system useless for hundreds of LockBit affiliates. The website itself was replaced with new information. First was a fairly standard notification that law enforcement agencies had seized the website and affiliated domains. As this site is where LockBit and their affiliates posted victim information if victims didn’t pay up, this was a massive blow to the organization as a whole. Shortly after, however, this placeholder notification was itself replaced with a website that looked very similar to the original LockBit leaks site, but now showing information about the group itself, its members, its operations, and links for victims to get help and assistance from law enforcement. In short, the site returned to doing what it did prior to the seizure, but now hosting information on LockBit instead of on their victims.

The operation – code-named “Cronos” – was carried out quickly and efficiently, with the entire process taking just a couple of hours from start to finish. The coordinated takedown of the web infrastructure and the arrest of LockBit leaders in multiple countries crippled the ransomware gang and their affiliates effectively – and even humorously – as LockBit’s own infrastructure was suddenly converted into a weapon against them and their affiliate network.

It should be noted that this crippling of the gang could be temporary. Not all suspected LockBit leaders were arrested, and dark-web infrastructure has a very nasty habit of being resurrected quickly somewhere else. That being said, for now, I think we can call this a total win for law enforcement and a complete loss for LockBit and their Ransomware as a Service affiliate groups. 

One can only wonder: will LockBit now be offering one year of complimentary identity protection services to their affiliates, like so many of the organizations they attacked had to do for their customers after suffering a LockBit-affiliated attack?

Cybersecurity in Plain English: What is MFA?

Multi-Factor Authentication can be confusing for those who haven’t used it regularly before, and that leads to lots of questions like “What the heck is MFA, and why should I use it?” Let’s dig into that topic and demystify something that is becoming part of our daily lives more and more often.

Multi-Factor Authentication (MFA) is exactly what it says on the tin: in order to log in, a user must satisfy challenges that revolve around more than one piece of data, information, hardware, or some other combination of factors. If you’ve ever had your bank tell you that you must put in the code they just emailed you when you go to log in, then you’ve experienced an MFA challenge – but not all such challenges are quite as visible. Simply stated, an MFA challenge requires a user to present more than one security factor before they’re allowed to access something. Keep in mind that your username and password – while being two bits of data – are actually just one factor for authentication, so it’s best to see them as a single item to keep things simple as we explore.

Primarily, factors in authentication (the process by which a system confirms you are who you say you are) are broken down into several types:

Something you know: This includes things like your username and password combo. While they are preferably unique to you, it’s entirely possible that two people have the same username/password either by accident or because your data was leaked or stolen. Security questions (“What is your mother’s maiden name,” etc.) are also considered something you know in most security contexts. 

Something you are: Biometric data is a factor used to prove who you are because it is – at least theoretically – entirely unique to you. This factor can include things like your fingerprint, specific topographical maps of your face, the pattern of blood vessels in your retina, etc. While biometric data is difficult to steal or fake, storing it brings with it privacy issues, and accurately collecting and reading it can be challenging for a lot of devices. 

Something you have: Tokens that you have physical and/or digital control of can be used to prove who you are by having you show information on or in those devices and/or present the device itself. While tokens can be stolen, when combined with other factors they can be a great way to show a system you are really you. Some tokens generate one-time passcodes using a physical key-fob or an app on your phone. Others work by generating and sending a unique code through near field communication (NFC) – like holding your phone or a smart-card near a reader. In some cases, your laptop/desktop/phone itself can be this factor – by looking at things like geo-location, software installed, networks connected to, etc. authentication systems can confirm that the machine you are using is known to be used by you alone. 

MFA is simply the use of at least two of these factor types in each login/access event. So, for example, when you log into a website, the site may ask for a username and password, and then send a one-time passcode to your phone via text message. You type the code from the phone (something you have) into the site after you put in your password (something you know) to gain access to the website as a user. Apple devices like iPhones/iPads have been using biometrics as a second factor for some time (TouchID and FaceID), and Windows has begun to use it for laptops and desktops (Windows Hello).
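As an illustration of the “something you have” factor, here is a rough sketch (in Python) of the math an authenticator app or key-fob uses to produce those rotating six-digit codes from a secret shared between your device and the service – the time-based one-time password (TOTP) scheme defined in RFC 6238. The secret below is made up for demonstration purposes; a real one is generated when you enroll the app.

import base64, hashlib, hmac, struct, time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared secret (RFC 6238 TOTP)."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // interval               # 30-second time step
    msg = struct.pack(">Q", counter)                     # counter as an 8-byte value
    digest = hmac.new(key, msg, hashlib.sha1).digest()   # HMAC of the counter with the shared key
    offset = digest[-1] & 0x0F                           # dynamic truncation per the RFC
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Example secret is made up for illustration; a real one is provisioned during enrollment.
print(totp("JBSWY3DPEHPK3PXP"))

Because your device and the service each compute the same code from the same secret and the same 30-second window, the site can verify the code without it ever traveling over email or SMS – which is part of why authenticator apps are considered stronger than texted codes.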

Why are you seeing MFA being used more and more often? MFA offers much better security than a username/password alone. Since the user must also provide some other proof they are who they say they are, it becomes significantly harder for a threat actor to gain access to things they shouldn’t be able to touch. As usernames are typically easy to figure out – most systems use your email address, which is already public information – and passwords tend to be weak and easy to guess, re-used on multiple sites, stolen outright, or any combination of the three; a username and password alone just isn’t proof you are who you say you are anymore. MFA therefore becomes necessary to allow a system to know you are who you say you are without relying solely on information that could be in the hands of anyone.

Not all MFA is created equally, of course. Email and SMS text message one-time passcodes can be problematic if a threat actor gains access to your email inbox and/or tricks your phone service provider into re-routing text messages to them instead (a technique called “SIM swapping”). While events like this are rare, they do happen, so email and text validation for MFA are better than nothing, but not the best. Authenticator apps like Microsoft Authenticator, Google Authenticator, and others make things more secure and harder for a threat actor to overcome. Biometric factors are even better, but can be difficult to use effectively. Not for the user, who just taps a finger or looks into a camera, but for the technology itself. Fingerprints can be subtly altered based on pressure against the reader. Facial recognition can be impacted by lighting, glasses, and a host of other factors. Retinal scanning requires the user to hold still and stare into a camera. Researchers and vendors have been making these things better and better over time, but they can still be tricky to deal with.

In the end, MFA is here to stay. Since usernames/passwords alone are considered nearly the same as not authenticating at all these days, more and more organizations are adopting some form of MFA to allow you to gain access to company resources safely. It doesn’t need to be difficult, however. Having an MFA challenge that just asks you to type the two-digit number on your screen into your phone is easy, fast, and effective – with Microsoft and others adopting this methodology to make life easier for users while making it much harder for threat actors. Leveraging hardware “fingerprints” like the apps you have installed and the location the device appears to be sitting at can reduce the total number of MFA challenges a user has to deal with each day. The combination of known successful defenses with evolving technologies allows MFA to better protect the organization without putting a burden on the users, allowing for better security while keeping users happy and productive.

Cybersecurity in Plain English: How Do Threat Actors Get In?

I’ve written a blog series like this for many companies I’ve worked for; now I’m doing it on my own blog for everyone to read. Please send questions you’d like answered to me via Twitter/X/whatever it’s called this week @miketalonnyc – I’d love to get you answers explained without the jargon!

A very common question I get from the field is, “How do threat actors actually get into the network in the first place?” It’s a good question, with some possibly surprising answers, so let’s talk about initial access and how threat actors take that first step.

Initial access is the term used for how a threat actor gains their first entry into a protected environment. This could be your home PC or a corporate network – whatever environment they’re eventually attempting to get access to. Generally, the point of initial access is not the end goal of the threat actor, since it’s highly unlikely they land on the machine or system they actually want to get hold of. More often, initial access happens on a user’s laptop, a web server, or an application platform instead, and the threat actor must then jump from system to system to get where they want to be. This means that by minimizing initial access points, you also minimize the ability of the threat actor to do what they want to do.

So, how do they accomplish that first step? There are quite a few different ways this can be done, but four of them stand out as being (by far) the most commonly encountered techniques. First, compromise of credentials – the threat actor gains control of legitimate usernames and passwords. Second, compromise of a vulnerable application – where a threat actor is able to exploit a flaw in the software itself. Third is coercion or trickery used to get a user to run a malicious application. Finally, there are initial access brokers that use all of the above to amass initial access that they can sell to the highest bidder.

Credential compromise is the most common technique used. Threat actors use phishing, smishing (phishing by text message), and a host of other social engineering techniques to get hold of legitimate credentials that they can use to access systems in your organization. Alternately, they could guess or discover credentials without having to phish or otherwise grab them from users directly. Methods such as exploiting weak and/or default passwords, credential stuffing, or even brute-force attacks can get them what they need if other security controls aren’t in place. Weak passwords that are too short (fewer than 8 characters), too simple (no punctuation/special characters), and/or extremely common (password123) all allow a threat actor to successfully guess them in just a few tries. Credential stuffing is trying a list of passwords from one breach against a totally different organization that shares users who may have re-used passwords. Brute-force is exactly what it sounds like – threat actors simply try password after password until they find one that works.

In all of these cases of credential compromise, layered defenses can be a huge help in defending the organization. The use of (and enforcement of the use of) multi-factor authentication (MFA) will help to block a threat actor with otherwise valid credentials from actually using them. Enforcing passwords which meet basic complexity rules such as including special characters (?, /, $, !, etc.) and requiring 12 or more characters makes it much more difficult for a threat actor to successfully guess a valid password. Blocking the most common passwords used online outright is also a great method to bring to bear. Troy Hunt (curator of HaveIBeenPwned.com [https://haveibeenpwned.com/]) has worked with many government and private entities to keep lists of the most common passwords. For example, the National Cyber Security Centre of the UK has worked with Troy to produce a list of the top 100,000 [https://www.ncsc.gov.uk/blog-post/passwords-passwords-everywhere]. Enforcing restrictions on the number of incorrect entries a user can try before they’re locked out helps derail brute-force attacks. Encouraging users to not re-use passwords by utilizing password managers helps curtail credential stuffing success.
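To make those password defenses a bit more concrete, here is a minimal sketch (in Python) of the kind of checks described above: requiring 12 or more characters, requiring at least one special character, and rejecting anything that appears on a list of commonly used or previously breached passwords. The file name and thresholds are placeholders for illustration – a real deployment would layer MFA and lockout limits on top of this.

import re

# Hypothetical local file with one known common/breached password per line,
# e.g. built from the NCSC / HaveIBeenPwned lists mentioned above.
COMMON_PASSWORDS = set()
try:
    with open("common-passwords.txt", encoding="utf-8") as handle:
        COMMON_PASSWORDS = {line.strip().lower() for line in handle}
except FileNotFoundError:
    pass  # fall back to the structural checks only

def password_problems(password: str) -> list:
    """Return the reasons a proposed password should be rejected (empty list = acceptable)."""
    problems = []
    if len(password) < 12:
        problems.append("shorter than 12 characters")
    if not re.search(r"[^A-Za-z0-9]", password):
        problems.append("contains no punctuation/special characters")
    if password.lower() in COMMON_PASSWORDS:
        problems.append("appears on a common/breached password list")
    return problems

print(password_problems("password123"))                   # fails the length and special-character checks
print(password_problems("correct?horse?battery?staple"))  # passes the structural checks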

 

Remember that most usernames are known these days. Users typically utilize their email address, or some combination of first/last name/initials, so the username is no longer a big secret. Passwords – when well-managed – are still secret, but additional controls are required to ensure that a threat actor can’t walk in the front door. Utilizing complex passwords and MFA, limiting re-use, and blocking commonly known passwords all help to keep the password itself from becoming known and/or useful to a threat actor.

 

Exploitation of a vulnerability is common in the quest to gain initial access. If a system or platform has a known vulnerability that can be exploited, then a threat actor will not have to gain credentials – they can just take control of the system or platform itself. Defenses here are two-fold. First, it is important to patch or upgrade systems with known vulnerabilities, but that’s not always a possibility. If budget doesn’t exist for upgrades, or if the patch or upgrade would significantly impact a business process, it’s unlikely that closing the vulnerability directly will be allowed. Here again, compensating controls can save the organization. If a threat actor gains control of one application, then blocking their ability to move through the network or gain control of additional applications becomes a vital step in limiting damage. Endpoint controls (on servers as well as user systems), restricting network access, and the ability to be alerted on anomalous activity all aid in catching a threat actor attempting to move from an exploited system to others in the environment. Of course, patching or upgrading is the optimal strategy and should be done whenever possible; but additional controls may be required when that patch or upgrade just cannot be applied.

Coercion and trickery are incredibly common in the threat landscape today. While forms of social engineering, they typically do not follow the same path as a phishing or smishing attack. Instead, a user may be tricked into installing malicious software that masquerades as legitimate software the business would regularly use. A common example is a threat actor taking over a mis-spelled domain for a popular software tool, and any user who accidentally goes to the mis-spelled site (which might even be performing search engine optimization to trick the user) downloads and installs the malware instead of the real software. Supply-chain compromise – where a threat actor replaces the real software with malware on the vendor’s own systems – is also a serious threat. Another common technique is the invocation of authority to coerce a user into installing malware. A threat actor may call or email pretending to be a software vendor, a bank, a government agency, or even your own IT department and pressure a user to download and install malware, spyware, or more. Of course, a user who has been doing things they should not be doing online could also be blackmailed into installing malware on company systems; but while fake attempts at this technique are common (such as “clean up software” emails that try to get a user to install something because they were “caught” on a site they shouldn’t have been on), confirmed real use of this tactic is thankfully rare. Proper security awareness training and periodic testing is the key to derailing this form of initial access attack. When users know what to look for, who to ask for help, and where to legitimately get software and updates, they are far less likely to accidentally download malware or install it under duress. Combining these methods with strong endpoint controls (like anti-malware tools) can help to ensure that the fake software is blocked from running. Last, but not least, running software updates in a lab to ensure that they are legitimate before deploying them across the organization – combined with ensuring vendors are following security best-practices – limits damage from supply-chain attacks.
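One small, concrete way defenders sometimes catch those look-alike domains is to compare a requested domain against the short list of vendor sites the organization actually uses, and flag anything that is close but not an exact match. Here is a toy sketch of that idea in Python – the domain list and similarity threshold are made up for illustration, and this is a supplement to awareness training, not a replacement for it.

import difflib

# Hypothetical list of domains the organization legitimately downloads software from.
APPROVED_DOMAINS = ["notepad-plus-plus.org", "python.org", "mozilla.org"]

def looks_like_typosquat(domain: str, threshold: float = 0.85):
    """Return the approved domain this one closely imitates, or None if it's exact or unrelated."""
    for real_domain in APPROVED_DOMAINS:
        if domain == real_domain:
            return None  # exact match: the legitimate site
        similarity = difflib.SequenceMatcher(None, domain, real_domain).ratio()
        if similarity >= threshold:
            return real_domain  # close but not exact: likely a look-alike
    return None

print(looks_like_typosquat("notepad-plus-plus.org"))  # None – the real site
print(looks_like_typosquat("notepad-plusplus.org"))   # flags the real domain it imitates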

Finally, there are entire categories of threat actor groups who perform just initial access attacks, using all of the above methods to accomplish their goals. They curate massive lists of legitimate and validated credentials, previously exploited systems that they still have active access to, automation to perform coercion and trickery on a massive scale, host and deploy malware as valid updates, etc. – but they don’t actually perform other attacks. What they do is sell that information to other threat actors who then use it to perform more extensive attacks like major data theft, ransomware, or disruptive actions. These initial access brokers make good money re-selling the access they gain to the highest bidder, allowing them to gain financial success without having to worry about extorting an actual payoff from a company. Careful monitoring of user activity and network operations to determine if there are anomalies going on can allow you to detect if credentials or systems have become compromised before that access gets sold to another threat actor. Many managed security service providers (MSSPs) can assist with that effort for those organizations who cannot do that kind of monitoring in-house.

Initial access is the first step in a sequence of events that leads to data loss, ransomware payouts, downtime and business loss, and a host of other problems for any organization. Defending against the most common forms of initial access can derail attacks before they get farther than that first step, and help keep your organization safer over time. 

Cybersecurity in Plain English: What is a SIEM?

I’ve written a blog series like this for many companies I’ve worked for; now I’m doing it on my own blog for everyone to read. Please send questions you’d like answered to me via Twitter/X/whatever it’s called this week @miketalonnyc – I’d love to get you answers explained without the jargon!

As more organizations beef up their cybersecurity resilience, many new tools and platforms become part of organizational operations. This has led to several contacts of mine asking, “What is a SIEM, and what does it do for cybersecurity, and how the heck do you pronounce it?” Let’s dive in and find out.

A Security Information and Event Management (SIEM) platform is a tool-set used to bring together information from applications, systems, services, hardware, and other operational components into one place so that all of that information can be used to try to find cybersecurity incidents taking place. Think of it as a gigantic database that pulls in data from hundreds or thousands of sources within the organization and from threat intelligence feeds. Coupled with that database, a SIEM includes systems to remove redundant information and process all the information to look for points of correlation – sequences of events that, when viewed together, indicate something is going on. As for pronunciation, there is a difference of opinion on that. The two top pronunciations are sim (as in simulation or simple) and seem (as in seemingly or seams). Either is generally accepted by the technology community.

The first step in SIEM operations is ingestion – pulling in data from multiple sources and de-duplicating it. SIEM solutions integrate with thousands of different tools and platforms to ingest data. These can include things like Active Directory and other Identity and Access Management (IAM) systems, hardware platforms, cybersecurity tools like firewalls, anti-malware, and others, Operating System and application logging systems, and quite a large number of other sources. All this information is brought into a de-duplication system that removes redundant data-points to reduce the amount of information being sent into the database for correlation. Because of the sheer number of logs and events being ingested and de-duplicated, SIEM solutions must be highly robust and capable of dealing with massive amounts of data at once. They are therefore typically cloud-based solutions that can be elastic – able to expand and use more resources as needed, but also able to contract when the extra capacity isn’t required, keeping costs down.
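To make the de-duplication step concrete, here is a minimal sketch in Python: each incoming event gets a fingerprint, and repeats of an already-seen fingerprint are dropped before anything reaches the correlation database. The event fields are invented for illustration, and real SIEM pipelines do this at enormous scale with far more nuance (time windows, near-duplicates, and so on).

import hashlib
import json

def dedupe(events):
    """Drop events that are exact repeats of ones already ingested."""
    seen = set()
    unique_events = []
    for event in events:
        # Normalize the event so field order doesn't change the fingerprint.
        fingerprint = hashlib.sha256(
            json.dumps(event, sort_keys=True).encode("utf-8")
        ).hexdigest()
        if fingerprint not in seen:
            seen.add(fingerprint)
            unique_events.append(event)
    return unique_events

# Invented sample events: the same firewall denial reported twice, plus one EDR alert.
events = [
    {"source": "firewall", "host": "web01", "action": "deny", "port": 445},
    {"source": "firewall", "host": "web01", "action": "deny", "port": 445},
    {"source": "edr", "host": "web01", "alert": "suspicious_process"},
]
print(len(dedupe(events)))  # 2 – the duplicate firewall event is dropped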

Once data is being ingested efficiently and de-duplicated, the second phase of SIEM operations – correlation – takes place. Correlation is a real-time operation which looks at the sum total of ingested information to attempt to find patterns within that data which indicate threat activity. For example, odd network behavior alone could be indicative of a threat, but could also be indicative of a user doing something unusual without malicious intent. A SIEM would attempt to correlate that network behavior with other indicators of threat activity, such as an anti-malware tool discovering software attempting to access privileged information, or an IAM system recognizing multiple attempts at privilege elevation (a user or process attempting to gain administrator access to something). Taken individually, these actions may not be malicious or even suspicious – the escalation could be a misconfigured application, and the malware could be a one-off download of something that isn’t recurring or able to impact the organization. But, when correlated together, they indicate that something is going on which is indeed suspicious at the very least.
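Here is a toy sketch of that correlation idea in Python: events are grouped by host, and a host is only flagged when several different kinds of indicators land within the same 15-minute window. The event fields, source names, and the three-indicator threshold are all invented for illustration – production correlation rules are far richer than this.

from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=15)

def correlate(events):
    """Flag hosts where several different kinds of indicators occur close together in time."""
    by_host = defaultdict(list)
    for event in events:
        by_host[event["host"]].append(event)

    alerts = []
    for host, host_events in by_host.items():
        host_events.sort(key=lambda e: e["time"])
        for i, anchor in enumerate(host_events):
            in_window = [e for e in host_events[i:] if e["time"] - anchor["time"] <= WINDOW]
            sources = {e["source"] for e in in_window}
            # One odd event is probably noise; three independent indicators looks like an incident.
            if len(sources) >= 3:
                alerts.append((host, sorted(sources)))
                break
    return alerts

# Invented example: odd network traffic, a malware detection, and privilege elevation
# attempts all hitting the same host within a few minutes of each other.
start = datetime(2024, 3, 1, 9, 0)
events = [
    {"host": "web01", "source": "netflow", "detail": "unusual outbound volume", "time": start},
    {"host": "web01", "source": "edr", "detail": "tool accessing credentials", "time": start + timedelta(minutes=4)},
    {"host": "web01", "source": "iam", "detail": "repeated privilege elevation", "time": start + timedelta(minutes=9)},
]
print(correlate(events))  # [('web01', ['edr', 'iam', 'netflow'])]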

SIEM platforms also tie into ticketing and email systems to alert IT and cybersecurity staff when correlated events indicate threat activity is going on. These staffers can then block access, undo changes, prepare incident reports, etc. Many SIEM solutions can also work with Security Orchestration, Automation, and Response (SOAR) platforms to take direct action, but these platforms currently have limitations whenever there is more than one valid action to take, making human staffers a necessary part of the process for many types of incidents. 

Overall, SIEM solutions are an invaluable resource. Through de-duplication of data, correlation of potential or actual threat activity, and the ability to alert staff and SOAR platforms in real time, SIEM solutions act as a massive force-multiplier for organizations. They allow for accurate and timely detection and response to security incidents that would not be possible via manual operations, and keep organizations safer and more resilient without creating massive amounts of staff burnout along the way. 

An Open Letter of Thanks to the Social Media Community

Recently, the company I was working for underwent a re-organization and I found myself laid off. While I hold zero ill-will toward them – and in fact will continue working with them in a different way – the experience was a shock to say the least. Of course, my experience was hardly unique, with thousands of layoffs happening across the technology world these days. Still, as anyone who has been through this can tell you, it sets you completely off-balance and off-kilter.

After taking a couple of days to get my brain back in order, the first thing I did was reach out to the communities I’m part of on various social media sites – places like The Platform Formerly Known as Twitter and LinkedIn. The response was staggeringly overwhelming, with contacts from all over the world reaching out both to check in on me and to offer assistance. Thousands of people replied, forwarded, upvoted, and otherwise amplified my post about being laid off, and dozens of companies ended up reaching out to talk to me about a position. Even for someone who has always viewed communities online as a huge strength for any organization or individual, the sheer number of things that got mobilized within hours of my post was beyond anything I could have dreamed of.

So, I wanted to say thanks. To everyone who brought me to their HR/hiring teams. To everyone who suggested a company to reach out to or a job posting I should see. To everyone who re-tweeted/re-posted my post so that it could reach more people who could potentially help. I cannot thank you enough, and consider myself in your debt. This experience has been humbling and inspiring at the same time, and it’s all because of you – each and every one of you.

The really great news is that – thanks in no small part to everything the community has done – I am indeed employed once more. I can’t say who it is just yet, but keep your eyes on my social media streams for an announcement in the coming days. This couldn’t have happened without the reach and exposure my community gave to me, and for that I am forever grateful.

Cybersecurity in Plain English: What is the SEC doing in Cyber?

I’ve written a blog series like this for many companies I’ve worked for; now I’m doing it on my own blog for everyone to read. Please send questions you’d like answered to me via Twitter/X/whatever it’s called this week @miketalonnyc – I’d love to get you answers explained without the jargon!

Because of recent regulations going into effect in the last couple of weeks, many contacts have asked me “What is the new SEC regulation, and what is the SEC doing in cybersecurity anyway?” The answers might surprise you, as this is a major step forward in regulatory control around cybersecurity, so let’s dive in.

First things first, the obligatory disclaimer. I am not a lawyer or regulatory expert – you should definitely be speaking to one or both of those to figure out how, exactly, your organization needs to get into compliance. I’m just a cybersecurity nerd who read up on things. 

Many regulatory bodies have moved into the realm of cybersecurity over the last several years. In the European Union we saw the General Data Protection Regulation (GDPR), and in the United States we saw the implementation of regulations like the Health Insurance Portability and Accountability Act (HIPAA). Several regional governments across multiple countries have also put forward and even enacted their own regulations. All of these center around privacy – the ability of a user of a service to control how their data is stored, protected, and shared. The new SEC regulations (which went into effect late in 2023 and into 2024) focus more on disclosure of cybersecurity incidents and are not focused on privacy concerns, which makes them significantly different from the regulations we’ve seen before this point. Similar measures are being drafted, voted on, and even ratified across the world, so this is unlikely to be the last such measure we see put into effect in the near future.

But, what do these regulations do, and how do they impact organizations? Well, first let’s define two key terms: SEC Registrant and Material Impact. An SEC Registrant is any company which is required to file disclosures, reports, and other filings with the US Securities and Exchange Commission. This includes US publicly traded companies, and also any companies which are preparing to become publicly traded, though there are exceptions in rare cases – such as some foreign organizations having to file reports even though they are not officially traded in the United States.  

Material impact is somewhat more ambiguous, but Harvard Business School defines materiality as:

“… an accounting principle which states that all items that are reasonably likely to impact investors’ decision-making must be recorded or reported in detail in a business’s financial statements using GAAP standards.”

– https://online.hbs.edu/blog/post/what-is-materiality 

This means that any event which may cause an investor or potential investor to make a specific decision (such as investing or not investing) is considered “material” in nature. The new SEC regulations make it mandatory to disclose any cybersecurity incident that has material impact, meaning any cybersecurity incident which – if it becomes known – would cause an investor or potential investor to alter their decisions regarding the organization itself.

At their heart, the new regulations create two new reporting requirements for any SEC Registrant. First, all Registrants must already file annual reports with the SEC. These reports are not the “Annual Report” documents that are sent to shareholders and prospects, but rather an official federal filing (Form 10-K) done to keep the SEC and the US Government apprised of what the organization is doing, its overall health, etc. From 2024 onwards, this filing must include details about the cybersecurity resilience of the organization including, but not limited to, which member of the board is responsible for cybersecurity, what issues and incidents have occurred, what measures are being taken to avoid incidents, etc. Most notably, the 10-K filings will have to specifically note who on the board is responsible for cybersecurity resilience, making cybersecurity a board-level discussion. As most boards are composed of brilliant business people who don’t generally have deep technical backgrounds (though there are exceptions, of course), this is a massive shift in board responsibility that we haven’t seen in the past.

Second, the regulations require that, after any cybersecurity incident that has material impact, the company must file a disclosure with the SEC. This is done as an amendment to the existing form used to disclose anything that has a material impact – Form 8-K. Such filings are routinely done any time the organization makes a change or institutes a new operational policy that might impact investor opinion and decisions, but this is the first time that the 8-K will have to be filed in the event of a cybersecurity incident. The new regulations also put a specific time constraint on when the filing must occur. Registrants must file their amended 8-K within four business days of determining that the incident is material, unless law enforcement and/or the US Government explicitly blocks the filing for matters of national security or the integrity of a federal investigation.

Any SEC filing must be signed by stakeholders (usually high-ranking board members) who attest that the information is complete and correct to the best of their knowledge (and being purposefully ignorant of a situation is not accepted as an excuse for not having the knowledge in question if the signatory would have had access to said knowledge). Essentially, purposefully not filing properly and/or knowingly filing a report with false information is a literal federal offense. This would mean that signatories are liable if they fail to disclose an incident, or if they report incorrect information on the state of their cybersecurity resilience. Penalties can include fines, being barred from an industry or from holding a position at a public company, or even federal charges being filed that could result in jail time in extreme incidents. In other words, there is iron in the glove when it comes to enforcement of these regulations – which business leadership have been very well aware of in other areas of SEC reporting for decades now. 

The impact of these two regulations going into effect has been sweeping and even surprising overall. First and foremost, the specifics of several incidents became public knowledge due to the filing requirements – such as the gaming/casino attacks that occurred late in 2023. While organizations might otherwise downplay the impact of these incidents, or even attempt to completely hide the incident entirely, now the details are becoming public knowledge and impacting things like share prices and customer trust. Other incidents may come to light with the new annual report regulations, showing which companies are properly defending their organizations and which are not. Most surprisingly, advanced persistent threat (APT) groups – organized criminal groups who create and run coordinated attacks against high-value targets – have actually embraced the new regulations. In one now-famous incident, ALPHV/BlackCat filed a complaint with the SEC detailing MeridianLink’s failure to comply with the reporting requirements when that organization did not file an amended Form 8-K in a timely fashion ( https://www.scmagazine.com/news/hacker-group-files-sec-complaint-against-its-own-victim ). It should be noted that this reporting was for an incident that occurred before the go-live date of the regulations, and as such did not actually trigger an SEC investigation; but it shows that threat actors will indeed weaponize this system to force organizations to pay their ransom fees in order to minimize or control what information becomes public about an incident they suffer.

The new SEC regulations have been challenged, however. The Congress of the United States of America has claimed that the SEC over-reached with the regulations, as such measures are in the purview of Congress and not the SEC. We’ll have to keep an eye on the ongoing debate to see if the regulations are allowed to stand, or if Congress strikes them and renders them invalid. Even if the SEC regulations do get struck down, it would be likely that Congress would pass their own, similar measures to replace them, so this story is going to be sticking around for a while either way.

The SEC has taken a decisive step toward mandatory reporting for cybersecurity incidents that may impact investor decisions. It is likely we will see more governments move in the same direction due to the financial impact of the massive number of cybersecurity incidents seen in the last few years; and the sheer impact that those incidents have had on national and global economic factors. Organizations should definitely prepare for how they will meet these new regulatory requirements to remain in compliance. 

Cybersecurity in Plain English: What is a Threat Actor?

I’ve written a blog series like this for many companies I’ve worked for; now I’m doing it on my own blog for everyone to read. Please send questions you’d like answered to me via Twitter/X/whatever it’s called this week @miketalonnyc – I’d love to get you answers explained without the jargon!

A common question that I hear from both non-technical professionals and experienced cybersecurity pros is, “What’s the difference between a hacker and a threat actor?” Let’s dive into this topic and spell things out – you might be surprised that those two terms are different, though related to each other.

A hacker is simply anyone who uses a system for something other than its intended purpose. While we most commonly associate the term with people who use technology in unexpected ways, in fact just about everyone who reads this is a hacker. When you drink coffee to alter the way you transition from just waking up to fully alert, you are hacking your body by introducing a chemical that alters the way your body would naturally perform that process – one example of the phenomenon known as “bio-hacking.” Hacking – in and of itself – is not a harmful or threatening activity; it’s merely finding a different (and presumably more effective) way of doing something using tools and techniques that aren’t explicitly designed for that purpose.

Specifically in the technology world, a hacker is someone who utilizes hardware and/or software in a way that it wasn’t designed to be used. Modern examples of hackers are researchers who attempt to subvert hardware and software defenses with the express purpose of making those systems more secure by identifying and closing security gaps. Penetration testers are also examples of hackers – using the tools and techniques which would otherwise be considered threat activity, but with the express permission and authorization of the organization being tested to identify and quantify security weaknesses. 

In short, hackers are everywhere, and primarily do what they do either to prove something can be done without causing damage or disruption, or to actually make systems better overall. The modus operandi of a hacker is not to perform a criminal act without express permission, but rather to ensure that anyone attempting to perform a criminal act can be blocked, discovered, identified, etc.

So, if hackers break things and perform threat-like activities, how are they different from threat actors? Well, I’m not a lawyer, so I can’t speak to the legal definition, but I can speak to the practical difference: intent. Threat actors perform operations against technology with the express purpose of disrupting operations, destroying systems, stealing data, extorting an organization, etc. In other words, a threat actor differentiates themselves from a hacker because they are performing these actions in furtherance of a goal already considered to be a criminal act. They have no intent to make the cybersecurity resilience of an organization better. They don’t intend to advise or counsel an organization on potential or actual security gaps. They’re doing it to cause harm and/or make money illicitly – and typically for no other reason. Yes, there are threat actors who perform their operations to highlight a political issue, and there are threat actors who will falsely purport to be “helping” companies by exposing security flaws – but that is clearly and demonstrably not their goal in doing what they do. The disruption, harm, extortion, or espionage that occurs is their primary goal, and cannot be overlooked for any other factor in the threat activity itself.

Some of the earliest examples of threat actors were “phone-phreaks” who realized that by playing a specific tone into a public pay-phone, the phone would believe the user was an Operator and allow for free long-distance calls. While the tone had a legitimate purpose, that purpose was most definitely not to allow just anyone to make free calls, and therefore was being used fraudulently. This is a great way to explain the difference between a hacker and a threat actor: A hacker would recognize this could be done, then inform the phone company and provide all the evidence so the company could close the gap. A threat actor doesn’t inform the phone company, and instead performs acts of theft of services for their own benefit alone.

To sum up, threat actors are indeed a sub-set of hackers. The difference lies in intent. Hackers look to make things better – by improving a process or closing a security gap. A hacker may make money doing what they do, but they make that money as a result of services, bug bounties, or publication of research. They will also take necessary steps to ensure that intrusions are minimized, data retrieved is destroyed, etc. Threat actors have the primary goal of harming someone or something, or financially benefiting from the act alone through techniques like extortion. There will always be some gray area between these groups, as one is a sub-set of the other, but the intent of the person or group performing the operation can be used to determine which group they belong in.  

Cybersecurity in Plain English: What is Cybersecurity Resilience?

I’ve written a blog series like this for many companies I’ve worked for; now I’m doing it on my own blog for everyone to read. Please send questions you’d like answered to me via Twitter/X/whatever it’s called this week @miketalonnyc – I’d love to get you answers explained without the jargon!

Cybersecurity resilience – the key term on just about every CIO/CISO/CSO/CTO’s mind these days. Tons of vendors say they can help with it. Regulators are beginning to demand it. Customers are expecting it. But, what is it? This is a question I’ve gotten from many readers over the last year, so let’s dive in and spell it out.

 

When we speak about resilience in the general technology world, what we’re really talking about is the ability to withstand events that would cause downtime or damage. An email server is resilient when it can continue to provide email services even if one or more servers/services go offline. SaaS technology is resilient when it can be maintained online at full or near-full capacity even if a Cloud provider has issues in one or more regions. For the most part – outside of cybersecurity – resilience is the practice that drives High Availability, Disaster Recovery, and Business Continuity operations. Stay online, or be able to get back up and online quickly.

 

In the cybersecurity world, resilience incorporates the general technical definition of the term with the addition of threat activity which may be encountered. This means that instead of the primary concern being uptime balanced against redundancy, we’re instead looking at the system’s ability to withstand an attack without allowing the attacker to gain control of the system or steal its data. As you might guess, this is a more complex operation than general technical resiliency, but the good news is that cybersecurity resilience is rated on much more of a sliding scale. Customers and regulators can easily demand that you maintain a certain level of uptime – the technology to perform that type of operation is available today at a reasonable cost. Total cybersecurity resilience is not something that’s possible with today’s technology (and not likely to become available in the very near-term), and as such it is more about being able to prove you have done what you could, rather than proving you’re bullet-proof.

 

Key components of cybersecurity resiliency are:

 

1 – Layered security methodologies: Whenever we talk about cybersecurity resilience, we’re talking about being able to have security controls compensate for each other if one should be bypassed by a novel attack. So you would perform security awareness training for employees, implement endpoint controls (like anti-malware tools), identity solutions (like Active Directory, Okta, etc.), web gateways (firewalls, proxies, etc.), and other layers of security controls to allow for catching and blocking threat activity that could slip through any one control. 

 

2 – Security-by-design development protocols: If you build technology – either hardware or software – you start by building in security as a primary development metric. This is different from traditional development which primarily addressed security as part of late-stage development operations. By understanding the threat landscape and building defenses into the hardware or software being developed, the likelihood of successful attack is reduced.

 

3 – Testing regularly: For any set of security controls, the only way to know that they are working (and being able to prove that they’re working) is to test them on a regular basis. This means running controlled threat activity within the production environment, and as such you may need to leverage professionals like penetration testers who know how to do that safely. 

 

4 – Tuning regularly: No cybersecurity is “set it and forget it.” Every tool, policy, control, etc. must be reviewed on an ongoing basis to ensure that it isn’t falling behind in its primary role of defending the organization. This can be based on your testing in part 3 above, but can also include regular review of best-practice documentation from the vendors of your hardware and software. The cybersecurity threat landscape is changing all the time, so regularly tuning systems and controls to counter those threats is a necessity. 

 

5 – Monitor your environments: Cybersecurity incidents happen fast, and your organization needs to know that they happened, that your controls held, or that you need to take immediate action to counter the threat activity. This requires monitoring the organization’s systems to make sure that if something does happen, technology and cybersecurity team members know about it fast and begin to deal with it immediately. As the tools and systems used to monitor can be complex – such as SIEM solutions and security orchestration (SOAR) platforms – this may be another area where your organization can benefit from a partner who has the expertise in-house already. 

 

6 – Document everything: While it may sound like overkill, unless it is documented, it doesn’t exist. So all the layered compensating controls, security-by-design operations, testing, and tuning aren’t useful to an organization unless they’re documented and that documentation is kept updated. This aids in satisfying auditors and regulators, but also greatly aids the cybersecurity team if something does happen. They can quickly assess the situation based on up-to-date information about the overall security of the organization, then take action.

 

Cybersecurity resilience is less a set of strict requirements, and more about knowing that your systems and data are as defended as possible, and what you will do if those defenses fail unexpectedly. Through the six areas above, you can provide a solid measure of that resilience that can be shared with auditors, regulators, and anyone else who may need you to show your work and prove that you’re taking the necessary steps to defend your systems, data, and customers. 

New York State Unemployment Insurance Help

Guest Post:

Pat G, a long-time friend of mine and all around wonder-woman who takes photos of BIRDS OF FREAKIN PREY, was furloughed along with many of her co-workers. After the living nightmare of trying to file for unemployment insurance here in New York State, she documented her trials and asked me to post the resulting info here so that others don’t have to go through what she went through:

Pat’s message starts here:

Please, pass this info to anyone you know in NYC trying to collect unemployment insurance.  Despite the Dept. of Labor’s efforts, the system is still backlogged and getting through is nearly impossible for many.  I was able to get through and am shocked that not one media outlet has mentioned that there IS a way to do it. 

With so many people throughout New York State filing for unemployment, the system is overwhelmed and getting through to a real life human being is near impossible.  However, there IS a way to get a claim processed and eventually get a person.  Here is my story:

My last day working was Sunday, March 15th.  Once I was let go, I immediately attempted to file for unemployment.  The last time I actually collected from them was in 2011, so I figured that all my info (including direct deposit) would still be on their website.  After numerous attempts to set it up on the Dept. of Labor website, I was prompted to call, which I did.  I was eventually able to give all my info using their automated voice system.  It took about 15 minutes.  The system then informed me that it was going to transfer me to someone who would complete the last step, which is the interview.  The phone cut off.  When I would get through, it would keep hanging up.  This went on for three days.  Finally, I clicked on the contact us link and noticed they had a Twitter feed.  There were complaints from fellow New Yorkers who had equally bad experiences.  I saw that one was actually answered with a reply saying to direct message them.  As I already have a Twitter account, I subscribed to their feed, then clicked the direct message box and left a brief explanation of my dilemma.  I got a reply a few minutes later asking for my name and telephone number, which I gave.  Less than five minutes later I got a reply saying that someone would call me.

Lo and behold, 45 minutes later, a very helpful woman did call. She patiently listened to my tale, then asked for my social security number for verification.  Apparently, the system had worked and it did record all my info.  She said that someone would call me back in two hours.  90 minutes later, I got the call and completed the interview.  I was given a number to file my first claim, which I did on Sunday, March 22nd.  As the State has temporarily waived the 7-day wait, the money was in my checking account that Tuesday.  I have not had a problem since.

Please pass this on to anyone filing for Unemployment.  Let them know the following:

1.  Do NOT file your claim online, do it over the phone.

2.  Once the automated system records all your info, a voice will tell you to hold for an agent to finish your claim.  One of two things will happen.  Either you WILL be cut off, or a voice will tell you to call back and THEN you will be cut off.

3.  When this happens, go to the NYS Department of Labor Twitter feed and leave a direct message (click the tiny envelope) [Note from Mike: It may look different in your Twitter client, so look for “Send a Private Message” or “Send a Direct Message”]

4.  When they call you back, be prepared to answer questions regarding employment, etc.  Have your bank account number ready if you choose direct deposit (which is the fastest way to get it).

Good luck.