Cybersecurity in Plain English: Exploding… Pagers?

Editor's Note: This is a developing story, and little is known about the facts surrounding the events except that they happened. The article will be updated when more information is published (if that happens). Please remember that, as with any rapidly developing story, the truth of these events may not be known for quite some time. The editor would also like to thank @SchizoDuckie and @UK_Daniel_Card for their rational and technical support in keeping the author from getting derailed and wandering into spy thriller territory.

Update: October 18, 2024 – Multiple news outlets, including NPR and CNN, are citing a US official as stating that Israel has claimed responsibility for the pager explosions.

Update 2: October 18, 2024 – Both the Taiwanese first-party manufacturer (the holder of the trademark branding for the pagers) and the Hungarian third-party manufacturing company that licensed that trademark are denying that they manufactured or sold the pagers to Hezbollah. We may have to wait a significant amount of time for investigations to sort out the truth of where these devices came from.

Update 3: October 18, 2024 – An additional wave of explosions, this time involving two-way radio devices and (possibly) solar devices, has occurred in Lebanon. While no one has claimed responsibility yet, it stands to reason that this was a second strike by the same group that detonated the pagers yesterday, presumably Israel.

Update 4: October 18, 2024 – The Guardian (a UK-based news organization) has posted a story with more detail on how this may have happened.

Original Post:

It would seem that there are quite a few things happening these last few months that create an immediate need for an explanation in plain English. Today has continued the trend, as I got bombarded by people asking “What happened in Lebanon with the exploding pagers?” Let’s dive into this topic, and hopefully I can offer some reassurance that a world-wide panic is not needed at this time.

Please note, while I have had training in some forms of chemistry in college, I am NOT an explosives or demolitions expert. The details I provide here were gleaned from hastily-performed research on the subject. This is also a longer article than usual, because the topic is complex and full of twists and turns; breaking it down into plain English is going to require a lot of words.

TL;DR version: No, your phone is not going to explode unless there was a defect in manufacturing, and even those are rare. What happened was not a cyber attack, but rather an act of war that included a digital transmission component. The devices were built to *be* bombs, not converted into bombs by some kind of software magic. Read on for details.

First, some background. On September 17th, thousands of pagers (those old-school devices that let someone send you a phone number or short text string to let you know to call them back) detonated in dozens of locations throughout Lebanon. All of the pagers (as of this moment) were being carried by members of Hezbollah, an extremist group which has carried out numerous terrorist plots and attacks over the last several decades. This link to the New York Times coverage of the event is paywalled for many, but it is one of the better sources of news and information on this particular situation: https://www.nytimes.com/live/2024/09/17/world/israel-hamas-war-news

While no one has yet claimed responsibility for the attack, it is likely that this action was carried out by Israel as part of its ongoing conflict in the region. This is one of many points of fact that are not confirmed yet, so it is only a suspicion at this time. Considering that the last operation of this scale Israel was suspected of (Stuxnet) has still never been officially acknowledged, we may not know for a very long time.

This leads to the inevitable question, “Can a pager (or any other mobile device) that I’m wearing or carrying become a bomb?”

The answer is a bit complex, but the short form is “not unless there was a defect in manufacturing,” and also that it is highly unlikely that today’s events were anywhere near that straightforward. Rather, it is much more likely the exact opposite – that bombs were fashioned into mobile devices instead of mobile devices being turned into bombs themselves. Let me walk you through that.

A bomb, at its simplest, is a massive amount of energy (usually in the form of heat and/or pressure) suddenly created but trapped inside an enclosed space. Eventually the heat and pressure exceed the ability of whatever space is containing them, and the result is a rapid dispersal of the heat, pressure, and whatever the container was made of into the immediate surrounding environment; i.e. it explodes. In this case, something caused the pagers – made of plastic with circuit boards, a battery, a small screen, and a few other components – to become the container for all that energy. When the container couldn’t hold back the energy anymore, it exploded, seriously injuring anyone nearby – including whoever was wearing the pager or had it in their pocket at the time.

Such an explosion could be caused by many different substances. We’ve seen lithium-ion batteries explode before – https://www.cnn.com/2023/03/09/tech/lithium-ion-battery-fires/index.html – but they generally release their energy as heat more slowly, causing very hot fires but not what we saw in the videos coming out of Lebanon today. On the other side of the equation, there are many compounds that do not take a large amount of space or weight to create significant explosions. I won’t list them out here, but a quick Google search will bring them up if you don’t mind that info being in your browser history.

It is also critical to point out that these were not pagers that you could buy from a local electronics store. These were encrypted devices designed to facilitate communications between members of a known terrorist organization (using the US and UK designation for Hezbollah). Therefore, they had to either have been built for that purpose, or heavily modified to suit that purpose. This becomes vitally important later in this article. 

So, what do we know as of this moment? Two things. First, that specialized pager devices which were being worn and/or carried by thousands of Hezbollah members exploded nearly simultaneously throughout Lebanon. Second, that it is unlikely to have been caused by the batteries or internal electronics of the devices themselves due to the explosions being very different from a standard lithium-ion battery fire. 

That can lead us to a set of conclusions, but this is pending additional information which may – or may not – come out later:

It is likely that an external group – potentially the Israeli security organization Mossad – managed to replace the pagers the Hezbollah members were expecting to get with devices that had been altered to include an explosive charge and a detonation system. Alternately, as noted by Reddit user UrsusArctus – https://www.reddit.com/user/UrsusArctus/ – the pagers may have been built with the capability to be remotely destroyed if they were lost or stolen, as a Hezbollah security measure. This last scenario is less likely because Hezbollah is not known to maintain that level of Operational Security (OpSec), but it is possible and should be considered.

This leaves us with pagers that contain an explosive charge on purpose (put there by either an external group or by Hezbollah themselves), and some way to trigger that charge on command. In scenario one, whoever diverted and altered the pagers would have built in the ability to trigger the explosion by sending a specific code to the device or through some other remote activation. In the scenario where the devices already had a self-destruct function, a security agency (i.e. spy group) could have found the sequence of codes or other operations which would trigger that function. On command, all of the pagers received the code to detonate, and the result is what we saw today.

What does this mean to everyone who is not a Hezbollah agent carrying a pager? Can this be done to a regular mobile phone? A laptop? My doorbell?! – in short, it’s insanely unlikely unless you’re being targeted by a state-sponsored espionage agency, and even then there is very little chance. In cybersecurity, we don’t like using terms like “never,” but this is as close to never as you’re going to get.  

The level of coordination and secrecy necessary to pull off either of the two scenarios (replacing the pagers or infiltrating the self-destruct system) is so massive that we almost never see anyone pull off this kind of attack. It has happened for espionage purposes – see https://www.securityweek.com/chinese-gov-hackers-caught-hiding-in-cisco-router-firmware/ – but it is so rare as to be close to non-existent, and certainly insanely rare for acts of war like we saw today. While it is true that Mossad has rigged exploding mobile phones in the past, each incident was one phone, given to one target by a spy or through some other means – never anything at this massive a scale.

Remember that in the first scenario, you would have to infiltrate and compromise the supply chain for the devices – a supply chain that routinely deals with a terrorist organization likely to retaliate with extreme prejudice. This would require that you control essentially everything about the supply chain, to the extent that no one at the manufacturer or the other suppliers knows you are there, because they will certainly tip off the bad guys if they figure it out.

In the second case, you would have to have had operatives in place within the terrorist group itself long enough for them to acquire access to the self-destruct systems. This is much more possible with really good spies, but still not something that your average threat actor could pull off with any level of success. Also of note, your devices would have to be rigged to explode in the first place, which I can safely assume no one reading this article has built into their iPhone. 

In both cases, it would only be possible to carry out this kind of attack because the devices were specifically built for use by the group that was targeted. These devices worked on an encrypted network, and therefore would have to be purpose-built or modified to function on that network. This allowed whoever carried out the attack to specifically target hardware and users to an incredibly precise degree. Trying to do this with commodity devices like Android phones would make it impossible to ensure that you attack those people you’re looking to attack, and them alone. Using off-the-shelf commercial devices like this also means there is a significantly higher – almost guaranteed – chance that the alterations are discovered before you can put your plan into action. So it isn’t the kind of thing that you’d see being done unless it was directly, highly, and explicitly targeted.

This is also something that can only be done once. That’s it. Now, everyone who uses covert mobile devices is going to be looking to make sure that they haven’t been tampered with; and those with self-destruct systems will disable them until they can re-secure the control systems. 

Finally, there’s no profit in this. Remember that cyber threat actors are typically in this to make money through extortion and/or resale of the data they steal. Blowing up someone’s phone doesn’t aid that goal in any way, since the device and its data are then gone. Not to mention the massive law-enforcement response that follows when people are injured or killed – or even could have been. Even for hacktivists, detonating a target will not gain them any ground, and will probably cost them quite a lot instead.

Taken together, this indicates that the attack was a state-motivated and state-sponsored act of war, and not a cybersecurity incident. Technically, it involved a cyber aspect – the devices were remotely detonated through some form of digital connectivity – but would not be classified as a cyber attack itself. This is not something that you are going to see happening frequently, and certainly not something that we’re likely to see be used as part of a cyber attack in the traditional sense. It’s also extremely unlikely that the devices were turned into bombs with just the components that would normally be part of the pager/phone/whatever. Either the devices were substituted for ones that contained an explosive charge, or the devices were built to have a self-destruct feature; they were built to be bombs, they didn’t become bombs through some technological trickery. 

So, for 99% of us, there is no real likelihood that our phones will explode without warning. Or, at least no more of a likelihood than already exists due to accidental manufacturing issues – https://www.wired.com/2017/01/why-the-samsung-galaxy-note-7-kept-exploding/ . Instead, we should maintain focus on actual cyber threats. It is far more likely that you will fall victim to a phishing or text scam, accidentally download and run malware, or do a hundred other things that do not involve explosions at all, but still cause significant damage to your personal digital systems and/or company.

Cybersecurity in Plain English: The Great Social Security Number Leak

Because of the recent news that 2.9 billion (with a B) records containing Social Security Numbers of US citizens had been stolen from a background investigation firm, lots of people have been asking me to talk about what they should do.

 

The short answer is… nothing. 

 

While this latest massive data breach is concerning to be sure, the fact that billions of Social Security Numbers were stolen is not the story. Unfortunately for all of us in the US (or who otherwise have a US Social Security Number), that data is almost certainly already known to the general public and the threat actor community. So, let’s look a little deeper at why you don’t need to be all that worried that your Social Security Number ended up in a huge data dump, again.

First, a bit about Social Security Numbers (SSNs). For those outside the USA, SSNs are numbers used to identify each US citizen in order to track a government-managed benefits program called – you guessed it – Social Security. The program is managed by the Social Security Administration and provides multiple services for citizens during their lifetime. SSNs are usually assigned shortly after a person is born, or shortly after they become a citizen if they immigrated. They are issued once and, with only a few incredibly rare exceptions, they are never changed during a person’s lifetime. So most of us living here in the USA have one that was assigned at birth and will be with us until after we die.

While these numbers were never meant to be used as any form of identification, they ended up being used for exactly that purpose over the 80+ years the system has been in active country-wide use. SSNs are used on tax forms, medical records, employment records, financial records, and just about everything else. The issue is that there are zero security controls around these numbers. Organizations that collect them are required to use reasonable and standard practices to protect the data, but the actual number is not randomized or anonymized in any way by anyone – including the agency that issues it to you.

The numbers themselves can be decoded and even guessed if you have enough information on a person – at least for SSNs issued before the Social Security Administration began randomizing assignment in 2011. Entire calculators and decoders exist, because the SSN was meant to be decoded so benefits could be routed properly. Because of this, SSNs should never be considered privileged or private information – they’re just too easy to figure out.
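To make concrete just how little secrecy an SSN carries, here is a minimal sketch of its structure in Python. The function name is invented for illustration; the rules (three-digit area number, two-digit group, four-digit serial, with a handful of never-issued values such as area 000, 666, and 900–999) reflect the publicly documented format:

```python
import re

def looks_like_valid_ssn(ssn: str) -> bool:
    """Structural check only -- an SSN is a routing identifier, not a secret."""
    m = re.fullmatch(r"(\d{3})-(\d{2})-(\d{4})", ssn)
    if not m:
        return False
    area, group, serial = (int(g) for g in m.groups())
    if area == 0 or area == 666 or area >= 900:
        return False  # never-issued area numbers
    # group 00 and serial 0000 are also never issued
    return group != 0 and serial != 0
```

Anyone can write this kind of check in a few lines, which is exactly the point: the number has a published, guessable structure and no built-in security whatsoever.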

Additionally, as with any program that’s been in existence for nearly a century now, just about any organization or agency that’s held SSNs has lost control of some or all of that data over the years. So many data breaches (both physical paper-based access and digital access) have included SSNs that – at this point – you’d be in an ultra-tiny minority if your SSN wasn’t already known to anyone who wanted to find it.

So, what to do about this breach? As I said at the top, there’s really not much to do in this case, nor is there much to worry about. The breach did include much more sensitive information that – when present all together in one place – absolutely could lead to identity fraud and other nefarious activity. Your SSN being in the data dump, on the other hand, really isn’t a big deal. Keep an eye on your credit score/reports, and be very wary of emails, text messages, or phone calls that want you to buy something, pay money, or share additional information. Always remember that the FBI, Apple, Microsoft, Google, the Sheriff’s Office, etc. won’t call you first. When in doubt, ignore the link in the email and/or hang up the phone; then manually go to the website in question and log in, or find a number to call to ask about the situation. Trust me, if any government organization or corporation needs you to do something, there will be a web page on their site or a phone number where they can tell you what they want you to do. None of them work exclusively by outbound email or phone calls.

Threat activity generated from data breaches is very real. Follow good online hygiene and be cautious with any phone calls or texts – but you should be doing that even when a massive data leak isn’t in the news. The fact that SSNs were in the latest breach doesn’t change anything, and should be the issue you’re least concerned about surrounding this ongoing problem.

Cybersecurity in Plain English: What happened with CrowdStrike?

It’s probably known to just about everyone in the world right now that on Friday, July 19, 2024, millions of computers went offline unexpectedly due to software provided by CrowdStrike – a vendor specializing in cybersecurity tools. Many have asked for a high-level explanation of what happened and why, so let’s dive into this topic. Settle in, this is going to be a long one.

Editor’s Note for Disclosure: While the author works for an organization which offers sales and services around CrowdStrike products, they also offer such sales and services for a wide variety of other EDR/XDR solutions. As such, objectivity can be preserved.

First, some background information:

 

CrowdStrike is a well-known and well-respected vendor in the cybersecurity space. They offer a large range of products and services to help businesses with everything from anti-malware defenses to forensic investigations after a cyber attack occurs. For the most part, their software works exceptionally well and their customers are typically happy with them as a company. 

Endpoint Detection and Response (EDR) is the general term for any software that looks both for known malware files on a computer and also looks at what things are actively running on a computer to attempt to determine if they may be some form of yet-unknown malware. These operations are often referred to as “signature/heuristic scanning” and “behavioral detection” respectively. While it isn’t necessary to understand the ins and outs of how this stuff works to understand what happened on Friday, CrowdStrike has a product line (Falcon XDR) which does both signature scanning and behavioral detection. 
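To make the two detection styles concrete, here is a toy sketch in Python (all names, hashes, and event types are invented for illustration, and real EDR engines are vastly more sophisticated): signature scanning compares a file's hash against known-bad definitions, while behavioral detection watches for a suspicious sequence of runtime actions.

```python
import hashlib

def signature_scan(data: bytes, known_bad_hashes: set) -> bool:
    """Signature scanning in miniature: hash the file, look it up in definitions."""
    return hashlib.sha256(data).hexdigest() in known_bad_hashes

def behavioral_flag(events: list) -> bool:
    """Behavioral detection in miniature: flag a suspicious *sequence* of actions,
    even if no individual action matches a known signature."""
    suspicious_sequence = ("open_credential_store", "read_memory", "network_send")
    it = iter(events)
    # True if the suspicious actions appear in order (as a subsequence)
    return all(any(e == step for e in it) for step in suspicious_sequence)
```

The key difference shown here is why both are needed: signature scanning only catches files already in the definitions, while behavioral detection can flag brand-new malware by what it does rather than what it is.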

EDR solutions have two forms of updates that they regularly get delivered and installed. The first type is one most of us are familiar with, application updates. This is when a vendor needs to update the EDR software itself, much like how Windows receives patches and updates. In the case of an application update, it is the software itself being updated to a new version. These updates are infrequent, and only released when required to correct a software issue or deploy a new feature-set. 

 

The second form of update is policy or definition updates (vendors use different terms for these, we will use “definition updates” for this article). Unlike application updates, definition updates do not change how the software works – they only change what the EDR knows to look for. As an example, every day there are new malicious files discovered in the world. So every day, EDR vendors prepare and send new definitions to allow their EDR to recognize and block these new threats. Definition updates happen multiple times per day for most vendors as new threat forms are discovered, analyzed, and quantified. 

The other term that was heard a lot this weekend was “kernel mode.” This can be a bit complex, but it helps if you visualize your operating system (Windows, MacOS, Linux, etc.) as a physical brick-and-mortar retail store. Most of what the store does happens in the front – customers buy things, clerks stock items, cash is received, credit cards are processed for payment. There are some things, like the counting of cash and the receiving of new stock, that are done in the back office because they are sensitive enough that extra control has to be enforced on them. In a computer operating system, user space is the front of the store where the majority of things get done. Kernel space is the back office, where only restricted and sensitive operations occur. By their nature, EDR solutions run some processes in the kernel space; since they require the ability to view, analyze, and control other software. While this allows an EDR to do what it does, it also means that errors or issues that would not create major problems if they were running in user space can create truly massive problems as they are actually running in kernel space. 
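The "front of the store vs. back office" distinction can be demonstrated with a small sketch: a crash in one user-space process is contained to that process, and everything else keeps running. In kernel space, the equivalent fault takes the whole system down. The child command here is purely illustrative.

```python
import subprocess
import sys

# A user-space "application" hits a fatal error (standing in for a bad memory read)...
child = subprocess.run(
    [sys.executable, "-c", "raise RuntimeError('bad read')"],
    capture_output=True,
)

# ...the child process died, but the parent process (and the rest of the
# system) carries on unaffected. A kernel-space fault offers no such isolation.
assert child.returncode != 0
print("rest of the system still running")
```

This isolation is exactly what kernel-mode code gives up in exchange for its power to inspect and control other software.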

OK, with all that taken care of… what happened on Friday?

Early in the morning (UTC), CrowdStrike pushed a definition update to all devices running their software on Windows operating systems. This is a process that happens many times a day, every day, and would not normally produce any kind of problem. After all, a definition update isn’t changing how the software works or anything like that. This update, however, had a flaw which set the stage for an absolute disaster.

Normally, any changes to software in an enterprise environment (like airlines, banks, etc.) would go through a process called a “staged rollout” – the update is tested in a computer lab, then rolled out to low-impact systems that won’t disrupt business if something goes wrong. Then, and only then, it goes out to all the other systems once the company is sure that it won’t cause trouble. CrowdStrike application updates go through exactly this kind of staged rollout, like any other software update. Definition updates, however, are not application updates; because of both their frequency and the nature of their data (supplying new detection methods), they are not subject to staged rollout by the customer. In fact, customers rarely even have the ability to stage definition updates themselves – the feature simply doesn’t exist in nearly all EDR platforms. Some EDR vendors do stage definition rollouts across their customer base, but within each phase of such a rollout the update still installs immediately for every customer in that phase. CrowdStrike pushed this update out to over 8 million systems in a matter of minutes.
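The staged-rollout idea can be sketched in a few lines of Python. This is a deliberately simplified illustration (the function, wave sizes, and health check are all invented, not any vendor's actual mechanism): push to a small canary wave first, and only widen the deployment if the wave stays healthy.

```python
def staged_rollout(hosts, stages=(0.01, 0.10, 1.0), healthy=lambda h: True):
    """Deploy in widening waves; halt the moment a wave reports trouble.

    Returns (number of hosts that received the update, rollout completed?).
    """
    deployed = 0
    for fraction in stages:
        target = min(len(hosts), max(deployed + 1, int(len(hosts) * fraction)))
        wave = hosts[deployed:target]
        deployed = target                       # this wave receives the update
        if not all(healthy(h) for h in wave):
            return deployed, False              # halt: remaining hosts are spared
    return deployed, True
```

With a scheme like this, a bad update that crashes its hosts would stop after the first small wave instead of reaching every machine at once – the trade-off being that protection against a new threat reaches the later waves more slowly.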

This particular definition update had a massive issue. The update itself was improperly coded, which made the software attempt to read an area of memory that didn’t exist. In user space, this problem would just cause the application to crash, with no other impact on the system. In kernel space, however, an error of this type can crash the system itself, since in kernel space the “application” is – essentially – the operating system. This meant that every machine which attempted to apply the definition update (over 8.5 million at last count) crashed immediately.
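The class of bug described – trusting data that points outside valid memory – is easiest to see in miniature. This sketch is hypothetical (not CrowdStrike's actual code) and shows the kind of bounds check that turns a wild read into a cleanly rejected input:

```python
def read_entry(buf: bytes, offset: int, length: int) -> bytes:
    """Return `length` bytes starting at `offset`, refusing reads outside the buffer."""
    if offset < 0 or length < 0 or offset + length > len(buf):
        # In kernel space, a read past the end of valid memory can crash the
        # entire machine; validating first means failing cleanly instead.
        raise ValueError("entry points outside the buffer")
    return buf[offset:offset + length]
```

In user space, skipping a check like this costs you one crashed process; in kernel space, it can cost you every machine that loads the data.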

To recover from this issue, a machine would need to be booted into Safe Mode – a special function of Windows operating systems that boots up the machine with the absolute bare minimum of stuff running. No 3rd-party applications, no non-essential Windows applications and features, etc. Once booted into Safe Mode, the offending update file could be deleted and the machine rebooted to return to normal. 

So, why did it take days to make this happen if you just had to reboot into Safe Mode and delete a file? Well, there are two reasons this was a problem:

First, Safe Mode booting has to be done manually. On every single impacted device. When we may be talking about tens of thousands of devices in some companies, just the manpower needed to manually perform this process on every single machine is staggering. 

Second, if the machine is using BitLocker (Microsoft’s disk encryption technology) – which they all absolutely should be using, and the majority were using – then a series of steps must be performed to unlock the disk that holds the Windows operating system before you can boot into Safe Mode and fix the problem. This series of steps is also very manual and time consuming, though in the days following the initial incident there were some methods discovered that could make it faster. Again, when applied against tens of thousands of devices, this will take a massive amount of people and time. 

Combined, the requirement to manually boot into Safe Mode after performing the steps to unlock the drive led to company IT teams spending 72 hours or longer undoing this bad definition update across their organizations. All the while, critical systems required to run the business and serve customers were offline entirely. That led to the situation we saw this weekend, with airlines, stores, banks, and lots of other businesses unable to do anything but move through the recovery process as quickly as they could – and it still took a long time. Of course this led to cancelled flights, no access to government and business services, slow-downs or worse in healthcare organizations, etc. These operations slowly started coming back online over the weekend, with more still being fixed as I write this on Monday.

Now that we’ve got a good handle on what went wrong, let’s answer some other common questions:

“Was this a cyber attack?” No. This was a cyber incident, but there is no evidence that it was an attack. Incidents are anything that causes an impact to a person or business, and this definitely qualifies. Attacks are purposeful, malicious actions against a person or business, and this doesn’t qualify as that. While the potential that this was threat activity cannot yet be entirely ruled out, there are no indications that any threat actor was part of this situation. No group claimed responsibility, no ransom was demanded, no data was stolen. The incident also was not targeted; impacted systems were simply whatever happened to be online when the bad update became available, which made the damage effectively random. This view may change in future as more details become available, but as of today this does not appear to be an attack.

“Why did CrowdStrike push out an update on a Friday, when there would be fewer people available to fix it?” The short answer is that definition updates are pushed several times a day, every day. This wasn’t something that was purposely pushed on a Friday specifically; it was just bad luck that the first update on Friday morning had the error in it.

“How did CrowdStrike not know this would happen? Didn’t they test the update?” We don’t know just yet. While we now know what happened, we do not yet have all the details on how it happened. It would be expected that such information will be disclosed or otherwise come to light in the coming weeks. 

“Why was only Windows impacted?” Definition updates for Windows, MacOS, and Linux are created, managed, and delivered through different channels. That is something that is common for most EDR vendors. This update was only for Windows, so only Windows systems were impacted.

“Was this a Microsoft issue?” Yes and no, but in every important way no. It was not actually Microsoft’s error, but since it only impacted Windows systems it was a Microsoft problem. Microsoft was not responsible for causing the problem, or responsible for fixing it, though they did offer whatever support and tools they could to help, and continue to do so. 

“Couldn’t companies test the update before it rolled out?” No, not in this case. The ability to stage the rollout of definition updates is not generally available in EDR solutions (CrowdStrike or other vendors) – though after this weekend, that might be changing. There are very real reasons why such features aren’t available, but with the issues we just went through, it might be time to change that policy. 

“How can we stop this from ever happening again?” The good news is that many EDR vendors stage the rollout of definition updates across their customers. So while a customer cannot stage the rollouts themselves, at least only a limited number of customers will be impacted by a bad update. No doubt CrowdStrike will be implementing this policy in the very near future. The nature and urgency of definition updates makes traditional staging methods unusable as organizations cannot delay updates for weeks as they do with Windows updates and other application updates. That being said, some method of automated staging of definition updates to specific groups of machines – while truly not optimal – might be necessary in future.

To sum up, CrowdStrike put out a definition update with an error in it, and because that definition update was loaded into a kernel-mode process, it crashed Windows. Over 8.5 million Windows machines downloaded and applied the update before the error was discovered, causing thousands of businesses to be unable to operate until the situation was corrected. That correction required manual and time-consuming operations to be performed machine by machine, so the process took (and continues to take) a significant amount of time. No data theft or destruction occurred (beyond what would normally happen during a Windows crash), no ransom was demanded, and no one beyond CrowdStrike was responsible. As such, it is highly unlikely that this was any form of cyber attack; but it was definitely a cyber incident, since a huge chunk of the business world went offline.

Cybersecurity in Plain English: A Special Snowflake Disaster

Editor’s Note: This is an emergent story, and as such there may be more information available after the date of publication. 

Many readers have been asking: “What happened with Snowflake, and why is it making the news?” Let’s dive into this situation, as it is a little more complex than many other large-scale attacks we’ve seen recently.

Snowflake is a data management and analytics service provider. What that essentially means is that when companies need to store, manage, and perform intelligence operations on massive amounts of data, Snowflake is one of the larger vendors with services that allow that to happen. According to SoCRadar [[ https://socradar.io/overview-of-the-snowflake-breach/ ]], in late May of 2024 Snowflake acknowledged that unusual activity had been observed across their platform since mid-April. While the activity indicated that something wasn’t right, the investigation didn’t find any threat activity being run against Snowflake’s systems directly. This made for a confusing period: normally, strange activity across a vendor’s networks comes with evidence that the vendor’s own systems are under attack, and here there was none.

Around the time of that disclosure, Santander Bank and Ticketmaster both reported that their data had been stolen and was being held ransom by a threat actor. These are two enormous companies, and both reporting data breach activity within days of each other is an event that doesn’t happen often. Sure enough, when both companies investigated independently, they came to the same conclusion – their data in Snowflake was what had been stolen. Many additional disclosures by both victim companies and the threat actors themselves – a group identified as UNC5537 by Mandiant [[ https://cloud.google.com/blog/topics/threat-intelligence/unc5537-snowflake-data-theft-extortion ]] – occurred over the following weeks. Most recently, AT&T disclosed that they had suffered a massive breach of their data, with over 7 million customers impacted [[ https://about.att.com/story/2024/addressing-data-set-released-on-dark-web.html ]].

So, was Snowflake compromised? Not exactly. What happened here was that Snowflake did not require that customers use Multi-Factor Authentication (MFA) for users logging into the Snowflake platform. This allowed attackers who were able to successfully get malware on user desktops/devices to grab credentials, and then use those credentials to access and steal that customer’s data in Snowflake. This was primarily done by tricking a user into installing/running an “infostealer” malware, which allowed the attacker to see keystrokes, grab saved credentials, snoop on connections, etc. All the attacker needed to do was infect one machine that was being used by an authorized Snowflake user, and they could then get access to all the data that customer stored in Snowflake. Techniques like the use of password vaults (so there would be no keystrokes to spy on) and the use of MFA (which would require the user to acknowledge a push alert or get a code on a different device) would be good defenses against this kind of attack, but Snowflake didn’t require these techniques to be in use for their customers.
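
As an aside on the MFA point: the rotating six-digit codes an authenticator app shows are not magic; they come from a published algorithm (TOTP, RFC 6238) that mixes a shared secret with the current time. Here is a minimal sketch using only the Python standard library – the secret below is the RFC’s published test value, not anything real:

```python
import hashlib
import hmac
import struct


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # "dynamic truncation": last nibble picks a window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password: HOTP over a 30-second window."""
    return hotp(secret, unix_time // step, digits)


# RFC 6238 test vector: for this secret at t=59 the 8-digit code is 94287082.
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

Because the code changes every 30 seconds and is derived from a secret that is never typed on the infected machine, a password captured by an infostealer is not, by itself, enough to log in.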

Snowflake did not – at least technically – do anything wrong. They allow customers to use MFA and other login/credential security with their service; they just didn’t mandate it. They also did not have a quick way to turn on the requirement for MFA throughout a customer organization if that customer hadn’t started out making it mandatory for all Snowflake accounts they created. This is a point of contention within the cybersecurity community, but even though it is a violation of best practices it is not something that Snowflake purposely did incorrectly. Because of this, the attacks being seen are not the direct fault of Snowflake, but rather a result of Snowflake not forcing customers to use available security measures. Keep in mind that Snowflake has been around for some time now. When they first started, MFA was not an industry standard, and customers who began working with Snowflake back then were unlikely to have enabled it.

Snowflake themselves have taken steps to address the issue. Most notably, they implemented a setting in their customer administration panel that lets an organization force the use of MFA for everyone in that company. If any users were not set up for MFA, they would need to configure it the next time they logged in. This is a good step in the right direction, but Snowflake did make a few significant errors in the way they handled the situation overall:

 – Snowflake did not enforce cybersecurity best practices by default, even for new customers. While they have been around long enough that their earlier customers may have started using the service before MFA was a standard security control, not getting those legacy customers to enable MFA was definitely a mistake. 

 – They also immediately tried to shift blame to customers who had suffered breaches. The customers in question were responsible for not implementing MFA and/or other security controls to safeguard their data, but attempting to blame the victim rarely works out in a vendor’s favor. In this case, the backlash from the security community was immediate and vocal. Especially when it came to light that there was no easy way to enable MFA across an entire customer organization, Snowflake lost the high ground quickly. 

 – That brings us to the next issue Snowflake faced: they didn’t make it easy to enable MFA. Most vendors these days provide a quick way to enforce MFA across all users at a customer, and many now make it opt-out, meaning users must use MFA unless the customer organization explicitly opts out of that feature. MFA was opt-in for Snowflake customers, even those signing up more recently, when the use of MFA was considered a best practice by the cybersecurity community at large. With no quick switch or toggle to change that, many customers found themselves scrambling to identify each user of Snowflake within their organization and turn MFA on for each, one by one. 

Snowflake, in the end, was not responsible for the breaches multiple customers fell victim to. While that is true, their handling of the situation, their attempt to blame the victims loudly and immediately, and the lack of a critical feature (enforcing MFA customer-wide) have created a situation where they are seen as at fault, even when they’re not. It’s a great case study for other service providers who may want to plan for potential negative press events before they end up having to deal with them. 

If you are a Snowflake customer, you should immediately locate and then enable the switch to enforce MFA on all user accounts. Your users can utilize either Microsoft or Google Authenticator apps, or whatever Single Sign-On/IAM systems your organization uses. 

Is Ransomware Getting Worse, or Does it Just Feel That Way?

A reader contributed a great question recently: “So many more ransomware attacks are getting talked about in the news. Is ransomware growing that quickly, or does it just seem worse than it is?” The answer is “both,” but let’s break things down.

 

According to Security Magazine, ransomware has indeed grown dramatically in the last year, with an 81% increase in attack activity. That’s certainly not good, but it may not be telling the whole story. While there’s no doubt that threat actors have increased attacks via Ransomware-as-a-Service (RaaS) and more sophisticated automation, some of what we’re seeing is an increase in the number of reported attacks compared to previous years.

 

Better automation allows threat actors to perform more attack attempts in the same amount of time than they’d be able to perform manually. Scripting and automation have increased the effectiveness of legitimate organizations in many different ways. Processes like granting a user access to an application, which would have previously taken days or a week, can now be done in seconds – safely. Stock trades that would take hours in years past are now done in seconds – also safely, usually. As legitimate businesses have embraced automation to make their organizations better, threat actors have done the same. A new exploit that would normally take weeks or months to see significant spread throughout the world can now become a major world-wide threat in hours. More attack attempts, of course, lead to more successful attacks and higher numbers of organizations compromised year over year. 

 

RaaS allows established threat actor cartels to re-package and sell attack protocols they no longer use themselves to lower-tier threat actors. This extends the life of the product (the ransomware attack), and allows the cartel to continue to make money from it for much longer periods of time. By having more threat actors use existing tools against still-unpatched systems, more organizations end up compromised.

 

Both of these factors have led to a marked increase in the total number of ransomware victim organizations over time, and that can’t be dismissed as a statistical blip. We’re facing more attacks, more often, across more industries.

 

However, it should be noted that a huge portion of the compromised organizations would not – until recently – have reported the compromise at all. Businesses have many reasons to attempt to hide the fact that they fell victim to a ransomware attack. Loss of customer trust, violation of clauses in contracts, endangering future business – all reasons companies may choose to hide that an attack took place. This isn’t new behavior, as companies would often try to gloss over or bury anything that could impact their bottom line as you would expect – we’re just now talking about impacts caused by digital disasters instead of bad accounting practices, corporate espionage, and other more traditional events. 

 

Generally, when such hidden events and setbacks would cause overall market impact or jeopardize citizens of a country or locality, government agencies create regulation to make reporting mandatory. This is not done frequently, and only occurs when burying such events would create major fallout across an entire market or for a large group of citizens. Typically, new regulations only appear after such a major impact occurs. Over the last several years, cybersecurity incidents have indeed begun to cause fallout in markets, and have impacted massive numbers of citizens through identity theft and other problems. Because of this, governments have begun to pass legislation that makes it mandatory to quickly disclose any cybersecurity incident which might have a “material impact” on markets and/or consumers. You can read more about one such regulation in a previous post here.

 

In the USA, both the Federal Government (specifically the Securities and Exchange Commission) and several State Governments (most notably New York and California) have already passed regulations which compel organizations to report incidents via public filings. The SEC, for example, requires the filing of a Form 8-K within four business days of determining that an incident has material impact, and cybersecurity incidents must also be addressed in the annual 10-K filing every public company and certain other companies must file. Since these reports are public, anyone and everyone can view them. Other US states either have regulations that are being/have been amended to cover cybersecurity incidents, or are creating new legislation to make disclosure mandatory for any companies that do business within that state or territory. The European Union and other nations/coalitions are also either strengthening reporting regulations or implementing new regulations specifically around cybersecurity incident reporting.

 

The practical upshot of this is that significantly more incidents are becoming public knowledge that would not have been publicly reported previously. Incidents that would have been “swept under the rug” in previous years are now becoming public knowledge quickly, leading to a marked uptick in the number of known attack victim organizations. While this number is certainly not enough to account for the total increase in attacks, it has most definitely increased the number of reported attacks over the last few years. The combination has led to massive increases in year-over-year ransomware reports, leading to dramatic news reporting on the problem. As the issue becomes more sensational, everyone hears about it more often and with more volume.

 

So, while it is true that the total number of ransomware attacks has increased sharply due to a combination of the rise of Ransomware-as-a-Service and the use of automation in threat actor activities, it is important to also realize some of the sensational numbers are attributable to companies being required to talk about the problem more than in the past. In total, the issue of ransomware and other cybercrime is taking a much bigger share of the public interest – which is a very good thing – but we must look at all of the factors that lead to such numbers to more fully understand what’s going on. 

Cybersecurity in Plain English: How Did They Use the Real Email Domain?

Once in a while, I get the chance to pull back the curtain on how threat activity works in this column, and a recent question “I got a fake email from Microsoft, but it was the REAL microsoft.com domain – how did they do that?” gives me the opportunity to do so now. Let’s take a look at some of the tricks threat actors use to make you think that spam/threat/phishing email is actually coming from a domain that looks legitimate.

 

Technique 1: Basic Spoofing

Threat actors are able to manipulate emails in many ways, but the most common is to just force your email application to display something other than the real email address they’re sending from. There are several ways to do this, but the most common involves the manipulation of headers. Headers are metadata (data about data) that email systems use to figure out where an email is coming from, where it should go to, who sent it, etc. One of the most common techniques involves using different headers for the display name (which shows up before you hover over the From: address in the message) and the actual email address the mail is coming from (which you can see by hovering over the From: field). The result is an email that appears to come from “Microsoft Support” while the underlying address is something else entirely – somewhat easy to spot if you hover over the sender and see what email address it’s really from. 

If you’re wondering why email systems don’t reject messages like that, it’s because this behavior is a valid feature of how email works. Simple Mail Transfer Protocol (SMTP) is the method used by the whole world to send emails, and part of that protocol allows for a display name in addition to an email address. This is how your company’s emails can have the name of the person that sent it to you, or a company can give an email account a friendly name – so there’s a trade-off here. While the feature is legitimate, it can be used for malicious purposes, and you need to look at the actual email address of the sender and not just the display name. 
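
You can see the display-name/address split for yourself with Python’s standard email tooling. This is a minimal sketch; the address below is invented for illustration:

```python
from email.utils import parseaddr

# A raw From: header as a threat actor might craft it: a friendly display
# name paired with a completely unrelated real address (hypothetical).
raw_from = 'Microsoft Support <alerts@definitely-not-microsoft.example>'

display_name, real_address = parseaddr(raw_from)
print(display_name)   # Microsoft Support
print(real_address)   # alerts@definitely-not-microsoft.example
```

Mail clients show the first value by default; the second value is what you see when you hover over the sender, and it is the one worth checking.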

 

Technique 2: Fake Domains

“OK,” you say, “But I definitely have gotten fake emails that used real email addresses for a company.” While you’re not losing your mind, the emails did not come from the company in question. Threat actors use multiple tricks to ma​ke you be​lieve that the email dom​​ain that message came from is real. For example, in the last sentence, the words “make,” “believe,” and “domain,” aren’t actually those words at all. They have what is known as a “zero-width space” embedded into them. While this space isn’t visible, it’s still there – and my spell-checker flagged each of the words as mis-spelled because they indeed are. Techniques like this allow a threat actor to send an email from “support@m​icrosoft.com” because they registered that email domain with an invisible space between the letters (between the “m” and the “i” in this case). To the naked eye, the domain looks very much real, but from the perspective of an email system, it is not actually the microsoft.com domain, and therefore is not something that would get extra attention from most security tools. 

This same theory can be used in another way. For example, have a look at AMAΖON.COM – notice anything odd there besides it being in all caps? Well, the “Z” in that domain name isn’t a “Z” at all – it’s the capitalized form of the Greek letter Zeta. Utilizing foreign characters and other Unicode symbols is a common way to trick a user into believing that an email is coming from a domain that they know, when in fact it is coming from a domain specifically set up to mislead the user. 
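
A rough sketch of how a security tool (or a curious reader) could flag both tricks described above: scan a domain for any character outside plain ASCII and report its Unicode name. The domains here are illustrative, not real threat domains:

```python
import unicodedata

def suspicious_chars(domain: str) -> list:
    """Return (char, Unicode name) for every non-ASCII character in a domain."""
    return [(ch, unicodedata.name(ch, "UNKNOWN"))
            for ch in domain if ord(ch) > 127]

print(suspicious_chars("microsoft.com"))        # [] - clean, plain ASCII
print(suspicious_chars("micro\u200bsoft.com"))  # a zero-width space hiding inside
print(suspicious_chars("ama\u0396on.com"))      # Greek capital Zeta posing as "Z"
```

Real mail filters do considerably more (punycode checks, mixed-script detection, reputation lookups), but the core idea is the same: what looks identical to a human is trivially distinguishable to software.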

There are two ways to defend against this kind of malicious email activity. The first – and most important – is to follow best practices for cyber hygiene. Don’t click on links or open attachments in email, and never assume that an email is from who you think it is from without proof. Did you get an email from a friend with an attachment that you weren’t expecting? Call or text them to check that they sent it. Get an email from your employer with a link in it? Hover over the link to confirm where it goes – or better yet, reach out to your IT team and make sure you are supposed to click on that link. Most companies have begun to send out pre-event emails such as “You will be receiving an invitation link to register for our upcoming event later today. The email will be from our event partner – myevents.com.” in order to make sure users know what is real and what is suspicious if not outright fake. 

The second defense is one you can’t control directly, but is happening all the time. Your email provider (your company, Google for GMail, Outlook.com for Microsoft, etc.) is constantly updating lists of known fake, fraudulent, and/or malicious email domains. Once a fake domain goes on the list, emails that come from there get blocked. While this is an effective defense, it can’t work alone as there will always be some time between when a threat actor starts using a new fake domain and when your email provider discovers and blocks it.
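
Mechanically, that provider-side check is a fast lookup against a constantly updated set of known-bad domains. A toy sketch (the blocked domains are invented for illustration):

```python
def is_blocked(sender_domain: str, blocklist: set) -> bool:
    # Real providers match against feeds of millions of constantly updated
    # entries; the mechanics are just a normalized set lookup like this.
    return sender_domain.strip().lower() in blocklist

blocklist = {"secure-micros0ft-login.example", "amaz0n-billing.example"}

print(is_blocked("Secure-Micros0ft-Login.example", blocklist))  # True
print(is_blocked("microsoft.com", blocklist))                   # False
```

The weakness the article notes falls out of the design: a brand-new fake domain simply isn’t in the set yet, so the lookup passes until the feeds catch up.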

 

In short, that email from a legitimate-looking email address may still be fake and looking to trick you. Hovering over the sender’s name to see the full, real address and following good cyber hygiene can save you from opening or clicking something that is out to do you, your computer, and/or your company harm.

Cybersecurity in Plain English: Should I Encrypt My Machine?

A common question I get from folks is some variant of “Should I be encrypting my laptop/desktop/phone?” While the idea of encrypting data might sound scary or difficult, the reality is the total opposite of both, so the answer is a resounding “YES!” That being said, many people have no idea how to actually do this, so let’s have a look at the most common Operating Systems (OSs) and how to get the job done.

First, let’s talk about what device and/or disk encryption actually does. Encryption renders the data on your device unusable unless someone has the decryption key – which these days is typically either a passcode/password or some kind of biometric ID like a fingerprint. So, while the device is locked or powered down, if it gets lost or stolen the data cannot be accessed by whoever now has possession of it. Most modern (less than 6 to 8 year old) devices can encrypt invisibly without any major performance impact, so there really isn’t a downside to enabling it beyond having to unlock your device to use it – which you should be doing anyway… hint, hint… 

Now, the downside – i.e. what encryption can’t do. First off, if you are on an older device, there may be a performance hit when you use encryption, or the options we talk about below may not be available. There’s a ton of math involved in encryption and decryption in real-time, and older devices might just not be up to the task at hand. This really only applies to very old devices – those in the 6-8 year old range or beyond – and at that point it may be time to start saving up to upgrade your device when you can afford to. Secondly, once the device is unlocked, the data is visible and accessible. That means you still need to maintain good cyber and online hygiene when you’re using your devices. If you allow someone access, or launch malware, your data will be visible to them while the device is unlocked or while that malware is running. So encryption isn’t a magic wand to defend your devices, but it is a very powerful tool to help keep data secure if you lose the device or have it stolen. 

So, how do you enable encryption on your devices? Well, for many devices it’s already on, believe it or not. Your company most likely forces the use of device encryption on your corporate phones and laptops, for example. But let’s have a look at the more common devices you might use in your personal life, and how to get them encrypted.

Windows desktops and laptops:

From Windows 10 onward (and on any hardware less than about 5 years old), Microsoft supports a technology called BitLocker to encrypt a device. BitLocker is a native tool in Windows 10 and 11 (and was available for some other versions of Windows) that will encrypt entire volumes – a.k.a. disk drives – including the system drive that Windows itself runs on. There are a couple of ways it can do this encryption, but for most desktops and laptops you want to use the default method of encryption using a Trusted Platform Module (TPM) – basically a hardware chip in the machine that handles security controls with a unique identifier. How the TPM works isn’t really something you need to know; just know that there’s a chip on the board that is unique to your machine, and that allows technologies like BitLocker to encrypt your data uniquely to your machine. Turning on BitLocker is easy: just follow the instructions for your version of Windows 10 or 11 here: https://support.microsoft.com/en-us/windows/turn-on-device-encryption-0c453637-bc88-5f74-5105-741561aae838 – the basic idea being to go into Settings, then Security, then Device Encryption, but it’ll look slightly different depending on which version of Windows you’re using. One important note: if you’re using Windows 10 or 11 Home Edition, you may have to follow the specific instructions on that page for device encryption rather than full BitLocker. It has the same overall outcome, but uses a slightly different method to get the job done. 

Mac desktops and laptops:

Here’s the good news: if you accepted the defaults during your first install/setup, you’re already encrypted. For the last several major versions, macOS has automatically enabled FileVault (Apple’s disk encryption system) when you set up your Mac unless you tell it not to do so. If you have an older macOS version, or you turned it off during setup, you can still turn it on now. Much like BitLocker, FileVault relies on dedicated security hardware to handle the details of the encryption, but all Macs still supported by Apple have that hardware, so unless you are on extremely old hardware (over 8-10 years old), you won’t have to worry about it. Also like Microsoft, Apple has a knowledge base article on how to turn it on manually if you need to do so: https://support.apple.com/guide/mac-help/protect-data-on-your-mac-with-filevault-mh11785/mac

Android mobile devices (phones/tablets):

Android includes the ability to encrypt data on your devices as long as you are using a passcode to unlock the phone. You can turn encryption on even if you’re not using a passcode yet, but the setup will make you set a passcode as part of the process. While not every Android device supports encryption, the vast majority made in the last five years or so do, and it is fairly easy to set it up. You can find information on how to set this up for your specific version of Android from Google, such as this knowledge base article: https://support.google.com/pixelphone/answer/2844831?hl=en

Apple mobile devices (iPhone/iPad):

As long as you have a device that’s still supported by Apple, your data is encrypted by default on iPhone and iPad as soon as you set up a passcode to unlock the phone. Since that’s something you REALLY should be doing anyway, if you are, you don’t have to do anything else to make sure the data is encrypted. Note that any form of passcode will work, so if you set up TouchID or FaceID on your devices, that counts too, and your data is already encrypted. If you have not yet set up a passcode, TouchID, or FaceID, then there are instructions at this knowledge base article for how to do it: https://support.apple.com/guide/iphone/set-a-passcode-iph14a867ae/ios and similar articles exist for iPad and other Apple mobile devices. 

Some closing notes on device encryption: First and foremost, remember that when the device is unlocked, the data can be accessed. It’s therefore important to set a timeout for when the device will lock itself if not in use. This is usually on automatically, but if you turned that feature off on your laptop or phone, you should turn it back on. Secondly, a strong password/passcode/etc. is really necessary to defend the device. If a thief can guess the passcode easily, then they can unlock the device and get access to the data easily as well. Don’t use a simple 4-digit pin to protect the thing that holds all the most sensitive data about you. As with any other password stuff, I recommend the use of a passphrase to make it easy for you to remember, but hard for anyone else to guess. “This is my device password!” is an example of a passphrase, just don’t use that one specifically – go make up your own. If your device supports biometric ID (like a fingerprint scanner), then that’s a great way to limit how many times you need to manually type in a complex password and can make your life easier.
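
The PIN-versus-passphrase advice can be put in numbers: a secret’s strength is roughly the base-2 logarithm of how many equally likely values it could have. A quick back-of-the-envelope comparison (the 7,776-word figure assumes a standard Diceware-style word list):

```python
import math

pin_guesses = 10 ** 4          # every possible 4-digit PIN
phrase_guesses = 7776 ** 4     # four random words from a 7,776-word list

print(round(math.log2(pin_guesses), 1))     # about 13.3 bits
print(round(math.log2(phrase_guesses), 1))  # about 51.7 bits
```

Every added bit doubles the number of guesses an attacker needs, so even a short random passphrase puts the secret billions of times further out of reach than a 4-digit PIN.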

Device encryption (and/or drive encryption) makes it so that if your device is lost or stolen, the data on that device is unusable to whoever finds/steals it. Setting up encryption on most devices is really easy, and the majority of devices won’t even suffer a performance hit to use it. In many cases, you’re already using it and don’t even realize it’s on, though it never hurts to check and be sure about that. So, should you use encryption on your personal devices? Yes, you absolutely should.

 

Cybersecurity in Plain English: My Employer is Spying On My Web Browsing!

A recent Reddit thread had a great situation for us to talk about here. The short version is that a company notified all employees that web traffic would be monitored – including for secure sites – and recommended using mobile devices without using the company WiFi to do any non-business web browsing. This, as you might guess, caused a bit of an uproar with multiple posters calling it illegal (it’s usually not), a violation of privacy (it is), and because it’s Reddit, about 500 other things of various levels of veracity. Let’s talk about the technology in question and how it works.

For about 95% of the Internet these days, the data flowing between you and websites is encrypted via a technology known officially as Transport Layer Security (TLS), but almost universally referred to by the name of the technology TLS replaced some time ago, Secure Sockets Layer (SSL). No matter what you call it, TLS is the tech that is currently used, and what’s responsible for the browser communicating over HTTPS:// instead of HTTP://. Several years ago, non-encrypted web traffic was deprecated – a.k.a. phased out – because Google Chrome, Microsoft Edge, Firefox, Opera, and just about every other browser began to pop up a message whenever a user went to a non-secure web page. As website owners (myself included) did not want to deal with large numbers of help requests, secured (HTTPS://) websites became the norm; and you’d be hard-pressed to find a non-encrypted site these days. 

So, if the data flowing between your browser and the website is encrypted, how can a company see it? Well, the answer is that they normally can’t, but organizations can set up technology that allows them to decrypt the data flowing between you and the site if you are browsing that site on a laptop, desktop, or mobile device that the organization manages and controls. To explain that, we’ll have to briefly talk about a method of threat activity known as a Man in the Middle (MitM) attack:

MitM attacks work by having a threat actor intercept your web traffic, and then relay it to the real website after they’ve seen it and possibly altered it. As you might guess, this could be devastating for financial institutions, healthcare companies, or anyone else that handles sensitive data and information. Without SSL encryption, MitM attacks can’t really be stopped. You think you’re logging into a site, but in reality you’re talking to the threat actor’s web server, and THEY are talking to the real site – so they can see and modify data you send, receive, or both. SSL changes things. The way SSL/TLS works is with a series of security certificates that are used along with some pretty complex math to create encryption keys that both your browser and the website agree to use to encrypt data. That’s a massive oversimplification, but a valid high-level explanation of what’s going on. Your browser and the website do this automatically, and nearly instantly, so you don’t actually see any of it happening unless something goes wrong and you get an error message. If a threat actor tries to put themselves in the middle, then both your browser and the website will immediately see that the chain of security is broken by something/someone, and refuse to continue the data transfer. By moving to nearly universal use of SSL, Man in the Middle attacks have become far less common. It’s still technically possible to perform a MitM attack, but it is far more difficult than before, and certainly more difficult than a lot of other attack methods a threat actor could use.
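
You can see this browser-style verification posture in Python’s standard library: a default client TLS context requires a certificate that chains to a trusted root and matches the requested hostname, which is exactly the check that trips up a naive man-in-the-middle:

```python
import ssl

# The stdlib's recommended client configuration mirrors what browsers do.
ctx = ssl.create_default_context()

# Certificates must chain to a trusted root certificate authority...
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True

# ...and the name on the certificate must match the host we asked for.
print(ctx.check_hostname)                    # True
```

Corporate SSL inspection works within these same rules rather than breaking them: the company adds its own root certificate to the managed device’s trust store, so traffic re-encrypted by the inspection appliance still passes both checks.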

Then how can your company perform what is effectively a MitM process on your web traffic without being blocked? Simple, they tell your computer that it’s OK for them to do it. The firewalls and other security controls your company uses could decrypt the SSL traffic before it reaches your browser. That part is fairly easy to do, but would result in a lot of users not being able to get to a whole lot of websites successfully. So, they use a loophole that is purposely carved out of the SSL/TLS standards. Each device (desktop/laptop/mobile/etc.) that the company manages is told that it should trust a specific security certificate as if it was part of the certificate chain that would normally be used for SSL. This allows the company to re-encrypt the data flow with that certificate, and have your browser recognize it as still secure. The practice isn’t breaking any of the rules, and in fact is part of how the whole technology stack is designed to work expressly for this kind of purpose, so your browser works as normal even though all the traffic is being viewed un-encrypted by the company. I want to be clear here – it’s not a person looking at all this traffic. Outside of extremely small companies that would be impossible. Automated systems decrypt the traffic, scan it for any malware or threat activity, then re-encrypt it with the company’s special certificate and ferry it on to your browser. A similar process happens in the other direction, but that outbound data is re-encrypted with the website’s certificate instead of the company’s certificate. Imagine that the systems are basically using their own browser to communicate with the websites, and ferrying things back and forth to your browser. That’s another over-simplification just to outline what is going on. Humans only get involved if the automated systems catch something that requires action. 
That being said, humans *can* review all that data if they wanted to or needed to as it is all logged – it’s just not practical to do that unless there’s an issue that needs to be investigated.

That brings us to another question. Why tell everyone it’s happening if it can be done invisibly for any device the company controls and manages? Well, remember way up above when we talked about if it was legal, or a violation of privacy, or a host of other things? Most companies will bypass the decryption for sites they know contain financial information, healthcare info, and other stuff that they really don’t want to examine at all. That being said, it’s not possible to ensure that every bank, every hospital and doctor’s and dentist’s office, every single site that might have sensitive data on it is on the list to bypass the filter. Because of that, many companies will make it known via corporate communications and in employee manuals that all traffic can be visible to the IT and cybersecurity teams. It’s a way to cover themselves if they accidentally decrypt sensitive information that could be a privacy violation or otherwise is something they shouldn’t, or just don’t want to, see. 

Companies are allowed to do this on their own networks, and on devices that they own, control, or otherwise manage. Laws vary by country and locality, and I am not a lawyer, but at least here in the USA they can do this whenever they want as long as employees consent to it happening. The Washington Post did a whole write-up on the subject here: https://www.washingtonpost.com/technology/2021/08/20/work-from-home-computer-monitoring/ (note, this may be paywalled for some site visitors). As long as the company gets that consent (say, for example, having you sign that you have read and agree to all of the stuff in that Employee Handbook), they can monitor traffic that flows across their own networks and devices. Some companies, of course, just want to give employees a heads-up that it’s happening, but most are covering their bases to make sure they’re following the rules for whatever country/locality they and you are in. 

What about using a VPN? That could work, if you can get it to run. Many VPN services will bypass the filtering of SSL Decryption, because they encrypt the traffic end-to-end with methods other than SSL/TLS. In short, the browser and every other app are now communicating in an encrypted channel that the firewall and other controls can’t decrypt. Not all VPNs are created equal, though, so it isn’t a sure thing. Also keep in mind that most employers who do SSL Decryption also know about VPNs, and will work to block them from working on their networks.

One last note: don’t confuse security and privacy. Even without SSL Decryption, your employer can absolutely see the web address and IP address of every site you visit. This is because of two factors. First, most Domain Name System (DNS) lookups are not encrypted. That’s changing over time, but right now it is highly likely that your browser looks up where a website is via a non-encrypted system. Second, even if you’re using secure DNS (which exists, but isn’t in widespread use), the company’s network still has to connect to the website’s network – which means at the very least the company will know the IP addresses of the sites you visit. It isn’t difficult to reverse that lookup and figure out what website is on a given IP address, so your company can still see where you went – even if they don’t know what you did while you were there.
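Both of those lookups are visible to whoever runs the network. Here’s a minimal sketch of the two steps from Python – it uses localhost so it runs without network access, but a real hostname such as example.com behaves the same way:

```python
# A minimal sketch of why HTTPS doesn't hide *where* you went: the forward
# name lookup (and the reverse lookup a network owner can perform) both
# happen outside the encrypted channel. Uses localhost so it runs offline.

import socket

ip = socket.gethostbyname("localhost")   # forward DNS: name -> IP address
print(ip)                                # 127.0.0.1

host, _, _ = socket.gethostbyaddr(ip)    # reverse lookup: IP address -> name
print(host)                              # typically "localhost"
```

A company watching its own network sees exactly these two pieces of information for every connection, regardless of whether the traffic itself is encrypted.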

To sum up: Can your employer monitor your web surfing even if you’re on a secure website? Yes – provided they have set up the required technology, own and/or manage the device you’re using, and (in most cases) have you agree to it in the employee manual or via other consent methods. Is that legal? Depends on where you live and where the company is located, but for a lot of us the answer is “yes.” Doesn’t it violate my privacy? Yes, though most companies will at least try to avoid looking at traffic to sites that are known to have sensitive data. Your social media feeds, non-company webmail, and a whole lot of other stuff are typically fair game, though; so just assume that everywhere you surf, they can see what you’re doing. Can you get around that with a VPN? Maybe, but your company may effectively block VPN services. And finally, does this mean if my company isn’t doing SSL Decryption that I’m invisible? No, there’s still a record of what servers you visited, and most likely what URLs you went to.

Last but not least: with very few exceptions, the process of SSL Decryption is done for legitimate and very real security reasons. The technology helps keep malware out of the company’s network and acts as another link in the chain of security defending the organization. While there are no doubt some companies that do this to spy on their employees, they are the exception rather than the rule. Check Facebook and do your banking on your phone (off WiFi), or wait until you get home.

Cybersecurity in Plain English: How Does Ransomware Work?

I get a lot of great questions from people in all different areas of business, but one comes up more than most: “How does ransomware even work?” Granted, we know what the goal of ransomware is – to get paid to unlock files that are locked down by a threat actor – but how does it operate, function, do what it does? Let’s dive into this topic.

Ransomware is a generic term that refers to any cyber attack where data is encrypted in order to make it unusable to a person or organization until a payment to the threat actor is made. Because locking up the data by encrypting it renders most businesses partially or totally unable to conduct business, it is a devastatingly effective form of attack, and a preferred method of threat activity these days. How it does what it does, however, is a bit more complicated, as the methods and scope of ransomware have changed over the 20-plus years we’ve been dealing with it as a security community.

Modern ransomware can be broken down into two broad categories: Single-extortion ransomware that just locks the data down, and double-extortion ransomware that also steals a copy of all the impacted data before locking it down. Each has evolved to reduce the ability of an organization to recover from backup or otherwise fix things without having to pay the threat actor, but each category is equally popular among criminal groups. 

Single-extortion ransomware works by first gaining access to a desktop, laptop, or server. This can happen through one of many initial access methods, but the more commonly used techniques these days are subterfuge and exploiting a vulnerability. See the previous post at https://www.miketalon.com/2024/02/cybersecurity-in-plain-english-how-do-threat-actors-get-in/ for more info on initial access. Subterfuge includes things like tricking a user into visiting a booby-trapped website, hiding malware in what appears to be a valid software application, or otherwise getting a user (or automated system) to install the threat actor’s software on a machine, virtual machine, etc. Exploitation of a vulnerability requires less (or no) interaction by a user, instead tricking or forcing an application or platform into doing something malicious by taking advantage of a weakness in the software or hardware itself. Note that threat actors are aware that anti-malware exists, and so will attempt to hide what they are doing for as long as possible and avoid triggering the anti-malware whenever possible (see dwell time below). This is referred to as “evasion,” and there are many different techniques that are used with different levels of effectiveness, depending on what anti-malware defenses are in place.

Once they have the first device compromised, the threat actor will typically attempt to spread their influence to as many other machines as possible (referred to as “propagation”). Since most organizational systems now use some form of Endpoint Detection and Response (an advanced type of anti-malware system), this has to be done carefully and cautiously to avoid tripping detection and defensive systems. In fact, a threat actor can spend weeks or even months just moving around a victim network in search of more devices and systems to take control of before they do anything like encrypting data. This is most commonly referred to as “dwell time,” with the average being about 10 days in 2023, but many threat actors stick around for far longer to gain control of more systems. It isn’t uncommon to see dwell times stretching into months as double-extortion attacks become more common.

More commonly these days, threat actors will also attempt to disable backup solutions and try to weaken or disable anti-malware solutions as they go. This allows them to spread further, and ensures that once they do spring the trap, the organization won’t have recent backups to restore from. Both actions make it more likely that the victim organization will pay to have their data decrypted. Remember that ransomware is a business – a criminal business, but still a business – so the more likely a victim is to make a payment, the more money the criminal business generates. Additionally, many modern threat actors will install back-door systems that allow them to re-enter the organization’s systems if the organization chooses not to pay – so that the threat actor can re-encrypt over and over until they get money.

Once the threat actor has gotten onto as many systems as possible and made sure things like backups have been rendered useless, single-extortion ransomware enters its final stage. Some, most, or all of the data on each infected machine is encrypted using a key only known to the threat actor. Without going into too much detail here, threat actors use a technique known as asymmetric encryption – meaning that the key that encrypts the data cannot be used to decrypt it. So even if the organization captures the encryption key, it won’t be useful in getting back to business. Once done, the threat actor either displays a message on the infected systems and/or directly contacts the organization to demand a ransom in exchange for the decryption key; and the attack is then finished.
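To make that asymmetry concrete, here’s a toy illustration using textbook RSA with tiny, deliberately insecure primes (the classic worked example). Real ransomware typically encrypts files with a symmetric key and then wraps that key asymmetrically, but the principle is the same: the key that locks cannot unlock.

```python
# A toy illustration (textbook RSA with tiny, insecure primes - purely to
# show the asymmetry, not how real ransomware implements its crypto) of why
# capturing the encrypting key doesn't help the victim.

p, q = 61, 53            # the attacker's secret primes
n = p * q                # modulus (3233), shared as part of the public key
phi = (p - 1) * (q - 1)
e = 17                   # public exponent: all the victim could ever capture
d = pow(e, -1, phi)      # private exponent (2753): known only to the attacker

plaintext = 65           # stands in for a per-file encryption key
ciphertext = pow(plaintext, e, n)   # "locking" the data

# The victim, holding only the encrypting key (e, n), cannot reverse this:
assert pow(ciphertext, e, n) != plaintext
# The attacker, holding the private exponent d, can:
assert pow(ciphertext, d, n) == plaintext
```

Even if defenders pull the encrypting key out of the malware itself, that key only locks; only the attacker’s private key unlocks.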

For double-extortion ransomware, the game changes a bit. While all of the above steps still happen, there is another step added in between the propagation phase – where the threat actor tries to compromise as many systems as possible without being caught – and the encryption phase. As they move across the organization’s systems, the double-extortion ransomware threat actor begins stealing a copy of the data that they discover. There are many methods for performing this step, but the most common involve sending a copy of each file to cloud storage that the threat actor has access to. Many have asked why cloud providers don’t prohibit this activity and stop double-extortion ransomware, and the answer to that question will be in an upcoming article, but suffice it to say that, currently, they really can’t police this type of data transfer in order to stop it. Data exfiltration can occur quickly, or very quietly – with different threat actors preferring different techniques in a trade-off between getting everything fast or evading defenses but taking longer to get the job done.
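One defensive angle on that fast-versus-quiet trade-off can be sketched as a simple volume check – the hosts, baselines, and threshold here are hypothetical, and real data-loss-prevention tools are far more sophisticated, but the idea is the same:

```python
# A minimal sketch (hypothetical hosts and thresholds) of flagging possible
# exfiltration: alert when a host's outbound transfer volume far exceeds its
# normal baseline. "Fast" exfiltration trips this easily; "slow and quiet"
# exfiltration deliberately stays under it - which is exactly the trade-off
# threat actors are making.

baseline_mb_per_day = {"ws-101": 50, "ws-102": 40}  # learned normal volumes

def flag_exfil(host, observed_mb, multiplier=10):
    # Flag anything more than `multiplier` times the host's usual volume.
    return observed_mb > baseline_mb_per_day.get(host, 0) * multiplier

assert flag_exfil("ws-101", 5000)     # loud, fast exfiltration: flagged
assert not flag_exfil("ws-101", 60)   # slow, quiet exfiltration: evades this check
```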

This dataset is held until after the threat actor encrypts the original data on the organization’s systems, and the data theft can go on for as long as the threat actor is able to dwell within the organization. This means that not only can all current data be stolen, but any new data can also be siphoned off and stolen as the attack progresses. With dwell times adding up to potentially months, this can mean a great deal of current data can be stolen as it is created and modified by employees. 

Once the trap is sprung and the original data is encrypted, the threat actor now has two threats they can use to extort a payout from the victim organization. First, they will offer the decryption key in much the same way as with single-extortion ransomware. Second, they offer to destroy their copy of the data if the ransom is paid; but threaten to release that data to the general public if the ransom is not paid. So, even if an organization can recover without paying the ransom, they still must contend with the fact that highly privileged data could be released to the outside world unless they pay. For organizations like law firms, healthcare companies, payment processors, and other organizations that hold extremely privileged information, such public release of the info could be devastating and even trigger massive regulatory fines and penalties. Even a business that writes off the encrypted data as a loss may not be able to weather all of that data becoming public knowledge to anyone who wishes to view it. The hit to customer trust, regulatory fines, impact to stock prices, loss of investors, and other factors make such a release of data something many companies cannot withstand without going out of business.

Some ransomware threat actors have even taken things a step further with so-called triple-extortion attacks. The data itself is encrypted, the stolen data is threatened to be released to the general public, and the threat actor also threatens persons and companies that appear in the data to try to get them to pay in addition to the company the data came from. For example, if a ransomware actor compromises a hospital, the data on the hospital’s systems is encrypted, the threat actor threatens to release the copy of that data which they hold to the general public, and the threat actor reaches out to individual patients and demands that they also pay money to keep their own data in the stolen data-set from becoming public. This maximizes the payout the threat actor can get, and makes it even more likely that the original victim organization (the hospital in this scenario) will pay them to make the whole problem go away.

Many have asked me if they should pay the ransom. While I can’t speak to every situation that ransomware can create, my overall recommendation is not to pay if there is any other way to get back to business. Paying the ransom has several negative effects: First, you’re giving money to one or more people who admit they are criminals. There’s no guarantee that they’ll do what they say they’ll do if you pay them, and they may have back-door access to continue harming your organization even if they do give you the decryption keys. There’s also no way to validate that they deleted their stolen copy of the data; in fact, law enforcement has found supposedly deleted data on threat-actor systems seized in raids and shutdowns [https://krebsonsecurity.com/2024/03/blackcat-ransomware-group-implodes-after-apparent-22m-ransom-payment-by-change-healthcare/]. Second, every time the threat actor is paid, it encourages more threat actors to get into the ransomware business to make money. Third, depending on who the threat actor is and where you are, it might be against the law to send money to the threat actor at all, exposing your organization to even more regulatory and/or legal issues. Some information on this for US companies can be found here: https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf . While there are some cases where paying the threat actor is the only way to resolve the situation, every organization should think long and hard about the repercussions to their own business and to the greater business world if they do so.

Ransomware is an insidious threat that is growing every day. With double- and triple-extortion techniques growing in popularity, even the ability to recover without paying the ransom doesn’t remove the threat that the criminals can hold over an organization and its customers. That being said, it is not all doom and gloom. By keeping software updated, not interacting with links in emails or the attachments that come with them, and practicing basic online hygiene, users can thwart a large number of ransomware attacks. Exploitation of weaknesses in software will still be a problem, and organizations must address these by utilizing additional security controls to compensate for the weaknesses, but effective strategies do exist for minimizing the potential to be struck by ransomware. Together, we can make it less lucrative for a threat actor to use ransomware, causing their business models to break and making the net a safer place.

Cybersecurity in Plain English: IAM What?

A reader recently asked, “What is IAM and why is it important?” This is a bit of a complex question, but we can definitely dive into some of the higher-level concepts and details to de-mystify Identity and Access Management (IAM).

IAM is simply the series of technologies that control who is allowed to access what on your corporate systems. The complexity comes about because – while the idea is simple – the actual implementation of IAM is one of the most complex operations that many companies will ever undertake. The reason is straightforward: humans are not generally logical and orderly beings. Because of that, systems which enable humans to do their jobs also tend to be complicated and intertwined, meaning that ensuring only the right people have access to the right systems and data is often difficult at best. So, let’s have a look at the basic ideas behind IAM and what they do.

First, the Principle of Least Access is the starting ground for any IAM solution set. As its name would imply, this principle says that each user should first be given the absolute minimum amount of access to systems and applications, regardless of any other factor. When a user needs access to something more, they get it quickly and efficiently, but they only get the bare minimum access to that “something more” and no more than that. As an example, a new user needs access to things like file servers, email, and some applications. This access would be very specifically defined, giving them access to just the folders on the file server they require, for example. They get an email box, but don’t get access to shared mailboxes automatically. They get read-only access to applications, not full access. Then, based on the needs of the user and the approvals of management, the user can request and gain additional access as and when required. While this process can be cumbersome – especially when a user is first starting with an organization – it also avoids over-provisioning access that later must be pulled back. Provisioning and de-provisioning solutions can greatly aid with this process, allowing IT teams to quickly add and remove access as needed with a minimum of manual steps. Note that de-provisioning is as critical as provisioning. When an employee changes roles or leaves the organization, or when an application is reconfigured or replaced, access must also be updated to keep the principle in action, ensuring users have the access they need but no more.
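The provisioning and de-provisioning life cycle described above can be sketched in a few lines. The data model and names here are hypothetical, purely for illustration – real IAM platforms layer roles, approvals, and audit trails on top of this core idea:

```python
# A minimal sketch (hypothetical model) of least-access provisioning:
# users start with no permissions, gain only what is explicitly granted,
# and de-provisioning removes every grant at once.

class AccessManager:
    def __init__(self):
        self.grants = {}   # user -> set of (resource, level) grants

    def provision(self, user):
        # New users start with the absolute minimum: nothing at all.
        self.grants.setdefault(user, set())

    def grant(self, user, resource, level):
        self.grants[user].add((resource, level))

    def deprovision(self, user):
        # Leaving the organization removes every grant in one step.
        self.grants.pop(user, None)

    def can(self, user, resource, level):
        return (resource, level) in self.grants.get(user, set())

iam = AccessManager()
iam.provision("new_hire")
assert not iam.can("new_hire", "finance_folder", "read")   # nothing by default
iam.grant("new_hire", "finance_folder", "read")            # explicit, minimal grant
assert iam.can("new_hire", "finance_folder", "read")
assert not iam.can("new_hire", "finance_folder", "write")  # read-only, not full access
iam.deprovision("new_hire")
assert not iam.can("new_hire", "finance_folder", "read")   # de-provisioning is total
```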

Second, one source of truth per organization. While it is very possible for every application and site to have its own identity data store, that is a recipe for disaster as a company grows and evolves. Instead, a single source of truth for identity – like Microsoft’s Active Directory or a similar solution – allows for much tighter and more effective control over identity and access. Each application would then use that single source to confirm the identity of the person logging in and what they’re allowed to have access to. The most common form of this idea in organizations today is Single Sign-On (SSO) – where you go to log in to an application (like SalesForce) and see your browser redirect to your company login page. SalesForce is checking with your company’s single source of identity truth, instead of keeping its own database of users within the app. This is a bit of an oversimplification, as the methods and technologies used to do SSO are complex, but the basic theory of using one source of truth to identify users is the goal.
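The “one source of truth” idea can be sketched as follows. The token format and names here are hypothetical stand-ins for what real SSO protocols like SAML or OpenID Connect do with far more rigor – the point is that neither application keeps its own user database; both trust the same identity provider:

```python
# A minimal sketch (hypothetical token scheme) of a single source of identity
# truth: every application delegates login verification to one provider,
# rather than maintaining its own user store.

import hashlib
import hmac

class IdentityProvider:
    def __init__(self, secret):
        self.secret = secret  # signing key known only to the provider

    def issue_token(self, user):
        sig = hmac.new(self.secret, user.encode(), hashlib.sha256).hexdigest()
        return f"{user}.{sig}"

    def verify(self, token):
        user, _, sig = token.partition(".")
        expected = hmac.new(self.secret, user.encode(), hashlib.sha256).hexdigest()
        return user if hmac.compare_digest(sig, expected) else None

class App:
    def __init__(self, name, idp):
        self.name, self.idp = name, idp  # the app trusts the IdP, not itself

    def login(self, token):
        return self.idp.verify(token)

idp = IdentityProvider(b"corp-signing-secret")
crm, mail = App("crm", idp), App("mail", idp)

token = idp.issue_token("alice")
assert crm.login(token) == "alice"        # both apps accept the same identity
assert mail.login(token) == "alice"
assert crm.login("alice.forged") is None  # neither accepts a forged token
```

Disabling “alice” at the provider would instantly lock her out of every application at once – the practical payoff of a single source of truth.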

Third, the concept of zero trust. Zero trust has become a bit of a buzzword in the cybersecurity industry of late, but the actual operational methodology is extremely valuable. Zero trust says that whenever a user, system, application, etc. attempts to access anything, it must prove it is what it claims to be and must have been granted access for that specific operation. This means that even if the user had already logged into an application, their identity would still be challenged if they attempted to access other areas of the application. A system talking to another system might have to pass an authentication challenge if it tried to access data in another database. This is significantly different from traditional access methods, which say that a user who can use an application has all of their access rights “pre-cached” and ready to go. The reason for zero trust is that a user’s device (or a data system itself) could be used in a way that is not appropriate – either because the user is attempting to do something they shouldn’t, on purpose or by accident, or because the device has been compromised by a threat actor. This could easily result in access to data and systems that shouldn’t be accessible, or where access has been removed but that removal hasn’t yet filtered down to the application in question. In short, zero trust gets its name from the fact that a user – even a user who already logged into something – isn’t trusted as they move around applications and systems. They must pass identity checks (which often happen invisibly to the actual user) to gain access to additional resources.
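The difference between “pre-cached” rights and zero trust can be sketched like this (a hypothetical model, not any specific product): every request re-verifies both identity and authorization, so a revocation takes effect on the very next request instead of lingering in a cache.

```python
# A minimal sketch (hypothetical model) of zero trust: rights are never
# cached at login - every single request re-checks both the caller's
# identity and its authorization for that specific operation.

ACTIVE_SESSIONS = {"sess-1": "alice"}           # identity source of truth
PERMISSIONS = {("alice", "reports"): {"read"}}  # live authorization data

def access(session_id, resource, action):
    user = ACTIVE_SESSIONS.get(session_id)      # re-verify identity...
    if user is None:
        return "denied: unknown session"
    if action not in PERMISSIONS.get((user, resource), set()):
        return "denied: not authorized"         # ...and authorization, every time
    return f"{user} may {action} {resource}"

assert access("sess-1", "reports", "read") == "alice may read reports"
assert access("sess-1", "reports", "write") == "denied: not authorized"

# Revoke the session mid-stream: the very next request is challenged,
# because nothing was cached from the earlier successful checks.
del ACTIVE_SESSIONS["sess-1"]
assert access("sess-1", "reports", "read") == "denied: unknown session"
```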

Identity and Access Management attempts to implement all these theories and more, and so can be a complicated strategy for any organization to undertake. By giving users access to only what they require, forcing all applications and systems to use a single source of identity truth, and ensuring that access requests are dynamic and not static; organizations can begin to tame the beast that is IAM without keeping users and systems from effectively doing their jobs.