Cloud Attackers Are Getting Faster — The Patch Window Just Shrunk From Weeks to Days and I Have the Bruises to Prove It

On February 14 — Valentine's Day, because the universe has a sense of humor — I woke up to seventeen Slack notifications, four missed calls, and an email with the subject line "URGENT: possible breach." One of our clients, a B2B SaaS company running their entire platform on Google Cloud, had been compromised.

The attackers got in through a vulnerability in a third-party library that had been publicly disclosed exactly six days earlier. Six days. We had not patched it yet because — and this is the part that keeps me up at night — six days used to be fast. Our patching cycle was 14 days for critical vulnerabilities and 30 days for everything else. That was considered responsible. That was industry standard.

It is not industry standard anymore. The attackers have gotten faster, and the rest of us are still playing last year's game.

The Data: What Google Just Told Us

Google released their H1 2026 Cloud Threat Horizons Report this week, and the numbers confirm what I experienced firsthand. The headline finding: hackers are increasingly exploiting newly disclosed vulnerabilities in third-party software to gain initial access to cloud environments, with the window for attacks shrinking from weeks to just days.

Let me translate that from corporate-speak: attackers are reading the same vulnerability disclosures you are. They are just acting on them faster.

The report also highlights a shift in attack methodology. In 2024, weak credentials (bad passwords, missing MFA) were the number one initial access vector for cloud breaches. In 2026, vulnerability exploitation has overtaken credentials. This is significant because it means attackers are not just guessing passwords anymore — they are actively hunting for unpatched systems, and they are finding them within days of a vulnerability being published.

The Numbers That Should Scare You

  • Average time from vulnerability disclosure to first exploitation attempt: 4.2 days (down from 12.8 days in 2024)
  • Percentage of cloud breaches starting with vulnerability exploitation: 41% (up from 27% in 2024)
  • Average dwell time before detection: 11 days (meaning attackers are inside for nearly two weeks before anyone notices)

That last number is the real kicker. The attackers get in within a week of a vulnerability being disclosed, and then they sit inside your environment for eleven days doing whatever they want. By the time you notice, they have already mapped your infrastructure, exfiltrated data, and possibly set up persistence mechanisms that will survive even after you patch the original vulnerability.

What Happened to Our Client

I am going to walk through our Valentine's Day incident because I think it is instructive. Names and identifying details are changed, but the technical sequence is accurate.

Day 0 (February 8): Vulnerability Disclosed

A critical vulnerability was published in a popular authentication library. CVSS score: 9.1. Advisory said it allowed unauthenticated remote code execution. Every security team in the world saw this.

Our client's development team saw it too. They created a Jira ticket, assigned it "high priority," and scheduled the patch for their next maintenance window. Which was February 22. Fourteen days away.

Fourteen days. That used to be fine.

Day 3 (February 11): Exploit Code Published

A proof-of-concept exploit was published on GitHub. Not by the attackers — by a security researcher who wanted to demonstrate the severity of the vulnerability. Good intentions. Terrible timing. Within hours, the exploit was being adapted and weaponized.

Day 6 (February 14): Breach

Attackers used a modified version of the public exploit to gain initial access to the client's Google Cloud environment. They came in through a Cloud Run service that was running the vulnerable library version. From there, they:

  1. Escalated privileges by exploiting an overly permissive service account (our fault — the service account had project-wide editor access when it only needed access to one bucket)
  2. Accessed Cloud SQL databases containing customer data
  3. Created a new service account for persistent access
  4. Exfiltrated approximately 2.3GB of data to an external storage bucket before we detected the anomaly

Total time from initial access to data exfiltration: approximately 4 hours. They were fast. Practiced. This was not their first time.

What We Did Wrong

Looking back, our mistakes were embarrassingly basic:

  • 14-day patch window for a CVSS 9.1 vulnerability. This was indefensible. Anything above 9.0 should be patched within 48-72 hours, period.
  • Overly permissive service account. The compromised service account had editor access to the entire project. It only needed read/write access to a single Cloud Storage bucket. Principle of least privilege is not just a best practice — it is the difference between a contained incident and a catastrophe.
  • No automated vulnerability scanning in CI/CD. If we had been running dependency checks in the deployment pipeline, the vulnerable library would have been flagged before it reached production.
  • Insufficient logging. We did not have Cloud Audit Logs configured for data access events. This made the forensic investigation significantly harder and slower.

My colleague Marcus, who led the incident response, said something after the debrief that I think about daily: "We were not breached because we were careless. We were breached because our definition of careful was two years out of date."

The New Reality: What Needs to Change

Based on our experience and the Google report, here is what I believe every cloud team needs to do differently:

1. Critical Patches in 48 Hours, Not 14 Days

I know. I know this is hard. I know your change management process takes a week. I know your QA team needs time to test. I know your compliance framework says you need three levels of approval before deploying anything to production.

I also know that attackers do not care about your change management process.

For CVSS 9.0+ vulnerabilities: patch within 48 hours. Develop an emergency patching procedure that bypasses normal change management. Yes, this introduces risk. But the risk of an unpatched critical vulnerability is higher. We learned this the expensive way.

2. Automate Dependency Scanning

Every CI/CD pipeline should include automated vulnerability scanning for dependencies. Tools like Snyk, Grype, or Trivy can flag vulnerable libraries before they reach production. This should be a hard gate — if a critical vulnerability is detected, the deployment fails. No exceptions.
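As one example of what such a gate can look like, a Trivy invocation like the following fails the build when critical findings turn up (a sketch; the exact CI wiring, and whether you use Trivy, Grype, or Snyk, will vary):

```shell
# Scan the repository's dependencies for known vulnerabilities.
# --exit-code 1 makes the command fail -- and with it the pipeline stage --
# whenever a finding matches; here we hard-fail only on CRITICAL severity.
trivy fs --severity CRITICAL --exit-code 1 .
```

Run as a required pipeline step, a non-zero exit code blocks the deployment with no human in the loop to grant an exception.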

We implemented this after the incident. It has blocked three deployments in the past month that would have introduced known vulnerable packages. Three potential breaches prevented by a tool that takes 20 minutes to set up.

3. Least Privilege Is Not Optional

Audit every service account. Every IAM role. Every permission. If a service account has broader access than it needs for its specific function, fix it. Today. Not next sprint. Not next quarter. Today.
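To make that concrete with the failure mode from our incident: instead of a project-wide editor role, a service account that only needs one bucket can be scoped like this (a sketch; the account name, project ID, and bucket name are placeholders):

```shell
# Create a dedicated service account for this one workload.
gcloud iam service-accounts create uploads-writer --project=PROJECT_ID

# Grant it object read/write on a single bucket -- and nothing else.
gcloud storage buckets add-iam-policy-binding gs://my-uploads-bucket \
  --member="serviceAccount:uploads-writer@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"
```

With a binding like this, a compromise of the workload yields one bucket, not the project.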

Google Cloud's IAM Recommender tool can help identify overly permissive roles and suggest tighter alternatives. It is free. It is built into the console. Most people do not know it exists. I did not know it existed until after we got breached. Now I check it weekly.
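Checking it from the CLI looks roughly like this (a sketch, assuming an authenticated gcloud session and a placeholder project ID):

```shell
# List IAM policy recommendations -- e.g. roles the Recommender suggests
# shrinking because most of their permissions go unused.
gcloud recommender recommendations list \
  --project=PROJECT_ID \
  --location=global \
  --recommender=google.iam.policy.Recommender
```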

4. Enable Comprehensive Logging

At minimum, you need:

  • Cloud Audit Logs — admin activity AND data access (data access logs are off by default in GCP)
  • VPC Flow Logs — network traffic patterns
  • Cloud Run / Cloud Functions logs — with request-level detail
  • Alert policies — anomalous API calls, unusual data access patterns, new service account creation
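One of the alert conditions above, new service account creation, maps to a one-line Cloud Logging filter. A sketch of querying for it manually, assuming an authenticated gcloud session (the same filter can back a log-based alert policy):

```shell
# Show service accounts created in the last day, from admin activity
# audit logs. The methodName value is the IAM API's creation call.
gcloud logging read \
  'protoPayload.methodName="google.iam.admin.v1.CreateServiceAccount"' \
  --freshness=1d --limit=10
```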

Yes, comprehensive logging costs money. Our logging bill went up about $180/month after we enabled everything. Our breach cost us approximately $340,000 in incident response, notification, legal fees, and lost business. I will happily pay $180/month for logging.
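Enabling the data access logs mentioned above is an IAM policy change, not a console toggle. A sketch, with PROJECT_ID standing in for your project:

```shell
# Export the current IAM policy, add an auditConfigs stanza, re-apply.
# (Back up policy.json before editing; set-iam-policy replaces the policy.)
gcloud projects get-iam-policy PROJECT_ID --format=json > policy.json
# In policy.json, add:
#   "auditConfigs": [{"service": "allServices",
#     "auditLogConfigs": [{"logType": "DATA_READ"}, {"logType": "DATA_WRITE"}]}]
gcloud projects set-iam-policy PROJECT_ID policy.json
```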

5. Assume Breach, Design for Containment

This is the hardest mindset shift. Stop designing your cloud architecture around preventing breaches — you cannot prevent all breaches. Instead, design for containment. When (not if) an attacker gets in, how far can they go?

  • Network segmentation — microservices should only talk to the services they need
  • Workload identity — every service runs with its own identity and minimum permissions
  • Data encryption at rest and in transit — with customer-managed keys if possible
  • Break-glass procedures — documented processes for revoking access and isolating compromised services

If our client's Cloud Run service had only been able to access its own database (not the entire project), the breach would have been contained to a single service. The attacker would have gotten limited data instead of everything. Architecture decisions made during quiet times determine outcomes during crises.

The Living-off-the-Cloud Problem

One trend from the Google report that particularly worries me is what they call "Living-off-the-Cloud" (LOTC) techniques. Similar to "Living-off-the-Land" attacks in traditional security, LOTC means attackers use legitimate cloud services and APIs to achieve their objectives, making their activity harder to distinguish from normal operations.

In our incident, the attackers used standard GCP APIs to create service accounts, access databases, and exfiltrate data. They did not install malware. They did not use any unusual tools. They used the same APIs our developers use every day. The only difference was intent.

Detecting LOTC attacks requires behavioral analysis, not just signature-based detection. You need to know what "normal" looks like for your environment so you can spot "abnormal." This is where tools like Google's Security Command Center, Chronicle SIEM, or third-party solutions like Wiz or Orca become essential.
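Stripped to its essence, the baseline idea is small. A toy sketch in Python, assuming audit events arrive as dicts with principal and method fields (the field names are placeholders, not a real log schema):

```python
from collections import Counter

def build_baseline(events):
    """Count (principal, api_method) pairs observed during a known-good window."""
    return Counter((e["principal"], e["method"]) for e in events)

def flag_anomalies(baseline, events, min_seen=1):
    """Return events whose (principal, method) pair was seen fewer than
    min_seen times in the baseline -- e.g. a web-tier service account
    suddenly calling the service-account-creation API."""
    return [e for e in events
            if baseline[(e["principal"], e["method"])] < min_seen]
```

Real deployments push this into a SIEM with far richer features, but the principle is the same: the API call itself is legitimate; the pairing of caller and call is what is abnormal.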

The Bottom Line

The window between vulnerability disclosure and exploitation has shrunk to days. Your patching process needs to be faster than the attackers. Your permissions need to be tighter than you think. Your logging needs to be more comprehensive than you want to pay for. And your architecture needs to assume that someone will get in despite all of this.

I learned these lessons the hard way on Valentine's Day. I would rather you learn them from reading this.

And if you are still on a 14-day patch cycle for critical vulnerabilities... maybe tighten that up. Just a suggestion from someone who had a very bad February.

The Google H1 2026 Cloud Threat Horizons Report is publicly available and worth reading in full. The section on LOTC techniques alone is worth your time.
