AI-driven Attacks on SaaS: What You Need to Know

Jun 27 2023


Welcome to Gone Phishing, your daily cybersecurity newsletter that does what it SaaS on the tin.

Today’s hottest cybersecurity stories:

  • AI exSaaSts the security of Software-as-a-Service (SaaS)

  • Canadian energy producer Suncor suffers cyber sabotage… Cybertage?

AI’m exSaaSted

We’re going to kick this one off with some stats to give you an idea of the corporate landscape as it pertains to generative AI software and similar programs.

  •  Employees and business leaders are adopting generative AI software and similar programs without awareness of SaaS security vulnerabilities.

  • A February 2023 survey of 1,000 executives found that 49% were already using ChatGPT, and a further 30% planned to adopt generative AI soon.

  •  99% of ChatGPT users reported some form of cost savings.

  • 25% of ChatGPT users claimed to have reduced expenses by $75,000 or more.

  • The survey was conducted just three months after ChatGPT became generally available, so usage rates are likely even higher today.

This may or may not come as a surprise to you, depending on the extent to which your business has embraced AI within the scope of SaaS.

As helpful as these tools can be, wouldn’t you know it: there's a catch. We’ve said it before and we’ll say it again: AI is a double-edged sword.

It’s a help and a hindrance because, as game-changing as engines like ChatGPT have been for business owners and workers, they’ve been just as transformative for those on the other side of the law, i.e., the cybercriminals. AI has revolutionised hacking too, don’t forget.

The question is, how are bad actors exploiting the use of AI within SaaS and (hopefully!) what can you do to combat this? Let’s find out.

We’re going to examine three problems or areas of potential vulnerability. These are:

  1. Threat actors can exploit generative AI to trick SaaS authentication protocols

  2. Employees connecting unsanctioned AI tools to SaaS platforms without considering the risks

  3. Sensitive info shared with generative AI tools is susceptible to leaks


  1. First up, how can threat actors exploit AI to trick SaaS into giving up access?

Ambitious employees are leveraging AI tools to increase productivity, but cybercriminals are also turning generative AI to malicious ends. It was inevitable, and it’s already happening.

The strong impersonation capabilities of AI make weak SaaS authentication protocols highly susceptible to hacking.

Techopedia highlights the potential misuse of generative AI by threat actors for activities like password-guessing, CAPTCHA-cracking, and developing more powerful malware.

These methods may seem limited in scope, but consider that the CircleCI security breach of January 2023 was traced back to a single engineer’s laptop being infected with malware. One compromised endpoint can be all it takes.

In addition, three prominent technology academics recently proposed a plausible scenario where generative AI is used to conduct phishing attacks.

What the experts say:

"A hacker uses ChatGPT to generate a personalised spear-phishing message based on your company's marketing materials and phishing messages that have been successful in the past.

“It succeeds in fooling people who have been well trained in email awareness, because it doesn't look like the messages they've been trained to detect."

Hackers choose the path of least resistance

Hackers aren’t stupid. They know that attacking the fortress gates (i.e., the SaaS platform itself), if you will, is unlikely to reap rewards. Instead, they’ll probe for weakness. And where does that lead?

Well, more often than not, it leads to the generative AI software that runs alongside the SaaS. Why? Because it’s not tried and tested. It tends to be off-brand and, as such, lacks the credibility of products from established software companies.

Okay, more importantly, what can be done to combat this growing problem?

First and foremost is education. Educate yourself and your employees/co-workers about the dangers of these applications. Perhaps even insist that any proposed new tool must first be subject to research and testing before it’s sanctioned for use (more on this in the next point).

Additionally, beyond implementing multi-factor authentication (MFA) and physical security keys, security and risk teams need visibility and continuous monitoring for the entire SaaS perimeter, along with automated alerts for suspicious login activity.
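To make that last point concrete, here’s a minimal sketch in Python of the sort of automated login alert such monitoring might generate. It’s illustrative only: the event format, the known-locations store, and the alert() hook are hypothetical stand-ins for whatever your identity provider or SIEM actually exposes.

```python
# Minimal sketch: flag SaaS logins from countries a user has never logged in
# from before. The event format and alert() are hypothetical placeholders; a
# real deployment would consume your identity provider's audit log.

from collections import defaultdict

known_locations = defaultdict(set)  # per-user countries seen so far

def alert(event):
    print(f"ALERT: suspicious login for {event['user']} from {event['country']}")

def process_login(event):
    seen = known_locations[event["user"]]
    if seen and event["country"] not in seen:
        alert(event)  # existing user, brand-new country: worth a look
    seen.add(event["country"])

if __name__ == "__main__":
    for e in [
        {"user": "alice@example.com", "country": "CA"},
        {"user": "alice@example.com", "country": "CA"},
        {"user": "alice@example.com", "country": "RO"},  # fires the alert
    ]:
        process_login(e)
```

A real system would weigh many more signals (device, time of day, impossible travel), but the principle is the same: baseline normal behaviour, then alert on deviations.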

Now, it may still be viable to use some of these tools, in which case, go ahead. But remain vigilant.

  2. Connecting unsanctioned AI tools to SaaS

Honestly, this is the key problem: employees going rogue. They see a tool that cuts their workload in half and, of course, it’s hard to resist. Business is competitive, and individuals are always looking for shortcuts to give them that all-important edge.

But be aware. Is the pat on the back you’ll get for speedy returns worth potentially bringing the company to a standstill thanks to a ransomware attack or something similar? Maybe not, huh?

Signing up for an AI scheduling assistant, from the end-user's perspective, is as simple and (seemingly) innocuous as:

  • Registering for a free trial or enrolling with a credit card

  • Agreeing to the AI tool's Read/Write permission requests

  • Connecting the AI scheduling assistant to their corporate Gmail, Google Drive, and Slack accounts

It’s so simple it’s easy to forget you could be handing over the keys to the kingdom, so to speak.

Once the authorization is complete, the token for the AI scheduling assistant will maintain consistent, API-based communication with Gmail, Google Drive, and Slack accounts — all without requiring the user to log in or authenticate at any regular intervals.
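If you’re curious what that persistent access looks like under the hood, here’s a minimal sketch of a standard OAuth 2.0 refresh-token exchange, the mechanism most SaaS integrations use. The endpoint, credentials, and token values below are hypothetical placeholders, not any particular vendor’s API.

```python
# Minimal sketch: once the employee clicks "Allow", the app holds a refresh
# token it can redeem for fresh access tokens indefinitely, with no further
# user interaction. All values below are hypothetical placeholders.

import requests

TOKEN_URL = "https://oauth.example.com/token"  # hypothetical token endpoint
CLIENT_ID = "ai-scheduling-assistant"          # hypothetical app credentials
CLIENT_SECRET = "client-secret-goes-here"
REFRESH_TOKEN = "issued-once-at-consent-time"

def get_access_token():
    """Trade the long-lived refresh token for a short-lived access token."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "refresh_token",
        "refresh_token": REFRESH_TOKEN,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

# With the access token, the app can call mail/drive/chat APIs on the
# user's behalf, day after day, long after the original consent screen.
```

The takeaway: changing the employee’s password does nothing here. The grant itself has to be found and revoked, which is exactly why this kind of access so often flies under the radar.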

Our advice here is simple: no unsanctioned tools.

  3. Info shared with generative AI tools is vulnerable to leaks

Employees input data into generative AI tools to enhance work efficiency and quality, but this data can potentially be accessed by the AI provider, competitors, or the public.

Due to the external nature of most generative AI tools and their free availability, security and risk professionals lack oversight and control over the security measures associated with these tools.

Enterprises are increasingly worried about this issue, and incidents of generative AI data leaks have already occurred.

Stand-alone generative AI does not create SaaS security risk. But what's isolated today is connected tomorrow. Ambitious employees will naturally seek to extend the usefulness of unsanctioned generative AI tools by integrating them into SaaS applications.

TOP TIPS:

  • Organisations need comprehensive SaaS security measures and cross-functional collaboration for effective AI tool data governance in their environments.

  • Employees resort to unsanctioned AI tools due to limitations in approved tech stacks, driven by a desire to enhance productivity and quality.

  • CISOs should engage in good-faith conversations with leaders and end-users, fostering collaboration and trust while addressing security concerns and potential risks.

  • A robust SaaS security posture management (SSPM) solution is essential for understanding and mitigating the risks associated with AI tools, providing insights and visibility.

  • SSPM enables authentication strength improvement through the enforcement of MFA and continuous monitoring, allowing security teams to reduce the attack surface and respond to unsanctioned or insecure AI tools connected to the SaaS ecosystem (see the sketch below).
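Here’s a minimal sketch of the kind of check an SSPM platform automates: comparing the third-party apps connected to your SaaS tenant against a sanctioned list and flagging anything unapproved or over-scoped. The grant records, app names, and scope strings are hypothetical; real data would come from your SaaS platform’s admin or audit API.

```python
# Minimal sketch of an SSPM-style audit: flag unsanctioned third-party apps
# and risky permission scopes. All app names, scopes, and records below are
# hypothetical examples.

SANCTIONED_APPS = {"crm-connector", "video-addon"}
RISKY_SCOPES = {"mail.read_write", "drive.full_access"}

def audit_grants(grants):
    findings = []
    for g in grants:
        if g["app"] not in SANCTIONED_APPS:
            findings.append(f"UNSANCTIONED app '{g['app']}' granted by {g['user']}")
        risky = RISKY_SCOPES & set(g["scopes"])
        if risky:
            findings.append(f"RISKY scopes on '{g['app']}': {', '.join(sorted(risky))}")
    return findings

grants = [
    {"app": "ai-scheduling-assistant", "user": "bob@example.com",
     "scopes": ["calendar.read", "mail.read_write"]},
    {"app": "video-addon", "user": "carol@example.com",
     "scopes": ["calendar.read"]},
]

for finding in audit_grants(grants):
    print(finding)
```

Run on the sample data above, this flags the AI scheduling assistant twice: once for being unsanctioned, and once for its read/write mail scope, exactly the pattern from the sign-up example earlier.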

You Suncor my battleship

A major Canadian energy producer and owner of the Petro-Can gas station network has openly admitted to being the target of a mysterious cyber attack.

In a rather comically uninformative and brief presser on Sunday, Suncor Energy confessed to experiencing a cybersecurity incident.

The company assured the public that they are taking action and collaborating with expert outsiders to investigate and resolve the predicament. They have also dutifully informed the appropriate authorities, just in case.

To everyone's relief, Suncor Energy expressed that, as of now, there is no evidence to suggest any compromise or misuse of customer, supplier, or employee data resulting from this situation. Sure, we believe you.

Additionally, Vancouver-based cybersecurity firm Plurilock released a report today detailing that the incident came to light on Friday, when staff members reported being unable to access their accounts.

To make matters worse, Petro-Can found itself in the awkward situation of being unable to accept electronic payments. Hmm, sounds kind of serious, guys, not gonna lie.

That’s all for today, folks. Stay safe!

So long and thanks for reading all the phish!
