Apr 13 2023
Welcome to Gone Phishing, your daily cybersecurity newsletter that’s the Zoom to cybercrime’s Skype.
Today’s hottest cybersecurity stories:
This could be major, folks! Australia has fallen victim to a slew of cyberattacks over the past few years, with some of its major organisations being targeted by ransomware attacks.
Ransomware attacks are when hackers either lock or steal victims’ files and demand a ransom for their safe return. They have exploded in the last year or two and our friends (or ‘mates’, should we say?) down under seem to have been hit particularly badly.
The Australian Cyber Security Centre (ACSC) suggests that Australia is particularly attractive to cybercriminals due to its prosperity, with Australians often cited as having the highest median wealth per adult in the world.
As unbelievable as it sounds, when these ransomware attacks occur, the affected companies often decide that the cheapest way to resolve the issue is to ‘pay up’, so to speak, and hand over the desired cryptocurrency (either Monero or Dero, nine times out of ten) to the hackers.
Check out these stats:
Organisations experienced a significant increase in ransomware – from an average of four attacks over five years, as reported in 2021, to four attacks over the course of a single year in 2022 – and of those who fell victim, 82% admitted to paying the ransom at least once, according to a new research report.
Yeah, you read that right, folks – 82 percent! And this is despite the fact that cybersecurity professionals pretty much all advise against paying the scammers off.
Indeed, the Australian government’s lead cybersecurity agency, the ACSC, currently recommends that victims of ransomware attacks never pay a ransom, saying there’s no guarantee the information will be returned instead of being sold online.
Sometimes the criminals are true to their word and hold up their end of the bargain by unlocking the files upon receiving payment – but, of course, victims have no means of recourse if they don’t!
On top of this, paying criminals is never a good idea – not just in principle, but because more often than not the bounty winds up funding further criminal endeavours (more ransomware attacks, terrorism, espionage, the list goes on…). Frankly, it’s a vicious cycle, perpetuated by victims giving in.
Well, this ‘giving in’ could be a thing of the past if the Australian government cave to the mounting pressure to ban ransomware payments altogether.
“Making ransom payments illegal would act as a deterrent for criminals to continue attacks if they know that they won’t be paid large sums of money,” claims Wayne Tufek, the director of cybersecurity firm CyberRisk.
And some of Australia’s politicians are beginning to take heed of the warnings and weigh in.
I’ll admit, we were sceptical at first but perhaps banning them would be the most logical response to the growing problem. If criminals knew that businesses, organisations, and even individuals would be legally prohibited from paying them, maybe they would, in time, let up.
We’ll drink to that! 🍻
Let the bug chase (woah, don’t google ‘bug chasers’!) begin. Now, to our beloved good-faith cybersecurity researchers: before you ChatGPTee off on this, please be aware that $20,000 is the top of the scale, reserved for those who unearth “exceptional discoveries”.
For low-severity findings, a still-not-to-be-sniffed-at $200 is up for grabs, and for those of you looking to get in on the action, reports can be submitted via the crowdsourced cybersecurity platform Bugcrowd.
However, bear in mind that OpenAI (the company behind ChatGPT) isn’t too keen on would-be hackers trying to break ChatGPT by employing the infamous ‘evil twin’ technique.
OpenAI: No jailbreaks! No evil twins!
They’re offering a bounty, but don’t bother trying to jailbreak the poor thing or make it do anything naughty as you won’t be rewarded and may even be banned.
FYI, jailbreaking ChatGPT usually involves feeding elaborate scenarios into the system that allow it to bypass its own safety filters.
These might include encouraging the chatbot to roleplay as its “evil twin,” letting the user elicit otherwise banned responses, like hate speech or instructions for making weapons.
Indeed, OpenAI says that such “model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed.”
With that in mind, let the games begin! And may the odds be ever in your favour…
So long and thanks for reading all the phish!