MaxPower@feddit.de
on 24 Aug 2023 08:53
Now that you mention fucking incompetence, I need to verify my 3-2-1 backup strategy is correctly implemented. Thanks for the reminder, CloudNordic and AzeroCloud!
Moonrise2473@feddit.it
on 24 Aug 2023 08:55
What’s the point of primary and secondary backups if they can be accessed with the same credentials on the same network?
snaptastic@beehaw.org
on 24 Aug 2023 10:45
What’s the correct way to implement it so that it can still be automated? Credentials that can write new backups but not delete existing ones?
VerifiablyMrWonka@kbin.social
on 24 Aug 2023 10:54
For an organisation hosting as many companies' data as this one, I'd expect automated tape at a minimum. Of course, if the attacker had the time to start messing with the tape, that's lost as well, but it's unlikely.
Moonrise2473@feddit.it
on 24 Aug 2023 14:32
It depends on the pricing. For example, OVH didn't keep any extra backups when their datacenter caught fire. But if a customer paid for backup, it was kept off-site and was recovered.
They might even be pretending to be a big hosting company when they're actually renting a dozen dedicated servers from a big player, which is much cheaper than maintaining a data center with 99.999% uptime.
Moonrise2473@feddit.it
on 24 Aug 2023 10:58
I use immutable objects on Backblaze B2.
From the command line, using their tool, it's something like b2 sync SOURCE BUCKET,
and from the bucket settings you disable object deletion.
BorgBase also allows this: backups can be created, but deletions/overwrites are not permanent (unless you enable them).
rentar42@kbin.social
on 24 Aug 2023 11:13
Fundamentally, there's no need for the user/account that saves the backup somewhere to be able to read it, let alone change or delete it.
So ideally you have "write-only" credentials that can only append/add new files.
How exactly that is implemented depends on the tech. S3 and S3-compatible systems can often be configured so that data straight up can't be deleted from a bucket at all.
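As a rough sketch of what such "write-only" credentials can look like on S3 (the bucket name and statement ID below are made up for the example), an IAM policy can grant nothing but PutObject; with bucket versioning and Object Lock enabled on top, even an overwrite by these credentials leaves earlier versions recoverable:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AppendOnlyBackupWriter",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-backup-bucket/*"
    }
  ]
}
```

Restores would use a separate set of read credentials that never touch the machines being backed up.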
IWantToFuckSpez@kbin.social
on 24 Aug 2023 11:30
Haui@discuss.tchncs.de
on 24 Aug 2023 11:55
I don't know if it is the "correct" way, but I do it the other way around. I have a server and a backup server. The server user can't even see the backup server; the main server packs a backup, the backup server pulls the data with read-only access, then the main server deletes the backup. Done.
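A minimal sketch of that pull model, using local directories as stand-ins for the two machines (all paths and names here are invented for the demo; a real setup would pull over SSH with a read-only key, e.g. via rsync or Borg):

```python
import pathlib
import shutil

# Made-up paths standing in for the two hosts (demo only).
main = pathlib.Path("/tmp/main/backup")        # main server's staging area
backupsrv = pathlib.Path("/tmp/backupsrv")     # backup server's storage
main.mkdir(parents=True, exist_ok=True)
backupsrv.mkdir(parents=True, exist_ok=True)

# Main server packs a backup...
(main / "dump.sql").write_text("db dump")

# ...the backup server pulls a copy (it only ever reads from main)...
shutil.copytree(main, backupsrv / "snapshot-1", dirs_exist_ok=True)

# ...and the main server deletes its local copy.
(main / "dump.sql").unlink()
```

The point is that the main server holds no credentials for the backup server, so ransomware on it cannot reach the stored copies.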
cwagner@lemmy.cwagner.me
on 24 Aug 2023 13:53
It's similar with Proxmox Backup Server. While Proxmox actively writes the backups to PBS, it's PBS that decides what to do with the data and how many versions to keep.
Haui@discuss.tchncs.de
on 24 Aug 2023 14:03
Martin Haslund Johansson, the director of AzeroCloud and CloudNordic, stated that he does not expect any customers to remain with them once the recovery is finally completed.
Moonrise2473@feddit.it
on 24 Aug 2023 11:36
The customers are already lost:
Pay the expensive ransom: if the bad actor hands over the decryption key, customers are relieved but still pissed, take their data, and move somewhere else with a big FO. Go out of business.
Don't pay the ransom: customers are pissed and move somewhere else with a big FO. Go out of business.
hunt4peas@lemmy.ml
on 24 Aug 2023 09:16
Time and time again, data hosting providers are proving that local backups not connected to the internet are way better than storing in the cloud.
IWantToFuckSpez@kbin.social
on 24 Aug 2023 09:48
Any redundant backup strategy uses both. They both have inherent data-loss risks. Local backups are great, but unless you store them in a bunker they are still at risk from fire, theft, vandalism and natural disasters. A good backup strategy stores copies in at least three locations: local, off-site and the cloud. Off-site backups are backups you can physically retrieve, like tapes stored in a vault in another city.
Oh ok. So you’re using them effectively like cold storage backups? I was scared you were going to tell me that you were running a ZFS pool off a USB hub, lol.
The 3-2-1 backup strategy: “Three copies are made of the data to be protected, the copies are stored on two different types of storage media and one copy of the data is sent off site.”
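As a toy illustration of the rule (the function and the tuple fields are invented here), 3-2-1 reduces to a three-part checklist over your list of copies:

```python
def satisfies_3_2_1(copies):
    """copies: list of (location, media_type, offsite) tuples."""
    three_copies = len(copies) >= 3                      # 3 copies of the data
    two_media = len({m for _, m, _ in copies}) >= 2      # on 2 media types
    one_offsite = any(off for _, _, off in copies)       # 1 copy off-site
    return three_copies and two_media and one_offsite

# Example: NAS and tape in the office, plus one cloud copy off-site.
print(satisfies_3_2_1([
    ("office-nas", "disk", False),
    ("office-tape", "tape", False),
    ("cloud-bucket", "object-storage", True),
]))  # True
```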
theshatterstone54@feddit.uk
on 24 Aug 2023 18:20
How would that work in practice? One medium off-site and two media on-premises?
Sounds like they had all their backups online, instead of keeping offline copies. It’s a reminder that everyone needs at least one backup that isn’t connected to any computer. It’s also a reminder that “the cloud” should not be the only place you keep your data, because hosting providers are targets for this stuff and you don’t know how careful they are.
IonAddis@lemmy.world
on 24 Aug 2023 12:47
Danish hosting firms CloudNordic and AzeroCloud have suffered ransomware attacks, causing the loss of the majority of customer data and forcing the hosting providers to shut down all systems, including websites, email, and customer sites.
digdilem@lemmy.ml
on 24 Aug 2023 21:54
I feel really bad for everyone involved - customers and staff. The human cost in this is huge.
Yes, there’s a lot of criticism of backup strategies here, but I bet most of us who deal with this professionally know of systems that would also be vulnerable to malicious attack, and those are only the shortcomings we know about. Audits and pentesting are great, but they’re not infallible, and one tiny mistake can expose everything. If we were all as good as we think we are, ransomware wouldn’t be a thing.
snailtrail@lemmy.world
on 24 Aug 2023 23:31
I think that people generally overestimate how much money tech companies like this one actually make. Their profits are tiny. A lot of the time, tech companies run on investment money, and can’t actually turn a profit. They wait for the big acquisition or IPO payday. So if you think you’re actually gonna get 100k off them, good luck. Sometimes they’re barely keeping the lights on.
DeprecatedCompatV2@programming.dev
on 23 Sep 2023 06:05
A tape library that uses a robot arm https://youtu.be/sYgnCWOVysY?t=30s
Backups that are not connected to any device are not susceptible to being overwritten and encrypted by malware.
Or like that vault in Rogue One?
Neat! Thanks for mentioning it!
They weren’t normally on the same network, but were accidentally put on the same network during migration.
That’s what you call an epic blunder.
It is a company destroying blunder.
I think they’re aware of that
How are you using that 7 port USB hub?
I dunno about that. If you actually were using a USB hub for ZFS, then I have a 10 petabyte flash drive to sell you.
The only downside to something like this would be electrical surges if you leave the drives plugged in.
Exactly.
This is the way.
Other people’s computers. Never forget.
If you fuck up that badly, you shouldn’t be allowed to operate in that industry.
Problem is that you have to work in the industry to fuck up that badly.
They’re a small company, they’ll probably just go bankrupt.
They had one job
People literally pay these guys to not screw up this one thing.
Put all the data in the cloud, they said. It will all be safe and handled by professionals!
How is that even possible? What kind of hosting company operates in a way that lets ransomware take out all the data?
I wonder why they can’t/won’t pay.