People spend a lot of time and effort taking nice photos, but often give very little thought to protecting those memories. Protecting important data does not need to be difficult, but it does need to be deliberate.
The general rule for protecting data is 3-2-1: keep 3 copies of the data, on 2 different media, with at least 1 copy offsite or offline. The most common means of "backing up" data today are USB portable hard drives and cloud hosting providers. If your cloud hosting provider also backs up your data and makes restores available to you, then you've met the 3-2-1 rule: you have three copies of the data (your live data, your sync copy, and your cloud provider's backup copy). However, do not assume this is the case. Ask whether they keep actual backups, whether they can perform restores for you if your data gets corrupted, and what their retention period is (how far back they can go). If they do not, then all you have is a sync copy of your data, and that does not meet the 3-2-1 rule. Malicious software or simple bit rot could corrupt files, and those corrupted files could sync to the cloud hosting provider before you notice; then there is nothing to restore from. If your cloud provider does not offer backups, you will still need to make and maintain one more copy of your data. I would recommend keeping that copy offline and in a form that allows restores back in time. Similarly, a single USB copy of your data is insufficient. You have two copies, so you are still one short of the first rule (three copies of your data), but you do have two media, one of which can be offline (assuming you unplug the drive and store it away somewhere, preferably at an offsite location). This means that with a USB-only approach you'll need two USB hard drives, and best practice would be to store one of them at an offsite location. You can also combine cloud hosting and hard disk storage.
A note about long-term storage on hard disks. Unfortunately, hard disks can suffer from bit rot, damaged sectors, and file system errors. This means data can be lost or corrupted, either in small ways you may not even notice or in whole. Technically, storing on optical media or tape would have fewer of these issues, but optical discs are rarely large enough these days, and tapes and tape drives have their own problems (they are expensive, and tape drives need to be cleaned and eventually fail). Some file systems include integrity monitoring that can detect, and in some cases correct, these issues. If you are using a Synology NAS, I would recommend going into Control Panel, Shared Folders, and turning on Data Checksum for Advanced Data Integrity (likely under the Advanced tab), and in Storage Manager (check the package manager if it's not installed) clicking on your storage pool and enabling Data Scrubbing. Here's an article with more information about data scrubbing. Many NAS operating systems are Linux based and will support the Btrfs or ZFS file systems; I would recommend looking up how to enable and configure Btrfs or ZFS for your storage drives if you are using one of those models. If you are storing data on a disk that does not support Btrfs or ZFS, you can still monitor the integrity of those files. Here's an easy-to-use application that will calculate checksums of the files for you. Note that it does not provide a recovery option: once an error is detected, you'll need to use one of your clean copies of the data to do the restore. You also need to run the program manually whenever you want to verify the integrity of the files. Here's one that's even simpler to use, chkbit (run it with -u to update the hash files, and without -u to verify; always run it with -u the first time to create the hash files). Keep in mind that if you edited the files, you would expect the checksums to have changed. A more advanced solution, such as PyFileFixity, can also attempt recovery.
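To make the idea behind these checksum tools concrete, here is a minimal sketch of manifest-based integrity checking: record a hash for every file once, then re-hash later and report anything that changed. This is only an illustration of the concept, not a replacement for the tools above; the manifest name and the /photos folder are my own placeholder choices.

```python
# Minimal sketch of manifest-based integrity checking, similar in spirit to the
# checksum tools mentioned above. The manifest name and data folder are
# hypothetical; adjust them to your own layout.
import hashlib
import json
from pathlib import Path

MANIFEST = Path("checksums.json")  # hypothetical manifest file

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: Path) -> None:
    """Record a checksum for every file under root (like a first run with -u)."""
    manifest = {str(p.relative_to(root)): sha256_of(p)
                for p in root.rglob("*") if p.is_file()}
    MANIFEST.write_text(json.dumps(manifest, indent=2))

def verify_manifest(root: Path) -> None:
    """Re-hash every recorded file and report mismatches or missing files."""
    manifest = json.loads(MANIFEST.read_text())
    for rel, expected in manifest.items():
        path = root / rel
        if not path.exists():
            print(f"MISSING   {rel}")
        elif sha256_of(path) != expected:
            print(f"CORRUPTED {rel}")  # restore this file from a known-good copy

if __name__ == "__main__":
    photos = Path("/photos")  # hypothetical data folder
    build_manifest(photos)    # first run: create the manifest
    verify_manifest(photos)   # later runs: detect bit rot or accidental changes
```

As with the tools above, a detected mismatch only tells you something changed; the actual repair still comes from one of your clean copies.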
Syncing data and backing it up are not the same thing. A sync is simply a copy of your data; a proper backup allows recovery of files from different points in time. The oldest data the backup solution can access is set by your retention period (this tells the backup software when to delete data). To save space, most modern backup solutions use incremental backups: the first time the software runs, it makes a full backup of all your data, and each time it runs after that, it only backs up the files (or blocks of the disk) that have changed. This is much more efficient. Each time the data changes, those changes are placed into the backup, and should you discover any issues, you can simply restore an older version of that file. The downside is that if one of the incremental backups in the chain is damaged, it may cause problems restoring any of the backups after that point in time. For this reason, proper backup software should offer a means of verifying the backup chain. Additionally, you will need to do test restores to make sure the backups are working as believed; sometimes issues are only discovered when performing a test restore. I would recommend restoring the test files to an alternative location (even if it's just a different folder). Also, if you're using encryption, make sure you do not lose your encryption key (your backups are useless without it). The backup software I recommend is Duplicati. It works well enough, it is free, it supports macOS/Linux/Windows, it can be installed on a Synology NAS, it supports many cloud hosting providers but can also be used with local disks, and it incorporates a test function (you should still perform test restores manually). The setup is quite straightforward. However, one thing that is less obvious is that you should export a copy of your backup job (done within the software). I would also copy duplicati-server.sqlite and the other SQLite files manually to another location. On Linux machines the SQLite files are in ~/.config/Duplicati or /root/.config/Duplicati, and on Windows in %LocalAppData%\Duplicati. There are many backup solutions, both paid and free. The important thing is to understand how they work, how to verify the backup chains, and exactly what is needed to restore in the event the backup system itself becomes unavailable. Do manual test restores regardless of any promises made.
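If you want to automate copying those Duplicati SQLite files, here is a rough sketch using the default paths mentioned above. The destination folder and the dated-subfolder scheme are my own assumptions, and the exported backup-job file itself still has to be created from within the Duplicati interface.

```python
# Rough sketch: copy Duplicati's local SQLite databases to a second location.
# Uses the default config paths mentioned above; the destination is hypothetical.
# Ideally run this while Duplicati is not in the middle of a backup, so the
# database files are in a consistent state.
import os
import shutil
import sys
from datetime import date
from pathlib import Path

# Default Duplicati config locations (see above); pick whichever exists.
if sys.platform == "win32":
    candidates = [Path(os.environ["LOCALAPPDATA"]) / "Duplicati"]
else:
    candidates = [Path.home() / ".config" / "Duplicati",
                  Path("/root/.config/Duplicati")]

source = next((p for p in candidates if p.is_dir()), None)
if source is None:
    raise SystemExit("No Duplicati config folder found in the expected locations.")

# Hypothetical destination: a dated folder on a second disk or network share.
dest = Path("/mnt/backup-disk/duplicati-config") / date.today().isoformat()
dest.mkdir(parents=True, exist_ok=True)

# Copy duplicati-server.sqlite and the per-job *.sqlite databases.
for db in source.glob("*.sqlite"):
    shutil.copy2(db, dest / db.name)
    print(f"copied {db.name} -> {dest}")
```

Keeping dated copies means you can also fall back to an older configuration if a recent change to a backup job turns out to be a mistake.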