
Backup Archive Copies Being Deleted

Doequer :

Mar 01, 2019

Hello, I never thought of this situation before, but it just happened to me.

Basically, I have one common source and two destinations, handled by two almost identical backup jobs; the only differences between them are their paths and the "backup copies" related option. Both jobs are configured to run "continuously", so whichever drive is connected/present at the time of a change gets the job done. The thing is, the drives aren't always present in the same way; sometimes only one of them is connected, sometimes both. Under those circumstances, after I made some changes to the source, they were propagated by the job that isn't keeping "archival" copies to its (already connected) destination drive; this led to the updated files being just deleted, which is expected. But once I connected the second destination drive (the one keeping archival copies), its job simply didn't archive any of the original files, since by then they had already been deleted by the previous job.

I'm wondering, isn't there some kind of feature that prevents the situation described above? Other than me being aware of which drive is hooked up at a given moment before making changes to the source, or changing one of the backup jobs from "real time" to delayed/manual execution.

Thanks.

Alex Pankratov :

Mar 03, 2019

I don't understand your setup, sorry. In particular, these bits:

this led to the updated files being just deleted


since by then they had already been deleted by the previous job


No job will delete anything from the source, and you said that both jobs use the same source (i.e. it's not that the second job uses the first job's destination as its source).

Doequer :

Mar 04, 2019

After doing some new tests, I realized I had some misconceptions. I think the issue I faced wasn't related to the execution of an additional backup job, but to the program's "by design" behavior.

Having a backup job with these settings:

Backup from: "E:\" (SOURCE, fixed int. disk).
Backup to: "K:\" (ext. disk).
What to backup: everything with some exceptions (default rules).
When to backup: Continuously (real-time).
Detecting changes: Use destination snapshot.
Copying: Use delta copying.
Deleting: Archive backup copies... (deleting after two weeks).
More options: Left as default.

I updated the file "1234.txt" at the source, located in the "docs v1.1" folder; what I did was replace that file with a new version that kept the same name, and then I renamed the folder to "docs v1.2". The thing is, the external backup disk ended up with the latest version of "1234.txt", but the original one wasn't archived. The only thing that got archived was an empty folder with the name from the previous version (docs v1.1). Sometimes that folder was archived more than once, with a timestamp value appended.

So I'm wondering, is there something that can be done about how the program handles the archival of "updated" files? I mean, other than the manual task of actually deleting same-named files from the source before copying their updated versions over, as sketched below. Doing that guarantees the files about to be replaced will indeed get archived, but it's not very practical.
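Just to make that workaround concrete, here's a rough sketch of what I mean by "deleting first" (an illustrative Python script only; the paths and the delay are made up, and it's not anything the program itself provides):

import os
import time

SOURCE_FILE = r"E:\docs v1.1\1234.txt"   # made-up example path at the source
NEW_VERSION = r"C:\staging\1234.txt"     # made-up location of the updated file

# 1. Delete the old file at the source, so the real-time job sees a deletion
#    and archives the existing backup copy as per the "Deleting" setting.
os.remove(SOURCE_FILE)

# 2. Give the real-time job some time to pick up and process the deletion.
#    (The delay is arbitrary; there is no way here to wait on the job itself.)
time.sleep(60)

# 3. Put the new version in place; the job now sees a newly created file
#    and copies it to the backup as usual.
with open(NEW_VERSION, "rb") as src, open(SOURCE_FILE, "wb") as dst:
    dst.write(src.read())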

Alex Pankratov :

Mar 04, 2019

The thing is, the external backup disk ended up with the latest version of "1234.txt", but the original one wasn't archived.


That's because the original file wasn't deleted, but moved.

You can switch off move/rename detection in Backup Settings > More Options, in which case every rename will be treated as a file being deleted and created, and the deletion will be processed as per your "Deleting" setting.

Alternatively, you can enable archival of _modified_ files, but there are some performance caveats - https://bvckup2.com/support/forum/topic/502/2958
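Conceptually, with move/rename detection off, a rename at the source ends up being handled along these lines (a simplified Python sketch for illustration only, not the actual implementation; the function and parameters are made up):

import shutil
from pathlib import Path

def rename_seen_as_delete_plus_create(source_root: Path, backup_root: Path,
                                      archive_root: Path,
                                      old_rel: str, new_rel: str) -> None:
    # With detection off, the old name looks like a deleted file, so its
    # existing backup copy is archived per the "Deleting" setting ...
    old_copy = backup_root / old_rel
    if old_copy.exists():
        archived = archive_root / old_rel
        archived.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(old_copy), str(archived))

    # ... and the new name looks like a brand new file, so it is copied
    # to the backup location in full.
    new_copy = backup_root / new_rel
    new_copy.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(str(source_root / new_rel), str(new_copy))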

Doequer :

Mar 04, 2019

Thanks for the suggestion, but I just tried the same tests without the "move/rename" detection feature enabled, and the results were exactly the same; as expected, I would say, since I was never renaming or moving any files, but "overwriting" existing ones. That operation will always replace the current files, either via delta copying or in full, but if the source files have the very same name as the new/updated files, they just get overwritten without any archival.

Regarding the second option, I think it's kind of "overrated", at least for the simple purposes I'm dealing with; all the more so if it implies certain detrimental effects on the backups.

So, considering the current situation, I think the best I can do is think twice before overwriting certain files, or go ahead but only with some workaround like the one I mentioned before.

Alex Pankratov :

Mar 04, 2019

I just tried the same tests without the "move/rename" detection feature enabled, and the results were exactly the same


They should not be the same. Here's what you should see if you switch off move/rename, set "Deleting" to "Archive", rename some file at the source and re-run the job:

2019.03.04 14:27:00     Processing ...
2019.03.04 14:27:00         A total of 2 steps
2019.03.04 14:27:00         1. Archiving file Programs\test.abc
2019.03.04 14:27:00         2. Copying file Programs\test.xyz
...
2019.03.04 14:27:00             Completed in 63 ms, copied in full

I also don't think I follow you anymore in general.

Do you expect "Archive copies of *deleted* files" to somehow apply to *modified* files, but without using the "archive modified" option?

Doequer :

Mar 10, 2019

Hi, I do indeed see those same steps in the log once I deactivate those specific options:

2019.03.10 10:50:37.149 (UTC-3) 2 1     Processing ...
2019.03.10 10:50:37.149 (UTC-3) 3 2         A total of 2 steps
2019.03.10 10:50:37.149 (UTC-3) 2 2         1. Archiving file TEST DOC-2.txt
2019.03.10 10:50:37.150 (UTC-3) 2 2         2. Copying file TEST DOC.txt
2019.03.10 10:50:37.150 (UTC-3) 3 3             8 bytes, created 2019.03.10 10:47:11.718, modified 2019.03.10 10:46:59.729, archive
2019.03.10 10:50:37.150 (UTC-3) 3 4                 Raw: 8 / 131966992317187460 / 131966992197290602 / 00000020
2019.03.10 10:50:37.157 (UTC-3) 3 3             Completed in 7.36 ms, copied in full
2019.03.10 10:50:37.658 (UTC-3) 2 1     Completed in 514 ms with no errors
2019.03.10 10:50:37.658 (UTC-3) 3 2         Read 8 bytes, wrote 8 bytes

But that isn't the case I'm talking about. This is what I'm talking about:

Processing ...
2019.03.10 10:59:30.568 (UTC-3) 3 2         A total of 1 step
2019.03.10 10:59:30.568 (UTC-3) 2 2         1. Updating file TEST DOC.txt
2019.03.10 10:59:30.568 (UTC-3) 3 3             Details
2019.03.10 10:59:30.568 (UTC-3) 3 4                 Source: 63 bytes, created 2019.03.10 10:47:11.718, modified 2019.03.10 10:59:17.708, archive
2019.03.10 10:59:30.568 (UTC-3) 3 5                     Raw: 63 / 131966992317187460 / 131966999577082702 / 00000020
2019.03.10 10:59:30.568 (UTC-3) 3 4                 Backup: 8 bytes, created 2019.03.10 10:47:11.718, modified 2019.03.10 10:46:59.729, archive
2019.03.10 10:59:30.568 (UTC-3) 3 5                     Raw: 8 / 131966992317187460 / 131966992197290602 / 00000020
2019.03.10 10:59:30.576 (UTC-3) 3 3             Completed in 8.08 ms, copied in full
2019.03.10 10:59:30.628 (UTC-3) 2 1     Completed in 66 ms with no errors
2019.03.10 10:59:30.628 (UTC-3) 3 2         Read 63 bytes, wrote 63 bytes

As you can see, I'm not deleting/moving/renaming anything, but "overwriting" a file with an updated version. Under that circumstance, what I'm trying to avoid is losing the original files at the source once they are modified not by being deleted/renamed/moved, but just by being overwritten.

So, taking the above example into consideration, I'm wondering if there is a way for that original "8 bytes" file, still present in the source folder before the backup job processes the update, to get archived as well; otherwise, in cases like this, the "updating" step means the original file is simply lost, because another file (newer or older) with the same name, but not necessarily the same content, has overwritten it.

I hope that makes it clearer.

Thanks.

N.B.: In my previous message, when talking about the "second option", I inadvertently wrote "overrated" instead of "overkill".

Alex Pankratov :

Mar 10, 2019

Under that circumstance, what I'm trying to avoid is losing the original files at the source once they are modified not by being deleted/renamed/moved, but just by being overwritten.


Have you seen this - https://bvckup2.com/support/forum/topic/502/2958 ?

Because it would seem that it does exactly what you want - it archives existing backup copies before they are updated due to the changes at the source.

If that's not it, then please try to explain in the form of a detailed example - "I do this and that" and "I expect this and that to happen". I am still not 100% clear on what you are describing above (because you keep referring to the source and the originals, whereby the program has no control over these, only over their backup copies).

Doequer :

Mar 11, 2019

Yes, that "versioning" feature is what I was talking about. I tried it, and the only negative point I see about it is having to disable the "delta" copying feature. Other than that, if I set a similar period of time for deleting the archived files in the current job, and considering it won't be dealing with really big files nor get triggered too often, there shouldn't be any problems.

Couldn't you make that "versioning" related option a kind of compromise, so that only the specifically "overwritten" files get copied in full, while all the rest keep using the "delta" feature if needed?

Anyway, considering I have more than one backup job for the same source, I could simply put a delay on one of them, so it runs after the same amount of time I set as the "archive deletion" period for the "real-time" version of the job. Or simply alter one character in the names of the files that stay the same between updates, so I know they will still get archived.

Thanks.

Alex Pankratov :

Mar 19, 2019

the only negative point I see about it is having to disable the "delta" copying feature.


This is because the "delta" copying is pointless when modified files are archived.

Archiving a modified file works by moving its existing copy to the archive, meaning that afterwards the file no longer has a copy at the _backup_ location and needs to be recopied from scratch, in full. While it's possible to use "delta" copying to create this file, doing so is completely pointless, because with the next update the file will again need to be re-copied from scratch.
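In sketch form, the sequence is along these lines (illustrative Python only, not the actual code; the names are made up), which is why there is never a previous copy left at the destination to delta against:

import shutil
from pathlib import Path

def update_with_archiving(source_file: Path, backup_file: Path,
                          archive_dir: Path) -> None:
    # The existing backup copy is moved into the archive first ...
    if backup_file.exists():
        archive_dir.mkdir(parents=True, exist_ok=True)
        shutil.move(str(backup_file), str(archive_dir / backup_file.name))

    # ... so no prior copy remains at the backup location, delta copying
    # has nothing to work from, and the file is copied from scratch.
    shutil.copy2(str(source_file), str(backup_file))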

Couldn't you make that "versioning" related option a kind of compromise, so that only the specifically "overwritten" files get copied in full, while all the rest keep using the "delta" feature if needed?


I don't understand this bit, sorry.
