Safe to assume OS uses same clusters to rewrite files?

This question actually applies to any file wiping tool that doesn't rewrite files by accessing the clusters directly, but I'm asking here because BleachBit got my attention by advertising that 1 pass is enough, while other tools make you think that real safety means rewriting files 35 times over, which, from what I've read around, doesn't make sense in current times. My question is about Windows and NTFS, but info on other platforms is welcome, I like learning.

How can one be sure that the OS will reuse exactly the same clusters when rewriting a file, and not, for whatever reason, allocate one or more new clusters for portions of the new data? Maybe it's a dumb question and "it's obvious because that's just how it works." But I'm curious, because file systems are complex beasts; one could use new units to optimize something, like avoiding fragmentation, I don't know. Thanks a bunch!

Generally you should not assume the same clusters are overwritten. Besides the file system, the application can cause this: for example, you save a document three times, and each time it gets written to a new spot. This behavior depends on the application and on the file (e.g., whether the new version of the file is the same size as or smaller than the previous version).
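
One common reason for this is the "atomic save" pattern: many applications write the new version to a temporary file and then rename it over the original, so the new bytes land wherever the file system allocates the temporary file, not in the original file's clusters. Here is a minimal Python sketch of that pattern (the function name safe_save is just illustrative, not anything from BleachBit):

```python
import os
import tempfile

def safe_save(path, data):
    """Save by writing a new temporary file, then renaming it over the original.

    The new bytes land in whatever clusters the file system allocates for the
    temporary file, so the original file's clusters still hold the old contents
    until something else happens to overwrite them.
    """
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())      # push the data to disk
        os.replace(tmp_path, path)    # atomically swap in the new file
    except Exception:
        os.unlink(tmp_path)           # clean up the temp file on failure
        raise
```

If you want to see what actually happened on NTFS, you can compare a file's cluster layout before and after a save with `fsutil file queryExtents <file>` from an elevated command prompt; if the extents change, the previous contents are still sitting in the old clusters.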

Overwriting individual files gives a modest amount of privacy. The benefit is that it is really quick (i.e., cheap), and it will deter an average person trying to recover the file (and probably some above-average people too).

So for cases where the risk justifies the additional effort and cost, the BleachBit documentation gives complementary advice: wipe the free space of the whole partition while keeping the file system intact (using a tool like BleachBit), wipe the whole partition destroying everything on it, use encryption, physically destroy the drive, etc.
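
For the first option, the basic idea is to fill the volume's free space with junk and then delete the junk, so that clusters left behind by deleted files get overwritten. A rough Python sketch of that idea follows; the filename and chunk size are arbitrary, and real tools handle file system details (and error cases) far more carefully:

```python
import os
import secrets

def wipe_free_space(directory, chunk_size=16 * 1024 * 1024):
    """Fill the volume containing `directory` with a junk file, then delete it.

    Clusters that previously held deleted files get overwritten along the way.
    This only illustrates the concept; it is not a complete implementation.
    """
    junk_path = os.path.join(directory, "wipe_free_space.tmp")  # arbitrary name
    try:
        with open(junk_path, "wb") as f:
            try:
                while True:
                    # One pass of random data; a single pass is enough on
                    # modern drives.
                    f.write(secrets.token_bytes(chunk_size))
            except OSError:
                pass  # disk full: the free space has now been overwritten
            try:
                f.flush()
                os.fsync(f.fileno())
            except OSError:
                pass  # flushing the last partial buffer may also hit "disk full"
    finally:
        if os.path.exists(junk_path):
            os.remove(junk_path)  # return the space to the file system
```

Note that this overwrites only space that is currently free; it does not touch data still allocated to existing files, which is why the other options (full-partition wipe, encryption, physical destruction) exist for higher-risk cases.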

---
Andrew, lead developer