File tools for the Windows commandline


Some information about commandline tools

  FCB V0.8.3.0 - File Compare Binary - Replacement for Windows FC.EXE
FCB is a fast replacement for Windows FC.EXE. It supports only the binary mode of the original

The original often behaves poorly: when comparing large files it allocates huge amounts of memory, and it reads the files through the Windows file cache even when that is useless because of the file size.

FCB reads directly from the disk. It uses three separate threads: two for reading the files and one for comparing. So FCB is nearly always faster than FC.EXE. The only exception is comparing a file with itself: FCB then really reads the data twice from disk, while FC gets the data of the 'second file' from the Windows file cache.
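The three-thread layout described above can be sketched as follows. This is an illustrative Python sketch, not FCB's code: two reader threads fill per-file queues with blocks, and the main thread compares them; the block size and queue depth are assumptions.

```python
import threading, queue

BLOCK = 1 << 20  # 1 MiB read granularity (an assumption, not FCB's actual value)

def _reader(path, q):
    # Reader thread: stream fixed-size blocks into a bounded queue.
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK)
            q.put(block)
            if not block:          # empty read = EOF marker
                return

def compare_files(path1, path2, ignore_size=False):
    """True if the files compare equal; with ignore_size, compare only
    up to the smaller size (like FCB's -i switch)."""
    q1, q2 = queue.Queue(maxsize=4), queue.Queue(maxsize=4)
    for path, q in ((path1, q1), (path2, q2)):
        threading.Thread(target=_reader, args=(path, q), daemon=True).start()
    while True:
        b1, b2 = q1.get(), q2.get()
        n = min(len(b1), len(b2))
        if b1[:n] != b2[:n]:
            return False           # data mismatch within the common part
        if len(b1) != len(b2):
            return ignore_size     # one file ended first: sizes differ
        if not b1:
            return True            # both files ended together
```

Because reading and comparing overlap, the comparer is never idle while the disk is busy, which is the point of the design.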

The usage is similar to the original, but the /B parameter doesn't exist because FCB only supports the binary mode. Processing sub-dirs isn't implemented yet.

FCB FileSpec1 FileSpec2 [-i]

-i   ignore different file sizes, compare up to the smaller size

(~ 90KB)

Last update: 8 Jan 2019


  FFC V1.6.5.0 - Fast File Copy
FFC is optimized for large files. It reads and writes directly from and to disk, written data can be verified.
But if NTFS file compression is involved, there is a file cache even if FILE_FLAG_NO_BUFFERING is used, and with FILE_FLAG_NO_BUFFERING there is more fragmentation than without. Therefore, if the target file has NTFS file compression, FILE_FLAG_NO_BUFFERING isn't used (since V1.1.7).
It can deal with system volume mountpoints (e.g. \\?\Volume{97ba6ca1-45f2-11e3-86bb-901b0e04e7f7}).
If the source contains hardlink groups then they are recreated at the target.
If there are reparse points in the source then they will be recreated at the target, pointing to the same location, or to the new location if the reparse point's target is part of the source. With param -p they are parsed instead and the files found are copied.
NTFS security attributes can be copied (-sec), alternate data streams too (-ads).
Files can be excluded from being copied by one or multiple exclude masks (sample: -e:*.tmp -e:*.bak).
FFC can deal with devices like \\.\PhysicalDrive1 etc. It has an internal source device \\.\NUL.
If the source or target device fails while copying, FFC tries to reopen the file and continues where it failed.
Since V1.2 FFC has full support for paths longer than 256 characters (up to nearly 32768). But the path depth is limited to 1024 levels.
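The reopen-and-resume behavior can be sketched like this. This is a hedged illustration of the idea, not FFC's implementation: on a read error the source is reopened and the copy resumes at the failed offset; the chunk size and the retry count default are assumptions (FFC's -r:n defaults to 10 for sector retries).

```python
CHUNK = 4 * 1024 * 1024   # 4 MiB copy granularity (an assumption)

def copy_with_resume(src, dst, retries=10):
    """Copy src to dst; on a read error, reopen the source and resume
    at the failed offset. Returns the number of bytes copied."""
    offset = 0
    with open(dst, "wb") as out:
        while retries >= 0:
            try:
                with open(src, "rb") as f:    # (re)open the source
                    f.seek(offset)
                    out.seek(offset)          # resume position on both sides
                    while True:
                        data = f.read(CHUNK)
                        if not data:
                            return offset
                        out.write(data)
                        offset += len(data)   # advance only after success
            except OSError:
                retries -= 1                  # retry, like FFC's -r:n
        raise OSError("giving up on " + src)
```

The key point is that the offset only advances after a successful read/write pair, so a failed chunk is retried rather than skipped.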

FFC FileSpec1 FileSpec2 [-s][-o][-oo][-or][-k][-sr][-sw][-u][-h][-p][-mir][-scan][-t:nn][-sec][-ads][-c0|-c1|-c2][-lzx][-sp][-sz][-e][-v][-ve][-r:n][-ds][-f][-1][-q][-n]
-s     recurse subdirs (default if source is a folder)
-o     overwrite existing files
-oo    overwrite older existing files
-or    overwrite older existing files even if they have the read-only attribute
-k     skip existing files
-sr    share read (by default FFC wants exclusive read access on devices)
-sw    share write (by default FFC needs exclusive write access)
-u     avoid unnecessary writes when overwriting existing files
-h     create hardlinks instead of copies (on same volume only)
-p     parse reparse points (instead of copying the reparsepoint itself)
       (reparse points which point to the source path are never parsed,
       the copy will point to the target path)
-mir   mirror: delete files at target which do not exist in source (be careful...)
-scan  scan target to find existing source file under different name or path
       avoiding unnecessary writes (very experimental); only valid with -mir
-t:nn  test size for -scan: to find existing identical files FFC scans by default
       1MB at start, middle and end of the file. With -t:nn a different test size
       can be set. Units such as K, M, T can be used. -t:f for full scan
-sec   copy security attributes
-ads   copy alternative data streams
-c0    do not compress target
-c1    compress target if source is compressed
-c2    compress target
       (default is don't care, compression is inherited from parent folder)
-lzx   LZX compress target (Win10+)
-sp    set 'sparse file' attribute for target files (sparse blocks of 1 MB granularity)
-sz    skip all-zero blocks when writing (useful only when writing to devices)
-e     exclude mask (once for each mask)
-v     verify written data
-ve    verify written data and existing files
-r:n   on CRC or IO error retry read sectors n times, default is 10
-ds    delete source file(s)
-f     flush target volume's file cache (needs admin privileges)
-1     print file names into one line by shortening to the width of the console window
-q     quiet mode
-n     no wait for key on finish if started standalone
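The -sz idea of skipping all-zero blocks can be sketched as follows. This is an illustrative Python sketch for regular files (block size is an assumption); FFC applies it when writing to devices, where a final truncate doesn't apply.

```python
ZBLOCK = 1 << 20   # 1 MiB zero-detection granularity (an assumption)

def write_skipping_zeros(data, path):
    """Write data to path, seeking past all-zero blocks instead of
    writing them; a final truncate fixes the file length."""
    with open(path, "wb") as f:
        pos = 0
        for i in range(0, len(data), ZBLOCK):
            block = data[i:i + ZBLOCK]
            if block.count(0) == len(block):
                pos += len(block)          # all zeros: just skip forward
            else:
                f.seek(pos)                # catch up after any skip
                f.write(block)
                pos += len(block)
        f.truncate(len(data))              # extend/trim to the final size
```

On a filesystem that supports sparse files, the skipped ranges become holes that read back as zeros, so the result is byte-identical to a plain copy.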

(~ 100KB)

Last update: 19 Dec 2022

  FSF V1.1.3 - Find Same File - Finds identical files
FSF searches for identical files in one or two paths. Found duplicates are listed and can be deleted, replaced by hardlinks, or renamed.
FSF first determines all file sizes because only files with identical sizes can be identical at all. Instead of comparing the whole files FSF can test a small part only (param -t) which is way faster.
For finding files with identical sizes as fast as possible the file lists are sorted by size. Therefore the output isn't sorted by folders but by size.
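The strategy described above can be sketched like this. An illustrative Python sketch, not FSF's code: candidates are grouped by size first, then compared by a small test block at start, middle and end (the -t idea) instead of reading whole files.

```python
import os
from collections import defaultdict

def sample(path, test_size):
    # Read 3 test blocks (start, middle, end), or the whole file if small.
    size = os.path.getsize(path)
    if size <= 3 * test_size:
        return open(path, "rb").read()
    chunks = []
    with open(path, "rb") as f:
        for off in (0, (size - test_size) // 2, size - test_size):
            f.seek(off)
            chunks.append(f.read(test_size))
    return b"".join(chunks)

def find_same(paths, test_size=1 << 20):
    by_size = defaultdict(list)
    for p in paths:
        by_size[os.path.getsize(p)].append(p)   # only equal sizes can match
    dupes = []
    for group in by_size.values():
        seen = {}
        for p in group:
            key = sample(p, test_size)
            if key in seen:
                dupes.append((seen[p] if False else seen[key], p))
            else:
                seen[key] = p
    return dupes
```

Grouping by size first means files with a unique size are never even opened, which is where most of the speed comes from.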

FSF - Find Same File V1.1.3 (Win32) - Freeware by Uwe Sieber
FSF FileSpec1 [FileSpec2] [-t:testsize][-m:minfilesize][-x:maxfilesize]
-t:size  test 3 blocks of this size only instead of the whole file
         e.g. -t:1M for testing 1 MB at file start, mid and end
-m:size  minimum file size to test, smaller files are skipped
         e.g. -m:1K for testing files larger than 1KB
-x:size  maximum file size to test, larger files are skipped
         e.g. -x:1M for testing files smaller than 1MB
-i   identical file names only
-s   scan subdirs 1+2
-s1  scan subdirs 1 (default if FileSpec1 is a folder)
-s2  scan subdirs 2 (default if FileSpec2 is a folder)
-r   scan reparse points 1+2
-r1  scan reparse points 1
-r2  scan reparse points 2
-fh  find hardlinks (by default hardlinks are not handled as identical files)
-ded delete emptied directories
-n   no wait for key on finish if started standalone
if duplicate found:
-d1  delete file1
-d2  delete file2
-d   delete the file which is more fragmented
-h   replace the file which is more fragmented by hardlink
-dsn delete the file with the shorter file name
-dln delete the file with the longer file name
-dsp delete the file with the shorter path name
-dlp delete the file with the longer path name
-do  delete the file with the older write date
-dn  delete the file with the newer write date
-e1  rename file1
-e2  rename file2
When dealing with one folder only, the file with the deeper path is considered file2.
(~ 90KB)

Last update: 25 March 2022


  ListLinks V1.6 - lists reparse points, symbolic links and hard links
ListLinks is a commandline tool which lists:
  • reparse points
    • mount points: directory entries which point to logical volumes or other local directories (junction points)
    • symbolic links: file or directory entries which point to another file or directory - also with relative and remote paths
  • hard links: file entries which point to the same file as at least one other entry on the same drive (on an NTFS drive every file entry is a hard link, but usually there is exactly one per file)
It supports Microsoft file systems only.


ListLinks FileSpec [-s]
-s   recurse subdirs
ListLinks C:\* -s
Reparse points are shown as they are found, while hard links must be collected and matched first, so they are listed at the end.
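The collect-and-match step for hard links can be sketched like this. A hedged illustration using the POSIX stat interface (device/inode pair); ListLinks itself works on NTFS, where the analogous key is the volume plus the NTFS file ID.

```python
import os
from collections import defaultdict

def hardlink_groups(paths):
    """Group paths that are directory entries of the same underlying file."""
    groups = defaultdict(list)
    for p in paths:
        st = os.stat(p)
        groups[(st.st_dev, st.st_ino)].append(p)   # same file = same key
    # Only files with more than one entry are interesting as hard links.
    return [g for g in groups.values() if len(g) > 1]
```

This is why hard links appear at the end of the output: the matching can only be done once all entries have been collected.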


Last update: 31 Oct 2021


  SetFileSize V1.0 - sets the size of a file
SetFileSize truncates or extends a file to the given size. Examples:
SetFileSize test.bin 1024

SetFileSize test.bin 0x400

SetFileSize test.bin 1024Ki
The following units can be used:
Ki  2^10  (1024)
Mi  2^20  (1024*1024)
Gi  2^30  (1024*1024*1024)
Ti  2^40  (1024*1024*1024*1024)
K   10^3  (1000)
M   10^6  (1000*1000)
G   10^9  (1000*1000*1000)
T   10^12 (1000*1000*1000*1000)
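The behavior under these units can be sketched as follows. An illustrative Python sketch, not SetFileSize's code: the size argument accepts decimal, hex (0x...) and the unit suffixes above, and the file is then resized; the exact parsing rules are assumptions.

```python
import os, re

UNITS = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4,
         "K": 10**3, "M": 10**6, "G": 10**9, "T": 10**12}

def parse_size(text):
    """Parse '1024', '0x400', '1024Ki', '2M', ... into a byte count."""
    m = re.fullmatch(r"(0x[0-9A-Fa-f]+|\d+)(Ki|Mi|Gi|Ti|K|M|G|T)?", text)
    if not m:
        raise ValueError("bad size: " + text)
    number = int(m.group(1), 0)          # base 0 handles 0x... and decimal
    return number * UNITS.get(m.group(2), 1)

def set_file_size(path, size_text):
    # 'ab' creates the file if missing and keeps existing data;
    # truncate() both shrinks and extends (extension reads as zeros).
    with open(path, "ab") as f:
        f.truncate(parse_size(size_text))
```

Note that all three example calls above ('1024', '0x400', '1024Ki') are valid inputs; the first two both mean 1024 bytes, the third means 1 MiB.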


Last update: 06 July 2014



Uwe Sieber