I frequently exchange huge (approx. 35 GB) password-encrypted GPG files with my partners as part of development projects. These data packages typically contain proprietary binary files (e.g. CAD models, documents, etc.), which is why they are substantially large. Due to their sheer size, the encrypted files are stored on a NAS system accessed via the SMB protocol, which has two immediate consequences:
(1.) issuing "gpg -o <Decrypted_file.tar.gz> -d <Encrypted_file.tar.gz.gpg>" in a Windows 7 terminal is impracticably slow; decrypting a 35 GB file takes about 12 hours (or more).
(2.) Decrypting the file via Kleopatra speeds the process up significantly, but the decrypted file is temporarily generated in %temp%, i.e. the temporary directory of the respective user, and only afterwards copied by default to the folder in which the encrypted file is stored. This is fine as long as the system disk is large enough to hold the decrypted file. But as M.2 SSDs are becoming increasingly popular (e.g. in my laptop), space may be so limited that the operation fails because the %temp% directory runs out of space. My laptop, for instance, has a 256 GB M.2 system disk, and my CAD and FEA software already consumes a considerable part of its capacity. As this is a CAE-certified workstation notebook, replacing the SSD with a bigger one (for a fortune, by the way) for the sole purpose of decrypting large files isn't an attractive option. (A possible interim workaround is sketched below.)
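For now I help myself with the following interim workaround: redirecting the user's temp directory to a larger drive before launching Kleopatra. The paths are only examples and assume a secondary drive D: with enough free space; whether Kleopatra honours %TEMP% set this way on every setup is an assumption on my part, so please treat this only as a sketch, not a fix:

    :: assumption: point %TEMP%/%TMP% at a larger drive for this session only
    set "TEMP=D:\gpg-temp"
    set "TMP=D:\gpg-temp"
    mkdir "%TEMP%" 2>nul
    :: assumption: default Gpg4win install path; adjust if Kleopatra lives elsewhere
    start "" "C:\Program Files (x86)\Gpg4win\bin\kleopatra.exe"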
Hence my feature request: issue (2.) could be resolved easily by providing an option in Kleopatra that allows the decrypted file to be generated directly in the same directory as the encrypted file (and not in %temp%). This corresponds to option (1.), but unfortunately gpg invoked in a terminal is far too slow to be used on CIFS shares...
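For illustration, this is the behaviour I am asking for, expressed with the existing command-line tool (the paths are just examples for a share mounted as drive N:); it already works today, but as noted above it is far too slow over CIFS:

    gpg -o "N:\exchange\Data_package.tar.gz" -d "N:\exchange\Data_package.tar.gz.gpg"

If Kleopatra wrote its output next to the encrypted file in the same way, no local temp space would be needed at all.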
Keep up the good work!
Cheers,
Stefan