Sep 6 2023
That should be easy on Unix, but on Windows we have nul, nul:, and IIRC also /dev/nul.
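For context, the stdout-redirection workaround suggested further down (gpg .... -o - </dev/null >/dev/null) would translate to roughly the following in a Windows cmd.exe session, with NUL standing in for /dev/null and --decrypt used only as an illustrative operation:

gpg --decrypt -o - <NUL >NUL

Since the shell performs the redirection, gpg itself never opens the NUL device for output and therefore has nothing to remove on failure.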
@iklocker: Which gpg bug do you mean?
Sep 1 2023
Thanks. For the record, done at https://lists.gnupg.org/pipermail/gnupg-users/2023-August/066692.html.
Aug 23 2023
It may be better to open a separate issue for the problem in gpg, so that it isn't overlooked or forgotten when the issue in gpgtar is fixed.
That is intentional: if we are able to remove a file, we do it. The solution for you is easy: gpg .... -o - </dev/null >/dev/null
This looks like the same problem I encountered in Gentoo's Portage. To unlock the binary package signing key, Portage will run the equivalent of gpg --homedir ... --digest-algo ... --local-user ... --output /dev/null /dev/null. If unlocking fails (e.g. due to a wrong password), /dev/null is removed: https://bugs.gentoo.org/912808
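A minimal sketch of the workaround suggested above, applied to a Portage-style invocation; the homedir, digest algorithm, key ID, and the --sign operation are placeholders, not what Portage actually runs:

gpg --batch --homedir /path/to/homedir --digest-algo SHA256 --local-user 0xDEADBEEF --sign --output - /dev/null >/dev/null

Here /dev/null appears only as the input file and as a shell redirection target; because gpg writes to stdout, it never opens /dev/null for output and cannot remove it when unlocking fails.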
Aug 17 2023
Yes, gpgtar emits a SUCCESS status. gpgme should probably check for this.
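For reference, one way to observe this on the command line (archive name and target directory are just examples); with a status file descriptor set, the last status line gpgtar emits after a successful run should be the SUCCESS line mentioned above:

gpgtar --batch --status-fd 2 --decrypt --directory outdir -- archive.tar.gpg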
Aug 4 2023
Works for me.
Jul 12 2023
Normal priority to get the _1_ removed when no folder with the same name already exists in that location.
Strangely enough this does not happen on Linux. Maybe related to the KMime changes we have there?
For S/MIME archives the output for e.g. testfolder.tar.gz.p7m is now named "testfolder.tar.gz_1_/testfolder", with the "_1_" added even if there is no "archive.tar.gz".
Jul 6 2023
Works. So gpgtar obviously knows about the filenames now, too.
Jun 26 2023
Closing since the problem doesn't seem to occur if the operation is canceled properly.
Sorry about that. I tested an old build which didn't call gpgme_cancel_async and therefore probably didn't properly close the channels. It seems to work if gpgme_cancel_async is called to cancel the operation.
This option is already used. Running pgrep -a gpg in a loop (and ignoring gpg-agent processes) I get:
Mo 26. Jun 11:29:11 CEST 2023
19111 gpgtar --batch --status-fd 60 --gpg-args --no-tty --gpg-args --charset=utf8 --gpg-args --enable-progress-filter --gpg-args --exit-on-status-write-error --gpg-args --display=:0 --gpg-args --ttyname=/dev/pts/37 --gpg-args --ttytype=xterm-256color --decrypt --directory /tmp/kleopatra-JqIiXu/src -- /home/ingo/dev/g10/src.tar.gpg
19112 gpg --batch --status-fd=60 --output - --decrypt --no-tty --charset=utf8 --enable-progress-filter --exit-on-status-write-error --display=:0 --ttyname=/dev/pts/37 --ttytype=xterm-256color -- /home/ingo/dev/g10/src.tar.gpg
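For reference, the monitoring loop described above might look roughly like this (the one-second interval is arbitrary):

while sleep 1; do date; pgrep -a gpg | grep -v gpg-agent; done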
Can you please test by adding --exit-on-status-write-error to the gpg invocation by gpgtar?
Jun 23 2023
Should be fixed.
Jun 22 2023
Due to the double fork in gpgme we won't get the exit code which gpgtar emits. Possible actions in a signal handler are also limited; in particular we can't use stdio or estream. The only option to print a status line would be to use write directly. However, this might mess with the libassuan buffering. Thus, it is not a good idea to pkill gpgtar. The same is true for gpg and gpgsm.
Jun 12 2023
In the past this was done by --set-filename in libkleopatrarc-win32.desktop. But I am happy if we close this and focus on T6530.
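For illustration, a minimal sketch of how --set-filename embeds the original file name in the encrypted message (recipient and file names are placeholders):

gpg --encrypt --recipient someone@example.org --set-filename archive.tar --output archive.tar.gpg archive.tar

The decrypting side can then recover the original name from the literal data packet instead of guessing it from the name of the encrypted file.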
Which only works if gpgtar actually knows the input file name (which it will once T6530: GPGME / QGpgME Extend Archivejobs to accept input / output from a filename is done and used).
Jun 9 2023
Of course, those are different controllers. :-)
As I already had my test setup open, I recompiled with your change and tested both cancelling manually and letting it run into the disk-full error. In both cases the temporary file was deleted and the job was cancelled :)
The processes should now be killed properly.
Please note that my test was not on a USB device. I will keep this issue with your analysis and reopen a different one with my error and details on how to reproduce it. I am pretty sure it was disk full.
I don't think this is a regression or something we can do anything about. Note that we see the same thing also on the command line. Actually I have seen the very same thing pretty often with USB devices. Thus lowering priority.
We have seen this problem during QA this week and could identify it as an ERROR_FILE_INVALID (ec=1006, "The volume for a file has been externally altered so that the opened file is no longer valid"). We also noticed disk errors in the event log but did not record them. The USB stick was not unplugged but merely used with VirtualBox.
Jan 19 2023
Released quite some time ago.
Mar 21 2022
No need for callbacks actually. We can do it in a simpler way. See commit rGe5ef5e3b914d5c8f0b841b078b164500ea157804
Feb 17 2022
I tested encrypting two txt files with the filenames 1 and 2.txt, containing the text "test 1" and "test 2". The tar archive was created successfully. Then I tested the same two txt files with long names (see the attached txt files, which I already sent to you). In the first test an Archive.tar.gpg.yqoirl with 0 bytes was created.
In the second test, the other archive.tar.gpg with 0 bytes was created and gpgex hung.