To test this, you need to create an OpenPGP key without signing capability.
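For reference, a minimal sketch of creating such a test key through GPGME itself (the user ID and algorithm are illustrative; GPGME_CREATE_ENCR without GPGME_CREATE_SIGN requests a key whose capabilities do not include signing):

```c
#include <gpgme.h>
#include <stdio.h>

int
main (void)
{
  gpgme_ctx_t ctx;
  gpgme_error_t err;

  gpgme_check_version (NULL);   /* mandatory GPGME initialization */
  err = gpgme_new (&ctx);
  if (err)
    return 1;
  /* Request only the encrypt capability so the generated key cannot
     sign; GPGME_CREATE_NOPASSWD keeps the test key simple.  */
  err = gpgme_op_createkey (ctx, "Test NoSign <nosign@example.org>",
                            "rsa2048", 0, 0, NULL,
                            GPGME_CREATE_ENCR | GPGME_CREATE_NOPASSWD);
  if (err)
    fprintf (stderr, "createkey failed: %s\n", gpgme_strerror (err));
  gpgme_release (ctx);
  return !!err;
}
```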
Apr 20 2023
Apr 19 2023
Apr 18 2023
The actual error is in gpgme. CreateProcess is called with "gpgtar" but "gpgtar.exe" must be used.
This has been fixed with commit rM0c29119e061c. The reason we didn't notice the real cause of the problem is that the CreateProcess error shows up in the gpgme-w32spawn helper, which has no good way of returning errors.
Apr 13 2023
Fixed in 1.19.0.
Fixed in 1.19.0.
Fixed in 1.19.0.
Apr 6 2023
You could configure gpgme with
Apr 5 2023
Mar 17 2023
Mar 16 2023
Will go into 1.19.0
ready for testing
I wrote T6412: Kleopatra: Inform user if some files were not extracted from encrypted archive to inform the user about files that were not extracted. I think this shouldn't block this issue, because special files probably don't occur in normal usage of GnuPG VSD.
Closing. This will be tested with T5478: Kleopatra: Performance problems decrypting and encrypting large Archives.
Mar 15 2023
Mar 3 2023
That's why I added some tags and also set myself a reminder. We will try to get this into the next GPGME release, which we plan for this month.
@werner Seeing as you seem to be actively maintaining this project: is there any way to move this forward? This is breaking quite a few builds of development environments for my company. We are now applying similar patches ourselves, but it would be nice to get this merged upstream.
Feb 22 2023
Oh sorry, I only saw this now. We have "gpgme_set_offline" for this use case, which disables CRL checks in the S/MIME case. It is more general because it also disables OCSP, for example, and might disable further online actions, like fetching chain certificates.
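For illustration, a minimal sketch of that call on an S/MIME context (the helper name is made up; gpgme_set_offline() itself is the documented API):

```c
#include <gpgme.h>

/* Illustrative helper: create a CMS (S/MIME) context in offline mode,
   so CRL checks and other online actions such as OCSP are skipped.  */
static gpgme_error_t
make_offline_cms_ctx (gpgme_ctx_t *r_ctx)
{
  gpgme_error_t err;

  gpgme_check_version (NULL);
  err = gpgme_new (r_ctx);
  if (err)
    return err;
  err = gpgme_set_protocol (*r_ctx, GPGME_PROTOCOL_CMS);
  if (!err)
    gpgme_set_offline (*r_ctx, 1);   /* non-zero enables offline mode */
  return err;
}
```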
Feb 16 2023
Feb 15 2023
Feb 14 2023
I have seen that the rule honors the exclusions of Microsoft Defender, but I do not know whether one would need to exclude gpgol.dll or gpgolconfig.exe / gpg.exe in this case. https://learn.microsoft.com/en-us/microsoft-365/security/defender-endpoint/attack-surface-reduction-rules-reference?view=o365-worldwide#microsoft-defender-antivirus-exclusions-and-asr-rules
Indeed. The called function dates back to 2004. We really need to rework this and cache the value - it might be required to take the file_name into account.
Feb 13 2023
@werner I saw the call in _gpgme_set_engine_info at line 452 (https://dev.gnupg.org/source/gpgme/browse/master/src/engine.c$452), which I think leads down to _gpgme_get_program_version in version.c, which does a spawn and uses no cache.
I had the same suspicion and I checked the code. AFAICS all values are taken from a cache (see dirinfo.c), so there is no real overhead.
In T6369#167642, @werner wrote: The context cloning should not be that expensive compared to invoking gpg. Thus let us first see how to speed this up in the common case.
That's what I was initially trying to do, but then I saw https://git.gnupg.org/cgi-bin/gitweb.cgi?p=gpgme.git;a=blob;f=src/keylist.c;h=1c01bd42b8497932d218e4d525794ed98e712bf5;hb=HEAD#l1362 and I wasn't sure if I needed to copy that logic to avoid introducing any regressions.
If you have a limited list of, say, fingerprints, you should put them into an array and use gpgme_op_keylist_ext_start to list only those keys. This will be much faster.
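A hedged sketch of that pattern (error handling kept minimal; the fingerprints are placeholders):

```c
#include <gpgme.h>
#include <stdio.h>

/* Sketch: list exactly the keys named in a NULL-terminated array of
   fingerprints with one keylisting, instead of one gpgme_get_key()
   call per key.  */
static void
list_selected_keys (gpgme_ctx_t ctx)
{
  const char *patterns[] = {
    "0123456789ABCDEF0123456789ABCDEF01234567",
    "89ABCDEF0123456789ABCDEF0123456789ABCDEF",
    NULL
  };
  gpgme_key_t key;
  gpgme_error_t err;

  err = gpgme_op_keylist_ext_start (ctx, patterns, 0, 0);
  while (!err && !(err = gpgme_op_keylist_next (ctx, &key)))
    {
      printf ("%s\n", key->fpr ? key->fpr : "(no fingerprint)");
      gpgme_key_unref (key);
    }
  if (gpgme_err_code (err) != GPG_ERR_EOF)
    fprintf (stderr, "keylist failed: %s\n", gpgme_strerror (err));
  gpgme_op_keylist_end (ctx);
}
```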
In T6369#167649, @ikloecker wrote: Finally, what's your use case? gpgme_get_key() is meant to be used for getting individual keys. It's not meant to be used to get 1000 keys in a loop.
If you mean gcc optimization flags, then yes.
Finally, what's your use case? gpgme_get_key() is meant to be used for getting individual keys. It's not meant to be used to get 1000 keys in a loop.
Moreover, if you have performance problems on Windows, then it's not the best idea to strace the code on Linux.
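For contrast, the single-key lookup that gpgme_get_key() is designed for looks roughly like this (a sketch, with error handling kept minimal):

```c
#include <gpgme.h>
#include <stdio.h>

/* Sketch: fetch one key by fingerprint, the intended use of
   gpgme_get_key().  The final argument selects public (0) or
   secret (non-zero) keys.  */
static void
show_one_key (gpgme_ctx_t ctx, const char *fpr)
{
  gpgme_key_t key;
  gpgme_error_t err = gpgme_get_key (ctx, fpr, &key, 0);

  if (err)
    fprintf (stderr, "gpgme_get_key: %s\n", gpgme_strerror (err));
  else
    {
      if (key->uids && key->uids->uid)
        printf ("%s\n", key->uids->uid);
      gpgme_key_unref (key);
    }
}
```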
Just asking the obvious: You are using an optimized release build for your benchmarks, right?
Feb 12 2023
Benchmark script:
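The script itself is not reproduced here; a hypothetical C reconstruction of the pattern under discussion (one fresh context per gpgme_get_key() call, which is the slow path) could look like this:

```c
#include <gpgme.h>

/* Hypothetical benchmark: look up each fingerprint given on the
   command line with a brand-new context, so every iteration pays the
   full cost of setting up and spawning gpg.  */
int
main (int argc, char **argv)
{
  gpgme_check_version (NULL);
  for (int i = 1; i < argc; i++)
    {
      gpgme_ctx_t ctx;
      gpgme_key_t key;

      if (gpgme_new (&ctx))
        return 1;
      if (!gpgme_get_key (ctx, argv[i], &key, 0))
        gpgme_key_unref (key);
      gpgme_release (ctx);
    }
  return 0;
}
```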
yeah, I'd guess it's creating a new gpg instance with it. strace shows extra clone/pipe/read/fcntl syscalls with the new context.
Feb 10 2023
Okay. So the problems with "file type 1" seem to come from git using hardlinks and tar storing them as hardlinks, but gpgtar ignores them on --decrypt. This would also explain the larger size of the archives if gpgtar stores the hardlinked files multiple times in the archive. Take home message: Don't gpgtar your git repo!
Running gpgtar directly gives only slightly better results. The following command
GNUPGHOME=~/xxxx gpgtar --batch --status-fd 2 --gpg-args --enable-progress-filter --encrypt --gpg-args --always-trust -r D5E17E5ABC11F4CD060E02D41DD0D4BAF77BE140 -r C02C4012C09B2AE33921CF87577E88AC284DC575 --output - --directory /xxxx src >src-gpgtar.tar.gpg 2>src-gpgtar.log
took about 31.1 seconds.
These are USTAR types:
For testing the old version, did you use GNU Tar with Kleopatra, or did you change the configuration to use gpgtar?
"file type 2" may refer to symbolic links.
I did some tests. I encrypted the g10/src folder, which contains multiple repos (33098 files) with a total size of about 1.4 GiB.
I made the condition for calling the verify handler more strict by checking if err is a NO DATA error. This should minimize the risk of regression.
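In C that stricter condition boils down to comparing the error code, roughly like this (a sketch; the helper name is illustrative, not the actual gpgme code):

```c
#include <gpgme.h>

/* Sketch: only a NO DATA error should still reach the verify
   handler; any other error is treated as a real failure.  */
static int
should_run_verify_handler (gpgme_error_t err)
{
  return gpgme_err_code (err) == GPG_ERR_NO_DATA;
}
```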
Feb 9 2023
I have some doubts that signed-only archives are very useful. The only use case is that this allows one to sign stuff without saving it first; you would need to save it first in my generally preferred detached-signature case.
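For comparison, a detached signature via GPGME is only a few calls (a sketch with the output step elided; gpgme_op_sign() and GPGME_SIG_MODE_DETACH are the documented API):

```c
#include <gpgme.h>

/* Sketch: create a detached signature for data that has already been
   saved to a file, the workflow preferred above over signed-only
   archives.  Writing `sig` out and releasing the data objects is
   left out for brevity.  */
static gpgme_error_t
detach_sign_file (gpgme_ctx_t ctx, const char *path)
{
  gpgme_data_t plain, sig;
  gpgme_error_t err;

  err = gpgme_data_new_from_file (&plain, path, 1);
  if (!err)
    err = gpgme_data_new (&sig);
  if (!err)
    err = gpgme_op_sign (ctx, plain, sig, GPGME_SIG_MODE_DETACH);
  return err;
}
```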
I see two possible solutions.
Feb 1 2023
The gpgme part has been done. Some minor changes in Kleopatra regarding the VERSION file checking would be useful.
Jan 31 2023
If you want this to happen, then you should consider contributing a patch. Please see doc/HACKING for the formal requirements.
Thanks. I fixed the documentation. Will go into 1.19.
Jan 30 2023
I guess we need some gpgme support as well.
Jan 26 2023
Jan 24 2023
Jan 23 2023
Jan 19 2023
Jan 18 2023
Yes, I am an admin of the https://pypi.org/project/gpg/ package.
No more logs. My understanding is that the PyPI ownership of the project has been transferred to @bernhard.
Jan 17 2023
Jan 11 2023
I am changing the priority here to high as the parent task has high prio. Maybe we should close this as a duplicate of T5478.
Jan 9 2023
Jan 5 2023
Since T6328 describes a high-priority issue that would be fixed by this one, I am raising the priority here.