I see no need for this.
Jul 1 2019
Jun 30 2019
To be clear, this would allow the least competent CA in the system root trust anchor list to certify an arbitrary server as a member of hkps.pool.sks-keyservers.net. So it is in some sense a security vulnerability -- it allows for a bypass of the correct authority.
I've just pushed 1c9cc97e9d47d73763810dcb4a36b6cdf31a2254 to the branch dkg-fix-T4593
Jun 29 2019
Note also that some keyservers like keys.openpgp.org will distribute only verified self-sigs (including revocations and subkey updates) without distributing the floodable third-party certifications. We can and should distinguish "updates-only" keyservers from discovery-by-address mechanisms.
Jun 28 2019
I recognize that adding network activity to the test suite can be complicated (not all test suites are run with functional network access), but if it is possible to have a unit test or something (that doesn't do network access, but just looks at what the dirmngr *would* have tried somehow?), that would be great. Thanks for looking into this!
i'm aware of the filters you're using, but they are not a principled response to this kind of certificate flooding attack. An attacker who wants to be really abusive can easily create certifications that bypass any import-filter gpg is capable of.
Confirmed; that looks like a regression.
I know this problem very well; it led to the introduction of the import filters. For example, I can update my own key only by using filters like
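A minimal sketch of such a filter, assuming the intent is to keep only the key owner's own user ID on import (the address and the fingerprint placeholder are illustrative, not the exact filter used):

  gpg --import-filter keep-uid='mbox = wk@gnupg.org' --recv-keys <fingerprint>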
Please see T4592 where i've reported this particular performance concern in more detail, including profiling data.
For folks who encounter this problem in the future, i recommend that you first check whether you have a pubring.gpg instead of (or in addition to) your pubring.kbx. If you do have pubring.gpg, you should be able to run the pipeline to the awk script described above with just the pubring directly, which omits the time-consuming gpg --export step above. so i think that would look like:
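(a sketch only; the awk invocation below stands in for the script described above, which is not reproduced here)

  gpg --list-packets < ~/.gnupg/pubring.gpg | awk '<script described above>'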
wow, 46MiB, that's even worse than mine. :( thanks for sharing the update, @jackalope. I'm glad you've worked around it for now, but sadly this kind of certificate flooding could happen at any time if you're using the SKS keyserver network :(
By golly, you were right, @dkg! I ran that "gross hack" (thanks so much for writing it) and after nearly 10 minutes got the output I needed. Last 5 lines:
That's a great question, @jackalope. I found this in a different misbehaving keyring recently by basically deleting keys by hand until only one was left. surprise, it was mine (ugh)! But that process is pretty slow and manual and tedious. Let me see if i can do better.
Jun 27 2019
Good to see you here @dkg, thanks for the response, and whoa, that's quite something!
@jackalope, the place where the output is hanging is likely due to output buffering (i have been able to replicate the same problem, and the output hangs at intervals of 8192 octets). So while it is giving you a clue about where the hang is, it's not a very precise clue.
Jun 26 2019
I've encountered the same problems that the original poster has described; the problems started suddenly on June 24, not prompted by any related updates as far as I can tell. The problems occur with both gpg 2.1.18 installed from the official Debian Stretch package and 2.2.12 installed from stretch-backports.
Jun 25 2019
Jun 21 2019
I took this task because it reports errors from gpg-connect-agent scd killscd. But it seems to me that this is not the direct cause.
Anyway, I am investigating the bug.
Jun 19 2019
without feedback, i have no idea what you as upstream want to do here. I believe this issue has identified a specific failing use case, and there is a patch that fixes the problem. if there's a problem with the patch, please let me know what it is. If there's no problem, please consider merging.
Any word on this? i've pushed a fix for this into debian experimental as a part of 2.2.16-2, but i am concerned that there's no adoption from upstream. If there's a reason that this is the wrong fix, please do let me know!
Jun 18 2019
If we only need it for backward compatibility, then the configuration in gpg.conf should *not* be overriding the preferred, forward-looking form of the configuration (in dirmngr.conf). If it is low priority to fix this, then there will be a generation of GnuPG users and toolchains which deliberately configure the value in gpg.conf instead of dirmngr.conf because they'll know that's the more robust way to do it.
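For illustration, the forward-looking form is to configure the keyserver only in dirmngr.conf and leave it out of gpg.conf entirely (the server URL below is just an example; reload dirmngr afterwards):

  # ~/.gnupg/dirmngr.conf
  keyserver hkps://keys.openpgp.org

  gpgconf --reload dirmngr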
Jun 14 2019
Jun 11 2019
@gouttegd good catch!
Jun 8 2019
I just assumed that it was an ntbtls problem.
If I understand correctly, this is exactly the same problem as the one we encountered some time ago in the code dealing with fetching keys over HTTP (--fetch-keys), and that we fixed with this patch.
fwiw, the bug looks like it's in send_request in ks-engine-hkp.c, which re-uses the http_session object without re-initializing its tls_session member.
thanks for the triage, @werner!
We need --keyserver in gpg for just one reason: backward compatibility.
thanks for fixing that error message, @werner. As @Valodim points out in the discussion about hagrid, a gpg.conf keyserver option (deprecated according to the documentation) overrides the dirmngr.conf keyserver option (not deprecated according to the documentation).
Jun 7 2019
I received an strace for a similar case by PM.
Jun 5 2019
any feedback on this proposed patch?
May 31 2019
May 30 2019
I've pushed fa0a5ffd4997c2ca38a1dd2d89459b6b1f18ad99 to the branch dkg/fix-T3464, which i think solves the problem i was seeing without reintroducing any new problems.
I can confirm that this is actually a problem now :( gpgme_op_decrypt_verify returns a status with GPG_ERR_MISSING_KEY set when a session-key is used.
May 29 2019
Thanks, the mentioned OpenSSL option should be helpful.
A high-level test description is (a configuration sketch follows the list):
- Configure both gpgsm and dirmngr to use OCSP.
- Import the responder signer certificate with gpgsm --import.
- Use a certificate with OCSP responder extension present, or configure a default OCSP responder in dirmngr.
- Configure your OCSP responder to identify itself with key ID (and not subject name).
- Attempt to sign or verify with gpgsm.
- You should get an error, with dirmngr logs showing that the responder signer certificate could not be found.
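A hedged sketch of what such a setup could look like, assuming standard gpgsm and dirmngr options; file names and the responder URL are placeholders:

  # ~/.gnupg/gpgsm.conf
  enable-ocsp

  # ~/.gnupg/dirmngr.conf
  allow-ocsp
  ocsp-responder http://ocsp.example.org    # default responder, if the certificate has no OCSP extension
  ocsp-signer /path/to/ocsp-signer.fpr      # fingerprint (or file of fingerprints) of the responder signer

  # import the responder signer certificate, then try an operation
  gpgsm --import responder-signer.pem
  gpgsm --sign --output test.sig test.txt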
Thank you for a quick fix (despite this being a minor problem).
May 28 2019
Do you have any test cases? Note that T3966 is due to missing support for SHA-256.
May 27 2019
Thanks to your very good analysis, this was easy to fix.
May 23 2019
Are you not reading what I am saying to you?? Once again, your explanation is INVALID because that would mean that gnupg would be BROKEN, because it would be a NON-COMPLIANT http client according to the RFC I quoted.
I explained why keyserver access requires access to the DNS. If that is not possible, the keyserver code will not work. If you don't allow DNS to work, you either have to use Tor (which we also use to tunnel DNS requests) or get your keys from elsewhere. Also note that the keyserver network is currently severely broken and under DoS, and thus it is unlikely that it can be operated in the future.
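For reference, routing dirmngr's keyserver traffic, including its DNS lookups, over Tor only requires enabling one option, assuming a local Tor daemon is running:

  # ~/.gnupg/dirmngr.conf
  use-tor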
May 21 2019
Thanks. Fixed in master and 2.2.
May 17 2019
Fix will go into 2.2.16, to be released this month.
There will be no full solution for this. However, the next release should in general work due to a 400ms delay we use after spawning the viewer. This is configurable; see rG7e5847da0f3d715cb59d05adcd9107b460b6411b.
May 16 2019
Actually the temp file is created, but because the photo viewer is run as a detached process and gpg keeps on running, the temp file has already been removed by gpg by the time the photo viewer tries to open it. Oops. The correct behaviour would be to wait for the photo viewer to be finished. We use
Fixed in master and 2.2:
May 15 2019
Thanks
Applied to master and 2.2. Thanks.
Right, that was missing. Fixed for master and 2.2. Note that for kill and reload we added this already in 2016.
May 14 2019
Oh, ah. Okay. I have not read c't since about 2005. They are busy people and lead in the right direction.
There is actually a problem with --use-embedded-filename. Given that the option is highly dangerous to use, we have not tested this for ages. We will see what we can do about it.
Good catch. Thanks for that work. I'll apply it to master and 2.2.
I removed this specialized error message. Thanks for reporting.