Thanks for the pointer, @werner. Certainly we want T4590 fixed.
Jul 2 2019
We need to rewrite the Location to avoid a CSRF attack. See fa1b1eaa4241ff3f0634c8bdf8591cbc7c464144
Jul 1 2019
I should add that I don't really care whose fault it is if the software is broken by some downstream. If it harms any users, and we can fix it, we should fix it, especially if the fix is easy.
We're writing free software, which we know people use and modify downstream. If we know that the software has a particular sharp edge that people who modify it are likely to cut themselves on, we have two options:
Come on, if someone changes the software and breaks it, it is their fault and not ours. The whole question of which keyserver and certificate to use has been discussed ad nauseam in the past. Given all the problems with the keyservers, I do not see a reason to change it right away back to the state we had before. Keyserver code is pretty hard to test and has thus always been prone to regressions.
(See T4175 for why this changed in 2.2.12.)
If the default keyserver is not hkps.pool.sks-keyservers.net, then @kristianf's CA certificate has no business certifying it.
I see no need for this.
Jun 30 2019
To be clear, this would allow the least competent CA in the system root trust anchor list to certify an arbitrary server as a member of hkps.pool.sks-keyservers.net. So it is in some sense a security vulnerability -- it allows for a bypass of the correct authority.
I've just pushed 1c9cc97e9d47d73763810dcb4a36b6cdf31a2254 to the branch dkg-fix-T4593
Jun 28 2019
I recognize that adding network activity to the test suite can be complicated (not all test suites are run with functional network access), but if it is possible to have a unit test or something (that doesn't do network access, but just looks at what the dirmngr *would* have tried somehow?), that would be great. Thanks for looking into this!
Confirmed; that looks like a regression.
Jun 21 2019
A possible exception here is that .onion TLDs should stick with HKP by default.
Jun 19 2019
Any word on this? I've pushed a fix for this into Debian experimental as part of 2.2.16-2, but I am concerned that there's no adoption from upstream. If there's a reason that this is the wrong fix, please do let me know!
Jun 18 2019
If we only need it for backward compatibility, then the configuration in gpg.conf should *not* be overriding the preferred, forward-looking form of the configuration (in dirmngr.conf). If it is low priority to fix this, then there will be a generation of GnuPG users and toolchains which deliberately configure the value in gpg.conf instead of dirmngr.conf because they'll know that's the more robust way to do it.
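For illustration, a minimal sketch of the two configuration points being discussed (the hostnames are placeholders, not recommendations); per the behavior described above, the gpg.conf entry currently wins even though the documentation marks it as deprecated:

```
# ~/.gnupg/dirmngr.conf  -- the preferred, forward-looking place
keyserver hkps://keys.example.org

# ~/.gnupg/gpg.conf  -- deprecated, kept for backward compatibility,
# yet it currently overrides the dirmngr.conf value
keyserver hkps://other.example.net
```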
Jun 11 2019
@gouttegd good catch!
Jun 8 2019
I just assumed that it is an ntbtls problem.
If I understand correctly, this is exactly the same problem as the one we encountered some time ago in the code dealing with fetching keys over HTTP (--fetch-keys), and that we fixed with this patch.
FWIW, the bug looks like it's in send_request in ks-engine-hkp.c, which re-uses the http_session object without re-initializing its tls_session member.
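To illustrate the kind of bug described here, a hypothetical sketch (the names are invented; this is not the actual ks-engine-hkp.c code): a session object that is kept across requests needs its per-connection TLS state reset, otherwise the stale state from the previous host is carried into the next connection.

```c
#include <stdlib.h>

/* Hypothetical sketch only; the struct and function names are made up
 * and do not correspond to the real dirmngr code.  */
struct demo_http_session
{
  void *tls_state;     /* per-connection TLS state, stale after a request */
  int   verify_flags;  /* settings that are safe to keep across requests  */
};

/* Reset the per-connection part before re-using the session object,
 * so the next request starts with a fresh TLS handshake.  */
void
demo_session_reset (struct demo_http_session *sess)
{
  free (sess->tls_state);
  sess->tls_state = NULL;   /* re-created lazily on the next connect */
}
```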
Thanks for the triage, @werner!
We need --keyserver in gpg for just one reason: backward compatibility.
Thanks for fixing that error message, @werner. As @Valodim points out in the discussion about hagrid, a gpg.conf keyserver option (deprecated according to the documentation) overrides the dirmngr.conf keyserver option (not deprecated according to the documentation).
May 31 2019
May 28 2019
We only supported SHA-1-signed OCSP requests. The fix will go into 2.2.16.
May 27 2019
I doubt that we are going to implement this.
May 24 2019
An interesting detail: the main CRL of the dgn.de CA uses a nextUpdate in the year 2034 (15 years in the future), which would force dirmngr to cache the CRL until then. However, the CRL of the intermediate certificate has a nextUpdate only one month in the future. There is currently no entry in that second-level CRL, so their idea might be that an updated second-level CRL will also trigger a reload of the main CRL. I have not checked how we implement that in Dirmngr, but I doubt that such a scheme will work for us or that it is in any way standards-compliant.
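As a hedged sketch of the caching concern only (this is not dirmngr's actual logic): a cache that honors nextUpdate unconditionally would keep the 2034 CRL for 15 years, so one defensive option is to cap how long it is willing to trust that value.

```c
#include <time.h>

/* Hypothetical helper: decide until when a fetched CRL may be cached.
 * Trusting nextUpdate blindly would honor the 15-year value described
 * above; capping it (30 days here, an arbitrary example) forces a
 * periodic re-fetch regardless of what the CA advertises.  */
time_t
crl_cache_expiry (time_t now, time_t next_update)
{
  const time_t max_cache = (time_t) 30 * 24 * 60 * 60;  /* 30 days */
  time_t capped = now + max_cache;
  return (next_update < capped) ? next_update : capped;
}
```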
May 23 2019
Are you not reading what I am saying to you? Once again, your explanation is INVALID, because it would mean that GnuPG is BROKEN: it would be a NON-COMPLIANT HTTP client according to the RFC I quoted.
I explained why keyserver access requires access to the DNS. If that is not possible, the keyserver code will not work. If you don't allow DNS to work, you either have to use Tor (which we also use to tunnel DNS requests) or get your keys from elsewhere. Also note that the keyserver network is currently severely broken and under DoS, and thus it is unlikely that it can be operated in the future.
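For reference, the Tor route mentioned above is selected in dirmngr's configuration (Tor itself has to be installed and running); with this set, DNS requests are tunnelled over Tor as well:

```
# ~/.gnupg/dirmngr.conf
use-tor
```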
May 17 2019
I agree with @dkg here.
May 16 2019
"requires too much changes" i can understand.
This requires too much changes and does not reflect the reality. It actually makes debugging harder for us.
May 15 2019
Thanks
May 14 2019
I think you are saying that dirmngr receives the query term as escaped data in the assuan connection from the dirmngr client (typically, gpg, which itself decides how to percent-escape what it feeds into libassuan).
This is easy to explain: dirmngr receives already-escaped data, and that is what you see in the log. For proper parsing of the URI the escaping needs to be removed, and only just before sending the request is the required escaping applied again. '@', '<', and '>' do not need to be escaped, and thus you see them as they are.
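A small hypothetical sketch of the unescape step described above (illustrative only, not dirmngr's actual parser): %XX sequences are decoded before the URI is parsed, while characters that were never escaped, such as '@', '<', and '>', pass through unchanged.

```c
#include <ctype.h>
#include <stdlib.h>

/* Hypothetical helper: decode percent-escapes in place.  Characters
 * that were not escaped (e.g. '@', '<', '>') are copied through.  */
void
percent_unescape (char *s)
{
  char *dst = s;
  while (*s)
    {
      if (s[0] == '%' && isxdigit ((unsigned char) s[1])
          && isxdigit ((unsigned char) s[2]))
        {
          char hex[3] = { s[1], s[2], '\0' };
          *dst++ = (char) strtol (hex, NULL, 16);
          s += 3;
        }
      else
        *dst++ = *s++;
    }
  *dst = '\0';
}
```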
I removed this specialized error message. Thanks for reporting.
This is particularly bad for users who have manually specified a given keyserver in dirmngr.conf, because even a transient failure in that keyserver will prevent them from any future keyserver requests until dirmngr decides that the "death" has worn off.
May 13 2019
Further testing suggests that the invalid URI issue is only present for dirmngr's --keyserver option, while gpg's deprecated --keyserver option actually accepts schema-less hostnames.
See also T4467.
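A hedged illustration of the asymmetry reported here (the hostname is a placeholder):

```
# gpg.conf (deprecated option): a schema-less hostname is accepted
keyserver keys.example.org

# dirmngr.conf: the same value is reportedly rejected as an invalid URI;
# an explicit scheme is needed
keyserver hkps://keys.example.org
```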
May 10 2019
May 9 2019
May 8 2019
Apr 23 2019
Apr 19 2019
I just noticed that dirmngr(8)'s documentation for its --keyserver option says:
Note that even sending a HUP to dirmngr, when it is in this autodetection mode and observed Tor at startup, is insufficient to have it re-run the autodetection. You have to explicitly terminate dirmngr to get it to unlearn the autodetected presence of Tor. This is subtly hinted at in dirmngr(8), but no justification is given for it.
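For reference, explicitly terminating the daemon (as opposed to sending HUP) can be done with gpgconf; a fresh dirmngr is started on demand afterwards and re-runs the autodetection:

```
gpgconf --kill dirmngr
```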
Apr 10 2019
One of the things that dirmngr has going for it is that it tracks the current network state, and it would be nice to be able to reuse that state across sessions. If an ephemeral keyring can't use a shared dirmngr, there are fewer arguments for having dirmngr in the first place, and people might be more justified in replacing it with things like https://gitlab.com/anarcat/scripts/blob/master/openpgp-key-get
Apr 9 2019
I no longer think this is a high-priority request. BTW, a more pressing problem than several dirmngr instances is multi-user access to smartcards.
Apr 5 2019
Apr 3 2019
Apr 1 2019
HTTP/1.1 spec, RFC 7230, Section 5.4, paragraph 2:
https://tools.ietf.org/html/rfc7230#section-5.4
Please be so kind as to point me to the spec stating that you should put the IP address into the Host: header.
It's up to GPG to send the Host header that shows the user's intent.
So in short you want:
- To allow specifying a keyserver by IP address without any DNS lookups.
- When connecting via an IP address, to use that IP address in the Host: header (see the wire sketch below).
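A hedged wire-level sketch of the second point in the list above (the address and search term are placeholders): when the configured keyserver is given as an IP literal, the request would carry that literal in the Host: field rather than a name obtained from DNS.

```
GET /pks/lookup?op=get&search=alice%40example.org HTTP/1.1
Host: 192.0.2.10:11371
```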
Mar 31 2019
Mar 27 2019
Gpg4win 3.1.6 has been released and contains this fix.
Mar 19 2019
Also, might I add, this used to work perfectly fine in GnuPG 1.4. It seems that somewhere along the line a regression was introduced that is causing this erroneous, non-compliant behavior in the HTTP client.
Why? Your explanation is invalid because it implies that dirmngr's HTTP client does not conform to the spec laid out by the RFC. I've quite clearly explained, and backed up with the spec itself, that when a proxy variable is configured, a client should not be doing a DNS lookup of the destination hostname. Is there something about that you are not understanding?
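As a hedged illustration of the behavior being argued for (the proxy and keyserver names are placeholders): with an HTTP proxy configured, the client opens a connection to the proxy only, sends the request in absolute form, and leaves resolution of the target hostname to the proxy, so no local DNS lookup of that hostname is required. The request below would be sent to the configured proxy (for example proxy.example.net:3128), not to the keyserver itself:

```
GET http://keys.example.org:11371/pks/lookup?op=get&search=alice%40example.org HTTP/1.1
Host: keys.example.org:11371
```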
Please show an example involving something other than a failed access to a pool of keyservers. I have already explained to you why it cannot work for pools.
Mar 18 2019
Yes you can, and no you do not. Don't believe me? Then read the spec. At no point does the spec say that there is "nothing that can be done" when a hostname cannot be resolved when connecting through a proxy. In fact, it states precisely the opposite, describing the exact procedure a client should take when making a request through a proxy. See section 5.3, paragraph 3:
No, we can't: we need to know the IP addresses to handle the pools. I have given you a workaround in my previous comment. You can also install Tor, which we can use for DNS resolution.
Mar 13 2019
There is a solution for it:
Feb 28 2019
BTW, I only noticed this now because I always had "disable-tor" in my config but recently removed it for testing.