Any thoughts or progress on this?
Feb 6 2017
Feb 5 2017
By the way, I've noticed that communication with the card will only be broken
upon reinsertion if some software has attempted to access the card while it is
detached.
In other words:
access card -> remove -> insert -> access card
is fine.
access card -> remove -> access card -> insert -> access card
will cause all accesses to fail after insertion until gpg-agent is killed (and
restarted obviously).
Feb 4 2017
The reason "no public key" is confusing is that gpgv already knows that there
can be no public key. So the message that the naive user needs to see in this
case is "no keyring available".
If there is at least one keyring available, then saying something like "no
public key found in keyrings X and Y and Z" is reasonable. But if there are no
keyrings at all, the message should just be something like "no keyring found to
validate signature against".
Feb 3 2017
Hi,
I can still see that qt[1] is using the simplified pkg macros[2], while
configure.ac is using a proprietary method[3].
We are still missing the PKG_PROG_PKG_CONFIG macro in configure.ac to make the
pkg macros happy. Adding it would remove all the AC_PATH_PROG(PKG_CONFIG,
pkg-config, no) invocations (see pinentry-0.9.5-build.patch), since PKG_CONFIG
would then be set.
The other changes to use PKG_CHECK_MODULES are optional, but is there any reason
not to use this macro instead of executing pkg-config manually? The macro has
the advantage of allowing overrides via the environment, and it adds the proper
--help output.
If you like, I can rebase this old patch set.
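For illustration, a configure.ac fragment along the lines suggested above. This is a sketch, not pinentry's actual configure.ac; the LIBSECRET module name is only an example of how PKG_CHECK_MODULES is typically used:

```m4
# Locate pkg-config once for all pkg.m4 macros; this replaces the
# repeated AC_PATH_PROG(PKG_CONFIG, pkg-config, no) calls.
PKG_PROG_PKG_CONFIG

# Optional: let pkg.m4 run pkg-config itself.  The user can override
# the result via the LIBSECRET_CFLAGS/LIBSECRET_LIBS environment
# variables, and ./configure --help documents them automatically.
PKG_CHECK_MODULES([LIBSECRET], [libsecret-1], [], [have_libsecret=no])
```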
[1] http://git.gnupg.org/cgi-bin/gitweb.cgi?p=pinentry.git;a=blob;f=m4/qt.m4;hb=HEAD
[2]
http://git.gnupg.org/cgi-bin/gitweb.cgi?p=pinentry.git;a=blob;f=m4/pkg.m4;hb=HEAD
[3]
http://git.gnupg.org/cgi-bin/gitweb.cgi?p=pinentry.git;a=blob;f=configure.ac;hb=HEAD#l431
Links removed as I got "Edit Error: not allowed (too many links)".
Is that the gnome3 pinentry? If so, please try the gtk-2 pinentry to see whether
it is the same problem.
Could someone please check whether this is still the case and come up with a fix?
The Debian report has been waiting since October for a reply from the original
submitter.
That doesn't seem all that large in the modern era, but okay. In any
case, after moving it to the backup file, don't the same number of bytes
need to be written into the new file anyway? And, regardless, what can be done
to accommodate pubring.kbx sometimes being a symlink?
Perhaps an option so the choice of move vs. copy can be left to the user?
--Kyle
Feb 2 2017
I'm curious: what was it about this particular key and signed text that caused
it to expose this error when others did not?
Here is the output from the program you attached running on OS X Sierra and compiled
with gcc. Is it what you expected?
$ ./a.out
0 => 0; tail = ''; errno = Undefined error: 0 (0)
1 => 1; tail = ''; errno = Undefined error: 0 (0)
=> 0; tail = ''; errno = Invalid argument (22)
Sorry, forgot the reference for [1] previously:
I can also confirm that adding the line "disable-ccid" to scdaemon.conf appears
to revert to the previous system, which then works fine (but doesn't really fix
the issue).
Having read [1], I double-checked my scdaemon.conf (which apparently already
featured debug-all) and made sure it read as follows:
log-file /home/mike/.gnupg/scdaemon.log
debug-all
I got the following from attempting to run gpg --card-status:
2017-02-02 18:00:58 scdaemon[32091] DBG: chan_5 <- GETINFO version
2017-02-02 18:00:58 scdaemon[32091] DBG: chan_5 -> D 2.1.18
2017-02-02 18:00:58 scdaemon[32091] DBG: chan_5 -> OK
2017-02-02 18:00:58 scdaemon[32091] DBG: chan_5 <- SERIALNO openpgp
2017-02-02 18:00:58 scdaemon[32091] DBG: apdu_open_reader: BAI=10a02
2017-02-02 18:00:58 scdaemon[32091] DBG: apdu_open_reader: new device=10a02
2017-02-02 18:00:58 scdaemon[32091] ccid open error: skip
2017-02-02 18:00:58 scdaemon[32091] DBG: chan_5 -> ERR 100696144 No such device
<SCD>
2017-02-02 18:00:58 scdaemon[32091] DBG: chan_5 <- RESTART
2017-02-02 18:00:58 scdaemon[32091] DBG: chan_5 -> OK
Please let me know what further information I can provide to help debug this.
This should be fixed by 407f5f9baea5591f148974240a87dfb43e5efef3 .
Thanks for reporting this!
According to SUSv3:
If the subject sequence is empty or does not have the expected form, no
conversion is performed
... If no conversion could be performed, 0 is returned and errno may be set to
[EINVAL].
http://pubs.opengroup.org/onlinepubs/007908799/xsh/strtol.html
It appears that MacOS X sets errno to EINVAL, but glibc doesn't.
(The attached program should expose the behavior; I haven't run it yet on Mac OS
X, but I'd be interested in the result.)
The underlying problem is that bindings for ultimately trusted keys were not
registered with the TOFU data.
This should be fixed in 027b81b35fe36692005b8dba22d9eb2db05e8c80.
Copying pubring.kbx to the backup file is not an option because keyrings tend to
get very large. Several dozen megabytes are quite common.
Feb 1 2017
Jan 31 2017
Jan 30 2017
To be clear, the initial output is not wrong. At the time the information is
initially requested, the message has not yet been processed.
Anyway, I think I'm working on a fix so this is a non-issue.
Jan 27 2017
Jan 26 2017
Jan 25 2017
thanks for the quick fix, Justus. I can confirm that this fixes the problem for me.
I have now learnt how GCC uses 'undefined behavior' for aggressive optimization
and that this could break code doing unaligned accesses even on x86. So this
needs to be fixed after all.
Merged in 9291ebaa4151a1f6c8c0601095ec45809b963383.
Fixed in 3f4f20ee6eff052c88647b820d9ecfdbd8df0f40.
That is no regression, that never worked well. It only works if one uses a uid
like 'test <test@example.org>'. I'll fix this.
That is a regression - it used to work since very early gpg versions.
I agree on the first part. This needs to be fixed.
I do not understand why you think "no public key" is the wrong message. We have
always used this message if the public key is not available for verification.
Do you think the text should be changed to "public key not found" ? That would
be a simple change in libgpg-error.
Libgpg-error has a GPG_ERR_MISSING_KEY but that code indicates wrong usage of
functions or bad data structures.
Jan 24 2017
For cases (1), (2), and (3) it sounds like you don't need the PTR at all, right?
For your case (4), i think we should reject hkps via literal IP addresses. It's
not a real-world use case, and if you want to test/experiment with hkps as a
developer, you should have at least the capacity to edit /etc/hosts (or whatever
your system's equivalent is). Anyway, trying to support this case for the
purposes of debugging doesn't make sense if support for this case is the cause
of the bugs in the first place ;)
re: duplicate hosts: I live in a part of the world where dual-stack
connectivity is sketchy at best. And, when connecting to things over Tor, it's
possible that connections to IPv4 hosts will have a different failure rate than
IPv6 connections.
So unless you already know that the host itself is down, why would you avoid
trying the other routes you have to it?
Look at it another way: when trying to reach host X, you discover that X has two
IP addresses, A and B. You try to reach A and it's not available. Isn't it
better to try B instead, rather than to avoid trying B at all just because A was
unreachable?
In a pool scenario, you might want to try to cluster addresses together by
perceived identity so that you can try an entirely different host first, rather
than a different address for the same host, which happens to be in the pool twice.
But that strikes me as a very narrow optimization, certainly something that'd
only be worth implementing after we've squeezed the last bit of performance out
of other parts of the code (parallel connections, "happy eyeballs", etc).
Definitely not something to bother with at the outset. So i'd say drop that
optimization for simplicity's sake.
So the simplest approach is:
a) know the configured name of the keysserver
b) resolve it to a set of addresses
c) try to connect to those addresses, using the configured name of the server
for SNI and HTTP Host:
This is all that's needed for cases (1) and (3), and it could also be used in
case (2) if you see (b) as a two-stage resolution process (name→SRV→A/AAAA),
discarding the intermediate names from the SRV. Given that some people may
access the pool via case (1), and servers in the pool won't be able to
distinguish between how they were selected (SRV vs. A/AAAA), they'll still
accept the connections.
If you decide the additional complexity is worthwhile for tracking the
intermediate names in the SRV records, you can always propagate the intermediate
names wherever you like locally without changing the "simplest" algorithm.
If you really want to use the names from the SRV in collecting, then the
algorithm should change to:
a) know the configured name of the keyserver
b) resolve it to a set of intermediate names
c) resolve the intermediate names to a set of addresses
d) try to connect to those addresses, using the intermediate name of the server
for SNI and HTTP host.
But still, no PTR records are needed.
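Steps (a)-(c) can be sketched as follows. This is a simplified illustration, not dirmngr's code: it resolves a configured name to its candidate addresses; a real client would then try each address in turn, always presenting the *configured* name for TLS SNI and the HTTP Host: header, with no PTR lookup involved. The hostname passed in is a placeholder.

```c
#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/* Resolve HOST to its set of candidate addresses and print each one,
   together with the name that would be used for SNI and Host:.
   Returns the number of candidates, or -1 on resolution failure.  */
static int
list_addresses (const char *host)
{
  struct addrinfo hints, *res, *ai;
  char buf[INET6_ADDRSTRLEN];
  int n = 0;

  memset (&hints, 0, sizeof hints);
  hints.ai_family = AF_UNSPEC;      /* collect both v4 and v6 candidates */
  hints.ai_socktype = SOCK_STREAM;

  if (getaddrinfo (host, "443", &hints, &res))
    return -1;

  for (ai = res; ai; ai = ai->ai_next, n++)
    {
      void *addr = (ai->ai_family == AF_INET)
        ? (void *) &((struct sockaddr_in *) ai->ai_addr)->sin_addr
        : (void *) &((struct sockaddr_in6 *) ai->ai_addr)->sin6_addr;

      inet_ntop (ai->ai_family, addr, buf, sizeof buf);
      /* The configured name, not a PTR result, travels with each
         candidate for certificate checking.  */
      printf ("candidate: %s (SNI/Host: %s)\n", buf, host);
    }
  freeaddrinfo (res);
  return n;
}

int
main (void)
{
  return list_addresses ("localhost") > 0 ? 0 : 1;
}
```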
Okay, I get this error now. I had to implement a new option --disable-ipv4 to
make testing easier.
I have never seen the no-permission message, only a general connection-failed
error. I can try your suggestion of setting an explicit NoIPv6Traffic.
We have several cases:
- A pool accessed via round-robin A/AAAA records: We do not use the canonical hostname (i.e. from the PTR) but the name of the pool for the certificate. This is the classical way keyserver pools work.
- A pool accessed via SRV records: The SRV record has the canonical name, so we do not need a PTR lookup. But we do need an address lookup.
- A keyserver specified by its name: We already have the name, thus no need for a PTR lookup.
- A keyserver specified by a literal IP address: We need a host name for the certificate. Either we take it from the PTR record or we reject TLS access. I don't think that this is a real-world use case, but for debugging it is/was really helpful. Should we reject hkps via literal IP addresses?
It is quite possible that some of these cases do not work right. I
have done only manual testing and the matrix is pretty complex: We
have all combinations of direct/Tor, v4 only, v6 only, v4, v6,
interface up, network down.
Right, by "duplicate host", I mean hosts reachable by several addresses
and in particular by v4 and v6. My test back when I originally
implemented the code showed that when hosts are down their other
addresses are also down. Without marking the host dead, the code
would have tried the same request on another address and would run
into the next timeout.
I also think that most delays are due to connection problems and not due to DNS
problems. And most connection problems are due to lost network access. There
we might need to tweak the code a bit similar to what I did for ADNS.
Test added in 5aafa56dffefe3fac55b9d0555c7c86e8a07f072.
Thanks for the report. The message you quoted is a very general error message,
and unfortunately does not really help identifying the problem.
Please describe in detail your setup, and how to reproduce this problem.
Here's a concrete example of how using PTR records gets things mixed up.
keyserver.stack.nl offers keyserver service on port 443.
It has an A record at 131.155.141.70.
But the PTR record points to mud.stack.nl:
70.141.155.131.in-addr.arpa. 69674 IN PTR mud.stack.nl.
and the https SNI and HTTP Host: directives provide an entirely different
website depending on whether you access it with:
https://mud.stack.nl/
or
https://keyserver.stack.nl/
If you access it as https://hkps.pool.sks-keyservers.net/, you get the
"keyserver" view. But if you access it by the name in the PTR record
("mud.stack.nl"), then you get the mud view (and a 404 on any /pks URLs).
Even more troubling is that dirmngr successfully connects to mud.stack.nl and
does the query, even though it is configured to talk only to
hkps.pool.sks-keyservers.net.
This suggests that anyone able to spoof a PTR record to me can get my dirmngr to
send my potentially-sensitive keyserver queries to an entirely different webserver.