Do we need to do something for 1.4?
Dec 17 2012
Dec 15 2012
Finishing things up now.
Note that this implies setting Host: properly as well
David, what is the status?
Dec 10 2012
Dec 6 2012
Dec 3 2012
Note that the topic is currently under discussion on gnupg-devel.
Dec 2 2012
Taking
Nov 21 2012
That was fixed by
2010-11-11 Werner Koch <wk@g10code.com>
- agent.h (opt): Add field SIGUSR2_ENABLED.
- gpg-agent.c (handle_connections): Set that flag.
- call-scd.c (start_scd): Enable events depending on this flag.
and thus 2.0.19 should work fine.
Thanks to gniibe for mentioning this.
Nov 8 2012
Fixed for 1.4.13 (95347cf9).
Fixed for 1.4.13 (e3e5406)
Do you still have this problem with 1.4.12?
Fix for 1.4.13 (commit 64e7c23).
Fixed in git for gnupg 1.4.13, Libgcrypt 1.5.1 and Libgcrypt 1.6.0.
The reason why I was not able to replicate this bug was that
I didn't use -std=c99 with gcc >= 4.3.
We won't do this for 1.4.
I would say this should go into 2.1.
Meanwhile gniibe fixed a lot more bugs and also backported them to 2.0. Thus I
close this bug.
Nov 7 2012
This is not a bug. The description of --max-cache-ttl reads:
Set the maximum time a cache entry is valid to @var{n} seconds. After
this time a cache entry will be expired even if it has been accessed
recently. The default is 2 hours (7200 seconds).
Thus even if you set the cache-ttl-ssh > max-cache-ttl, it will expire after
max-cache-ttl seconds.
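For illustration, a minimal gpg-agent.conf with example values (the numbers are arbitrary; the behaviour described above means the second option wins):

```
# ~/.gnupg/gpg-agent.conf -- example values only
default-cache-ttl-ssh 10800  # ask for 3 hours for ssh entries ...
max-cache-ttl 7200           # ... but every entry expires after 2 hours anyway
```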
Nov 6 2012
Oct 20 2012
The behaviour matches that observed in released versions; I was debugging a
problem observed in the released versions, not reviewing code looking for issues.
Whether or not it's used in the current development branch, this has caused an
interoperability issue in the real world for the keyserver operators, causing a
functionality deployment to be rolled back and resulting in filtered results,
reducing the pool of available keyservers.
Since Issue1447 is a security impacting issue which will need a CVE and a security
release to fix anyway, it would really be nice to try to get the fix for client
behaviour into a version which is likely to be pushed out widely. Not critical,
security comes first, but if we can leverage the security release to improve
interop, that would be helpful.
In practice, we (the keyserver operators and pool operators) are stuck not able to
use SRV to point to non-default ports for at least a couple of years. This is
very unfortunate, given the efforts currently being made to make deployments more
robust, with TLS more widely deployed.
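For context, this is the shape of SRV record at issue; the zone data below is hypothetical (example.net), but the layout follows RFC 2782, with the port field (8443 here) carrying the non-default port a conforming client is expected to honour:

```
;                                          prio weight port target
_pgpkey-https._tcp.hkps.example.net. 3600 IN SRV 10 5   8443 keys1.example.net.
```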
Oct 19 2012
So you are saying, every distribution supporting a co-installation should patch
GPG to fix this?
The user might not even know what gpg is, that several major versions exist, or
which options are available where. He just sees email encryption not working any
longer. In most cases, he won't have decided to install both versions - he just
has software installed which, indirectly, depends on both.
Well, it might still exist, but it is not used anymore. Remember that this is
the development branch.
The one who decided to install both versions while at the same time using an
option not available for gpg1. It might be caused by a package manager.
Oct 11 2012
Of course it can be fixed by changing the config files. However, the default
behaviour (if I start gpgconf as a new user) is to use the shared config file
for both versions. As I see it, programs are responsible to set up the
configuration in the user's home directory if they need one - at least, that's
common practice in my experience.
If this is not a bug in GnuPG, then whose installation problem is this?
% git remote -v
origin git://git.gnupg.org/gnupg.git (fetch)
origin git://git.gnupg.org/gnupg.git (push)
% git status
On branch master
nothing to commit (working directory clean)
%
I did the pull on the day I filed the bug, and as of the commit stated, the
directory exists. I just did a "git pull", no change. I didn't write "git current"
in this bug.
http://www.gnupg.org/download/cvs_access.en.html still points to the repo above, so
that's what I pulled. If that's no longer correct, I can pull another repo.
But still, if you check out the revision stated, you'll see the behaviour, which is
reflected in current releases of GnuPG.
Gpgconf uses the configuration file as advertised by gpg.
For example:
$ gpg2 --gpgconf-list | grep ^gpgconf-gpg.conf:
gpgconf-gpg.conf:16:"/home/wk/.gnupg/gpg.conf
$ gpg --gpgconf-list | grep ^gpgconf-gpg.conf:
gpgconf-gpg.conf:16:"/home/wk/.gnupg/gpg.conf
$ touch ~/.gnupg/gpg.conf-1
$ gpg2 --gpgconf-list | grep ^gpgconf-gpg.conf:
gpgconf-gpg.conf:16:"/home/wk/.gnupg/gpg.conf
$ gpg --gpgconf-list | grep ^gpgconf-gpg.conf:
gpgconf-gpg.conf:16:"/home/wk/.gnupg/gpg.conf-1
Thus you only need to create a gpg.conf-1 (or conf-2) and you are
done. This is an installation problem.
What do you mean by "git current"? The current "git master" has no keyserver/
stuff.
Oct 9 2012
Kristian has removed the SRV records at
_pgpkey-https._tcp.hkps.sks-keyservers.net, so the explanation in step 3 might
seem not to match reality; that is a recent change, made because of this Issue
and Issue1446.
If you set up your own DNS pool for testing, I'm happy to send you a CSR for a new
vhost to help with debugging.
Oct 8 2012
Oct 7 2012
Sep 26 2012
Yeah sure, I meant "NOT a problem".
Yes I know what you mean. But without locking you will never be able
to get it right.
As you noted there is actually a problem with Libgcrypt under
Windows. In Libgcrypt we lock the seed file and thus the fatal error
is the right thing to do. But.....
#ifdef __GCC__
#warning Check whether we can lock on Windows.
#endif
#if LOCK_SEED_FILE
Thus we should implement locking for Windows. The problem here is
that there is no portable advisory locking in Windows. And frankly our
fcntl locking approach does not work on all Unices either and worse it
does not work if the home partition is on certain remote file systems.
The only solution I see is to employ the new dotlock code from GnuPG
here. It is slower than fcntl locking but very portable.
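To make the portability trade-off concrete, here is a minimal dotlock-style sketch (not GnuPG's actual dotlock code; names and error handling are simplified). The point is that link(2) is atomic even on NFS, which fcntl record locking cannot guarantee:

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Try to take NAME.lock by creating a unique temp file and hard-linking
   it to the lock file.  link(2) is atomic even over NFS, which is why
   dotlocking is more portable than fcntl(F_SETLK).  Returns 0 on
   success, -1 if the lock is already held (or on error).  */
static int
take_dotlock (const char *name)
{
  char tmp[4096], lock[4096];
  int fd, rc;

  snprintf (tmp, sizeof tmp, "%s.#%ld", name, (long) getpid ());
  snprintf (lock, sizeof lock, "%s.lock", name);

  fd = open (tmp, O_WRONLY | O_CREAT | O_EXCL, 0644);
  if (fd == -1)
    return -1;
  dprintf (fd, "%ld\n", (long) getpid ());  /* record the owner's pid */
  close (fd);

  rc = link (tmp, lock);   /* atomic: fails if the lock file exists */
  unlink (tmp);            /* the temp name is no longer needed */
  return rc == 0 ? 0 : -1;
}

/* Release the lock by removing the lock file.  */
static void
release_dotlock (const char *name)
{
  char lock[4096];
  snprintf (lock, sizeof lock, "%s.lock", name);
  unlink (lock);
}
```

The price is that a stale lock (from a crashed process) must be detected via the recorded pid, which is why the real dotlock code is considerably larger.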
I don't think we will fix this in gpg 1.4.
Make sure your umask is set up properly. This is standard Unix behaviour and
there is nothing GPG can do about it. Whether you use --output or the usual
redirection should not make a difference.
In any case we can't change the behaviour of files created by --output because
that would break all kinds of users.
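The umask behaviour is easy to verify independently of gpg (GNU stat assumed for the -c option):

```shell
# A newly created file gets mode 0666 (or whatever the program requests)
# minus the umask bits -- identically for --output and shell redirection.
umask 077                 # owner-only for anything created from now on
touch demo-secret
stat -c %a demo-secret    # prints 600 on GNU stat
rm -f demo-secret
```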
Sep 19 2012
Sep 18 2012
This is known but a problem from a security POV.
I suppose you mean "NOT a problem"? I think it might be a problem in
opportunistic encryption scenarios if gpg encryption failures caused by
random_seed access conflicts are ignored like failures caused by missing keys.
But usually it's just a nuisance like any other randomly failing program.
The non-locking read is on purpose - if it works: okay. Otherwise we
re-generate a seed file.
I see that the code tries to tolerate access conflicts, but there's still a race
condition if the random_seed file is truncated between fstat() and read(). The
read() error handling is incomplete. Maybe this pseudo-patch explains best what
I mean:
diff -ru gnupg-1.4.12/cipher/random.c gnupg-1.4.12/cipher/random.c
--- gnupg-1.4.12/cipher/random.c	2012-01-24 09:45:41.000000000 +0100
+++ gnupg-1.4.12/cipher/random.c	2012-09-18 19:47:54.449578800 +0200
@@ -492,11 +492,17 @@
     do {
 	n = read( fd, buffer, POOLSIZE );
     } while( n == -1 && errno == EINTR );
-    if( n != POOLSIZE ) {
+    if( n == -1 ) {
 	log_fatal(_("can't read `%s': %s\n"), seed_file_name, strerror(errno) );
 	close(fd);
 	return 0;
     }
+    else if ( n == 0 ) {
+	... handle like sb.st_size == 0
+    }
+    else if ( n != POOLSIZE ) {
+	... handle like sb.st_size != POOLSIZE
+    }
     close(fd);
Yes, we could do the file locking
Proper locking would be the ideal solution, but a better read() error handling
would already be sufficient to avoid the sporadic fatal errors on random_seed
accesses.
and iirc, we do this in libgcrypt (GnuPG-2).
I'm just checking this and ... sorry, no. The gpg2.exe from Gpg4win 2.1.0 shows
the very same error:
note: random_seed file is empty
note: random_seed file is empty
Fatal: can't read `C:/Dokumente und
Einstellungen/jechternach/Anwendungsdaten/GnuPG/random_seed': No such file or
directory
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
note: random_seed file is empty
Sep 17 2012
This is known but a problem from a security POV. The seed file is a cache. If
the seed file can't be read a new one is created. Right, there might be a
performance issue but at least on Windows this is not as severe as on certain
Linux systems.
The non-locking read is on purpose - if it works: okay. Otherwise we
re-generate a seed file. Yes, we could do the file locking and iirc, we do this
in libgcrypt (GnuPG-2).
The error on short reads is also on purpose. We want those 600 bytes at once
and nothing else. If another process is writing the seed file we may see the
short reads. But in this case there is no clear answer what to do - thus we
assume the "no seed file" case.
Aug 17 2012
May I ask for the status of this bug?
Is this going to be implemented in the gnupg 2.x series?
May I ask what happen with this bug?
Just trying to keep track of these bugs in Debian Bug Tracking System.
Aug 14 2012
So you want to open /dev/tty (which gpg does anyway if needed; see
common/ttyio.c) and pass that to the agent so that the agent may pass
it on to Pinentry if he needs it. That may work.
However, I don't like it because you claim the tty resource and send it
to a different process to be used only if needed. With our current
system we use this resource only if we really need it.
Although libassuan implements descriptor passing, it can't be used
with Pinentry, because that one uses a simple pipe and not a socket.
Yes, we could change that too, but then you can't use a shell script
instead of Pinentry anymore.
Aug 10 2012
Even in that case:
foo | gpg 2>gpg.log | bar
if the main gpg process opens "/dev/tty", then it will still get the user's
terminal. /dev/tty is strong magic -- it will work even if fds 0, 1, and 2 are
pointing elsewhere.
And if it then passes that file descriptor to an agent (I assume you're aware of
the ability to pass open file descriptors across unix-domain sockets, so that the
target process actually receives a copy of the same file descriptor) then the agent
will have the user's terminal also.
I reiterate my offer to help implement something like this, if you can point me to
the right place in the code (i.e. where *exactly* are these requests to the agent
that require user interaction happening? Where would be the best place to put file-
descriptor passing code?)
Your solution will not work either. It still depends on a standard fd being
connected to the tty, which is not always the case;
foo | gpg 2>gpg.log | bar
is a common pattern. I don't see why it is such a problem to set a variable
for each new terminal. If you don't like GPG_TTY, you may contact the Open
Group to define a new standard variable for POSIX. FWIW, GPG_TTY has been
used for more than a decade.
Aug 9 2012
OK, here's what I hear you saying: Even if my patch would do the right thing in the common case
for 2.0.19, when 2.1 comes along it will stop helping.
I agree with you that *if you are using a long-lived agent*, then the patch I had proposed is not
sufficient. I had been discounting that case as "not the common case"; now I realize it is going
to be the common case soon.
I think, at this point, we're going to have to consider using file descriptor passing (SCM_RIGHTS)
from the gpg to the agent.
It *will* be the case that, even if someone has redirected stdin/stdout in the gpg process, that
the gpg process will be able to open its "/dev/tty" and get a useful file descriptor. I agree with
you that it can't (at least not portably) work backwards from that to find the *name* of its tty,
but it can at least open /dev/tty, itself.
If gpg then passes that open file descriptor across the unix-domain socket to the agent (at least
I assume unix-domain sockets are used for gpg/agent communication), then the agent will have a
copy of *that* file descriptor.
Can you point me to where the agent receives the set of data that makes up one "request" or "work
unit"? I can try to make a new patch that uses file-descriptor passing.
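For reference, descriptor passing over a unix-domain socket looks roughly like this (illustrative helper names, not gpg's API; the agent side would call recv_fd on its connection socket):

```c
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Send file descriptor FD over the connected unix-domain socket SOCK
   using SCM_RIGHTS ancillary data.  The receiving process gets its own
   copy of the descriptor, referring to the same open file (the tty).  */
static int
send_fd (int sock, int fd)
{
  struct msghdr msg;
  struct iovec iov;
  char dummy = 'F';                  /* at least one data byte must travel */
  union { char buf[CMSG_SPACE (sizeof (int))]; struct cmsghdr align; } u;
  struct cmsghdr *cmsg;

  memset (&msg, 0, sizeof msg);
  iov.iov_base = &dummy;
  iov.iov_len = 1;
  msg.msg_iov = &iov;
  msg.msg_iovlen = 1;
  msg.msg_control = u.buf;
  msg.msg_controllen = sizeof u.buf;

  cmsg = CMSG_FIRSTHDR (&msg);
  cmsg->cmsg_level = SOL_SOCKET;
  cmsg->cmsg_type = SCM_RIGHTS;
  cmsg->cmsg_len = CMSG_LEN (sizeof (int));
  memcpy (CMSG_DATA (cmsg), &fd, sizeof (int));

  return sendmsg (sock, &msg, 0) == 1 ? 0 : -1;
}

/* Receive a descriptor sent by send_fd; returns it, or -1 on error.  */
static int
recv_fd (int sock)
{
  struct msghdr msg;
  struct iovec iov;
  char dummy;
  union { char buf[CMSG_SPACE (sizeof (int))]; struct cmsghdr align; } u;
  struct cmsghdr *cmsg;
  int fd;

  memset (&msg, 0, sizeof msg);
  iov.iov_base = &dummy;
  iov.iov_len = 1;
  msg.msg_iov = &iov;
  msg.msg_iovlen = 1;
  msg.msg_control = u.buf;
  msg.msg_controllen = sizeof u.buf;

  if (recvmsg (sock, &msg, 0) != 1)
    return -1;
  cmsg = CMSG_FIRSTHDR (&msg);
  if (!cmsg || cmsg->cmsg_type != SCM_RIGHTS)
    return -1;
  memcpy (&fd, CMSG_DATA (cmsg), sizeof (int));
  return fd;
}
```

This only works over AF_UNIX sockets, which is exactly the caveat raised later in this thread: a pipe-connected Pinentry cannot receive descriptors this way.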
To reiterate: making the user *in the common case* set an environment variable is not acceptable.
Environment variables are a nice thing to be able to set to change behavior from the default; but
if the user is happy with the default behavior they should not have to set any environment
variables in order to use a piece of software. If I had to have one environment variable setting
for every program I used regularly, my .cshrc would be *huge*!
Thanks!
See my comments for T1406. It is clearly a clang bug.
Without having a controlling tty you can't get the name of the
controlling tty. That is why we need other ways to tell the
background process (i.e. gpg-agent) which tty a pinentry shall use.
It doesn't matter who calls the agent; even if the caller has a tty in
some setups, that is not a fact we can rely upon. For example, in 2.1
there will be no on-demand starting and stopping anymore; instead the
agent is started just once (you can even compile 2.0.19 with this
behaviour).
Your options are:
- Set GPG_TTY for each new TTY.
- Pass --ttyname=$(tty) on the GPG commandline.
- Start the agent in advance and use --keep-tty --ttyname=$(tty) to lock the Pinentry to the tty on which the agent was started.
BTW, it is common practice to dup fd 0 to /dev/null. Thus it does not
help to use ttyname (0) instead of ttyname (1) as the default.
Fixed in master (4ea37fe4). We use the docs for master for releases of all
branches - thus this fix will be applied to the next 2.0 and 1.4 release as well.
Thanks.
Aug 1 2012
To restate: If you are starting the agent on demand, AND if you are feeding gpg
data on standard input, then initscr() will NOT do the right thing. At least on
FreeBSD. Are you saying that on your OS, initscr() will, internally to itself,
open "/dev/tty"?
If that's the case for you, then it's not the case for me on FreeBSD. The
initscr() on FreeBSD doesn't do any magic, it just uses fds 0 and 1. Hence the
fact that I am trying to get it to open /dev/tty in that case.
Yes, I saw the if (tty_name) in pinentry when I was looking through all of that
stuff. The problem for me is NOT that pinentry has no controlling terminal,
because I *am* starting the agent, as you say, on demand.
The problem for me is that pinentry has inherited file descriptor 0 from gpg,
and it is *not* a tty, it is the input file that I am asking gpg to process.
So no, the if (tty_name) thing doesn't really work too well if you are feeding
gpg something on its standard input, AND if you are starting gpg-agent on
demand.
It does not work on glibc based systems either. Actually the correct
way would be to use ctermid(3) but that has the same problem as
ttyname - it even returns a fixed string without trying to find the
tty in /dev/ or /proc.
Pinentry actually defaults to the default tty if no GPG_TTY has been
passed to it from gpg-agent. Here is the code from the curses
pinentry:
/* Open the desired terminal if necessary. */
if (tty_name)
  {
    ttyfi = fopen (tty_name, "r");
    if (!ttyfi)
      return -1;
    ttyfo = fopen (tty_name, "w");
    if (!ttyfo)
      {
        int err = errno;
        fclose (ttyfi);
        errno = err;
        return -1;
      }
    screen = newterm (tty_type, ttyfo, ttyfi);
    set_term (screen);
  }
else
  {
    if (!init_screen)
      {
        init_screen = 1;
        initscr ();
      }
    else
      clear ();
  }
TTY_NAME has been set via an Assuan option which should have come from
GPG_TTY. If this has not been set (or any of the --ttyname options
used), Pinentry uses initscr. The problem is that gpg-agent, and thus
pinentry, usually has no controlling terminal and therefore there is no default tty.
It works for you because you started gpg-agent on demand. That is
something to be avoided because the agent won't be able to cache the
passphrase then.
I have not checked whether your patch may harm. However, I remember
that we had quite some problems calling pinentry from a background
process and the way we do it today works in almost all cases -
assuming your system has been properly configured.
Ugh. That trick doesn't work on Solaris, it looks like.
The basic place I'm trying to get to is... in the simple case... a user logs in,
and isn't using gpg-agent, gpgme, or anything like that... and just types:
some_command | gpg -a --clearsign > some_file
that it will work.
It seems *to me at least*, like defaulting to the literal filename "/dev/tty",
as in my patch, at least *does no harm*.
Maybe it doesn't solve the gpg-agent case or the gpgme case 100%. But at least
it makes the simple case work. And people can always override it by setting
GPG_TTY, if they need to.
Make sense?
If I come up with a modified patch that opens /dev/tty, calls ttyname on *that*,
and gives *that* tty name to pinentry, will you consider it?
Thanks!!
(btw, I don't use the agent at all... my usage of gpg is very vanilla, just the
plain way of using it on the command line that has worked ever since gpg1, but
is now broken in gpg2)
Looks like calling ttyname() on a freshly open()ed "/dev/tty" works, at least on
FreeBSD:
% cat ttyname.c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
int main(int argc, char **argv)
{
int fd = open("/dev/tty", O_RDWR, 0);
char *s = ttyname(fd);
printf("%s\n", s ? s : "NULL");
return 0;
}
% gcc -o ttyname ttyname.c
% ./ttyname
/dev/pts/9
% ./ttyname < /dev/null >& /tmp/foo
% cat /tmp/foo
/dev/pts/9
Notice that even though the program's stdin was /dev/null, and its stdout
and stderr were both going to a file (I use tcsh, hence the >& syntax),
it still managed to figure out what the terminal was.
Consider the case of GPGME. All standard descriptors are not connected to a tty.
I don't know a way to get the actual terminal name in a portable way. Thus we
need to rely on the shell to give us the name of the tty and pass it via an envvar.
In your case you may want to use gpg-agent's --keep-tty option.
Yes, we can do this for 2.1. In case there is an already translated string
available we can backport this also to 1.4 and 2.0.
So now, what shall we do: implement proper file locking and make sure that the
user has permissions to both files? It will be quite some code to get this all
done right.
Another thought would be that ttyname(2) is possibly somewhat more likely to
give a useful result than either ttyname(0) or ttyname(1). That is, assuming
that people redirect stdin and stdout all the time, but rarely redirect stderr.
I'm just tossing out ideas here. My gut reaction is still "just use /dev/tty",
but I'm hoping that if I toss out some ideas that maybe one of them will be
helpful. :-)
Hmmmm.
Would it work to open /dev/tty, and then call ttyname on *that*? Rather than
calling ttyname on stdin always?
I really dislike the solution of "the user must set $GPG_TTY". That is broken,
period. If I'm not making use of any advanced functionality like the agent,
please don't penalize me (as a user) for the fact that such advanced
functionality *exists*.
I want the simple case -- i.e. I'm logged in, and I run gpg on a single tty --
to Just Work, without me having to set any environment variables to make it
work.
That is not a bug but required by the specs. Leading dashes are required to be
escaped with "- "; see RFC 4880. Use "--output FILE" to get the cleartext.
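For illustration, a sketch of what RFC 4880 dash-escaping looks like inside a clearsigned message (the body text here is made up; only the "- " prefix matters):

```
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

An ordinary line needs no escaping.
- --this line began with two dashes in the original cleartext
-----BEGIN PGP SIGNATURE-----
...
-----END PGP SIGNATURE-----
```

A verifying implementation strips the "- " on output, which is what --output gives you directly.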
That does not work.
For example: The GPG process may map /dev/tty to /dev/pts/4. Then it
passes the string "/dev/tty" via gpg-agent to pinentry. Pinentry is
called by gpg-agent but gpg-agent was started on different tty. Thus
for gpg-agent /dev/tty may map to /dev/pts/2. The pinentry will now
pop up at /dev/pts/2 - it is very likely that no terminal is attached
to it and thus you won't even see a pinentry on some other tty.
Agreed, the fallback we currently have does not work either in this
case. Printing a warning if GPG_TTY is not set would probably be the
better alternative.