Please take such discussions to the mailing lists. As soon as a resolution has
been found please update the status of this bug.
Jan 31 2017
Jan 24 2017
This is because your idea of security is wrong in two different aspects.
First: you assume that you just need to call or declare a system "trustworthy",
and then it would stop ever having any bug, failure, or any sort of
malfunction. "Secure" and "trustworthy" are not absolute properties of a
device; they are always relative to a given threat or attack vector, and these
security terms do not cover ordinary bugs or mistakes made by operators. Having
a "trustworthy" system does not mean that it would not write a secret key to a
storage device if you accidentally ask it to do so (e.g. because you still have
a CWD on the device when running any key-related software). So you need to
prevent bad operation or malware from leaking the secret keys off the device,
and it won't help in any way to call the device "trustworthy".
Second: you erroneously apply the term "trustworthy" to a storage device. Trust
belongs to the area of system security, while mobile storage devices (as dumb
read-write devices) belong to communication security. In communication security
there is no trust. A regular (perfect) storage device is something where you can
write data and read it back later, no matter whether you trust it or not. You
cannot write a message to a storage device and hope that an attacker would not
read it, unless it is part of a system (which it is not if it is, as you intend,
a mobile device that is to be connected to another, insecure machine).
Things get worse, since thumb drives are not perfect storage devices, but little
systems just pretending to be one. Putting a trusted and an untrusted system
together (i.e. putting a thumb drive into a computer) breaks the system security
of the first system.
Third: the current design is illogical and inconsistent. You use and create a
device (a crypto USB device like a YubiKey) which is intended to protect the key
while making the key usable to the authorized user for cryptographic
operations, e.g. creating signatures.
Good. That's what the device and its API are designed for, and since there's
currently no better option available, that's the best way to do it.
But then, if there is a well-defined way, considered secure, to use the key
for signatures (which might include some internal logging to track signatures),
why should there be a second, different way to create signatures, outside the
device, just to sign its own public key record and ID (self-certify)?
Why isn't the normal way used for this special signature as well?
E.g. X.509/S/MIME/openssl do it correctly: they first create a pub/sec key pair
and then use a regular signature operation to generate a CSR or self-cert.
If there is a well-defined and secured way to create signatures, it's bad design
to use another operation just to self-certify without good reason.
And inventing the need to communicate with the outer world over an imprecise
protocol (which it is if you exchange large storage devices) which could
transmit just about anything stops the device from being trustworthy.
It is wrong to think that you can exchange storage devices because the system
was trustworthy. It's the other way round: it's no longer trustworthy once you
have allowed it to communicate after key creation.
To me, your assumptions seem flawed. You somehow assume that you can get a
trustworthy computer A, but cannot get your hands on a trustworthy device to
transport data from A to B?
(Even if your assumptions hold, you can always take A apart and use its data
storage device (which is trustworthy) and use it to carry the public key.)
Jan 23 2017
Fix is in 2.1.18
How you convey data between an air-gapped box and the general desktop is out
of scope for GnuPG. This is OPSEC and you have to set up your own rules. Aside from
USB sticks, it is possible to burn stuff to a CDROM, use a floppy, SD card, a
printer and a scanner, a camera and OCR, you name it in your security policy.
Please direct your question to a mailing list. I can't see why this is a
feature request.
Jan 19 2017
You can connect your token to the computer, but for some reason
cannot connect a thumb drive to it?
Exactly.
That's the point.
A token is a security device from a (hopefully) known manufacturer with a
(hopefully) well known API, where you can survey what data it carries out. You
need to use it (if you don't want to reveal your key as a file to insecure
machines), and it is no surprise that it will carry the secret key. That's the
idea.
A thumb drive, on the other hand, is evil. You have a file system and lots of
hidden space on it, and you can't check what malware will hide on it or what
will be left on it simply through mistakes or careless use of software (e.g.
having the CWD on the thumb drive while doing some crypto operations).
Furthermore, thumb drives are reprogrammable, sometimes quite easily. You can
teach regular thumb drives to behave like CDROMs, keyboards, or any other USB
device, and thumb drives are well known as an attack vector for bringing in
malware.
However, the major problem is not to connect the thumb drive to the secure
computer. It's not in general a bad idea to use a thumb drive as a backup
storage system for the secure computer.
It's a bad idea to connect it to any other machine after it. Once the thumb
drive has been connected to the secure machine, it should be considered as
contaminated with secrets and never be used outside the secure environment. So
your question somewhat missed the point.
However, the secure environment (secure computer and maybe thumb drives) should
be completely isolated (some people call it "air gapped") and the only
connection to the outer world should be what's absolutely needed and well
defined. And that's the token (which, after all, is exactly designed for that
purpose).
If A is to be kept *really* secure, it must not have any network contact
Agreed.
and not
export any files from the point in time where the keys are generated.
I don't follow. You can connect your token to the computer, but for some reason
cannot connect a thumb drive to it? I don't see why exporting data from that
computer is problematic. If you are worried about compromised USB devices, you
should also be worried about the computer being manipulated in the first place,
or the OpenPGP token. Furthermore, you could use the computer's screen to export
any information.
Jan 18 2017
Jan 16 2017
FTR: EFL == enlightenment foundation libraries. Calling this
"Enlightenment-based" is like calling the GTK pinentry "Metacity-based".
It does work, but contrary to my expectations it is rather unpolished. I'll
talk to Mike.
Jan 13 2017
Done now
Jan 11 2017
I currently know of no more problems, so let's resolve this.
Jan 6 2017
In 2.1 --quit is honored here
There are keyservers which listen on port 80 or 443. They can be used in such
cases. See https://sks-keyserver.net.
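For example, such a keyserver can be selected with a configuration line like
the following (the host name is only illustrative):

```
keyserver hkp://keys.example.org:80
```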
Actually we do not need that function on Windows. On Unix it is called at
startup to get a list of files not to close. On Windows we do not need to close
the files before a CreateProcess and thus close_all_fds is a dummy anyway.
I removed calling this function under Windows. To go into 2.1.18.
I do not think that an expired key should be ignored. The reason is that it
won't be possible to verify an old package, because it is common that keys
expire at some time. This does not say anything about whether the key has been
compromised.
However, if a key has been revoked, that may be an indication that the key has
been compromised and that old signatures may have been replaced by faked ones.
I would agree to return failure in this case.
I would suggest adding
  gpgconf --launch gpg-agent
  GPG_AGENT_INFO="$(gpgconf --list-dirs agent-socket):-1:1"
  export GPG_AGENT_INFO
to your startup script. This starts gpg-agent and puts the correct socket name
into the environment variable.
I recently did this change:
- return 0;
+ return !access (filename, R_OK)? 0 : gpg_error (GPG_ERR_EACCES);
(commit 5d13581f4737c18430f6572dd4ef486d1ad80dd1)
Does that solve your problem?
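The intended behaviour of that change can be illustrated with a small shell
probe: access(2) with R_OK fails for a missing or unreadable file, which now
maps to GPG_ERR_EACCES instead of silently returning success (the path is
illustrative):

```shell
# Mimic the new check: report EACCES when the file is not readable.
probe() { [ -r "$1" ] && echo "ok" || echo "EACCES"; }
probe /nonexistent/secring.gpg   # → EACCES
```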
Adding %f does not help much because it is only used internally. I would be in
favor of adding an ssh-key-mode option so that the user can select the hash algo
and the output format.
I think this is a dup of T2819
That issue also contains a possible implementation. I'm not sure anymore why we
didn't push it; I think it was because we were under release pressure and wanted
to look into this later.
Jan 2 2017
Hi,
The patch works. There's one more issue that's been standing for a bit longer already,
and that you might want to tackle at the same time: there's no argp.h header on Mac.
On Linux it is only a problem with the headers (e.g. the -dev package) as the
That's actually an orthogonal issue, and one that's probably easier to rectify as any
changes only become apparent when dependent software is being built.
libraries have different soversions
This is also the case on Mac, but the link library doesn't have a soversion. It's
called libqgpgme.so or libqgpgme.dylib.
There is of course the option to rename just that symlink. A bit of a hack, but one
that's relevant only during the link step, when dependent software is being rebuilt.
How does MacPorts handle this in general? IMO this is not a (q)gpgme(++)
specific problem as you will have this problem with each ABI break.
MacPorts does many things like they're done on more traditional *nix desktops, i.e.
install libraries in a central, shared location (--prefix=/opt/local by default).
There is nothing specific it does to handle ABI breaks; they can hold up an upgrade,
or a patch is applied at some level, or a conflict is registered. Sadly there is no
central way to create -dev packages, which doesn't help here.
E.g. when we
break the ABI in QGpgME libqt5qgpgme.dylib may be incompatible and we would need
a new name.
That I don't see. The problem here isn't so much the ABI break compared to the
version shipped by kdepimlibs4, but the fact that an incompatible Qt version is used.
So no, a SOVERSION=8 upgrade doesn't impose a library name change. Cf. Poppler and
its Qt backends; they're called libpoppler-qt4 and libpoppler-qt5.
An alternative would be to do like QCA: install the library wherever Qt's own
libraries are installed. That automatically resolves the conflict with the old
version included with kdepimlibs4, and might be less disruptive for existing
distribution packages.
It's not only the build system but the code using QGpgME / GpgME++ will be more
complex as they would need to have feature checks for both the QGpgME version
What I had in mind was a build system that refuses to do a mismatching build of
QGpgME X.Y.Z against a GpgME++ that's not X.Y.Z. If you don't do runtime checks
there's no guarantee anyway beyond what the dynamic linker can give, I think.
Distributions can build QGpgME and only bundle the QGpgME bits, and then install
those against any GpgME install. I've done that for a bit with QGpgME 1.7.x
against GpgME++ 1.8.0, and didn't run into any issues.
You could probably even argue that people would be less likely to try this kind
of thing if the build system gave off a big hint that they really shouldn't be
doing it. It's not like it's particularly difficult to install only QGpgME,
after all.
Hi,
thanks for your feedback.
Regarding the library suffix in the cmake config files: sorry about that, I
forgot macOS ;-). Can you please test the attached patch
(macos-cmake-config-fix.diff) that reintroduces the lib suffix to distinguish
between macOS and Linux?
QGpgME builds libqgpgme, preserving the same name as the library that used to
be built by kdepimlibs4.
There was a discussion after the 1.7.0 release about this. In summary: I agree
that we should have changed the name to avoid this conflict, but we think that
it's now too late to do that, as we want to avoid additional hassle for packagers.
On Linux it is only a problem with the headers (e.g. the -dev package) as the
libraries have different soversions. On Windows it's not a problem at all as the
application ships the library it requires.
Is this something that might be considered upstream, e.g. for 1.8.1, possibly as
a build option? I realise this may not be something that has already come up on
Linux desktops but it's likely to do so in other distribution systems; it is
blocking us in MacPorts at this moment, for instance.
How does MacPorts handle this in general? IMO this is not a (q)gpgme(++)
specific problem as you will have this problem with each ABI break. E.g. when we
break the ABI in QGpgME libqt5qgpgme.dylib may be incompatible and we would need
a new name.
On Linux we have soversions, and on Windows and macOS the libraries are IMO
usually shipped with the application. But how does this work on MacPorts?
It will probably be a bit more complex to maintain the build system because
you'd want to exclude builds against mismatching QGpgME versions, but once
that's done that should be all, no?
It's not only the build system but the code using QGpgME / GpgME++ will be more
complex as they would need to have feature checks for both the QGpgME version
and the GPGME version to determine which features are available. This was a huge
hassle in the old days and one of the reasons we wanted to move them closer
together so that you can rely on the API once you have a minimum required version.
See e.g.:
https://git.gnupg.org/cgi-bin/gitweb.cgi?p=gpgme.git;a=commitdiff;h=433bb8e84b2d1e50b5c5b9f7f2006b60cd7d7785
That removed lots of these feature checks.
Dec 21 2016
Aside from the required build system changes, we will run into problems
evaluating bug reports.
It will probably be a bit more complex to maintain the build system because
you'd want to exclude builds against mismatching QGpgME versions, but once
that's done that should be all, no?
It's just a bit of a pity that you have to build all of the cpp bindings again
if you just want to build the Qt bindings.
Dec 20 2016
The web page has been updated.
Done. Note that the https only covers the frontend; the backend is reached
unencrypted. We can't easily change this.
Dec 19 2016
Ok, profiles are now there and look workable, but it looks like they only
support configuration values that are currently accessible through gpgconf:
[gpg]
trust-model tofu+pgp
keyserver-options auto-key-retrieve
auto-key-locate local,wkd,pka,cert,dane
Leads to:
gpgconf: /opt/gnupg/etc/gnupg/automated.profile:7:0: error: unknown option
'trust-model' in section 'gpg'
gpgconf: /opt/gnupg/etc/gnupg/automated.profile:8:0: error: unknown option
'keyserver-options' in section 'gpg'
So we need more options promoted to gpgconf. Which I think is ok; we can just
mark them as Expert / Invisible, and GUIs should respect that.
Dec 16 2016
I went over the other programs, and did not see any glaring problems. I have
decided to ignore the socket configuration for now. I'm quite happy with the
changes, but feel free to reopen this bug.