GnuPG

Design gap in openpgp card process
Closed, Wontfix · Public


I ran into a problem for which I couldn't find any solution in the docs.

Given scenario:

A very secure computer A, a less secure computer B, and an OpenPGP card (e.g.
from Yubico), all completely freshly installed/clean/empty.

The key pair is to be generated and kept on computer A, and put on the openpgp
card using computer A.

The key is then to be used on computer B by using the openpgp card.

The procedure by design is that A would upload the pubkey to a keyserver (or
export it as a file) after key generation. When the card is connected to B, B
doesn't know about the key yet, but through gpg --card-edit and fetch (or by
importing a pubkey file), B learns about the pubkey, then learns from the
OpenPGP card about the available keys, and creates its secret key files, which
are in fact just stubs pointing to the OpenPGP card.

That works as expected and described in the docs.
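For reference, the pubkey-transfer part of this procedure can be sketched with two throwaway GnuPG homedirs standing in for computers A and B (the user ID and file names are illustrative; the card-specific steps are shown only as comments, since they need real hardware):

```shell
# Two throwaway homedirs standing in for computer A and computer B.
A=$(mktemp -d); B=$(mktemp -d)
chmod 700 "$A" "$B"

# On A: generate a key pair (batch mode, no passphrase; illustration only).
gpg --homedir "$A" --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Test User <test@example.invalid>" default default never

# On A, with a real card present, the secret key would now be moved onto it:
#   gpg --homedir "$A" --edit-key test@example.invalid
#   gpg> keytocard

# On A: export the public key; this file (or a keyserver upload) is exactly
# what B needs before it can use the card.
gpg --homedir "$A" --export --armor test@example.invalid > pubkey.asc

# On B: import the public key ...
gpg --homedir "$B" --import pubkey.asc

# ... then, with the card plugged in, B would create the secret-key stubs:
#   gpg --homedir "$B" --card-status

# B now knows the public key.
gpg --homedir "$B" --list-keys
```

Without the `pubkey.asc` transfer (or a keyserver fetch), the `--card-status` step on B has nothing to attach the stubs to, which is the gap the report describes.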

But it contradicts itself.

If A is to be kept *really* secure, it must not have any network contact and
must not export any files from the point in time when the key is generated. It
therefore cannot upload anything to keyservers or export any pubkey files.

Now there are two problems (with some aspects of chicken-and-egg):

  1. B can never learn about the availability of the secret keys on the OpenPGP
card, since it accepts secret keys on cards only after having the pubkeys.

  2. There is no way to publish (or even have outside A) the pubkey/ID of the key.

Therefore, gnupg running on B should be able to

  • learn about secret keys even in the absence of the pubkey
  • generate the pubkey with a given ID (as specified at key creation)


At least I did not find anything about that in the docs.

The current design, which forces the computer that generates the keys to
contact keyservers or export files, is by design not really secure.


Event Timeline

If A is to be kept *really* secure, it must not have any network contact and not
export any files from the point of time where the key is generated.

I don't follow. You can connect your token to the computer, but for some reason
cannot connect a thumb drive to it? I don't see why exporting data from that
computer is problematic. If you are worried about compromised USB devices, you
should also be worried about the computer being manipulated in the first place,
or the OpenPGP token. Furthermore, you could use the computer's screen to export
any information.

You can connect your token to the computer, but for some reason cannot connect
a thumb drive to it?


That's the point.

A token is a security device from a (hopefully) known manufacturer with a
(hopefully) well-known API, where you can survey what data it carries out. You
need to use it (if you don't want to reveal your key as a file to insecure
machines), and it is no surprise that it will carry the secret key. That's the
whole point of it.

A thumb drive, on the other hand, is evil. You have a file system and lots of
hidden space on it, and you can't check what malware will hide on it or what
will be left on it simply by making mistakes or bad use of software (e.g. having
the CWD on the thumb drive while doing some crypto operations).

Furthermore, thumb drives are reprogrammable, sometimes quite easily. You can
teach regular thumb drives to behave like CDROMs, keyboards, or any other USB
device, and thumb drives are well known as an attack vector for bringing in
malware.

However, the major problem is not to connect the thumb drive to the secure
computer. It's not in general a bad idea to use a thumb drive as a backup
storage system for the secure computer.

It's a bad idea to connect it to any other machine after it. Once the thumb
drive has been connected to the secure machine, it should be considered as
contaminated with secrets and never be used outside the secure environment. So
your question somewhat missed the point.

However, the secure environment (secure computer and maybe thumb drives) should
be completely isolated (some people call it "air gapped"), and the only
connection to the outer world should be what's absolutely needed and well
defined. And that's the token (which, after all, is exactly designed for that
purpose).

How you convey data between an air-gapped box and the general desktop is out
of scope for GnuPG. This is OPSEC and you have to set up your own rules. Aside
from USB sticks, it is possible to burn stuff to a CDROM, use a floppy, an SD
card, a printer and a scanner, a camera and OCR; you name it in your security
policy.

Please direct your question to a mailing list. I can't see why this is a
feature request.

To me, your assumptions seem flawed. You somehow assume that you can get a
trustworthy computer A, but cannot get your hands on a trustworthy device to
transport data from A to B?

(Even if your assumptions hold, you can always take A apart, remove its data
storage device (which is trustworthy), and use it to carry the public key.)

This is because your idea of security is wrong in two different aspects.

First: You assume that you just need to call or declare a system
„trustworthy”, and then it would never again have any bug, failure, or any sort
of malfunction. "Secure" and "trustworthy" are not absolute properties of a
device; they are always relative to a given threat or attack vector, and these
security terms do not cover normal bugs or mistakes made by operators. Having
a „trustworthy” system does not mean that it would not write a secret key to a
storage device if you accidentally ask it to do so (e.g. because you still have
a CWD on the device when running any key-related software). So you need to
prevent bad operation or malware from leaking the secret keys out of the
device, and it won't help in any way to call the device „trustworthy”.

Second: You erroneously apply the term „trustworthy” to a storage device. Trust
belongs to the area of system security, while mobile storage devices (as stupid
read-write-devices) belong to communication security. In communication security
there is no trust. A regular (perfect) storage device is something where you can
write data, and read it later, no matter whether you trust it or not. You cannot
write a message on a storage device and hope that an attacker would not read it,
unless it is part of a system (which it is not if it is, as you intend, a mobile
device which is to be connected to another, insecure machine.)

Things get worse, since thumb drives are not perfect storage devices, but little
systems just pretending to be one. Putting a trusted and an untrusted system
together (i.e. putting a thumb drive into a computer) breaks the system security
of the first system.

Third: The current design is illogical and inconsistent. You use and create a
device (a crypto USB device like a YubiKey) which is intended to protect the
key while making it usable to the authorized user for cryptographic
operations, e.g. creating signatures.

Good. That's what the device and its API are designed for, and since there's
no better option available, that's currently the best way to do it.

But then, if there is a well-defined way, considered secure, to use the key
for signatures (which might include some internal logging to track signatures),
why should there be a second, different way to create signatures, outside the
device, just to sign its own public key record and ID (self-certify)?

Why isn't the normal way used for this special signature as well?

e.g. X.509/S/MIME/OpenSSL do it correctly: they first create a pub/sec key pair
and then use a regular signature operation to generate a CSR or self-cert.
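The X.509 flow referred to here can be sketched with stock OpenSSL commands (the subject name and file names are illustrative). The key pair is created first; the CSR and the self-signed certificate are then produced by ordinary signing operations with that key:

```shell
# Generate a key pair first (RSA for illustration).
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out key.pem

# A CSR is just a regular signature with the key over its own
# public key and identity (the self-certification step).
openssl req -new -key key.pem -subj "/CN=Test User" -out request.csr

# The same ordinary signing operation can produce a self-signed certificate.
openssl req -new -x509 -key key.pem -subj "/CN=Test User" -days 1 -out cert.pem

# Verify the CSR's self-signature.
openssl req -in request.csr -verify -noout
```

Note that no step here requires the key-generating machine to learn anything from the outside world; the signature over the public key and ID is done with the same signing mechanism used for everything else.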

If there is a well-defined and secured way to create signatures, it's bad
design to use another operation just to self-certify without good reason.

And inventing the need to communicate with the outer world via an imprecise
protocol (which it is if you exchange large storage devices), one which could
transmit just about anything, stops the device from being trustworthy.

It is wrong to think that you may exchange storage devices because the system
was trustworthy. It's the other way round: it's not trustworthy anymore once
you have allowed it to communicate after key creation.

Please take such discussions to the mailing lists. As soon as a resolution has
been found please update the status of this bug.

marcus claimed this task.
marcus added a subscriber: marcus.

As others have pointed out, we don't implement the Bell-LaPadula model.