This is because your idea of security is wrong in three different aspects.
First: you assume that you just need to call or declare a system "trustworthy",
and then it will never again have any bug, failure, or other sort of
malfunction. "Secure" and "trustworthy" are not absolute properties of a
device; they are always relative to a given threat or attack vector, and these
security terms do not cover ordinary bugs or mistakes made by operators. Having
a "trustworthy" system does not mean that it will not write a secret key to a
storage device if you accidentally ask it to do so (e.g. because your current
working directory is still on that device when running any key-related
software). So you need to prevent bad operation or malware from leaking the
secret keys out of the device, and it won't help in any way to call the device
"trustworthy".
Second: you erroneously apply the term "trustworthy" to a storage device. Trust
belongs to the area of system security, while mobile storage devices (as dumb
read-write devices) belong to communication security. In communication security
there is no trust. A regular (perfect) storage device is something you can
write data to and read back later, no matter whether you trust it or not. You
cannot write a message to a storage device and hope that an attacker will not
read it, unless the device is part of a system (which it is not if it is, as
you intend, a mobile device that will be connected to another, insecure
machine).
Things get worse, because thumb drives are not perfect storage devices, but
little systems merely pretending to be one. Connecting a trusted and an
untrusted system (i.e. putting a thumb drive into a computer) breaks the system
security of the first system.
Third: the current design is illogical and inconsistent. You use and create a
device (a crypto USB device like a YubiKey) which is intended to protect the
key while making it usable to the authorized user for cryptographic operations,
e.g. creating signatures.
Good. That's what the device and its API are designed for, and since there's
currently no better option available, that's the best way to do it.
But then, if there is a well-defined way, considered secure, to use the key for
signatures (which might include some internal logging to track signatures), why
should there be a second, different way to create signatures, outside the
device, just to sign its own public key record and ID (self-certify)?
Why isn't the normal way used for this special signature as well?
E.g. X.509/S/MIME/OpenSSL do it correctly: they first create a public/secret
key pair and then use a regular signature operation to generate a CSR or
self-signed certificate.
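As a rough sketch of that flow with the openssl command line (the filenames and
the subject below are placeholders, and in your scenario the private key would
of course have to stay inside the hardware device instead of in a file):

  # generate a key pair; here the private key lands in a file, purely for illustration
  openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out key.pem
  # use an ordinary signature operation with that key to produce a CSR ...
  openssl req -new -key key.pem -subj "/CN=Example" -out request.csr
  # ... or a self-signed certificate, i.e. the self-certification step
  openssl req -x509 -new -key key.pem -subj "/CN=Example" -days 365 -out selfcert.pem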
If there is a well-defined and secure way to create signatures, it's bad design
to use a different operation just to self-certify, without good reason.
And inventing a need to communicate with the outside world over an imprecise
protocol (which is what exchanging large storage devices amounts to), one that
could transmit just about anything, stops the device from being trustworthy.
It is wrong to think that you can exchange storage devices because the system
is trustworthy. It's the other way round: it is no longer trustworthy once you
allow it to communicate after key creation.