
After tampering, a file still decrypts and returns incorrect plaintext, rather than giving an error
Closed, Invalid (Public)

Description

  1. Encrypt a file with < somefile gpg -c > tmp.gpg
  2. Modify the file in some way. An attacker would preserve the headers while modifying the ciphertext.
  3. Decrypt the file with < tmp.gpg gpg -d > plaintext
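A concrete way to perform step 2 ("modify the file in some way") is to flip a single byte in the middle of the file, leaving the leading bytes (the packet headers) untouched. A minimal sketch, using a throwaway stand-in file instead of a real tmp.gpg:

```shell
# Stand-in ciphertext; a real attacker would target the body of tmp.gpg
# while preserving the OpenPGP packet headers at the start of the file.
printf 'AAAAAAAAAAAAAAAA' > tmp.gpg
# Overwrite the byte at offset 8 without truncating the file.
printf 'B' | dd of=tmp.gpg bs=1 seek=8 conv=notrunc 2>/dev/null
cat tmp.gpg    # AAAAAAAABAAAAAAA
```

The offsets and file contents here are illustrative only; any single-byte change past the headers is enough to trigger the behaviour described.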

Expected result: GPG detects the error and fails with a decryption error.

Actual result: GPG detects the error, returns the wrong plaintext, and exits with status 2 and a warning on stderr.

It should be possible to force GPG to bypass this check, because bit rot does occur, but that must never happen accidentally, so it should certainly not be the default. Currently, a script or application using GPG must manually check that decryption succeeded and that the plaintext, which GPG happily hands over, is valid at all. We all know how often such manual checks are forgotten. GPG should fail with an error message instead. The only way around it should be one or more very obvious parameters which cannot be abbreviated to hide the fact, such as having to pass --insecure-mode --disable-MDC-check.
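Until the default changes, the manual check described above falls on the caller. A minimal sketch of that defensive pattern, with a stand-in decrypt function in place of a real "gpg -d" invocation (which exits with status 2 on a tampered message):

```shell
# Stand-in for "gpg -d": emits (possibly bogus) plaintext but exits
# non-zero, as gpg does after an integrity failure.
decrypt() { printf 'tampered plaintext\n'; return 2; }

if plaintext=$(decrypt); then
  printf 'using plaintext: %s\n' "$plaintext"
else
  echo "decryption failed with status $?; discarding output" >&2
fi
```

The point of the report is precisely that this if/else is easy to omit, and that omitting it silently yields attacker-controlled plaintext.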

Details

Version
2.2.5

Event Timeline

werner added a subscriber: werner.

Sorry, gpg is real software and not some memory hog. Real software runs under Unix and complies with the Unix rules, one of which is to allow use in a pipeline. All standard Unix tools have this property, and you need to check the error code ("set -e" in the simplest case). It is no different from gzip, tar, curl, rsync, ...
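One wrinkle with checking the error code in a pipeline: by default the pipeline's exit status is that of its last command, so a failing decryption stage in the middle is silently swallowed. "set -o pipefail" (bash/zsh) closes that gap. A sketch with a stand-in for the failing "gpg -d" stage:

```shell
set -o pipefail                    # fail the pipeline if any stage fails
fake_gpg() { printf 'data\n'; return 2; }   # stand-in for gpg -d on a tampered file

if fake_gpg | wc -c > /dev/null; then
  echo "pipeline succeeded"
else
  echo "pipeline failed with status $?"
fi
```

Without pipefail, the same pipeline would report success because wc exits 0.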

The primary function of those other tools is not securely encrypting data. If the message is too large to keep in memory at once, then there is indeed no choice but to process it as a stream, but users should be aware of this. Perhaps a flag could be added, along the lines of --stream-without-verification? The man page could explain: "GPG computes an MDC over the whole message, so it can only check at the end whether the message was tampered with. This flag can be used to stream the output, so that the entire message does not have to be kept in memory. You must check the exit status to verify that decryption was successful and that the message was not tampered with, because with this flag, the data returned by GPG may be incorrect or even malicious. If the exit status is zero, then the MDC is correct and the message was not tampered with."

The trouble with that solution is that we would have to check how much physical RAM is available, and in the case of a stream we cannot tell the size in advance (so we would have to kill it mid-stream), so it is indeed not trivial to do. If you don't think this is the correct behaviour for an encryption tool, then GPG's behaviour should at least be documented in the man page. Otherwise, even those who read the documentation won't know how to use it correctly.

Currently, exit statuses are not documented anywhere that I can find. A quick search shows that people are reading the source code to find out what an exit status means, and that is only those who even know that exit statuses exist (shells typically don't show them unless explicitly configured to, so you don't normally encounter them).

The set of information returned by gpg is too large to be mapped onto an exit code. Thus we have status codes and the gpgv tool.
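Those status codes are the machine-readable lines gpg writes to the file descriptor given with --status-fd; a caller should key off tokens such as DECRYPTION_OKAY and DECRYPTION_FAILED rather than parse stderr. A sketch in which the status file is fabricated for illustration (the gpg invocation in the comment shows the real usage):

```shell
# Real usage would be along the lines of:
#   gpg --status-fd 3 -d tmp.gpg 3>status.txt > plaintext
# Simulated status lines for a run on a tampered message:
cat > status.txt <<'EOF'
[GNUPG:] BEGIN_DECRYPTION
[GNUPG:] DECRYPTION_FAILED
[GNUPG:] END_DECRYPTION
EOF

if grep -q '^\[GNUPG:\] DECRYPTION_OKAY' status.txt; then
  echo "plaintext verified"
else
  echo "tampered or failed; do not use the output"
fi
```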

There won't be any arbitrary buffering just to avoid having to check the relevant information. It does not really help and may indeed be counterproductive. What I already proposed is to invalidate the data output buffer in gpgme as an additional failstop mechanism. This will work only for file and memory buffers, but that should be okay for your use case. If you are not using gpgme you need to implement it yourself.