
MDC failures should always trigger fatal error
Closed, Resolved, Public

Description

Missing or broken MDC currently does not cause a fatal error to be thrown if some obsolete ciphers are in use (such as CAST5, 3DES). The rationale for this was to support legacy systems. But this has encouraged mail clients to incorrectly treat missing MDC as non-fatal (gnupg says it succeeded, so it must be OK!).

To guard against such practice in the future, we should fail hard on all MDC errors by default, regardless of ciphersuite.

This mitigates CVE-2017-17688.
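A caller that wants this hard-fail behavior even with older gpg versions can refuse to use the plaintext unless gpg's machine-readable status output confirms success. A minimal sketch in Python - the status keywords come from gpg's doc/DETAILS; the helper name and the policy of ignoring the exit code are our own illustration:

```python
def decryption_succeeded(status_output: str) -> bool:
    """Return True only if gpg's --status-fd output confirms success.

    gpg emits machine-readable lines of the form "[GNUPG:] KEYWORD ...".
    We require an explicit DECRYPTION_OKAY and the absence of
    DECRYPTION_FAILED, instead of trusting the exit code alone.
    """
    keywords = {
        line.split()[1]
        for line in status_output.splitlines()
        if line.startswith("[GNUPG:] ") and len(line.split()) > 1
    }
    return "DECRYPTION_OKAY" in keywords and "DECRYPTION_FAILED" not in keywords


# A broken or missing MDC yields DECRYPTION_FAILED even when plaintext
# was written out, so the plaintext must be discarded:
sample = "[GNUPG:] PLAINTEXT 62 0\n[GNUPG:] DECRYPTION_FAILED"
assert not decryption_succeeded(sample)
```

This is exactly the check that Enigmail's exit-code-only handling (described below in the timeline) got wrong.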

Event Timeline

Done in master with rGd1431901f014; we are discussing on Jabber whether we can risk doing that in 2.2 too. It might be that an option other than --ignore-mdc-error would be better for 2.2, but that would then differ from master.

Actually this is not related to the mentioned CVE because the issue we are talking about has not been tested by them.


Is it not? Admittedly the paper is unclear exactly which particular errors have been tested in which particular combinations...

You mean because they mentioned 64 bit block ciphers? From the original mail exchange in November about "we have broken the MDC" - which we disproved, and they confirmed that it is an Enigmail or Thunderbird problem:

XXXXX found that for a non-integrity protected message and GnuPG
2.2.3, he gets a warning:
"gpg: WARNING: message was not integrity protected"

With -vv he gets:
  "gpg: decryption forced to fail"

Error code is 2 and Enigmail still displays the decrypted message, see
attachment.

These diagnostics are only used with a 128 bit blocksize cipher. Their screenshot also shows that, in contrast to their claim, Thunderbird blocked the external content.


werner changed the task status from Open to Testing. May 17 2018, 9:29 AM

The path I now took is to keep 2.2 as is but change GPGME to trigger a decryption failure if no MDC is used. This is under the assumption that old scripts using gpg 2.2 or gpg 2.0 do not use GPGME.
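The GPGME-side check can be approximated from gpg's status output: the DECRYPTION_INFO line carries the MDC method (0 meaning no MDC was present, per doc/DETAILS), so a wrapper can turn "no MDC" into a hard failure even when the underlying gpg 2.0/2.2 merely warned. A hypothetical sketch under that assumption:

```python
def mdc_was_used(status_output: str) -> bool:
    """Check the DECRYPTION_INFO status line.

    Its format is "[GNUPG:] DECRYPTION_INFO <mdc_method> <sym_algo> ...",
    where an mdc_method of 0 means the message carried no MDC packet.
    """
    for line in status_output.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[:2] == ["[GNUPG:]", "DECRYPTION_INFO"]:
            return int(parts[2]) != 0
    return False  # no DECRYPTION_INFO at all: assume unprotected


# Old gpg versions report overall success for a CAST5 message without
# an MDC; the wrapper still rejects it:
assert not mdc_was_used("[GNUPG:] DECRYPTION_INFO 0 3\n[GNUPG:] DECRYPTION_OKAY")
```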

Note that this also protects the use of gpg 2.0 with GPGME, even though GnuPG 2.0 has already reached EOL.

If the DECRYPTION_INFO status is backported to 1.4, then 1.4 will also be protected when used with GPGME - but I am not sure whether this makes sense.

Since 1.4 has been previously cited as the thing to use when accessing data encrypted with v2 keys and the like, it's hard to argue in favour of backporting a fix for an issue which will explicitly override the one major use case (maybe one of two if we count headless systems still) for keeping 1.4 in play. If you were going to fix it and potentially kill the use of it for accessing old archived data, then why not just skip the backport and EOL the branch? Less work, same result.

Personally I can understand the legitimate and possibly legally mandated need some organisations may have with regard to historical records; so I'd say don't EOL it and don't backport the fix.

And now, I believe it is time to unearth my first key and create a stupid file to test this patch in the Python bindings ....

It works (or rather, fails to decrypt) as expected, though an update to the HOWTO and examples is also needed - not a major change.

werner claimed this task.

In addition GnuPG master and 2.2.8 now always create MDC messages (except with option --rfc2440) and always fail for messages without an MDC. For old algorithms a hint is printed:

gpg: WARNING: message was not integrity protected
gpg: Hint: If this message was created before the year 2003 it is
     likely that this message is legitimate.  This is because back
     then integrity protection was not widely used.
gpg: Use the option '--ignore-mdc-error' to decrypt anyway.
gpg: decryption forced to fail!

as well as a dedicated status line for tools that want to inform the user.
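A front end could translate such status output into user-facing text, including the special case of a missing MDC, which may be legitimate for pre-2003 messages. A hypothetical sketch - the wording is ours, and the keywords come from gpg's doc/DETAILS:

```python
def explain_status(status_output: str) -> str:
    """Turn gpg --status-fd output into a short user-facing verdict."""
    keywords = set()
    mdc_method = None
    for line in status_output.splitlines():
        parts = line.split()
        if len(parts) < 2 or parts[0] != "[GNUPG:]":
            continue
        keywords.add(parts[1])
        # "[GNUPG:] DECRYPTION_INFO <mdc_method> <sym_algo> ...";
        # an mdc_method of 0 means no MDC packet was present.
        if parts[1] == "DECRYPTION_INFO" and len(parts) >= 3:
            mdc_method = int(parts[2])
    if "DECRYPTION_FAILED" in keywords:
        if mdc_method == 0:
            return ("Message has no integrity protection (MDC); "
                    "its contents were suppressed.")
        return "Decryption failed; do not trust any displayed content."
    if "DECRYPTION_OKAY" in keywords:
        return "Message decrypted and integrity-checked."
    return "No decryption result reported."
```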