Yeah, I saw the Debian bug report. Unfortunately there is no easy
solution to this except for rejecting the use of large secret keys.
The problem here is that the big number library needs to allocate from
a limited secure memory region (32 KiB by default) and terminates on
allocation failure. I know that this is sub-optimal, but we have been
doing this for 19 years now. Checking for an error after each
low-level big-number operation would make the code unreadable and
would introduce bugs. Ideas on what to do:
- On secure memory allocation failure, call the out-of-memory handler, which may then free other memory (or obtain new memory). This can be done in the application; see the first sketch after this list.
- On secure memory allocation failure, allocate a new block of secure memory and allocate from that one. There are two disadvantages: a) It is unlikely that the new block can be mlock'd. b) A free will be a little slower because it needs to check the list of secure memory blocks and not just one address range. The address range check is needed so that we can figure out whether the freed address is in the secmem range and needs to be zeroed out; the second sketch below illustrates this free path. This requires a new Libgcrypt version, though no ABI change.
- We have a ./configure option --enable-large-secmem which sets aside 64k instead of 32k for the secmem. This is currently only used in gpg to enable gpg's --enable-large-rsa option. Given that in 2.1 we use the gpg-agent for the secret key operations, we should have the same options in gpg-agent. It is only a kludge, but one we once agreed upon to silence some pretty vocal experts on key size. The third sketch below shows how an application can request a larger pool itself.
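
For the first idea, here is a minimal sketch of how an application
could register such a handler through the existing
gcry_set_outofcore_handler() interface. Note that today the secmem
allocator terminates without consulting this handler, so the sketch
assumes a changed Libgcrypt that also calls it for secure allocations:

  #include <stdio.h>
  #include <gcrypt.h>

  /* Called by Libgcrypt when an allocation fails.  Returning nonzero
     means "I released some memory, please retry the allocation".  */
  static int
  no_mem_handler (void *opaque, size_t req_n, unsigned int flags)
  {
    (void)opaque;
    fprintf (stderr, "allocation of %zu bytes failed (flags=%u)\n",
             req_n, flags);
    /* An application could drop caches here and return 1 to request
       a retry.  This example has nothing to release, so give up.  */
    return 0;
  }

  int
  main (void)
  {
    if (!gcry_check_version (GCRYPT_VERSION))
      return 1;
    gcry_set_outofcore_handler (no_mem_handler, NULL);
    gcry_control (GCRYCTL_INIT_SECMEM, 32768, 0);
    gcry_control (GCRYCTL_INITIALIZATION_FINISHED, 0);
    /* ... normal use of Libgcrypt ... */
    return 0;
  }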
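
For the second idea, a rough sketch of the slower free path once there
can be more than one pool. This is not Libgcrypt's actual secmem code;
all names here are made up for illustration:

  #include <stddef.h>
  #include <string.h>

  /* Hypothetical descriptor for one secure memory block.  */
  struct pool
  {
    struct pool *next;
    unsigned char *mem;        /* Start of the block.  */
    size_t size;               /* Length of the block.  */
  };

  static struct pool *pools;   /* List of all secure memory blocks.  */

  /* Return the pool containing P, or NULL if P is not in secure
     memory.  With a single pool this is one range check; with a
     list, every free has to walk the list.  */
  static struct pool *
  find_pool (const void *p)
  {
    struct pool *pl;

    for (pl = pools; pl; pl = pl->next)
      if ((const unsigned char *)p >= pl->mem
          && (const unsigned char *)p < pl->mem + pl->size)
        return pl;
    return NULL;
  }

  /* Free N bytes at P: if the address lies in a secure pool, wipe it
     before handing it back to that pool's allocator.  Real code would
     use a wipe routine the compiler cannot optimize away.  */
  static void
  secure_free (void *p, size_t n)
  {
    struct pool *pl = find_pool (p);

    if (pl)
      memset (p, 0, n);
    /* ... return P to the matching allocator, or to the normal
       free() if it was not in any secure pool ... */
  }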
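
For the third idea, note that an application linking against Libgcrypt
can already request a larger pool at initialization time with
GCRYCTL_INIT_SECMEM, which is essentially what that configure option
tunes for gpg. A minimal sketch:

  #include <gcrypt.h>

  int
  main (void)
  {
    if (!gcry_check_version (GCRYPT_VERSION))
      return 1;
    /* Ask for a 64 KiB secure memory pool instead of the 32 KiB
       default.  This must happen during initialization, before the
       first secure allocation.  */
    gcry_control (GCRYCTL_INIT_SECMEM, 65536, 0);
    gcry_control (GCRYCTL_INITIALIZATION_FINISHED, 0);
    /* ... */
    return 0;
  }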