
sha3: wrong results for large inputs
Closed, Resolved (Public)

Description

The SHA3 functions give wrong results for inputs larger than 4GB, because the length argument, originally a size_t, is handled as an unsigned int in keccak_write, which leads to integer overflow. This does not happen if the data is fed into md_write in smaller chunks. More information and reproducers are available from Clemens in the attached bug.
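To illustrate the failure mode (with hypothetical function names, not libgcrypt's actual internals): on typical 64-bit platforms unsigned int is 32 bits, so a size_t length of 4GiB or more is silently reduced modulo 2^32 when it passes through an unsigned int parameter, and the hash is then computed over the wrong number of bytes.

```c
#include <stdio.h>
#include <stddef.h>

/* Buggy pattern: the length parameter is declared unsigned int, so a
 * size_t value of 4GiB or more is silently truncated at the call site. */
static void hash_write_buggy(const void *inbuf, unsigned int inlen)
{
    (void)inbuf;
    printf("buggy path sees %u bytes\n", inlen);
}

/* Fixed pattern: the length travels as size_t all the way down. */
static void hash_write_fixed(const void *inbuf, size_t inlen)
{
    (void)inbuf;
    printf("fixed path sees %zu bytes\n", inlen);
}

int main(void)
{
    size_t five_gib = 5ULL * 1024 * 1024 * 1024;

    hash_write_buggy(NULL, five_gib);  /* prints 1073741824 (5GiB mod 2^32) */
    hash_write_fixed(NULL, five_gib);  /* prints 5368709120 */
    return 0;
}
```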

The fix that should solve the problem (using size_t throughout) is now available on GitLab: https://gitlab.com/redhat-crypto/libgcrypt/libgcrypt-mirror/-/merge_requests/6 Comments welcome.

I was considering updating some of the hash tests to capture this issue, but have not found a simple way to do that yet, so I will leave it to you to decide whether a regression test is needed here.


Event Timeline

Fix looks good to me. This could be tested with a new long-running test (tests/hashtest) that would allocate a 4GiB+ pattern block to feed into gcry_md_write.
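A minimal sketch of what such a test could look like, built on the public gcry_md API (this is not the actual tests/hashtest patch; error checking is omitted and the buffer size is an assumption):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <gcrypt.h>

int main(void)
{
    const size_t len = 5ULL * 1024 * 1024 * 1024;   /* 5GiB pattern block */
    const size_t chunk = 64 * 1024;                 /* divides len evenly */
    unsigned char *buf;
    gcry_md_hd_t one, split;
    unsigned int dlen;
    size_t off;

    gcry_check_version(NULL);          /* initialize libgcrypt */
    dlen = gcry_md_get_algo_dlen(GCRY_MD_SHA3_256);

    buf = malloc(len);
    if (!buf)
        return 77;                     /* skip if 5GiB cannot be allocated */
    memset(buf, 0xa5, len);            /* simple repeating pattern */

    /* One gcry_md_write call with a >4GiB length: the path that broke. */
    gcry_md_open(&one, GCRY_MD_SHA3_256, 0);
    gcry_md_write(one, buf, len);

    /* Same data in small chunks: the path known to work. */
    gcry_md_open(&split, GCRY_MD_SHA3_256, 0);
    for (off = 0; off < len; off += chunk)
        gcry_md_write(split, buf + off, chunk);

    if (memcmp(gcry_md_read(one, 0), gcry_md_read(split, 0), dlen))
        fprintf(stderr, "FAIL: one-shot and chunked digests differ\n");

    gcry_md_close(one);
    gcry_md_close(split);
    free(buf);
    return 0;
}
```

Comparing the one-shot digest against a chunked digest of the same buffer makes the test self-checking without hard-coding a reference value, at the cost of needing more than 5GiB of RAM.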

Here's a patch that adds a 6GiB test to hashtest (with a 5GiB pattern block):

The test looks good. I hope I changed the API in all the hardware-optimized implementations.

My poor old laptop's RAM will now have a hard time running the huge tests ;-)

werner triaged this task as Normal priority. Sep 26 2022, 7:36 PM

I've tested the different hw implementations (amd64, arm64, s390x) and they are all ok.

One nit that I initially overlooked is a memory leak, which is fixed with the following patch:

Patch applied to master, thanks.

One more nit regarding the test: the format string for a size_t value was using %d instead of %zu. This is fixed by the attached patch:
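For reference, %zu is the C99 printf conversion specifier for size_t, while %d expects an int and gives undefined behavior for a size_t argument (the variable name below is made up):

```c
#include <stdio.h>

int main(void)
{
    size_t nbytes = 5ULL * 1024 * 1024 * 1024;  /* wider than an int */

    /* Wrong: printf("hashed %d bytes\n", nbytes);
     * %d expects an int, so this is undefined behavior and typically
     * prints a truncated value on 64-bit platforms. */

    /* Correct: */
    printf("hashed %zu bytes\n", nbytes);
    return 0;
}
```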

gniibe moved this task from Next to Ready for release on the FIPS board.
gniibe changed the task status from Open to Testing. Nov 7 2022, 7:14 AM
werner claimed this task.