The SHA3 functions give wrong results for inputs larger than 4 GB because the argument, originally declared as size_t, is handled as unsigned int in keccak_write, which leads to integer overflows. This does not happen if the data is fed into md_write in smaller chunks. More information and reproducers are available from Clemens in the attached bug.
The fix that should solve the problem (use of size_t) is now available on GitLab: https://gitlab.com/redhat-crypto/libgcrypt/libgcrypt-mirror/-/merge_requests/6 Comments are welcome.
I considered updating some of the hash tests to capture this issue, but have not found a simple way to do that yet, so I will leave it to you to decide whether a regression test is needed here.