agowa338

PHP error with Nextcloud big file uploads

Hi,

I've had some issues with the official Nextcloud (Apache) container. When uploading some bigger files (I assume they're bigger; I don't know the size of each individual one, but the client shows 156 GB in 13 files) with the desktop sync client, the upload keeps failing and retrying indefinitely.
Does anyone know what I could try to debug this issue further?
I suspect an issue with either the apache2 configuration or HAProxy regarding chunking or caching.

apache2 log output:
app_6    | [Sun Oct 31 17:36:05.468312 2021] [php:error] [pid 51] [client 2001:DB8::2:44500] PHP Fatal error:  Uncaught TypeError: hash_final(): Argument #1 ($context) must be a valid Hash Context resource in /var/www/html/lib/private/Files/Stream/HashWrapper.php:70\nStack trace:\n#0 /var/www/html/lib/private/Files/Stream/HashWrapper.php(70): hash_final(Object(HashContext))\n#1 [internal function]: OC\\Files\\Stream\\HashWrapper->stream_close()\n#2 /var/www/html/3rdparty/icewind/streams/src/Wrapper.php(96): fclose(Resource id #31)\n#3 /var/www/html/3rdparty/icewind/streams/src/CallbackWrapper.php(117): Icewind\\Streams\\Wrapper->stream_close()\n#4 [internal function]: Icewind\\Streams\\CallbackWrapper->stream_close()\n#5 /var/www/html/3rdparty/icewind/streams/src/Wrapper.php(96): fclose(Resource id #34)\n#6 /var/www/html/3rdparty/icewind/streams/src/CountWrapper.php(99): Icewind\\Streams\\Wrapper->stream_close()\n#7 [internal function]: Icewind\\Streams\\CountWrapper->stream_close()\n#8 /var/www/html/3rdparty/icewind/streams/src/Wrapper.php(96): fclose(Resource id #38)\n#9 /var/www/html/3rdparty/icewind/streams/src/CallbackWrapper.php(117): Icewind\\Streams\\Wrapper->stream_close()\n#10 [internal function]: Icewind\\Streams\\CallbackWrapper->stream_close()\n#11 /var/www/html/3rdparty/guzzlehttp/psr7/src/Stream.php(108): fclose(Resource id #41)\n#12 /var/www/html/3rdparty/guzzlehttp/psr7/src/Stream.php(74): GuzzleHttp\\Psr7\\Stream->close()\n#13 [internal function]: GuzzleHttp\\Psr7\\Stream->__destruct()\n#14 {main}\n  thrown in /var/www/html/lib/private/Files/Stream/HashWrapper.php on line 70
app_6    | [Sun Oct 31 17:36:06.005881 2021] [php:error] [pid 51] [client 2001:DB8::2:44500] PHP Fatal error:  Uncaught TypeError: hash_final(): Argument #1 ($context) must be a valid Hash Context resource in /var/www/html/lib/private/Files/Stream/HashWrapper.php:70\nStack trace:\n#0 /var/www/html/lib/private/Files/Stream/HashWrapper.php(70): hash_final(Object(HashContext))\n#1 [internal function]: OC\\Files\\Stream\\HashWrapper->stream_close()\n#2 {main}\n  thrown in /var/www/html/lib/private/Files/Stream/HashWrapper.php on line 70
app_6    | 2001:DB8::2 - user [31/Oct/2021:17:26:12 +0000] "MOVE /remote.php/dav/uploads/user/3337383129/.file HTTP/1.1" 500 858 "-" "Mozilla/5.0 (Windows) mirall/3.3.5stable-Win64 (build 20210930) (Nextcloud, windows-10.0.22483 ClientArchitecture: x86_64 OsArchitecture: x86_64)"  
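
For context on the TypeError itself: on PHP 8, hash_final() invalidates its HashContext, so finalizing the same context a second time produces exactly this message. A minimal sketch of that behaviour (assuming something closes the hash wrapper stream twice):

<?php
// Sketch only: reproduces the TypeError from the log above by
// finalizing the same hash context twice (e.g. a double stream_close()).
$ctx = hash_init('md5');
hash_update($ctx, 'some chunk data');

echo hash_final($ctx), PHP_EOL; // first call succeeds and invalidates $ctx

// A second call on the already-finalized context throws on PHP 8:
// "hash_final(): Argument #1 ($context) must be a valid Hash Context resource"
hash_final($ctx);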


Member: BirdyB
BirdyB 01.11.2021 at 07:15:14
Hi,

did you check the maximum upload size in your PHP config?
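
For example, a quick way to dump the relevant limits with the PHP CLI inside the app container (just a sketch; these are the keys that usually matter for large uploads):

<?php
// Print the PHP settings that typically break large uploads.
foreach (['upload_max_filesize', 'post_max_size', 'memory_limit', 'max_execution_time'] as $key) {
    printf("%s = %s\n", $key, ini_get($key));
}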

Kind regards
Member: agowa338
agowa338 01.11.2021, updated on 02.11.2021 at 09:39:30
Thanks, I completely overlooked that because the client transferred the file completely. I just assumed that an upload limit in PHP would abort the upload rather than receive the whole file and then drop it afterward...

upload_max_filesize was set to 512M via an environment variable.

Therefore I have now simply set these environment variables:
PHP_UPLOAD_LIMIT=1024G
PHP_MEMORY_LIMIT=4G
(because the limit does not appear to prevent anything anyway, as the whole file was received regardless)...
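
For anyone else reading along: with the official image these two variables are documented to set memory_limit and upload_max_filesize/post_max_size, so in a docker-compose file the change looks roughly like this (service name and image tag are just examples from my setup):

services:
  app:
    image: nextcloud:apache
    environment:
      # PHP_UPLOAD_LIMIT sets upload_max_filesize and post_max_size,
      # PHP_MEMORY_LIMIT sets memory_limit at container start.
      - PHP_UPLOAD_LIMIT=1024G
      - PHP_MEMORY_LIMIT=4G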

Thanks for your help :-)

Edit: Nope, it wasn't that. It now also works with the default limit of 512M (localhost deployment, though)?!? I think I have to do some more testing :-(

Edit2: Another few hours later, I have now found the problematic module: it's the S3 module.
As soon as I switch to the S3 bucket instead of local storage, the errors occur.

Edit3: I noticed why I initially thought the problem was solved. If you don't do anything and just let the client "do its thing", it incorrectly shows that everything synced successfully until you hit "Force sync now" or check the status icon of the failing file in its folder...

Edit4: I think we're done here. I've opened an issue upstream. It looks like Nextcloud isn't handling the API responses from the S3 bucket properly. It can be reproduced (even though slightly differently) with another S3 bucket (LocalStack).
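
For reference, the reproduction just switches primary storage to the S3 bucket in config.php; against LocalStack that looks roughly like this (a sketch with placeholder credentials; bucket name, hostname, and port are from my local test setup):

// Excerpt from config/config.php ($CONFIG array): use an S3 bucket
// (here: LocalStack) as primary storage instead of the local data directory.
'objectstore' => [
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' => [
        'bucket'         => 'nextcloud',   // assumed bucket name
        'autocreate'     => true,
        'key'            => 'test',        // placeholder credentials
        'secret'         => 'test',
        'hostname'       => 'localstack',  // LocalStack container/hostname
        'port'           => 4566,          // LocalStack edge port
        'use_ssl'        => false,
        'use_path_style' => true,          // needed for non-AWS endpoints
        'region'         => 'us-east-1',
    ],
],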