I had a problem with the sshfs connection dying all the time. I have tracked the problem down to the check

  if (newlen > BUFFER_MAX_LEN)
          fatal("buffer_append_space: alloc %u not supported", newlen);

in buffer.c:buffer_append_space().

The problem is that when sending file data, sshfs just keeps sending without waiting for the server to catch up. This need not be a problem in itself: the sftp server should simply stop receiving TCP packets when its buffers are full, causing TCP retransmits and automatically slowing down the sender. However, the openssh sftp server loop (openssh:sftp-server.c) just keeps trying to read from stdin. It reads from stdin into its input buffer at most once per loop iteration, and dispatches at most one sftp packet per iteration. But since each read from stdin can pull up to 8 sftp packets into the input buffer, the input buffer risks running out of space. When it runs out of space, the server simply kills itself (openssh:buffer.c:buffer_append_space). I note that the openssh sftp client has a mechanism to limit the number of unanswered requests, which presumably means that, unlike sshfs, it is not affected.

I found that a reliable way to trigger this bug was to start a parallel process that hogs the disk, slowing down the consuming sftp server loop:

  sshfs-mount localhost:/tmp test
  ionice -c1 cp large_file /tmp/deleteme
  dd if=/dev/zero of=test/deleteme2

The error I get is

  dd: writing to `test/deleteme2': Input/output error
  dd: closing output file `test/deleteme2': Transport endpoint is not connected

It is somewhat a matter of opinion, but I would say that it is openssh, not sshfs, that should be fixed. If I look at http://www.openssh.org/txt/draft-ietf-secsh-filexfer-02.txt (which, however, does not seem to be the protocol version actually implemented by the openssh sftp server, AFAICT), it says:

There is no limit on the number of outstanding (non-acknowledged) requests that the client may send to the server.
In practice this is limited by the buffering available on the data stream and the queuing performed by the server. If the server's queues are full, it should not read any more data from the stream, and flow control will prevent the client from sending more requests. Note, however, that while there is no restriction on the protocol level, the client's API may provide a limit in order to prevent infinite queuing of outgoing requests at the client.

Two versions of a patch to fix this are attached, one for 4.3p2 and one for 4.5p1. The patch stops reading new data from stdin while the input buffer is full. As a bonus, I also handled the case where the output buffer is overwhelmed.
Created attachment 1238 [details] A patch for sftp buffer handling applies to 4.5p1
Created attachment 1239 [details] A patch for sftp buffer handling applies to 4.3p2
A note: if you apply my 4.5p1 patch you should probably swap the two conditions

  if (buffer_compact(buffer))
          goto restart;
  if (roundup(buffer->alloc + len, BUFFER_ALLOCSZ) <= BUFFER_MAX_LEN)
          return (1);

in buffer_check_alloc(), or you risk memmove()'ing for every incoming packet.
Created attachment 1241 [details] A patch for sftp buffer handling applies to 4.5p1 There was a problem in the old patch. Fixed.
Actually, I don't think the part of the patch dealing with the output buffer is sufficient, since (I think) a single input packet can produce multiple output packets. I can make a patch if you want it, but I would like somebody to acknowledge this bug first. The part of the patch dealing with the input buffer is still good enough, and very much needed...
Yes, I think we need to look at this. In answer to the question about which draft it implements: it's filexfer-03; we should update the link.
Actually, on a closer reading of the code, it only sends one response packet per input packet. So my patch is good.
Created attachment 1282 [details] Revised diff I don't think the middle hunk of the patch is required: the fd will never be set in the select() results if it is never asked for.
Patch applied - thanks!
Close resolved bugs after release.