Ticket #68 (new defect) — at Initial Version

Opened 14 years ago

Last modified 6 years ago

savannah: Aborting a FISH file transfer still causes the FISH layer to consume the whole file

Reported by: slavazanko Owned by:
Priority: major Milestone: Future Releases
Component: mc-vfs Version: master
Keywords: Cc: god12@…
Blocked By: Blocking:
Branch state: on hold Votes for changeset:


Original: http://savannah.gnu.org/bugs/?19721

Submitted by: Pavel Tsekov <ptsekov>    Submitted on: Fri 27 Apr 2007 08:40:45 AM UTC
Category: VFS    Severity: 3 - Normal
Assigned to: None    Open/Closed: Open
Release: All versions    Operating System: All


Mon 07 May 2007 04:18:32 PM UTC, comment #5:

Huh? You actually used ssh for that? I guess that's a fine optimization,
but for the general case the chunking should be homegrown (based on
dd and printf/read, I guess).
	Oswald Buddenhagen <ossi>
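The homegrown dd-based chunking suggested in the comment above could look roughly like the sketch below. This is only an illustration, not code from mc or from the fish protocol; the function name and the abort-flag file are hypothetical.

```shell
# send_chunked FILE ABORT_FLAG -- hypothetical sketch of dd-based
# chunking: push the file through the channel in fixed-size pieces,
# checking for an abort request between chunks instead of only after
# the whole file has been sent.
send_chunked() {
    file=$1
    abort_flag=$2
    chunk=8192
    size=$(( $(wc -c < "$file") ))
    nchunks=$(( (size + chunk - 1) / chunk ))
    i=0
    while [ "$i" -lt "$nchunks" ]; do
        # Honour a pending abort between chunks, so at most one
        # chunk of wasted data crosses the channel.
        [ -e "$abort_flag" ] && return 1
        dd if="$file" bs="$chunk" skip="$i" count=1 2>/dev/null
        i=$((i + 1))
    done
}
```

As the later comments note, spawning dd once per chunk adds CPU overhead, which is the main cost of this approach.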
Mon 07 May 2007 01:06:26 PM UTC, comment #4:

I've tested the ssh ability to tunnel multiple sessions over the same
connection and it works nicely. There is one problem though: it is
supported only with SSH protocol v2.
	Pavel Tsekov <ptsekov>
Project Administrator
Wed 02 May 2007 01:20:28 PM UTC, comment #3:

Unfortunately KDE's fish implementation (as found in the kioslave/fish
directory) is not an improvement in this particular case. At least I
do not see any code which deals with aborting a file transfer
gracefully. Most likely the connection is just killed.
	Pavel Tsekov <ptsekov>
Project Administrator
Fri 27 Apr 2007 01:38:47 PM UTC, comment #2:

Sounds interesting - I'll take a look at it. My Perl is pretty bad,
though - I hope the code is not too complicated.
	Pavel Tsekov <ptsekov>
Project Administrator
Fri 27 Apr 2007 12:20:17 PM UTC, comment #1:

No, I think we can do it like ssh does, i.e., tunnel multiple virtual
connections through one physical connection. This adds some
overhead, though (especially CPU-wise, as we have to call dd for
every chunk).

By the way, you might want to look at KDE's fishserv.pl; it has some
optimizations. I never looked at it myself, though.
	Oswald Buddenhagen <ossi>
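The multiplexing idea above could be sketched as simple length-prefixed framing: each frame names a virtual channel and announces its payload size, so file data and command replies can interleave on one pipe without one having to drain the other. The helpers below are purely illustrative and say nothing about how ssh actually frames its channels.

```shell
# send_frame CHANNEL DATA -- emit a frame as "<channel> <length>\n"
# followed by exactly <length> bytes of payload. Hypothetical sketch.
send_frame() {
    chan=$1
    data=$2
    printf '%s %s\n' "$chan" "${#data}"
    printf '%s' "$data"
}

# recv_frame -- read one frame from stdin and print "<channel>:<payload>".
recv_frame() {
    # Read the header line, then exactly the advertised payload size,
    # so the next frame's header stays aligned on the stream.
    IFS=' ' read -r chan len || return 1
    payload=$(head -c "$len")
    printf '%s:%s\n' "$chan" "$payload"
}
```

Because every payload is preceded by its byte count, an aborted transfer only needs to skip whole frames, never an unbounded tail of data.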
Fri 27 Apr 2007 08:40:45 AM UTC, original submission:

I was looking at the fish code recently and noticed that aborting
a running file transfer still causes MC to read the whole file sent
by the remote end. I realized that the way FISH is currently
implemented, i.e. with commands and data sent over the same channel,
this is the only way to clear the data channel so that command
replies reach the FISH layer without establishing a new link.
While this may be acceptable for small transfers, I doubt that it
makes sense for multi-megabyte files.

Ideas on how to fix it are welcome. One way I can see is to
open a separate FISH connection for the data transfer, i.e. like FTP does.
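To illustrate the problem, a minimal sketch of what the abort path has to do today, assuming a hypothetical helper and the "### NNN" status-line convention the fish protocol uses for replies:

```shell
# drain_and_get_reply REMAINING -- hypothetical sketch of the current
# abort path: because file data and command replies share one channel,
# the client must consume every remaining payload byte before the next
# "### NNN" status line can be parsed, even after the user aborts.
drain_and_get_reply() {
    remaining=$1                        # payload bytes still unread
    head -c "$remaining" > /dev/null    # wasted work for big files
    IFS= read -r reply                  # now the status line lines up
    printf '%s\n' "$reply"
}
```

For a multi-megabyte transfer, that `head -c` over the discarded tail is exactly the cost the ticket complains about.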