Ticket #76 (new defect)
savannah: Dangerous unpacking (disk full is not reported, unpacked files become corrupt)
Reported by: | ngaba | Owned by: | |
---|---|---|---|
Priority: | major | Milestone: | Future Releases |
Component: | mc-vfs | Version: | |
Keywords: | | Cc: | |
Blocked By: | | Blocking: | |
Branch state: | no branch | Votes for changeset: | |
Description (last modified by ossi)
Original: http://savannah.gnu.org/bugs/?21524
Submitted by: | Nagy Gabor <ngaba> | Submitted on: | Tue 06 Nov 2007 12:45:03 PM UTC |
Category: | VFS | Severity: | 3 - Normal |
Status: | None | Privacy: | Public |
Assigned to: | None | Open/Closed: | Open |
Release: | 4.6.1 | Operating System: | GNU/Linux |
Original submission:
Hi! I simply refer to my mail on the ML: http://mail.gnome.org/archives/mc/2007-April/msg00002.html The workaround I suggested there is not usable, because it is very slow. The main problem: most archivers cannot do "unpack foo as bar", so we use "unpack --unpack-to-stdout archive/foo > bar", and unpack won't return an error in case of disk-full. A possible workaround (pseudo-code):

extractto-dir = basedir(extractto)
extractto-name = basename(extractto)

Then we could unpack into extractto-dir under the original name and rename the unpacked file to extractto-name afterwards. Bye
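For illustration, a minimal shell sketch of the rename-based workaround described above (not actual mc code; the parameter layout, the use of unzip and all variable names are assumptions):

    archive=$1          # archive file (illustrative)
    stored=$2           # name of the entry inside the archive
    extractto=$3        # full destination path chosen by the user

    extractto_dir=`dirname "$extractto"`
    extractto_name=`basename "$extractto"`
    stored_name=`basename "$stored"`

    # Let the archiver write the file itself, so its own disk-full handling
    # (non-zero exit status, removal of the partial file) applies, then
    # rename the result to the name the user asked for.
    unzip -o -j "$archive" "$stored" -d "$extractto_dir" || exit 1
    mv -f "$extractto_dir/$stored_name" "$extractto_dir/$extractto_name"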
Comment 1 by Pavel Tsekov <ptsekov> at Tue 06 Nov 2007 01:31:16 PM UTC:
Are you sure?

[ptsekov@localhost Desktop]$ unzip -p bless-bin-0.5.0.zip bless-bin-0.5.0/NEWS > /bin/mv
bash: /bin/mv: Permission denied
[ptsekov@localhost Desktop]$ echo $?
1

So errors are reported. Maybe the return status is not checked properly in the uzip script.
Comment 2 by Nagy Gabor <ngaba> at Tue 06 Nov 2007 01:53:47 PM UTC:
Well, I'm not really familiar with perl. But as your example shows, here it is obviously bash that reports the error, not unzip. If you modify the script to detect bash errors too, that would be OK. But IMHO unzip's own disk-full detection is much better, because it deletes the partly unpacked file and so on. And I'm sure that uzip/urar ... can corrupt my files, because they did :-( A bit off-topic here: IMHO the whole /tmp stuff should be dropped from VFS (it is a big speed and disk-space limitation).
Comment 3 by Pavel Tsekov <ptsekov> at Tue 06 Nov 2007 02:11:39 PM UTC:
Well, I took the time to try to reproduce an error condition while extracting an archive entry into /tmp (i.e. copyout). The copyout action reports the error, and MC captures it and displays it in a red dialog box. I don't know what caused the data corruption in your case, but I cannot continue investigating this bug report without further information. Either provide useful info or I'll close this bug report as invalid.
Comment 4 by Pavel Tsekov <ptsekov> at Tue 06 Nov 2007 02:15:17 PM UTC:
I did not try a disk-full condition on /tmp; instead I pointed the temporary file name at a file writable only by root, which produces a "Permission denied" error - but there should be no difference.
Comment 5 by Nagy Gabor <ngaba> at Tue 06 Nov 2007 03:31:08 PM UTC:
h372926@pc2003:~$ df -h
/dev/sda8             950M  869M   34M  97% /tmp
h372926@pc2003:~$ dd if=/dev/zero bs=50M of=./bigfile count=1
h372926@pc2003:~$ zip 1.zip bigfile

Then I start mc, press Enter on 1.zip, and try to copy out bigfile from 1.zip to my home. The result: a 34M bigfile in my home, no error.
Comment 6 by Pavel Tsekov <ptsekov> at Tue 06 Nov 2007 04:20:42 PM UTC:
Ok. According to the bash manual: [...] A failure to open or create a file causes the redirection to fail. [...] Unfortunately failure to write is not detected :( So, yes - this method is unsafe.
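As an aside, on GNU/Linux this failure mode can be reproduced without actually filling a disk (an illustration, not part of the original report): /dev/full lets the shell open the redirection target successfully but fails every write with "No space left on device", so whether anything is reported depends entirely on the archiver checking its writes:

    $ unzip -p some.zip somefile > /dev/full    # redirection itself succeeds
    $ echo $?                                   # non-zero only if the archiver noticed the failed writes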
Comment 7 by Pavel Tsekov <ptsekov> at Wed 07 Nov 2007 02:13:17 PM UTC:
Instead of using "> /tmp/file" the extfs scripts could pipe the extracted file contents through "dd" .. then all errors would be reported. Comments? Ideas?
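A minimal sketch of what that could look like inside a copyout handler (illustrative variable names; not a patch against the real uzip script):

    # Pipe the archiver's output through dd; dd's exit status reflects write
    # errors such as "No space left on device" on $dest.
    if ! unzip -p "$archive" "$stored" | dd of="$dest" bs=64k 2>/dev/null; then
        rm -f "$dest"      # drop the possibly truncated file
        exit 1
    fi
    # This only catches write-side errors; the archiver's own exit status
    # (e.g. a corrupt archive) would still need a separate check, for
    # example via bash's PIPESTATUS array.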
Comment 8 by Nagy Gabor <ngaba> at Wed 07 Nov 2007 02:45:30 PM UTC:
If this works well, then OK. But I would prefer to skip /tmp during extraction whenever possible.

Pros:
- sometimes /tmp is small and you don't have enough privileges to change that: a big limiting factor
- if you extract files to a pendrive, for example, the extra /tmp step [usually on a local HDD] slows down the process [not an unreasonable slowdown, but a slowdown nonetheless]

Cons:
- you lose your "copy-out" progress bar (however, this is not very informative anyway; usually uncompressing is much slower than the /tmp -> /dest copy/move)
- ???
Comment 9 by Pavel Tsekov <ptsekov> at Wed 07 Nov 2007 03:02:33 PM UTC:
The /tmp step is necessary since extfs tries to emulate real filesystem operations. If we had native archiver support it would be unnecessary. This could happen, but not for 4.6.2.
Comment 10 by Nagy Gabor <ngaba> at Thu 08 Nov 2007 11:05:17 AM UTC:
I will be a bit off-topic here, but I hope you'll tolerate me:

1. If you attach the modified uzip and urar scripts, I will test them.
2. IMHO some review is needed for all extfs scripts, because, for example, urar freezes mc if your rar is password protected.
3. I'm just curious here: I don't understand why you need native archiver support. uzip currently gets "copyout archivename storedfilename /tmp/..." instead of "copyout archivename storedfilename destination" (and then mc copies /tmp/... to the destination). I simply don't understand why. uzip and urar could do direct copyout, copyin, ... without /tmp [in the case of F3 and F4, or with tar.gz, /tmp is obviously needed]. And finally, I mention that the list command needs no extra HDD space at all. mc could decide which parameters it passes to the extfs script (/tmp in the case of F3/F4; 'realdest' in the case of F5/F6/F7/F8 ... <- if /tmp is still needed, the extfs script can use it); see the sketch below.
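For illustration only, a minimal sketch of a copyout entry point that writes straight to whatever destination mc passes, following the usual extfs calling convention ("script copyout archivename storedfilename extractto"); the dd-based error check mirrors the earlier suggestion, and everything here is a sketch rather than the actual uzip code:

    case "$1" in
        copyout)
            archive=$2
            stored=$3
            extractto=$4   # today a temporary file under /tmp; could just as
                           # well be the real target chosen by the user
            # Stream the entry straight to the destination and rely on dd to
            # report write errors such as a full target filesystem.
            unzip -p "$archive" "$stored" | dd of="$extractto" bs=64k 2>/dev/null \
                || { rm -f "$extractto"; exit 1; }
            ;;
    esac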
Comment 11 by Pavel Tsekov <ptsekov> at Thu 08 Nov 2007 01:02:40 PM UTC:
Regarding urar and password protected archives - if you use urar from CVS it shouldn't freeze MC anymore. I'll answer the other questions later.
Test case:
mkdir ttt
cd ttt
touch ttt
zip -9P password ttt.zip ttt
select ttt.zip in the panel
press Enter on ttt.zip
select ttt
press F3