[Bug] Downloaded file inconsistency

37 posts in this topic

This happened with StrongDC++ 2.02, but since BigMuscle also reads this forum, I'll post it here. I may repost it in his forum as well.

Sometimes a file arrives corrupted! After a power loss the download process wasn't properly closed, but the finished file is no longer checked in the new version, so the corruption goes unnoticed. I think you should bring back file rechecking after download.

Also, a corrupted file cannot be restored by moving it back into the temp directory, because the new version simply starts downloading it from the beginning. Please fix this.
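For reference, the kind of post-download recheck being asked for is conceptually simple: hash the finished file block by block and re-queue only the blocks that don't match. A minimal sketch, assuming per-block verification (StrongDC++ actually uses Tiger Tree Hashes; SHA-256 and the block size here are stand-ins, and the function names are illustrative):

```python
import hashlib

BLOCK_SIZE = 1024 * 1024  # 1 MiB; a real client derives this from the hash tree


def block_hashes(path, block_size=BLOCK_SIZE):
    """Hash a finished file block by block."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes


def find_bad_blocks(path, expected_hashes):
    """Return indices of blocks whose hash does not match the expected
    value; these are the only parts that need redownloading."""
    actual = block_hashes(path)
    return [i for i, (a, e) in enumerate(zip(actual, expected_hashes)) if a != e]
```

This is exactly the "recheck and repair damaged blocks" behavior described for 1.xx: nothing is thrown away, and only mismatched blocks go back into the queue.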

The second annoying bug is that slow sources are removed even when this option is disabled. For example, you have two sources, one faster than the other, say 5 and 10 KB/s, so the total download speed is 15 KB/s. But StrongDC always removes the 5 KB/s source, dropping the download speed to just 10 KB/s, which is worse.

I've been getting a large number of corrupt downloads lately and I'm getting ready to switch to another client, as I didn't have these problems with fulDC. Could there still be some interaction with the latest ZoneAlarm? Is anyone else using ZoneAlarm with ApexDC?

Hey, why not just bring back file rechecking? uTorrent has it, and old-school StrongDC had it too. And this is not a hardware failure, it's an SDC bug!

We are not responsible for your hardware failure.

Of course you are not. But the problem is that it is not a hardware failure causing the broken files. In the case of a real hardware malfunction (typically broken RAM) you will often get broken files, yes. But every version of StrongDC++ I have used produces broken files if it is not shut down correctly (power failure, BSOD, etc.), although 1.xx detected this during the recheck and "repaired" the damaged parts by redownloading them. And the "broken" data was actually zero-filled, up to several hundred megabytes in the case of large files. That is, I assume, blocks of the file were considered downloaded when they were not. And obviously a non-flushed disk cache alone cannot explain that.

Please don't think I have malfunctioning hardware. For example, I have used eMule and Azureus for several years (and several TB). Both have a recheck function and I keep it enabled, and I have never yet seen that recheck fail. But it happens often in StrongDC++.

So I can't understand why it was necessary to remove such a useful feature in StrongDC++ 2.xx. Even if you fix all the problems that currently result in incomplete files after an improper shutdown (which, unfortunately, are not fixed yet), there are other reasons to keep it. For example, a hash failure is a good indication that something is wrong, whether in software or hardware. I have also often used StrongDC++ (like eMule and Azureus on other networks) to repair broken files, whatever the cause of the damage. In 1.xx everything worked: an already existing temp file was treated as complete, rechecked, and the damaged blocks were redownloaded. This very useful feature (for advanced users, of course) was broken in StrongDC++ 2.xx.

I would be grateful if you restored these features. Anyone who thinks they are useless can simply disable them, yes?

No, it won't be restored. You should understand that StrongDC++ checks its own activity - the correctness of downloaded data and of saved data. It doesn't and won't check file integrity for situations that have nothing to do with StrongDC++ (hardware failure etc.).

No, it won't be restored. You should understand that StrongDC++ checks its own activity - the correctness of downloaded data and of saved data. It doesn't and won't check file integrity for situations that have nothing to do with StrongDC++ (hardware failure etc.).

Then please explain how it is possible to get a ~500 MB zero-filled gap in a ~3 GB file. The temp file was preallocated, of course.
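A gap like that is easy to detect after the fact: a preallocated temp file starts out all zeros, so any region the client marked downloaded but never actually wrote stays zero-filled. A hypothetical scan for such gaps (the block size is arbitrary, chosen only for illustration):

```python
def zero_gaps(path, block_size=64 * 1024):
    """Return (offset, length) pairs for runs of all-zero blocks.
    In a preallocated temp file, a large zero run inside a region the
    client believes is downloaded is a strong corruption signal."""
    gaps = []
    start = None
    offset = 0
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            if block.count(0) == len(block):  # every byte in block is zero
                if start is None:
                    start = offset
            else:
                if start is not None:
                    gaps.append((start, offset - start))
                    start = None
            offset += len(block)
    if start is not None:
        gaps.append((start, offset - start))
    return gaps
```

A ~500 MB run reported by such a scan matches "blocks considered downloaded while they were not" far better than any plausible cache size.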

I really don't think this has anything to do with a hardware problem. I ran burn-in and memory tests for 72 hours on this box with no issues when I initially set it up, and I just ran another 12-hour test to verify. I've also run other clients for days at a time without this issue. I'm not knocking ApexDC, as I think it has great potential, but it would be nice to have the client check the file before it calls it complete. Or, at minimum, add a re-add-to-queue option in the finished downloads window, so that if you test the file and find it doesn't work, the client can keep searching for it even if it isn't available at the time. Things happen to normally fine systems: power failures, full filesystems, crashes caused by other apps, who knows.

By "hardware failures" I meant what you said (power failure, BSOD, etc.), and data lost in such cases has nothing to do with DC++.

By "hardware failures" I meant what you said (power failure, BSOD, etc.), and data lost in such cases has nothing to do with DC++.

But StrongDC++ can help avoid such data corruption even in the case of a power failure and similar real-world situations. I can't understand why you are objecting. Almost every other P2P client has this feature because it is useful. And, I repeat, even in 1.xx it was optional.

BM has the perfect solution. But the world around us really is far from perfect. So when SDC++ / ApexDC++ could obviously be adapted to fit it better, then IMHO they should be adapted.

Open Notepad, type something, and then simulate a power failure. The written text will be lost. It's the same situation.

But in this case you can detect such a situation and fix it! Why are you objecting? I can't understand.

I am only trying to tell you that we won't add a CPU-eating feature which "solves" problems that have nothing to do with DC++.

Open Notepad, type something, and then simulate a power failure. The written text will be lost. It's the same situation.

No.

Open Notepad, type 2 MB of spaces, and save the file. Overwrite the first half of the file with "z" and save. Overwrite the second half with "z" and simulate a power failure. Of course you will end up with a file containing only 1 MB of "z". The problem is that sometimes StrongDC++ decides that we already have 2 MB of "z". And now it doesn't even bother to check whether that is true.
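The preallocation detail is what makes the Notepad analogy misleading: a preallocated temp file is all zeros, so a crash mid-download leaves clean zero runs, exactly the pattern reported earlier in the thread. A rough illustration of the on-disk state after such a simulated failure (sizes shrunk to 2 KB, names purely illustrative):

```python
SIZE = 2048        # stand-in for the 2 MB in the example above
HALF = SIZE // 2


def crashed_download(path):
    """Preallocate the file with zeros, then 'crash' after only the
    first half has actually been written."""
    with open(path, "wb") as f:
        f.write(b"\x00" * SIZE)  # preallocation fills the file with zeros
        f.seek(0)
        f.write(b"z" * HALF)     # only the first half lands on disk
        # power failure here: the second half is never written


def second_half_written(path):
    """A client that only trusts its own bookkeeping would call this
    file complete; a recheck immediately sees the zero-filled half."""
    with open(path, "rb") as f:
        data = f.read()
    return data[HALF:] != b"\x00" * HALF
```

The file is the right size and the first half is valid, so without a recheck nothing looks wrong - which is the poster's point.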

But the problem is your hard disk cache. StrongDC++ just writes the data to disk, but your disk stores it in its cache, so on failure the data in the cache is lost.
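For what it's worth, the OS-level part of that cache can be flushed explicitly: a client that fsyncs completed blocks narrows the crash window considerably. A minimal sketch, assuming an already-preallocated temp file (note this pushes data through the OS page cache only; the drive's own write cache is a separate layer that fsync may or may not reach, depending on the hardware):

```python
import os


def write_block_durably(path, offset, data):
    """Write a block into a preallocated file and ask the OS to flush
    it to the device before returning."""
    with open(path, "r+b") as f:
        f.seek(offset)
        f.write(data)
        f.flush()              # drain Python's userspace buffer
        os.fsync(f.fileno())   # ask the OS to flush its page cache
```

The trade-off the developers allude to is real: per-block fsync costs throughput. But it would bound the damage of a power failure to the block in flight, rather than hundreds of megabytes of bookkeeping.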

I am only trying to tell you that we won't add a CPU-eating feature which "solves" problems that have nothing to do with DC++.

Anyone can disable this feature. Even in 1.00 RC10 it was disabled by default. So what's the problem?

I am only trying to tell you that we won't add a CPU-eating feature which "solves" problems that have nothing to do with DC++.

If this feature is totally useless, why is it present in many other P2P clients? Are their authors all so stupid, in your view?

But the problem is your hard disk cache. StrongDC++ just writes the data to disk, but your disk stores it in its cache, so on failure the data in the cache is lost.

Do you really think my disk cache is 500 MB?

Do you really think my disk cache is 500 MB?

Just open Task Manager and you will see how much memory is used for the cache.

Just open Task Manager and you will see how much memory is used for the cache.

I doubt that cache is related to the disk cache.

Just open Task Manager and you will see how much memory is used for the cache.

Please read the Windows NT kernel documentation; then you would know what the numbers shown in Task Manager mean. I do.

Do you really think "dirty" data in the Windows NT disk cache is kept unwritten for tens of hours?
