I think the most accurate version of my arguments comes across in post #157 onwards.
For ease of writing, I have defined "bug" to mean several things:
- "problem, issue, lack of technique, gap in technique"
- Solutions to prevent waste caused by gaps in, or lack of, techniques in the segmented download technique.
- To inform the user about the problem, not fix the problem.
- To automate the detection of failing downloads so the user doesn't have to.
For Big Muscle: a helpful rewording... of the entire thread, perhaps. If you need full detail on the waste that occurs to convince you the bugs need fixing, check the rest of this post.
Issues with the segmenting download technique
1. Redownloading the same chunk(s) ad infinitum when a download's temporary file is corrupted beyond self-repair. In this case it should stop the process and inform the user which temporary files are corrupted, or at least inform the user with a dialog box.
2. Its corruption-handling technique for repairing corrupted temporary files does not have a high enough success rate; other P2P applications have a much higher, near-perfect or perfect success rate.
Issue 2. is a/the cause of Issue 1.
I once described these bugs as "download same chunk ad infinitum bug caused by inability to correct corrupted temporary file bug".
Description of the issues
1. A corrupt temporary file for a download queue item exists on the user's machine.
2. In some cases StrongDC++'s segmenting download technique fails indefinitely to repair the corrupt temporary file. (Other P2P clients never fail at this, or fail at a much lower rate.)
2.1. The segmenting download technique downloads the same chunk(s) of the file ad infinitum. *1* *2*
*1*. Ad infinitum meaning: it has been running for days, redownloading the same chunk(s) hundreds to thousands of times at high speed (10 Mbit connection) from 1-10 peers at the same time (the max is 10).
*2*. The user is not informed which temporary file is corrupted.
2.1.1. Should inform the user if the corrupted temporary file cannot be repaired on its own.
2.1.2. Should pause the download if the corrupted temporary file cannot be repaired on its own.
2.1.3. Should detect that the download cannot be repaired on its own (e.g. it has redownloaded the same chunk(s) too many times), or use some other detection technique.
2.2. The "informing feature" only needs to be implemented if the repair technique for corrupt temporary files is not fixed 100%. If it cannot be perfected, then the informing feature is required for the few remaining % of cases where the temporary file cannot be repaired on its own.
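The detection in 2.1.3 could be as simple as a per-chunk retry counter. A minimal sketch of the idea (the names, threshold, and callback are hypothetical illustrations, not StrongDC++ internals):

```python
# Sketch of chunk-redownload detection (hypothetical names, not actual
# StrongDC++ internals).

MAX_REDOWNLOADS = 5  # assumed threshold before declaring the file unrepairable

class ChunkTracker:
    def __init__(self, max_redownloads=MAX_REDOWNLOADS):
        self.counts = {}          # chunk offset -> times downloaded
        self.max = max_redownloads

    def record_download(self, chunk_offset):
        """Call each time a chunk is (re)downloaded after a hash-check
        failure. Returns True once the chunk has been fetched too often."""
        self.counts[chunk_offset] = self.counts.get(chunk_offset, 0) + 1
        return self.counts[chunk_offset] > self.max

def on_chunk_failed(tracker, chunk_offset, filename):
    if tracker.record_download(chunk_offset):
        # Pause instead of redownloading ad infinitum, and name the file.
        return f"PAUSED: temporary file '{filename}' appears unrepairable (chunk {chunk_offset})"
    return "RETRY"
```

This would satisfy 2.1.1-2.1.3 at once: the counter detects the unrepairable case, the returned state pauses the download, and the message names the corrupted temporary file.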
I have not tested leaving it (a download with a corrupted temporary file) running for more than a week, because of the bandwidth waste.
One option, instead of just pausing the download after a set number of chunk redownloads:
- If the corruption-handling technique fails to repair a corrupt temporary file, it should delete the download from the queue and re-add it automatically (starting over with a new temporary file).
- The user could be given this option when the corruption is detected, or set it to happen automatically.
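The delete-and-re-add option could look something like this (a sketch over a hypothetical queue representation, not the actual client's API):

```python
# Sketch of the "delete and re-add" recovery option for an unrepairable
# temporary file. The queue-item dict layout is a hypothetical stand-in.
import os

def restart_queue_item(queue, item, auto_restart=True):
    """On unrepairable corruption: either restart with a fresh temporary
    file, or leave the item paused for the user to decide."""
    if not auto_restart:
        item["state"] = "paused-corrupt"   # surface the problem to the user
        return item
    try:
        os.remove(item["temp_path"])       # discard the corrupt temporary file
    except FileNotFoundError:
        pass
    fresh = {"source": item["source"], "temp_path": item["temp_path"],
             "state": "queued", "downloaded": 0}
    queue.remove(item)
    queue.append(fresh)                    # starts over from scratch
    return fresh
```

The `auto_restart` flag corresponds to the choice above: ask the user when corruption is detected, or restart automatically.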
- Other users have experienced these bugs. (see posts ITT)
- Other users continue to experience these bugs. (see posts ITT)
*ITT = in this thread
About this thread
The new thread was moved into the old thread without my permission by a forum admin; I'm in the process of cleaning it up.
Cleaning done: you can skip everything except this opening post, up until the post on page 3 marked "New Thread begins after this post."
The thread title was censored. Real title:
Bugs found in the download techniques packaged in ApexDC++/StrongDC++
Previous title: [Request] Corruption detection technique that informs the user of corrupt temporary files of download queue items. This would be an interim technique if a fully automated technique is not implemented (i.e. if the bugs in the download techniques are not fixed).
Long version of this opening post
1. Fix the bugs so that the user doesn't have to be informed, as is the behaviour of other P2P clients.
Last resort: 2. Corruption detection technique that informs the user of corrupt temporary files of download queue items.
There are bug(s) in the way StrongDC++'s/ApexDC++'s segmenting download technique's corruption-handling subtechnique handles corruption in the temporary files of download queue items.
There are also bugs in, and a lack of, corruption-handling techniques in the way the normal download technique handles corruption in temporary files of download queue items.
Hence there are several unfixed bugs and missing corruption-handling techniques in the StrongDC++/ApexDC++ packages.
Please post corrections or relevant additions to these facts if required - I will post them here.
Segmenting download technique
- Does not complete a download if the local temporary file for that download is corrupt. (Good)
- It detects the corruption, but does not inform the user which file the corruption was detected in, and does not pause the download if the corruption in the local temporary file is beyond repair. (Bad)
- It downloads the same chunk(s) of a download with a corrupted temporary file that is beyond self-repair ad infinitum. (Bad)
- It cannot repair the corrupted temporary file, because its technique for doing so is bugged. (Bad)
- Time and resource waste occurs because of lengthy manual user detection. *1* *2* (Bad)
*1*. Hence the reason for this feature request: I am in essence asking for a technique to automate what is currently a manual process involving guessing and trial and error.
*2*. Continually observing the transfers window and *guessing* which download(s) are not going to complete successfully due to a corrupted temporary file.
2.1. The obvious bandwidth waste caused by downloading the same chunk(s) of a file ad infinitum until the user has engaged in and completed the manual corruption-detection process.
2.2. StrongDC++/ApexDC++ crashes after redownloading the same chunk(s) of a download with a corrupted temporary file a random number of times. This is the only bug so far that a developer (Big Muscle in this case) has said they will fix; the fix has not yet been released.
2.2. Note: the cause(s) of the crash bug are unproven; this is the only bug I am unsure of.
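DC clients already keep a Tiger Tree Hash (TTH) for each file, so the building block for pinpointing corruption exists: comparing per-chunk hashes of the on-disk temporary file against the known-good leaf hashes identifies exactly which chunks, and therefore which file, are corrupt. A minimal sketch, using SHA-256 as a stand-in for Tiger (which is not in Python's standard hashlib) and an assumed chunk size:

```python
# Sketch of per-chunk corruption localisation. SHA-256 stands in for the
# Tiger hash that DC clients actually use; CHUNK_SIZE is an assumed value.
import hashlib

CHUNK_SIZE = 64 * 1024  # assumed segment size

def chunk_hashes(data, chunk_size=CHUNK_SIZE):
    """Hash each fixed-size chunk of the data."""
    return [hashlib.sha256(data[i:i + chunk_size]).hexdigest()
            for i in range(0, len(data), chunk_size)]

def find_corrupt_chunks(temp_data, expected_hashes, chunk_size=CHUNK_SIZE):
    """Compare the on-disk temporary file against known-good chunk hashes
    and report exactly which chunks are damaged (empty list = clean)."""
    actual = chunk_hashes(temp_data, chunk_size)
    return [i for i, (a, e) in enumerate(zip(actual, expected_hashes))
            if a != e]
```

A client doing this comparison could then name the corrupted temporary file in the UI instead of silently redownloading the same chunks.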
Standard legacy download technique
- Completes the download regardless of a corrupt temporary file. (Bad)
- Perhaps it doesn't detect the corruption in the local temporary file at all; if it does, it ignores it and completes the download.
Cause of the corrupted temporary files
An unidentified defect in the user's machine.
Possible defects: hard drive, filesystem, Windows XP, other software, other hardware.
I will add defect possibilities as they are sent in.
Corruption on the user's machine is an inherently unfixable problem for all users for the foreseeable future.
The main variable is the percentage of temporary files that get corrupted.
- Hardware and software in/on the user's machine can and will fail. I believe these failure frequencies are called "error rates", along the lines of "1 error per 1 billion operations"; some RAM has ECC to compensate, for example.
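As a back-of-envelope illustration of what such an error rate implies (the rate here is an assumed figure for illustration, not a measured one):

```python
# Illustrative expected-error calculation for an assumed bit error rate.
def expected_bit_errors(file_bytes, bit_error_rate=1e-9):
    """Expected number of bit errors when file_bytes bytes pass through a
    channel/medium with the given per-bit error rate."""
    return file_bytes * 8 * bit_error_rate

# A 4 GiB download at an assumed 1 error per 10^9 bits:
errors = expected_bit_errors(4 * 1024**3)  # roughly 34 expected bit errors
```

Even at such a low rate, a large download can expect dozens of flipped bits somewhere along the way, which is why corruption handling cannot be treated as a corner case.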
We require the tools of information to prevent the resulting waste quickly and effectively. (See my reply below to Big Muscle's statement: "User should fix his problem on his own".)
Personal experience, opinion
I use the legacy download technique. Even if I end up with a corrupt file, that will be obvious to me when I open or verify it. In my opinion it wastes much less time and fewer resources to simply redownload the file, and it is also much less annoying than the lengthy manual detection process during the download. Even without the speed of segmenting I still prefer this option, because of the cost in time and resources of manually detecting even a few corrupted temporary files. In my case I get about 2 corrupted temporary files for every 60 files that I download, although I may have been "unlucky" recently; the overall number is perhaps 2 in 150 or more files since 2004.
My relevant replies and questions left unanswered from the last thread:
"I still prefer the client doing nothing automatically and letting the user cancel the download if he thinks he has to."
Doing nothing automatically, my translation: not having corruption-handling techniques for corrupt data on the local machine.
One of these automated techniques is to detect which files the corruption is located in and inform the user of those corrupted files.
By saying it should do nothing automatically, you are saying the user should not be informed which files are corrupted.
A ridiculous amount of waste can occur in the meantime, during the lengthy manual process of user detection; I have gone into full detail on this already.
So you are saying that you think this large amount of waste is acceptable. If these really are your opinions, please reply and say so more explicitly, in no uncertain terms.
I'd also be interested to know if Big Muscle, or anyone else reading this, thinks this waste is acceptable. Clearly I think it's unacceptable, and I'm fairly sure the vast majority of users and devs involved in P2P file sharing as a whole think it's unacceptable; but if you think it's acceptable, please come out and say so, and state your reasons.
Other apps will at least detect local files that are corrupted and inform the user of those files, so that the user can do something about it, and quickly.
I already understood exactly what you just said and agree with you.
I am asking for corruption-handling techniques for corrupt data on the receiving user's machine, to prevent wastage (full detail in previous posts), and for answers to the questions at the end of this post.
So my theory that interacting with certain peers causes the local temp file to be corrupted is extremely unlikely, however possible (hackers or ???). I was just trying to list all possibilities, however likely or unlikely, so we could get to the bottom of this, which we have.
I agree but with an addition in this case -