Excelsium

Segmented downloading techniques



Yes, but again, every "corruption detected at position" error is caused by something in your computer, downstream of your NIC card, wireless adapter or modem, whatever it is.

I partially agree. The possible causes:

- A continuous problem interacting with only certain peers - a "specific defect in local or remote machine hardware/software that leads to a corrupt local copy."

- Or the problem is random and caused by a "general defect in the local machine".

Again: Maybe we technically fully agree.

"corruption detected at position" is caused by something in your computer - yup, the local copy is corrupt; the message "corruption detected at position" proves it. (Big Muscle's explanation)

However, there may be a problematic interaction with certain clients (the same set of clients) that causes the local copy to be corrupted.

The case for it being a problem interacting with only certain peers is very strong as laid out in full detail in previous posts.

The fact remains that there is no automated problem-handling technique in DC++ clients to prevent the waste caused by this problem. I've already gone into full detail about why corruption is an unfixable problem (analogies: spam, crime). It will probably always affect a small percentage of transfers on the internet. If you don't handle it, waste happens and the quality of the user experience is reduced.


However, there may be a problematic interaction with certain clients (the same set of clients) that causes the local copy to be corrupted.

The case for it being a problem interacting with only certain peers is very strong as laid out in full detail in previous posts.

The fact remains that there is no automated problem-handling technique in DC++ clients to prevent the waste caused by this problem.

If this is the case, you should get "TTH inconsistency", if I have understood BM correctly.

Now downloading the first file you mentioned, connected to the first three hubs w/o reg, 80+% w/o problem, ~6 peers connected.

P. S. I saw it: after 170MB completed, ~93%, the bar went back and forth several times (restarting block). Fortunately the client started to drop peers, maybe only because of "no free block", as it stated, and I finished the file from one peer only. I think that all [Z] compressed peers were dropped not by accident; the download from 97% to 100% completed from a [T] peer only. And the file plays. Here is the SFV:

; MooSFV v1.82 - Mon Apr 02 01:26:22 2007

;

; 189263872 Apr 02 01:19:34 2007 [C1]Mushishi_-_03[XviD][F334DBDF].avi

;

[C1]Mushishi_-_03[XviD][F334DBDF].avi F334DBDF

Try immediately, while we have the same peers on.
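For reference, each SFV line pairs a filename with its CRC-32, so a downloaded file can be checked for corruption independently of any client. A minimal, self-contained sketch of that check (standard reflected CRC-32 polynomial, as used by SFV tools; the command-line wrapper is only for illustration):

#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Feed bytes into a running CRC-32 state. Start with state = 0xFFFFFFFF
// and take the bitwise complement of the final state to get the CRC.
static uint32_t crc32_feed(uint32_t state, const unsigned char* buf, size_t len) {
    for (size_t i = 0; i < len; ++i) {
        state ^= buf[i];
        for (int k = 0; k < 8; ++k)
            state = (state >> 1) ^ (0xEDB88320u & (0u - (state & 1u)));
    }
    return state;
}

int main(int argc, char** argv) {
    if (argc != 3) {
        std::fprintf(stderr, "usage: sfvcheck <file> <expected-crc-hex>\n");
        return 2;
    }
    std::FILE* f = std::fopen(argv[1], "rb");
    if (!f) { std::perror("fopen"); return 2; }

    uint32_t state = 0xFFFFFFFFu;
    std::vector<unsigned char> buf(64 * 1024);
    size_t n;
    while ((n = std::fread(buf.data(), 1, buf.size(), f)) > 0)
        state = crc32_feed(state, buf.data(), n);
    std::fclose(f);

    uint32_t computed = ~state;                                  // final complement
    uint32_t expected = (uint32_t)std::strtoul(argv[2], nullptr, 16);
    std::printf("%08X (computed) vs %08X (sfv) -> %s\n",
                computed, expected, computed == expected ? "OK" : "CORRUPT");
    return computed == expected ? 0 : 1;
}

Running it as sfvcheck "[C1]Mushishi_-_03[XviD][F334DBDF].avi" F334DBDF would confirm whether the copy on disk matches the published checksum.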


If this is the case, you should get "TTH inconsistency", if I have understood BM correctly.

Now downloading the first file you mentioned, connected to the first three hubs w/o reg, 80+% w/o problem, ~6 peers connected.

As said, I tried those files again recently and succeeded over and over, whereas before it was fail, fail, fail. The list of peers online sharing those files has changed a lot, and continues to change, as is the nature of DC++.

Even if it's not a problem interacting with certain peers that leads to a corrupt local copy (unlikely),

and it is purely a defect in the local machine,

there needs to be a problem-handling technique that prevents the waste caused by this problem,

for all users that have defects in their machines.


Or something in the latest versions of Strong and Apex fixed your problem. And I do not agree that "there needs to be a problem-handling technique that prevents the waste caused by this problem for all users that have defects in their machines". We are not a remote repair shop, nor should we do miracles (if possible at all) instead of implementing new features, right? Just post a poll for a choice between this and /w per hub and you will see what I mean. ;)


I'm testing the file now. No - I used the exact same versions of Strong and Apex.

I doubt that you have a private beta.


If this is the case, you should get "TTH inconsistency", if I have understood BM correctly.

Now downloading the first file you mentioned, connected to the first three hubs w/o reg, 80+% w/o problem, ~6 peers connected.

P. S. I saw it: after 170MB completed, ~93%, the bar went back and forth several times (restarting block). Fortunately the client started to drop peers, maybe only because of "no free block", as it stated, and I finished the file from one peer only. I think that all [Z] compressed peers were dropped not by accident; the download from 97% to 100% completed from a [T] peer only. And the file plays. Here is the SFV:

When this problem occurs I notice the behaviour

"the percentage bar went back and forth several times (restarting block)."

This happens sometimes but then the download finishes completely. (Actually not sure of this.)

But other times it continues ad infinitum - with "corruption detected at position" in the log.

I doubt that you have a private beta.

1. I installed the latest public releases of StrongDC++ & ApexDC++.

2. I tested some files in segmenting mode; they failed.

3. A few days later I did the same testing with the same files and the same installs of StrongDC++ & ApexDC++. - The downloads of the files succeeded this time - the list of peers sharing those files had changed a lot.

What are you trying to pull? Perhaps you just misunderstood when I said

"I used the exact same version of Strong and Apex"

Anyways I'm now downloading in segmenting mode.

[C1]Mushishi_-_03[XviD][F334DBDF].avi

TTH: HBBAHIWV56PEJQD5H6S6J4WAFQ2J3ELRNYBMJUA

The download completed successfully (with no errors, no corruption) a few hours later in segmenting mode.

;)

This was the post with the questions, moved to the consolidated replies post.


- Detect files that have been downloading the same chunk.. say 50 times.

that's a good solution. the number of detected corruptions would have to be defined, though; 50 is a bit too much...

codewise, i think this would need a counter in the FileChunksInfo class if i'm not mistaken.

however, this is a workaround and i still prefer the client doing nothing automatically and letting the user cancel the download if he thinks he has to.
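For what it's worth, a counter along those lines could look roughly like this. This is an illustrative sketch only: the member names below are assumptions made for the example, not the actual layout of StrongDC++'s FileChunksInfo class.

#include <cstdint>
#include <map>

class FileChunksInfo {
public:
    // Call when a chunk starting at `offset` fails verification.
    // Returns true once the chunk has failed often enough that the client
    // should stop retrying and tell the user the temp file looks bad.
    bool onChunkCorrupted(int64_t offset) {
        int& failures = corruptionCount[offset]; // value-initialized to 0 on first use
        return ++failures >= MAX_CHUNK_RETRIES;
    }

    // Call when the chunk finally verifies, so the map stays small.
    void onChunkVerified(int64_t offset) {
        corruptionCount.erase(offset);
    }

private:
    static const int MAX_CHUNK_RETRIES = 5; // tunable; 50 was judged too high above
    std::map<int64_t, int> corruptionCount; // chunk start offset -> failure count
};

Since the map only ever holds entries for chunks that have actually failed verification, the memory cost is effectively zero for the majority of users who never see corruption.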


- A continuous problem interacting with only certain peers - a "specific defect in local or remote machine hardware/software that leads to a corrupt local copy."

Again: Maybe we technically fully agree

You still can't understand that the problem has nothing to do with the remote side? The data was downloaded and verified correctly.

However, there may be a problematic interaction with certain clients (the same set of clients) that causes the local copy to be corrupted.

If it is, it means that you have an insecure computer. Some user is hacking your system and modifying the files on your HDD ;)

now do you understand?

Better: the corruption in the file "was created" when it had already been saved on your HDD and the connection to the remote user had been closed. It has nothing to do with the network communication!

i think this would need a counter in the FileChunksInfo class if i'm not mistaken.

How many users have a corrupted HDD? Maybe one in 1000 (maybe even fewer, just a guess). So I don't think it is worth wasting memory to save a counter for every chunk.


Thread killed: Too much of my fact-finding process and verbose ranting for people to read.

- Moved the last replies (a consolidated summarization after gathering facts) and continued in a fresh new thread titled "Corruption detection technique that informs the user of corrupt temporary files of download queue items.".

Moved > thread continued in the Bug & Crash forum.

Old thread ends here (except the opening post).

New thread begins here because a forum admin moved the new thread here (replies after the opening post).

You can skip everything previous to this post except the opening post.

You can also ignore everything I have said in between the opening post and my next post.


sorry. I won't read it.

a) it's too long

b) I don't understand why you have created four topics about the same problem

c) Strong/Apex doesn't cause the corruption, so write to your HDD/memory/OS manufacturer to provide you with the tools for fixing the bug


So, basically you want the "check for corruption and ONLY re-download corrupted parts" feature that Shareaza has?


So, basically you want the "check for corruption and ONLY re-download corrupted parts" feature that Shareaza has?

Sounds like this is exactly what I'm looking for; I'll look into it.

Some guesses:

- The file-beyond-repair problem doesn't exist in this client (excluding maybe bad sectors)

- The user doesn't have to do anything at all, ever - completely automated (regarding the problem of corruption), which is exactly my experience of eMule, BitTorrent and so forth.

sorry. I won't read it.

a) it's too long

b) I don't understand why you have created four topics about the same problem

c) Strong/Apex doesn't cause the corruption, so write to your HDD/memory/OS manufacturer to provide you with the tools for fixing the bug

OK, needless to say my opinion of this guy has gone down the toilet.

I hope everyone else will have the courtesy to read.

On the point of previous threads: they were for fact-checking, finding out exactly what the problem is.

This thread is focused purely on the request - people saying whether they are for or against the request and why.


So, basically you want the "check for corruption and ONLY re-download corrupted parts" feature that Shareaza has?

but Strong/Apex redownloads ONLY the corrupted parts of the file.


but Strong/Apex redownloads ONLY the corrupted parts of the file.

Oh, maybe there is a flaw in the segmenting technique itself - in the way it deals with existing corruption of temporary files; after all, we only have your word for it that there isn't.

Rewording: it appears to me that there is in fact a flaw or flaws in the way the segmenting download technique handles corruption in the temporary files of download queue items.

Read: I think StrongDC++'s segmenting download technique is buggy.

Added to opening post:

- It cannot repair the corrupted temporary file because its technique to do so is bugged.

He's refused to read my request, therefore he cannot answer my questions.

Since this thread is in the ApexDC++ forum I'll be requesting a reply from the developer(s) of ApexDC++.


Read: I think StrongDC++'s segmenting download technique is buggy.

so explain why you are the only one with corrupted files???


so explain why you are the only one with corrupted files???

It wasn't me who started the thread in the Bug & Crash forum, it was XP2:

I've also suddenly been getting this error .. everything was fine last night but then I woke up to this problem & have been getting it every couple of hours now .. rebooting the system doesn't help --

Right .. found out what my problem was ... it's the file [DB]_Naruto_216_[C9B6D4F2].avi

TTH : KWZX77RERY4RDPSR5QJBQJYVBMXG5YQX6QVJDIY .. although this IS a valid file there's a problem with checksums on part of the download .. every time I try to download this file I crash, and other peeps in the hubs I use have mentioned other files doing this ... so that's going to be a hard one to sort out

the only way I found to work around this problem is to get the file via torrents ... not much help for DC ..

Notice how he said 'and other peeps in the hubs I use have mentioned other files doing this'.

I have also noticed users talking about this in the main chats of the hubs I use.

The next time I see this I will quote it.

Clearly it is a problem that affects only a small percentage of users in a significant way, but I think the way StrongDC++'s segmenting download technique handles corruption in temporary files of download queue items is buggy.

Oh, and if you had read my opening post you would also have noticed that it is impossible that I'm the only one with corrupted temporary files of download queue items.

The law of the unfixable problem of corruption caused by error rates in hardware states it.

There may be a similar law for software errors causing corruption as well.

Big Muscle, please leave this thread since you have refused to read my request and answer my questions - unless of course you redeem yourself by reading my request and answering my questions.

This really belongs in the Bug & Crash forum; I will ask a forum admin to move these 2 threads asap. I PM'd Crise :D.

As a side note, perhaps the best place for these threads is the DC++ forums, where the most developers can see them, but http://www.dcpp.net/forum/ is down - I heard it was hacked/attacked :) .

Forgot to mention that the client (StrongDC++/ApexDC++) has a crash bug caused by a bug(s) in one of its existing corruption handling techniques. - Just added in section 2.2 of the facts in the opening post.

It's ironic that the only existing automated "technique" to inform the user of a corruption problem is when the client crashes.


Forgot to mention that the client (StrongDC++/ApexDC++) has a crash bug caused by a bug(s) in one of its existing corruption handling techniques.

No, it doesn't.

It's true that there's some crash, my beta tester reported it to me too. But he doesn't have any corruption. Just a crash. After restart, it downloaded the file correctly.

And Strong/Apex doesn't have any "corruption detection technique". I don't know why to call it a technique. It's only a logical operation to detect whether the data on your HDD is equal to the data which you have downloaded. If it isn't, it redownloads it. What more should it implement??? Should scandisk/checkdisk be implemented in Strong? Should there be an HDD fixing tool implemented in Strong??? Or what???

Just a question: why doesn't Windows implement any "corruption detection technique" during file copy???
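That "logical operation" is straightforward to picture: compare a block on disk against the hash the downloaded data already passed, and requeue it on mismatch. A minimal sketch follows; readBlockFromDisk, tigerLeafHash and requeueBlock are assumed stand-in helpers for this illustration, not the client's real API.

#include <cstdint>
#include <string>
#include <vector>

std::vector<uint8_t> readBlockFromDisk(int64_t offset, size_t len); // assumed helper
std::string tigerLeafHash(const uint8_t* data, size_t len);         // assumed helper
void requeueBlock(int64_t offset, size_t len);                      // assumed helper

// The network data already matched `expectedLeaf` when it arrived, so if the
// on-disk copy no longer does, the damage happened locally - and only that
// block needs to be fetched again.
void verifyBlockOnDisk(int64_t offset, size_t len, const std::string& expectedLeaf) {
    std::vector<uint8_t> data = readBlockFromDisk(offset, len);
    if (tigerLeafHash(data.data(), data.size()) != expectedLeaf)
        requeueBlock(offset, len); // "redownload ONLY the corrupted parts"
}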


No, it doesn't.

It's true that there's some crash, my beta tester reported it to me too. But he doesn't have any corruption. Just a crash. After restart, it downloaded the file correctly.

And Strong/Apex doesn't have any "corruption detection technique". I don't know why to call it a technique. It's only a logical operation to detect whether the data on your HDD is equal to the data which you have downloaded. If it isn't, it redownloads it. What more should it implement??? Should scandisk/checkdisk be implemented in Strong? Should there be an HDD fixing tool implemented in Strong??? Or what???

Just a question: why doesn't Windows implement any "corruption detection technique" during file copy???

So, basically you want the "check for corruption and ONLY re-download corrupted parts" feature that Shareaza has?

but Strong/Apex redownloads ONLY the corrupted parts of the file.

"Just a question: why doesn't Windows implement any "corruption detection technique" during file copy???"

- Windows does not have a segmenting download technique?

- BitTorrent has a corruption detection technique. It is not bugged.

This technique exists in other p2p clients; it also exists in StrongDC++/ApexDC++, as you have said.

There is a bug in that technique.

The crash bug is also caused by the redownload-chunk-ad-infinitum bug, not just the problem encountered by your beta tester.

- That logical operation can be described as a corruption detection technique.

Please read XP2's posts that I have quoted. - There are other users with corrupt temporary files of download queue items.

Please read my opening post and answer the questions in it before posting more here.. :whistling:

Side note about my HDDs: not relevant.. deleted.

I've found yet another person who has encountered the multi-source bugs: "B-727".

You can find his posts and some of my replies I did not move here:

http://forums.apexdc.net/index.php?showtop...t=0&start=0

My reply for that thread here:

Sorry, but we are running in circles like hamsters. My guess is that when multi-source is enabled, Apex/SDC checks the segments "more carefully". When it is not enabled, the file just passes the check.

Btw, photos larger than 2 MB are quite likely to have been used as a "trojan horse" for hiding other info in them...

The standard download technique completes the download regardless of corruption in the local temporary file for that download.

The download not completing in multi-source mode is because multi-source mode detects corruption in the local temporary file for that download and prevents the download from completing.

Multi-source mode is bugged - it fails to correct the corrupt temporary files of download queue items.

- It also wastes bandwidth - it will redownload the same chunk ad infinitum but the download will not complete.

Other p2p apps that use segmenting download techniques succeed in correcting the corrupt temporary files of downloads.

see the thread I started.

B-727's posts are extremely relevant to my interests - another example for me to use. Thanks for posting ^_^

I'm hoping for no replies from Zlobomir saying "All Media Fixer will fix your problem". Lol.

It is not "One ring to rule them all". It will not detect and fix corrupt temporary files of download queue items ;)

I don't believe it.. this is ..

The techniques I want are in existence but were removed from the latest versions of StrongDC++:

http://forums.apexdc.net/index.php?showtopic=683

Of course you are not. But the problem is that it is not a hardware failure that causes broken files. Of course, in a case of hardware malfunction (typically broken RAM) you will often get broken files. But StrongDC++ in every version I have encountered produced broken files if it was not correctly shut down (power failure, BSOD etc.), although 1.xx detected this during recheck and "repaired" damaged parts by redownloading them. And the "broken" data was actually zero-filled. Up to several hundred megabytes in the case of large files. That is, I assume, blocks of the file were considered downloaded while they were not. And, obviously, this can't happen as a result of a non-flushed disk cache.

Please don't think that I have malfunctioning hardware - for example, I have used eMule and Azureus for several years (and several TB). Both of them have a recheck function and I keep it enabled. And I have never yet seen this recheck fail. But this often happens in StrongDC++.

So I can't understand why it was necessary to remove such a useful feature in StrongDC++ 2.xx. Even if you fix all the problems which currently result in incomplete files in the case of improper shutdown (and which, unfortunately, are not fixed yet), there are other reasons. For example, a hash failure is a good indication that "something is wrong" (whether in software or hardware). Also, I have often used StrongDC++ (like eMule and Azureus in other networks) to repair broken files (whatever the reason for the breakage). Everything was fine in 1.xx - an already existing temp file was considered complete, was rechecked, and damaged blocks were redownloaded. And this very useful (for advanced users, of course) feature was broken in StrongDC++ 2.xx.

I would be grateful if you would restore these features. Anyone thinking they are useless can disable them, yes?
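The 1.xx recheck void describes amounts to walking the existing temp file against the file's stored Tiger-tree leaves and requeueing whatever fails. A rough sketch under that assumption, with the same stand-in helper names as before rather than any real client API:

#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

std::vector<uint8_t> readBlockFromDisk(int64_t offset, size_t len); // assumed helper
std::string tigerLeafHash(const uint8_t* data, size_t len);         // assumed helper
void requeueBlock(int64_t offset, size_t len);                      // assumed helper

// Walk an existing temp file leaf by leaf and requeue every block whose
// leaf hash does not match. Zero-filled blocks fail the comparison like
// any other damage, so only the broken parts get redownloaded.
void recheckTempFile(int64_t fileSize, int64_t leafSize,
                     const std::vector<std::string>& leaves) {
    for (size_t i = 0; i < leaves.size(); ++i) {
        int64_t offset = (int64_t)i * leafSize;
        size_t len = (size_t)std::min(leafSize, fileSize - offset); // last leaf may be short
        std::vector<uint8_t> block = readBlockFromDisk(offset, len);
        if (tigerLeafHash(block.data(), block.size()) != leaves[i])
            requeueBlock(offset, len);
    }
}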

I agree with what void says.. these "features" should be brought back.

- The techniques are only required if you don't fix the bugs in your download techniques.

- It seems this topic has been discussed many times before.. of course this topic is going to crop up again and again, and yes, again.

If you won't release a version with the features you have already made:

I personally will settle for a quick hackjob workaround; others may not. - Because I personally cannot stand observing the transfers window any longer - which is why I'm using the legacy technique or a hackjob/old client.

I will have to resort to compiling a hacked version that allows the segmenting technique to complete downloads with corrupted temporary files - probably just a few bytes or lines of code.

For now I will just use an older StrongDC++/ApexDC++ client that allows corruption, until it is made obsolete.

Why will I settle for a workaround? Because the legacy download technique already ignores corruption in local temporary files of download queue items - and that's what most people in the DC++ network are using.

If, for example, everyone in the DC++ network was using StrongDC++/ApexDC++, you would be completely swamped with complaints: "zomg how do I enable the legacy technique."

- Remember the law of error rates in hardware.

I've found a solution for myself... as for everyone else I find who needs a solution, I will offer them mine.

Maybe BigMuscle or someone else will fix the bugs, put in the features.

I'll check back here in a week.. maybe.


Maybe BigMuscle or someone else will fix the bugs, put in the features.

there's nothing to fix. Bye.


there's nothing to fix. Bye.

Could not resist a final post for a while..

Big Muscle's segmenting technique in its current form will never be accepted as the standard used by the majority of DC++ users (thank goodness).

Number 1 reason: due to the law of error rates in hardware, your techniques to deal with these errors are buggy, seriously flawed and have gaps, as outlined in my threads and posts - and in others' threads & posts.

Bye, Big Muscle: it seems you have given up and stuck your head in the sand, denying the existence of the bugs and problems. Hence your technique in its current form will never be accepted as the most used technique.

Now we have the final word from the developer of StrongDC++.

We are awaiting a response from other developers and from Crise, the developer of ApexDC++ - I hear he's busy at the moment.


Big Muscle's segmenting technique in its current form will never be accepted as the standard used by the majority of DC++ users (thank goodness).

Number 1 reason: due to the law of error rates in hardware, your techniques to deal with these errors are buggy, seriously flawed and have gaps, as outlined in my threads and posts - and in others' threads & posts.

Sorry that I must say it. But are you really blockheaded, or are you just unable to read what I have written to you in all of your threads???

So again,

!!!!!!!! HARDWARE ERRORS HAVE NOTHING TO DO WITH SEGMENTED DOWNLOADING !!!!!!!!!!

Now are you able to understand it??????????

And why is normal downloading accepted if it allows corrupted files to be left on your HDD and allows these corrupted files to spread over the network (the reason for having too many "same" files with different TTHs in the network)? Segmented downloading only allows leaving and sharing 100% correct files.

You are right only in one sentence - the corruption message should also display the filename of the corrupted file, and maybe it will be changed this way.

And only one bug needs to be fixed - the crash in FileChunksInfo. BUT IT HAS NOTHING TO DO WITH THE CORRUPTION, BECAUSE IT CRASHES REGARDLESS OF THE CORRUPTION (understand that it sometimes crashes when there's no corruption, and you are just the case where it crashes when there is corruption).

NOW DO YOU UNDERSTAND IT???

Sorry for that, but I really don't know what I should write to you, because if I write anything, you will still say your rubbish like "there's a bug in segmented downloading because I have corrupted files" etc. :whistling:


Big Muscle's segmenting technique in its current form will never be accepted as the standard used by the majority of DC++ users (thank goodness).

Excelsium, calm down and show respect to a fellow developer of an influential and popular mod. It wouldn't be popular if the segmented downloading techniques failed. It's disgusting to see somebody take the piss out of somebody else's work.

If there's a problem with it, show the developer (like you have). Don't force him to fix something he doesn't agree with; just show him further proof. Both don't agree? That's unfortunate, but it's how it goes. ^_^

Topic: Haven't read it so cannot comment... don't have time. Will do eventually. :whistling:


Hi Lee - I'll have to disagree with you, I'm afraid.

- He openly refused to read my posts several times with the excuse "it's too long to read".

Why should I post more for him if he won't read? No, I'm not going to waste time on him.

By refusing to read my posts he also refuses to answer my questions.

I believe that Big Muscle is ignoring and denying the existence of bugs in his product.

I have already posted all the bugs & details; anything more would be a repost.

If a developer asks for more, I will post more (proof) - provided they at least acknowledge that they have read all about the problem and have answered the questions.

- It will probably not come from my own machine as I have implemented workarounds for the bugs - I don't use his technique in its current state - I don't have time to test it anymore, until a new version of StrongDC++/ApexDC++ comes out at least.

The consequences of his technique remaining in its current state speak for themselves (Listed in posts already).

I think the bugs and lack of techniques that others and I have listed prevent his technique, in its current state, from being the most used technique in the DC++ community.

- Many others have stated that they have been experiencing the consequences of said bugs.

- The number of people experiencing the consequences scales with the size of the userbase of StrongDC++'s segmenting download technique in its current state.

- I have no complaints about anything else in StrongDC++/ApexDC++.

- These clients are very good and have a much better user experience than the standard DC++ client.

- Please do not remove the legacy technique from future versions until it is removed from the standard DC++ client or until the packaged segmenting download technique is vastly improved.


So sit back and see whether he fixes it. He does it for free, remember - and so do we. :whistling:

This topic is now closed to further replies.