Everything posted by Excelsium
-
1. I don't share files I download until I have verified them myself (TTH + subjective tests). 2. Of course I will use a workaround for your bugs to prevent said waste. Denial that the bugs exist - of course others and I disagree. The bugs are listed in the opening post and elsewhere. I've already given all my replies before this post; it's up to Big Muscle if he wants to cover up his bugs with endless denials. It probably doesn't matter what I say to Big Muscle now - I think he will do anything he can to 'cover it up'. Anyone new to or returning to the thread: just check out my opening post, and flames' and XP2's threads in this forum. Sorry for the repost of my earlier reply, but here's where Big Muscle admitted something needs to be done. This is nearly the entire point of this thread. I see you have acknowledged that the user should be informed of which temporary files are corrupted - this is critical. It prevents wasting the user's time and his own and others' resources. :whistling: Perhaps you have accepted, as I have, that the waste (see opening post) is not acceptable? I forgot to add a point to the reworded version I made. One option, instead of just pausing the download after a set number of chunk redownloads: - If the corruption handling technique fails to repair a corrupt temporary file, it should delete the download from the queue and re-add it to the queue automatically (starting over with a new temporary file). - The user could be given this option when the corruption is detected, or set it to happen automatically. (A rough sketch of this idea follows below.)
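A minimal standalone sketch of that option, in C++ since that is what StrongDC++/ApexDC++ are written in. It is not the clients' actual code; every name in it (QueueItem, onChunkVerificationFailed, MAX_CHUNK_RETRIES, the temp-file handling) is hypothetical and only illustrates the idea of giving up on a temporary file that cannot be repaired and restarting the item with a fresh one:

```cpp
// Hypothetical sketch only - not StrongDC++/ApexDC++ source. It illustrates
// "if the corrupt temporary file cannot be repaired after N chunk redownloads,
// discard it and restart the queue item automatically".
#include <cstdint>
#include <cstdio>
#include <map>
#include <string>

struct QueueItem {
    std::string target;                    // final file path
    std::string tempFile;                  // temporary (partial) file path
    std::map<int64_t, int> chunkRetries;   // chunk offset -> failed verifications
};

constexpr int MAX_CHUNK_RETRIES = 10;      // "a set number of chunk redownloads"

// Called whenever a downloaded chunk fails its hash check.
// Returns true if the item was reset (temp file discarded, download restarted).
bool onChunkVerificationFailed(QueueItem& qi, int64_t chunkOffset) {
    if (++qi.chunkRetries[chunkOffset] < MAX_CHUNK_RETRIES)
        return false;                      // keep trying to repair this chunk

    // Repair has failed too often: stop redownloading the same chunk ad
    // infinitum, drop the temporary file and start the download over.
    std::printf("Could not repair %s; restarting it with a new temporary file.\n",
                qi.target.c_str());
    std::remove(qi.tempFile.c_str());
    qi.chunkRetries.clear();
    return true;
}
```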
-
- I don't want to spread corrupted files over the network. Redownloading the same block ad infinitum is a bug. 1. The client should prevent this behaviour and inform the user which file is corrupted beyond repair (by the client). Client = StrongDC++.
-
Correct, but there are bugs in the way your segmenting download technique attempts to repair corrupted temporary files in a percentage of cases. Crash bug: it's possible I'm wrong on the crash bug - it's the only bug that can be conceded - I've added a note to the opening post saying that the cause(s) of the crash is/are unproven. Possibilities: 1. It crashes randomly regardless of the "redownloading the same chunk ad infinitum" bug. 2. It crashes randomly and because of the "redownloading the same chunk ad infinitum" bug. I have had a small number of corrupted temporary files, yes, caused by error(s) in hardware, with the remote possibility that the cause was software error(s) elsewhere on my machine and outside of the DC++ client. But your segmenting technique should be able to handle corrupted temporary files, as is the case in other p2p clients; clearly, at least in some cases, it does not - check out the bug list in the opening post, e.g. "redownload same chunk(s) ad infinitum". This is nearly the entire point of this thread. I see you have acknowledged that the user should be informed of which temporary files are corrupted - this is critical. It prevents wasting the user's time and his own and others' resources. :whistling: Perhaps you have accepted, as I have, that the waste (see opening post) is not acceptable? I have added a shortened rewording of the thread for you at the beginning of the opening post - cut by more than half, with the possible crash bug removed. Point taken. Added the rewording for Big Muscle at the beginning of the opening post - it is short to read, about 30% of the original; there can be no real excuses for it being too long now.
-
Hi Lee - I'll have to disagree with you, I'm afraid. He openly refused to read my posts several times with the excuse "it's too long to read". Why should I post more for him if he won't read? No, I'm not going to waste time on him. By refusing to read my posts he also refuses to answer my questions. I believe that Big Muscle is ignoring and denying the existence of bugs in his product. All the bugs and details I have already posted; anything more would be a repost. If a developer asks for more I will post more (proof), provided they at least acknowledge that they have read all about the problem and have answered my questions. It will probably not come from my own machine, as I have implemented workarounds to the bugs - I don't use his technique in its current state, and I don't have time to test it anymore, at least until a new version of StrongDC++/ApexDC++ comes out. The consequences of his technique remaining in its current state speak for themselves (listed in posts already). I think the bugs, and the lack of techniques that others and I have listed, prevent his technique in its current state from becoming the most used technique in the DC++ community. - Many others have stated that they have been experiencing the consequences of said bugs. - The number of people experiencing the consequences scales with the size of the userbase of StrongDC++'s segmenting download technique in its current state. - I have no complaints about anything else in StrongDC++/ApexDC++. These clients are very good and offer a much better user experience than the standard DC++ client. - Please do not remove the legacy technique from future versions until it is removed from the standard DC++ client, or until the packaged segmenting download technique is vastly improved.
-
Could not resist a final post for a while... Big Muscle's segmenting technique in its current form will never be accepted as the standard used by the majority of DC++ users (thank goodness). Number 1 reason: due to the law of error rates in hardware, your techniques to deal with these errors are buggy, seriously flawed, and have gaps, as outlined in my threads and posts - and in others' threads and posts. Bye Big Muscle: it seems you have given up and stuck your head in the sand, denying the existence of the bugs and problems. Hence your technique in its current form will never be accepted as the most used technique. Now we have the final word from the developer of StrongDC++. We are awaiting a response from other developers and from Crise, the developer of ApexDC++ - I hear he's busy at the moment.
-
:whistling: my last reply moved to the very end of the other thread, but indeed this thread is of great interest. I'll be testing 0.2.2 segmenting mode to see if it behaves like normal mode - i.e. allows completion of corrupted downloads. If it's something as simple as disabling TTH checking on the fly, I guess I can recompile the latest version myself :> http://forums.apexdc.net/index.php?showtopic=1435
-
To me it looks like another example of the "download same chunk ad infinitum bug caused by the inability to correct a corrupted temporary file" - see full detail in my bug report thread titled "Download techniques in ApexDC++/StrongDC++". A forum admin renamed that thread to "Detect corrupted file and switch download technique", which is an incorrect title - oh well.
-
"Just a question? Why Windows doesn't implement any "corruption detection technique" during file copy???" - Windows does not have a segmenting download technique? - Bittorrent has a corruption detection technique. it is not bugged. This technique exists in other p2p clients it also exists in StrongDC++/ApexDC++ as you have said. There is a bug in that technique. The crash bug is also caused by the re download chunk ad infinitum bug, not just the problem encountered by your beta-tester. - That logical operation can be described as a corruption detection technique. Please read XP2s posts that I have quoted. - There are other users with corrupt temporary files of download queue items. Please read my opening post and answer the questions in my opening post before posting more here.. :whistling: . Side note: on my hdds.. no relevant.. deleted. I've found yet another person who has encountered the bugs with multi-source "B-727" You can find his posts and some of my replies I did not move here: http://forums.apexdc.net/index.php?showtop...t=0&start=0 My reply for that thread here: The standard download technique completes the download regardless of corruption in the local temporary file for that download. The download not completing in the multi-source mode is because the multi-source mode detects corruption in the local temporary file for that download and prevents the download completing. multi-source mode is bugged - it fails to correct the corrupt temporary files of download queue items. - - it also wastes bandwidth - it will re download the same chunk ad infinitum but the download will not complete. Other p2p apps that use segmenting download techniques succeed in correcting the corrupt temporary files of downloads. see the thread I started. B-727's posts are extremly relevant to my interests, another example for me to use - thanks for posting I'm hoping no replies from Zlobomir saying "All Media Fixer will fix your problem" Lol. It is not "One ring to rule them all". It will not detect and fix corrupt temporary files of download queue items I Don't believe it.. this is .. The techniques I want are in existence but were removed from the latest versions of StrongDC++ http://forums.apexdc.net/index.php?showtopic=683 I agree with what void says.. these "features" should be brought back. - The techniques are only required if you don't fix the bugs in your download techniques. - It seems this topic has been discussed many times before.. of course this topic is going to crop up again and again and yes again. If you wont release a version with the features you have already made: I personaly will settle for a quick hackjob workaround, others may not. - Because I personally cannot stand observing the transfers window any longer - which is why I'm using the legacy technique or a hackjob/old client. I will have to resort to compling a hacked version that allows the segmenting technique to complete downloads with corrupted temporary files - probably just a few bytes or lines of code. For now I will just use an older client of strongdc++/apexdc++ that allows corruption until it is made obsolete. Why will I settle for a workaround? Because the legacy download technique already ignores corruption in local temporary files of download queue items - and thats what most people in the dc++ network are using. If for example everyone in the dc++ network was using strongdc++/apexdc++ you would be completly swamped with complaints. "zomg how do I enable the legacy technique." 
- Remember the law of error rates in hardware. I've found a solution for myself... and as for everyone else I find who needs a solution, I will offer them mine. Maybe Big Muscle or someone else will fix the bugs and put in the features. I'll check back here in a week... maybe.
-
apex without the segmented download feature
Excelsium replied to da_asmodean's topic in Pre 1.0 Reports
I remember replying to this exact post earlier... weird. You can disable the segmented download technique; here's how: Settings > Downloads > Queue > Enable multi-source > set it to "Disabled". Setting it to disabled will enable the legacy DC++ download technique. -
It wasn't me who started the thread in the Bug & Crash forum, it was XP2. Notice how he said 'and other peeps in the hubs i use have mentioned other files doing this'. I have also noticed users talking about this in the main chats of the hubs I use; the next time I see this I will quote it. Clearly it is a problem that affects only a small percentage of users in a significant way, but I think the way StrongDC++'s segmenting download technique handles corruption in temporary files of download queue items is buggy. Oh, and if you had read my opening post you would also have noticed that it is impossible that I'm the only one with corrupted temporary files of download queue items. The law of the unfixable problem of corruption caused by error rates in hardware states it. There may be a similar law for software errors causing corruption as well. Big Muscle, please leave this thread since you have refused to read my request and answer my questions - unless of course you redeem yourself by reading my request and answering my questions. This really belongs in the Bug & Crash forum; I will ask a forum admin to move these 2 threads asap. I PM'd Crise. As a side note, perhaps the ultimate best place for these threads is the DC++ forums, where the most developers can see them, but http://www.dcpp.net/forum/ is down - I heard it was hacked/attacked. Forgot to mention that the client (StrongDC++/ApexDC++) has a crash bug caused by a bug(s) in one of its existing corruption handling techniques - just added in 2.2 of the facts in the opening post. It's ironic that the only existing automated "technique" to inform the user of a corruption problem is when the client crashes.
-
Oh, maybe there is a flaw in the segmenting technique itself - in the way it deals with existing corruption of temporary files; after all, we only have your word for it that there isn't. Rewording: it appears to me that there is in fact a flaw or flaws in the way the segmenting download technique handles corruption in corrupt temporary files of download queue items. Read: I think StrongDC++'s segmenting download technique is buggy. Added to opening post: - It cannot repair the corrupted temporary file because its technique to do so is bugged. He's refused to read my request, therefore he cannot answer my questions. Since this thread is in the ApexDC++ forum I'll be requesting a reply from the developer(s) of ApexDC++.
-
Sounds like this is exactly what I'm looking for; I'll look into it. Some guesses: - The file-beyond-repair problem doesn't exist in this client (excluding maybe bad sectors). - The user doesn't have to do anything at all, ever - it's completely automated (regarding the problem of corruption), which is exactly my experience of eMule, Bittorrent and so forth. Ok, needless to say my opinion of this guy has gone down the toilet. I hope everyone else will have the courtesy to read. On the point of previous threads: previous threads were for fact checking, finding out exactly what the problem is. This thread is focused purely on the request - people saying whether they are for or against the request and why.
-
Thread killed: too much of my fact-finding process and verbose ranting for people to read. Moved my last replies (a consolidated summarization after gathering the facts) and continued in a fresh new thread titled "Corruption detection technique that informs the user of corrupt temporary files of download queue items." Moved > thread continued in the Bug & Crash forum. The old thread ends here (except the opening post). The new thread begins here, because a forum admin moved the new thread here (replies after the opening post). You can skip everything previous to this post except the opening post. You can also ignore everything I have said in between the opening post and my next post.
-
When this problem occurs I notice the behaviour "the percentage bar went back and forth several times (restarting block)". This happens sometimes but then the download finishes completely (actually, not sure of this). But other times it continues ad infinitum - with "corruption detected at position" in the log. 1. I installed the latest public releases of StrongDC++ & ApexDC++. 2. I tested some files in segmenting mode; they failed. 3. A few days later I did the same testing with the same files and the same installs of StrongDC++ & ApexDC++. - The downloads of the files succeeded this time. - The list of peers sharing those files had changed a lot. What are you trying to pull? Perhaps you just misunderstood when I said "I used the exact same version of Strong and Apex". Anyways, I'm now downloading in segmenting mode. [C1]Mushishi_-_03[XviD][F334DBDF].avi TTH: HBBAHIWV56PEJQD5H6S6J4WAFQ2J3ELRNYBMJUA The download completed successfully (with no errors, no corruption) a few hours later in segmenting mode. This was the post with the questions, moved to the consolidated replies post.
-
I'm testing the file now. No - I used the exact same version of Strong and Apex.
-
As said, I tried those files again recently and succeeded over and over, whereas before it was fail, fail, fail; the list of peers online sharing those files has changed a lot, and continues to change, as is the nature of DC++. Even if it's not a problem interacting with certain peers that leads to a corrupt local copy (unlikely), and it is purely a defect in the local machine, there needs to be a problem handling technique that prevents the waste caused by this problem for all users who have defects in their machines.
-
Critical post. I agree partially; the possible causes: - A continuous problem interacting with only certain peers - a "specific defect in local or remote machine hardware/software that leads to a corrupt local copy." - Or the problem is random and caused by a "general defect in the local machine". Again: maybe we technically fully agree that "corruption detected at position" is caused by something in your computer - yup, the local copy is corrupt; the message "corruption detected at position" proves it (Big Muscle's explanation). However, there may be a problematic interaction with certain clients (the same set of clients) that caused the local copy to be corrupted. The case for it being a problem interacting with only certain peers is very strong, as laid out in full detail in previous posts. The fact remains that there is no automated problem handling technique in DC++ clients to prevent the waste caused by this problem. I've already gone into full detail about why corruption is an unfixable problem (analogies: spam, crime). It will probably always affect a small percentage of transfers on the internet. If you don't handle it, waste happens, and the quality of the user experience is reduced.
-
Hehe... I think it helps a little with the theory of an "unidentified problem with interaction with certain peers" leading to corruption in the local copy. Again, to Big Muscle: I understand now that the whole or parts of the local copy are corrupt when "corruption detected at position" is in the system log.
-
Reply to Big Muscle. So for this particular corruption problem, "corruption detected at position": the corruption happens on the receiving user's machine, for one of the following reasons: - A problem in the user's machine causes corruption. - The corruption on the local machine is caused by an unidentified problem with the interaction with certain peers on the network. Again, all scenarios require at least basic problem handling to prevent said wastage - I've gone into full detail on this waste and how to prevent it in earlier posts. The automated pausing would only occur for this particular dead-end corruption problem (the corruption is in the local machine's unfinished copy and cannot be fixed for some reason), "corruption detected at position". Even more detail on automated pausing: - Detect corruption detection messages in the system log. - Detect files that have been downloading the same chunk, say, 50 times. - Correlate the two and pause the file. (A rough sketch of this follows below.) Zlobomir just posted something but deleted it: "Imho this could be caused by a passive peer or an error in hubsoft"
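A rough sketch of that automated pausing, again as standalone C++ rather than actual client code; the class name and thresholds (CorruptionWatcher, kMaxRedownloads = 50) are assumptions made up for illustration. It simply counts "corruption detected at position" events and repeated downloads of the same chunk, and pauses the file when both cross a threshold:

```cpp
// Illustration only - hypothetical names, not the real StrongDC++ internals.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <map>
#include <set>
#include <string>

class CorruptionWatcher {
public:
    // Call when the client logs "corruption detected at position" for a file.
    void onCorruptionDetected(const std::string& file) {
        ++corruptionEvents_[file];
        maybePause(file);
    }

    // Call each time the same chunk of a file has to be downloaded again.
    void onChunkRedownload(const std::string& file, int64_t chunkOffset) {
        ++redownloads_[file][chunkOffset];
        maybePause(file);
    }

    bool isPaused(const std::string& file) const { return paused_.count(file) != 0; }

private:
    void maybePause(const std::string& file) {
        if (paused_.count(file)) return;
        int worstChunk = 0;
        for (const auto& entry : redownloads_[file])
            worstChunk = std::max(worstChunk, entry.second);
        // Pause only when the same chunk keeps failing AND corruption was logged.
        if (worstChunk >= kMaxRedownloads && corruptionEvents_[file] > 0) {
            paused_.insert(file);
            std::cout << "Paused " << file << ": same chunk downloaded "
                      << worstChunk << " times with corruption detected.\n";
        }
    }

    static constexpr int kMaxRedownloads = 50;   // "say 50 times"
    std::map<std::string, int> corruptionEvents_;
    std::map<std::string, std::map<int64_t, int>> redownloads_;
    std::set<std::string> paused_;
};
```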
-
- The problem is not on my computer for a big percentage of the time, or the whole or near-whole percentage (corrupt bits being transmitted to the receiver). - In the meantime, while the user is unaware of the fundamentally unfixable problem (corrupt bits being transmitted to the receiver), said resource and time waste occurs - see my replies to Big Muscle's posts. I've already fully explained in previous posts why the problem (corruption) is not the responsibility of any single user, and that corruption handling techniques are more complete in other p2p apps. Poy's suggestion: "one's suggestion would be to act the same way the client acts in the case of a "TTH inconsistency" (remove the user from the queue), but i'd be against this solution". I'd say: remove the user from the queue for that file only. This suggestion will work in the case of users transmitting corrupted bits if: - The segmenting download technique can detect which user is sending the corrupt bits. - For the times when the problem is on the receiver's local machine, the download should be automatically paused and the user warned (the most basic essential problem handling technique, as described in full already). (Full detail in earlier posts)
-
(Important post) Reply to last post: an additional corruption handling technique could: - Detect that an attempt at downloading a chunk of a file has failed 10 times. - Warn the user that this has happened and which file it happened in. - This is the most basic first step of problem handling. - Currently the user is left uninformed, therefore said resource and time waste occurs. - Additions to this first step of corruption handling are also obvious and should be added. Segmenting mode cannot bypass users sharing corrupt bits until they are all offline - it won't complete the download until all users sharing corrupt bits are offline (full detail in another post). Normal mode can bypass users sharing corrupt bits simply by switching to a user that is not transmitting corrupt bits - the download completes successfully even while the users sharing corrupt bits are online, by bypassing them (full detail in other posts). - To prevent resource and time waste. - Other p2p apps have more complete corruption handling techniques that don't result in a large amount of waste. - Full detail in my previous posts.
-
1.0.0 is an April Fools' joke. Don't use FAT32, it has a 4 gigabyte file size limit; back it up and reformat it to NTFS.
-
It's an April Fools' joke!! Also, the forums were inaccessible for the past hour+ with "IPS Driver Error".
-
Same here.
-
I will do that if I can; otherwise I hope video evidence is enough for you or another developer to take action and implement this or another simple preventative technique - for example, pausing the download of the affected file when corruption is detected - how hard can it be? Some more ideas: - If the same chunk fails from the same user e.g. 10 times, remove that user for that file. - If the same chunk fails 10 times from e.g. 5 different users, pause the file, warn the user, and offer to change the download mode to normal for that file. (A rough sketch of these rules follows at the end of this post.)

I have moved my final summary to my opening post in this thread. Addition to the final summary: I have tested those files again in segmenting mode, Big Muscle. This time they worked without any problems - the list of users online sharing those files has changed a lot. My theory, which I didn't share earlier because I thought it was not probable but which has proven to be quite probable: - The problem is on the transmission side, therefore the problem is not indefinite but lasts only as long as the problem transmitting user(s) is/are online. - A transmitting user is sharing a corrupt copy. - Or the transmitting user's copy is not corrupt, but somehow the transmission of the chunk is being corrupted ad infinitum - a problem with their PC or client. - Or the version of the transmitting user's client has problems interacting properly with StrongDC++, causing said problem. Also an observation I forgot to mention: when I tried to redownload the problem files, the percentage of completion at which the file would encounter the repeated chunk download failure was somewhat random, though sometimes at the same percentage twice or more in a row, which left me with another "hunch" that it was transmission side. Hence my current thinking is that this is a problem fundamentally outside the control of the developers of StrongDC++/ApexDC++, due to it being on the transmitting side; therefore an implementation of problem handling is required to deal with the situation - problem handling is of course within the control of the StrongDC++/ApexDC++ developers.

The problem of corruption occurring is probably never going to disappear, one reason being the uncontrolled and diverse set of clients in the network. An individual or group needs to implement proper "problem handling" in DC++ clients, as has been done with other P2P applications such as Bittorrent. Why? To prevent resource and time wastage, thereby greatly enhancing the user's experience, productivity and satisfaction with the product - surely one of the ongoing goals for you devs. Some obvious examples of problem handling: spam filters, whitelists, prisons for criminals, refrigerators, food containers. Are you catching my drift yet, Big Muscle? If you encounter a problem that fundamentally cannot be fixed ("if you leave milk out long enough it sours") - you must handle it. Funny analogy (needs some work): if someone spams you with junk mail (the same chunk of the file ad infinitum) through the door flap in your back door, one way of "problem handling" would be to weld that door flap shut. Or in this case - pause the file (one of your door flaps). Of course you cannot fix the problem by assassinating the postmen (fixing all the relevant bugs in the transmitters' clients or machines) - you can only handle the problem.
Another theory: normal downloading versus segmenting. Segmenting will carry on downloading from as many users as it can, so it doesn't get a chance to bypass the online user(s) who are sending you the corrupt bits - clearly it does not bypass that/those user(s), because it downloads the same chunk over and over again from them and everyone else at the same time, making it impossible to tell which one is causing the problem just by looking at the transfers window. If the user(s) with the corrupt bits are online, your download is doomed (not going to complete) until they are offline. Does the segmenting technique allow corrupted bits to enter your copy (is this a possibility)? Perhaps if the user(s) with corruption were online at any stage during the download, your download is doomed until you delete it and start over while the user(s) with corruption is/are offline. More detailed version of the doomed-files (with segmenting) theory: I think this "bits of permanent doom" theory is not likely, unless of course there's a problem with the way the segmenting technique handles corrupt bits - maybe it allows corrupt bits into your copy, or it allows corrupt bits in but detects them and prevents completion forever - whereas normal mode blocks corrupt bits from entering and all is good. Currently, segmenting mode wastes bandwidth (fails to complete the download) until the problem users go offline, but perhaps they have already damaged parts of your copy and doomed your file forever (it won't complete) until the following conditions have been met: - You start the download from scratch and erase the temp file. - Users with corruption are not online at any time during the download process (you cannot receive any bits from them) or it will fail. Normal downloading is from one user at a time - if it detects a problem and fails just once, it will try another user; well, it will try all users, and maybe it will come to the same problem user again, but not every time - therefore the repeated chunk download problem is near nonexistent; it will only happen the few times it gets a slot from that/those problem user(s). Clearly the segmented downloading technique requires more built-in intelligence to work 100% of the time. To bypass the problem users it has to do something like this: detect that a chunk has been redownloaded 10 times; switch to single-source mode until the chunk has been completed successfully from a user with a clean copy; switch back to segmenting mode. If segmenting mode fails again, e.g. 5 times in total, then perhaps a large portion or the entire file that the problem user(s) is/are sharing is corrupt: switch to normal mode until the download is finished, or switch back to segmenting mode at arbitrary intervals to check whether the problem user(s) are offline yet. (A sketch of this fallback is included below as well.) The normal download process is much simpler - it has problem handling that works; it does pretty much "detect corruption and switch (to a different user)" - it bypasses the problem user(s) and the download completes successfully. I think this is my most likely theory to date, and also my best example of a problem handling technique. We have now established that the corruption is located in a portion of some users' copies (probably just 1 or 2 bad apples) and that corruption is being transmitted to the receiver, preventing the download from completing temporarily until they are offline (in a percentage of cases; part of the rest of the percentage could be caused by corruption that happened on the receiving side).
The certainty of the transmission-side problem comes from testing the same files many times and failing, then suddenly succeeding many times, whilst noticing that the online users sharing those files have changed a lot. "User should fix his problem on his own." - Just finished answering this. "Such workarounds will never be implemented." - This too (same thing really)... but again, in very short form: these are problem handling techniques to deal with a problem that is fundamentally unfixable. This problem (corruption) has had good, complete (or much closer to complete) packages of problem handling techniques implemented elsewhere, such as in Bittorrent. DC++ segmenting clients don't have a complete package of good problem handling techniques for corruption built for them yet (they are further away from completion), which of course results in a lot of bandwidth and time wastage. Out of all the segmenting tools I've used (eMule, Kademlia, Bittorrent, others less popular, HTTP), only in the DC++ landscape have I ever seen such blatant continuous waste resulting from a lack of problem handling techniques - afaik DC++ is the only place I've seen any waste at all. Of course there is waste in other tools, but it's small enough that it's not visible to the user; it doesn't affect the user in terms of bandwidth, or in terms of time - at least not in such an offensive way, by making them track down the corrupted files themselves through the manual labour of observing transfers. It is made very obvious in other tools if there is a problem with a particular file... which is hardly ever, since I can't even remember an instance off hand... that's the way it should be. I view DC++ as a single-segment-mode-only P2P application. It's a great (really great) filesharing tool in this mode because of the features, developments such as StrongDC++ and ApexDC++, and the communities that have formed around it. I believe that for single-segment mode it has a complete set of techniques with little waste (it detects corrupt chunks and redownloads from other user(s), bypassing the problem users) - but it turns into an unintelligent, wasteful beast the second you turn on segmenting mode (no ability to bypass problem users)... To me at least, the existing segmenting techniques are meaningless until the last mile of corruption handling techniques has been built... Close, but no cigar.
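To make the two ideas above concrete (the per-user rule from a few posts back and the single-source fallback from the "Another theory" part of this post), here is one last standalone C++ sketch. Everything in it is hypothetical - the class name and the thresholds of 10 failures per user, 5 failing users and 10 failed chunks are only the example numbers from my posts, not anything that exists in StrongDC++/ApexDC++:

```cpp
// Hypothetical sketch only. Per file: drop a source after repeated chunk
// failures, pause the file when many different sources fail, and fall back
// from segmented to single-source mode until a chunk completes cleanly again.
#include <iostream>
#include <map>
#include <set>
#include <string>

enum class DownloadMode { Segmented, SingleSource };

class ProblemHandler {
public:
    DownloadMode mode() const { return mode_; }
    bool paused() const { return paused_; }

    // Call when a chunk fails verification after being received from `source`.
    void onChunkFailed(const std::string& source) {
        if (++failuresPerSource_[source] >= kPerSourceLimit) {
            dropped_.insert(source);          // stop using this source for the file
            std::cout << "Dropping source " << source << " for this file.\n";
        }
        if (failuresPerSource_.size() >= kFailingSourceLimit) {
            paused_ = true;                   // too many sources failing: warn the user
            std::cout << "Chunk failed from " << failuresPerSource_.size()
                      << " users; pausing file, suggest switching to normal mode.\n";
        }
        if (mode_ == DownloadMode::Segmented && ++totalFailures_ >= kTotalFailureLimit) {
            mode_ = DownloadMode::SingleSource;  // bypass the problem source(s)
            std::cout << "Falling back to single-source mode for this file.\n";
        }
    }

    // Call when a chunk completes and verifies successfully.
    void onChunkCompleted() {
        totalFailures_ = 0;
        if (mode_ == DownloadMode::SingleSource) {
            mode_ = DownloadMode::Segmented;     // clean chunk: try segmenting again
            std::cout << "Clean chunk received; trying segmented mode again.\n";
        }
    }

private:
    static constexpr int kPerSourceLimit = 10;     // "same chunk fails from same user 10 times"
    static constexpr int kFailingSourceLimit = 5;  // "fails from e.g. 5 different users"
    static constexpr int kTotalFailureLimit = 10;  // "chunk has been redownloaded 10 times"

    DownloadMode mode_ = DownloadMode::Segmented;
    bool paused_ = false;
    int totalFailures_ = 0;
    std::map<std::string, int> failuresPerSource_;
    std::set<std::string> dropped_;
};
```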