void

Member
  1. 512/256 kbps. Our "DC network" is quite different: most users have almost equal (and relatively slow) connection speeds, so the per-slot download speed is typically 2..5 KB/s (when DC gets all the bandwidth), and I typically use 10..15 download slots to reach the optimum download speed. This way your hard-coded "slow user limit" becomes 1..1.5 KB/s, which obviously becomes a problem. And please do not say "you are a leecher because your upstream bandwidth is lower" - I will hit this problem with any upstream bandwidth, because my upstream does not change the upstream bandwidth of the other users. As for me, I am one of the most useful sources in our network ;-) It is even more useful to "prioritize" one file above the others when there are no fast sources for it, and this "slow users bug" breaks that. If you do not want to make this "feature" optional, you could make it behave differently for higher-priority downloads (a sketch of what I am asking for is below). And, BTW, removing "slow" alternate sources completely is not a very good idea IMHO - the download may freeze for a very long time if all of the remaining sources go offline.
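
     A sketch of the behaviour I am asking for (the names are mine, not the
     real StrongDC++ code - just an illustration of the two fixes above):

        #include <cstdint>

        // Keep the slow-source check, but make the threshold a user
        // setting and never apply it to high-priority downloads.
        bool shouldDropSlowSource(int64_t sourceBps, int64_t userLimitBps,
                                  bool highPriority) {
            if (highPriority)       // never starve a prioritized download
                return false;
            if (userLimitBps <= 0)  // 0 = feature disabled by the user
                return false;
            return sourceBps < userLimitBps;
        }
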
  2. To clarify: I don't care if lower-priority downloads have their slowest (or even all) sources removed, as long as they resume automatically when there are no higher-priority downloads or there is "extra" bandwidth. But I do care when higher-priority downloads are intentionally slowed down and most of the download bandwidth is taken by lower-priority stuff, which is what happens now.
  3. This feature description explains how it should work, but not how it works in practice. In practice I see the following: a lot of files in the download queue, plus one big file that I want to download first, so I set it to high priority. Due to this "feature", after a few minutes I typically have one active source for it - or zero, because that last source eventually goes offline. For the file I wanted to download before all the others. And whether this happens or not depends on some undocumented hard-coded constants. Oh, what a great feature. I very much doubt that it helps against leechers (while a limited number of upload slots works pretty well), but IMHO it really hurts users.
  4. ...this is why the uploader limits the number of available upload slots. Am I wrong? As it stands, this disconnecting of "slow users" for incomprehensible reasons looks like a bug and slows the download down a lot (often removing all sources but one, which can then go offline, effectively stopping the download altogether). This is really annoying.
  5. [Bug]Download queue wipe

     Maybe it would be wise to remove this check once you find that solution? Unfortunately we are not there yet.
  6. [Bug]Download queue wipe

     So when there are no downloads, Strong will not try to overwrite the .bak?
  7. [Bug]Download queue wipe

     ...yet. Unfortunately, this happened to me too. Only once in several months: an empty queue.xml, and the .bak did not help because by then it was already too late. I have a very simple idea for avoiding the most serious problems: do not back up queue.xml if there are no downloads in the queue (see the sketch below). What do you think?
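
     A minimal sketch of the guard I am suggesting (the function and the file
     handling are made up; the real StrongDC++ code is surely structured
     differently):

        #include <cstdio>
        #include <cstddef>

        // Rotate queue.xml -> queue.xml.bak before writing the new queue,
        // but only while the queue still has entries, so a spuriously
        // empty queue.xml can never replace the last good backup.
        void backupQueue(std::size_t itemsInQueue) {
            if (itemsInQueue == 0)
                return; // keep the old .bak - it may be the only good copy
            std::remove("queue.xml.bak");
            std::rename("queue.xml", "queue.xml.bak");
        }
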
  8. And even doing so would not help in all cases. Modern ATA drives keep unsaved data in their own write cache even when the OS thinks everything has already been written. And, of course, the cache contents may be flushed (or not flushed) out of order... See the snippet below for what an application can even request.
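
     For reference, this is roughly all an application can ask for on Windows
     (the function names are real Win32 APIs; the wrapper and file name are
     just an example) - and, as said above, even this may not get past the
     drive's own write cache on all hardware:

        #include <windows.h>

        // Open with write-through semantics, write, then explicitly ask
        // the OS (and, hopefully, the drive) to flush pending data.
        void saveDurably(const char* path, const void* data, DWORD len) {
            HANDLE h = CreateFileA(path, GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
                                   FILE_FLAG_WRITE_THROUGH, NULL);
            if (h == INVALID_HANDLE_VALUE)
                return;
            DWORD written = 0;
            WriteFile(h, data, len, &written, NULL);
            FlushFileBuffers(h); // request a real flush, not a lazy write
            CloseHandle(h);
        }
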
  9. Please read the Windows NT kernel documentation; then you would know the meaning of the numbers shown in Task Manager. I do. Do you really think that "dirty" data in the Windows NT disk cache is kept for several tens of hours?
  10. Do you really think that my disk cache is 500 MB?
  11. Anyone can disable this feature. Even in 1.00RC10 it was disabled by default. So what is the problem?
  12. No. Open Notepad, type 2M of spaces, save the file. Overwrite the first half of the file with "z", save the file. Overwrite the second half with "z" and simulate a power failure. Of course you will get a file with only 1M of "z". The problem is that sometimes StrongDC++ decides that we already have 2M of "z" - and now it doesn't even bother to check whether this is true.
  13. But StrongDC++ can help avoid such data corruption problems even in the case of a power failure etc. - in real-world situations. I can't understand why you are objecting. Almost every other P2P client has this feature because it is useful. And, I repeat, even in 1.xx it was optional.
  14. Then please explain how it is possible to get a ~500 MB zero-filled gap in a ~3 GB file. The temp file was preallocated, of course.
  15. Of course you are not. But the problem is that it is not a hardware failure that causes the broken files. Of course, in the case of a hardware malfunction (typically broken RAM) you will often get broken files. But every version of StrongDC++ I have encountered produced broken files when it was not shut down correctly (power failure, BSOD etc.), although 1.xx detected this during the recheck and "repaired" the damaged parts by redownloading them. And the "broken" data was actually zero-filled - up to several hundred megabytes in the case of large files. That is, I assume, blocks of the file were considered downloaded while they were not, and obviously this can't happen as a result of a non-flushed disk cache. Please don't think that I have malfunctioning hardware - for example, I have used eMule and Azureus for several years (and several TB). Both of them have a recheck function and I keep it enabled, and I have never yet seen that recheck fail. But in StrongDC++ it fails often. So I can't understand why it was necessary to remove such a useful feature in StrongDC++ 2.xx. Even if you fix all the problems that currently result in incomplete files after an improper shutdown (and they are, unfortunately, not fixed yet), there are other reasons to keep it. For example, a hash failure is a good indication that "something is wrong" (whether in software or hardware). I have also often used StrongDC++ (like eMule and Azureus in other networks) to repair broken files (whatever the reason for the damage was). Everything was fine in 1.xx: an already existing temp file was considered complete, was rechecked, and the damaged blocks were redownloaded. This very useful feature (for advanced users, of course) was broken in StrongDC++ 2.xx. I would be grateful if you would restore these features - anyone who thinks they are useless can disable them, yes? A sketch of the recheck I mean follows below.
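
     To illustrate the recheck I mean (only a sketch with made-up names; the
     real StrongDC++ verification is based on the file's TTH tree, and a
     simple stand-in hash is used here just to keep the example self-contained):

        #include <cstdint>
        #include <cstddef>
        #include <vector>

        // Stand-in per-block hash (FNV-1a); a real client would compare
        // against the TTH tree's leaf hashes instead.
        static uint64_t hashBlock(const uint8_t* data, std::size_t len) {
            uint64_t h = 14695981039346656037ULL;
            for (std::size_t i = 0; i < len; ++i) {
                h ^= data[i];
                h *= 1099511628211ULL;
            }
            return h;
        }

        // Recheck an existing temp file block by block: only blocks whose
        // hash matches are kept as "downloaded"; failed blocks (e.g. the
        // zero-filled gaps described above) are requeued for download.
        std::vector<bool> recheckTempFile(const std::vector<std::vector<uint8_t>>& blocks,
                                          const std::vector<uint64_t>& expectedHashes) {
            std::vector<bool> downloaded(blocks.size(), false);
            for (std::size_t i = 0; i < blocks.size(); ++i)
                downloaded[i] = hashBlock(blocks[i].data(), blocks[i].size()) == expectedHashes[i];
            return downloaded;
        }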