
Posts posted by void


  1. what connection type do you have???

    512/256 kbps

    On my 5MB it doesn't slow down anything and only brings the faster downloading and more free slots in the network.

    Our "DC network" is quite different. Most users have almost equal (and relatively slow) connection speed, so "per-slot" download speed is typically 2..5 KB/s (when DC receives all bandwidth), so I typically use 10..15 download slots to get optimum download speed. This way your hard-coded "slow user limit" becomes 1..1.5 KB/s, which, obviously, becomes a problem.

    Oh, please do not say "you are a leecher because your upstream bandwidth is lower". I would hit this problem with any upstream bandwidth, because my upstream does not change the upstream bandwidth of the other users. As for me, I am one of the most useful sources in our network ;-)

    Segmented downloading is there so that a fast connection can download from more fast users, not so that it occupies every single slot in the network to no effect.

    It is even more useful to prioritize one file above the others precisely when there are no fast sources for it, and this "slow users" bug breaks exactly that.

    If you do not want to make this "feature" optional, you could at least make it behave differently for higher-priority downloads. And, BTW, removing "slow" alternate sources completely is not a very good idea IMHO: the download may freeze for a very long time if all of the remaining sources go offline.
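
    To put rough numbers on this, here is the arithmetic as a tiny illustrative program; every figure in it is an assumption taken from this post, not a StrongDC++ internal:

        // Illustrative only: per-segment throughput on a slow, symmetric network.
        #include <cstdio>

        int main() {
            const double downstreamKBps = 512.0 / 8.0;  // a 512 kbps line is about 64 KB/s
            const int    segments       = 12;           // 10-15 parallel download slots
            const double perSegment     = downstreamKBps / segments;

            // Even with the line fully saturated, each segment carries only ~5 KB/s,
            // so a cut-off of 1-1.5 KB/s sits within the normal jitter of a healthy source.
            std::printf("per-segment speed at full line utilisation: %.1f KB/s\n", perSegment);
            return 0;
        }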


  2. To clarify: I don't care if lower-priority downloads have their slowest (or even all) sources removed, as long as they resume automatically once there are no higher-priority downloads or there is spare bandwidth. But I do care when higher-priority downloads are intentionally slowed down and most of the download bandwidth is taken by lower-priority stuff, which is what happens now.


  3. RevConnect has this feature too:

    and it doesn't screw your downloads, it only disconnects very slow sources if you are using segmented downloading. But I understand that leechers want to use every slot which they can, so there won't be any free slots for other user.

    That description explains how the feature should work, not how it works in practice. In practice I see the following: a lot of files in the download queue and one big file that I want first, so I set it to high priority. Thanks to this "feature", after a few minutes I am typically left with one active source, or zero once that source eventually goes offline, for the very file I wanted before everything else. And whether this happens depends on some undocumented hard-coded constants. Oh, what a great feature.

    I very much doubt that this feature helps against leechers (limiting the number of upload slots already works pretty well for that), but IMHO it really does hurt ordinary users.


  4. :) it allows leeching by occupying slow segments and eating too many slots

    ...which is exactly why the uploader limits the number of available upload slots. Am I wrong?

    As it stands, this kind of "slow user" disconnection for incomprehensible reasons looks like a bug and slows the download a lot: it often removes all sources but one, and when that one goes offline the download effectively stops altogether. This is really annoying (a sketch of a possible safeguard follows below).
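
    For illustration, a minimal sketch of the safeguard I have in mind; all names are hypothetical, this is not StrongDC++ code. The rule is simply: skip slow-source dropping for high-priority downloads and never strip a download down to its last sources.

        #include <cstddef>
        #include <vector>

        struct Source   { double speedKBps = 0.0; };
        struct Download { int priority = 0; std::vector<Source*> activeSources; };

        constexpr int         HIGH_PRIORITY      = 4;   // assumed priority scale
        constexpr std::size_t MIN_ACTIVE_SOURCES = 2;   // keep at least this many running
        constexpr double      SLOW_LIMIT_KBPS    = 1.0; // assumed "slow source" threshold

        bool mayDropSlowSource(const Download& d, const Source& s) {
            if (d.priority >= HIGH_PRIORITY)                  return false; // the user asked for this file first
            if (d.activeSources.size() <= MIN_ACTIVE_SOURCES) return false; // never strand the download
            return s.speedKBps < SLOW_LIMIT_KBPS;
        }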


  5. But I don't respect any post-checking whether current file is correct :D There must exist solution to have this file always correct and avoid damaging.

    Maybe it would be wise to remove this check once you actually find that solution? Unfortunately, we are not there yet.


  6. I always shutdown my PC immediately after closing StrongDC++ and it has never happened to me :D

    ...yet.

    Unfortunately, this did happen to me too. Only once in several months, but the result was an empty queue.xml, and the .bak did not help because by then it was already too late.

    I have a very simple idea for avoiding the most serious consequence: do not make a backup of queue.xml if there are no downloads in the queue. What do you think? (A rough sketch is below.)
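
    A minimal sketch of what I mean, with hypothetical names (this is not the real save path, just the idea): before the existing copy of queue.xml to queue.xml.bak, skip the copy when the queue about to be written contains no downloads, so a truncated queue.xml can never overwrite a good backup.

        #include <cstddef>
        #include <filesystem>
        #include <system_error>

        namespace fs = std::filesystem;

        void backupQueue(const fs::path& queueFile, std::size_t queuedDownloads) {
            if (queuedDownloads == 0)
                return;                               // nothing worth backing up; keep the old .bak
            fs::path bak = queueFile;
            bak += ".bak";
            std::error_code ec;                       // never throw from the save path
            fs::copy_file(queueFile, bak, fs::copy_options::overwrite_existing, ec);
        }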


  7. btw I found a workaround for this problem. It's flushing data using FlushFileBuffers before information about verified block is stored to queue.xml. But this can be too time consuming and can decrease overall download speed.

    And even doing that would not help in all cases. Modern ATA drives keep unwritten data in their own cache even when the OS thinks everything has already been written, and of course that cache gets flushed (or lost) out of order... (A sketch of the ordering you describe is below, for reference.)
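
    For reference, a minimal Win32 sketch of the ordering that workaround describes; apart from the API calls themselves, the names here are hypothetical and not StrongDC++ code:

        #include <windows.h>

        // Hypothetical stand-in for the real "write verified-block info to queue.xml" step.
        void recordVerifiedBlockInQueueXml(int /*blockIndex*/) { /* ...write queue.xml... */ }

        bool commitVerifiedBlock(HANDLE tempFile, int blockIndex) {
            // 1. Push everything buffered for the temp file down through the OS cache.
            if (!FlushFileBuffers(tempFile))
                return false;                     // leave the block unverified and retry later
            // 2. Only now persist the "block is verified" metadata to queue.xml.
            recordVerifiedBlockInQueueXml(blockIndex);
            return true;
        }

    This narrows the window rather than closes it: as noted above, the drive's own write cache still sits outside it (opening the temp file with FILE_FLAG_WRITE_THROUGH, or disabling the drive write cache, would be needed to address that).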


  8. just open taskmanager and you will see how many memory is used for cache

    Please read the Windows NT kernel documentation; then you would know what the numbers shown in Task Manager actually mean. I do.

    Do you really think that "dirty" data in the Windows NT disk cache stays there for tens of hours?


  9. open Notepad, type something in there and then simulate power failure. The written text will be lost. It's same situation.

    No.

    Open Notepad, type 2 MB of spaces and save the file. Overwrite the first half with "z" and save again. Overwrite the second half with "z" and simulate a power failure. Of course you end up with a file containing only 1 MB of "z". The problem is that StrongDC++ sometimes decides we already have 2 MB of "z", and now it doesn't even bother to check whether that is true.


  10. by "hardware failures" I meant what you said (=power failure, BSOD etc.) and data lost in this case has nothing to do with dc++.

    But StrongDC++ can help avoid such data corruption even in the case of a power failure and similar real-world situations. I can't understand why you are objecting: almost every other P2P client has this feature because it is useful. And, I repeat, even in 1.xx it was optional.


  11. No it won't be restored. You should understand that StrongDC++ check its own activity - correctness of downloaded data and correctness of saved data. It doesn't and won't check file integrity for situation which has nothing to do with StrongDC++ (hardware failure etc.)

    Then please explain how it is possible to get a ~500 MB zero-filled gap in a ~3 GB file. The temp file was preallocated, of course.


  12. We are not responsible for your hardware failure.

    Of course you are not. But the problem is that it is not a hardware failure that causes the broken files. Yes, in the case of a hardware malfunction (typically broken RAM) you will often get broken files. But every version of StrongDC++ I have used produced broken files after an unclean shutdown (power failure, BSOD etc.); the difference is that 1.xx detected this during the recheck and "repaired" the damaged parts by redownloading them. The "broken" data was actually zero-filled, up to several hundred megabytes in the case of large files. That is, I assume, blocks of the file were considered downloaded while they were not. And, obviously, that much cannot be the result of an unflushed disk cache.

    Please don't assume my hardware is malfunctioning: I have used eMule and Azureus for several years (and several TB), both have a recheck function which I keep enabled, and I have never yet seen that recheck fail. In StrongDC++ it happens often.

    So I can't understand why it was necessary to remove such a useful feature in StrongDC++ 2.xx. Even if you eventually fix all the problems that currently produce incomplete files after an improper shutdown (and, unfortunately, they are not fixed yet), there are other reasons to keep it. For example, a hash failure is a good indication that "something is wrong", whether in software or hardware. I have also often used StrongDC++ (like eMule and Azureus on other networks) to repair broken files, whatever the cause of the damage. Everything was fine in 1.xx: an already existing temp file was treated as complete, rechecked, and the damaged blocks were redownloaded (roughly as in the sketch at the end of this post). This very useful (for advanced users, of course) feature is broken in StrongDC++ 2.xx.

    I would be grateful if you would restore these features. Anyone who thinks they are useless can simply disable them, yes?
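
    A minimal sketch of the 1.xx-style recheck I am asking for, with hypothetical names (StrongDC++ itself verifies blocks against TTH leaves; a placeholder hash stands in here so the sketch is self-contained): walk the blocks the queue claims are complete, rehash them, and put any mismatching blocks, such as a zero-filled gap, back into the download queue.

        #include <cstddef>
        #include <cstdint>
        #include <fstream>
        #include <string>
        #include <vector>

        struct Block {
            std::uint64_t offset = 0;
            std::uint64_t size   = 0;
            std::string   expectedHash;
            bool          complete = false;
        };

        // Placeholder hash (FNV-1a) so the sketch compiles on its own;
        // the real client would compare against stored TTH leaf values.
        std::string hashBytes(const std::vector<char>& data) {
            std::uint64_t h = 1469598103934665603ULL;
            for (char c : data) { h ^= static_cast<unsigned char>(c); h *= 1099511628211ULL; }
            return std::to_string(h);
        }

        // Recheck a temp file: any "complete" block whose data no longer matches its
        // stored hash (for example a zero-filled gap after a crash) goes back to the queue.
        std::size_t recheckTempFile(const std::string& tempPath, std::vector<Block>& blocks) {
            std::ifstream in(tempPath, std::ios::binary);
            std::size_t damaged = 0;
            for (Block& b : blocks) {
                if (!b.complete) continue;
                std::vector<char> buf(static_cast<std::size_t>(b.size));
                in.clear();
                in.seekg(static_cast<std::streamoff>(b.offset));
                in.read(buf.data(), static_cast<std::streamsize>(buf.size()));
                if (!in || hashBytes(buf) != b.expectedHash) {
                    b.complete = false;           // schedule this block for redownload
                    ++damaged;
                }
            }
            return damaged;
        }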