Lee

Released: ApexDC++ 1.3.6


Bring on the speed! ApexDC++ 1.3.6 changes the way in which segments are downloaded. The new randomised approach allows new files to be distributed better across a group of ApexDC++ downloaders. Ultimately, this will provide better speeds than other clients.

Other improvements include a new Plugin API update - so make sure you grab the new plugins - and a minor feature addition.
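
For anyone wondering what "randomised" means in practice, here is a minimal sketch of the idea. It is only an illustration, not the actual ApexDC++ source; the SegmentMap structure and the function names are invented for the example:

#include <cstdint>
#include <random>
#include <vector>

// Hypothetical per-file bookkeeping: one flag per fixed-size segment.
struct SegmentMap {
    std::vector<bool> done;   // already downloaded
    std::vector<bool> busy;   // currently being requested
};

// Old behaviour (roughly): take the first segment nobody is working on.
int pickSequential(const SegmentMap& m) {
    for (size_t i = 0; i < m.done.size(); ++i)
        if (!m.done[i] && !m.busy[i])
            return static_cast<int>(i);
    return -1; // nothing left to request
}

// 1.3.6-style behaviour (roughly): pick a random free segment, so that
// different downloaders end up holding different parts of the file.
int pickRandom(const SegmentMap& m, std::mt19937& rng) {
    std::vector<int> free;
    for (size_t i = 0; i < m.done.size(); ++i)
        if (!m.done[i] && !m.busy[i])
            free.push_back(static_cast<int>(i));
    if (free.empty())
        return -1;
    std::uniform_int_distribution<size_t> pick(0, free.size() - 1);
    return free[pick(rng)];
}

With pickSequential every downloader asks the full source for the same leading chunks; with pickRandom their chunks diverge, which is what lets them trade pieces with each other.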

Download: ApexDC++ 1.3.6



this will provide better speeds than other clients.

No, it won't. Everything will be slowed down, especially on very fast connections. Yes, it will be very fast at the beginning, but since the downloaded segments have no arrangement, sooner or later you will end up with a semi-downloaded file containing a lot of small undownloaded segments (instead of a few bigger ones, as in the normal case). And what happens then? When your download speed is e.g. 10 MB/s and the segment size is e.g. 1 MB, your speed will degrade to approx. 1 MB/s, just because randomization completely breaks the dynamic segment sizing that was introduced in the past to dramatically improve download speed on fast connections. And the more bytes downloaded, the worse it gets.
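
To show what I mean by dynamic segment size (this is only a sketch of the general idea, not the real client code, and the constants are made up):

#include <algorithm>
#include <cstdint>

// Rough idea of speed-based segment sizing: ask a source for roughly what
// it can deliver in a fixed time window, clamped to sane bounds.
// All values here are illustrative, not ApexDC++'s actual numbers.
int64_t dynamicSegmentSize(int64_t bytesPerSecond) {
    const int64_t kMinSegment    = 1  * 1024 * 1024;   // 1 MiB floor
    const int64_t kMaxSegment    = 64 * 1024 * 1024;   // 64 MiB ceiling
    const int64_t kTargetSeconds = 15;                  // aim for ~15 s per request
    return std::clamp(bytesPerSecond * kTargetSeconds, kMinSegment, kMaxSegment);
}

A 10 MB/s source gets asked for one large continuous block (capped at 64 MiB in this sketch), while randomization over fixed 1 MB chunks keeps every request tiny, which is exactly the slowdown described above.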


No, it won't. Everything will be slowed down, especially on very fast connections. Yes, it will be very fast at the beginning, but since the downloaded segments have no arrangement, sooner or later you will end up with a semi-downloaded file containing a lot of small undownloaded segments (instead of a few bigger ones, as in the normal case). And what happens then? When your download speed is e.g. 10 MB/s and the segment size is e.g. 1 MB, your speed will degrade to approx. 1 MB/s, just because randomization completely breaks the dynamic segment sizing that was introduced in the past to dramatically improve download speed on fast connections. And the more bytes downloaded, the worse it gets.

I won't say anything about speed, but what it does do is allow users to share files more effectively. We ran a test on this: with a full source uploading a file for ten minutes and then leaving, we (four people) were able to continue downloading for about 90 minutes afterwards without any full source for the file being present. That was not possible before, since everyone got very much the same chunks from the full source.

The point is, before you would never finish a file without a full source, but now that is possible. Ideally, of course, each downloader would ask for different segments from the full source to allow more effective partial sharing; however, that requires the info about currently downloading/requested chunks to be passed among (partial) peers. That way they would know which segments will soon have alternate sources besides the full source, and could ask the full source for the "rarest" segments, as in the sketch below.
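
Something along these lines, assuming peers exchanged a simple per-segment availability count (this is not in 1.3.6, it is just the suggestion above written out):

#include <limits>
#include <vector>

// Hypothetical availability info gathered from partial peers:
// availability[i] = how many known sources already hold segment i.
int pickRarest(const std::vector<int>& availability,
               const std::vector<bool>& done,
               const std::vector<bool>& busy) {
    int best = -1;
    int bestCount = std::numeric_limits<int>::max();
    for (size_t i = 0; i < availability.size(); ++i) {
        if (done[i] || busy[i])
            continue;
        if (availability[i] < bestCount) {
            bestCount = availability[i];
            best = static_cast<int>(i);
        }
    }
    return best; // ask the full source for the segment fewest peers have
}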

This might slow down the speed at which a single user downloads the file, but on average it will result in more users having the complete file sooner, because they are less dependent on full sources' upstream and slots. That in turn produces more full sources for the file faster.

I am by no means the right person to be making these comments, since I am anything but familiar with this part of the code. I am simply applying common sense here, combined with the fact that most users probably do not have connections that give them 10 MB/s downloads in a WAN environment, nor will the sources have the upstream to provide that.

When everyone is connected to the net over fiber-optic connections, it will be a different story.

Edit: as far as I am aware, we will soon have detailed info on the outcome of this change from a real live environment, with several users downloading large files from each other, and I am sure the capabilities of users' connections will have more than enough variety. If the change ends up having more negative side effects than positive ones, it is easy enough to revert.


It's true that it lets downloaded segments spread faster over the network and increases the chance of downloading a complete file when only partial sources are available.

The code is just my guess. On Friday, Lee asked me how it could be done, so I just spat out this code, although I had never tested it before. Today I analyzed the code more and ran some tests, and it is not as usable as it seems. Many users probably won't notice any problems, because they simply can't know that the speed could be better. Current segment downloading (in contrast to the old RevConnect code) is designed so that faster sources get larger segments, and the randomization completely breaks this intention. The result is lower download speed (and I'm not talking about a small difference like 400 kB/s vs 300 kB/s, but about 10 MB/s vs 1 MB/s) and much more stress on the HDD (because fast sources will download e.g. 50 random small segments instead of one large continuous block). And the last thing is that it breaks the preview function.

My conclusion is that the intention is good, but it must be done in a completely different way than just randomization. It must take into account all possible factors, especially the dynamic segment size.


I can only say here that the preview function is easily fixable by prioritizing the beginning and end of the file at the start of a download. That should allow most players capable of playing back incomplete files to work... in some cases probably even better than when just getting segments from beginning to end, depending on the file format.
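
Roughly along these lines (my own sketch of the idea, not an actual patch): serve the first and last segments before falling back to whatever picker is normally used, since most container formats keep their headers or index at one end of the file.

#include <vector>

// Hypothetical wrapper: fetch the first and last segments of the file
// before handing control back to the normal segment picker.
int pickForPreview(const std::vector<bool>& done,
                   const std::vector<bool>& busy,
                   int fallbackPick) {
    if (done.empty())
        return fallbackPick;
    const int first = 0;
    const int last  = static_cast<int>(done.size()) - 1;
    if (!done[first] && !busy[first])
        return first;
    if (!done[last] && !busy[last])
        return last;
    return fallbackPick; // normal (random/dynamic) selection afterwards
}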


Can we at the very least have this as an option instead of it being forced on us?

I want to go back to using 1.3.5 simply because of this new feature.

This new feature is fine if you have a superfast connection and can keep 20 slots open at the same time thanks to a fast upload speed.

But try using this new feature in Australia and you will see that uploading a file now takes twice as long as it used to.


Can we at the very least have this as an option instead of it being forced on us?

I want to go back to using 1.3.5 simply because of this new feature.

This new feature is fine if you have a superfast connection and can keep 20 slots open at the same time thanks to a fast upload speed.

But try using this new feature in Australia and you will see that uploading a file now takes twice as long as it used to.

It will probably become an option in some form eventually, yes... As for uploading taking longer: yes, it can happen at times, but not always. What's in 1.3.6 is just the first step, so the behaviour will be tweaked to be more efficient across the board.


But try using this new feature in Australia and you will see that uploading a file now takes twice as long as it used to.

Welcome Mutant :)

Why does it take twice as long? If you are uploading to more than one user, it will help slow connections, because you are giving random, unique chunks to different users. Those users can then upload to each other without you even being connected. If you have a 1 GB file to upload, then after uploading about 1.1 GB to the users, they have all the chunks they need to finish the file between themselves without you connected. This removes the problem of having to upload the 1 GB file to User B, then User C, then User D sequentially (totalling 3 GB).
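
A back-of-the-envelope version of that arithmetic (the ~10% duplicate-chunk overhead is only an assumption for the example):

#include <cstdio>

int main() {
    const double fileGB      = 1.0;  // size of the file being seeded
    const int    downloaders = 3;    // User B, C and D from the example
    const double overlap     = 0.1;  // assumed duplicate-chunk overhead (~10%)

    // Old behaviour: every downloader pulls the whole file from you.
    const double sequentialGB = fileGB * downloaders;      // 3.0 GB uploaded

    // Randomised unique chunks: you upload roughly one copy plus a small
    // overlap, and the downloaders trade the remaining pieces among themselves.
    const double randomisedGB = fileGB * (1.0 + overlap);  // ~1.1 GB uploaded

    std::printf("sequential: %.1f GB, randomised: ~%.1f GB\n",
                sequentialGB, randomisedGB);
    return 0;
}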

An option would only take effect for the users requesting chunks off you, not if you disabled it yourself.

..just because randomization completely breaks the dynamic segment sizing that was introduced in the past to dramatically improve download speed on fast connections. And the more bytes downloaded, the worse it gets.

Dynamic segment sizing based on speed still works; we have tested it numerous times. It will change, though, because it doesn't work efficiently: if a fast user holds a large chunk, all the other partial users sit waiting until smaller random chunks become available, and it takes longer for the whole swarm to finish the file. We have also prioritised the first and last segments for video files for the next release, so users can preview a file and confirm it isn't fake before completing the download.

During our tests we had users on very fast connections... we couldn't max out their upload speed. Nobody reported any drop in download speed towards the end of the file.

We will test this next change before release. I suggest you join in and realise the potential, especially over DHT with SDC <-> Apex users.
