Lee

Released: ApexDC++ 0.2.0 (Preview 2)

Just over a week after the release of 0.1.0, and 10,000 downloads later, we bring you our latest milestone release, 0.2.0 (Preview 2). This new preview brings many features to users, alongside fixes, changes and GUI improvements. Thanks to everyone who has already provided feedback on our first release; keep it coming!

As usual, check out our changelog for the full details. The tour page has been updated for this new release but, most importantly, jump straight to the download page!

Preview 1 users can download and install the setup file to upgrade without losing settings, queues, etc.

I'm sorry to have to say this, but I'm going to ban ApexDC in our hubs, because I don't want our clients to be spammed over the UDP interface.

I'm sorry to have to say this, but I'm going to ban ApexDC in our hubs, because I don't want our clients to be spammed over the UDP interface.

That is entirely your choice: ban 0.2.0, and any future versions that have the feature. But remember that without sending UDP requests, our super seeding feature wouldn't be as efficient as it is now. And we're all willing to take that risk, since the spam isn't heavy.

Efficient? I don't think that sending $PSR to every partial source every second counts as efficient. With many such clients in a hub, it will completely saturate the partial sources' UDP interfaces, and that's not good.

Efficient? I don't think that sending $PSR to every partial source every second counts as efficient. With many such clients in a hub, it will completely saturate the partial sources' UDP interfaces, and that's not good.

Efficient as in seeding rare chunks to users instead of popular ones. Without the "spam", users wouldn't know which chunks of the file are rarest, and would have to guess with inaccurate data. The spam has been reduced, and isn't sent every second... stop exaggerating. ;) I'd prefer it if we kept this discussion private, too.
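The rarest-first idea being described here can be sketched in a few lines. This is a minimal illustration with hypothetical names, not the actual ApexDC++ code: given a count of how many peers are known to hold each chunk, pick the least-available one first.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Pick the chunk with the fewest known holders ("rarest first").
// availability[i] = number of peers known to have chunk i.
// Assumes availability is non-empty.
std::size_t pickRarestChunk(const std::vector<int>& availability) {
    return static_cast<std::size_t>(
        std::min_element(availability.begin(), availability.end())
        - availability.begin());
}
```

The point of the $PSR exchange is precisely to keep these availability counts accurate; without it, the counts are stale guesses.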

Efficient as in seeding rare chunks to users instead of popular ones. Without the "spam", users wouldn't know which chunks of the file are rarest, and would have to guess with inaccurate data. The spam has been reduced, and isn't sent every second... stop exaggerating. ;) I'd prefer it if we kept this discussion private, too.

Version 0.2.0 sends it after each finished chunk, so on a LAN it would be sent every second. And as I said: with many such users in the hub, partial sources won't be able to search for anything, because their UDP interface will be full of $PSR commands.

Version 0.2.0 sends it after each finished chunk, so on a LAN it would be sent every second. And as I said: with many such users in the hub, partial sources won't be able to search for anything, because their UDP interface will be full of $PSR commands.

Lee, BM has a valid point here, and you can't get around it no matter what you say.

There's no point arguing about this, Lee. You say the UDP traffic improves speeds; maybe that's true, I don't know. But it isn't worth it, as sending $PSR every 5 minutes or so should be quite enough.

What I don't understand is why you use the example of a fast LAN to demonstrate what you don't like about the UDP spam. Surely you realise that the amount of UDP spam will be proportional to your connection speed, and the issues are identical at whatever speed.

The only situation in which the UDP spam would become unmanageable is if there were very many people in the partial sharing group; that would mean sending UDP packets to so many people that it is no longer useful. A crude way to solve this would be to limit the number of peers you send packets to. I would be happy to implement a sensible system for that if it would make you happier.

It should be done like in RevConnect: a partial request is sent every 5 minutes, and at most 5 times to each partial source. It would then need a further improvement to send it only when there are many new chunks (and not just one!).

You should also realize what happens when there are a lot of partial sources:

chunk - send PSR - chunk - send PSR - chunk - send PSR, etc.

Such a situation will decrease download speed, because $PSR must be sent to all partial sources between chunks.

I don't know how to do threading, but if I did, I would be happy to send the PSRs in another thread, so the download doesn't have to wait even the tiny amount of time it takes to send them before starting the next chunk.
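As a rough sketch of what that could look like (hypothetical names, and modern C++ threading rather than anything in the 2006 codebase; the counter stands in for the real sendPSR so the effect is observable):

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Stand-in for the real sendPSR(); counts sends so tests can observe it.
std::atomic<int> psrsSent{0};
void sendPSR(int /*peerId*/) { ++psrsSent; }

// Fire the whole PSR burst on a worker thread so the download loop can
// start the next chunk immediately instead of blocking on the sends.
std::thread sendPSRsAsync(std::vector<int> peerIds) {
    return std::thread([peerIds] {
        for (int id : peerIds) sendPSR(id);
    });
}
```

The caller would join (or detach) the returned thread at a convenient point; the key property is that chunk selection no longer waits on the UDP sends.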

"partial request is sent every 5 minutes and maximally 5x to each partial source" no way that's gonna happen I'm afraid.

Any suggestions to reduce the disadvantages without reducing the advantages will be gratefully received, but there is no arguing with the fact that sending a (max) 1 kB UDP packet in order to save downloading the wrong (min) 1 MB chunk is well worth it, even if you send 50 which don't make a difference. If you can think of a clever way to reduce the number sent by knowing in advance whether they'll make a difference, that'd be great, but I can't think of one.
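The arithmetic behind that claim, written out (the 1 kB and 1 MB figures are the bounds stated above, taken as assumptions):

```cpp
// Back-of-the-envelope cost model: a $PSR packet is at most ~1 kB,
// a chunk is at least 1 MB (figures from the post, not measured).
const int kPsrBytes   = 1024;         // max size of one $PSR
const int kChunkBytes = 1024 * 1024;  // min size of one chunk

// True while the bytes spent on "wasted" PSRs are still cheaper than
// downloading a single wrong chunk.
bool psrOverheadWorthIt(int wastedPsrs) {
    return wastedPsrs * kPsrBytes < kChunkBytes;
}
```

By this model, 50 wasted packets cost 50 kB against a potential 1 MB saving; the overhead only stops paying off past roughly a thousand useless packets per chunk avoided.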

But the patch isn't so difficult to make:

--- FileChunksInfo.h	Sat Aug 05 16:15:28 2006
+++ FileChunksInfo.h.patch	Sun Aug 13 11:27:10 2006
@@ -84,13 +84,14 @@
 public:
 	PartialPeer() { }
 	PartialPeer(const string& pMyNick, const string& pHubIpPort, const string& pTth, const string& pAddress, u_int16_t pUdpPort) : 
-	  myNick(pMyNick), hubIpPort(pHubIpPort), tth(pTth), address(pAddress), udpPort(pUdpPort) { }
+	  myNick(pMyNick), hubIpPort(pHubIpPort), tth(pTth), address(pAddress), udpPort(pUdpPort), nextQueryTime(0) { }
 
 	GETSET(string, myNick, MyNick);
 	GETSET(string, hubIpPort, HubIpPort);
 	GETSET(string, tth, Tth);
 	GETSET(string, address, Address);
 	GETSET(u_int16_t, udpPort, UdpPort);
+	GETSET(u_int32_t, nextQueryTime, NextQueryTime);
 
 	string getDump()
 	{
--- FileChunksInfo.cpp	Wed Aug 09 00:08:52 2006
+++ FileChunksInfo.cpp.patch	Sun Aug 13 11:29:32 2006
@@ -975,7 +975,10 @@
 	Lock l(cs);
 
 	for(vector<PartialPeer*>::iterator i = partialPeers.begin(); i != partialPeers.end(); i++) {
-		sendPSR(*i, false);
+		if((*i)->getNextQueryTime() < GET_TICK()) {
+			sendPSR(*i, false);
+			(*i)->setNextQueryTime(GET_TICK() + 300000);
+		}
 	}
 }

I think you should also add an option to turn off super seeding.

I think you should also add an option to turn off super seeding.

No. Just like partial sharing won't be disabled. :D

But why not? I don't think that choosing between two algorithms is a *very* hard task for an experienced programmer. :D

No, exactly: the new algorithm has been universally agreed to be better (even BigMuscle agrees, with the caveat he makes clear above), so it has been selected.

BigMuscle: OK, how about we put in that patch, but instead of hardcoding the 300000, make it a setting? It can default to 5 minutes in StrongDC; I'll give in and let it default to maybe 30 seconds in ApexDC. Then anyone who specifically wants to change it, because they receive *gasp* a couple more UDP packets than they'd like, can. Hopefully that's a compromise good enough for everyone, and no one will have to talk about banning clients anymore.

I think a better defence against unnecessary UDP would be to cap the size of the partialPeers vector (at around 20, say) and drop old peers as new ones arrive. That way, in really large groups, people who have already finished no longer get packets.
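A minimal sketch of that capped-list idea, as a simplified stand-in for the real storePartialPeer (container choice and names are illustrative, not from the codebase):

```cpp
#include <cstddef>
#include <deque>
#include <string>

// Cap suggested in the discussion: keep at most 20 partial peers.
const std::size_t kMaxPartialPeers = 20;

// Store a peer, evicting the oldest when the list is full, so peers who
// joined long ago (and have likely finished) stop receiving PSRs.
void storePeer(std::deque<std::string>& peers, const std::string& addr) {
    if (peers.size() >= kMaxPartialPeers) peers.pop_front();
    peers.push_back(addr);
}
```

FIFO eviction is the simplest policy; one could instead evict the peer whose last $PSR reply is oldest, at the cost of tracking timestamps.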

But why not? I don't think that choosing between two algorithms is a *very* hard task for an experienced programmer. :D

It has nothing to do with it being a hard task; it's about our opinions on the partial sharing feature.

To TheBlazingIcicle: it can be done as a setting, but it shouldn't be lower than 5 minutes. I think there is no point in sending info about each downloaded chunk, because I don't think the other client will manage to download those chunks that fast.

example:

Fast client : chunk...psr...chunk...psr...chunk...psr...chunk...psr

Slow client: chunk........psr...chunk..........psr...chunk.............psr

Understand? You send $PSR very frequently, but that client won't download new chunks immediately.

Index: FileChunksInfo.cpp
===================================================================
--- FileChunksInfo.cpp	(revision 33)
+++ FileChunksInfo.cpp	(working copy)
@@ -24,6 +24,7 @@
 #include "FileChunksInfo.h"
 #include "SharedFileStream.h"
 #include "ResourceManager.h"
+#include "SettingsManager.h"
 
 vector<FileChunksInfo::Ptr> FileChunksInfo::vecAllFileChunksInfo;
 CriticalSection FileChunksInfo::hMutexMapList;
@@ -960,8 +961,15 @@
 	}
 
 	if(!isAlreadyThere) {
+		if(partialPeers.size() >= MAX_PARTIAL_PEERS) {
+			partialPeers.erase(partialPeers.begin());
+		}
 		partialPeers.push_back(partialPeer);
 	}
+	else
+	{
+		delete partialPeer;
+	}
 
 	dcdebug("storePartialPeer: size of partialPeers is now %d\n", partialPeers.size());
 }
@@ -971,7 +979,10 @@
 	Lock l(cs);
 
 	for(vector<PartialPeer*>::iterator i = partialPeers.begin(); i != partialPeers.end(); i++) {
-		sendPSR(*i, false);
+		if((*i)->getNextQueryTime() < GET_TICK()) {
+			sendPSR(*i, false);
+			(*i)->setNextQueryTime(GET_TICK() + SETTING(PSR_DELAY));
+		}
 	}
 }

Index: FileChunksInfo.h
===================================================================
--- FileChunksInfo.h	(revision 33)
+++ FileChunksInfo.h	(working copy)
@@ -40,6 +40,9 @@
 // it's used when a source's download speed is unknown
 #define DEFAULT_SPEED 5120
 
+// maximum number of partial peers to store, to minimise unnecessary UDP
+#define MAX_PARTIAL_PEERS 20
+
 // PFS purpose
 typedef vector<u_int16_t> PartsInfo;
 
@@ -84,13 +87,14 @@
 public:
 	PartialPeer() { }
 	PartialPeer(const string& pMyNick, const string& pHubIpPort, const string& pTth, const string& pAddress, u_int16_t pUdpPort) : 
-	  myNick(pMyNick), hubIpPort(pHubIpPort), tth(pTth), address(pAddress), udpPort(pUdpPort) { }
+	  myNick(pMyNick), hubIpPort(pHubIpPort), tth(pTth), address(pAddress), udpPort(pUdpPort), nextQueryTime(0) { }
 
 	GETSET(string, myNick, MyNick);
 	GETSET(string, hubIpPort, HubIpPort);
 	GETSET(string, tth, Tth);
 	GETSET(string, address, Address);
 	GETSET(u_int16_t, udpPort, UdpPort);
+	GETSET(u_int32_t, nextQueryTime, NextQueryTime);
 
 	string getDump()
 	{
Index: SettingsManager.cpp
===================================================================
--- SettingsManager.cpp	(revision 33)
+++ SettingsManager.cpp	(working copy)
@@ -130,6 +130,7 @@
 	"SENTRY",
 	// Int64
 	"TotalUpload", "TotalDownload",
+	"PSR_DELAY",
 	"SENTRY"
 };
 
@@ -445,6 +446,7 @@
 	setDefault(UPLOADQUEUEFRAME_SHOW_TREE, true);	
 	setDefault(DONT_BEGIN_SEGMENT, true);
 	setDefault(DONT_BEGIN_SEGMENT_SPEED, 512);
+	setDefault(PSR_DELAY, 30000);
 
 	setDefault(DETECT_BADSOFT, true);
 	setDefault(BADSOFT_DETECTIONS, 0);
Index: SettingsManager.h
===================================================================
--- SettingsManager.h	(revision 33)
+++ SettingsManager.h	(working copy)
@@ -147,7 +147,7 @@
 		INT_LAST };
 
 	enum Int64Setting { INT64_FIRST = INT_LAST + 1,
-		TOTAL_UPLOAD = INT64_FIRST, TOTAL_DOWNLOAD, INT64_LAST, SETTINGS_LAST = INT64_LAST };
+		TOTAL_UPLOAD = INT64_FIRST, TOTAL_DOWNLOAD, PSR_DELAY, INT64_LAST, SETTINGS_LAST = INT64_LAST };
 
 	enum {	BWSETTINGS_DEFAULT, BWSETTINGS_ADVANCED };
Like that ^

I think the part that keeps only 20 peers at a time limits things enough that you could leave the delay at 30 seconds even in StrongDC and have no problems; let me know what you think.

To your new post: it isn't about downloading chunks from the client that sends the PSR... it never was. It's about knowing what the client that sent the PSR is downloading, so that it can be avoided, making the overall download more efficient for everyone.

I share BigMuscle's concern. I run a community wireless network, and know very well what a large number of small packets can do to a wireless network.

I also dislike this WE DECIDED TO DO THAT AND WE DO NOT CARE IF IT HURTS YOU approach. We have followed your work since the early stages of PeerWeb, and PeerWeb/ApexDC were chosen as the official clients in our network, with the intention of abandoning PeerWeb in favour of ApexDC.

Now we have to think twice, since this feature is dangerous for our network's functionality.

From our point of view, this should be an option: it is required that it can be adjusted to avoid network problems, and also that it can be forcibly turned off for all hub members if it turns out it cannot be adjusted. Otherwise, there is no option but to ban it. We cannot compromise the network for this.

I also dislike this WE DECIDED TO DO THAT AND WE DO NOT CARE IF IT HURTS YOU approach. We have followed your work since the early stages of PeerWeb, and PeerWeb/ApexDC were chosen as the official clients in our network, with the intention of abandoning PeerWeb in favour of ApexDC.

Did you even read the post above yours? ;) It tells you that we do care whether it "hurts you".

I also dislike this WE DECIDED TO DO THAT AND WE DO NOT CARE IF IT HURTS YOU approach. We have followed your work since the early stages of PeerWeb, and PeerWeb/ApexDC were chosen as the official clients in our network, with the intention of abandoning PeerWeb in favour of ApexDC.

This is a bit of a knee-jerk comment. What would be the point of these people making a client the way you suggest? You also fail to take into account that these are preview versions, and as such under very active development.

My suggestion to you would be to be patient and wait for the final release, which will very likely operate to everyone's liking.
