TheBlazingIcicle
Member, content count: 31
Everything posted by TheBlazingIcicle
-
What, what? Accusations? I wonder how many of the people you suddenly couldn't download from were partial sources. It's expected that, with the randomness of many people downloading, some chunks end up being very rare (BitTorrent has this too). Suddenly there are no partial sources left and you have to wait for a real source. This was much worse in PWDC, where the last chunk was always rare. If I'm right, look at the speed you get at the end: that's the speed you would get all the time with StrongDC or any other client.
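Just to illustrate the rarity point, here's a rough sketch with made-up types (nothing from the real FileChunksInfo code): count how many connected sources hold each chunk, and the chunks whose count hits zero when the partial sources drop out are exactly the ones you end up waiting on.

#include <cstddef>
#include <vector>

// Hypothetical source description: one flag per chunk of the file.
struct Source {
    std::vector<bool> hasChunk; // a full source has every flag set
};

// For each chunk, count how many currently connected sources can serve it.
// Chunks whose only holders were partial sources drop to zero availability
// the moment those sources leave, and the download stalls until a real
// source frees a slot.
std::vector<std::size_t> chunkAvailability(const std::vector<Source>& sources,
                                           std::size_t chunkCount) {
    std::vector<std::size_t> avail(chunkCount, 0);
    for (std::size_t s = 0; s < sources.size(); ++s)
        for (std::size_t c = 0; c < chunkCount && c < sources[s].hasChunk.size(); ++c)
            if (sources[s].hasChunk[c])
                ++avail[c];
    return avail;
}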
-
Could you use the ordinary release (not debug) build with the .pdb file and post whatever information it gives? Apex is not designed to be run in debug mode.
-
multidownload.... why just 10 segment?
TheBlazingIcicle replied to darkins's topic in Client Discussion
At some point I'm going to experiment with removing both the slot system (so everyone effectively has infinite slots) and the limit on the number of segments. I reckon it'd behave like BitTorrent, especially with ApexDC's chunk prioritisation: it would munch your internet connection but be very fast. I'll let you know how it goes. -
Enhance client's performance cpu/memory usage
TheBlazingIcicle replied to Nick Danger's topic in Client Discussion
As far as memory goes, Windows is quite capable of paging unused memory to disk itself. Of course it's still good general practice to be efficient with memory and CPU usage, but I think this program is too old and sprawling to get much gain there. -
Full file recheck and recheck on resume
TheBlazingIcicle replied to amp's topic in Client Discussion
I have seen the problem that amp and void see, and it is very annoying. Big Muscle is plain wrong to say that you shouldn't use segments on a slow connection. It's all relative: what is helpful at high speed is just as helpful at lower speed. The fix that would keep me happy is this (see the sketch below):
- Drop slow sources IF there are other sources, like we currently do (using an arbitrary limit if you have to).
- If the other sources go away for whatever reason, put the slow sources back to give them another chance.
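A minimal sketch of that behaviour, with hypothetical names rather than anything from the real download code:

#include <cstddef>
#include <vector>

// Hypothetical per-source state; none of these names exist in the client.
struct Src {
    int  speedBps; // last measured speed, bytes per second
    bool parked;   // true = temporarily dropped as "too slow"
};

// Drop slow sources only while at least one faster source is still usable,
// and give the slow ones another chance as soon as the faster ones are gone.
void reconsiderSources(std::vector<Src>& sources, int slowLimitBps) {
    bool haveFastSource = false;
    for (std::size_t i = 0; i < sources.size(); ++i)
        if (!sources[i].parked && sources[i].speedBps >= slowLimitBps)
            haveFastSource = true;

    for (std::size_t i = 0; i < sources.size(); ++i) {
        if (haveFastSource && sources[i].speedBps < slowLimitBps)
            sources[i].parked = true;   // drop it, but keep the entry around
        else if (!haveFastSource)
            sources[i].parked = false;  // nothing better left: un-park it
    }
}
-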
Remember External drives when they're not plugged in
TheBlazingIcicle posted a topic in Feature Requests
Many people have external drives and forget to plug them in. DC++ (all versions, afaik) promptly removes all references that the drive ever existed. I have fixed it before by removing the line that does this, but that causes other glitches which aren't ideal in a production client (the folder still appears in the filelist, it's just empty, and to remove it you now have to switch to the old-style sharing tab). A nicer fix would be to keep references to non-existent folders but filter them out of the filelist (at filelist generation time, so you can plug the drive in, refresh, and it'll reappear); see the sketch below. I'll do it when I get a chance, if people agree with me?
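Something like this hypothetical helper (not the real ShareManager code) is what I have in mind for the filelist-generation filter:

#include <cstddef>
#include <string>
#include <sys/stat.h>
#include <vector>

// Hypothetical existence check for a shared directory's root.
static bool directoryExists(const std::string& path) {
    struct stat st;
    return stat(path.c_str(), &st) == 0 && (st.st_mode & S_IFDIR) != 0;
}

// Keep every configured share in the settings, but only emit the ones whose
// root currently exists when the filelist is generated, so an unplugged
// external drive simply reappears on the next refresh.
std::vector<std::string> foldersForFilelist(const std::vector<std::string>& sharedDirs) {
    std::vector<std::string> present;
    for (std::size_t i = 0; i < sharedDirs.size(); ++i)
        if (directoryExists(sharedDirs[i]))
            present.push_back(sharedDirs[i]); // missing drives stay remembered, just unlisted
    return present;
}
-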
Remember External drives when they're not plugged in
TheBlazingIcicle replied to TheBlazingIcicle's topic in Feature Requests
Yep, the hashes come back immediately, so it's not a problem on that front; it's just a pain to have to go in and re-add the folders. It also means lots of people unnecessarily get kicked for zero share because their external drive was off at some point and DC forgot it. -
I think there may be an odd behaviour going on involving where it starts to download a chunk. All things being equal, it should start from the leftmost position it can, but I often see a weird creeping effect that appears to be caused by attempts on users with no slots. Has anyone else seen this by watching the progress bar? It's not a big issue, but it reduces the likelihood of being able to watch a file while it's downloading. No idea why it's happening. It also reminds me of a potential inefficiency that sources with no slots might cause: they trigger a $PSR saying that the chunk is being downloaded, even though it isn't, since they have no slots. I don't know if that's important enough to fix, though.
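For reference, the "leftmost first" behaviour I'd expect looks roughly like this sketch (hypothetical structures, not the real FileChunksInfo code):

#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical chunk record.
struct Chunk {
    int64_t start;
    int64_t end;
    bool    running; // someone is currently downloading it
    bool    done;
};

// All things being equal, pick the free chunk with the lowest start offset so
// the file fills in from the left and stays watchable while it downloads.
int pickLeftmostFreeChunk(const std::vector<Chunk>& chunks) {
    int best = -1;
    for (std::size_t i = 0; i < chunks.size(); ++i) {
        if (chunks[i].done || chunks[i].running)
            continue;
        if (best == -1 || chunks[i].start < chunks[static_cast<std::size_t>(best)].start)
            best = static_cast<int>(i);
    }
    return best; // -1: nothing free right now, e.g. all attempts stuck on slotless users
}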
-
[Crash] when trying to find download chunk
TheBlazingIcicle replied to SMT's topic in Pre 1.0 Reports
Oh god, of course, I can't believe I missed that. Thanks a lot, SMT, that fix is also exactly right. That is very likely to be the fix for those crashes, Lee, but it's really annoying that the exceptioninfo.txt gave the wrong line as the problem. If you want to do the change, Crise, I imagine you are in the middle of doing some other stuff. -
Haha, I know the guy that made pidge. It works OK, although at one point it added a new prefix to my name every time I connected. I'll tell him he's famous.
-
That particular line is generated by something lower level than ApexDC (like Windows or Visual Studio), which is why it hasn't been included in the translation. Nevertheless, passing error messages straight through is horrible, and they should probably be detected and put through the normal translation like you request. It's just a lot more work than you'd imagine.
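Very roughly, the detection step could look something like this sketch; the table, the helper and the replacement strings are all made up for illustration and nothing like this exists in ApexDC today:

#include <map>
#include <string>

// Recognise a few known low-level error strings and swap in the client's own
// translatable text instead of passing the raw message straight through.
std::string translateLowLevelError(const std::string& rawMessage) {
    // Keys are fragments of messages produced below the client (OS / runtime);
    // the replacement text here is placeholder and would really come from the
    // client's translatable string resources.
    static std::map<std::string, std::string> known;
    if (known.empty()) {
        known["No such file or directory"] = "File not found";
        known["Access is denied"]          = "Access denied";
    }
    for (std::map<std::string, std::string>::const_iterator i = known.begin();
         i != known.end(); ++i) {
        if (rawMessage.find(i->first) != std::string::npos)
            return i->second;
    }
    return rawMessage; // unknown: fall back to passing it through, as now
}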
-
Index: FileChunksInfo.cpp
===================================================================
--- FileChunksInfo.cpp	(revision 33)
+++ FileChunksInfo.cpp	(working copy)
@@ -24,6 +24,7 @@
 #include "FileChunksInfo.h"
 #include "SharedFileStream.h"
 #include "ResourceManager.h"
+#include "SettingsManager.h"
 
 vector<FileChunksInfo::Ptr> FileChunksInfo::vecAllFileChunksInfo;
 CriticalSection FileChunksInfo::hMutexMapList;
@@ -960,8 +961,15 @@
 	}
 
 	if(!isAlreadyThere) {
+		if(partialPeers.size() >= MAX_PARTIAL_PEERS) {
+			partialPeers.erase(partialPeers.begin());
+		}
 		partialPeers.push_back(partialPeer);
 	}
+	else
+	{
+		delete partialPeer;
+	}
 
 	dcdebug("storePartialPeer: size of partialPeers is now %d\n", partialPeers.size());
 }
@@ -971,7 +979,10 @@
 	Lock l(cs);
 
 	for(vector<PartialPeer*>::iterator i = partialPeers.begin(); i != partialPeers.end(); i++) {
-		sendPSR(*i, false);
+		if((*i)->getNextQueryTime() < GET_TICK()) {
+			sendPSR(*i, false);
+			(*i)->setNextQueryTime(GET_TICK() + SETTING(PSR_DELAY));
+		}
 	}
 }

Index: FileChunksInfo.h
===================================================================
--- FileChunksInfo.h	(revision 33)
+++ FileChunksInfo.h	(working copy)
@@ -40,6 +40,9 @@
 // it's used when a source's download speed is unknown
 #define DEFAULT_SPEED 5120
 
+// maximum number of partial peers to store, to minimise unnecessary UDP
+#define MAX_PARTIAL_PEERS 20
+
 // PFS purpose
 typedef vector<u_int16_t> PartsInfo;
@@ -84,13 +87,14 @@
 public:
 	PartialPeer() { }
 	PartialPeer(const string& pMyNick, const string& pHubIpPort, const string& pTth, const string& pAddress, u_int16_t pUdpPort) :
-		myNick(pMyNick), hubIpPort(pHubIpPort), tth(pTth), address(pAddress), udpPort(pUdpPort) { }
+		myNick(pMyNick), hubIpPort(pHubIpPort), tth(pTth), address(pAddress), udpPort(pUdpPort), nextQueryTime(0) { }
 
 	GETSET(string, myNick, MyNick);
 	GETSET(string, hubIpPort, HubIpPort);
 	GETSET(string, tth, Tth);
 	GETSET(string, address, Address);
 	GETSET(u_int16_t, udpPort, UdpPort);
+	GETSET(u_int32_t, nextQueryTime, NextQueryTime);
 
 	string getDump() {

Index: SettingsManager.cpp
===================================================================
--- SettingsManager.cpp	(revision 33)
+++ SettingsManager.cpp	(working copy)
@@ -130,6 +130,7 @@
 	"SENTRY",
 	// Int64
 	"TotalUpload", "TotalDownload",
+	"PSR_DELAY",
 	"SENTRY"
 };
@@ -445,6 +446,7 @@
 	setDefault(UPLOADQUEUEFRAME_SHOW_TREE, true);
 	setDefault(DONT_BEGIN_SEGMENT, true);
 	setDefault(DONT_BEGIN_SEGMENT_SPEED, 512);
+	setDefault(PSR_DELAY, 30000);
 	setDefault(DETECT_BADSOFT, true);
 	setDefault(BADSOFT_DETECTIONS, 0);

Index: SettingsManager.h
===================================================================
--- SettingsManager.h	(revision 33)
+++ SettingsManager.h	(working copy)
@@ -147,7 +147,7 @@
 		INT_LAST };
 
 	enum Int64Setting { INT64_FIRST = INT_LAST + 1,
-		TOTAL_UPLOAD = INT64_FIRST, TOTAL_DOWNLOAD, INT64_LAST, SETTINGS_LAST = INT64_LAST };
+		TOTAL_UPLOAD = INT64_FIRST, TOTAL_DOWNLOAD, PSR_DELAY, INT64_LAST, SETTINGS_LAST = INT64_LAST };
 
 	enum { BWSETTINGS_DEFAULT, BWSETTINGS_ADVANCED };

Like that ^. I think the part that only keeps 20 peers at a time limits things enough that you could leave the delay at 30 seconds even in StrongDC and have no problems; let me know what you think. To your new post: it isn't about downloading chunks from the client that sends the PSR... it never was... it's about knowing what the client that sent the PSR is downloading, so that can be avoided, to make the overall download more efficient for everyone.
-
No, exactly: the new one has been universally voted to be better (even Big Muscle, with the caveat he makes clear above, agrees), so it has been selected. Big Muscle: OK, how about we put in that patch, but instead of hardcoding the 300000, make it a setting? It can default to 5 minutes in SDC, I will give in and let it default to maybe 30 seconds in ApexDC, and then anyone who specifically wants to change it because they experience *gasp* a couple more UDP packets than they'd like, can. That behaviour is hopefully a compromise good enough for everyone, and no one will have to talk about banning clients anymore? I think an even better defence against unnecessary UDP would be to cap the size of the partialPeers vector (at 20-ish, I think) and drop old peers as new ones come in. That way, in really large groups, people who have now finished no longer get packets.
-
I don't know how to work with threading, but if I did, I would be happy to send the PSRs in another thread so starting the next chunk doesn't have to wait the tiny amount of time it takes to send them (see the sketch below). "partial request is sent every 5 minutes and maximally 5x to each partial source": no way that's going to happen, I'm afraid. Any suggestions to reduce the disadvantages without reducing the advantages will be gratefully received, but there is no arguing with the fact that sending a (max) 1 kB UDP packet in order to avoid downloading the wrong (min) 1 MB chunk is well worth it, even if you send 50 that don't make a difference. If you can think of a clever way to reduce the number sent by knowing in advance whether they'll make a difference, that'd be great, but I can't think of one.
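For what it's worth, the threaded version could look roughly like this; it uses std::thread and made-up names purely for illustration, not the client's own threading classes:

#include <cstddef>
#include <string>
#include <thread>
#include <vector>

// Hypothetical stand-in for the client's partial-source bookkeeping.
struct PartialPeerInfo {
    std::string address;
    unsigned short udpPort;
};

// Stand-in for the real send; in the client this would build the $PSR UDP packet.
void sendPSR(const PartialPeerInfo& /*peer*/) { }

// Fire the PSRs off on a background thread so picking the next chunk never
// waits on the UDP sends.
void sendPSRsAsync(std::vector<PartialPeerInfo> peers) {
    std::thread worker([peers]() {
        for (std::size_t i = 0; i < peers.size(); ++i)
            sendPSR(peers[i]); // each one is at most ~1 kB of UDP
    });
    worker.detach(); // let it run to completion on its own
}

Whether detaching like that is safe obviously depends on what the real PSR sending touches, so treat it as a sketch of the idea rather than a drop-in patch.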
-
What I don't understand is why you use the example of a fast LAN to demonstrate what you don't like about the UDP spam. Surely you realise that the amount of UDP spam is proportional to your connection speed, so the issues are identical at whatever speed. The only situation where UDP spam would become unmanageable is if there were very many people in the partial sharing group; that would mean sending UDP packets to potentially so many people that it isn't useful anymore. A crude way to solve this would be to limit the number of peers you send packets to. I would be happy to implement a nice system for that if it would make you happier.
-
@2 Tab completion of nicks matching any part of the nick, from any main or PM chat window of the hub, would be awesome. Excellent suggestion, well worth doing IMO.
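Roughly what I imagine the matching would look like, as a sketch with hypothetical helpers (nothing of this exists in Apex yet):

#include <cctype>
#include <cstddef>
#include <string>
#include <vector>

// Lower-case a copy for case-insensitive matching.
static std::string toLower(std::string s) {
    for (std::size_t i = 0; i < s.size(); ++i)
        s[i] = static_cast<char>(std::tolower(static_cast<unsigned char>(s[i])));
    return s;
}

// Return every nick that contains the typed fragment anywhere, not just at the
// start; the chat window could then cycle through these on repeated Tab presses.
std::vector<std::string> matchNicks(const std::vector<std::string>& nicks,
                                    const std::string& fragment) {
    std::vector<std::string> hits;
    const std::string needle = toLower(fragment);
    for (std::size_t i = 0; i < nicks.size(); ++i)
        if (toLower(nicks[i]).find(needle) != std::string::npos)
            hits.push_back(nicks[i]);
    return hits;
}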
-
I just remembered about that; it annoys me too. For want of anything more suitable, I just hardcoded it off when I built SDC.
-
How come my stuff is called super seeding? I never worked that out. It has almost nothing to do with seeding, let alone Superman :)
-
This feature, whatever you think of its rights and wrongs, is not one to go into ApexDC in its current form. The reason is that it would require such a huge amount of change to the assumptions inherent in the client's design that a complete rewrite would be the best way to do it (given that DC++ needs a complete rewrite anyway).