Big Muscle
Member - Content count: 702

Posts posted by Big Muscle
-
I guess the DC tag in PtokaX refers to the list of client tags PtokaX is prepared for (++, StrgDC++ etc.). It isn't prepared for Apex, so it's a bug in PtokaX.
-
This problem existed with PWDC as well. I use my laptop for all my computing. When I disconnect my external drive, which is listed in my sharing config, and then later reconnect the drive, PWDC and ApexDC lose the sharing location. I reset the location and Apex refreshes my listing details. The laptop runs WinXP Pro with 1 GB RAM. -- TomaHawk
It's not a bug but a feature: non-existing folders are removed on startup.
-
Very funny. Read the first message. Also try downloading a DVD film with Apex, watch it, and then talk nonsense here.

I always thought that ApexDC isn't designed for violating copyright; if you use it that way, then this topic should be locked.
-
QoS is something for sending/receiving service packets by the system. It's not normally used; the set value says how much bandwidth is reserved for the system, but that part of the bandwidth is still available to other programs, so it really doesn't matter whether you set it to 0 or 20.
-
Aah, there can be a problem if the source was partial but that user has already finished downloading and added the file to his share.
-
A similar crash appeared in rmDC++ a long time ago. It happened when a PM became longer than 255 characters, but the buffer used to display the PM in a balloon was only 255 bytes long, so it caused a buffer overrun.
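The overflow can be sketched like this (hypothetical names; the actual rmDC++ code differed, this only illustrates the fixed 255-character balloon limit and the obvious fix):

```cpp
#include <cassert>
#include <string>

// The Win32 balloon-notification text field (NOTIFYICONDATA::szInfo) is a
// fixed 256-char buffer, so blindly copying an unbounded PM overruns it:
//
//   char buf[256];
//   strcpy(buf, pm.c_str());   // buffer overrun when pm.size() > 255
//
// Fixed variant: truncate the message before handing it to the balloon.
static const std::size_t BALLOON_MAX = 255;

std::string makeBalloonText(const std::string& pm) {
    return pm.size() <= BALLOON_MAX ? pm : pm.substr(0, BALLOON_MAX);
}
```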
-
This bug appears in ApexDC 0.2.0. It was also in StrongDC since 1.0 RC10 (maybe earlier).

When downloading from a source, StrongDC sometimes stops, saying something like "No needed parts", even if the source has full files and is the only source of these files on the hub. The source also gets removed from the source list (slow user error). Force reconnect, re-add etc. don't help. But if you pause the download and resume (or reconnect to the hub), it will start again. This also happens even when both slow-download disconnecting options are disabled.
a) The "No needed parts" message is displayed for partial sources only. If you see it for a normal source too, then it's a bug in the ApexDC PSR enhancement.
b) While segment downloading, slow sources are always removed, regardless of your settings.
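For illustration, the "No needed parts" condition boils down to a check like the following (a simplified sketch with hypothetical names; the real StrongDC++/ApexDC code compares chunk ranges from the $PSR PartsInfo rather than bool vectors). A full source has every chunk, so it should never trigger this message, which is why seeing it for a normal source points at a bug:

```cpp
#include <cassert>
#include <vector>

// A partial source is useful only if it advertises at least one chunk that
// we still need. Chunk availability is modeled here as plain bool vectors.
bool hasNeededPart(const std::vector<bool>& sourceHas,
                   const std::vector<bool>& weNeed) {
    for (std::size_t i = 0; i < sourceHas.size() && i < weNeed.size(); ++i)
        if (sourceHas[i] && weNeed[i])
            return true;  // the source can supply a chunk we are missing
    return false;         // -> download manager reports "No needed parts"
}
```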
-
2TheBlazingIcicle: It can be made a setting, but it shouldn't be lower than 5 minutes. I think there's no point in sending info about each downloaded chunk, because I don't think the other client will manage to download those chunks that fast.
example:
Fast client : chunk...psr...chunk...psr...chunk...psr...chunk...psr
Slow client: chunk........psr...chunk..........psr...chunk.............psr
Understand? You send $PSR that often, but the other client won't download the new chunks immediately.
-
Just notice that you also can't download from oDC users, because their files don't have a TTH available, so that client (oDC) is unusable for segmented downloading.
-
But the patch isn't so difficult to make:
--- FileChunksInfo.h	Sat Aug 05 16:15:28 2006
+++ FileChunksInfo.h.patch	Sun Aug 13 11:27:10 2006
@@ -84,13 +84,14 @@
 public:
 	PartialPeer() { }
 	PartialPeer(const string& pMyNick, const string& pHubIpPort, const string& pTth, const string& pAddress, u_int16_t pUdpPort) :
-		myNick(pMyNick), hubIpPort(pHubIpPort), tth(pTth), address(pAddress), udpPort(pUdpPort) { }
+		myNick(pMyNick), hubIpPort(pHubIpPort), tth(pTth), address(pAddress), udpPort(pUdpPort), nextQueryTime(0) { }
 
 	GETSET(string, myNick, MyNick);
 	GETSET(string, hubIpPort, HubIpPort);
 	GETSET(string, tth, Tth);
 	GETSET(string, address, Address);
 	GETSET(u_int16_t, udpPort, UdpPort);
+	GETSET(u_int32_t, nextQueryTime, NextQueryTime);
 
 	string getDump() {
--- FileChunksInfo.cpp	Wed Aug 09 00:08:52 2006
+++ FileChunksInfo.cpp.patch	Sun Aug 13 11:29:32 2006
@@ -975,7 +975,10 @@
 	Lock l(cs);
 
 	for(vector<PartialPeer*>::iterator i = partialPeers.begin(); i != partialPeers.end(); i++) {
-		sendPSR(*i, false);
+		if((*i)->getNextQueryTime() < GET_TICK()) {
+			sendPSR(*i, false);
+			(*i)->setNextQueryTime(GET_TICK() + 300000);
+		}
 	}
 }
-
It should be done like in RevConnect: a partial request is sent every 5 minutes and at most 5 times to each partial source. Then it would need some improvement to send it only when there are a lot of new chunks (and not just one!).
You should also realize what happens when there are a lot of partial sources:
chunk - send psr - chunk - send psr - chunk - send psr etc.
Such a situation will decrease download speed, because $PSR must be sent to all partial sources between chunks.
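The RevConnect-style throttle described above can be sketched roughly like this (assumed names and constants; not the actual RevConnect code): a $PSR goes out to a given partial source at most once per 5-minute interval and at most 5 times in total, instead of after every finished chunk.

```cpp
#include <cassert>
#include <cstdint>

// Per-source $PSR throttle sketch. Times are millisecond ticks, matching
// the GET_TICK() usage in the patch above.
struct PartialPeerSketch {
    uint64_t nextQueryTime = 0;  // earliest tick at which the next $PSR may go out
    int queriesSent = 0;         // RevConnect caps this at 5 per source

    static constexpr uint64_t PSR_INTERVAL = 5 * 60 * 1000;  // 5 minutes
    static constexpr int MAX_QUERIES = 5;

    // Returns true when a $PSR may be sent at time 'now', and records it.
    bool shouldSendPSR(uint64_t now) {
        if (queriesSent >= MAX_QUERIES || now < nextQueryTime)
            return false;
        ++queriesSent;
        nextQueryTime = now + PSR_INTERVAL;
        return true;
    }
};
```

With this gate in the send loop, finishing a chunk no longer implies a $PSR broadcast; fast LAN peers generate the same UDP load as slow ones.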
-
It's the position where the file gets corrupted.
-
Interesting. Ever since the 1st crash, the "memory leak" seems to be back. I would really like to be able to trace down what those Corruption entries in the system log are referring to.

It means that the files you download are corrupted... you have corrupted memory or a corrupted HDD.
-
Efficiently as in seeding rare chunks to users instead of popular ones. Without the "spam", users wouldn't get the rarest chunks within the file, and would have to guess which is the rarest from inaccurate data. The spam has been reduced, and isn't sent every second... stop exaggerating. I'd prefer it if we kept this discussion private, too.

Version 0.2.0 sends it after each finished chunk, so if I am on a LAN, it will be sent every second... and as I said, with many such users in the hub, partial sources won't be able to search for anything, because their UDP interface will be flooded with $PSR commands.
-
Efficient? I don't think that sending $PSR to every partial source every second is efficient. With many such clients in a hub, it will totally lock up the UDP interface of the partial sources, and that's not good.
-
I'm sorry I have to say it, but I'm going to ban ApexDC in our hubs, because I don't want our clients to be spammed over the UDP interface.
-
Btw, could you send us your DCPlusPlus.xml? Without any changes made since that time.
-
Big Muscle, with all due respect, that makes absolutely *zero* sense. Slots will be used regardless of what the speed is. Whether I connect to User A at 1k or User B at 200k, it is still a slot used.

Yes, but if you connect to A and to B, and it disconnects A, then it will use only 1 slot instead of 2. Btw, you should know that slow sources are automatically disconnected only when at least 2 segments are being downloaded at once. A source won't be disconnected when it's downloading a single segment.
-
The feature is called Finished file sharing and it's a part of PFS: downloaded files are kept shared until the program is restarted.
-
Making this a USER choice will not change ANYTHING except for the ability of people to GET files easier.

Easier? No, because there will be no free slots in the network.
-
It looks like some problem with memory allocation... out of memory? corrupted memory? etc.
-
you shouldn't be on DC... other people want to download too!!! and not only you!!!
-
It stays there for (next 1-minute tick + 3 minutes)... currently I changed it to (next 1-minute tick + 1 minute).
-
Yes, they both are still being developed. :ermm:

Yes, only in CZDC++; its author started adding features to kill competing clients instead of fixing bugs.
some people like old dc
It can also connect to clients below .401 :)