Big Muscle


Posts posted by Big Muscle


  1. This problem existed in PWDC as well. I use my laptop for all my computing. When I disconnect my external drive, which is listed in my sharing config, and later reconnect it, PWDC and ApexDC lose the sharing location. I have to reset the location before Apex refreshes my listing details. The laptop runs WinXP Pro with 1 GB of memory.

    TomaHawk

    it's not a bug but a feature: non-existing folders are removed from the share on startup


  2. This bug appears in ApexDC 0.2.0. It has also been in StrongDC since 1.0 RC10 (maybe earlier).

    When downloading from a source, StrongDC sometimes stops with a message like "No needed parts", even if the source has the full file and is the only source of that file on the hub. The source also gets removed from the source list (slow user error). Force reconnect, re-adding, etc. do not help. But if you pause the download and resume (or reconnect to the hub), it will start again.

    This also happens even when both slow-download disconnect options are disabled.

    a) "No needed parts" message is displayed for partial sources only. If you see it also for normal source then it's bug in ApexDC PSR enhancement.

    ;) While segment downloading, slow sources are always removed regardless of your settings.
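
    To make the distinction concrete, here is a minimal sketch of the partial-source check; the names ChunkList and hasNeededParts are illustrative assumptions, not the actual StrongDC code. A partial source advertises which chunks of the file it already has, and the downloader reports "No needed parts" only when none of those chunks overlap the ranges it is still missing, something that can never happen for a normal source holding the whole file:

    #include <cstdint>
    #include <utility>
    #include <vector>

    // [start, end) byte ranges of the file
    using ChunkList = std::vector<std::pair<std::int64_t, std::int64_t>>;

    bool hasNeededParts(const ChunkList& peerHas, const ChunkList& stillMissing) {
    	for (const auto& p : peerHas)
    		for (const auto& m : stillMissing)
    			if (p.first < m.second && m.first < p.second)
    				return true; // overlap: the peer has a chunk we still need
    	return false; // no overlap anywhere => "No needed parts"
    }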


  3. 2TheBlazingIcicle: It can be done as a setting, but it shouldn't be lower than 5 minutes. I think there is no point in sending info about each downloaded chunk, because the other client won't manage to download those chunks that fast.

    example:

    Fast client: chunk...psr...chunk...psr...chunk...psr...chunk...psr

    Slow client: chunk........psr...chunk..........psr...chunk.............psr

    Understand? You send $PSR very often, but that client won't download new chunks immediately.


  4. But the patch isn't so difficult to make (GET_TICK() is in milliseconds, so the 300000 below is the 5-minute interval):

    --- FileChunksInfo.h	Sat Aug 05 16:15:28 2006
    +++ FileChunksInfo.h.patch	Sun Aug 13 11:27:10 2006
    @@ -84,13 +84,14 @@
     public:
     	PartialPeer() { }
     	PartialPeer(const string& pMyNick, const string& pHubIpPort, const string& pTth, const string& pAddress, u_int16_t pUdpPort) : 
    -	  myNick(pMyNick), hubIpPort(pHubIpPort), tth(pTth), address(pAddress), udpPort(pUdpPort) { }
    +	  myNick(pMyNick), hubIpPort(pHubIpPort), tth(pTth), address(pAddress), udpPort(pUdpPort), nextQueryTime(0) { }

     	GETSET(string, myNick, MyNick);
     	GETSET(string, hubIpPort, HubIpPort);
     	GETSET(string, tth, Tth);
     	GETSET(string, address, Address);
     	GETSET(u_int16_t, udpPort, UdpPort);
    +	GETSET(u_int32_t, nextQueryTime, NextQueryTime);

     	string getDump()
     	{
    --- FileChunksInfo.cpp	Wed Aug 09 00:08:52 2006
    +++ FileChunksInfo.cpp.patch	Sun Aug 13 11:29:32 2006
    @@ -975,7 +975,10 @@
     	Lock l(cs);

     	for(vector<PartialPeer*>::iterator i = partialPeers.begin(); i != partialPeers.end(); i++) {
    -		sendPSR(*i, false);
    +		if((*i)->getNextQueryTime() < GET_TICK()) {
    +			sendPSR(*i, false);
    +			(*i)->setNextQueryTime(GET_TICK() + 300000);
    +		}
     	}
     }


  5. It should be done like in RevConnect: the partial request is sent every 5 minutes and at most 5 times to each partial source. Then it would need some improvement to send it only when there are a lot of new chunks (and not only one!); a sketch of such a policy is at the end of this post.

    You should also realize what happens when there are a lot of partial sources:

    chunk - send psr - chunk - send psr - chunk - send psr etc..

    Such a situation will decrease the download speed, because the client must send $PSR to all partial sources between chunks.
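
    A minimal sketch of that RevConnect-style policy, assuming a per-peer state record; the names PartialPeerState and shouldSendPSR and the MIN_NEW_CHUNKS threshold are illustrative, not actual RevConnect code:

    #include <cstdint>

    // Notify a partial source at most once per 5 minutes, at most 5 times in
    // total, and only when several new chunks have finished since the last $PSR.
    struct PartialPeerState {
    	std::uint32_t nextQueryTime = 0; // earliest tick at which another $PSR may go out
    	int psrCount = 0;                // $PSRs sent to this peer so far
    	int newChunks = 0;               // chunks completed since the last $PSR
    };

    bool shouldSendPSR(PartialPeerState& p, std::uint32_t now) {
    	const std::uint32_t INTERVAL = 5 * 60 * 1000; // 5 minutes, in ms like GET_TICK()
    	const int MAX_PSR = 5;                        // RevConnect's per-source cap
    	const int MIN_NEW_CHUNKS = 2;                 // "and not only one!"

    	if (p.psrCount >= MAX_PSR || now < p.nextQueryTime || p.newChunks < MIN_NEW_CHUNKS)
    		return false;

    	p.nextQueryTime = now + INTERVAL;
    	p.psrCount++;
    	p.newChunks = 0;
    	return true;
    }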


  6. Interesting. Ever since the first crash, the "memory leak" seems to be back. I would really like to be able to trace down what those Corruption entries in the SystemLog are referring to.

    It means that the files you download are corrupted... you have faulty memory or a faulty HDD.


  7. Efficiently as in seeding rare chunks to users instead of popular ones. Without the "spam", users wouldn't get the rarest chunks within the file and would have to guess which is the rarest from inaccurate data. The spam has been reduced, and isn't sent every second... stop exaggerating. ;) I'd prefer it if we kept this discussion private, too.

    Version 0.2.0 sends it after each finished chunk, so if I am on a LAN, it will be sent every second... and as I said, with many such users in the hub, partial sources won't be able to search for anything, because their UDP interface will be full of $PSR commands.


  8. Big Muscle, with all due respect, that makes absolutely *zero* sense. Slots will be used regardless of what the speed is. Whether I connect to User A at 1k or User B at 200k, it is still a slot used.

    Yes, but if you connect to A and to B, and it disconnects A, then you eat only 1 slot instead of 2... Btw, you should know that slow sources are automatically disconnected only when the file is downloading in at least 2 segments at once; a source won't be disconnected when it's a single segment (a small sketch of the rule is below).
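
    For clarity, here is a small sketch of that rule as described above; the function and parameter names are invented for illustration:

    #include <cstddef>
    #include <cstdint>

    // A slow source is dropped only while the file is downloading in at least
    // two segments at once; a lone source is never cut, since dropping it would
    // stop the download entirely instead of freeing the slot for a faster peer.
    bool shouldDropSlowSource(std::size_t activeSegments, std::int64_t speedBps, std::int64_t minSpeedBps) {
    	if (activeSegments < 2)
    		return false; // single segment: keep the source even if it is slow
    	return speedBps < minSpeedBps;
    }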