Everything posted by gnipi
-
What about wireless LANs? I need something more flexible for use in wireless LANs. TCP/IP works, but since you have packet loss on most links, it is reliable only up to a point: when a wireless link gets too saturated, TCP connections usually break because of too much loss. This way, at least 50% of my "users" (friends on the WLAN) start off nicely downloading and then, when somebody with better equipment or closer to the access point connects, their connection drops to 0 and disconnects. That's just how it is; better signal rules. There is absolutely no way to make all clients "equally good" or "equally bad", so that is just not the right solution.

In wireless LANs you don't have the same upload/download speed for all users. Some have better antennas/802.11 clients than others. Centralised upload/download limiting is, imho, the best solution. I am working on a way to direct upload/download speed limits to clients. Clients would not have any control over the limiting. The server would send UDP packets, broadcast or unicast (still weighing the pros/cons), as a kind of "beacon". A client with a great connection (no packet loss) will receive 100% of the UDP packets and get 100% of the decided maximum upload/download limit. Clients with packet loss on their links (bad antenna, too far from the access point, background noise, whatever) will lose some UDP packets, and the percentage of UDP packets received will decide their maximum upload/download, automatically reducing saturated links to an "acceptable" range.

It's still a theory. The UDP packets would be rather small, some 16 bytes max, so maybe packet loss will not be visible on such small packets; on the other hand, sending 1024 or 1500 byte packets would make sure packet loss is noticeable if the link is saturated/bad. Anybody see a future in this feature? I already have working code, but I don't like it, so there will be a full rewrite as soon as I decide on some changes.

Bah, anyway, I'm not a "client modder", so no need to treat me like an outcast. I don't really have a preference for any SDC or SDC mod in particular. I just want to make it work for me and my friends so we can share over wireless. I am 27 years old, with lots of experience in x86 assembler and C++, but a bit rusty because I never used it for commercial purposes. Actually, I haven't been coding anything since 2001. Can't say I'm an expert programmer since a lot has changed since 2001, but I'm catching up. Can somebody at least tell me if my idea is possible? Do you think it will help in wireless networks?
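Something like this is what I have in mind for the client side; a minimal sketch using BSD sockets for brevity (a Windows build would use the Winsock equivalents). The packet layout, port number, magic value and 100-beacon window are all placeholder choices of mine, and the sketch ignores beacon reordering and sequence wraparound:

```cpp
#include <arpa/inet.h>
#include <cstdint>
#include <cstdio>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#pragma pack(push, 1)
struct Beacon {
    uint32_t magic;        // identifies the beacon protocol
    uint32_t seq;          // monotonically increasing sequence number
    uint32_t maxRateKbps;  // server-decided ceiling for lossless clients
    uint32_t reserved;     // pads the packet to the 16 bytes mentioned above
};
#pragma pack(pop)

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(41000);      // hypothetical beacon port
    addr.sin_addr.s_addr = INADDR_ANY;
    bind(sock, (sockaddr*)&addr, sizeof(addr));

    Beacon b;
    uint32_t windowStart = 0, got = 0;
    while (recv(sock, &b, sizeof(b), 0) == (ssize_t)sizeof(b)) {
        if (b.magic != 0x42434E31u) continue;    // "BCN1", made up for the sketch
        if (got == 0) windowStart = b.seq;       // open a new measurement window
        ++got;
        uint32_t sent = b.seq - windowStart + 1; // how many the server sent so far
        if (sent >= 100) {                       // evaluate every ~100 beacons
            double ratio = (double)got / (double)sent;   // fraction that survived
            uint32_t limit = (uint32_t)(b.maxRateKbps * ratio);
            printf("loss-adjusted limit: %u kbit/s\n", limit);
            // a real client would feed `limit` into its upload/download throttler
            got = 0;
        }
    }
    close(sock);
    return 0;
}
```

The nice part is that the client never measures bandwidth at all: the received-vs-sent beacon count alone scales the limit down on lossy links.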
-
I am almost 100% sure it can be done simply enough, but I have some more urgent things to do. I will try to do it when I finish what I already started.
-
It would be nice to have. The client could publish a list of all files, even non-hashed ones, and continue hashing in the background; but if some file is requested, background hashing should be stopped and the client should immediately start hashing the files that were requested for download... does that make any sense at all?
-
Hmm, the point is not to enable users to enter hubs with nothing shared. Imagine 3 terabytes of shared files: it takes too long to hash, maybe a day or so. Now, why would anybody want to be absolutely sure about every single file in that share 100% of the time? OK, it's great to be sure, but why not share the list without hashes and start hashing in the background (this is already how it's done)? The addition would be: if somebody *wants* to see the filelist, he can see it, without hashes of course (also already implemented), but if he chooses a file to download, *by*name*, that file should be hashed ahead of its turn so the upload can start. Nothing lost here: just take the file name and put it next in line to be hashed.

Rules about shares on hubs are not our concern at all. It's just more convenient, and could reduce the "wait till it's hashed" time to almost 0 seconds. OK, if the file currently being hashed is big, like 1 GB, then somewhat longer, but it could still mean a lot if we share hundreds of gigabytes or perhaps terabytes. Am I boring? :fear:

Exact change needed (see the sketch below):
- to the hashing code: there is already a compiled list of all files that need to be hashed. Just create one more, separate list of the same type and name it priorityList. The hasher always takes files from priorityList first; if priorityList is empty, it takes from the default list.
- to the code that currently answers "sorry - no hash": insert the file requested *by*name* into priorityList.

Nothing more. Think of it as taking notes on interesting files and hashing them first. Files that nobody asked for will be hashed too, but after the files we already have "customers" for.
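A minimal sketch of those two changes, with made-up names that don't match the real DC++ HashManager internals:

```cpp
#include <deque>
#include <mutex>
#include <string>

class HashQueue {
    std::deque<std::string> priorityList; // files somebody already asked for
    std::deque<std::string> defaultList;  // everything else, in scan order
    std::mutex m;
public:
    // Called by the share scanner for every file that needs hashing.
    void add(const std::string& path) {
        std::lock_guard<std::mutex> l(m);
        defaultList.push_back(path);
    }
    // Called from the code path that currently answers "sorry - no hash":
    // pull the file out of the default list and note it as a priority.
    void promote(const std::string& path) {
        std::lock_guard<std::mutex> l(m);
        for (auto it = defaultList.begin(); it != defaultList.end(); ++it) {
            if (*it == path) { defaultList.erase(it); break; }
        }
        priorityList.push_back(path);
    }
    // The hasher thread always drains priorityList first.
    bool next(std::string& out) {
        std::lock_guard<std::mutex> l(m);
        std::deque<std::string>& q = priorityList.empty() ? defaultList : priorityList;
        if (q.empty()) return false;
        out = q.front();
        q.pop_front();
        return true;
    }
};
```

promote() is the whole "taking notes" part: the request handler calls it instead of just refusing the download.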
-
AFAIK, when you pick 10000 files to share, more recent builds of SDC++ / Apex / others make the filelist without hashes and add hashes as the hashing process completes them. I did not look at the source code, but that's what I see: all files are in the list, but most of them without a TTH and with size=0 bytes (because they are not yet processed). But they *are* in the list, so somebody could request to download them anyway.

Well, here it goes:
A: the client that shares files (our goal)
B: any anonymous DC++-compatible client that requests a file from A

Order of things:
1. B requests the filelist from A
2. B downloads the filelist from A
3. B opens the filelist and the user selects some set of files
4. B waits for an open slot in client A and requests file #1 from the list
5. A accepts the connection, but since the file is not hashed, it decides:
   - for very small files (hmm, define "small"), the TTH can be calculated on the fly because it takes only a few milliseconds
   - for larger files, the connection is closed, but the file that was asked for is removed from the "files to hash" queue and added to the head of the list (to be hashed next). No hashing process is stopped; the file waits until the current hash completes.
6. B will ask for some other file if a slot is still open in client A, thus adding that file to the priority hash queue as well.

At some point B might quit, but we want all files hashed anyway, so why not hash the more wanted files first? This could make a hell of a difference at LAN parties (not that I care much, I never move my PC out). If the only way to request a file from client A were to ask for a specific TTH, then this can't possibly work; but since I know of some non-TTH clients, it must be possible to request by filename, not TTH.

I hope I'm not too confusing... this should be possible; I don't know how complicated it really is until I inspect the code more thoroughly. But I'm more into making my remotely controlled bandwidth management via UDP packets (maybe even based on packet loss) first. A sketch of the step-5 decision is below.
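Here is a rough sketch of that step-5 decision. hasTTH, hashInline and promoteForHashing are stand-ins for the real client's hash index, its hasher, and the priority queue from my earlier post, and the 64 KiB "small" cutoff is a pure guess:

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Stubs standing in for the real client's hash store and hasher:
bool hasTTH(const std::string&)     { return false; } // lookup in the hash index
void hashInline(const std::string&) { }               // synchronous TTH for tiny files
void promoteForHashing(const std::string& p) { printf("hash next: %s\n", p.c_str()); }

static const int64_t SMALL_FILE_LIMIT = 64 * 1024;    // what counts as "small" is a guess

// The decision A makes when B asks for a file that may not be hashed yet:
bool onFileRequested(const std::string& path, int64_t size) {
    if (hasTTH(path)) return true;         // already hashed: serve normally
    if (size <= SMALL_FILE_LIMIT) {
        hashInline(path);                  // only a few ms: hash on the fly, then serve
        return true;
    }
    promoteForHashing(path);               // big file: put it at the head of the queue...
    return false;                          // ...but close this connection for now
}

int main() {
    onFileRequested("movie.avi", 700LL * 1024 * 1024); // promoted, not served yet
    onFileRequested("readme.txt", 2 * 1024);           // hashed inline and served
    return 0;
}
```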
-
Actually, as I think of it, it would take exactly the same amount of time in most cases, plus about 30 seconds more per interrupted 1 GB file being hashed if we implement hash abort without proper resume (start from the beginning after serving the requested file); a sketch of that variant is below. There are a lot of alternatives here, some of them easy to implement but not too efficient, some completely efficient but harder to implement... it all depends on you people. It's a feature request... it can be done. I know how to do it, but it would take me at least a month to complete it since I've been out of C++ for the last few years and know almost nothing about the Win32 GUI/ATL stuff. Still, it's C++, so it can be done, but I suspect it's 15 minutes of work for somebody more PRO ;)
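The simple "not too efficient" variant from above could look roughly like this: an abort flag checked between blocks, nothing saved, the interrupted file just goes back on the queue and is hashed again from scratch later. Names are hypothetical:

```cpp
#include <atomic>
#include <fstream>
#include <string>
#include <vector>

std::atomic<bool> abortCurrentHash(false); // set by the request handler

bool hashFile(const std::string& path) {
    std::ifstream in(path.c_str(), std::ios::binary);
    std::vector<char> buf(1024 * 1024);    // hash in 1 MiB chunks
    while (in.read(buf.data(), (std::streamsize)buf.size()) || in.gcount() > 0) {
        if (abortCurrentHash.load())
            return false;                  // bail out; caller re-queues this file
        // feed buf[0 .. in.gcount()) into the Tiger tree here
    }
    return true;                           // completed; caller stores the root TTH
}
```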
-
I am using this version, not .NET 2005... it works fine, but some hacks to the project files were needed to make VS.NET 2003 understand them. Attached are the project files that produced successful builds in both debug and release configurations, with the latest SVN, STLport and WTL. Oops, sorry, could somebody move this topic to the appropriate place? apexdc_0.2.1_VS.net2003_project_files.rar
-
I would like it too. Anyway, I am on wireless, and any speed limit that users can change themselves leads to abuse of the wireless link. I am doing an internal-only mod with remotely directed traffic limiting: the server broadcasts UDP packets that direct clients to a certain speed. The intention is to have UDP packet loss on wireless links automatically reduce bandwidth for clients with lower-quality links, while keeping high speeds for users that don't have any packet loss. It's still in the early stages; it's been a long time since I worked in C++ and I don't have VS 2005. Anyway, per-IP is a good way to go, I think. Full support for the idea.
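For the directing side I'm thinking of something as dumb as this, matching the client sketch from my earlier post (port, magic value and packet layout are again made up):

```cpp
#include <arpa/inet.h>
#include <cstdint>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#pragma pack(push, 1)
struct Beacon { uint32_t magic, seq, maxRateKbps, reserved; };
#pragma pack(pop)

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    int yes = 1;
    setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes));

    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(41000);                      // same hypothetical port
    dst.sin_addr.s_addr = inet_addr("192.168.1.255"); // the LAN broadcast address

    Beacon b = { 0x42434E31u, 0, 1024, 0 };  // direct clients to at most 1024 kbit/s
    for (;;) {
        ++b.seq;                             // clients count gaps in this number
        sendto(sock, &b, sizeof(b), 0, (sockaddr*)&dst, sizeof(dst));
        usleep(100 * 1000);                  // 10 beacons per second
    }
}
```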
-
Is it possible to first hash block #0 of each file and then re-hash entire files in pass #2? Like a fast hash-indexing pass to get the list of all files, then a slower pass #2 to complete the hashes. Block #0 could be used to intelligently detect whether a file is *potentially* the same file moved to another folder, and to avoid a complete re-hash... I'd say there are better ways to determine if it is at least almost the same file... anyway, before actually uploading you would force complete hashing of that particular file.
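Roughly what I mean, as a sketch: the FNV checksum below just stands in for a real TTH leaf, and all the names are made up:

```cpp
#include <cstdint>
#include <fstream>
#include <map>
#include <string>
#include <vector>

struct Fingerprint {
    int64_t  size;
    uint64_t block0;     // digest of the first 64 KiB
    bool operator<(const Fingerprint& o) const {
        return size != o.size ? size < o.size : block0 < o.block0;
    }
};

// Pass #1 work for one file: size plus a cheap digest of block #0.
Fingerprint fingerprint(const std::string& path) {
    std::ifstream in(path.c_str(), std::ios::binary);
    std::vector<char> buf(64 * 1024);
    in.read(buf.data(), (std::streamsize)buf.size());
    uint64_t h = 1469598103934665603ull;               // FNV-1a, placeholder only
    for (std::streamsize i = 0; i < in.gcount(); ++i)
        h = (h ^ (unsigned char)buf[i]) * 1099511628211ull;
    in.clear();                                        // clear EOF so we can seek
    in.seekg(0, std::ios::end);
    Fingerprint fp = { (int64_t)in.tellg(), h };
    return fp;
}

// Build the quick index; where a (size, block #0) pair matches the old index,
// the file was probably just moved, so its full hash is carried over and
// pass #2 can skip it. "" marks files still waiting for pass #2.
std::map<std::string, std::string>
quickIndex(const std::vector<std::string>& files,
           const std::map<Fingerprint, std::string>& oldHashes) {
    std::map<std::string, std::string> tth;            // path -> TTH
    for (const auto& f : files) {
        auto it = oldHashes.find(fingerprint(f));
        tth[f] = (it != oldHashes.end()) ? it->second : "";
    }
    return tth;
}
```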