|
|
|
Hentai@Home 1.2.3, The Need For Speed |
|
Jan 5 2015, 13:14
|
Tenboro
|
Clients with a large number of files could run into issues where the startup sequence took so long that it hit a server-side timeout. This was particularly likely on a slow CPU, like a Raspberry Pi or a NAS box. This update changes how cache lists are generated and sent to the server, speeding the process up by a significant amount. It also refactors part of the startup sequence to happen outside of the critical timeout-able section.

You don't need to apply this update unless you are having issues with slow startups, but it would be nice if some people with very large caches could try it out and compare startup times. Download from the usual place.

For information on how to join Hentai@Home, check out The Hentai@Home Project FAQ.
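If you want to put a number on the comparison, one rough way is to timestamp the client's console output and note how long it takes from launch until startup finishes (this assumes you run the client from a shell; the jar name is the standard one):

CODE
# prefix each log line with a wall-clock timestamp
java -jar HentaiAtHome.jar 2>&1 | while IFS= read -r line; do
    printf '%s  %s\n' "$(date +%H:%M:%S)" "$line"
done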
|
|
|
|
|
|
Jan 5 2015, 13:55
|
LostLogia4
Group: Gold Star Club
Posts: 2,716
Joined: 4-June 11
|
Where's the stopwatch when I need it? Oh, I do have my good ol' phone... right. Well, I think it's around 15 seconds per 1,000 files on my end, like before... nvm, turns out the improvement is that the cache list is now transferred in segments.

As for the one from the previous thread, I seem to have gotten several issues mixed up there. What I really meant was that I believe the negative trust was caused by the server side expecting the cache size to have already been expanded from the get-go, and the client keeps getting negative trust when there isn't as much space as expected.

Also, both my clients still have the same 163 static ranges as before. Was that the intended effect?

This post has been edited by LostLogia4: Jan 5 2015, 14:19
|
|
|
|
|
|
Jan 5 2015, 15:01
|
kamio11
Group: Catgirl Camarilla
Posts: 1,306
Joined: 6-June 13
|
Tested with one of my clients with a ~350 GB cache. Startup time went from 20-30 minutes to 2-3 minutes.
|
|
|
Jan 5 2015, 15:08
|
showoff
Group: Gold Star Club
Posts: 3,773
Joined: 31-December 14
|
May I ask how to upgrade properly?
|
|
|
Jan 5 2015, 15:24
|
Tenboro
|
QUOTE(LostLogia4 @ Jan 5 2015, 12:55) Also, both my clients still have the same 163 static ranges as before. Was that the intended effect?

Depends on the disk allocation, but as you have more than two clients and still don't include the IDs, I can't check them.

QUOTE(kamio11 @ Jan 5 2015, 14:01) Tested with one of my clients with a ~350 GB cache. Startup time went from 20-30 minutes to 2-3 minutes.

I guess that qualifies as "significant". Thanks for checking.

QUOTE(showoff @ Jan 5 2015, 14:08) May I ask how to upgrade properly?

Just shut down your client, extract the .jar files from the archive and overwrite the existing ones, then start it again.
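On Linux, that boils down to something like this (archive name and install path are examples; adjust for your own setup):

CODE
# stop the running client first, then:
unzip -o HentaiAtHome_1.2.3.zip '*.jar' -d /path/to/hath/
java -jar /path/to/hath/HentaiAtHome.jar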
|
|
|
Jan 5 2015, 15:26
|
showoff
Group: Gold Star Club
Posts: 3,773
Joined: 31-December 14
|
QUOTE(Tenboro @ Jan 5 2015, 21:24) Just shut down your client, extract the .jar files from the archive and overwrite the existing ones, then start it again.
Got it!
|
|
|
|
|
|
Jan 5 2015, 16:40
|
LostLogia4
Group: Gold Star Club
Posts: 2,716
Joined: 4-June 11
|
QUOTE(Tenboro @ Jan 5 2015, 21:24) Depends on the disk allocation, but as you have more than two clients and still don't include the IDs, I can't check them.

...I can only allocate around 24 GB to clients A and B, while the disk space needed to host this many ranges has bumped from ~20 GB to ~28 GB. Also, I've expanded the disk space allocation for client 12460 to well above the required amount to host 230 static ranges, but I keep getting negative trust for this particular client even after restarting countless times.

QUOTE(showoff @ Jan 5 2015, 21:08) May I ask how to upgrade properly?

Assuming you're running a Linux machine:

CODE
unzip -o HentaiAtHome_1.2.3.zip -d /home/user/hath/

and restart the client. I've hot-replaced the files like that all the time and never had a problem with it, afaik.

This post has been edited by LostLogia4: Jan 6 2015, 06:38
|
|
|
|
|
|
Jan 5 2015, 22:43
|
Tenboro
|
If your client is running 1.2.2+ and is online right now, then yes, it's been run. Note that it only takes it down to 150 MB/range for now.

QUOTE(LostLogia4 @ Jan 5 2015, 15:40) Also, I've expanded the disk space allocation for client 12460 to well above the required amount to host 230 static ranges, but I keep getting negative trust for this particular client even after restarting countless times.

Nothing to do with the disk space; it just tests really, really poorly right now. As the testing mechanism hasn't changed in a year, it's probably a server or network issue.
|
|
|
Jan 6 2015, 02:03
|
teenyman45
Group: Gold Star Club
Posts: 1,573
Joined: 12-July 10
|
Two questions:
1) How much wear and tear will the current H@H put on an SSD, since that's what I've got in my rig?
2) In the process of changing H@H, did you also update the main page dashboard to not show HV XP info?
|
|
|
|
|
|
Jan 6 2015, 02:59
|
LostLogia4
Group: Gold Star Club
Posts: 2,716
Joined: 4-June 11
|
QUOTE(Tenboro @ Jan 6 2015, 04:43) Nothing to do with the disk space; it just tests really, really poorly right now. As the testing mechanism hasn't changed in a year, it's probably a server or network issue.

Funny how that happens, because the client with the negative trust issue is colocated on a server where all the other clients are well-behaved (albeit with reduced quality).

QUOTE(teenyman45 @ Jan 6 2015, 08:03) 1) How much wear and tear will the current H@H put on an SSD, since that's what I've got in my rig?

Pretty much negligible, because it takes years to accumulate hundreds of gigabytes of cache, so most of the traffic is on the reading side. And I'm pretty sure that today's flash drives wouldn't wear out from terabytes of reading.

This post has been edited by LostLogia4: Jan 6 2015, 06:38
|
|
|
|
|
|
Jan 6 2015, 07:01
|
Achcloan
Newcomer
Group: Catgirl Camarilla
Posts: 29
Joined: 16-November 14
|
|
|
|
Jan 6 2015, 07:09
|
Achcloan
Newcomer
Group: Catgirl Camarilla
Posts: 29
Joined: 16-November 14
|
Cool - thanks.
|
|
|
Jan 6 2015, 12:17
|
Tenboro
|
QUOTE(teenyman45 @ Jan 6 2015, 01:03) 1) How much wear and tear will the current H@H put on an SSD, since that's what I've got in my rig?

Less than on a platter disk. How much data it writes depends on the cache size, but it shouldn't notably affect its life expectancy.

QUOTE(teenyman45 @ Jan 6 2015, 01:03) 2) In the process of changing H@H, did you also update the main page dashboard to not show HV XP info?

Not in the process of changing H@H, but I did temporarily remove it.

QUOTE(Achcloan @ Jan 6 2015, 06:01)

Fixed.
|
|
|
|
|
|
Jan 6 2015, 19:15
|
teenyman45
Group: Gold Star Club
Posts: 1,573
Joined: 12-July 10
|
QUOTE(LostLogia4 @ Jan 5 2015, 19:59) Pretty much negligible, because it takes years to accumulate hundreds of gigabytes of cache, so most of the traffic is on the reading side. And I'm pretty sure that today's flash drives wouldn't wear out from terabytes of reading.

QUOTE(Tenboro @ Jan 6 2015, 05:17) Less than on a platter disk. How much data it writes depends on the cache size, but it shouldn't notably affect its life expectancy.

Huh. I was under the impression that there was constant writing and re-writing. Switching an older C300 drive over to this shouldn't be that bad, then. So once my neighborhood finally dumps Comcast, with its data caps and the "mysterious" losses of connectivity whenever I use significant throughput pretty much anywhere but on Steam... I may finally be able to give H@H a try.
|
|
|
|
|
|
Jan 7 2015, 00:19
|
Tenboro
|
QUOTE(teenyman45 @ Jan 6 2015, 18:15) Huh. I was under the impression that there was constant writing and re-writing.

If you run it in low memory mode it will be more, but a relatively high-traffic test server I'm running (40 GB cache) averages about two writes per second on the disk, and it does other stuff as well. Considering I'm running databases with three orders of magnitude more write traffic on SSDs without any issues, it should be very insignificant.
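If you want to check what your own client is doing, iostat gives a quick read on per-disk write rates (this assumes the sysstat package is installed and your cache lives on sda):

CODE
# report transfers/s and kB written/s for sda, sampled every 5 seconds
iostat -d sda 5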
|
|
|
|
|
|
Jan 7 2015, 02:12
|
MrCrackR
Newcomer
Group: Recruits
Posts: 10
Joined: 28-June 10
|
For all those people using a Raspberry Pi and running into the following error:

CODE
java.lang.OutOfMemoryError: Java heap space

I'd advise you to start the java executable with the parameters "-Xms512m -Xmx512m". It also requires you to enable swap (even if it does some damage to your SD card) with a size of at least 256m for a Model B.

This post has been edited by MrCrackR: Jan 7 2015, 02:13
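If you haven't set up swap before, one way to do it is a plain swap file (the path and size here are just examples; adjust to taste):

CODE
# create and enable a 256 MB swap file
sudo dd if=/dev/zero of=/swapfile bs=1M count=256
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# add "/swapfile none swap sw 0 0" to /etc/fstab to keep it across reboots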
|
|
|
Jan 7 2015, 04:00
|
blue penguin
Group: Gold Star Club
Posts: 10,044
Joined: 24-March 12
|
You're running with the --use_less_memory flag on the Pi, right?
I haven't run H@H on my Pi since version 1.0.1, so I may be wrong, but that flag limited memory use enough for a Pi.
This post has been edited by blue penguin: Jan 7 2015, 04:00
|
|
|
Jan 7 2015, 04:21
|
iDShaDoW
Group: Members
Posts: 584
Joined: 4-January 14
|
I'm not able to give much of an estimate of how long it took before, but when you say it should significantly speed up H@H's startup time, that is definitely the case.
My cache is ~160 GB right now, and the difference is leaps and bounds faster than before. Instead of slowly cycling through 1,000 files at a time, it's just 16 segments and done in a matter of moments.
|
|
|
|
|
|
Jan 7 2015, 04:24
|
MrCrackR
Newcomer
Group: Recruits
Posts: 10
Joined: 28-June 10
|
QUOTE(blue penguin @ Jan 7 2015, 03:00) You're running with the --use_less_memory flag on the Pi, right?

Nope, the full command I use is:

CODE
java -Xms512m -Xmx512m -jar HentaiAtHome.jar --disable_bwm

Memory consumption is just like in version 1.2.1 until the upload of the cache list starts. Even after a successful upload it doesn't shrink anymore. 1.2.1 works fine with exactly the same command.

EDIT: I didn't use --use_less_memory in 1.2.1, but maybe I should give it a try after the next maintenance. I ran H@H (1.2.1), nginx, php5-fpm, mysql, minidlna, rpimonitor, shellinabox and tt-rss at the same time and still had 100 MB left without the use of swap. Now that's all stopped.

This post has been edited by MrCrackR: Jan 7 2015, 04:34
|
|
|
|
|
|
|
|
|
|
|