Hentai@Home 1.3, We will build a wall, and Mexico is going to read it
Oct 5 2016, 17:02
Tenboro

QUOTE(Chunga @ Oct 5 2016, 00:12)  What are the chances to get an auto-updater?

Auto-updaters are kinda iffy; I don't really want anything on the site to be able to push locally executable code to people's computers without any interaction. If there was some kind of urgent problem that required old clients to go away and update immediately, the server can force clients with particular build numbers to shut down, but for security reasons that's about the limit for how much direct control I want the server to have over the clients.

QUOTE(Phreeman @ Oct 5 2016, 06:50)  It works! Problem solved, thanks(ᗒᗨᗕ)

Thanks for confirming.
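A rough sketch of what such a build-number kill switch could look like on the client side. The command name, class, and field names here are all hypothetical illustrations, not the actual H@H protocol; the point is that the only thing the server can trigger is a shutdown, never arbitrary code.

```java
// Hypothetical sketch of the "server can shut down particular builds"
// mechanism described above. ShutdownCheck, CLIENT_BUILD, and the
// "KILL_BUILDS" command are illustrative names, not real H@H protocol.
public class ShutdownCheck {
    static final int CLIENT_BUILD = 96;

    /** Interpret a server command; only a shutdown order for a listed
     *  build is honored, which is the security boundary described above. */
    static boolean shouldShutDown(String serverCommand, int clientBuild) {
        // e.g. the server sends "KILL_BUILDS 94,95,96"
        if (!serverCommand.startsWith("KILL_BUILDS ")) {
            return false;
        }
        for (String b : serverCommand.substring("KILL_BUILDS ".length()).split(",")) {
            if (Integer.parseInt(b.trim()) == clientBuild) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(shouldShutDown("KILL_BUILDS 94,95,96", CLIENT_BUILD));  // true
        System.out.println(shouldShutDown("SPEED_TEST", CLIENT_BUILD));            // false
    }
}
```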
Oct 5 2016, 18:54
Alt1523
Newcomer
 Group: Members
Posts: 47
Joined: 3-May 12

I updated to 1.3 today and it resolved the startup issues I had with my shitty modem-router, like you predicted it would. Thanks!

QUOTE(Tenboro @ Aug 26 2016, 19:10)  What usually happens is that the "new fancy router" is a pile of crap that prunes NAT table entries after just a few minutes, which means that the client never gets the startup acknowledgement if it takes too long to start up.
On the bright side, I am working on the next version of H@H, which won't need to do some of those startup things. That should eliminate this particular cause of failure, and several others as well. Though it depends on other changes that are being slowly phased in due to server processing capacity not being infinite, so there won't be any public test builds available for probably two to three weeks.

This post has been edited by Alt1523: Oct 5 2016, 18:55
Oct 6 2016, 02:17
mike23
Group: Gold Star Club
Posts: 132
Joined: 23-August 07

How long should the cache cleanup take? It's been running for about 10 minutes now and has only decreased to 101.90% from 102.10%.
It's not a big deal since it's a one-time thing, but it seems slow. My total cache is set to 400GB.
Oct 6 2016, 02:19
Maximum_Joe
Group: Gold Star Club
Posts: 24,074
Joined: 17-April 11

I'd say just under an hour for your case.
Oct 6 2016, 02:25
mike23
Group: Gold Star Club
Posts: 132
Joined: 23-August 07

QUOTE(Maximum_Joe @ Oct 5 2016, 20:19)  I'd say just under an hour for your case.

That feels slow for deleting 8 gigabytes from an SSD. For what it's worth, I'm seeing about 12% CPU usage from H@H on an i7 5930K (6 cores + hyperthreading) with very little disk access.

edit: Just finished. About 30 minutes total, I think.

This post has been edited by mike23: Oct 6 2016, 02:35
Oct 6 2016, 02:55
foobar20324
Group: Gold Star Club
Posts: 136
Joined: 5-September 15

Experimental indeed. 1.3 works somewhat decently on systems that can provide sufficient IOPS, with both significantly lower CPU overhead and lower memory consumption than the 1.2 line. Less than 40MB without the use of the "less memory" mode is definitely nice. Unfortunately, it still suffers greatly on systems that can't provide that many IOPS, e.g. ultra-cheap VPSes with network-backed storage and the like. (Which CAN easily reach a stable 1k/10k rating with customized clients!)

QUOTE(mike23 @ Oct 6 2016, 02:17)  How long should the cache cleanup take? It's been running for about 10 minutes now and it has decreased to 101.90% from 102.10%.

And that's another issue. Traversing the cache directory in lexical order is rather inefficient :/ Well, at least unless the file system happens to be NTFS and the OS Windows.
Oct 6 2016, 13:34
Tenboro

QUOTE(mike23 @ Oct 6 2016, 02:25)  How long should the cache cleanup take? It's been running for about 10 minutes now and has only decreased to 101.90% from 102.10%.

Should only have taken a few seconds to a minute.

QUOTE(mike23 @ Oct 6 2016, 02:25)  That feels slow for deleting 8 gigabytes from an SSD. For what it's worth, I'm seeing about 12% CPU usage from H@H on an i7 5930K (6 cores + hyperthreading) with very little disk access.

That seems exceptionally slow, especially with an SSD. I'd love to see the I/O wait times for this system, as there is a chance there's something wrong with the SSD itself. The only other thing I can think of is that there were some files in your cache tree with very old timestamps, which is a known potential slowdown when upgrading from 1.2 to 1.3, but that's a one-time thing. Do you still have the log file from that startup, by any chance?

QUOTE(foobar20324 @ Oct 6 2016, 02:55)  Unfortunately, it still suffers greatly on systems that can't provide that many IOPS, e.g. ultra-cheap VPSes with network-backed storage and the like. (Which CAN easily reach a stable 1k/10k rating with customized clients!) And that's another issue. Traversing the cache directory in lexical order is rather inefficient.

It's not traversing directories to do the cache cleanup; the startup indexing builds the list of candidates, so the directory can be accessed directly for the cleanup. Nor does it ask the file system to do a lexical sort; that's done after the fact. But you are correct, it's not designed for systems with seconds of I/O wait.

QUOTE(mrpops @ Oct 6 2016, 10:47)  no mention of >5k static ranges

All in due time.
Oct 7 2016, 05:42
mike23
Group: Gold Star Club
Posts: 132
Joined: 23-August 07

QUOTE(Tenboro @ Oct 6 2016, 07:34)  Should only have taken a few seconds to a minute. That seems exceptionally slow, especially with an SSD. I'd love to see the I/O wait times for this system, as there is a chance there's something wrong with the SSD itself. The only other thing I can think of is that there were some files in your cache tree with very old timestamps, which is a known potential slowdown when upgrading from 1.2 to 1.3, but that's a one-time thing. Do you still have the log file from that startup, by any chance?

I don't have any logs; I actually have logging to disk turned off in my settings, and I'm not sure when I did that. This was all on Windows, I don't think I mentioned that. The SSD seems healthy: tests are showing 500+ MB/s read/write, 90k random read IOPS, and 80k random write IOPS. SMART is clear. I do have about 40k files in the cache from 2014 and 150k from 2015, out of 1.2M files total.
Oct 7 2016, 13:28
Tenboro

QUOTE(mike23 @ Oct 7 2016, 05:42)  I do have about 40k files in the cache from 2014 and 150k from 2015, out of 1.2M files total.

Yeah, that would do it. On the plus side, I did some back-of-the-napkin development last night, and 1.3.2 should come with a vastly more efficient pruning mechanism, which should let it run even on those ultra-cheap VPS jobbies that were previously mentioned.
Oct 8 2016, 18:00
Tenboro

1.3.2 was released, which primarily improves the file pruning and cache initialization performance. Updating is recommended if you are running a client with a full cache, or if you have other performance issues. It's otherwise not necessary.
Changes:
- Rather than blindly iterating over the static range directories to find files to prune, we now cache the timestamp of the oldest remaining file from each static range and scan this data structure to find the one with the oldest file. This should significantly reduce disk I/O when the cache is close to full, and vastly decreases the time required for the startup prune if one is needed.
- The aged static range scan was integrated with the cache initialization, which cuts the startup initialization from three passes to two, making startup a bit faster.
- The cache phase 1 cleanup pass now postpones some parts of the cleanup until the phase 2 init pass, where it can be done far more efficiently.
- The file pruning mechanism will now adjust the actual cutoff time depending on the age of the oldest cached file, which cuts down how often the pruner has to run.
- Added a sanity check at the start of the cache cleanup to warn and delay wiping the cache if the client has a cache but no static ranges assigned.
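The pruning change in the first bullet above can be sketched roughly as follows. RangeAgeIndex and its methods are invented names for illustration, not the actual H@H classes; the idea is simply that scanning a small in-memory map beats walking every static range directory on disk.

```java
// Sketch of the 1.3.2 pruning idea: keep the timestamp of the oldest
// remaining file per static range, and scan this in-memory structure
// to decide which single directory to open for pruning.
// Names are illustrative, not from the real H@H code.
import java.util.HashMap;
import java.util.Map;

public class RangeAgeIndex {
    // static range id -> timestamp (millis) of its oldest cached file
    private final Map<String, Long> oldestFilePerRange = new HashMap<>();

    /** Record (or refresh, after a prune) a range's oldest file timestamp. */
    public void update(String rangeId, long oldestTimestamp) {
        oldestFilePerRange.put(rangeId, oldestTimestamp);
    }

    /** Returns the static range holding the globally oldest file, or null if empty. */
    public String rangeWithOldestFile() {
        String best = null;
        long bestTs = Long.MAX_VALUE;
        for (Map.Entry<String, Long> e : oldestFilePerRange.entrySet()) {
            if (e.getValue() < bestTs) {
                bestTs = e.getValue();
                best = e.getKey();
            }
        }
        return best;  // only this one directory needs to be touched on disk
    }

    public static void main(String[] args) {
        RangeAgeIndex idx = new RangeAgeIndex();
        idx.update("0a1b", 1400000000000L);
        idx.update("ffee", 1300000000000L);
        System.out.println(idx.rangeWithOldestFile());  // prints "ffee"
    }
}
```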
Oct 11 2016, 15:50
Tenboro

1.3.3 was released, which primarily makes startups after a clean shutdown effectively instant. Updating is not required unless you want the faster startups.
Changes:
- Added persistence to the new cache handler, which means it no longer has to scan the cache on regular startups. The client will now start almost instantly regardless of cache size and CPU/storage performance, and go straight to the speed test.
A cache scan will now only be required if the shutdown was not clean, if the client had a reduction in the number of static ranges, or if a client that previously used --use-less-memory is started without it. It can also be triggered manually with --rescan-cache.
Note that the first startup after upgrading to 1.3.3 will always involve a cache scan, as the necessary data for fast startup is not saved by earlier clients.
- The static range age cache now tracks all static ranges instead of just the "aged" ones. This was necessary for long-term cache persistence, and because of the changes in 1.3.2 it will have no impact on performance.
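A minimal sketch of what such persistence could look like, assuming a simple line-based state file. The file format, class name, and methods here are guesses for illustration; the real client's data file is not documented in this thread. On a clean shutdown the index is flushed to disk, and a missing state file on startup signals that a full cache scan is needed.

```java
// Sketch of 1.3.3-style persistence: flush the cache index to a small
// state file on clean shutdown so the next startup can skip the scan.
// File format and names are assumptions, not the real client's.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class CacheState {
    /** Serialize "rangeId timestamp" lines; called during a clean shutdown. */
    public static void save(Path file, Map<String, Long> index) throws IOException {
        String body = index.entrySet().stream()
                .map(e -> e.getKey() + " " + e.getValue())
                .collect(Collectors.joining("\n"));
        Files.writeString(file, body);
    }

    /** Load the saved index; a missing file means the shutdown was not
     *  clean and the caller must fall back to a full cache scan. */
    public static Map<String, Long> load(Path file) throws IOException {
        Map<String, Long> index = new LinkedHashMap<>();
        for (String line : Files.readAllLines(file)) {
            String[] parts = line.split(" ");
            index.put(parts[0], Long.parseLong(parts[1]));
        }
        return index;
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("hath", ".state");
        Map<String, Long> index = new LinkedHashMap<>();
        index.put("0a1b", 1400000000000L);
        save(f, index);
        System.out.println(load(f));  // prints {0a1b=1400000000000}
    }
}
```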
Oct 11 2016, 23:43
Maximum_Joe
Group: Gold Star Club
Posts: 24,074
Joined: 17-April 11

QUOTE(Tenboro @ Oct 11 2016, 09:50)  --rescan-cache
What's the difference between that and --verify_cache / --force_dirty?
Oct 11 2016, 23:48
Tenboro

QUOTE(Maximum_Joe @ Oct 11 2016, 23:43)  What's the difference between that and --verify_cache / --force_dirty?
--force-dirty was removed with 1.3.0. --verify-cache implies --rescan-cache and will additionally verify the SHA-1 hash of all the files. It's mostly there so the server can tell it to do so if any static ranges were pruned during a reset or cache size reduction.
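Conceptually, the verification pass --verify-cache performs looks something like the helper below. CacheVerifier is a hypothetical name, and the real client also derives the expected hash from the file id itself, which is not shown here; this only illustrates the SHA-1 recomputation and comparison.

```java
// Sketch of what --verify-cache does conceptually: recompute each cached
// file's SHA-1 and compare it against the expected hash. Illustrative
// helper only; not the actual H@H implementation.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class CacheVerifier {
    /** Returns the lowercase hex SHA-1 of a file's contents. */
    public static String sha1Hex(Path file) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        byte[] digest = md.digest(Files.readAllBytes(file));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    /** A cached file is valid if its contents still match the expected hash. */
    public static boolean verify(Path file, String expectedSha1)
            throws IOException, NoSuchAlgorithmException {
        return sha1Hex(file).equalsIgnoreCase(expectedSha1);
    }

    public static void main(String[] args) throws Exception {
        Path f = Files.createTempFile("demo", ".bin");
        Files.writeString(f, "abc");
        System.out.println(sha1Hex(f));  // a9993e364706816aba3e25717850c26c9cd0d89d
    }
}
```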
Oct 12 2016, 06:55
tes143
Lurker
Group: Lurkers
Posts: 2
Joined: 14-December 13

hah 1.3.3

openjdk version "1.8.0_102"
OpenJDK Runtime Environment (build 1.8.0_102-b14)
OpenJDK 64-Bit Server VM (build 25.102-b14, mixed mode)
4.7.6-1-ARCH #1 SMP PREEMPT Fri Sep 30 19:28:42 CEST 2016 x86_64 GNU/Linux

log_err
CODE
2016-10-12T04:35:03Z Logging started
2016-10-12T04:35:04Z [WARN] Invalid fileid "ID"
2016-10-12T04:35:04Z [ERR] {java.lang.Throwable$WrappedPrintStream.println(Throwable.java:748)} java.lang.NullPointerException
2016-10-12T04:35:04Z [ERR] {java.lang.Throwable$WrappedPrintStream.println(Throwable.java:748)} at org.hath.base.HVFile.getHVFileFromFile(HVFile.java:104)
2016-10-12T04:35:04Z [ERR] {java.lang.Throwable$WrappedPrintStream.println(Throwable.java:748)} at org.hath.base.HVFile.getHVFileFromFile(HVFile.java:94)
2016-10-12T04:35:04Z [ERR] {java.lang.Throwable$WrappedPrintStream.println(Throwable.java:748)} at org.hath.base.CacheHandler.startupCacheCleanup(CacheHandler.java:337)
2016-10-12T04:35:04Z [ERR] {java.lang.Throwable$WrappedPrintStream.println(Throwable.java:748)} at org.hath.base.CacheHandler.<init>(CacheHandler.java:87)
2016-10-12T04:35:04Z [ERR] {java.lang.Throwable$WrappedPrintStream.println(Throwable.java:748)} at org.hath.base.HentaiAtHomeClient.run(HentaiAtHomeClient.java:125)
2016-10-12T04:35:04Z [ERR] {java.lang.Throwable$WrappedPrintStream.println(Throwable.java:748)} at java.lang.Thread.run(Thread.java:745)
Oct 12 2016, 07:55
tes143
Lurker
Group: Lurkers
Posts: 2
Joined: 14-December 13

Sorry, my bad. A few days ago I installed btsync to back up the cache folder, and the .sync directory (inside the cache folder) ruined everything.
Oct 12 2016, 09:07
foobar20324
Group: Gold Star Club
Posts: 136
Joined: 5-September 15

QUOTE(tes143 @ Oct 12 2016, 07:55)  Sorry, my bad. A few days ago I installed btsync to back up the cache folder, and the .sync directory (inside the cache folder) ruined everything.

Not your fault. There is actually a null pointer check missing in there; that invalid directory should have been cleared out automatically.
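Based on the stack trace in the earlier post, the crash happens because getHVFileFromFile() returns null for a file whose name is not a valid H@H file id (like btsync's .sync data) and the caller dereferences it. A sketch of the missing check follows; the method name comes from the trace, but the bodies and the id-matching rule are simplified stand-ins, not the real code.

```java
// Defensive sketch of the missing null check discussed above.
// getHVFileFromFile follows the stack trace's name; bodies are assumed.
import java.io.File;

public class StartupCleanupSketch {
    /** Stand-in for org.hath.base.HVFile.getHVFileFromFile: returns null
     *  when the filename does not parse as a valid file id. */
    static String getHVFileFromFile(File file) {
        String name = file.getName();
        // assumed rule: valid ids start with a 40-char lowercase SHA-1
        return name.matches("[0-9a-f]{40}.*") ? name : null;
    }

    /** Skip-and-warn instead of crashing on foreign files in the cache tree. */
    static boolean cleanupEntry(File file) {
        String hvFile = getHVFileFromFile(file);
        if (hvFile == null) {
            System.out.println("[WARN] Skipping invalid cache entry: " + file.getName());
            return false;  // previously: NullPointerException here
        }
        return true;  // proceed with normal cleanup for this entry
    }

    public static void main(String[] args) {
        System.out.println(cleanupEntry(new File(".sync")));  // false, with a warning
    }
}
```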
Oct 12 2016, 13:29
Tenboro

QUOTE(tes143 @ Oct 12 2016, 07:55)  Sorry, my bad. A few days ago I installed btsync to back up the cache folder, and the .sync directory (inside the cache folder) ruined everything.

Technically a bug. But the system won't let other files remain in the cache directory tree, so btsync might not work very well anyway. I suggest just using rsync if you want a backup; you can even get it via Cygwin if you're on a Windows system.
Oct 13 2016, 03:30
yulisunny
Lurker
Group: Lurkers
Posts: 1
Joined: 13-October 16

This is great!
Oct 13 2016, 12:20
Sapo84
Group: Gold Star Club
Posts: 3,332
Joined: 14-June 09

Updated one of my clients to 1.3.3; it went pretty smoothly. Is it normal that it's now using 2-3GB more than before?