|
|
|
Hentai@Home 1.6 Stable, Not the kind for horsies |
|
Aug 7 2024, 07:50
|
Tenboro
|
Looks like it overloaded, which would cause that sort of drop. You may need to lower your max speed if it keeps happening, since it means it's getting requests faster than it can handle.
|
|
|
Aug 11 2024, 14:06
|
iorikosakura
Lurker
Group: Lurkers
Posts: 3
Joined: 11-August 24
|
I saw that it's possible to set up a connection to the backend image server through a SOCKS proxy. Does this mean it's possible to use the H@H client in China now?
|
|
|
Aug 11 2024, 14:58
|
Tenboro
|
QUOTE(iorikosakura @ Aug 11 2024, 14:06) I saw that it's possible to set up a connection to the backend image server through a SOCKS proxy. Does this mean it's possible to use the H@H client in China now?
If the problem preventing you from running one is that the image servers are blocked, sure. Otherwise it wouldn't help.
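For reference, one generic way to do this with any Java program is the JVM's standard SOCKS networking properties; this is a sketch rather than an H@H-specific feature, and the host, port, and jar name below are placeholders for your own setup. Note that these properties route all of the client's outbound connections through the proxy, not just the ones to the image servers.
CODE
# Standard JVM SOCKS properties (placeholder host/port/jar name):
java -DsocksProxyHost=127.0.0.1 -DsocksProxyPort=1080 -jar HentaiAtHome.jar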
|
|
|
Aug 12 2024, 00:59
|
iorikosakura
Lurker
Group: Lurkers
Posts: 3
Joined: 11-August 24
|
QUOTE(Tenboro @ Aug 11 2024, 20:58) If the problem preventing you from running one is that the image servers are blocked, sure. Otherwise it wouldn't help.
Great! Let me try running the H@H client on my device at home.
|
|
|
|
|
|
Sep 13 2024, 10:07
|
Celtae
Group: Catgirl Camarilla
Posts: 226
Joined: 13-March 17
|
Hi, I'm having a bit of trouble with H@H v1.6.3. When I run it, I get the following error messages:
CODE
2024-09-13T04:41:43Z [ERR] {java.base/java.lang.Throwable$WrappedPrintStream.println(Throwable.java:807)} java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
2024-09-13T04:41:43Z [ERR] {java.base/java.lang.Throwable$WrappedPrintStream.println(Throwable.java:807)} at java.base/java.lang.Thread.start0(Native Method)
2024-09-13T04:41:43Z [ERR] {java.base/java.lang.Throwable$WrappedPrintStream.println(Throwable.java:807)} at java.base/java.lang.Thread.start(Thread.java:1513)
2024-09-13T04:41:43Z [ERR] {java.base/java.lang.Throwable$WrappedPrintStream.println(Throwable.java:807)} at hath.base.CakeSphere.stillAlive(CakeSphere.java:43)
2024-09-13T04:41:43Z [ERR] {java.base/java.lang.Throwable$WrappedPrintStream.println(Throwable.java:807)} at hath.base.ServerHandler.stillAliveTest(ServerHandler.java:213)
2024-09-13T04:41:43Z [ERR] {java.base/java.lang.Throwable$WrappedPrintStream.println(Throwable.java:807)} at hath.base.HentaiAtHomeClient.run(HentaiAtHomeClient.java:267)
2024-09-13T04:41:43Z [ERR] {java.base/java.lang.Throwable$WrappedPrintStream.println(Throwable.java:807)} at java.base/java.lang.Thread.run(Thread.java:1570)
CODE
2024-09-13T04:41:34Z [info] {4016/223.160.225.128} Code=200 Bytes=535212 GET /h/292bdfedb53066b6ce704117845f557b707dad2d-535212-1062-1500-jpg/keystamp=1726203000-b64237769b;fileindex=159751735;xres=2400/031.jpg HTTP/1.1
2024-09-13T04:41:35Z [info] {4016/223.160.225.128} Code=200 Bytes=535212 Finished processing request in 0.33 seconds (1646.81 KB/s)
[24422.850s][warning][os,thread] Failed to start thread "Unknown thread" - pthread_create failed (EAGAIN) for attributes: stacksize: 1024k, guardsize: 0k, detached.
[24422.852s][warning][os,thread] Failed to start the native thread for java.lang.Thread "Thread-4434"
[24430.098s][warning][os,thread] Failed to start thread "Unknown thread" - pthread_create failed (EAGAIN) for attributes: stacksize: 1024k, guardsize: 0k, detached.
[24430.099s][warning][os,thread] Failed to start the native thread for java.lang.Thread "Thread-4435"
2024-09-13T04:49:45Z [info] Shutting down...
2024-09-13T04:49:46Z [info] Shutdown in progress - please wait up to 30 seconds
2024-09-13T04:49:56Z [info] Waiting for 1 request(s) to finish; will wait for another 20 seconds
2024-09-13T04:50:01Z [info] Waiting for 1 request(s) to finish; will wait for another 15 seconds
2024-09-13T04:50:06Z [info] Waiting for 1 request(s) to finish; will wait for another 10 seconds
2024-09-13T04:50:11Z [info] Waiting for 1 request(s) to finish; will wait for another 5 seconds
2024-09-13T04:50:15Z [info] I don't hate you
I have confirmed that the system's memory usage does not exceed 20% of the total memory, and I have tried adding the parameters `-Xms2g -Xmx2g` to provide Java with more memory, but the issue still persists. I'd be really grateful if you could help me understand what might be causing this problem and how I can resolve it. Thank you so much!
|
|
|
|
|
|
Sep 13 2024, 17:41
|
Tenboro
|
Guessing you're running some flavor of containerized Linux. Check what your thread limit is set to:
cat /proc/sys/kernel/threads-max
If it's low (less than 1000), increase it if possible.
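For reference, a minimal sketch of checking and raising it, assuming a standard sysctl setup; 65536 is just an illustrative value:
CODE
cat /proc/sys/kernel/threads-max                       # current limit
sysctl -w kernel.threads-max=65536                     # raise until next reboot (run as root)
echo "kernel.threads-max = 65536" >> /etc/sysctl.conf  # persist across reboots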
|
|
|
Sep 14 2024, 12:27
|
Celtae
Group: Catgirl Camarilla
Posts: 226
Joined: 13-March 17
|
QUOTE(Tenboro @ Sep 13 2024, 23:41) Guessing you're running some flavor of containerized Linux. Check what your thread limit is set to:
cat /proc/sys/kernel/threads-max
If it's low (less than 1000), increase it if possible.
Thank you for your suggestion. I checked the thread limit using `cat /proc/sys/kernel/threads-max`, and the result is 255901, so it seems the thread limit is not low. Also, I'm not using containerized Linux as you mentioned; I'm running the application directly on the server without Docker or similar technologies. Do you have any other ideas about what might be causing this issue or how I might resolve it? Thanks for your help.
|
|
|
|
|
|
Sep 14 2024, 13:35
|
Tenboro
|
QUOTE(Celtae @ Sep 14 2024, 12:27) Thank you for your suggestion. I checked the thread limit using `cat /proc/sys/kernel/threads-max`, and the result is 255901, so it seems the thread limit is not low.
Also, I'm not using containerized Linux as you mentioned; I'm running the application directly on the server without Docker or similar technologies.
Do you have any other ideas about what might be causing this issue or how I might resolve it?
Could you post the output of ulimit -a with the user you run H@H as? Also, what flavor and version of Java are you using?
|
|
|
|
|
|
Sep 14 2024, 14:53
|
Celtae
Group: Catgirl Camarilla
Posts: 226
Joined: 13-March 17
|
QUOTE(Tenboro @ Sep 14 2024, 19:35) Could you post the output of ulimit -a with the user you run H@H as?
CODE
$ ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 127950
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 127950
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

QUOTE(Tenboro @ Sep 14 2024, 19:35) Also, what flavor and version of Java are you using?
jdk-22.0.2 [ www.oracle.com] x64 Compressed Archive
|
|
|
|
|
|
Sep 14 2024, 16:25
|
Tenboro
|
Ew, Oracle Java. I recommend switching to some variant of OpenJDK, for one. H@H is built on [ adoptium.net] Temurin 8 LTS, but newer ones should work too if you use Java for anything else. Or just use the one that comes with your distro.

Also, I'm not entirely sure if it could be the cause, but your open files limit is pretty low, so you could consider increasing it with ulimit -n 8192 (or higher) and see if that helps. To make it permanent, you would have to put these lines in /etc/security/limits.conf:
* soft nofile 8192
* hard nofile 8192
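After re-logging in, you can confirm the new limits took effect with the shell builtins:
CODE
ulimit -Sn    # soft open files limit
ulimit -Hn    # hard open files limit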
|
|
|
|
|
|
Sep 14 2024, 19:20
|
Celtae
Group: Catgirl Camarilla
Posts: 226
Joined: 13-March 17
|
QUOTE(Tenboro @ Sep 14 2024, 22:25) Ew, Oracle Java. I recommend switching to some variant of OpenJDK, for one. H@H is built on [ adoptium.net] Temurin 8 LTS, but newer ones should work too if you use Java for anything else. Or just use the one that comes with your distro.
Because I selected "Verify cache integrity on next startup," it will take at least a day to complete the check before H@H can run. After the check is finished, I will shut down H@H and try Adoptium Temurin 8 LTS.
QUOTE(Tenboro @ Sep 14 2024, 22:25) Also, I'm not entirely sure if it could be the cause, but your open files limit is pretty low, so you could consider increasing it with ulimit -n 8192 (or higher) and see if that helps. To make it permanent, you would have to put these lines in /etc/security/limits.conf:
* soft nofile 8192
* hard nofile 8192
I'm not sure whether raising the open files limit could cause system instability. May I ask approximately how many open files H@H might require?
|
|
|
|
|
|
Sep 14 2024, 20:39
|
Tenboro
|
QUOTE(Celtae @ Sep 14 2024, 19:20) Due to certain considerations, I'm not sure if changing the open files limit will cause system instability. May I ask approximately how many open files H@H might require?
I mean, it's mostly an anti-DoS limit for multiuser systems. I set it to 100K+ on all my systems, since every socket also counts as an open "file", and I don't ever want a process to die because it hits the limit. How many open files H@H requires depends on how much traffic it gets, but it really shouldn't ever need to go that high, it's just the only one of those limits I could see it conceivably hit. You could check how many "files" H@H has open at any time by using lsof -p <processid>
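If you just want a rough count instead of the full listing, piping through wc works; the PID placeholder is whatever your H@H process reports, and the output includes a header line, so treat the number as approximate:
CODE
lsof -p <processid> | wc -l    # approximate open files/sockets for the process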
|
|
|
Sep 16 2024, 05:53
|
LordH3lix
Lurker
Group: Recruits
Posts: 9
Joined: 15-January 19
|
I have a few raspberry pis just sitting around. How do I get my H@H node up and running?
|
|
|
|
|
|
Sep 18 2024, 05:06
|
Celtae
Group: Catgirl Camarilla
Posts: 226
Joined: 13-March 17
|
QUOTE(Tenboro @ Sep 15 2024, 02:39) I mean, it's mostly an anti-DoS limit for multiuser systems. I set it to 100K+ on all my systems, since every socket also counts as an open "file", and I don't ever want a process to die because it hits the limit. How many open files H@H requires depends on how much traffic it gets, but it really shouldn't ever need to go that high, it's just the only one of those limits I could see it conceivably hit.
You could check how many "files" H@H has open at any time by using lsof -p <processid>
I have already tried using Temurin 8 LTS, but the issue still persists. I checked the current number of open files, and neither the system nor the user is anywhere close to reaching the limit. The user running H@H only has 93 open files, so I don’t think the open files limit is causing the error. Additionally, I observed that the error only seems to occur when the "Archive Download" function is triggered multiple times in a short period. In other words, as long as I don't perform operations involving the download of a certain number of galleries, the OutOfMemoryError is not triggered. However, according to the log, it seems like the galleries are downloaded page by page, which makes it hard to imagine how this could trigger an OutOfMemoryError.
|
|
|
|
|
|
Sep 18 2024, 09:11
|
Tenboro
|
QUOTE(Celtae @ Sep 18 2024, 05:06) I have already tried using Temurin 8 LTS, but the issue still persists. I checked the current number of open files, and neither the system nor the user is anywhere close to reaching the limit. The user running H@H only has 93 open files, so I don’t think the open files limit is causing the error.
Additionally, I observed that the error only seems to occur when the "Archive Download" function is triggered multiple times in a short period. In other words, as long as I don't perform operations involving the download of a certain number of galleries, the OutOfMemoryError is not triggered. However, according to the log, it seems like the galleries are downloaded page by page, which makes it hard to imagine how this could trigger an OutOfMemoryError.
I guess you could try increasing the stack limit, but I have never heard of anyone having any issues with it set to 8192k, which seems to be the default across the board. What specific distro+kernel are you using, by the way?
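For reference, a sketch of both directions, with illustrative values; the jar name depends on your install, and it's uncertain whether either change helps with an EAGAIN on thread creation:
CODE
ulimit -s            # show the current stack size limit (kbytes)
ulimit -s 16384      # raise it for this shell before starting H@H
# Alternatively, shrink the JVM's per-thread stack via the standard -Xss flag:
java -Xss512k -jar HentaiAtHome.jar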
|
|
|
|
|
|
Sep 18 2024, 10:29
|
Celtae
Group: Catgirl Camarilla
Posts: 226
Joined: 13-March 17
|
QUOTE(Tenboro @ Sep 18 2024, 15:11) I guess you could try increasing the stack limit, but I have never heard of anyone having any issues with it set to 8192k, which seems to be the default across the board.
I might try it next week. QUOTE(Tenboro @ Sep 18 2024, 15:11) What specific distro+kernel are you using, by the way?
CODE
$ uname -r
4.4.302+
Temurin 8 LTS: 8.0.422+5 JRE Linux x64
|
|
|
Sep 18 2024, 10:40
|
Tenboro
|
QUOTE(Celtae @ Sep 18 2024, 10:29)
So it's a Synology NAS? Hard to say, but I guess it could be some quirk specific to those.
|
|
|
Sep 18 2024, 10:48
|
Celtae
Group: Catgirl Camarilla
Posts: 226
Joined: 13-March 17
|
QUOTE(Tenboro @ Sep 18 2024, 16:40) So it's a Synology NAS?
Yes, I am using a Synology NAS. QUOTE(Tenboro @ Sep 18 2024, 16:40) Hard to say, but I guess it could be some quirk specific to those.
I can understand if you want to attribute this issue to the brand, as that type of response is quite common. However, I have been running H@H normally for almost seven years, and suddenly not being able to run it properly makes me feel a bit sad.
|
|
|
|
|
|
Sep 18 2024, 12:12
|
Tenboro
|
QUOTE(Celtae @ Sep 18 2024, 10:48) I can understand if you want to attribute this issue to the brand, as that type of response is quite common. However, I have been running H@H normally for almost seven years, and suddenly not being able to run it properly makes me feel a bit sad.
Well, the kernel fails with an EAGAIN (resource temporarily unavailable) when the JVM tries to create a thread. That is not an issue with H@H, and there have been no changes to H@H that would explain that. If it started recently, something else changed.
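For anyone debugging the same thing, a few places worth checking when pthread_create returns EAGAIN; replace <pid> with the H@H process ID:
CODE
cat /proc/<pid>/limits           # effective ulimits of the running JVM
cat /proc/sys/kernel/pid_max     # system-wide PID/thread ID ceiling
cat /proc/<pid>/cgroup           # cgroups the process belongs to (possible pid caps)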
|
|
|
|
|
|
|
|