|
|
|
Hentai@Home 1.6 Stable, Not the kind for horsies
|
Nov 18 2024, 04:30
|
ycFlames
Newcomer
Group: Recruits
Posts: 10
Joined: 17-November 24
|
Nice to see this project is still going, and going strong!
|
|
|
Nov 19 2024, 21:32
|
lz4515
Lurker
Group: Recruits
Posts: 4
Joined: 12-February 12
|
Emmmmm, it seems that there is something wrong with my client 42673.
Recently, my client keeps disconnecting from the server, and then I have to change the port and restart. But when I test those disabled ports from a public host, they pass and show as open.
This has already affected me a lot; my trust has been below 0 for a long time. Being banned from using H@H forever would be the worst news for me.
My client has run stably for years and reached the top 200 of the daily charts for several months. This problem only started about a month ago, which is strange.
|
|
|
Nov 19 2024, 21:43
|
Tenboro
|
QUOTE(lz4515 @ Nov 19 2024, 20:32) Recently, my client always disconnect to server, then I must change the port and restart. But when I test these disabled ports from a public domain, it's passed and enabled.
Most likely your ISP is blocking it, possibly only for connections not from inside the country.
|
|
|
|
|
|
Nov 20 2024, 08:15
|
Celtae
Group: Catgirl Camarilla
Posts: 226
Joined: 13-March 17
|
Over the past three days I have noticed that the following error message is frequently appearing in the logs of my H@H client: `java.net.SocketTimeoutException: Connect timed out`. This has not happened before, so I am wondering if there is a problem with my client. To help with troubleshooting, I have attached the relevant log files [gofile.io] for your review. If you could kindly assist me in identifying the cause of this problem, I would greatly appreciate it. Please let me know if there is any additional information I can provide to help resolve this issue. Thank you very much for your time and assistance!
|
|
|
|
|
|
Nov 22 2024, 16:35
|
Tenboro
|
Posting this here, since no one cares except for people who run H@H.
I pushed some significant changes to how static ranges are assigned. This basically gets rid of the last parts of the legacy assignment mechanism where clients are assigned ranges that aren't actually being used (P5), which are later promoted to "active" (P1-P3) ranges.
Instead, active ranges are now assigned directly, which means that all ranges added to clients from now on will be P1-P3. (P2-P4 ranges can still be promoted, of course.)
This has a number of effects. Most importantly, it significantly reduces the delay from when a new client is started (or reset) until it starts getting actual usage. Basically, a new client could now potentially start getting traffic after it's been (continuously) running for about a day, assuming that the trust and quality targets are met. It also reduces the system's overhead, since it won't have to dick around with ranges that aren't actually being used, which makes the whole system more efficient in various ways.
It also means I'll need to change the hath award calculation - since this is currently based on the total number of static ranges assigned, and clients usually have a lot fewer active ranges than total ranges, it will need to be derived from the number of active ranges instead. The new calculations will use the old numbers as a target, but it does mean that highly saturated regions (notably Europe) will probably see a reduction, while regions with high demand may see an increase. The calculations have not changed yet, this is just a heads-up.
After the reward calculations have been changed, existing P5 ranges will be removed. At that point, this will not affect the clients in any way.
|
|
|
|
|
|
Nov 29 2024, 13:45
|
Tenboro
|
The new hath reward calculations are now live. This changes the amount of hath awarded for assigned static ranges only; hath granted for traffic remains the same.
In the old calculations, it would grant you 0.01 hath per static range.
In the new calculations, it grants 0.025 hath per active range (P1-P3) and 0.05 hath per high-capacity range.
This reflects how much space the system reserves for these ranges internally - 500 MiB per active range and 1 GiB per high-capacity range. Ultimately, this means the calculations will now much better reflect the actual committed disk space of the client. Clients with many useful ranges will now get more hath, while clients with a lot of dead ranges (P4-P5) could get somewhat less.
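The rates above can be summarized in a small sketch (Python used for illustration; the per-range rates and reserved-space figures are taken from this post, while the function names are my own):

```python
# Sketch of the hath reward for assigned static ranges, per the rates above.
# New formula: 0.025 hath per active (P1-P3) range, 0.05 hath per
# high-capacity range; dead ranges (P4-P5) earn nothing.
# Hath granted for traffic is separate and not modeled here.

def range_hath_per_day(p1: int, p2: int, p3: int, hc: int) -> float:
    """Daily hath from static ranges under the new calculation."""
    active = p1 + p2 + p3
    return 0.025 * active + 0.05 * hc

def old_range_hath_per_day(total_static: int) -> float:
    """Old calculation: 0.01 hath per static range, regardless of priority."""
    return 0.01 * total_static

def reserved_space_mib(p1: int, p2: int, p3: int, hc: int) -> int:
    """Space reserved internally: 500 MiB per active range, 1 GiB per HC range."""
    return 500 * (p1 + p2 + p3) + 1024 * hc
```

For example, a hypothetical client with 200 active ranges and 100 high-capacity ranges would earn 0.025 × 200 + 0.05 × 100 = 10 hath/day from ranges, against roughly 200 GiB of reserved space.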
Globally, this somewhat increased the amount of hath granted for range allocations, especially in the regions that have no excess capacity. Right now, per region, the changes in hath granted per day for this are as follows:
Europe: 8149 > 8566
North America: 6849 > 8615
South America: 413 > 1363
Asia: 8488 > 8669
Oceania: 707 > 2064
China: 1710 > 3976
As mentioned in the previous post, the main reason for these changes is that P5 ranges will soon be removed, so if the calculations hadn't changed, there would have been a significant drop across the board.
--
In an unrelated change, applications for H@H clients in Europe have been suspended due to excess capacity. Donators and people who are already running H@H can still request new clients in Europe, but other users can no longer apply for clients in this region.
|
|
|
|
|
|
Nov 29 2024, 18:33
|
hiroaya
Newcomer
Group: Members
Posts: 49
Joined: 16-January 13
|
My oldest hath client's hath rate just dropped ~30% lol... (while newer ones increased ~20%) How long until it's back to normal?
|
|
|
Nov 29 2024, 21:11
|
jm320
Newcomer
Group: Gold Star Club
Posts: 74
Joined: 15-March 11
|
Possibly a silly question, but will the Hath calculation changes also affect rent-a-server slots? I usually try to keep at least a couple active all the time because my home ISP is a flaky bitch.
|
|
|
Nov 29 2024, 21:35
|
Tenboro
|
QUOTE(hiroaya @ Nov 29 2024, 17:33) How long it would be back to normal?
Not sure what you mean, but this is the new normal.
QUOTE(jm320 @ Nov 29 2024, 20:11) Possibly a silly question, but will the Hath calculation changes also affect rent-a-server slots? I usually try to keep at least a couple active all the time because my home ISP is a flaky bitch.
Hath rewards for Adopt-a-Server slots are fixed at 15 hath/day, which was not changed.
|
|
|
Nov 30 2024, 00:14
|
ddvd
Newcomer
Group: Catgirl Camarilla
Posts: 64
Joined: 16-October 19
|
(IMG: https://i.ibb.co/VgHPwpY/hath-rate.png)
Surprised to see my hath rate jump from ~900 to 1749. Thanks for the change! I wonder if the toplist calculation will also be updated accordingly?
|
|
|
Nov 30 2024, 09:04
|
Tenboro
|
QUOTE(ddvd @ Nov 29 2024, 23:14) I wonder if the toplist calculation will be also updated accordingly? The toplist calculations would not be affected by this change.
|
|
|
|
|
|
Dec 1 2024, 00:39
|
EmuAGR
Newcomer
Group: Members
Posts: 63
Joined: 8-February 13
|
QUOTE(Tenboro @ Nov 29 2024, 20:35) Not sure what you mean, but this is the new normal. Hath rewards for Adopt-a-Server slots are fixed at 15 hath/day, which was not changed.
All these changes keep punishing my long-running server... Last year I went from 80 to 30 hath/day. Last week I upgraded the server's hardware, increasing upload from 100 to 300 Mbps, and recovered from 30 to 60 hath/day. And now it's sunk even further, to 20 hath/day. My ranges also changed like this from last week:
P1 = 74, P2 = 85, P3 = 129, P4 = 153, HC = 157
P1 = 74, P2 = 85, P3 = 128, P4 = 287, HC = 158
How is this fair for a client which has been running 24/7 reliably for 7 years with a long-standing cache of 3 TB?
This post has been edited by EmuAGR: Dec 1 2024, 02:49
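For context, plugging the newer set of range counts above into the per-range rates from the Nov 29 announcement gives the range-derived portion of the reward. Traffic hath is separate, so this is only part of the reported daily total; a rough check in Python:

```python
# Range counts quoted above (after the change): P1=74, P2=85, P3=128, HC=158.
# Under the new formula, P4 ranges earn nothing.
active = 74 + 85 + 128          # P1-P3 (active) ranges
hc = 158                        # high-capacity ranges
range_hath = 0.025 * active + 0.05 * hc
print(f"{range_hath:.3f} hath/day from ranges")
```

This works out to roughly 15 hath/day from ranges, which is broadly consistent with the ~20 hath/day reported once traffic hath is added on top.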
|
|
|
|
|
|
Dec 1 2024, 05:33
|
ddvd
Newcomer
Group: Catgirl Camarilla
Posts: 64
Joined: 16-October 19
|
QUOTE(Tenboro @ Nov 30 2024, 02:04) The toplist calculations would not be affected by this change.
Thanks for the reply! Another minor question I wanted to ask for a long time: how strongly is the static range/P1 range allocation algorithm influenced by test speed alone, assuming other settings are near optimal? Right now my clients in the US have abundant storage (~20 TB), good quality (~9500), and run on port 443, so those settings are near optimal. My test speed is around 800 Mbps. I could double the link speed to 1.6 Gbps at some cost, but I wonder how much it would help with their overall performance (in terms of hit rate).
This post has been edited by ddvd: Dec 1 2024, 05:38
|
|
|
|
|
|
Dec 1 2024, 08:52
|
Tenboro
|
QUOTE(EmuAGR @ Nov 30 2024, 23:39) How is this fair for a client which has been running 24/7 reliably for 7 years with a long-standing cache of 3TB?
Not really sure what you expect me to do in this case. The formula was not changed just for the hell of it, but because the number it used to calculate the old rate is going away (or significantly decreasing, anyway). I'm aware that some clients that were tuned for max gains prior to the change to priority ranges back in 2023 have had a significant decrease - specifically clients in Europe that had maxed storage but comparatively low bandwidth - but the changes are as fair as I could make them without vastly inflating the hath supply or otherwise negatively affecting the system. Unfortunately, Europe is just over-saturated, due to the combination of cheap bandwidth and low population density relative to the other main regions, and "fixing" this is fundamentally impossible. If you want the old numbers back, instead of upgrading and competing for European traffic, I would recommend you move your client to a less saturated region if possible, preferably Asia.
QUOTE(ddvd @ Dec 1 2024, 04:33) Another minor question I wanted to ask for a long time: How strongly is the static range/P1 range allocation algorithm influenced by test speed alone, assume other settings are near optimal? Right now my clients in the US have abundant storage (~20TB), good quality (~9500) and running at port 443, so these settings are near optimal. My test speed is around 800Mbps. I could double the link speed to 1.6Gbps at some cost, but I wonder how much it will help with the overall performance of them (in terms of hit rate).
It's a factor of the scoring formula, so it has a strong influence, up to a point. That said, past a gigabit you're getting to the point where you will be hard pressed to get an accurate reading, so if there are costs involved it probably won't pay off.
|
|
|
|
|
|
Dec 2 2024, 22:51
|
ddvd
Newcomer
Group: Catgirl Camarilla
Posts: 64
Joined: 16-October 19
|
QUOTE(Tenboro @ Dec 1 2024, 01:52) It's a factor of the scoring formula, so it has a strong influence, up to a point. That said, past a gigabit you're getting to the point where you will be hard pressed to get an accurate reading, so if there are costs involved it probably won't pay off.
I see. I did notice that the speed test results are kind of unstable. For example, clients 49473 and 49474 are in the same datacenter with the same configuration, but one tests at 36 MB/s while the other gets 83 MB/s.
|
|
|
|
|
|
Dec 2 2024, 23:05
|
Tenboro
|
QUOTE(ddvd @ Dec 2 2024, 21:51) I see. I did notice the speed test results are kind of unstable. Like client 49473 and 49474 are from the same datacenter with the same configuration, but one speed is 36M/s while the other one is 83M/s.
Hm. Well, 49474 is very consistently testing much higher than 49473. As in, 49474 had an average of 42.5 MB/s with 112 tests, while 49473 only had an average of 19.6 MB/s with 110 tests. So 49474 really is just way faster for whatever reason.
|
|
|
Yesterday, 05:08
|
ddvd
Newcomer
Group: Catgirl Camarilla
Posts: 64
Joined: 16-October 19
|
QUOTE(Tenboro @ Dec 2 2024, 16:05) Hm. Well, 49474 is very consistently testing much higher than 49473. As in, 49474 had an average of 42.5 MB/s with 112 tests, while 49473 only had an average of 19.6 MB/s with 110 tests. So 49474 really is just way faster for whatever reason.
Interesting. Thanks for letting me know. Will dig into it to see what happened.
|
|
|
Yesterday, 06:07
|
kamio11
Group: Catgirl Camarilla
Posts: 1,343
Joined: 6-June 13
|
Given the changes in static ranges, is it still meaningful to list the total number of static ranges assigned to a client? All my clients (which have been running for 5-10 years) have the max 6000 static ranges, but far fewer P1-4 and HC ranges.
|
|
|
Yesterday, 06:23
|
ddvd
Newcomer
Group: Catgirl Camarilla
Posts: 64
Joined: 16-October 19
|
I did find a configuration issue and fixed it. Hopefully the speedtest result will be better now.
|
|
|
Yesterday, 08:47
|
Tenboro
|
QUOTE(kamio11 @ Dec 3 2024, 05:07) Given the changes in static ranges, is it still meaningful to list the total number of static ranges assigned to a client? All my clients (which have been running for 5-10 years) have the max 6000 static ranges, but far fewer P1-4 and HC ranges.
I guess if anything the new number is more meaningful. The old one would just keep going up over time assuming you hit certain thresholds, while the new one shows how many ranges are actually being used. Then again, I suppose we don't really need to show the total number of ranges; just the breakdown is plenty.
|
|
|
|
|
|
|
|