Hentai@Home 1.5, Security Kageki Revue Starlight
Dec 16 2019, 10:44
veiledf
Lurker
Group: Gold Star Club
Posts: 9
Joined: 8-July 08

I'm not a frequent poster nor visitor here, so I'm sorry if I'm missing something obvious.
1. Regarding Let's Encrypt: there is a way to employ LE without requesting limit increases (which is possible too, but will take a lot more time). You can assign common wildcard prefixes to groups of H@H clients, for example *.00.tls.hath.network ... *.ff.tls.hath.network, and individual hosts will respond on 0000.00.tls.hath.network ... ffffff.ff.tls.hath.network.
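If the group label is simply a prefix of the client ID, as the example names suggest, the host-to-group mapping is trivial. A minimal sketch; the client ID, label lengths, and domain layout here are illustrative assumptions, not the actual H@H scheme:

```shell
# Hypothetical mapping of a H@H client ID to its per-group hostname.
# Assumes six-hex-digit client IDs and 256 groups named by the ID's
# first two hex digits, as in the example names above.
client_id="1a2b3c"                              # made-up client ID
group=$(printf '%.2s' "$client_id")             # first two hex digits pick the group
host="${client_id}.${group}.tls.hath.network"
echo "$host"                                    # prints 1a2b3c.1a.tls.hath.network
```

Shrinking or growing the prefix length changes the group count, which is how the group size could be tuned against the weekly issuance limit.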
You can make the group size whatever it takes to issue certs for all H@H clients within one month under the 50-certs-per-week limit. I'm not sure of the network's total size now, but if it consists of 5000 active hosts, that works out to 25 clients per certificate.
This lessens the damage if one malicious client revokes a group cert. The more time passes, the more granular you can make the groups (theoretically down to one cert per client). As an added bonus, group prefixes should make it impossible to enumerate all clients via Certificate Transparency.
This is possible because the renewal limit is counted against each individual certificate, not against the registered domain.
You can perform all of the verification and renewals server-side through DNS TXT records, completely eliminating the trouble with the well-known port that LE's HTTP challenges require.
The described scheme will require a lot of orchestration on the server side, but it should be reliable in the end.
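The DNS-based validation mentioned above is ACME's dns-01 challenge: the server proves control of a name by publishing, at _acme-challenge.<name>, a TXT record containing the base64url-encoded SHA-256 of the key authorization (per RFC 8555). A sketch of computing that value with openssl; the token and account-key thumbprint below are made-up placeholders:

```shell
# dns-01 sketch: TXT value = base64url( SHA-256( token "." key-thumbprint ) )
token="evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"        # placeholder challenge token
thumbprint="jJg6cuzGcXp8MzCKrFhYGBspK0z_PtTc0ZQFFF6BfNE"   # placeholder account key thumbprint
key_auth="${token}.${thumbprint}"
txt_value=$(printf '%s' "$key_auth" | openssl dgst -sha256 -binary | base64 | tr '+/' '-_' | tr -d '=')
echo "$txt_value"
```

In practice an ACME client such as certbot or acme.sh does this through a DNS API hook; the point is that only DNS write access is needed, with no listening port on the H@H host.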
2. Regarding the changes imposed by Chrome: I for one welcome the change, once you are able to solve the key distribution issue. Simply because the less my provider knows about what originates from my host, the better.
This post has been edited by newbie88: Dec 16 2019, 10:50
Dec 16 2019, 13:29
ungrown
Newcomer
 Group: Members
Posts: 30
Joined: 7-April 11

QUOTE(Sapo84 @ Dec 16 2019, 16:37)  What you, apparently, can do is waste people's time in a pointless, unrealistic discussion while having no technical background and thus no real solution to the problem at hand.
Accept this fact and move on.
everything i mentioned needs NO technical background at all. if you can install a web browser app and use it well, then you can install a non-web-browser app and use it well too. if you can update a web browser app to the latest version, then you can also keep an old version and refuse to update. see? no technical skill here. accept this and move on.
Dec 16 2019, 13:48
ungrown
Newcomer
 Group: Members
Posts: 30
Joined: 7-April 11

QUOTE(newbie88 @ Dec 16 2019, 16:44)  You can assign common wildcard prefixes to the groups of H@H clients ... Described scheme will require a lot of orchestration on server side but it should be reliable in the end.
or just embed a step-by-step, semi-automatic script that helps users register with Let's Encrypt and get their own cert for every client. after all, the certs don't need to be the same; it's only the domain that needs to follow a fixed format. This post has been edited by ungrown: Dec 16 2019, 13:50
Dec 16 2019, 15:30
blue penguin
Group: Gold Star Club
Posts: 10,046
Joined: 24-March 12

QUOTE(ungrown @ Dec 16 2019, 11:29)  all i mentioned need NO technical background at all. if you can install a web browser app and use it well, then you can install a non-web-browser-app and use it well, too. if you can update a web browser app to the latest version, then you can keep an old version and refuse updating, too. see? no technical skill here.
accept this and move on.
I'm not joining this stupid discussion. I'm here to tell you that if you do not stop wasting people's time in this thread, you will get the stick treatment. If your overgrown ego requires you to keep telling people obviously more competent than you how stupid they are, go do it in a different place. No one cares how clever or superhuman you are unless you stop talking about how super you are and go do something tangible that proves it. E.g. once you write an application that is used by a two-digit percentage of the human population, and can be used to access EH, come back.
Dec 16 2019, 22:19
Hunter Nightblood
Newcomer
 Group: Members
Posts: 39
Joined: 14-March 12

QUOTE(ungrown @ Dec 16 2019, 04:29)  all i mentioned need NO technical background at all. if you can install a web browser app and use it well, then you can install a non-web-browser-app and use it well, too. if you can update a web browser app to the latest version, then you can keep an old version and refuse updating, too. see? no technical skill here.
accept this and move on.
First of all, every modern device comes with a web browser. And yes, I could install an app on my device, provided support is made for it (have fun creating a solution that supports ALL platforms to achieve parity with the current site), but why should I, the end user, have to go out of my way to install a program on my PC to access this site? Not to mention that if I have multiple devices, I now have to install that app on every fucking device I own. At that point I, as well as most of the user base of this site, would just go visit some other site where I don't have to jump through hoops to access the content. The solution you're suggesting is completely user-hostile, and not, as you say, something ANYONE can implement. You are underestimating the large amount of work that would need to be put in to make an app that not only reaches feature parity with the current site, but is accessible from any device. To put it simply, fuck off with that idea.
Dec 16 2019, 22:52
Tenboro

Any ideas along the lines of "people bring their own domain and cert" aren't workable, from both a usability perspective and a reliability one. That also goes for the "make visitors install something" suggestions. QUOTE(newbie88 @ Dec 16 2019, 09:44)  I'm not a frequent poster nor visitor here, so I'm sorry if I'm missing something obvious.
1. Regarding Let's Encrypt. There is a way to employ LE without requesting limit increases (which is possible too but will take a lot more time). You can assign common wildcard prefixes to the groups of H@H clients, for example:
You can make the group size as much as it would take to issue all H@H clients certs within 1 month using 50/crt/week limit. I'm not sure of total size of the network now, but if it consists of 5000 active hosts, this will lead to 25 clients per certificate.
I did request a limit increase some time back, but that might be a workable solution in the meantime. It would also allow us to issue certs per user rather than per client, seeing as some people have a bunch of clients.
Dec 17 2019, 00:16
jadoeman
Group: Members
Posts: 108
Joined: 28-February 15

If the current plan of action is to abide by a rate-limited cert requesting mechanism (i.e. vanilla Let's Encrypt, no Public Suffix List additions, etc.), then it might be prudent to start that process sooner rather than later. Even if there are no user clients in the wild right now, every week the cert requesting starts earlier is another 50 (or more, if the rate got bumped up) certs at our disposal. (Assuming it's not already being done, of course.)
I mean, let's say a new-and-improved 1.5 H@H launched halfway through January. That's four weeks out. Even if we were still rate-limited to 50/week, that would mean we could have 250 certs ready and waiting in that first week. That's a huge difference from launching with just 50 certs, and could jump-start things much quicker than otherwise possible.
(Assuming that stockpiling certs like that doesn't run afoul of some EULA or ToS somewhere.)
Edit: Oh. Let's Encrypt allows 50/week for new certs but only 5/week for renewals. Combined, those two place an upper limit on how many certs we could ever hold at a single time, since after 90 days they'd start expiring, and at 5/week renewals simply couldn't keep up with the 50/week creation. (Or, I guess, just let them expire and request new ones. I'm not sure how far back they'd look to check whether a specific domain was used before, and thus whether it would count as a "renewal".)
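For what it's worth, the stockpile arithmetic above works out as follows (a sketch assuming a flat 50 new certificates per week, with the launch week itself counted):

```shell
# Certs banked by launch if issuance starts now; numbers mirror the
# estimate in the post above (assumed flat 50/week, launch week included).
weeks_until_launch=4
per_week=50
stockpile=$(( (weeks_until_launch + 1) * per_week ))
echo "$stockpile"    # prints 250
```

The 90-day certificate lifetime caps how long such a stockpile stays valid, which is why the renewal limit matters here.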
This post has been edited by jadoeman: Dec 17 2019, 00:29
Dec 17 2019, 04:07
veiledf
Lurker
Group: Gold Star Club
Posts: 9
Joined: 8-July 08

QUOTE(jadoeman @ Dec 17 2019, 07:16)  Edit: Oh. Let's Encrypt has 50/week for new certs, but 5/week for renewals. Those two combined place an upper limit on how many certs we could ever have at a single time, as after 90 days they'd start expiring - and at 5/week for renewal they simply couldn't be renewed fast enough to keep up with the 50/week creation.
The renewal limit counts against each individual certificate, not the domain. So you can issue 5 renewed certificates per week for any particular CN, not just 5 for the whole registered domain.
Dec 17 2019, 05:22
jadoeman
Group: Members
Posts: 108
Joined: 28-February 15

QUOTE(newbie88 @ Dec 16 2019, 21:07)  Renewal limit counts toward individual cert, not domain.
So you can issue 5 renewed certificates in a week for the particular CN, not for registered domain.
Alright, that makes more sense (and is much better, to boot); the wording on the page I was looking at was a little vague, I suppose.
Dec 17 2019, 06:37
Kagoraphobia
Group: Global Mods
Posts: 11,741
Joined: 12-August 19

QUOTE(ungrown @ Dec 17 2019, 04:37) 
welcome to the ignored user list
Dec 17 2019, 09:47
Tenboro

QUOTE(jadoeman @ Dec 16 2019, 23:16)  If the current plan of action is to abide by a rate-limited cert requesting mechanism (ie vanilla Let's Encrypt, no public suffix list additions, etc), then it might be prudent to start that process earlier rather than later. Even if there's no user clients in the wild right now, at the very least every week earlier that the cert requesting is started is another 50 (or more if the rate got bumped up) certs at our disposal. (Assuming it's not already being done, of course.)
The rate limit request had been approved when I woke up this morning, so this is somewhat of a moot point now. We'll still be doing per-user subdomain-wildcard certs for various other reasons, though, such as being able to cycle DNS records without needing a short TTL or a cert reissue, as well as a mechanism that prevents issuing certs without DNS authority.
Dec 19 2019, 09:04
Tenboro

1.5.4 was released; see OP. This replaces the CA with Let's Encrypt and uses individually issued certs.
As far as timelines go, unless some issues are discovered I expect this to graduate to "stable" around new year's, after which I'll start lowering the quality for 1.4.2 clients. 1.4.2 should be phased out completely around the start of February.
Dec 20 2019, 03:05
veiledf
Lurker
Group: Gold Star Club
Posts: 9
Joined: 8-July 08

Thank you!
Could you share how long it took for LE to approve the limit increase?
Dec 20 2019, 07:00
mewsf
Group: Gold Star Club
Posts: 564
Joined: 24-June 14

Really nice work! Also a tip for those who are running the H@H 1.5 client over PPPoE connections with an MTU of 1492 or less: remember to clamp the TCP MSS to the PMTU for incoming connections. I'm using an OpenWrt router, and there it can be done like this:
CODE iptables -t mangle -A FORWARD -p tcp -i pppoe-wan -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
I've found that not doing so may cause SSL handshake failures for some users, which caused a quality drop for a long time. OpenWrt only clamps outgoing connections by default.
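For the curious, the reason a PPPoE link needs this at all is simple arithmetic: the 8-byte PPPoE header eats into the usual 1500-byte Ethernet MTU, so the largest TCP payload (the MSS) has to shrink to match. A sketch assuming IPv4 and no TCP options:

```shell
# MSS that actually fits a 1492-byte PPPoE MTU (IPv4, no TCP options):
mtu=1492                      # 1500-byte Ethernet MTU minus 8-byte PPPoE header
mss=$(( mtu - 20 - 20 ))      # minus 20-byte IPv4 header and 20-byte TCP header
echo "$mss"                   # prints 1452
```

If a peer keeps assuming a plain 1500-byte MTU and sends full 1460-byte segments, the oversized packets get dropped on the PPPoE hop, which matches the handshake failures described above.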
Dec 20 2019, 09:18
Tenboro

QUOTE(newbie88 @ Dec 20 2019, 02:05)  Could you share how long did it took for LE to approve limit increase?
About three weeks. It says on the rate limit page that it would take a few weeks, so I'm guessing that's about average.
Dec 20 2019, 22:31
blue penguin
Group: Gold Star Club
Posts: 10,046
Joined: 24-March 12

QUOTE(mewsf @ Dec 20 2019, 05:00)  Also a tip for those who are running h@h 1.5 client over pppoe connections that have 1492 or less mtu: remember to clamp tcp mss to pmtu for incoming connections ... openwrt only clamps for outgoing connections by default.
Good catch; several consumer-grade routers sometimes screw up SSL this way. I have added this to the wiki. That said, it is unlikely that most H@H users are capable of handling that much config
Dec 22 2019, 10:11
Mocka
Newcomer
 Group: Members
Posts: 36
Joined: 16-August 12

QUOTE(blue penguin @ Dec 20 2019, 21:31)  Good catch; several consumer-grade routers sometimes screw up SSL this way. I have added this to the wiki. That said, it is unlikely that most H@H users are capable of handling that much config
I'm one of them. I have no idea what any of that pppoe mss pmtu nonsense means. I just poke around until it works
Dec 23 2019, 01:34
blue penguin
Group: Gold Star Club
Posts: 10,046
Joined: 24-March 12

QUOTE(Mocka @ Dec 22 2019, 08:11)  I'm one of them. I have no idea what any of that pppoe mss pmtu nonsense means. I just poke around until it works
I'll try some nerd speak anyway. In simple terms: the internet uses IP and TCP (and HTTP on top of them for several things, but HTTP is not relevant here). IP has an MTU, which is the largest thing that can be transported in one packet. TCP has an MSS, which tells the other side the largest segment that can be sent over the current network. PMTU discovery is how a router chooses a good MTU. Unfortunately, some routers screw up: they choose a small MTU but do not inform the TCP layer. The sender may then answer with a TCP packet that is too big. Notably, SSL often tries big TCP packets to speed things up. If a packet is too big it is rejected and a smaller one is requested; this can happen a couple of times, timing out the SSL handshake. Thankfully iptables has --clamp-mss-to-pmtu, which does the adjusting automatically instead of requiring one to tune the numbers by hand.
Dec 24 2019, 14:25
Mocka
Newcomer
 Group: Members
Posts: 36
Joined: 16-August 12

QUOTE(blue penguin @ Dec 23 2019, 00:34)  I'll try some nerd speak anyway. In simple terms: the internet uses IP and TCP ... Thankfully iptables has --clamp-mss-to-pmtu, which does the adjusting automatically.
Thanks for explaining. I've had 10k max quality with 6000 static ranges since upgrading to 1.5 two days ago, so it seems to be working fine. At least for me
Dec 26 2019, 13:37
stanextm
Lurker
Group: Recruits
Posts: 6
Joined: 6-December 19

OK, there is something I want to ask about 1.5.4.
When I updated the client, everything seemed OK, but I'm also watching the "trust" shown on the H@H webpage drop every time the uptime updates.
Is there something in 1.5.4 that causes this issue?