Hentai@Home 1.3, We will build a wall, and Mexico is going to read it |
|
Nov 19 2016, 11:36
|
zbxehzqn
Newcomer
 Group: Recruits
Posts: 15
Joined: 4-October 16

|
Tenboro: Is the removal of new files in 1.3.3 okay?
|
|
|
Nov 19 2016, 13:33
|
Tenboro

|
QUOTE(zbxehzqn @ Nov 19 2016, 10:36)  Tenboro: Is the removal of new files in 1.3.3 okay?
No idea what you mean, remove what new files?
No idea what you mean, remove what new files?
|
|
|
Nov 19 2016, 17:44
|
zbxehzqn
Newcomer
 Group: Recruits
Posts: 15
Joined: 4-October 16

|
I mean the "pcache_*" files. I can show changes to the code so it does not use the pcache files. It starts fast in my tests. I will try to see why the code I proposed before does not work.
|
|
|
|
 |
|
Nov 19 2016, 18:14
|
zbxehzqn
Newcomer
 Group: Recruits
Posts: 15
Joined: 4-October 16

|
Tenboro: Try this. startupInitCache() does not have to be exactly this code, but the part from "walkFile dw = new walkFile();" onwards does. I only changed the walkFile part. CacheHandler.java:

CODE
private void startupInitCache() throws IOException {
    long oldestLastModified = System.currentTimeMillis();
    long recentlyAccessedCutoff = oldestLastModified - 604800000;
    String oldestStaticRangeElement = null;

    // if --verify-cache was specified, we use this shiny new FileValidator to avoid having
    // to create a new MessageDigest and ByteBuffer for every single file in the cache
    FileValidator validator = null;
    int printFreq;

    if(Settings.isVerifyCache()) {
        Out.info("CacheHandler: Loading cache with full file verification. Depending on the size of your cache, this can take a long time.");
        validator = new FileValidator();
        printFreq = 1000;
    } else {
        Out.info("CacheHandler: Loading cache...");
        printFreq = 10000;
    }

    walkFile dw = new walkFile();
    dw.recentlyAccessedCutoff = recentlyAccessedCutoff;
    dw.validator = validator;
    dw.oldestLastModified = oldestLastModified;
    dw.printFreq = printFreq;
    Files.walkFileTree(cachedir.toPath(), EnumSet.of(FileVisitOption.FOLLOW_LINKS), 3, dw);
    oldestStaticRangeElement = (String) dw.oldestStaticRangeElement;

    if(oldestStaticRangeElement != null) {
        // unless something is really weird, if we have any aged static ranges at all, one of
        // them will be the one with the oldest file. if this is not the case, we just start
        // the LRU scan from the first range
        lruStaticRangePointer = agedStaticRanges.indexOf(oldestStaticRangeElement);

        if(lruStaticRangePointer < 0) {
            lruStaticRangePointer = 0;
            Out.debug("CacheHandler: Static range with oldest element was not found or is not aged, LRU will scan starting from first range");
        } else {
            Out.debug("CacheHandler: Static range with oldest element has been marked as " + oldestStaticRangeElement + " (index=" + lruStaticRangePointer + ")");
        }
    }

    // if the cache gets near the limit, start pruning files up to a day newer than the oldest file found
    lruLastModifiedPruneCutoff = oldestLastModified + 86400000;
    lruCurrentPassOldest = System.currentTimeMillis();

    Out.info("CacheHandler: Finished initializing the cache (" + cacheCount + " files, " + cacheSize + " bytes)");
    updateStats();
}

private class walkFile extends SimpleFileVisitor<Path> {
    long oldestLastModified = 0;
    long fileLastModified = 0;
    int printFreq = 0;
    Path cfile = null;
    private FileValidator validator;
    private long recentlyAccessedCutoff;
    private Object oldestStaticRangeElement;
    private LinkedList<DirPath> dirs = new LinkedList<>();

    class DirPath {
        String name;
        boolean empty = true;
    }

    @Override
    public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
        DirPath dirPath = new DirPath();
        dirPath.name = dir.getFileName().toString();
        dirs.addLast(dirPath);
        return FileVisitResult.CONTINUE;
    }

    @Override
    public FileVisitResult visitFile(Path cfile, BasicFileAttributes attr) throws IOException {
        HVFile hvFile = HVFile.getHVFileExistsFile(cfile, attr, validator);

        if (hvFile == null) {
            Out.debug("CacheHandler: The file " + cfile + " was corrupt.");
            Files.delete(cfile);
        } else if (!Settings.isStaticRange(hvFile.getFileid())) {
            Out.debug("CacheHandler: The file " + cfile + " was not in an active static range.");
            Files.delete(cfile);
        } else {
            addFileToActiveCache(hvFile);
            fileLastModified = attr.lastModifiedTime().toMillis();

            if (fileLastModified > recentlyAccessedCutoff) {
                // if lastModified is from the last week, mark this as recently accessed
                // in the LRU cache. (this does not update the metadata)
                markRecentlyAccessed(hvFile, true);
            }

            // dir has files
            for(DirPath dir : dirs) {
                dir.empty = false;
            }
        }

        if (fileLastModified < oldestLastModified) {
            // cache this as the static range directory with the currently oldest file.
            // this is used whenever we need to free up some space, to avoid having to
            // start over from the first range after every restart.
            oldestLastModified = fileLastModified;
            oldestStaticRangeElement = cfile.getFileName().toString().substring(0, 4);
            String staticRange = dirs.get(dirs.size() - 2).name + dirs.getLast().name;
            staticRangeOldest.put(staticRange, oldestLastModified);
        }

        if (cacheCount % printFreq == 0) {
            Out.info("CacheHandler: Loaded " + cacheCount + " files so far...");
        }

        return FileVisitResult.CONTINUE;
    }

    @Override
    public FileVisitResult postVisitDirectory(Path dir, IOException attr) throws IOException {
        DirPath visited = dirs.removeLast();
        if(visited.empty) {
            Files.delete(dir);
        } else if (dirs.size() == 1) {
            if(++foundStaticRanges % 100 == 0) {
                Out.info("CacheHandler: Found " + foundStaticRanges + " static ranges with files so far...");
            }
        }
        return FileVisitResult.CONTINUE;
    }
}
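For readers unfamiliar with the java.nio.file visitor API that walkFile builds on, here is a small self-contained sketch (the class name WalkDemo is invented for illustration) showing the callback ordering that Files.walkFileTree guarantees: preVisitDirectory fires before a directory's entries, visitFile once per file, and postVisitDirectory after all entries have been seen. That ordering is what makes the empty-directory cleanup in postVisitDirectory above safe.

```java
import java.io.IOException;
import java.nio.file.FileVisitOption;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.ArrayList;
import java.util.EnumSet;
import java.util.List;

public class WalkDemo {
    // records the order in which the visitor callbacks fire
    static final List<String> events = new ArrayList<>();

    public static void main(String[] args) throws IOException {
        events.clear();

        // build a tiny two-level tree like cache/ab/cd/<file>
        Path root = Files.createTempDirectory("demo");
        Path range = Files.createDirectories(root.resolve("ab").resolve("cd"));
        Files.write(range.resolve("somefile"), new byte[]{1, 2, 3});

        // depth 3 and FOLLOW_LINKS mirror the call in startupInitCache()
        Files.walkFileTree(root, EnumSet.of(FileVisitOption.FOLLOW_LINKS), 3,
            new SimpleFileVisitor<Path>() {
                @Override
                public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) {
                    events.add("pre:" + dir.getFileName());
                    return FileVisitResult.CONTINUE;
                }

                @Override
                public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                    events.add("file:" + file.getFileName());
                    return FileVisitResult.CONTINUE;
                }

                @Override
                public FileVisitResult postVisitDirectory(Path dir, IOException exc) {
                    events.add("post:" + dir.getFileName());
                    return FileVisitResult.CONTINUE;
                }
            });

        // expected order: pre:<root>, pre:ab, pre:cd, file:somefile, post:cd, post:ab, post:<root>
        System.out.println(events);
    }
}
```

Because postVisitDirectory runs only after every child has been visited, a visitor can safely delete a directory it has just confirmed to be empty, exactly as walkFile does.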
getHVFileExistsFile() is the same as before; I pasted it here as a reminder. HVFile.java:

CODE
public static HVFile getHVFileExistsFile(Path file, BasicFileAttributes attr, FileValidator validator) {
    String fileid = file.getFileName().toString();

    try {
        HVFile hvFile = getHVFileFromFileid(fileid);

        if(attr.size() != hvFile.getSize()) {
            return null;
        }

        if(validator != null) {
            if(!validator.validateFile(file, fileid.substring(0, 40))) {
                return null;
            }
        }

        return hvFile;
    } catch(java.io.IOException e) {
        e.printStackTrace();
        Out.warning("Warning: Encountered IO error computing the hash value of " + file);
    }

    return null;
}

I was told my English is bad, so I tried to make this easier to read. Was this post easier to read?
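As an aside for readers: the 40-character prefix passed to validateFile has the length of a SHA-1 hex digest, so a FileValidator presumably recomputes the file's hash and compares it to the name. The sketch below is a guess at that behavior, not the actual H@H code; sha1Hex and validate are invented names.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class HashCheckDemo {
    // compute the SHA-1 of a file's contents as lowercase hex (40 characters)
    static String sha1Hex(Path file) throws IOException, NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-1").digest(Files.readAllBytes(file));
        StringBuilder sb = new StringBuilder(40);
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // a plausible stand-in for FileValidator.validateFile(file, expectedHash):
    // the file is valid if its content hash matches the expected digest
    static boolean validate(Path file, String expectedSha1) throws IOException, NoSuchAlgorithmException {
        return sha1Hex(file).equals(expectedSha1);
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("demo", ".bin");
        Files.write(tmp, "hello".getBytes("UTF-8"));
        // SHA-1("hello") = aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
        System.out.println(validate(tmp, "aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d"));
    }
}
```

A real implementation would reuse one MessageDigest and a streaming buffer rather than reading whole files into memory, which is exactly the optimization the --verify-cache comment above describes.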
|
|
|
|
 |
|
Nov 20 2016, 19:34
|
Vulpix
Newcomer
 Group: Recruits
Posts: 15
Joined: 4-September 08

|
Would it be possible to make folder names great again? (ha) I'm talking about cases where & or ' ends up in the name of the gallery. It gets escaped, so for example & ends up as CODE &amp; and ' ends up as CODE &#039; ... That makes sense for browsers, but for folder names, not so much. In other words, could we unescape gallery names for the H@H downloader? At least these two, which are very common and pose no issue for either of the major filesystems.
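For reference, undoing the two entities Vulpix mentions amounts to a pair of string replacements before the title is used as a directory name. A minimal sketch (the method name unescapeForFolder is invented for illustration, not taken from the H@H source):

```java
public class UnescapeDemo {
    // undo the two HTML entities mentioned above before using a gallery title
    // as a folder name; real code would handle more entities than these two.
    // note: "&amp;" is replaced last, so an input like "&amp;#039;" unescapes
    // once to "&#039;" instead of being double-unescaped to "'"
    static String unescapeForFolder(String name) {
        return name.replace("&#039;", "'").replace("&amp;", "&");
    }

    public static void main(String[] args) {
        System.out.println(unescapeForFolder("Nidoqueen &amp; Anon&#039;s Hotel Adventure"));
        // prints: Nidoqueen & Anon's Hotel Adventure
    }
}
```

The replacement order matters for any unescaper: handling &amp; last prevents it from manufacturing new entity sequences that then get unescaped a second time.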
|
|
|
|
 |
|
Nov 20 2016, 21:43
|
Tenboro

|
QUOTE(Vulpix @ Nov 20 2016, 18:34)  Would it be possible to make folder names great again? (ha) I'm talking about cases where & or ' ends up in the name of the gallery. It gets escaped, so for example & ends up as CODE &amp; and ' ends up as CODE &#039; ... That makes sense for browsers, but for folder names, not so much. In other words, could we unescape gallery names for the H@H downloader? At least these two, which are very common and pose no issue for either of the major filesystems.
It should be doing that already, do you have any example galleries where it doesn't?
|
|
|
Nov 22 2016, 08:50
|
zbxehzqn
Newcomer
 Group: Recruits
Posts: 15
Joined: 4-October 16

|
Tenboro: Is that code okay this time?
|
|
|
Nov 22 2016, 16:11
|
Vulpix
Newcomer
 Group: Recruits
Posts: 15
Joined: 4-September 08

|
QUOTE(Tenboro @ Nov 20 2016, 22:43)  It should be doing that already, do you have any example galleries where it doesn't?
https://e-hentai.org/g/998278/15fd0124f9/ for example. This one ended up looking this way in my DL folder: CODE Nidoqueen &amp; Anon&#039;s Hotel Adventure [998278] (I originally had to put a space inside that apostrophe escape sequence because the forum was automatically unescaping it.)

Also, not sure if this is the right place, but since we got this new and improved GUI, would it be possible to get Quality and Trust displayed as well? We're polling the server for data every few minutes anyway, and it would be nice to see those two quite important numbers in the GUI without having to visit the page. Thanks!
|
|
|
|
 |
|
Nov 23 2016, 02:55
|
Tenboro

|
QUOTE(Vulpix @ Nov 22 2016, 15:11)  https://e-hentai.org/g/998278/15fd0124f9/ for example. This one ended up looking this way in my DL folder: CODE Nidoqueen &amp; Anon&#039;s Hotel Adventure [998278]
Should be fixed now.
QUOTE(Vulpix @ Nov 22 2016, 15:11)  Also, not sure if this is the right place, but since we got this new and improved GUI, would it be possible to get Quality and Trust displayed as well? We're polling the server for data every few minutes anyway, and it would be nice to see those two quite important numbers in the GUI without having to visit the page. Thanks!
I'll consider adding something like that, but it would need some changes on both the server and client side.
|
|
|
|
 |
|
Nov 24 2016, 08:31
|
KitKat31337
Lurker
Group: Recruits
Posts: 6
Joined: 3-August 14

|
I upgraded my client, and now almost immediately after startup, within about 5-10 minutes, it starts spamming "[WARN] Failed stillAlive test: (FAIL_NOT_LOGGED_IN) - will retry later" in the log and stops serving or downloading files.
|
|
|
|
 |
|
Nov 24 2016, 16:15
|
Logii
Group: Gold Star Club
Posts: 1,475
Joined: 18-April 13

|
My mysterious bluescreen problem seems to have returned with 1.3.3 (I poorly described it earlier here: 1, 2, but it seemed to be gone after I replaced my network adapter about a month ago). How would I go about reverting to 1.2.6?

QUOTE(Tenboro @ Oct 1 2016, 12:17)  Because of this reorg, it is not trivial to revert to H@H 1.2.6, as you would have to manually collapse the cache tree from two to one directory levels.

Does this mean I would need to manually copy all the folders from the cache subfolders (cache\XX\XY) to a single-level cache (cache\XY)?

It is entirely possible that the problem was never fixed by the new network adapter and the PC just happened to survive about twice as long between crashes as before. However, my PC had its longest uptime since I started running H@H after I replaced the network adapter, leading me to believe it had been a hardware problem, that it had been solved, and that it would be safe to upgrade H@H. I upgraded a week ago, and last night my PC crashed, at a fairly typical interval for crashes before the new network adapter.

Now that I have written this far, I realize the problem probably isn't with 1.3.3, and reverting to 1.2.6 probably won't help much. Oh well; if anyone knows solutions to netio.sys (NETIO+106fd) bluescreens related to H@H, I would appreciate hearing them. It is possible that the crashes are unrelated to H@H, but they started after I started running H@H, and I would have to shut down my client for at least a month or two to confirm whether it is the cause.
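For what it's worth, the walkFile code earlier in this thread builds a static range name by concatenating the two directory names, which suggests that collapsing the 1.3 tree back to one level means moving cache\ab\cd\* into cache\abcd\. The sketch below illustrates that guess; the combined-name layout is an assumption, not an official migration procedure, so back up the cache before trusting it.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class CollapseDemo {
    // list a directory's entries eagerly so we can move things while iterating
    static List<Path> children(Path dir) throws IOException {
        try (Stream<Path> s = Files.list(dir)) {
            return s.collect(Collectors.toList());
        }
    }

    // move cache/<ab>/<cd>/* to flat/<abcd>/* -- the "abcd" naming is an
    // assumption based on the walkFile code above, not a documented layout
    static void collapse(Path cache, Path flat) throws IOException {
        for (Path first : children(cache)) {
            if (!Files.isDirectory(first)) continue;
            for (Path second : children(first)) {
                if (!Files.isDirectory(second)) continue;
                Path target = Files.createDirectories(flat.resolve(
                        first.getFileName().toString() + second.getFileName().toString()));
                for (Path f : children(second)) {
                    Files.move(f, target.resolve(f.getFileName()));
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // tiny demo tree: cache/ab/cd/x  ->  flat/abcd/x
        Path cache = Files.createTempDirectory("cache");
        Files.createDirectories(cache.resolve("ab").resolve("cd"));
        Files.write(cache.resolve("ab").resolve("cd").resolve("x"), new byte[]{1});
        Path flat = Files.createTempDirectory("flat");
        collapse(cache, flat);
        System.out.println(Files.exists(flat.resolve("abcd").resolve("x"))); // true
    }
}
```

The sketch moves into a separate target directory rather than rewriting the cache in place, so a failed run leaves the original tree intact.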
|
|
|
|
 |
|
Nov 25 2016, 17:41
|
morineko
Group: Gold Star Club
Posts: 2,347
Joined: 1-April 14

|
I still keep my H@H clients at 1.2.6 or 1.2.5, but the daily income decreases day by day. Will it rise after I upgrade them to 1.3 or a newer version?
|
|
|
|
 |
|
Nov 25 2016, 19:43
|
FabulousCupcake
Group: Gold Star Club
Posts: 494
Joined: 15-April 14

|
QUOTE(morineko @ Nov 25 2016, 16:41)  I still keep my H@Hs at 1.2.6 or 1.2.5, but the daily income decreases day by day. Will it rise after I upgrade them to 1.3 or newer version?
Hathrate stops increasing once your static range and hitrate (which is also influenced by your static range) stop increasing. H@H 1.3 brings some performance improvements, as described on the first page of this topic, but shouldn't generally affect your hathrate. It could slightly increase the hitrate on some clients (and therefore increase your hathrate), but only if they were limited by CPU power or disk IOPS, which are improved in H@H 1.3; most H@H clients are probably limited by network speed. Also, there are no bonus incentives for running H@H 1.3, if that's what you're asking.
|
|
|
|
 |
|
Nov 26 2016, 02:12
|
Tenboro

|
QUOTE(Logii @ Nov 24 2016, 15:15)  Now that I wrote this far I realize that the problem probably isn't with 1.3.3 and reverting back to 1.2.6 probably won't help much. Oh well, if anyone knows solutions to netio.sys (NETIO+106fd) caused bluescreens related to H@H, I would appreciate hearing them. It is possible that the crashes are unrelated to H@H, but they started after I started running H@H and I would have to shut down my client for at least a month or two to confirm if it is the cause or not.
It's unlikely that H@H is the cause of the problem as such, but since it puts some load on the whole networking stack, it could trigger an existing driver or hardware fault. Seems to be fairly common as far as crashes go [ www.google.com]. This might be a solution [ forum.utorrent.com].
|
|
|
|
 |
|
Nov 26 2016, 14:01
|
Logii
Group: Gold Star Club
Posts: 1,475
Joined: 18-April 13

|
QUOTE(Tenboro @ Nov 26 2016, 02:12)  It's unlikely that H@H is the cause of the problem as such, but as it puts some load on the whole networking system, it could trigger some existing driver or hardware fault.
This is what I suspect too, which is why I previously updated my network driver and more recently bought a new network adapter to see if it would solve the problem (it didn't). Several months ago I also ran torture tests and diagnostics on my CPU, RAM and storage and checked all the physical connectors, but didn't find any errors or signs of unstable hardware.

The weird thing is that the crash frequency is very random, from a single day to about two weeks, and the total network load doesn't seem to affect it, since it can happen regardless of whether I am using the system for other things or H@H is running alone. I will most likely have to try hunting for other faulty drivers at some point. At the moment all I know is that H@H seems to somehow contribute to the crashes, but I haven't found the actual trigger.

QUOTE(Tenboro @ Nov 26 2016, 02:12)  Seems to be [ www.google.com] fairly common as far as crashes go. [ forum.utorrent.com] This might be a solution.

Teredo is already disabled on my PC. Thank you for the input anyway.
|
|
|
|
 |
|
Nov 27 2016, 14:08
|
KitKat31337
Lurker
Group: Recruits
Posts: 6
Joined: 3-August 14

|
QUOTE(KitKat31337 @ Nov 24 2016, 02:31)  I upgraded my client and now almost immediately after startup, about 5-10 minutes it starts spamming "[WARN] Failed stillAlive test: (FAIL_NOT_LOGGED_IN) - will retry later" in the log and stops serving files or downloading files.
Any update on this, please? My client has simply been offline since this stopped working...
|
|
|
Nov 27 2016, 15:07
|
Tenboro

|
QUOTE(KitKat31337 @ Nov 27 2016, 13:08)  Any update on this please, my client has simply been offline since this is not working...
Are you trying to use the same ident for multiple clients?
|
|
|