Note that there has been a change to how H@H clients calculate their allowable disk space usage. The changes are completely server-side, but they will be noticeable on clients that were limited: those clients will delete a bunch of files the next time you start them.
This might cause a temporary drop in utilization, but the disk space limit is calculated so that you should have no problem getting about the same speed as before. Furthermore, affected clients should recover after a restart much faster than before, since the system no longer has to reset their file routing tables.
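For illustration, here is a minimal sketch of what that client-side cleanup could look like: trimming the local cache down to a server-assigned byte limit at startup. The class, method names, and oldest-first eviction order are all assumptions for the example, not the actual H@H client code.

```java
// Hypothetical sketch; not the real H@H client implementation.
import java.io.File;
import java.util.Arrays;
import java.util.Comparator;

public class CacheTrimmer {
    /**
     * Deletes cached files until total usage fits within the limit
     * handed down by the server at startup.
     */
    public static void trimToLimit(File cacheDir, long serverLimitBytes) {
        File[] files = cacheDir.listFiles(File::isFile);
        if (files == null) return;

        long used = Arrays.stream(files).mapToLong(File::length).sum();
        // Drop least-recently-modified files first until we fit the limit.
        Arrays.sort(files, Comparator.comparingLong(File::lastModified));
        for (File f : files) {
            if (used <= serverLimitBytes) break;
            long size = f.length();
            if (f.delete()) {
                used -= size;
            }
        }
    }
}
```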
For the technically curious, the reason for this change is that I've been noticing a larger-than-expected rate of server-side file list resets. After investigating with the fancy new database instrumentation I added the other day, I concluded that it was caused by clients with a large discrepancy between the local cache size and the cache size accepted by the server, which depends on the speed capabilities of the client. So essentially, the system had to reset the server-side list every time such a client connected, which meant that a) it would be slow as ass to start up, b) it would take a long-ass time before you got decent traffic, and c) the system had to do an assload of unnecessary work to rebuild the client's file list.
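To make the failure mode concrete, here is a rough sketch of the server-side check as described above: the accepted cache size is derived from the client's speed, and a reported local cache that exceeds it forces a full file list reset on connect. The names, the bytes-per-speed ratio, and the exact comparison are all hypothetical; only the general mechanism comes from the explanation above.

```java
// Illustrative sketch of the described mechanism; names and numbers
// are assumptions, not the real H@H server code.
public class ClientRegistration {
    // Hypothetical ratio: bytes of cache accepted per KB/s of client speed.
    private static final long BYTES_PER_KBPS = 100_000_000L;

    /** Cache size the server will accept, derived from client speed. */
    static long acceptedCacheSize(long clientSpeedKbps) {
        return clientSpeedKbps * BYTES_PER_KBPS;
    }

    /**
     * Under the old behavior, a client whose local cache exceeded the
     * accepted size triggered a server-side file list reset on every
     * connect, forcing an expensive rebuild of its routing entries.
     * The fix pushes the computed limit to the client instead, so the
     * discrepancy never arises.
     */
    static boolean needsFileListReset(long reportedLocalCacheBytes,
                                      long clientSpeedKbps) {
        return reportedLocalCacheBytes > acceptedCacheSize(clientSpeedKbps);
    }
}
```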
Soo, yeah. About 800 clients had their existing disk limit reduced due to this. If you want, you can check whether you're one of them from the H@H page, but otherwise, you don't have to do anything.