One Yukkuri Place

Mr. Bugs can't take it easy.

Posted under Bugs & Features

Well, I've set up a weekly cron job to automatically bounce the SQL server. But the fact is, I know basically jack about Ruby, it's kinda amazing I managed to get this version to work at all, and I'm trying very hard not to touch anything that might break it beyond anything a restart will fix...

Keep in mind the original intent wasn't to upgrade to danbooru2, but to keep the existing version of danbooru. But after changing hosts, the version of PostgreSQL OYP used to run on was too outdated, and we were forced to upgrade. Toawa worked some magic and somehow managed to migrate the data and get danbooru2 to work.

I'll be fairly blunt: Nobody on the admin team is comfortable enough with ruby to do much, so it's kind of just going to stay this way unless we get someone good enough with ruby to debug through this mess. Danbooru is very touchy and I don't want to mess with this if I can help it.

Really sorry about the timeouts and random errors but those won't be going away in the foreseeable future, unfortunately.

Just to give an example: the server is running in development mode, because I cannot get production mode to run. As such, it's already going to be slower, and it's generating on the order of 600 megs of useless logs per day (which, fortunately, are automatically cleaned up). I've modified the settings as best I can to turn off as much debugging stuff as possible, but still...

So pretty much for the foreseeable future (because it's not as if the community is growing that much), unless someone wants to learn it, we're stuck with the bugs.

They're not breaking anything, but they can get a little obnoxious.

It's super obnoxious and it's annoying we can't do much about it.

To be honest, it'll be more a matter of "hey, someone pretty well versed in ruby and server administration came along" before we can fix it. Danbooru's backend isn't the kind of stuff you can debug after skimming "Ruby for Dummies" from Best Buy; you'd need some years of DB and backend experience as well. We already change simple settings in the code, but figuring out why it's mysteriously breaking takes more debugging than a ctrl-f through the files.

We're kinda like the Adeptus Mechanicus, without in-depth knowledge of how certain parts actually work. Just uttering litanies and waving purity seals around in hopes that the engine doesn't say "fuck it" and just break down.

No, I'm not going to expect you to refactor somebody else's software project for stability.

The git history on the project does show that it is being actively maintained and patched. OTOH, testing is hard.

Edit: also, it's clear you guys are trying your best to make this run. Mister bugs are so uneasy.

I noticed that searches often omit many pages from the results.
For example, the tag "translation_request" is shown to have 526 pages (at the time of writing), but there are actually 575 pages (again at the time of writing).
The same happens with searches that have many more results, like "reimu", where the search can have more than 1000 pages but some of them are not shown.

EasyV said:

I noticed that searches often omit many pages from the results.
For example, the tag "translation_request" is shown to have 526 pages (at the time of writing), but there are actually 575 pages (again at the time of writing).
The same happens with searches that have many more results, like "reimu", where the search can have more than 1000 pages but some of them are not shown.

Same thing happens with anoniyukkuri. I also noticed that the tag counts are taking it too easy: they stopped updating after the DB upgrade, and some tags still show "0" even though images with those tags exist.
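
For whoever has psql access, here's a rough way to check one of these tags. Sketch only: it assumes a Danbooru-style schema where tags has a post_count column and posts stores its tags in a space-delimited tag_string (both guesses on my part), and it will be a slow sequential scan.

    -- Sketch: assumes tags.post_count and a space-delimited posts.tag_string.
    -- Compares the cached count for one tag against the number of posts that
    -- actually carry it.
    SELECT t.name,
           t.post_count AS cached_count,
           (SELECT COUNT(*)
              FROM posts p
             WHERE ' ' || p.tag_string || ' ' LIKE '% ' || t.name || ' %') AS actual_count
      FROM tags t
     WHERE t.name = 'anoniyukkuri';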

Probably because pagination, for some reason, takes into account stuff that's been retagged or deleted. Deleted stuff doesn't really "go away"; I see all the deleted posts and comments and such, but a normal user wouldn't.

Might be related to what toawa said about OYP being in development mode.
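
A rough way to test the deleted-posts theory, if someone wants to run it from psql. Sketch only: it assumes the posts table has an is_deleted flag and a space-delimited tag_string (Danbooru-style, but I'm guessing at the exact names), and it will seq-scan the whole table.

    -- Sketch: assumes posts.is_deleted and posts.tag_string exist as named.
    -- If all_rows is noticeably bigger than visible_rows, pagination counting
    -- deleted posts would explain the missing pages.
    SELECT COUNT(*) AS all_rows,
           SUM(CASE WHEN is_deleted THEN 0 ELSE 1 END) AS visible_rows
      FROM posts
     WHERE ' ' || tag_string || ' ' LIKE '% translation_request %';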

That combined with the fact that, in development mode, I'm not 100% certain it's doing all of the housekeeping tasks that it normally would, like deleting stuff and re-counting tags.

poweryoga said:

Probably because pagination, for some reason, takes into account stuff that's been retagged or deleted. Deleted stuff doesn't really "go away"; I see all the deleted posts and comments and such, but a normal user wouldn't.

Might be related to what toawa said about OYP being in development mode.

As far as I can remember, none of the anoniyukkuri-tagged items were deleted, so is it safe to say that deletion is out of the question?

Toawa said:

That combined with the fact that, in development mode, I'm not 100% certain it's doing all of the housekeeping tasks that it normally would, like deleting stuff and re-counting tags.

Development mode shouldn't affect it that badly.
Since you mentioned Postgres, indexes, and how restarting fixes it, I'm gonna take a guess and say it's because the stats are out of date. When the stats are stale, the planner can skip indexes in favor of sequential scans, which makes for very slow queries. So I'd say the stats thing is likely the problem here.
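
One way to confirm the theory: EXPLAIN one of the slow queries, refresh the stats on that table, and see whether the plan changes from a Seq Scan to an Index Scan. The query below is just a placeholder shape, not OYP's real search query.

    -- Placeholder query; substitute one of the actual slow searches.
    EXPLAIN SELECT id FROM posts ORDER BY id DESC LIMIT 20;
    -- Refresh planner statistics for the table, then compare the plan again.
    ANALYZE posts;
    EXPLAIN SELECT id FROM posts ORDER BY id DESC LIMIT 20;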

Postgres has a daemon called autovacuum that updates the stats. If you don't have it enabled, then that's the cause.
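
For what it's worth, a couple of stock ways to check that from psql (standard PostgreSQL settings and views, nothing OYP-specific):

    -- Is autovacuum enabled, and with what settings?
    SHOW autovacuum;
    SELECT name, setting FROM pg_settings WHERE name LIKE 'autovacuum%';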

I looked at the config file and I don't see anything that would suggest that autovacuum wasn't on. I can also see the launcher process running...

Toawa said:

I looked at the config file and I don't see anything that would suggest that autovacuum wasn't on. I can also see the launcher process running...

Huh. Well from experience you can try the following:

  • Check if autovacuum actually ran recently: SELECT relname, last_autovacuum, last_autoanalyze FROM pg_stat_user_tables;
  • Manually vacuum all tables and look at the stats: VACUUM VERBOSE ANALYZE;
  • Discourage sequential scans so the planner prefers indexes where available (don't leave this set for long): SET enable_seqscan = OFF;
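
If the timestamps alone don't settle it, the same view also carries dead-tuple counts, which show which tables are most overdue for a vacuum and are starving the planner of fresh stats.

    -- Stock pg_stat_user_tables columns; no schema assumptions needed.
    SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
      FROM pg_stat_user_tables
     ORDER BY n_dead_tup DESC
     LIMIT 10;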