One Yukkuri Place


Topic: Mr. Bugs can't take it easy.

Posted under Bugs & Features

Ruukasu

The SQL server version may be outdated and/or unreliable. Does Toawa know about this, too?

  • ID: 11292
  • Toawa

    Well, I've set up a weekly cron job to automatically bounce the SQL server. But the fact is, I know basically jack about Ruby, it's kinda amazing I managed to get this version to work at all, and I'm trying very hard not to touch anything that might break it beyond anything a restart will fix...
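    For reference, a weekly restart like that is typically a one-line crontab entry. This is only a sketch; the actual service name, schedule, and init system on OYP's host are unknown to me:

    ```shell
    # Hypothetical crontab entry: restart PostgreSQL every Sunday at 04:00.
    # The service name ("postgresql") and the use of systemd are assumptions.
    0 4 * * 0 /usr/bin/systemctl restart postgresql
    ```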

  • ID: 11293
  • poweryoga

    Keep in mind the original intent wasn't to upgrade to danbooru2, but to keep the existing version of danbooru. But after changing hosts, the version of PostgreSQL OYP used to run on was too outdated, and we were forced to upgrade. Toawa worked some magic and somehow managed to migrate the data and get danbooru2 to work.

    I'll be fairly blunt: nobody on the admin team is comfortable enough with Ruby to do much, so it's kind of just going to stay this way unless we get someone good enough with Ruby to debug through this mess. Danbooru is very touchy and I don't want to mess with it if I can help it.

    Really sorry about the timeouts and random errors but those won't be going away in the foreseeable future, unfortunately.

  • ID: 11297
  • BaronMind

    Mister ruby let dosus take it easy! Now is fine!

  • ID: 11302
  • Toawa

    Just to give an example: the server is running in development mode, because I cannot get production mode to run. As such, it's already going to be slower, and it's generating on the order of 600 megs of useless logs per day (which, fortunately, are automatically cleaned up). I've modified the settings as best as I can to turn off as much debugging stuff as possible, but still...
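    For anyone curious: Rails apps normally pick their mode from the RAILS_ENV environment variable, so assuming this danbooru2 install is a standard Rails app (an assumption on my part), switching modes is usually just a matter of:

    ```shell
    # Hypothetical invocation: run the app in production mode.
    # The install path and server command are assumptions; production mode
    # also disables most of the per-request debug logging.
    cd /path/to/danbooru && RAILS_ENV=production bundle exec rails server
    ```

    Of course, "cannot get production mode to run" usually means the app crashes on startup in that mode, which this alone won't fix.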

  • ID: 11303
  • Hitosura

    So pretty much for the foreseeable future (because it's not as if the community is growing that much), unless someone wants to learn it, we're stuck with the bugs.

    They're not breaking anything, but they can get a little obnoxious.

  • ID: 11304
  • poweryoga

    It's super obnoxious and it's annoying we can't do much about it.

    To be honest, it'll be more along the lines of "hey, someone pretty well versed with Ruby and server knowledge came along" before we can fix it. Danbooru's backend isn't the kind of stuff you can debug after reading "Ruby for Dummies" from Best Buy; you'd need some years of DB experience and backend experience as well. We already do change simple settings in the code, but figuring out why it's mysteriously breaking involves more debugging than a Ctrl-F on the files.

    We're kinda like the Adeptus Mechanicus, without in-depth knowledge of how certain parts actually work. Just uttering litanies and waving purity seals around in hopes that the engine doesn't say "fuck it" and just break down.

  • ID: 11305
  • Toawa

    Plus I've run out of sacred oils...

  • ID: 11306
  • exitstrategy

    No, I'm not going to expect you to refactor somebody else's software project for stability.

    The git history on the project does show that it is being actively maintained and patched. OTOH, testing is hard.

    Edit: also, it's clear you guys are trying your best to make this run. Mister bugs are so uneasy.

  • ID: 11312
  • EasyV

    I noticed that searches often omit many pages from the results.
    For example, the tag "translation_request" is shown to have 526 pages (at the time of writing), but there are actually 575 pages (again, at the time of writing).
    The same happens with searches with many more results, like "reimu": the search can have more than 1000 pages, but some of them are not shown.

  • ID: 11316
  • Ruukasu

    EasyV said:

    I noticed that searches often omit many pages from the results.
    For example, the tag "translation_request" is shown to have 526 pages (at the time of writing), but there are actually 575 pages (again, at the time of writing).
    The same happens with searches with many more results, like "reimu": the search can have more than 1000 pages, but some of them are not shown.

    Same thing happens with anoniyukkuri. I also noticed that the tag count for all tags is taking it too easy: it stopped counting after the DB upgrade. Some tags are still counted as "0" even though images with those tags exist.

  • ID: 11317
  • poweryoga

    Probably because pagination, for some reason, takes into account stuff that's been retagged or deleted. Deleted stuff doesn't really "go away"; I see all the deleted posts and comments and stuff, but a normal user wouldn't.

    Might be related to what toawa said with OYP being in development mode.

  • ID: 11318
  • Toawa

    That combined with the fact that, in development mode, I'm not 100% certain it's doing all of the housekeeping tasks that it normally would, like deleting stuff and re-counting tags.

  • ID: 11324
  • Ruukasu

    poweryoga said:

    Probably because pagination, for some reason, takes into account of stuff that's been retagged or deleted. Deleted stuff doesn't really "go away", I see all the deleted posts and comments and stuff but the normal user wouldn't.

    Might be related to what toawa said with OYP being in development mode.

    As far as I can remember, none of the anoniyukkuri-tagged items were deleted, so is it safe to say that deletion is out of the question?

    Updated by Ruukasu

  • ID: 11325
  • Joseph

    Toawa said:

    That combined with the fact that, in development mode, I'm not 100% certain it's doing all of the housekeeping tasks that it normally would, like deleting stuff and re-counting tags.

    Development mode shouldn't affect it that badly.
    Since you mentioned Postgres, indexes, and how restarting fixes it, I'm gonna take a guess and say it's because the stats are out of date. Postgres doesn't use indexes when the stats are out of date, which then makes for very slow queries. So I'd say the stats thing is likely the problem here.

    Postgres has a daemon to update the stats called autovacuum. If you don't have it enabled, then that's the cause.
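    A quick way to check whether autovacuum is enabled is a couple of queries against the server itself (this is a generic sketch, not specific to OYP's setup):

    ```sql
    -- Is the autovacuum daemon enabled at all?
    SHOW autovacuum;

    -- Per-setting view, including the thresholds that control how often it fires:
    SELECT name, setting FROM pg_settings WHERE name LIKE 'autovacuum%';
    ```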

  • ID: 11340
  • Toawa

    I looked at the config file and I don't see anything that would suggest that autovacuum wasn't on. I can also see the launcher process running...

  • ID: 11341
  • Joseph

    Toawa said:

    I looked at the config file and I don't see anything that would suggest that autovacuum wasn't on. I can also see the launcher process running...

    Huh. Well, from experience you can try the following:

    • Check if autovacuum actually ran recently:
    SELECT relname, last_autovacuum, last_autoanalyze FROM pg_stat_user_tables;

    • Manually vacuum all tables and look at the stats:
    VACUUM VERBOSE ANALYZE;

    • Force the query planner to use indexes when available (don't leave this override in place for long):
    SET enable_seqscan = OFF;

  • ID: 11342