Prism will automatically purge old data based on the rules defined in your config. This helps keep the database size down, which improves the speed of lookups, rollbacks, etc.

    You can configure purge rules by providing a list of parameters (prism.db-records-purge-rules), just as you would in-game.
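As a sketch, the rule list might look like the following. The exact keys and file format depend on your Prism version and config backend; the rule strings here are illustrative examples of in-game parameter syntax:

```yaml
prism:
  db-records-purge-rules:
    # Each entry uses the same parameter syntax as an in-game lookup.
    - before:8w                 # purge anything older than eight weeks
    - a:water-flow before:4w    # purge water-flow records older than four weeks
```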

    How Purging Works

    • Prism initializes the purge manager on server startup
    • Every 12 hours (not tick-based) the purge cycle runs asynchronously
    • Purge rules are handled one at a time
    • Prism will "chunk" the database queries (see below)
    • Once a rule completes, Prism logs the total number of records removed and moves on to the next rule
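The flow above can be sketched roughly as follows. This is a simplified illustration, not Prism's actual internals; the function names and the injected `delete_matching` callback are hypothetical:

```python
def run_purge(rules, scan_batch, max_id, delete_matching):
    """Process each purge rule in turn, scanning the table in
    fixed-size primary-key chunks.

    delete_matching(rule, lo, hi) is assumed to delete records with
    lo <= id < hi that match the rule, returning the number removed.
    """
    totals = {}
    for rule in rules:                             # rules are handled one at a time
        removed = 0
        for lo in range(0, max_id, scan_batch):    # "chunked" scan by primary key
            removed += delete_matching(rule, lo, lo + scan_batch)
        totals[rule] = removed                     # reported before the next rule starts
    return totals
```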

    What is Chunking

    Prism 2.0 introduces "chunked" purge queries. Chunking refers to the practice of scanning a limited number of records - bounded by their primary keys - each cycle, then looking for records within that chunk that match your parameters. Because the scan is bounded, queries stay extremely fast no matter the conditions, and they lock only the rows actually scanned. This helps prevent lock-exhaustion errors and keeps the locking out of the way of new inserts from your running server.

    The prism.purge.records-per-batch setting controls how many records are scanned each cycle - by their primary keys - to find matches for your conditions. There will very likely be cycles in which the purge system finds no matching records at all.
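A chunked purge query might look roughly like this. SQLite is used here only to make the sketch self-contained; the real table schema (`prism_data`, `id`, `epoch`) and SQL dialect are assumptions and will differ from your actual database:

```python
import sqlite3

def purge_chunk(conn, lo, hi, cutoff):
    """Delete matching records whose primary key falls in [lo, hi).

    The primary-key bounds mean only rows inside the chunk are scanned
    and locked, regardless of how selective the rule's conditions are.
    Returns the number of rows deleted in this chunk.
    """
    cur = conn.execute(
        "DELETE FROM prism_data WHERE id >= ? AND id < ? AND epoch < ?",
        (lo, hi, cutoff),
    )
    conn.commit()
    return cur.rowcount
```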

    You should adjust the number of records scanned per batch based on what your database server and table size can manage. For example, if records-per-batch is set to 1000 but you have 20 million records in your database, a full pass must scan those 20 million records 1,000 at a time - 20,000 cycles. Each individual query will be extremely efficient, but the full purge will take a very long time.

    Prism 2 defaults to 500,000, based on performance tests against average server specs. You can adjust it to your needs. Scanning 500k records per cycle with 20 million records present requires 40 cycles. Within a purge run, cycles fire only a few ticks apart, and because chunking keeps each one low-impact on the database, the purge can run efficiently while your server keeps writing new data.
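The cycle counts above are simple division, rounded up. A one-line helper makes the trade-off easy to explore for your own table size:

```python
import math

def purge_cycles(total_records, records_per_batch):
    """Number of chunked scan cycles one full pass over the table needs."""
    return math.ceil(total_records / records_per_batch)

# 20 million records at the Prism 2 default of 500k per batch -> 40 cycles.
# The same table at 1,000 per batch -> 20,000 cycles.
```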

    Prism 1 Purges

    In Prism 1 we limited delete queries to prism.records-per-batch records purged at a time; each cycle, the SQL query was limited to 5000 entries (as configured). The problem is that delete queries must lock the table rows they scan while looking for matches, and with a massive table and a wide variety of conditions this often led to slow queries locking far too many rows - causing lock-exhaustion errors and holding up inserts.
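For contrast, the Prism 1 approach was roughly equivalent to repeating a LIMIT-bounded delete until no matching rows remain. The sketch below uses SQLite with a subquery (SQLite lacks DELETE ... LIMIT by default); the schema names are the same illustrative assumptions as above:

```python
import sqlite3

def prism1_purge(conn, cutoff, batch=5000):
    """Repeatedly delete up to `batch` matching rows until none remain.

    Because the WHERE clause is unbounded by primary key, each DELETE
    must scan (and lock) however many rows it takes to find `batch`
    matches - the source of the lock-exhaustion problems described above.
    Returns the total number of rows purged.
    """
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM prism_data WHERE id IN "
            "(SELECT id FROM prism_data WHERE epoch < ? LIMIT ?)",
            (cutoff, batch),
        )
        conn.commit()
        if cur.rowcount == 0:
            return total
        total += cur.rowcount
```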

    Wiki markup to link to this page: [[purging]] or [[purging|Purging]]