Deploy in production

PostgreSQL setup

Docker

docker run -d --name plone-pg \
  -e POSTGRES_USER=zodb \
  -e POSTGRES_PASSWORD=zodb \
  -e POSTGRES_DB=zodb \
  -p 5432:5432 \
  postgres:17

For BM25 support, use tensorchord/vchord-suite:pg17-latest instead. See Enable BM25 ranking for details.
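For example, the same docker run command with the BM25-capable image swapped in (all other parameters unchanged):

```shell
docker run -d --name plone-pg \
  -e POSTGRES_USER=zodb \
  -e POSTGRES_PASSWORD=zodb \
  -e POSTGRES_DB=zodb \
  -p 5432:5432 \
  tensorchord/vchord-suite:pg17-latest
```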

CloudNativePG (Kubernetes)

plone.pgcatalog is compatible with CloudNativePG. VectorChord-BM25 supports WAL replication, so read replicas can serve BM25 queries. No special operator configuration is needed beyond the standard PostgreSQL image.

Performance tuning

  • Set ZODB cache-size high (for example, 10000) – no BTree pressure means more cache available for application objects.

  • All necessary indexes are created automatically at startup by the CatalogStateProcessor DDL. No manual index creation is needed.

  • Run ANALYZE object_state after large bulk imports to update planner statistics.

  • Configure autovacuum for the object_state table. With frequent catalog writes, increase autovacuum_analyze_threshold:

    ALTER TABLE object_state SET (autovacuum_analyze_threshold = 5000);
    
  • Deploy a reverse proxy (nginx, HAProxy) with rate limiting on search endpoints (@@search, @@search-results) to protect against query abuse.
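The rate-limiting point can be sketched in nginx. This is a minimal illustration, not a recommended configuration: the zone size, rate, burst values, and the backend address 127.0.0.1:8080 are all assumptions to adapt to your deployment.

```nginx
# Illustrative only: one shared zone keyed by client IP,
# applied to the search views.
limit_req_zone $binary_remote_addr zone=search:10m rate=5r/s;

server {
    listen 80;

    # Rate-limit only the search endpoints; allow short bursts.
    location ~ ^/(.*/)?@@search(-results)?$ {
        limit_req zone=search burst=10 nodelay;
        proxy_pass http://127.0.0.1:8080;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```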

ZODB cache sizing

The default ZODB cache (5000 objects) is too small for production sites. Increase cache-size and cache-size-bytes in zope.conf:

<zodb_db main>
  cache-size 70000
  cache-size-bytes 500MB
  <pgjsonb>
    dsn dbname=zodb host=localhost port=5432 user=zodb password=zodb
  </pgjsonb>
</zodb_db>

A site with 14,000 events showed 5-6 second warm-cache page loads with the default cache size, dropping to 0.8 seconds at 70,000 objects. While plone.pgcatalog eliminates BTree cache pressure for catalog data, Plone content objects themselves still benefit from a large ZODB cache.

Query cache and prefetch

plone.pgcatalog includes a process-wide query result cache and a batch object prefetcher. Both are enabled by default:

  • PGCATALOG_QUERY_CACHE_SIZE=200 – cached query results per process, invalidated on every ZODB commit. Set to 0 to disable.

  • PGCATALOG_PREFETCH_BATCH=100 – objects prefetched when brain.getObject() is called (requires zodb-pgjsonb >= 1.8.0). Set to 0 to disable.

See Configuration reference for all environment variables.
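Both knobs are plain environment variables, so they can be set in the shell, systemd unit, or docker-compose file that starts the Zope process. The values below are illustrative, not recommendations:

```shell
# Illustrative values; the defaults are 200 and 100 respectively.
export PGCATALOG_QUERY_CACHE_SIZE=500   # larger per-process query result cache
export PGCATALOG_PREFETCH_BATCH=200     # bigger getObject() prefetch batches
```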

Monitoring

Key PostgreSQL views:

  • pg_stat_user_tables – row counts, sequential vs. index scans, vacuum stats

  • pg_stat_user_indexes – index usage and size

  • pg_stat_activity – active queries and locks
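These views can be queried directly. For example, a quick health check of the object_state table (sketches against the standard statistics views; column choices are illustrative):

```sql
-- Scan balance, live rows, and vacuum history for the catalog table.
SELECT seq_scan, idx_scan, n_live_tup, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'object_state';

-- Indexes that are never scanned are candidates for investigation.
SELECT indexrelname, idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS size
FROM pg_stat_user_indexes
WHERE relname = 'object_state'
ORDER BY idx_scan;
```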

Enable slow query logging:

log_min_duration_statement = 100   # Log queries slower than 100 ms
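The same setting can also be applied without editing postgresql.conf, given superuser access:

```sql
ALTER SYSTEM SET log_min_duration_statement = 100;  -- milliseconds
SELECT pg_reload_conf();                            -- apply without a restart
```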

Check catalog object count via ZMI (portal_catalog > Catalog tab) or SQL:

SELECT COUNT(*) FROM object_state WHERE idx IS NOT NULL;

Backup and recovery

  • pg_dump captures all catalog data – it lives in the same object_state table as ZODB object data.

  • No separate catalog export/import is needed.

  • Standard PostgreSQL backup strategies (continuous archiving, pg_basebackup, pgBackRest) apply without modification.

  • After restoring from backup, no catalog rebuild is necessary. The catalog data is transactionally consistent with ZODB state.
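For example, a logical backup with pg_dump in custom format. The host, database name, and user below match the Docker setup above; adjust them for your deployment:

```shell
# Logical backup of the entire database, catalog data included.
pg_dump -Fc -h localhost -U zodb -d zodb -f zodb.dump

# Restore into an existing, empty database.
pg_restore -h localhost -U zodb -d zodb zodb.dump
```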

Upgrading plone.pgcatalog

  1. Install the new version of plone.pgcatalog.

  2. Restart Zope. Schema updates (new columns, functions, indexes) are applied automatically at startup by the IDatabaseOpenedWithRoot subscriber.

  3. If release notes mention schema changes that require reindexing, run clearFindAndRebuild() from the ZMI Advanced tab or via script. See Rebuild or reindex the catalog.