# StoryCove Deployment Guide

## Quick Deployment

StoryCove includes an automated deployment script that handles Solr volume cleanup and ensures fresh search indices on every deployment.

### Using the Deployment Script

```bash
./deploy.sh
```
This script will:
- Stop all running containers
- Remove the Solr data volume (forcing fresh core creation)
- Build and start all containers
- Wait for services to become healthy
- Trigger automatic bulk reindexing
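The steps above can be sketched as a short shell script. This is a minimal illustration of the described behavior, not the actual `deploy.sh`; the health-check loop and its ping URL are assumptions (the URL matches the one used later in the Troubleshooting section):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Stop all running containers
docker-compose down

# Remove the Solr data volume so cores are recreated from scratch
# (|| true: the volume may not exist yet on a first deployment)
docker volume rm storycove_solr_data || true

# Build and start all containers
docker-compose up -d --build

# Wait for Solr to answer before declaring the deployment healthy
until curl -sf http://localhost:8983/solr/admin/ping > /dev/null; do
  echo "Waiting for Solr..."
  sleep 2
done

echo "Deployment complete; bulk reindexing runs automatically in the backend."
```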
## What Happens During Deployment

### 1. Solr Volume Cleanup

The script removes the `storycove_solr_data` volume, which:
- Ensures all Solr cores are recreated from scratch
- Prevents stale configuration issues
- Guarantees schema changes are applied
### 2. Automatic Bulk Reindexing
When the backend starts, it automatically:
- Detects that Solr is available
- Fetches all entities from the database (Stories, Authors, Collections)
- Bulk indexes them into Solr
- Logs progress and completion
## Monitoring the Deployment

Watch the backend logs to see reindexing progress:

```bash
docker-compose logs -f backend
```

You should see output like:
```text
========================================
Starting automatic bulk reindexing...
========================================
📚 Indexing stories...
✅ Indexed 150 stories
👤 Indexing authors...
✅ Indexed 45 authors
📂 Indexing collections...
✅ Indexed 12 collections
========================================
✅ Bulk reindexing completed successfully in 2345ms
📊 Total indexed: 150 stories, 45 authors, 12 collections
========================================
```
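Once the completion banner appears, you can spot-check the index from the host with Solr's standard select handler. The core names `stories`, `authors`, and `collections` below are assumptions and may differ in the actual setup:

```bash
# rows=0 returns only the numFound count for each core, not the documents
for core in stories authors collections; do
  echo -n "$core: "
  curl -s "http://localhost:8983/solr/$core/select?q=*:*&rows=0&wt=json"
  echo
done
```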
## Manual Deployment (Without Script)

If you prefer manual control:

```bash
# Stop containers
docker-compose down

# Remove Solr volume
docker volume rm storycove_solr_data

# Start containers
docker-compose up -d --build
```
The automatic reindexing will still occur on startup.
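To confirm the automatic reindex actually ran after a manual deployment, grep the backend logs for the banner lines quoted earlier:

```bash
# Exits non-zero if the reindexing banner never appeared
docker-compose logs backend | grep -i "bulk reindexing"
```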
## Troubleshooting

### Reindexing Fails

If bulk reindexing fails:

- Check that Solr is running: `docker-compose logs solr`
- Verify Solr health: `curl http://localhost:8983/solr/admin/ping`
- Check the backend logs: `docker-compose logs backend`
The application will still start even if reindexing fails; you can then trigger a reindex manually through the admin API.
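A manual trigger might look like the following. The endpoint path, port, and authentication are hypothetical, since this guide does not document the admin API:

```bash
# Hypothetical admin endpoint -- adjust the path, port, and auth to your backend
curl -X POST http://localhost:8080/api/admin/reindex \
  -H "Authorization: Bearer $ADMIN_TOKEN"
```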
### Solr Cores Not Created

If Solr cores aren't being created properly:

- Check `solr.Dockerfile` to ensure the cores are created
- Verify the Solr image builds correctly: `docker-compose build solr`
- Check the Solr Admin UI: http://localhost:8983
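To see exactly which cores Solr created, the stock CoreAdmin STATUS API is useful (this is a standard Solr endpoint, not StoryCove-specific):

```bash
# Lists every core Solr knows about, with index and startup details
curl -s "http://localhost:8983/solr/admin/cores?action=STATUS&wt=json"
```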
### Performance Issues

If reindexing takes too long:

- The bulk indexing already uses batch operations, so there is little to tune on the application side
- Consider increasing the Solr heap in `docker-compose.yml`:

```yaml
environment:
  - SOLR_HEAP=1024m
```
## Development Workflow

### Daily Development

Just use the normal commands:

```bash
docker-compose up -d
```
The automatic reindexing still happens, but it's fast on small datasets.
### Schema Changes

When you modify the Solr schema or add new cores:

```bash
./deploy.sh
```
This ensures a clean slate.
### Skipping Reindexing

Reindexing is automatic and cannot be disabled. It is designed to be fast and unobtrusive: the application starts immediately, and reindexing happens in the background.
## Environment Variables

No additional environment variables are needed for the deployment script. All configuration is in `docker-compose.yml`.
Backup Considerations
Important: Since the Solr volume is recreated on every deployment, you should:
- Never rely on Solr as the source of truth
- Always maintain data in PostgreSQL
- Solr is treated as a disposable cache/index
This is the recommended approach for search indices.