Stefan Hardegger 30c0132a92 Various Improvements.
- Testing Coverage
- Image Handling
- Session Handling
- Library Switching
2025-10-20 08:24:29 +02:00

# StoryCove Deployment Guide
## Quick Deployment
StoryCove includes an automated deployment script that handles Solr volume cleanup and ensures fresh search indices on every deployment.
### Using the Deployment Script
```bash
./deploy.sh
```
This script will:
1. Stop all running containers
2. **Remove the Solr data volume** (forcing fresh core creation)
3. Build and start all containers
4. Wait for services to become healthy
5. Trigger automatic bulk reindexing
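The steps above can be sketched as a small script. This is a minimal illustration of the flow, not the actual `deploy.sh`; the real script may differ in details such as timeouts, health checks, and error handling.

```shell
#!/usr/bin/env bash
# Sketch of the deploy.sh flow described above -- not the actual script.
set -euo pipefail

# Retry a command until it succeeds, up to a fixed number of attempts.
wait_until() {
  local attempts=$1; shift
  local i
  for ((i = 0; i < attempts; i++)); do
    if "$@"; then
      return 0
    fi
    sleep 1
  done
  return 1
}

deploy() {
  docker-compose down                            # 1. stop all containers
  docker volume rm storycove_solr_data || true   # 2. remove Solr volume (ignore if absent)
  docker-compose up -d --build                   # 3. build and start all containers
  # 4. wait until Solr answers its ping handler
  wait_until 30 curl -sf http://localhost:8983/solr/admin/ping
  # 5. nothing to do here: the backend triggers bulk reindexing on startup
}

# deploy   # a real script would call deploy here; kept inert in this sketch
```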
### What Happens During Deployment
#### 1. Solr Volume Cleanup
The script removes the `storycove_solr_data` volume, which:
- Ensures all Solr cores are recreated from scratch
- Prevents stale configuration issues
- Guarantees schema changes are applied
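To verify the cleanup step by hand, a small helper around `docker volume inspect` can report whether the volume is currently present (the `storycove_` prefix comes from the compose project name):

```shell
#!/usr/bin/env bash
# Returns success (exit 0) if the StoryCove Solr volume currently exists.
solr_volume_exists() {
  docker volume inspect storycove_solr_data > /dev/null 2>&1
}

# Example: solr_volume_exists && echo "volume present" || echo "volume gone"
```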
#### 2. Automatic Bulk Reindexing
When the backend starts, it automatically:
- Detects that Solr is available
- Fetches all entities from the database (Stories, Authors, Collections)
- Bulk indexes them into Solr
- Logs progress and completion
### Monitoring the Deployment
Watch the backend logs to see reindexing progress:
```bash
docker-compose logs -f backend
```
You should see output like:
```
========================================
Starting automatic bulk reindexing...
========================================
📚 Indexing stories...
✅ Indexed 150 stories
👤 Indexing authors...
✅ Indexed 45 authors
📂 Indexing collections...
✅ Indexed 12 collections
========================================
✅ Bulk reindexing completed successfully in 2345ms
📊 Total indexed: 150 stories, 45 authors, 12 collections
========================================
```
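If you wrap deployment in further automation, you can detect completion by matching the banner line above. A sketch, assuming the log text stays exactly as shown:

```shell
#!/usr/bin/env bash
# Check captured backend log text for the reindexing completion banner.
reindex_completed() {
  grep -q "Bulk reindexing completed successfully" <<< "$1"
}

# Typical use: reindex_completed "$(docker-compose logs backend)"
```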
## Manual Deployment (Without Script)
If you prefer manual control:
```bash
# Stop containers
docker-compose down
# Remove Solr volume
docker volume rm storycove_solr_data
# Start containers
docker-compose up -d --build
```
The automatic reindexing will still occur on startup.
## Troubleshooting
### Reindexing Fails
If bulk reindexing fails:
1. Check that Solr is running: `docker-compose logs solr`
2. Verify Solr health: `curl http://localhost:8983/solr/admin/ping`
3. Check backend logs: `docker-compose logs backend`
The application will still start even if reindexing fails; you can manually trigger reindexing through the admin API.
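The exact admin route is not documented here, but a manual trigger would look roughly like the following; the endpoint path and backend port are hypothetical placeholders to replace with your backend's real admin route.

```shell
#!/usr/bin/env bash
# Hypothetical manual reindex trigger; the URL below is a placeholder.
trigger_reindex() {
  local base="${1:-http://localhost:8080}"    # assumed backend port
  curl -sf -X POST "$base/api/admin/reindex"  # hypothetical endpoint
}
```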
### Solr Cores Not Created
If Solr cores aren't being created properly:
1. Check the `solr.Dockerfile` to ensure cores are created
2. Verify the Solr image builds correctly: `docker-compose build solr`
3. Check Solr Admin UI: http://localhost:8983
### Performance Issues
If reindexing takes too long:
- The bulk indexing is already optimized (batch operations)
- Consider increasing Solr memory in `docker-compose.yml`:
```yaml
environment:
  - SOLR_HEAP=1024m
```
## Development Workflow
### Daily Development
Just use the normal commands:
```bash
docker-compose up -d
```
The automatic reindexing still happens, but it's fast on small datasets.
### Schema Changes
When you modify Solr schema or add new cores:
```bash
./deploy.sh
```
This ensures a clean slate.
### Skipping Reindexing
Reindexing is automatic and cannot be disabled. It is designed to be fast and unobtrusive: the application starts immediately, and reindexing happens in the background.
## Environment Variables
No additional environment variables are needed for the deployment script. All configuration is in `docker-compose.yml`.
## Backup Considerations
**Important**: Since the Solr volume is recreated on every deployment, you should:
- Never rely on Solr as the source of truth
- Always maintain the authoritative data in PostgreSQL
- Treat Solr as a disposable cache/index
This is the recommended approach for search indices.