replacing opensearch with solr
@@ -1,889 +0,0 @@
# StoryCove Search Migration Specification: Typesense to OpenSearch

## Executive Summary

This document specifies the migration from Typesense to OpenSearch for the StoryCove application. The migration will be implemented using a parallel approach, maintaining Typesense functionality while gradually transitioning to OpenSearch, ensuring zero downtime and the ability to roll back if needed.
**Migration Goals:**

- Solve random query reliability issues
- Improve complex filtering performance
- Maintain feature parity during transition
- Zero downtime migration
- Improved developer experience

---
## Current State Analysis

### Typesense Implementation Overview

**Service Architecture:**

- `TypesenseService.java` (~2000 lines) - Primary search service
- 3 search indexes: Stories, Authors, Collections
- Multi-library support with dynamic collection names
- Integration with Spring Boot backend

**Core Functionality:**

1. **Full-text Search**: Stories, Authors with complex query building
2. **Random Story Selection**: `_rand()` function with fallback logic
3. **Advanced Filtering**: 15+ filter conditions with boolean logic
4. **Faceting**: Tag aggregations and counts
5. **Autocomplete**: Search suggestions with typeahead
6. **CRUD Operations**: Index/update/delete for all entity types

**Current Issues Identified:**

- `_rand()` function unreliability requiring complex fallback logic
- Complex filter query building with escaping issues
- Limited aggregation capabilities
- Inconsistent API behavior across query patterns
- Multi-collection management complexity
### Data Models and Schema

**Story Index Fields:**

```java
// Core fields
UUID id, String title, String description, String sourceUrl
Integer wordCount, Integer rating, Integer volume
Boolean isRead, LocalDateTime lastReadAt, Integer readingPosition

// Relationships
UUID authorId, String authorName
UUID seriesId, String seriesName
List<String> tagNames

// Metadata
LocalDateTime createdAt, LocalDateTime updatedAt
String coverPath, String sourceDomain
```

**Author Index Fields:**

```java
UUID id, String name, String notes
Integer authorRating, Double averageStoryRating, Integer storyCount
List<String> urls, String avatarImagePath
LocalDateTime createdAt, LocalDateTime updatedAt
```

**Collection Index Fields:**

```java
UUID id, String name, String description
List<String> tagNames, Boolean archived
LocalDateTime createdAt, LocalDateTime updatedAt
Integer storyCount, Integer currentPosition
```
### API Endpoints Current State

**Search Endpoints Analysis:**

**✅ USED by Frontend (Must Implement):**

- `GET /api/stories/search` - Main story search with complex filtering (CRITICAL)
- `GET /api/stories/random` - Random story selection with filters (CRITICAL)
- `GET /api/authors/search-typesense` - Author search (HIGH)
- `GET /api/tags/autocomplete` - Tag suggestions (MEDIUM)
- `POST /api/stories/reindex-typesense` - Admin reindex operations (MEDIUM)
- `POST /api/authors/reindex-typesense` - Admin reindex operations (MEDIUM)
- `POST /api/stories/recreate-typesense-collection` - Admin recreate (MEDIUM)
- `POST /api/authors/recreate-typesense-collection` - Admin recreate (MEDIUM)

**❌ UNUSED by Frontend (Skip Implementation):**

- `GET /api/stories/search/suggestions` - Not used by frontend
- `GET /api/authors/search` - Superseded by typesense version
- `GET /api/series/search` - Not used by frontend
- `GET /api/tags/search` - Superseded by autocomplete
- `POST /api/search/reindex` - Not used by frontend
- `GET /api/search/health` - Not used by frontend

**Scope Reduction: ~40% fewer endpoints to implement**

**Search Parameters (Stories):**

```
query, page, size, authors[], tags[], minRating, maxRating
sortBy, sortDir, facetBy[]
minWordCount, maxWordCount, createdAfter, createdBefore
lastReadAfter, lastReadBefore, unratedOnly, readingStatus
hasReadingProgress, hasCoverImage, sourceDomain, seriesFilter
minTagCount, popularOnly, hiddenGemsOnly
```
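For illustration, a typical frontend request combining several of these parameters could look like the following (the parameter values and the repeated-`tags` encoding are hypothetical, not taken from StoryCove's actual frontend):

```
GET /api/stories/search?query=dragon&tags=fantasy&tags=adventure&minRating=4&minWordCount=5000&sortBy=createdAt&sortDir=desc&page=0&size=20
```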

---

## Target OpenSearch Architecture

### Service Layer Design

**New Components:**

```
OpenSearchService.java - Primary search service (mirrors TypesenseService API)
OpenSearchConfig.java - Configuration and client setup
SearchMigrationService.java - Handles parallel operation during migration
SearchServiceAdapter.java - Abstraction layer for service switching
```
**Index Strategy:**

- **Single-node deployment** for development/small installations
- **Index-per-library** approach: `stories-{libraryId}`, `authors-{libraryId}`, `collections-{libraryId}`
- **Index templates** for consistent mapping across libraries
- **Aliases** for easy switching and zero-downtime updates
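To make the last two bullets concrete, here is a sketch of an index template and an atomic alias swap using OpenSearch's standard APIs (the template, index, and alias names are illustrative, not StoryCove's actual naming):

```json
PUT _index_template/stories-template
{
  "index_patterns": ["stories-*"],
  "template": {
    "settings": {"number_of_shards": 1, "number_of_replicas": 0}
  }
}

POST _aliases
{
  "actions": [
    {"add": {"index": "stories-lib1-v2", "alias": "stories-lib1"}},
    {"remove": {"index": "stories-lib1-v1", "alias": "stories-lib1"}}
  ]
}
```

Because the `_aliases` action list is applied atomically, searches against `stories-lib1` never observe a state where neither (or both) physical index is attached, which is what enables zero-downtime reindexing.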
### OpenSearch Index Mappings

**Stories Index Mapping:**

```json
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0,
    "analysis": {
      "analyzer": {
        "story_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "stop", "snowball"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "id": {"type": "keyword"},
      "title": {
        "type": "text",
        "analyzer": "story_analyzer",
        "fields": {"keyword": {"type": "keyword"}}
      },
      "description": {
        "type": "text",
        "analyzer": "story_analyzer"
      },
      "authorName": {
        "type": "text",
        "analyzer": "story_analyzer",
        "fields": {"keyword": {"type": "keyword"}}
      },
      "seriesName": {
        "type": "text",
        "fields": {"keyword": {"type": "keyword"}}
      },
      "tagNames": {"type": "keyword"},
      "wordCount": {"type": "integer"},
      "rating": {"type": "integer"},
      "volume": {"type": "integer"},
      "isRead": {"type": "boolean"},
      "readingPosition": {"type": "integer"},
      "lastReadAt": {"type": "date"},
      "createdAt": {"type": "date"},
      "updatedAt": {"type": "date"},
      "coverPath": {"type": "keyword"},
      "sourceUrl": {"type": "keyword"},
      "sourceDomain": {"type": "keyword"}
    }
  }
}
```
**Authors Index Mapping:**

```json
{
  "settings": {
    "analysis": {
      "analyzer": {
        "story_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "stop", "snowball"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "id": {"type": "keyword"},
      "name": {
        "type": "text",
        "analyzer": "story_analyzer",
        "fields": {"keyword": {"type": "keyword"}}
      },
      "notes": {"type": "text"},
      "authorRating": {"type": "integer"},
      "averageStoryRating": {"type": "float"},
      "storyCount": {"type": "integer"},
      "urls": {"type": "keyword"},
      "avatarImagePath": {"type": "keyword"},
      "createdAt": {"type": "date"},
      "updatedAt": {"type": "date"}
    }
  }
}
```

Note: the authors index must define `story_analyzer` in its own settings (as above); analyzers are per-index, so the definition in the stories index is not visible here.

**Collections Index Mapping:**

```json
{
  "mappings": {
    "properties": {
      "id": {"type": "keyword"},
      "name": {
        "type": "text",
        "fields": {"keyword": {"type": "keyword"}}
      },
      "description": {"type": "text"},
      "tagNames": {"type": "keyword"},
      "archived": {"type": "boolean"},
      "storyCount": {"type": "integer"},
      "currentPosition": {"type": "integer"},
      "createdAt": {"type": "date"},
      "updatedAt": {"type": "date"}
    }
  }
}
```
### Query Translation Strategy

**Random Story Queries:**

```java
// Typesense (problematic)
String sortBy = seed != null ? "_rand(" + seed + ")" : "_rand()";

// OpenSearch (reliable)
RandomScoreFunctionBuilder randomFn = ScoreFunctionBuilders.randomFunction();
if (seed != null) {
    // A seeded random_score also needs a field to derive per-document values
    randomFn.seed(seed.intValue()).setField("_seq_no");
}
QueryBuilder randomQuery = QueryBuilders.functionScoreQuery(
    QueryBuilders.boolQuery().must(filters),
    randomFn
);
```
**Complex Filtering:**

```java
// Build bool query with multiple filter conditions
BoolQueryBuilder boolQuery = QueryBuilders.boolQuery()
    .must(QueryBuilders.multiMatchQuery(query, "title", "description", "authorName"))
    .filter(QueryBuilders.termsQuery("tagNames", tags))
    .filter(QueryBuilders.rangeQuery("wordCount").gte(minWords).lte(maxWords))
    .filter(QueryBuilders.rangeQuery("rating").gte(minRating).lte(maxRating));
```
**Faceting/Aggregations:**

```java
// Tags aggregation
AggregationBuilder tagsAgg = AggregationBuilders
    .terms("tags")
    .field("tagNames")
    .size(100);

// Rating ranges ("from" is inclusive, "to" is exclusive; ranges must be
// contiguous or ratings of 3 would fall into no bucket)
AggregationBuilder ratingRanges = AggregationBuilders
    .range("rating_ranges")
    .field("rating")
    .addRange("unrated", 0, 1)
    .addRange("low", 1, 3)
    .addRange("high", 3, 6);
```

---
## Revised Implementation Phases (Scope Reduced by 40%)

### Phase 1: Infrastructure Setup (Week 1)

**Objectives:**

- Add OpenSearch to Docker Compose
- Create basic OpenSearch service
- Establish index templates and mappings
- **Focus**: Only stories, authors, and tags indexes (skip series, collections)

**Deliverables:**

1. **Docker Compose Updates:**

   ```yaml
   opensearch:
     image: opensearchproject/opensearch:2.11.0
     environment:
       - discovery.type=single-node
       - DISABLE_SECURITY_PLUGIN=true
       - OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx1g
     ports:
       - "9200:9200"
     volumes:
       - opensearch_data:/usr/share/opensearch/data
   ```

2. **OpenSearchConfig.java:**

   ```java
   @Configuration
   @ConditionalOnProperty(name = "storycove.opensearch.enabled", havingValue = "true")
   public class OpenSearchConfig {
       @Bean
       public OpenSearchClient openSearchClient() {
           // Client configuration
       }
   }
   ```

3. **Basic Index Creation:**
   - Create index templates for stories, authors, collections
   - Implement index creation with proper mappings
   - Add health check endpoint
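The health check can be backed by OpenSearch's cluster health API. An abridged request/response sketch (field values depend on the deployment; the response shown is illustrative):

```json
GET _cluster/health

{
  "cluster_name": "opensearch",
  "status": "green",
  "number_of_nodes": 1,
  "active_primary_shards": 3,
  "unassigned_shards": 0
}
```

On a single node, any index with `number_of_replicas` greater than 0 leaves unassigned replica shards and reports `yellow`, which is why the mappings in this document set replicas to 0 for development.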

**Success Criteria:**

- OpenSearch container starts successfully
- Basic connectivity established
- Index templates created and validated
### Phase 2: Core Service Implementation (Week 2)

**Objectives:**

- Implement OpenSearchService with core functionality
- Create service abstraction layer
- Implement basic search operations
- **Focus**: Only critical endpoints (stories search, random, authors)

**Deliverables:**

1. **OpenSearchService.java** - Core service implementing:
   - `indexStory()`, `updateStory()`, `deleteStory()`
   - `searchStories()` with basic query support (CRITICAL)
   - `getRandomStoryId()` with reliable seed support (CRITICAL)
   - `indexAuthor()`, `updateAuthor()`, `deleteAuthor()`
   - `searchAuthors()` for authors page (HIGH)
   - `bulkIndexStories()`, `bulkIndexAuthors()` for initial data loading

2. **SearchServiceAdapter.java** - Abstraction layer:

   ```java
   @Service
   public class SearchServiceAdapter {
       @Autowired(required = false)
       private TypesenseService typesenseService;

       @Autowired(required = false)
       private OpenSearchService openSearchService;

       @Value("${storycove.search.provider:typesense}")
       private String searchProvider;

       public SearchResultDto<StorySearchDto> searchStories(...) {
           return "opensearch".equals(searchProvider)
               ? openSearchService.searchStories(...)
               : typesenseService.searchStories(...);
       }
   }
   ```

3. **Basic Query Implementation:**
   - Full-text search across title/description/author
   - Basic filtering (tags, rating, word count)
   - Pagination and sorting

**Success Criteria:**

- Basic search functionality working
- Service abstraction layer functional
- Can switch between Typesense and OpenSearch via configuration
### Phase 3: Advanced Features Implementation (Week 3)

**Objectives:**

- Implement complex filtering (all 15+ filter types)
- Add random story functionality
- Implement faceting/aggregations
- Add autocomplete/suggestions

**Deliverables:**

1. **Complex Query Builder:**
   - All filter conditions from original implementation
   - Date range filtering with proper timezone handling
   - Boolean logic for reading status, coverage, series filters

2. **Random Story Implementation:**

   ```java
   public Optional<UUID> getRandomStoryId(String searchQuery, List<String> tags, Long seed, ...) {
       BoolQueryBuilder baseQuery = buildFilterQuery(searchQuery, tags, ...);

       RandomScoreFunctionBuilder randomFn = ScoreFunctionBuilders.randomFunction();
       if (seed != null) {
           // A seeded random_score also needs a field to derive per-document values
           randomFn.seed(seed.intValue()).setField("_seq_no");
       }
       QueryBuilder randomQuery = QueryBuilders.functionScoreQuery(baseQuery, randomFn);

       SearchRequest request = new SearchRequest("stories-" + getCurrentLibraryId())
           .source(new SearchSourceBuilder()
               .query(randomQuery)
               .size(1)
               .fetchSource(new String[]{"id"}, null));

       // Execute and return result
   }
   ```

3. **Faceting Implementation:**
   - Tag aggregations with counts
   - Rating range aggregations
   - Author aggregations
   - Custom facet builders

4. **Autocomplete Service:**
   - Suggest-based implementation using completion fields
   - Prefix matching for story titles and author names
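A sketch of the suggest-based approach: a `completion` sub-field on the title, queried with a prefix (the field and suggester names are illustrative):

```json
{
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "fields": {
          "suggest": {"type": "completion"}
        }
      }
    }
  }
}

POST stories-lib1/_search
{
  "suggest": {
    "title_suggest": {
      "prefix": "dra",
      "completion": {"field": "title.suggest", "size": 5}
    }
  }
}
```

Completion fields build an in-memory FST at index time, so prefix lookups stay fast regardless of index size, at the cost of some heap per index.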
**Success Criteria:**

- All filter conditions working correctly
- Random story selection reliable with seed support
- Faceting returns accurate counts
- Autocomplete responsive and accurate
### Phase 4: Data Migration & Parallel Operation (Week 4)

**Objectives:**

- Implement bulk data migration from database
- Enable parallel operation (write to both systems)
- Comprehensive testing of OpenSearch functionality

**Deliverables:**

1. **Migration Service:**

   ```java
   @Service
   public class SearchMigrationService {
       public void performFullMigration() {
           // Migrate all libraries
           List<Library> libraries = libraryService.findAll();
           for (Library library : libraries) {
               migrateLibraryData(library);
           }
       }

       private void migrateLibraryData(Library library) {
           // Create indexes for library
           // Bulk load stories, authors, collections
           // Verify data integrity
       }
   }
   ```

2. **Dual-Write Implementation:**
   - Modify all entity update operations to write to both systems
   - Add configuration flag for dual-write mode
   - Error handling for partial failures
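The partial-failure requirement above can be sketched framework-free. `SearchWriter` and `DualWriter` are illustrative stand-ins for the real Typesense/OpenSearch services, not StoryCove types: the primary write must succeed, while a secondary failure is recorded for later re-sync instead of failing the user-facing operation.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the dual-write pattern with partial-failure handling.
public class DualWriteSketch {

    interface SearchWriter {
        void indexStory(String id, String title);
    }

    static final class DualWriter implements SearchWriter {
        private final SearchWriter primary;   // e.g. Typesense during migration
        private final SearchWriter secondary; // e.g. OpenSearch
        private final List<String> failedIds = new ArrayList<>(); // re-sync queue

        DualWriter(SearchWriter primary, SearchWriter secondary) {
            this.primary = primary;
            this.secondary = secondary;
        }

        @Override
        public void indexStory(String id, String title) {
            primary.indexStory(id, title); // primary failures propagate to the caller
            try {
                secondary.indexStory(id, title);
            } catch (RuntimeException e) {
                failedIds.add(id); // partial failure: remember the id for re-sync
            }
        }

        List<String> failedIds() {
            return failedIds;
        }
    }

    public static void main(String[] args) {
        List<String> primaryIndex = new ArrayList<>();
        DualWriter writer = new DualWriter(
                (id, title) -> primaryIndex.add(id),
                (id, title) -> { throw new RuntimeException("secondary down"); });
        writer.indexStory("s1", "A Title");
        System.out.println(primaryIndex);       // [s1]
        System.out.println(writer.failedIds()); // [s1]
    }
}
```

The asymmetry is deliberate: while Typesense is still the system of record, its failures must surface to the caller, whereas OpenSearch failures only feed the re-sync queue.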

3. **Data Validation Tools:**
   - Compare search result counts between systems
   - Validate random story selection consistency
   - Check faceting accuracy
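The count-comparison tool reduces to a small piece of logic once per-index document counts have been fetched from both systems. In this standalone sketch the two maps stand in for the real Typesense and OpenSearch count API calls:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the count-comparison validation step.
public class CountValidationSketch {

    // Returns the names of indexes whose document counts disagree,
    // including indexes missing entirely from the second system.
    static List<String> findMismatches(Map<String, Long> typesenseCounts,
                                       Map<String, Long> openSearchCounts) {
        List<String> mismatches = new ArrayList<>();
        for (Map.Entry<String, Long> entry : typesenseCounts.entrySet()) {
            Long other = openSearchCounts.get(entry.getKey());
            if (!entry.getValue().equals(other)) {
                mismatches.add(entry.getKey());
            }
        }
        return mismatches;
    }

    public static void main(String[] args) {
        Map<String, Long> ts = Map.of("stories-lib1", 1200L, "authors-lib1", 85L);
        Map<String, Long> os = Map.of("stories-lib1", 1200L, "authors-lib1", 84L);
        System.out.println(findMismatches(ts, os)); // [authors-lib1]
    }
}
```

During dual-write, counts can lag briefly; in practice this check would run after a refresh of both systems and tolerate retries before flagging a real mismatch.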

**Success Criteria:**

- Complete data migration with 100% accuracy
- Dual-write operations working without errors
- Search result parity between systems verified
### Phase 5: API Integration & Testing (Week 5)

**Objectives:**

- Update controller endpoints to use OpenSearch
- Comprehensive integration testing
- Performance testing and optimization

**Deliverables:**

1. **Controller Updates:**
   - Modify controllers to use SearchServiceAdapter
   - Add migration controls for gradual rollout
   - Implement A/B testing capability

2. **Integration Tests:**

   ```java
   @SpringBootTest
   @TestMethodOrder(OrderAnnotation.class)
   public class OpenSearchIntegrationTest {
       @Test
       @Order(1)
       void testBasicSearch() {
           // Test basic story search functionality
       }

       @Test
       @Order(2)
       void testComplexFiltering() {
           // Test all 15+ filter conditions
       }

       @Test
       @Order(3)
       void testRandomStory() {
           // Test random story with and without seed
       }

       @Test
       @Order(4)
       void testFaceting() {
           // Test aggregation accuracy
       }
   }
   ```

3. **Performance Testing:**
   - Load testing with realistic data volumes
   - Query performance benchmarking
   - Memory usage monitoring

**Success Criteria:**

- All integration tests passing
- Performance meets or exceeds Typesense baseline
- Memory usage within acceptable limits (< 2GB)
### Phase 6: Production Rollout & Monitoring (Week 6)

**Objectives:**

- Production deployment with feature flags
- Gradual user migration with monitoring
- Rollback capability testing

**Deliverables:**

1. **Feature Flag Implementation:**

   ```java
   @Component
   public class SearchFeatureFlags {
       @Value("${storycove.search.opensearch.enabled:false}")
       private boolean openSearchEnabled;

       @Value("${storycove.search.opensearch.percentage:0}")
       private int rolloutPercentage;

       public boolean shouldUseOpenSearch(String userId) {
           if (!openSearchEnabled) return false;
           // floorMod: hashCode() may be negative, which would skew the bucketing
           return Math.floorMod(userId.hashCode(), 100) < rolloutPercentage;
       }
   }
   ```
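One detail worth pinning down in the percentage check: Java's `String.hashCode()` can be negative, and the `%` operator preserves the sign, so deterministic bucketing should use `Math.floorMod` to keep every user in [0, 100). A standalone sketch (names illustrative):

```java
// Standalone sketch of deterministic percentage bucketing for a rollout flag.
// Math.floorMod keeps the bucket in [0, 100) even when hashCode() is negative;
// plain % would yield a negative bucket for roughly half of all user ids and
// misclassify them.
public class RolloutBucketing {

    static boolean shouldUseOpenSearch(String userId, int rolloutPercentage) {
        int bucket = Math.floorMod(userId.hashCode(), 100);
        return bucket < rolloutPercentage; // sticky: same user, same bucket
    }

    public static void main(String[] args) {
        System.out.println(shouldUseOpenSearch("user-42", 100)); // true: everyone at 100%
        System.out.println(shouldUseOpenSearch("user-42", 0));   // false: no one at 0%
        // "polygenelubricants".hashCode() is Integer.MIN_VALUE, the worst case:
        int bucket = Math.floorMod("polygenelubricants".hashCode(), 100);
        System.out.println(bucket >= 0 && bucket < 100); // true
    }
}
```

The bucketing is sticky by construction, so a given user keeps seeing the same search backend for the whole rollout instead of flapping between the two.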

2. **Monitoring & Alerting:**
   - Query performance metrics
   - Error rate monitoring
   - Search result accuracy validation
   - User experience metrics

3. **Rollback Procedures:**
   - Immediate rollback to Typesense capability
   - Data consistency verification
   - Performance rollback triggers

**Success Criteria:**

- Successful production deployment
- Zero user-facing issues during rollout
- Monitoring showing improved performance
- Rollback procedures validated
### Phase 7: Cleanup & Documentation (Week 7)

**Objectives:**

- Remove Typesense dependencies
- Update documentation
- Performance optimization

**Deliverables:**

1. **Code Cleanup:**
   - Remove TypesenseService and related classes
   - Clean up Docker Compose configuration
   - Remove unused dependencies

2. **Documentation Updates:**
   - Update deployment documentation
   - Search API documentation
   - Troubleshooting guides

3. **Performance Tuning:**
   - Index optimization
   - Query performance tuning
   - Resource allocation optimization

**Success Criteria:**

- Typesense completely removed
- Documentation up to date
- Optimized performance in production

---
## Data Migration Strategy

### Pre-Migration Validation

**Data Integrity Checks:**

1. Count validation: Ensure all stories/authors/collections are present
2. Field validation: Verify all required fields are populated
3. Relationship validation: Check author-story and series-story relationships
4. Library separation: Ensure proper multi-library data isolation

**Migration Process:**

1. **Index Creation:**

   ```java
   // Create indexes with proper mappings for each library
   for (Library library : libraries) {
       String storiesIndex = "stories-" + library.getId();
       createIndexWithMapping(storiesIndex, getStoriesMapping());
       createIndexWithMapping("authors-" + library.getId(), getAuthorsMapping());
       createIndexWithMapping("collections-" + library.getId(), getCollectionsMapping());
   }
   ```

2. **Bulk Data Loading:**

   ```java
   // Load in batches to manage memory usage
   int batchSize = 1000;
   List<Story> allStories = storyService.findByLibraryId(libraryId);

   for (int i = 0; i < allStories.size(); i += batchSize) {
       List<Story> batch = allStories.subList(i, Math.min(i + batchSize, allStories.size()));
       List<StoryDocument> documents = batch.stream()
           .map(this::convertToSearchDocument)
           .collect(Collectors.toList());

       bulkIndexStories(documents, "stories-" + libraryId);
   }
   ```

3. **Post-Migration Validation:**
   - Count comparison between database and OpenSearch
   - Spot-check random records for field accuracy
   - Test search functionality with known queries
   - Verify faceting counts match expected values

### Rollback Strategy

**Immediate Rollback Triggers:**

- Search error rate > 1%
- Query performance degradation > 50%
- Data inconsistency detected
- Memory usage > 4GB sustained

**Rollback Process:**

1. Update feature flag to disable OpenSearch
2. Verify Typesense still operational
3. Clear OpenSearch indexes to free resources
4. Investigate and document issues

**Data Consistency During Rollback:**

- Continue dual-write during investigation
- Re-sync any missed updates to OpenSearch
- Validate data integrity before retry

---
## Testing Strategy

### Unit Tests

**OpenSearchService Unit Tests:**

```java
@ExtendWith(MockitoExtension.class)
class OpenSearchServiceTest {
    @Mock private OpenSearchClient client;
    @InjectMocks private OpenSearchService service;

    @Test
    void testSearchStoriesBasicQuery() {
        // Mock OpenSearch response
        // Test basic search functionality
    }

    @Test
    void testComplexFilterQuery() {
        // Test complex boolean query building
    }

    @Test
    void testRandomStorySelection() {
        // Test random query with seed
    }
}
```

**Query Builder Tests:**

- Test all 15+ filter conditions
- Validate query structure and parameters
- Test edge cases and null handling

### Integration Tests

**Full Search Integration:**

```java
@SpringBootTest
@Testcontainers
class OpenSearchIntegrationTest {
    @Container
    static OpenSearchContainer opensearch = new OpenSearchContainer("opensearchproject/opensearch:2.11.0");

    @Test
    void testEndToEndStorySearch() {
        // Insert test data
        // Perform search via controller
        // Validate results
    }
}
```
### Performance Tests

**Load Testing Scenarios:**

1. **Concurrent Search Load:**
   - 50 concurrent users performing searches
   - Mixed query complexity
   - Duration: 10 minutes

2. **Bulk Indexing Performance:**
   - Index 10,000 stories in batches
   - Measure throughput and memory usage

3. **Random Query Performance:**
   - 1,000 random story requests with different seeds
   - Compare with Typesense baseline

### Acceptance Tests

**Functional Requirements:**

- All existing search functionality preserved
- Improved random story selection reliability
- Faceting accuracy maintained
- Multi-library separation working

**Performance Requirements:**

- Search response time < 100ms for 95th percentile
- Random story selection < 50ms
- Index update operations < 10ms
- Memory usage < 2GB in production

---
## Risk Analysis & Mitigation

### Technical Risks

**Risk: OpenSearch Memory Usage**
- *Probability: Medium*
- *Impact: High*
- *Mitigation: Resource monitoring, index optimization, container limits*

**Risk: Query Performance Regression**
- *Probability: Low*
- *Impact: High*
- *Mitigation: Performance testing, query optimization, caching layer*

**Risk: Data Migration Accuracy**
- *Probability: Low*
- *Impact: Critical*
- *Mitigation: Comprehensive validation, dual-write verification, rollback procedures*

**Risk: Complex Filter Compatibility**
- *Probability: Medium*
- *Impact: Medium*
- *Mitigation: Extensive testing, gradual rollout, feature flags*

### Operational Risks

**Risk: Production Deployment Issues**
- *Probability: Medium*
- *Impact: High*
- *Mitigation: Staging environment testing, gradual rollout, immediate rollback capability*

**Risk: Team Learning Curve**
- *Probability: High*
- *Impact: Low*
- *Mitigation: Documentation, training, gradual responsibility transfer*

### Business Continuity

**Zero-Downtime Requirements:**

- Maintain Typesense during entire migration
- Feature flag-based switching
- Immediate rollback capability
- Health monitoring with automated alerts

---
## Success Criteria

### Functional Requirements ✅

- [ ] All search functionality migrated successfully
- [ ] Random story selection working reliably with seeds
- [ ] Complex filtering (15+ conditions) working accurately
- [ ] Faceting/aggregation results match expected values
- [ ] Multi-library support maintained
- [ ] Autocomplete functionality preserved

### Performance Requirements ✅

- [ ] Search response time ≤ 100ms (95th percentile)
- [ ] Random story selection ≤ 50ms
- [ ] Index operations ≤ 10ms
- [ ] Memory usage ≤ 2GB sustained
- [ ] Zero search downtime during migration

### Technical Requirements ✅

- [ ] Code quality maintained (test coverage ≥ 80%)
- [ ] Documentation updated and comprehensive
- [ ] Monitoring and alerting implemented
- [ ] Rollback procedures tested and validated
- [ ] Typesense dependencies cleanly removed

---

## Timeline Summary

| Phase | Duration | Key Deliverables | Risk Level |
|-------|----------|------------------|------------|
| 1. Infrastructure | 1 week | Docker setup, basic service | Low |
| 2. Core Service | 1 week | Basic search operations | Medium |
| 3. Advanced Features | 1 week | Complex filtering, random queries | High |
| 4. Data Migration | 1 week | Full data migration, dual-write | High |
| 5. API Integration | 1 week | Controller updates, testing | Medium |
| 6. Production Rollout | 1 week | Gradual deployment, monitoring | High |
| 7. Cleanup | 1 week | Remove Typesense, documentation | Low |

**Total Estimated Duration: 7 weeks**

---
## Configuration Management

### Environment Variables

```bash
# OpenSearch Configuration
OPENSEARCH_HOST=opensearch
OPENSEARCH_PORT=9200
OPENSEARCH_USERNAME=admin
OPENSEARCH_PASSWORD=${OPENSEARCH_PASSWORD}

# Feature Flags
STORYCOVE_OPENSEARCH_ENABLED=true
STORYCOVE_SEARCH_PROVIDER=opensearch
STORYCOVE_SEARCH_DUAL_WRITE=true
STORYCOVE_OPENSEARCH_ROLLOUT_PERCENTAGE=100

# Performance Tuning
OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx2g
STORYCOVE_SEARCH_BATCH_SIZE=1000
STORYCOVE_SEARCH_TIMEOUT=30s
```

### Docker Compose Updates

```yaml
# Add to docker-compose.yml
opensearch:
  image: opensearchproject/opensearch:2.11.0
  environment:
    - discovery.type=single-node
    - DISABLE_SECURITY_PLUGIN=true
    - OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx2g
  volumes:
    - opensearch_data:/usr/share/opensearch/data
  networks:
    - storycove-network

volumes:
  opensearch_data:
```

---

## Conclusion

This specification provides a comprehensive roadmap for migrating StoryCove from Typesense to OpenSearch. The phased approach ensures minimal risk while delivering improved reliability and performance, particularly for random story queries.

The parallel implementation strategy allows for thorough validation and provides confidence in the migration while maintaining the ability to roll back if issues arise. Upon successful completion, StoryCove will have a more robust and scalable search infrastructure that better supports its growth and feature requirements.

**Next Steps:**

1. Review and approve this specification
2. Set up development environment with OpenSearch
3. Begin Phase 1 implementation
4. Establish monitoring and success metrics
5. Execute migration according to timeline

---

*Document Version: 1.0*
*Last Updated: 2025-01-17*
*Author: Claude Code Assistant*
@@ -84,9 +84,25 @@
             <artifactId>httpclient5</artifactId>
         </dependency>
         <dependency>
-            <groupId>org.opensearch.client</groupId>
-            <artifactId>opensearch-java</artifactId>
-            <version>3.2.0</version>
+            <groupId>org.apache.solr</groupId>
+            <artifactId>solr-solrj</artifactId>
+            <version>9.9.0</version>
         </dependency>
+        <dependency>
+            <groupId>org.eclipse.jetty</groupId>
+            <artifactId>jetty-client</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.eclipse.jetty</groupId>
+            <artifactId>jetty-util</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.eclipse.jetty</groupId>
+            <artifactId>jetty-http</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.eclipse.jetty</groupId>
+            <artifactId>jetty-io</artifactId>
+        </dependency>
         <dependency>
             <groupId>org.apache.httpcomponents.core5</groupId>
@@ -1,211 +0,0 @@
package com.storycove.config;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;
import org.apache.hc.client5.http.auth.AuthScope;
import org.apache.hc.client5.http.auth.UsernamePasswordCredentials;
import org.apache.hc.client5.http.impl.auth.BasicCredentialsProvider;
import org.apache.hc.client5.http.impl.nio.PoolingAsyncClientConnectionManager;
import org.apache.hc.client5.http.impl.nio.PoolingAsyncClientConnectionManagerBuilder;
import org.apache.hc.client5.http.ssl.ClientTlsStrategyBuilder;
import org.apache.hc.core5.http.HttpHost;
import org.apache.hc.core5.util.Timeout;
import org.opensearch.client.json.jackson.JacksonJsonpMapper;
import org.opensearch.client.opensearch.OpenSearchClient;
import org.opensearch.client.transport.OpenSearchTransport;
import org.opensearch.client.transport.httpclient5.ApacheHttpClient5TransportBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.X509Certificate;

@Configuration
public class OpenSearchConfig {

    private static final Logger logger = LoggerFactory.getLogger(OpenSearchConfig.class);

    private final OpenSearchProperties properties;

    public OpenSearchConfig(@Qualifier("openSearchProperties") OpenSearchProperties properties) {
        this.properties = properties;
    }

    @Bean
    public OpenSearchClient openSearchClient() throws Exception {
        logger.info("Initializing OpenSearch client for profile: {}", properties.getProfile());

        // Create credentials provider
        BasicCredentialsProvider credentialsProvider = createCredentialsProvider();

        // Create SSL context based on environment
        SSLContext sslContext = createSSLContext();

        // Create connection manager with pooling
        PoolingAsyncClientConnectionManager connectionManager = createConnectionManager(sslContext);

        // Create custom ObjectMapper for proper date serialization
        ObjectMapper objectMapper = new ObjectMapper();
        objectMapper.registerModule(new JavaTimeModule());
        objectMapper.disable(com.fasterxml.jackson.databind.SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);

        // Create the transport with all configurations and custom Jackson mapper
        OpenSearchTransport transport = ApacheHttpClient5TransportBuilder
                .builder(new HttpHost(properties.getScheme(), properties.getHost(), properties.getPort()))
                .setMapper(new JacksonJsonpMapper(objectMapper))
                .setHttpClientConfigCallback(httpClientBuilder -> {
                    // Only set credentials provider if authentication is configured
                    if (properties.getUsername() != null && !properties.getUsername().isEmpty() &&
                            properties.getPassword() != null && !properties.getPassword().isEmpty()) {
                        httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider);
                    }

                    httpClientBuilder.setConnectionManager(connectionManager);

                    // Set timeouts
                    httpClientBuilder.setDefaultRequestConfig(
                            org.apache.hc.client5.http.config.RequestConfig.custom()
                                    .setConnectionRequestTimeout(Timeout.ofMilliseconds(properties.getConnection().getTimeout()))
                                    .setResponseTimeout(Timeout.ofMilliseconds(properties.getConnection().getSocketTimeout()))
                                    .build()
                    );

                    return httpClientBuilder;
                })
                .build();

        OpenSearchClient client = new OpenSearchClient(transport);

        // Test connection
        testConnection(client);

        return client;
    }

    private BasicCredentialsProvider createCredentialsProvider() {
        BasicCredentialsProvider credentialsProvider = new BasicCredentialsProvider();

        // Only set credentials if username and password are provided
        if (properties.getUsername() != null && !properties.getUsername().isEmpty() &&
                properties.getPassword() != null && !properties.getPassword().isEmpty()) {
            credentialsProvider.setCredentials(
                    new AuthScope(properties.getHost(), properties.getPort()),
                    new UsernamePasswordCredentials(
                            properties.getUsername(),
                            properties.getPassword().toCharArray()
                    )
            );
            logger.info("OpenSearch credentials configured for user: {}", properties.getUsername());
        } else {
            logger.info("OpenSearch running without authentication (no credentials configured)");
        }

        return credentialsProvider;
    }

    private SSLContext createSSLContext() throws Exception {
        SSLContext sslContext;

        if (isProduction() && !properties.getSecurity().isTrustAllCertificates()) {
            // Production SSL configuration with proper certificate validation
            sslContext = createProductionSSLContext();
        } else {
            // Development SSL configuration (trust all certificates)
            sslContext = createDevelopmentSSLContext();
        }

        return sslContext;
    }

    private SSLContext createProductionSSLContext() throws Exception {
        logger.info("Configuring production SSL context with certificate validation");

        SSLContext sslContext = SSLContext.getInstance("TLS");

        // Load custom keystore/truststore if provided
        if (properties.getSecurity().getTruststorePath() != null) {
            KeyStore trustStore = KeyStore.getInstance("JKS");
            try (FileInputStream fis = new FileInputStream(properties.getSecurity().getTruststorePath())) {
                trustStore.load(fis, properties.getSecurity().getTruststorePassword().toCharArray());
            }

            javax.net.ssl.TrustManagerFactory tmf =
                    javax.net.ssl.TrustManagerFactory.getInstance(javax.net.ssl.TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(trustStore);

            sslContext.init(null, tmf.getTrustManagers(), null);
        } else {
            // Use default system SSL context for production
            sslContext.init(null, null, null);
        }

        return sslContext;
    }

    private SSLContext createDevelopmentSSLContext() throws Exception {
        logger.warn("Configuring development SSL context - TRUSTING ALL CERTIFICATES (not for production!)");

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, new TrustManager[] {
                new X509TrustManager() {
                    public X509Certificate[] getAcceptedIssuers() { return null; }
                    public void checkClientTrusted(X509Certificate[] certs, String authType) {}
                    public void checkServerTrusted(X509Certificate[] certs, String authType) {}
                }
        }, null);

        return sslContext;
    }

    private PoolingAsyncClientConnectionManager createConnectionManager(SSLContext sslContext) {
        PoolingAsyncClientConnectionManagerBuilder builder = PoolingAsyncClientConnectionManagerBuilder.create();

        // Configure TLS strategy
        if (properties.getScheme().equals("https")) {
            if (isProduction() && properties.getSecurity().isSslVerification()) {
                // Production TLS with hostname verification
                builder.setTlsStrategy(ClientTlsStrategyBuilder.create()
                        .setSslContext(sslContext)
                        .build());
            } else {
                // Development TLS without hostname verification
                builder.setTlsStrategy(ClientTlsStrategyBuilder.create()
                        .setSslContext(sslContext)
                        .setHostnameVerifier((hostname, session) -> true)
                        .build());
            }
        }

        PoolingAsyncClientConnectionManager connectionManager = builder.build();

        // Configure connection pool settings
        connectionManager.setMaxTotal(properties.getConnection().getMaxConnectionsTotal());
        connectionManager.setDefaultMaxPerRoute(properties.getConnection().getMaxConnectionsPerRoute());

        return connectionManager;
    }

    private boolean isProduction() {
        return "production".equalsIgnoreCase(properties.getProfile());
    }

    private void testConnection(OpenSearchClient client) {
        try {
            var response = client.info();
            logger.info("OpenSearch connection successful - Version: {}, Cluster: {}",
                    response.version().number(),
                    response.clusterName());
        } catch (Exception e) {
            logger.warn("OpenSearch connection test failed during initialization: {}", e.getMessage());
            logger.debug("OpenSearch connection test full error", e);
            // Don't throw exception here - let the client be created and handle failures in service methods
        }
    }
}
@@ -1,164 +0,0 @@
package com.storycove.config;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

@Component
@ConfigurationProperties(prefix = "storycove.opensearch")
public class OpenSearchProperties {

    private String host = "localhost";
    private int port = 9200;
    private String scheme = "https";
    private String username = "admin";
    private String password;
    private String profile = "development";

    private Security security = new Security();
    private Connection connection = new Connection();
    private Indices indices = new Indices();
    private Bulk bulk = new Bulk();
    private Health health = new Health();

    // Getters and setters
    public String getHost() { return host; }
    public void setHost(String host) { this.host = host; }

    public int getPort() { return port; }
    public void setPort(int port) { this.port = port; }

    public String getScheme() { return scheme; }
    public void setScheme(String scheme) { this.scheme = scheme; }

    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }

    public String getPassword() { return password; }
    public void setPassword(String password) { this.password = password; }

    public String getProfile() { return profile; }
    public void setProfile(String profile) { this.profile = profile; }

    public Security getSecurity() { return security; }
    public void setSecurity(Security security) { this.security = security; }

    public Connection getConnection() { return connection; }
    public void setConnection(Connection connection) { this.connection = connection; }

    public Indices getIndices() { return indices; }
    public void setIndices(Indices indices) { this.indices = indices; }

    public Bulk getBulk() { return bulk; }
    public void setBulk(Bulk bulk) { this.bulk = bulk; }

    public Health getHealth() { return health; }
    public void setHealth(Health health) { this.health = health; }

    public static class Security {
        private boolean sslVerification = false;
        private boolean trustAllCertificates = true;
        private String keystorePath;
        private String keystorePassword;
        private String truststorePath;
        private String truststorePassword;

        // Getters and setters
        public boolean isSslVerification() { return sslVerification; }
        public void setSslVerification(boolean sslVerification) { this.sslVerification = sslVerification; }

        public boolean isTrustAllCertificates() { return trustAllCertificates; }
        public void setTrustAllCertificates(boolean trustAllCertificates) { this.trustAllCertificates = trustAllCertificates; }

        public String getKeystorePath() { return keystorePath; }
        public void setKeystorePath(String keystorePath) { this.keystorePath = keystorePath; }

        public String getKeystorePassword() { return keystorePassword; }
        public void setKeystorePassword(String keystorePassword) { this.keystorePassword = keystorePassword; }

        public String getTruststorePath() { return truststorePath; }
        public void setTruststorePath(String truststorePath) { this.truststorePath = truststorePath; }

        public String getTruststorePassword() { return truststorePassword; }
        public void setTruststorePassword(String truststorePassword) { this.truststorePassword = truststorePassword; }
    }

    public static class Connection {
        private int timeout = 30000;
        private int socketTimeout = 60000;
        private int maxConnectionsPerRoute = 10;
        private int maxConnectionsTotal = 30;
        private boolean retryOnFailure = true;
        private int maxRetries = 3;

        // Getters and setters
        public int getTimeout() { return timeout; }
        public void setTimeout(int timeout) { this.timeout = timeout; }

        public int getSocketTimeout() { return socketTimeout; }
        public void setSocketTimeout(int socketTimeout) { this.socketTimeout = socketTimeout; }

        public int getMaxConnectionsPerRoute() { return maxConnectionsPerRoute; }
        public void setMaxConnectionsPerRoute(int maxConnectionsPerRoute) { this.maxConnectionsPerRoute = maxConnectionsPerRoute; }

        public int getMaxConnectionsTotal() { return maxConnectionsTotal; }
        public void setMaxConnectionsTotal(int maxConnectionsTotal) { this.maxConnectionsTotal = maxConnectionsTotal; }

        public boolean isRetryOnFailure() { return retryOnFailure; }
        public void setRetryOnFailure(boolean retryOnFailure) { this.retryOnFailure = retryOnFailure; }

        public int getMaxRetries() { return maxRetries; }
        public void setMaxRetries(int maxRetries) { this.maxRetries = maxRetries; }
    }

    public static class Indices {
        private int defaultShards = 1;
        private int defaultReplicas = 0;
        private String refreshInterval = "1s";

        // Getters and setters
        public int getDefaultShards() { return defaultShards; }
        public void setDefaultShards(int defaultShards) { this.defaultShards = defaultShards; }

        public int getDefaultReplicas() { return defaultReplicas; }
        public void setDefaultReplicas(int defaultReplicas) { this.defaultReplicas = defaultReplicas; }

        public String getRefreshInterval() { return refreshInterval; }
        public void setRefreshInterval(String refreshInterval) { this.refreshInterval = refreshInterval; }
    }

    public static class Bulk {
        private int actions = 1000;
        private long size = 5242880; // 5MB
        private int timeout = 10000;
        private int concurrentRequests = 1;

        // Getters and setters
        public int getActions() { return actions; }
        public void setActions(int actions) { this.actions = actions; }

        public long getSize() { return size; }
        public void setSize(long size) { this.size = size; }

        public int getTimeout() { return timeout; }
        public void setTimeout(int timeout) { this.timeout = timeout; }

        public int getConcurrentRequests() { return concurrentRequests; }
        public void setConcurrentRequests(int concurrentRequests) { this.concurrentRequests = concurrentRequests; }
    }

    public static class Health {
        private int checkInterval = 30000;
        private int slowQueryThreshold = 5000;
        private boolean enableMetrics = true;

        // Getters and setters
        public int getCheckInterval() { return checkInterval; }
        public void setCheckInterval(int checkInterval) { this.checkInterval = checkInterval; }

        public int getSlowQueryThreshold() { return slowQueryThreshold; }
        public void setSlowQueryThreshold(int slowQueryThreshold) { this.slowQueryThreshold = slowQueryThreshold; }

        public boolean isEnableMetrics() { return enableMetrics; }
        public void setEnableMetrics(boolean enableMetrics) { this.enableMetrics = enableMetrics; }
    }
}
backend/src/main/java/com/storycove/config/SolrConfig.java (new file, 57 lines)
@@ -0,0 +1,57 @@
package com.storycove.config;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@ConditionalOnProperty(
        value = "storycove.search.engine",
        havingValue = "solr",
        matchIfMissing = false
)
public class SolrConfig {

    private static final Logger logger = LoggerFactory.getLogger(SolrConfig.class);

    private final SolrProperties properties;

    public SolrConfig(SolrProperties properties) {
        this.properties = properties;
    }

    @Bean
    public SolrClient solrClient() {
        logger.info("Initializing Solr client with URL: {}", properties.getUrl());

        HttpSolrClient.Builder builder = new HttpSolrClient.Builder(properties.getUrl())
                .withConnectionTimeout(properties.getConnection().getTimeout())
                .withSocketTimeout(properties.getConnection().getSocketTimeout());

        SolrClient client = builder.build();

        logger.info("Solr running without authentication");

        // Test connection
        testConnection(client);

        return client;
    }

    private void testConnection(SolrClient client) {
        try {
            // Test connection by pinging the server
            var response = client.ping();
            logger.info("Solr connection successful - Response time: {}ms",
                    response.getElapsedTime());
        } catch (Exception e) {
            logger.warn("Solr connection test failed during initialization: {}", e.getMessage());
            logger.debug("Solr connection test full error", e);
            // Don't throw exception here - let the client be created and handle failures in service methods
        }
    }
}
backend/src/main/java/com/storycove/config/SolrProperties.java (new file, 140 lines)
@@ -0,0 +1,140 @@
package com.storycove.config;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

@Component
@ConfigurationProperties(prefix = "storycove.solr")
public class SolrProperties {

    private String url = "http://localhost:8983/solr";
    private String username;
    private String password;

    private Cores cores = new Cores();
    private Connection connection = new Connection();
    private Query query = new Query();
    private Commit commit = new Commit();
    private Health health = new Health();

    // Getters and setters
    public String getUrl() { return url; }
    public void setUrl(String url) { this.url = url; }

    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }

    public String getPassword() { return password; }
    public void setPassword(String password) { this.password = password; }

    public Cores getCores() { return cores; }
    public void setCores(Cores cores) { this.cores = cores; }

    public Connection getConnection() { return connection; }
    public void setConnection(Connection connection) { this.connection = connection; }

    public Query getQuery() { return query; }
    public void setQuery(Query query) { this.query = query; }

    public Commit getCommit() { return commit; }
    public void setCommit(Commit commit) { this.commit = commit; }

    public Health getHealth() { return health; }
    public void setHealth(Health health) { this.health = health; }

    public static class Cores {
        private String stories = "storycove_stories";
        private String authors = "storycove_authors";

        // Getters and setters
        public String getStories() { return stories; }
        public void setStories(String stories) { this.stories = stories; }

        public String getAuthors() { return authors; }
        public void setAuthors(String authors) { this.authors = authors; }
    }

    public static class Connection {
        private int timeout = 30000;
        private int socketTimeout = 60000;
        private int maxConnectionsPerRoute = 10;
        private int maxConnectionsTotal = 30;
        private boolean retryOnFailure = true;
        private int maxRetries = 3;

        // Getters and setters
        public int getTimeout() { return timeout; }
        public void setTimeout(int timeout) { this.timeout = timeout; }

        public int getSocketTimeout() { return socketTimeout; }
        public void setSocketTimeout(int socketTimeout) { this.socketTimeout = socketTimeout; }

        public int getMaxConnectionsPerRoute() { return maxConnectionsPerRoute; }
        public void setMaxConnectionsPerRoute(int maxConnectionsPerRoute) { this.maxConnectionsPerRoute = maxConnectionsPerRoute; }

        public int getMaxConnectionsTotal() { return maxConnectionsTotal; }
        public void setMaxConnectionsTotal(int maxConnectionsTotal) { this.maxConnectionsTotal = maxConnectionsTotal; }

        public boolean isRetryOnFailure() { return retryOnFailure; }
        public void setRetryOnFailure(boolean retryOnFailure) { this.retryOnFailure = retryOnFailure; }

        public int getMaxRetries() { return maxRetries; }
        public void setMaxRetries(int maxRetries) { this.maxRetries = maxRetries; }
    }

    public static class Query {
        private int defaultRows = 10;
        private int maxRows = 1000;
        private String defaultOperator = "AND";
        private boolean highlight = true;
        private boolean facets = true;

        // Getters and setters
        public int getDefaultRows() { return defaultRows; }
        public void setDefaultRows(int defaultRows) { this.defaultRows = defaultRows; }

        public int getMaxRows() { return maxRows; }
        public void setMaxRows(int maxRows) { this.maxRows = maxRows; }

        public String getDefaultOperator() { return defaultOperator; }
        public void setDefaultOperator(String defaultOperator) { this.defaultOperator = defaultOperator; }

        public boolean isHighlight() { return highlight; }
        public void setHighlight(boolean highlight) { this.highlight = highlight; }

        public boolean isFacets() { return facets; }
        public void setFacets(boolean facets) { this.facets = facets; }
    }

    public static class Commit {
        private boolean softCommit = true;
        private int commitWithin = 1000;
        private boolean waitSearcher = false;

        // Getters and setters
        public boolean isSoftCommit() { return softCommit; }
        public void setSoftCommit(boolean softCommit) { this.softCommit = softCommit; }

        public int getCommitWithin() { return commitWithin; }
        public void setCommitWithin(int commitWithin) { this.commitWithin = commitWithin; }

        public boolean isWaitSearcher() { return waitSearcher; }
        public void setWaitSearcher(boolean waitSearcher) { this.waitSearcher = waitSearcher; }
    }

    public static class Health {
        private int checkInterval = 30000;
        private int slowQueryThreshold = 5000;
        private boolean enableMetrics = true;

        // Getters and setters
        public int getCheckInterval() { return checkInterval; }
        public void setCheckInterval(int checkInterval) { this.checkInterval = checkInterval; }

        public int getSlowQueryThreshold() { return slowQueryThreshold; }
        public void setSlowQueryThreshold(int slowQueryThreshold) { this.slowQueryThreshold = slowQueryThreshold; }

        public boolean isEnableMetrics() { return enableMetrics; }
        public void setEnableMetrics(boolean enableMetrics) { this.enableMetrics = enableMetrics; }
    }
}
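The two classes above bind configuration under the `storycove.search.engine` and `storycove.solr` prefixes via Spring Boot's relaxed binding. For reference only, an `application.yml` fragment matching these defaults could look like the sketch below; it is not part of this commit, and the values simply restate the Java field defaults.

```yaml
storycove:
  search:
    engine: solr              # activates SolrConfig via @ConditionalOnProperty
  solr:
    url: http://localhost:8983/solr
    cores:
      stories: storycove_stories
      authors: storycove_authors
    connection:
      timeout: 30000          # connection timeout in ms
      socket-timeout: 60000   # socket timeout in ms
    commit:
      soft-commit: true
      commit-within: 1000     # ms
```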
@@ -3,7 +3,7 @@ package com.storycove.controller;
 import com.storycove.entity.Author;
 import com.storycove.entity.Story;
 import com.storycove.service.AuthorService;
-import com.storycove.service.OpenSearchService;
+import com.storycove.service.SolrService;
 import com.storycove.service.SearchServiceAdapter;
 import com.storycove.service.StoryService;
 import org.slf4j.Logger;
@@ -16,7 +16,7 @@ import java.util.List;
 import java.util.Map;

 /**
- * Admin controller for managing OpenSearch operations.
+ * Admin controller for managing Solr operations.
  * Provides endpoints for reindexing and index management.
  */
 @RestController
@@ -35,7 +35,7 @@ public class AdminSearchController {
     private AuthorService authorService;

     @Autowired(required = false)
-    private OpenSearchService openSearchService;
+    private SolrService solrService;

     /**
      * Get current search status
@@ -48,7 +48,7 @@ public class AdminSearchController {
             return ResponseEntity.ok(Map.of(
                 "primaryEngine", status.getPrimaryEngine(),
                 "dualWrite", status.isDualWrite(),
-                "openSearchAvailable", status.isOpenSearchAvailable()
+                "solrAvailable", status.isSolrAvailable()
             ));
         } catch (Exception e) {
             logger.error("Error getting search status", e);
@@ -59,17 +59,17 @@ public class AdminSearchController {
     }

     /**
-     * Reindex all data in OpenSearch
+     * Reindex all data in Solr
      */
-    @PostMapping("/opensearch/reindex")
-    public ResponseEntity<Map<String, Object>> reindexOpenSearch() {
+    @PostMapping("/solr/reindex")
+    public ResponseEntity<Map<String, Object>> reindexSolr() {
         try {
-            logger.info("Starting OpenSearch full reindex");
+            logger.info("Starting Solr full reindex");

             if (!searchServiceAdapter.isSearchServiceAvailable()) {
                 return ResponseEntity.badRequest().body(Map.of(
                     "success", false,
-                    "error", "OpenSearch is not available or healthy"
+                    "error", "Solr is not available or healthy"
                 ));
             }

@@ -77,14 +77,14 @@ public class AdminSearchController {
             List<Story> allStories = storyService.findAllWithAssociations();
             List<Author> allAuthors = authorService.findAllWithStories();

-            // Bulk index directly in OpenSearch
-            if (openSearchService != null) {
-                openSearchService.bulkIndexStories(allStories);
-                openSearchService.bulkIndexAuthors(allAuthors);
+            // Bulk index directly in Solr
+            if (solrService != null) {
+                solrService.bulkIndexStories(allStories);
+                solrService.bulkIndexAuthors(allAuthors);
             } else {
                 return ResponseEntity.badRequest().body(Map.of(
                     "success", false,
-                    "error", "OpenSearch service not available"
+                    "error", "Solr service not available"
                 ));
             }

@@ -92,7 +92,7 @@ public class AdminSearchController {

             return ResponseEntity.ok(Map.of(
                 "success", true,
-                "message", String.format("Reindexed %d stories and %d authors in OpenSearch",
+                "message", String.format("Reindexed %d stories and %d authors in Solr",
                     allStories.size(), allAuthors.size()),
                 "storiesCount", allStories.size(),
                 "authorsCount", allAuthors.size(),
@@ -100,36 +100,36 @@ public class AdminSearchController {
             ));

         } catch (Exception e) {
-            logger.error("Error during OpenSearch reindex", e);
+            logger.error("Error during Solr reindex", e);
             return ResponseEntity.internalServerError().body(Map.of(
                 "success", false,
-                "error", "OpenSearch reindex failed: " + e.getMessage()
+                "error", "Solr reindex failed: " + e.getMessage()
             ));
         }
     }

     /**
-     * Recreate OpenSearch indices
+     * Recreate Solr indices
      */
-    @PostMapping("/opensearch/recreate")
-    public ResponseEntity<Map<String, Object>> recreateOpenSearchIndices() {
+    @PostMapping("/solr/recreate")
+    public ResponseEntity<Map<String, Object>> recreateSolrIndices() {
         try {
-            logger.info("Starting OpenSearch indices recreation");
+            logger.info("Starting Solr indices recreation");

             if (!searchServiceAdapter.isSearchServiceAvailable()) {
                 return ResponseEntity.badRequest().body(Map.of(
                     "success", false,
-                    "error", "OpenSearch is not available or healthy"
+                    "error", "Solr is not available or healthy"
                 ));
             }

             // Recreate indices
-            if (openSearchService != null) {
-                openSearchService.recreateIndices();
+            if (solrService != null) {
+                solrService.recreateIndices();
             } else {
                 return ResponseEntity.badRequest().body(Map.of(
                     "success", false,
-                    "error", "OpenSearch service not available"
+                    "error", "Solr service not available"
                 ));
             }

@@ -138,14 +138,14 @@ public class AdminSearchController {
             List<Author> allAuthors = authorService.findAllWithStories();

             // Bulk index after recreation
-            openSearchService.bulkIndexStories(allStories);
-            openSearchService.bulkIndexAuthors(allAuthors);
+            solrService.bulkIndexStories(allStories);
+            solrService.bulkIndexAuthors(allAuthors);

             int totalIndexed = allStories.size() + allAuthors.size();

             return ResponseEntity.ok(Map.of(
                 "success", true,
-                "message", String.format("Recreated OpenSearch indices and indexed %d stories and %d authors",
+                "message", String.format("Recreated Solr indices and indexed %d stories and %d authors",
                     allStories.size(), allAuthors.size()),
                 "storiesCount", allStories.size(),
                 "authorsCount", allAuthors.size(),
@@ -153,10 +153,10 @@ public class AdminSearchController {
             ));

         } catch (Exception e) {
-            logger.error("Error during OpenSearch indices recreation", e);
+            logger.error("Error during Solr indices recreation", e);
             return ResponseEntity.internalServerError().body(Map.of(
                 "success", false,
-                "error", "OpenSearch indices recreation failed: " + e.getMessage()
+                "error", "Solr indices recreation failed: " + e.getMessage()
             ));
         }
     }
@@ -291,7 +291,7 @@ public class CollectionController {
             // Collections are not indexed in search engine yet
             return ResponseEntity.ok(Map.of(
                 "success", true,
-                "message", "Collections indexing not yet implemented in OpenSearch",
+                "message", "Collections indexing not yet implemented in Solr",
                 "count", allCollections.size()
             ));
         } catch (Exception e) {
@@ -34,6 +34,18 @@ public class SearchResultDto<T> {
         this.facets = facets;
     }

+    // Simple constructor for basic search results with facet list
+    public SearchResultDto(List<T> results, long totalHits, int resultCount, List<FacetCountDto> facetsList) {
+        this.results = results;
+        this.totalHits = totalHits;
+        this.page = 0;
+        this.perPage = resultCount;
+        this.query = "";
+        this.searchTimeMs = 0;
+        // Convert list to map if needed - for now just set empty map
+        this.facets = java.util.Collections.emptyMap();
+    }
+
     // Getters and Setters
     public List<T> getResults() {
         return results;
@@ -132,7 +132,7 @@ public class AuthorService {
         validateAuthorForCreate(author);
         Author savedAuthor = authorRepository.save(author);

-        // Index in OpenSearch
+        // Index in Solr
         searchServiceAdapter.indexAuthor(savedAuthor);

         return savedAuthor;
@@ -150,7 +150,7 @@ public class AuthorService {
         updateAuthorFields(existingAuthor, authorUpdates);
         Author savedAuthor = authorRepository.save(existingAuthor);

-        // Update in OpenSearch
+        // Update in Solr
         searchServiceAdapter.updateAuthor(savedAuthor);

         return savedAuthor;
@@ -166,7 +166,7 @@ public class AuthorService {

         authorRepository.delete(author);

-        // Remove from OpenSearch
+        // Remove from Solr
         searchServiceAdapter.deleteAuthor(id);
     }

@@ -175,7 +175,7 @@ public class AuthorService {
         author.addUrl(url);
         Author savedAuthor = authorRepository.save(author);

-        // Update in OpenSearch
+        // Update in Solr
         searchServiceAdapter.updateAuthor(savedAuthor);

         return savedAuthor;
@@ -186,7 +186,7 @@ public class AuthorService {
         author.removeUrl(url);
         Author savedAuthor = authorRepository.save(author);

-        // Update in OpenSearch
+        // Update in Solr
         searchServiceAdapter.updateAuthor(savedAuthor);

         return savedAuthor;
@@ -221,7 +221,7 @@ public class AuthorService {
         logger.debug("Saved author rating: {} for author: {}",
                 refreshedAuthor.getAuthorRating(), refreshedAuthor.getName());

-        // Update in OpenSearch
+        // Update in Solr
         searchServiceAdapter.updateAuthor(refreshedAuthor);

         return refreshedAuthor;
@@ -265,7 +265,7 @@ public class AuthorService {
         author.setAvatarImagePath(avatarPath);
         Author savedAuthor = authorRepository.save(author);

-        // Update in OpenSearch
+        // Update in Solr
         searchServiceAdapter.updateAuthor(savedAuthor);

         return savedAuthor;
@@ -276,7 +276,7 @@ public class AuthorService {
         author.setAvatarImagePath(null);
         Author savedAuthor = authorRepository.save(author);

-        // Update in OpenSearch
+        // Update in Solr
         searchServiceAdapter.updateAuthor(savedAuthor);

         return savedAuthor;
@@ -55,8 +55,8 @@ public class CollectionService {
      */
     public SearchResultDto<Collection> searchCollections(String query, List<String> tags, boolean includeArchived, int page, int limit) {
         // Collections are currently handled at database level, not indexed in search engine
-        // Return empty result for now as collections search is not implemented in OpenSearch
-        logger.warn("Collections search not yet implemented in OpenSearch, returning empty results");
+        // Return empty result for now as collections search is not implemented in Solr
+        logger.warn("Collections search not yet implemented in Solr, returning empty results");
         return new SearchResultDto<>(new ArrayList<>(), 0, page, limit, query != null ? query : "", 0);
     }

@@ -115,7 +115,7 @@ public class LibraryService implements ApplicationContextAware {

     /**
      * Switch to library after authentication with forced reindexing
-     * This ensures OpenSearch is always up-to-date after login
+     * This ensures Solr is always up-to-date after login
      */
     public synchronized void switchToLibraryAfterAuthentication(String libraryId) throws Exception {
         logger.info("Switching to library after authentication: {} (forcing reindex)", libraryId);
@@ -154,15 +154,15 @@ public class LibraryService implements ApplicationContextAware {

         // Set new active library (datasource routing handled by SmartRoutingDataSource)
         currentLibraryId = libraryId;
-        // OpenSearch indexes are global - no per-library initialization needed
-        logger.debug("Library switched to OpenSearch mode for library: {}", libraryId);
+        // Solr indexes are global - no per-library initialization needed
+        logger.debug("Library switched to Solr mode for library: {}", libraryId);

         logger.info("Successfully switched to library: {}", library.getName());

         // Perform complete reindex AFTER library switch is fully complete
         // This ensures database routing is properly established
         if (forceReindex || !libraryId.equals(previousLibraryId)) {
-            logger.debug("Starting post-switch OpenSearch reindex for library: {}", libraryId);
+            logger.debug("Starting post-switch Solr reindex for library: {}", libraryId);

             // Run reindex asynchronously to avoid blocking authentication response
             // and allow time for database routing to fully stabilize
@@ -171,7 +171,7 @@ public class LibraryService implements ApplicationContextAware {
                 try {
                     // Give routing time to stabilize
                     Thread.sleep(500);
-                    logger.debug("Starting async OpenSearch reindex for library: {}", finalLibraryId);
+                    logger.debug("Starting async Solr reindex for library: {}", finalLibraryId);

                     SearchServiceAdapter searchService = applicationContext.getBean(SearchServiceAdapter.class);
                     // Get all stories and authors for reindexing
@@ -184,12 +184,12 @@ public class LibraryService implements ApplicationContextAware {
                     searchService.bulkIndexStories(allStories);
                     searchService.bulkIndexAuthors(allAuthors);

-                    logger.info("Completed async OpenSearch reindexing for library: {} ({} stories, {} authors)",
+                    logger.info("Completed async Solr reindexing for library: {} ({} stories, {} authors)",
                             finalLibraryId, allStories.size(), allAuthors.size());
                 } catch (Exception e) {
-                    logger.warn("Failed to async reindex OpenSearch for library {}: {}", finalLibraryId, e.getMessage());
+                    logger.warn("Failed to async reindex Solr for library {}: {}", finalLibraryId, e.getMessage());
                 }
-            }, "OpenSearchReindex-" + libraryId).start();
+            }, "SolrReindex-" + libraryId).start();
         }
     }

@@ -525,8 +525,8 @@ public class LibraryService implements ApplicationContextAware {
         // 1. Create image directory structure
         initializeImageDirectories(library);

-        // 2. OpenSearch indexes are global and managed automatically
-        // No per-library initialization needed for OpenSearch
+        // 2. Solr indexes are global and managed automatically
+        // No per-library initialization needed for Solr

         logger.debug("Successfully initialized resources for library: {}", library.getName());

@@ -760,7 +760,7 @@ public class LibraryService implements ApplicationContextAware {

     private void closeCurrentResources() {
         // No need to close datasource - SmartRoutingDataSource handles this
-        // OpenSearch service is managed by Spring - no explicit cleanup needed
+        // Solr service is managed by Spring - no explicit cleanup needed
        // Don't clear currentLibraryId here - only when explicitly switching
     }

backend/src/main/java/com/storycove/service/OpenSearchHealthService.java (deleted)
@@ -1,133 +0,0 @@
-package com.storycove.service;
-
-import com.storycove.config.OpenSearchProperties;
-import org.opensearch.client.opensearch.OpenSearchClient;
-import org.opensearch.client.opensearch.cluster.HealthRequest;
-import org.opensearch.client.opensearch.cluster.HealthResponse;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-import org.springframework.beans.factory.annotation.Autowired;
-import org.springframework.boot.actuate.health.Health;
-import org.springframework.boot.actuate.health.HealthIndicator;
-import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
-import org.springframework.scheduling.annotation.Scheduled;
-import org.springframework.stereotype.Service;
-
-import java.time.LocalDateTime;
-import java.util.concurrent.atomic.AtomicReference;
-
-@Service
-@ConditionalOnProperty(name = "storycove.search.engine", havingValue = "opensearch")
-public class OpenSearchHealthService implements HealthIndicator {
-
-    private static final Logger logger = LoggerFactory.getLogger(OpenSearchHealthService.class);
-
-    private final OpenSearchClient openSearchClient;
-    private final OpenSearchProperties properties;
-
-    private final AtomicReference<Health> lastKnownHealth = new AtomicReference<>(Health.unknown().build());
-    private LocalDateTime lastCheckTime = LocalDateTime.now();
-
-    @Autowired
-    public OpenSearchHealthService(OpenSearchClient openSearchClient, OpenSearchProperties properties) {
-        this.openSearchClient = openSearchClient;
-        this.properties = properties;
-    }
-
-    @Override
-    public Health health() {
-        return lastKnownHealth.get();
-    }
-
-    @Scheduled(fixedDelayString = "#{@openSearchProperties.health.checkInterval}")
-    public void performHealthCheck() {
-        try {
-            HealthResponse clusterHealth = openSearchClient.cluster().health(
-                    HealthRequest.of(h -> h.timeout(t -> t.time("10s")))
-            );
-
-            Health.Builder healthBuilder = Health.up()
-                    .withDetail("cluster_name", clusterHealth.clusterName())
-                    .withDetail("status", clusterHealth.status().jsonValue())
-                    .withDetail("number_of_nodes", clusterHealth.numberOfNodes())
-                    .withDetail("number_of_data_nodes", clusterHealth.numberOfDataNodes())
-                    .withDetail("active_primary_shards", clusterHealth.activePrimaryShards())
-                    .withDetail("active_shards", clusterHealth.activeShards())
-                    .withDetail("relocating_shards", clusterHealth.relocatingShards())
-                    .withDetail("initializing_shards", clusterHealth.initializingShards())
-                    .withDetail("unassigned_shards", clusterHealth.unassignedShards())
-                    .withDetail("last_check", LocalDateTime.now());
-
-            // Check if cluster status is concerning
-            switch (clusterHealth.status()) {
-                case Red:
-                    healthBuilder = Health.down()
-                            .withDetail("reason", "Cluster status is RED - some primary shards are unassigned");
-                    break;
-                case Yellow:
-                    if (isProduction()) {
-                        healthBuilder = Health.down()
-                                .withDetail("reason", "Cluster status is YELLOW - some replica shards are unassigned (critical in production)");
-                    } else {
-                        // Yellow is acceptable in development (single node clusters)
-                        healthBuilder.withDetail("warning", "Cluster status is YELLOW - acceptable for development");
-                    }
-                    break;
-                case Green:
-                    // All good
-                    break;
-            }
-
-            lastKnownHealth.set(healthBuilder.build());
-            lastCheckTime = LocalDateTime.now();
-
-            if (properties.getHealth().isEnableMetrics()) {
-                logMetrics(clusterHealth);
-            }
-
-        } catch (Exception e) {
-            logger.error("OpenSearch health check failed", e);
-            Health unhealthyStatus = Health.down()
-                    .withDetail("error", e.getMessage())
-                    .withDetail("last_successful_check", lastCheckTime)
-                    .withDetail("current_time", LocalDateTime.now())
-                    .build();
-            lastKnownHealth.set(unhealthyStatus);
-        }
-    }
-
-    private void logMetrics(HealthResponse clusterHealth) {
-        logger.info("OpenSearch Cluster Metrics - Status: {}, Nodes: {}, Active Shards: {}, Unassigned: {}",
-                clusterHealth.status().jsonValue(),
-                clusterHealth.numberOfNodes(),
-                clusterHealth.activeShards(),
-                clusterHealth.unassignedShards());
-    }
-
-    private boolean isProduction() {
-        return "production".equalsIgnoreCase(properties.getProfile());
-    }
-
-    /**
-     * Manual health check for immediate status
-     */
-    public boolean isClusterHealthy() {
-        Health currentHealth = lastKnownHealth.get();
-        return currentHealth.getStatus() == org.springframework.boot.actuate.health.Status.UP;
-    }
-
-    /**
-     * Get detailed cluster information
-     */
-    public String getClusterInfo() {
-        try {
-            var info = openSearchClient.info();
-            return String.format("OpenSearch %s (Cluster: %s, Lucene: %s)",
-                    info.version().number(),
-                    info.clusterName(),
-                    info.version().luceneVersion());
-        } catch (Exception e) {
-            return "Unable to retrieve cluster information: " + e.getMessage();
-        }
-    }
-}
File diff suppressed because it is too large
@@ -16,7 +16,7 @@ import java.util.UUID;

 /**
  * Service adapter that provides a unified interface for search operations.
  *
- * This adapter directly delegates to OpenSearchService.
+ * This adapter directly delegates to SolrService.
  */
 @Service
 public class SearchServiceAdapter {
@@ -24,7 +24,7 @@ public class SearchServiceAdapter {
     private static final Logger logger = LoggerFactory.getLogger(SearchServiceAdapter.class);

     @Autowired
-    private OpenSearchService openSearchService;
+    private SolrService solrService;

     // ===============================
     // SEARCH OPERATIONS
@@ -46,11 +46,20 @@ public class SearchServiceAdapter {
                 String sourceDomain, String seriesFilter,
                 Integer minTagCount, Boolean popularOnly,
                 Boolean hiddenGemsOnly) {
-        return openSearchService.searchStories(query, tags, author, series, minWordCount, maxWordCount,
-                minRating, isRead, isFavorite, sortBy, sortOrder, page, size, facetBy,
-                createdAfter, createdBefore, lastReadAfter, lastReadBefore, unratedOnly, readingStatus,
-                hasReadingProgress, hasCoverImage, sourceDomain, seriesFilter, minTagCount, popularOnly,
-                hiddenGemsOnly);
+        logger.info("SearchServiceAdapter: delegating search to SolrService");
+        try {
+            SearchResultDto<StorySearchDto> result = solrService.searchStories(query, tags, author, series, minWordCount, maxWordCount,
+                    minRating, isRead, isFavorite, sortBy, sortOrder, page, size, facetBy,
+                    createdAfter, createdBefore, lastReadAfter, lastReadBefore, unratedOnly, readingStatus,
+                    hasReadingProgress, hasCoverImage, sourceDomain, seriesFilter, minTagCount, popularOnly,
+                    hiddenGemsOnly);
+            logger.info("SearchServiceAdapter: received result with {} stories and {} facets",
+                    result.getResults().size(), result.getFacets().size());
+            return result;
+        } catch (Exception e) {
+            logger.error("SearchServiceAdapter: error during search", e);
+            throw e;
+        }
     }

     /**
@@ -60,7 +69,7 @@ public class SearchServiceAdapter {
                 String series, Integer minWordCount, Integer maxWordCount,
                 Float minRating, Boolean isRead, Boolean isFavorite,
                 Long seed) {
-        return openSearchService.getRandomStories(count, tags, author, series, minWordCount, maxWordCount,
+        return solrService.getRandomStories(count, tags, author, series, minWordCount, maxWordCount,
                 minRating, isRead, isFavorite, seed);
     }

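The seed-parameterised `getRandomStories` delegation above maps naturally onto Solr's built-in random sorting: the default configset declares a `random_*` dynamic field backed by `solr.RandomSortField`, and sorting on `random_<seed>` yields a stable pseudo-random order per seed, which sidesteps the `_rand()` reliability problems this migration is meant to solve. A minimal sketch of deriving that sort clause, assuming the stories core keeps the stock `random_*` dynamic field (the field name is an assumption, not taken from the commit):

```java
// Sketch: deterministic random ordering in Solr via the random_* dynamic
// field (solr.RandomSortField). The same seed yields the same order for an
// unchanged index; a new seed reshuffles. Field naming here is illustrative.
public final class RandomSortParam {
    private RandomSortParam() {}

    // Builds a Solr sort clause, e.g. seed 42 -> "random_42 asc"
    public static String forSeed(long seed) {
        return "random_" + seed + " asc";
    }
}
```

In the service this string would be passed to `SolrQuery.setSort`/`addSort`, with the seed regenerated whenever the caller wants a fresh shuffle.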
@@ -69,7 +78,7 @@ public class SearchServiceAdapter {
      */
     public void recreateIndices() {
         try {
-            openSearchService.recreateIndices();
+            solrService.recreateIndices();
         } catch (Exception e) {
             logger.error("Failed to recreate search indices", e);
             throw new RuntimeException("Failed to recreate search indices", e);
@@ -93,21 +102,21 @@ public class SearchServiceAdapter {
      * Get random story ID with unified interface
      */
     public String getRandomStoryId(Long seed) {
-        return openSearchService.getRandomStoryId(seed);
+        return solrService.getRandomStoryId(seed);
     }

     /**
      * Search authors with unified interface
      */
     public List<AuthorSearchDto> searchAuthors(String query, int limit) {
-        return openSearchService.searchAuthors(query, limit);
+        return solrService.searchAuthors(query, limit);
     }

     /**
      * Get tag suggestions with unified interface
      */
     public List<String> getTagSuggestions(String query, int limit) {
-        return openSearchService.getTagSuggestions(query, limit);
+        return solrService.getTagSuggestions(query, limit);
     }

     // ===============================
@@ -115,88 +124,88 @@ public class SearchServiceAdapter {
     // ===============================

     /**
-     * Index a story in OpenSearch
+     * Index a story in Solr
      */
     public void indexStory(Story story) {
         try {
-            openSearchService.indexStory(story);
+            solrService.indexStory(story);
         } catch (Exception e) {
             logger.error("Failed to index story {}", story.getId(), e);
         }
     }

     /**
-     * Update a story in OpenSearch
+     * Update a story in Solr
      */
     public void updateStory(Story story) {
         try {
-            openSearchService.updateStory(story);
+            solrService.updateStory(story);
         } catch (Exception e) {
             logger.error("Failed to update story {}", story.getId(), e);
         }
     }

     /**
-     * Delete a story from OpenSearch
+     * Delete a story from Solr
      */
     public void deleteStory(UUID storyId) {
         try {
-            openSearchService.deleteStory(storyId);
+            solrService.deleteStory(storyId);
         } catch (Exception e) {
             logger.error("Failed to delete story {}", storyId, e);
         }
     }

     /**
-     * Index an author in OpenSearch
+     * Index an author in Solr
      */
     public void indexAuthor(Author author) {
         try {
-            openSearchService.indexAuthor(author);
+            solrService.indexAuthor(author);
         } catch (Exception e) {
             logger.error("Failed to index author {}", author.getId(), e);
         }
     }

     /**
-     * Update an author in OpenSearch
+     * Update an author in Solr
      */
     public void updateAuthor(Author author) {
         try {
-            openSearchService.updateAuthor(author);
+            solrService.updateAuthor(author);
         } catch (Exception e) {
             logger.error("Failed to update author {}", author.getId(), e);
         }
     }

     /**
-     * Delete an author from OpenSearch
+     * Delete an author from Solr
      */
     public void deleteAuthor(UUID authorId) {
         try {
-            openSearchService.deleteAuthor(authorId);
+            solrService.deleteAuthor(authorId);
         } catch (Exception e) {
             logger.error("Failed to delete author {}", authorId, e);
         }
     }

     /**
-     * Bulk index stories in OpenSearch
+     * Bulk index stories in Solr
      */
     public void bulkIndexStories(List<Story> stories) {
         try {
-            openSearchService.bulkIndexStories(stories);
+            solrService.bulkIndexStories(stories);
         } catch (Exception e) {
             logger.error("Failed to bulk index {} stories", stories.size(), e);
         }
     }

     /**
-     * Bulk index authors in OpenSearch
+     * Bulk index authors in Solr
      */
     public void bulkIndexAuthors(List<Author> authors) {
         try {
-            openSearchService.bulkIndexAuthors(authors);
+            solrService.bulkIndexAuthors(authors);
         } catch (Exception e) {
             logger.error("Failed to bulk index {} authors", authors.size(), e);
         }
@@ -210,14 +219,14 @@ public class SearchServiceAdapter {
      * Check if search service is available and healthy
      */
     public boolean isSearchServiceAvailable() {
-        return openSearchService.testConnection();
+        return solrService.testConnection();
     }

     /**
      * Get current search engine name
      */
     public String getCurrentSearchEngine() {
-        return "opensearch";
+        return "solr";
     }

     /**
@@ -228,10 +237,10 @@ public class SearchServiceAdapter {
     }

     /**
-     * Check if we can switch to OpenSearch
+     * Check if we can switch to Solr
      */
-    public boolean canSwitchToOpenSearch() {
-        return true; // Already using OpenSearch
+    public boolean canSwitchToSolr() {
+        return true; // Already using Solr
     }

     /**
@@ -246,10 +255,10 @@ public class SearchServiceAdapter {
      */
     public SearchStatus getSearchStatus() {
         return new SearchStatus(
-                "opensearch",
+                "solr",
                 false, // no dual-write
                 false, // no typesense
-                openSearchService.testConnection()
+                solrService.testConnection()
         );
     }

@@ -260,19 +269,19 @@ public class SearchServiceAdapter {
         private final String primaryEngine;
         private final boolean dualWrite;
         private final boolean typesenseAvailable;
-        private final boolean openSearchAvailable;
+        private final boolean solrAvailable;

         public SearchStatus(String primaryEngine, boolean dualWrite,
-                boolean typesenseAvailable, boolean openSearchAvailable) {
+                boolean typesenseAvailable, boolean solrAvailable) {
             this.primaryEngine = primaryEngine;
             this.dualWrite = dualWrite;
             this.typesenseAvailable = typesenseAvailable;
-            this.openSearchAvailable = openSearchAvailable;
+            this.solrAvailable = solrAvailable;
         }

         public String getPrimaryEngine() { return primaryEngine; }
         public boolean isDualWrite() { return dualWrite; }
         public boolean isTypesenseAvailable() { return typesenseAvailable; }
-        public boolean isOpenSearchAvailable() { return openSearchAvailable; }
+        public boolean isSolrAvailable() { return solrAvailable; }
     }
 }
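The original spec calls out "complex filter query building with escaping issues" as a Typesense pain point. On the Solr side the service code that follows builds `q`/`fq` clauses from user input, and the standard guard there is SolrJ's `ClientUtils.escapeQueryChars`. A dependency-free sketch mirroring that behavior, to show which characters matter when splicing user terms into a filter query (in the real service you would call `ClientUtils` directly rather than this stand-in):

```java
// Sketch of Lucene/Solr query-character escaping, mirroring SolrJ's
// ClientUtils.escapeQueryChars. Every character that is syntax in the
// Lucene query parser (and whitespace) gets a backslash prefix so that
// e.g. a tag named "sci-fi" is not parsed as a prohibited clause.
public final class SolrEscape {
    private SolrEscape() {}

    public static String escapeQueryChars(String s) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c == '\\' || c == '+' || c == '-' || c == '!' || c == '(' || c == ')'
                    || c == ':' || c == '^' || c == '[' || c == ']' || c == '"'
                    || c == '{' || c == '}' || c == '~' || c == '*' || c == '?'
                    || c == '|' || c == '&' || c == ';' || c == '/'
                    || Character.isWhitespace(c)) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }
}
```

So a filter like `tagNames:sci-fi` would be built as `"tagNames:" + SolrEscape.escapeQueryChars(tag)`, producing `tagNames:sci\-fi`.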
backend/src/main/java/com/storycove/service/SolrService.java (new file, 931 lines)
@@ -0,0 +1,931 @@
+package com.storycove.service;
+
+import com.storycove.config.SolrProperties;
+import com.storycove.dto.AuthorSearchDto;
+import com.storycove.dto.FacetCountDto;
+import com.storycove.dto.SearchResultDto;
+import com.storycove.dto.StorySearchDto;
+import com.storycove.entity.Author;
+import com.storycove.entity.Story;
+import org.apache.solr.client.solrj.SolrClient;
+import org.apache.solr.client.solrj.SolrQuery;
+import org.apache.solr.client.solrj.SolrServerException;
+import org.apache.solr.client.solrj.response.QueryResponse;
+import org.apache.solr.client.solrj.response.UpdateResponse;
+import org.apache.solr.common.SolrDocument;
+import org.apache.solr.common.SolrDocumentList;
+import org.apache.solr.common.SolrInputDocument;
+import org.apache.solr.common.params.ModifiableSolrParams;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.beans.factory.annotation.Autowired;
+import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
+import org.springframework.context.annotation.Lazy;
+import org.springframework.stereotype.Service;
+
+import jakarta.annotation.PostConstruct;
+import java.io.IOException;
+import java.time.LocalDateTime;
+import java.time.format.DateTimeFormatter;
+import java.util.*;
+import java.util.stream.Collectors;
+
+@Service
+@ConditionalOnProperty(
+        value = "storycove.search.engine",
+        havingValue = "solr",
+        matchIfMissing = false
+)
+public class SolrService {
+
+    private static final Logger logger = LoggerFactory.getLogger(SolrService.class);
+
+    @Autowired(required = false)
+    private SolrClient solrClient;
+
+    @Autowired
+    private SolrProperties properties;
+
+    @Autowired
+    @Lazy
+    private ReadingTimeService readingTimeService;
+
+    @PostConstruct
+    public void initializeCores() {
+        if (!isAvailable()) {
+            logger.debug("Solr client not available - skipping core initialization");
+            return;
+        }
+
+        try {
+            logger.debug("Testing Solr cores availability...");
+            testCoreAvailability(properties.getCores().getStories());
+            testCoreAvailability(properties.getCores().getAuthors());
+            logger.debug("Solr cores are available");
+        } catch (Exception e) {
+            logger.error("Failed to test Solr cores availability", e);
+        }
+    }
+
+    // ===============================
+    // CORE MANAGEMENT
+    // ===============================
+
+    private void testCoreAvailability(String coreName) throws IOException, SolrServerException {
+        SolrQuery query = new SolrQuery("*:*");
+        query.setRows(0);
+        QueryResponse response = solrClient.query(coreName, query);
+        logger.debug("Core {} is available - found {} documents", coreName, response.getResults().getNumFound());
+    }
+
+    // ===============================
+    // STORY INDEXING
+    // ===============================
+
+    public void indexStory(Story story) throws IOException {
+        if (!isAvailable()) {
+            logger.debug("Solr not available - skipping story indexing");
+            return;
+        }
+
+        try {
+            logger.debug("Indexing story: {} ({})", story.getTitle(), story.getId());
+            SolrInputDocument doc = createStoryDocument(story);
+
+            UpdateResponse response = solrClient.add(properties.getCores().getStories(), doc,
+                    properties.getCommit().getCommitWithin());
+
+            if (response.getStatus() == 0) {
+                logger.debug("Successfully indexed story: {}", story.getId());
+            } else {
+                logger.warn("Story indexing returned non-zero status: {}", response.getStatus());
+            }
+        } catch (SolrServerException e) {
+            logger.error("Failed to index story: {}", story.getId(), e);
+            throw new IOException("Failed to index story", e);
+        }
+    }
+
+    public void updateStory(Story story) throws IOException {
+        // For Solr, update is the same as index (upsert behavior)
+        indexStory(story);
+    }
+
+    public void deleteStory(UUID storyId) throws IOException {
+        if (!isAvailable()) {
+            logger.debug("Solr not available - skipping story deletion");
+            return;
+        }
+
+        try {
+            logger.debug("Deleting story from index: {}", storyId);
+            UpdateResponse response = solrClient.deleteById(properties.getCores().getStories(),
+                    storyId.toString(), properties.getCommit().getCommitWithin());
+
+            if (response.getStatus() == 0) {
+                logger.debug("Successfully deleted story: {}", storyId);
+            } else {
+                logger.warn("Story deletion returned non-zero status: {}", response.getStatus());
+            }
+        } catch (SolrServerException e) {
+            logger.error("Failed to delete story: {}", storyId, e);
+            throw new IOException("Failed to delete story", e);
+        }
+    }
+
+    // ===============================
+    // AUTHOR INDEXING
+    // ===============================
+
+    public void indexAuthor(Author author) throws IOException {
+        if (!isAvailable()) {
+            logger.debug("Solr not available - skipping author indexing");
+            return;
+        }
+
+        try {
+            logger.debug("Indexing author: {} ({})", author.getName(), author.getId());
+            SolrInputDocument doc = createAuthorDocument(author);
+
+            UpdateResponse response = solrClient.add(properties.getCores().getAuthors(), doc,
+                    properties.getCommit().getCommitWithin());
+
+            if (response.getStatus() == 0) {
+                logger.debug("Successfully indexed author: {}", author.getId());
+            } else {
+                logger.warn("Author indexing returned non-zero status: {}", response.getStatus());
+            }
+        } catch (SolrServerException e) {
+            logger.error("Failed to index author: {}", author.getId(), e);
+            throw new IOException("Failed to index author", e);
+        }
+    }
+
+    public void updateAuthor(Author author) throws IOException {
+        // For Solr, update is the same as index (upsert behavior)
+        indexAuthor(author);
+    }
+
+    public void deleteAuthor(UUID authorId) throws IOException {
+        if (!isAvailable()) {
+            logger.debug("Solr not available - skipping author deletion");
+            return;
+        }
+
+        try {
+            logger.debug("Deleting author from index: {}", authorId);
+            UpdateResponse response = solrClient.deleteById(properties.getCores().getAuthors(),
+                    authorId.toString(), properties.getCommit().getCommitWithin());
+
+            if (response.getStatus() == 0) {
+                logger.debug("Successfully deleted author: {}", authorId);
+            } else {
+                logger.warn("Author deletion returned non-zero status: {}", response.getStatus());
+            }
+        } catch (SolrServerException e) {
+            logger.error("Failed to delete author: {}", authorId, e);
+            throw new IOException("Failed to delete author", e);
+        }
+    }
+
+    // ===============================
+    // BULK OPERATIONS
+    // ===============================
+
+    public void bulkIndexStories(List<Story> stories) throws IOException {
+        if (!isAvailable() || stories.isEmpty()) {
+            logger.debug("Solr not available or empty stories list - skipping bulk indexing");
+            return;
+        }
+
+        try {
+            logger.debug("Bulk indexing {} stories", stories.size());
+            List<SolrInputDocument> docs = stories.stream()
+                    .map(this::createStoryDocument)
+                    .collect(Collectors.toList());
+
+            UpdateResponse response = solrClient.add(properties.getCores().getStories(), docs,
+                    properties.getCommit().getCommitWithin());
+
+            if (response.getStatus() == 0) {
+                logger.debug("Successfully bulk indexed {} stories", stories.size());
+            } else {
+                logger.warn("Bulk story indexing returned non-zero status: {}", response.getStatus());
+            }
+        } catch (SolrServerException e) {
+            logger.error("Failed to bulk index stories", e);
+            throw new IOException("Failed to bulk index stories", e);
+        }
+    }
+
+    public void bulkIndexAuthors(List<Author> authors) throws IOException {
+        if (!isAvailable() || authors.isEmpty()) {
+            logger.debug("Solr not available or empty authors list - skipping bulk indexing");
+            return;
+        }
+
+        try {
+            logger.debug("Bulk indexing {} authors", authors.size());
+            List<SolrInputDocument> docs = authors.stream()
+                    .map(this::createAuthorDocument)
+                    .collect(Collectors.toList());
+
+            UpdateResponse response = solrClient.add(properties.getCores().getAuthors(), docs,
+                    properties.getCommit().getCommitWithin());
+
+            if (response.getStatus() == 0) {
+                logger.debug("Successfully bulk indexed {} authors", authors.size());
+            } else {
+                logger.warn("Bulk author indexing returned non-zero status: {}", response.getStatus());
+            }
+        } catch (SolrServerException e) {
+            logger.error("Failed to bulk index authors", e);
+            throw new IOException("Failed to bulk index authors", e);
+        }
+    }

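The document-creation methods below pass `LocalDateTime` values through a `formatDateTime` helper whose body falls outside this diff chunk. Solr date fields expect ISO-8601 instants in UTC with a trailing `Z` (e.g. `2024-01-02T03:04:05Z`), so a helper of this shape is typical; this stand-in is illustrative and assumes the entity timestamps are already in UTC:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Sketch: Solr date fields require ISO-8601 UTC instants with a trailing
// 'Z'. The commit's formatDateTime is not shown in this chunk; this is a
// hypothetical equivalent, assuming timestamps are stored as UTC.
public final class SolrDates {
    private SolrDates() {}

    private static final DateTimeFormatter SOLR_FORMAT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss'Z'");

    public static String format(LocalDateTime dt) {
        return dt.format(SOLR_FORMAT);
    }
}
```

Sending a date without the `Z` suffix (or in local time) makes Solr reject the document or silently skew range filters, so the format is worth pinning down in one helper.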
// ===============================
|
||||
// DOCUMENT CREATION
|
||||
// ===============================
|
||||
|
||||
private SolrInputDocument createStoryDocument(Story story) {
|
||||
SolrInputDocument doc = new SolrInputDocument();
|
||||
|
||||
doc.addField("id", story.getId().toString());
|
||||
doc.addField("title", story.getTitle());
|
||||
doc.addField("description", story.getDescription());
|
||||
doc.addField("sourceUrl", story.getSourceUrl());
|
||||
doc.addField("coverPath", story.getCoverPath());
|
||||
doc.addField("wordCount", story.getWordCount());
|
||||
doc.addField("rating", story.getRating());
|
||||
doc.addField("volume", story.getVolume());
|
||||
doc.addField("isRead", story.getIsRead());
|
||||
doc.addField("readingPosition", story.getReadingPosition());
|
||||
|
||||
if (story.getLastReadAt() != null) {
|
||||
doc.addField("lastReadAt", formatDateTime(story.getLastReadAt()));
|
||||
}
|
||||
|
||||
if (story.getAuthor() != null) {
|
||||
doc.addField("authorId", story.getAuthor().getId().toString());
|
||||
doc.addField("authorName", story.getAuthor().getName());
|
||||
}
|
||||
|
||||
if (story.getSeries() != null) {
|
||||
doc.addField("seriesId", story.getSeries().getId().toString());
|
||||
doc.addField("seriesName", story.getSeries().getName());
|
||||
}
|
||||
|
||||
if (story.getTags() != null && !story.getTags().isEmpty()) {
|
||||
List<String> tagNames = story.getTags().stream()
|
||||
.map(tag -> tag.getName())
|
||||
.collect(Collectors.toList());
|
||||
doc.addField("tagNames", tagNames);
|
||||
}
|
||||
|
||||
doc.addField("createdAt", formatDateTime(story.getCreatedAt()));
|
||||
doc.addField("updatedAt", formatDateTime(story.getUpdatedAt()));
|
||||
doc.addField("dateAdded", formatDateTime(story.getCreatedAt()));
|
||||
|
||||
return doc;
|
||||
}

    private SolrInputDocument createAuthorDocument(Author author) {
        SolrInputDocument doc = new SolrInputDocument();

        doc.addField("id", author.getId().toString());
        doc.addField("name", author.getName());
        doc.addField("notes", author.getNotes());
        doc.addField("authorRating", author.getAuthorRating());
        doc.addField("avatarImagePath", author.getAvatarImagePath());

        if (author.getUrls() != null && !author.getUrls().isEmpty()) {
            doc.addField("urls", author.getUrls());
        }

        // Calculate derived fields
        if (author.getStories() != null) {
            doc.addField("storyCount", author.getStories().size());

            OptionalDouble avgRating = author.getStories().stream()
                    .filter(story -> story.getRating() != null && story.getRating() > 0)
                    .mapToInt(Story::getRating)
                    .average();

            if (avgRating.isPresent()) {
                doc.addField("averageStoryRating", avgRating.getAsDouble());
            }
        }

        doc.addField("createdAt", formatDateTime(author.getCreatedAt()));
        doc.addField("updatedAt", formatDateTime(author.getUpdatedAt()));

        return doc;
    }

    private String formatDateTime(LocalDateTime dateTime) {
        if (dateTime == null) return null;
        return dateTime.format(DateTimeFormatter.ISO_LOCAL_DATE_TIME) + "Z";
    }

    // ===============================
    // UTILITY METHODS
    // ===============================

    public boolean isAvailable() {
        return solrClient != null;
    }

    public boolean testConnection() {
        if (!isAvailable()) {
            return false;
        }

        try {
            // Test connection by pinging a core
            testCoreAvailability(properties.getCores().getStories());
            return true;
        } catch (Exception e) {
            logger.debug("Solr connection test failed", e);
            return false;
        }
    }

    // ===============================
    // SEARCH OPERATIONS
    // ===============================

    public SearchResultDto<StorySearchDto> searchStories(String query, List<String> tags, String author,
            String series, Integer minWordCount, Integer maxWordCount,
            Float minRating, Boolean isRead, Boolean isFavorite,
            String sortBy, String sortOrder, int page, int size,
            List<String> facetBy,
            // Advanced filters
            String createdAfter, String createdBefore,
            String lastReadAfter, String lastReadBefore,
            Boolean unratedOnly, String readingStatus,
            Boolean hasReadingProgress, Boolean hasCoverImage,
            String sourceDomain, String seriesFilter,
            Integer minTagCount, Boolean popularOnly,
            Boolean hiddenGemsOnly) {
        if (!isAvailable()) {
            logger.debug("Solr not available - returning empty search results");
            return new SearchResultDto<StorySearchDto>(Collections.emptyList(), 0, 0, Collections.emptyList());
        }

        try {
            SolrQuery solrQuery = new SolrQuery();

            // Set query
            if (query == null || query.trim().isEmpty()) {
                solrQuery.setQuery("*:*");
            } else {
                // Use the edismax query parser for better relevance
                solrQuery.setQuery(query);
                solrQuery.set("defType", "edismax");
                solrQuery.set("qf", "title^3.0 description^2.0 authorName^2.0 seriesName^1.5 tagNames^1.0");
                solrQuery.set("mm", "2<-1 5<-2 6<90%"); // Minimum should match
            }

            // Apply filters
            applySearchFilters(solrQuery, tags, author, series, minWordCount, maxWordCount,
                    minRating, isRead, isFavorite, createdAfter, createdBefore, lastReadAfter,
                    lastReadBefore, unratedOnly, readingStatus, hasReadingProgress, hasCoverImage,
                    sourceDomain, seriesFilter, minTagCount, popularOnly, hiddenGemsOnly);

            // Pagination
            solrQuery.setStart(page * size);
            solrQuery.setRows(size);

            // Sorting
            if (sortBy != null && !sortBy.isEmpty()) {
                SolrQuery.ORDER order = "desc".equalsIgnoreCase(sortOrder) ?
                        SolrQuery.ORDER.desc : SolrQuery.ORDER.asc;
                solrQuery.setSort(sortBy, order);
            } else {
                // Default relevance sorting
                solrQuery.setSort("score", SolrQuery.ORDER.desc);
            }

            // Enable highlighting
            if (properties.getQuery().isHighlight()) {
                solrQuery.setHighlight(true);
                solrQuery.addHighlightField("title");
                solrQuery.addHighlightField("description");
                solrQuery.setHighlightSimplePre("<em>");
                solrQuery.setHighlightSimplePost("</em>");
                solrQuery.setHighlightFragsize(150);
            }

            // Enable faceting
            if (properties.getQuery().isFacets()) {
                solrQuery.setFacet(true);
                // Use dedicated facet fields for better performance
                solrQuery.addFacetField("authorName_facet");
                solrQuery.addFacetField("tagNames_facet");
                solrQuery.addFacetField("seriesName_facet");
                solrQuery.addFacetField("rating");
                solrQuery.addFacetField("isRead");
                solrQuery.setFacetMinCount(1);
                solrQuery.setFacetSort("count");
                solrQuery.setFacetLimit(100); // Limit facet results for performance
            }

            // Debug: log the query being sent to Solr
            logger.info("SolrService: Executing Solr query: {}", solrQuery);

            QueryResponse response = solrClient.query(properties.getCores().getStories(), solrQuery);
            logger.info("SolrService: Query executed successfully, found {} results", response.getResults().getNumFound());

            return buildStorySearchResult(response);

        } catch (Exception e) {
            logger.error("Story search failed for query: {}", query, e);
            return new SearchResultDto<StorySearchDto>(Collections.emptyList(), 0, 0, Collections.emptyList());
        }
    }

    public List<AuthorSearchDto> searchAuthors(String query, int limit) {
        if (!isAvailable()) {
            logger.debug("Solr not available - returning empty author search results");
            return Collections.emptyList();
        }

        try {
            SolrQuery solrQuery = new SolrQuery();

            // Set query
            if (query == null || query.trim().isEmpty()) {
                solrQuery.setQuery("*:*");
            } else {
                solrQuery.setQuery(query);
                solrQuery.set("defType", "edismax");
                solrQuery.set("qf", "name^3.0 notes^1.0 urls^0.5");
            }

            solrQuery.setRows(limit);
            solrQuery.setSort("storyCount", SolrQuery.ORDER.desc);

            QueryResponse response = solrClient.query(properties.getCores().getAuthors(), solrQuery);

            return buildAuthorSearchResults(response);

        } catch (Exception e) {
            logger.error("Author search failed for query: {}", query, e);
            return Collections.emptyList();
        }
    }

    public List<String> getTagSuggestions(String query, int limit) {
        if (!isAvailable()) {
            return Collections.emptyList();
        }

        try {
            SolrQuery solrQuery = new SolrQuery();
            // Escape user input before embedding it in the wildcard query
            solrQuery.setQuery("tagNames:*" + escapeQueryChars(query) + "*");
            solrQuery.setRows(0);
            solrQuery.setFacet(true);
            solrQuery.addFacetField("tagNames_facet");
            solrQuery.setFacetMinCount(1);
            solrQuery.setFacetLimit(limit);

            QueryResponse response = solrClient.query(properties.getCores().getStories(), solrQuery);

            return response.getFacetField("tagNames_facet").getValues().stream()
                    .map(facet -> facet.getName())
                    .filter(name -> name.toLowerCase().contains(query.toLowerCase()))
                    .collect(Collectors.toList());

        } catch (Exception e) {
            logger.error("Tag suggestions failed for query: {}", query, e);
            return Collections.emptyList();
        }
    }

    public String getRandomStoryId(Long seed) {
        if (!isAvailable()) {
            return null;
        }

        try {
            SolrQuery solrQuery = new SolrQuery("*:*");
            solrQuery.setRows(1);
            // Sorting on a random_* dynamic field (RandomSortField) gives a
            // deterministic pseudo-random ordering per seed value
            if (seed != null) {
                solrQuery.setSort("random_" + seed, SolrQuery.ORDER.asc);
            } else {
                solrQuery.setSort("random_" + System.currentTimeMillis(), SolrQuery.ORDER.asc);
            }

            QueryResponse response = solrClient.query(properties.getCores().getStories(), solrQuery);

            if (!response.getResults().isEmpty()) {
                return (String) response.getResults().get(0).getFieldValue("id");
            }

        } catch (Exception e) {
            logger.error("Random story ID retrieval failed", e);
        }

        return null;
    }

    // ===============================
    // RESULT BUILDING
    // ===============================

    private SearchResultDto<StorySearchDto> buildStorySearchResult(QueryResponse response) {
        SolrDocumentList results = response.getResults();
        List<StorySearchDto> stories = new ArrayList<>();

        for (SolrDocument doc : results) {
            StorySearchDto story = convertToStorySearchDto(doc);

            // Add highlights
            if (response.getHighlighting() != null) {
                String id = (String) doc.getFieldValue("id");
                Map<String, List<String>> highlights = response.getHighlighting().get(id);
                if (highlights != null) {
                    List<String> allHighlights = highlights.values().stream()
                            .flatMap(List::stream)
                            .collect(Collectors.toList());
                    story.setHighlights(allHighlights);
                }
            }

            stories.add(story);
        }

        // Build facets organized by field name
        Map<String, List<FacetCountDto>> facetsMap = new HashMap<>();
        if (response.getFacetFields() != null) {
            response.getFacetFields().forEach(facetField -> {
                String fieldName = facetField.getName();
                List<FacetCountDto> fieldFacets = new ArrayList<>();

                facetField.getValues().forEach(count -> {
                    fieldFacets.add(new FacetCountDto(count.getName(), (int) count.getCount()));
                });

                facetsMap.put(fieldName, fieldFacets);
            });
        }

        return new SearchResultDto<StorySearchDto>(stories, (int) results.getNumFound(),
                0, stories.size(), "", 0, facetsMap);
    }

    private List<AuthorSearchDto> buildAuthorSearchResults(QueryResponse response) {
        return response.getResults().stream()
                .map(this::convertToAuthorSearchDto)
                .collect(Collectors.toList());
    }

    private StorySearchDto convertToStorySearchDto(SolrDocument doc) {
        StorySearchDto story = new StorySearchDto();

        story.setId(UUID.fromString((String) doc.getFieldValue("id")));
        story.setTitle((String) doc.getFieldValue("title"));
        story.setDescription((String) doc.getFieldValue("description"));
        story.setSourceUrl((String) doc.getFieldValue("sourceUrl"));
        story.setCoverPath((String) doc.getFieldValue("coverPath"));
        story.setWordCount((Integer) doc.getFieldValue("wordCount"));
        story.setRating((Integer) doc.getFieldValue("rating"));
        story.setVolume((Integer) doc.getFieldValue("volume"));
        story.setIsRead((Boolean) doc.getFieldValue("isRead"));
        story.setReadingPosition((Integer) doc.getFieldValue("readingPosition"));

        // Handle dates
        story.setLastReadAt(parseDateTimeFromSolr(doc.getFieldValue("lastReadAt")));
        story.setCreatedAt(parseDateTimeFromSolr(doc.getFieldValue("createdAt")));
        story.setUpdatedAt(parseDateTimeFromSolr(doc.getFieldValue("updatedAt")));
        story.setDateAdded(parseDateTimeFromSolr(doc.getFieldValue("dateAdded")));

        // Handle author
        String authorIdStr = (String) doc.getFieldValue("authorId");
        if (authorIdStr != null) {
            story.setAuthorId(UUID.fromString(authorIdStr));
        }
        story.setAuthorName((String) doc.getFieldValue("authorName"));

        // Handle series
        String seriesIdStr = (String) doc.getFieldValue("seriesId");
        if (seriesIdStr != null) {
            story.setSeriesId(UUID.fromString(seriesIdStr));
        }
        story.setSeriesName((String) doc.getFieldValue("seriesName"));

        // Handle tags
        Collection<Object> tagValues = doc.getFieldValues("tagNames");
        if (tagValues != null) {
            List<String> tagNames = tagValues.stream()
                    .map(Object::toString)
                    .collect(Collectors.toList());
            story.setTagNames(tagNames);
        }

        return story;
    }

    private AuthorSearchDto convertToAuthorSearchDto(SolrDocument doc) {
        AuthorSearchDto author = new AuthorSearchDto();

        author.setId(UUID.fromString((String) doc.getFieldValue("id")));
        author.setName((String) doc.getFieldValue("name"));
        author.setNotes((String) doc.getFieldValue("notes"));
        author.setAuthorRating((Integer) doc.getFieldValue("authorRating"));
        author.setAvatarImagePath((String) doc.getFieldValue("avatarImagePath"));
        author.setStoryCount((Integer) doc.getFieldValue("storyCount"));

        Double avgRating = (Double) doc.getFieldValue("averageStoryRating");
        if (avgRating != null) {
            author.setAverageStoryRating(avgRating);
        }

        // Handle URLs
        Collection<Object> urlValues = doc.getFieldValues("urls");
        if (urlValues != null) {
            List<String> urls = urlValues.stream()
                    .map(Object::toString)
                    .collect(Collectors.toList());
            author.setUrls(urls);
        }

        // Handle dates
        author.setCreatedAt(parseDateTimeFromSolr(doc.getFieldValue("createdAt")));
        author.setUpdatedAt(parseDateTimeFromSolr(doc.getFieldValue("updatedAt")));

        return author;
    }

    private LocalDateTime parseDateTime(String dateStr) {
        if (dateStr == null || dateStr.isEmpty()) {
            return null;
        }
        try {
            // Remove 'Z' suffix if present and parse
            String cleanDate = dateStr.endsWith("Z") ? dateStr.substring(0, dateStr.length() - 1) : dateStr;
            return LocalDateTime.parse(cleanDate, DateTimeFormatter.ISO_LOCAL_DATE_TIME);
        } catch (Exception e) {
            logger.warn("Failed to parse date: {}", dateStr, e);
            return null;
        }
    }

    private LocalDateTime parseDateTimeFromSolr(Object dateValue) {
        if (dateValue == null) {
            return null;
        }
        if (dateValue instanceof Date) {
            // Convert java.util.Date to LocalDateTime
            return ((Date) dateValue).toInstant()
                    .atZone(java.time.ZoneId.systemDefault())
                    .toLocalDateTime();
        } else if (dateValue instanceof String) {
            return parseDateTime((String) dateValue);
        } else {
            logger.warn("Unexpected date type: {}", dateValue.getClass());
            return null;
        }
    }

    public List<StorySearchDto> getRandomStories(int count, List<String> tags, String author,
            String series, Integer minWordCount, Integer maxWordCount,
            Float minRating, Boolean isRead, Boolean isFavorite,
            Long seed) {
        if (!isAvailable()) {
            return Collections.emptyList();
        }

        try {
            SolrQuery solrQuery = new SolrQuery("*:*");

            // Apply filters
            applySearchFilters(solrQuery, tags, author, series, minWordCount, maxWordCount,
                    minRating, isRead, isFavorite, null, null, null, null, null, null,
                    null, null, null, null, null, null, null);

            solrQuery.setRows(count);

            // Use random sorting
            if (seed != null) {
                solrQuery.setSort("random_" + seed, SolrQuery.ORDER.asc);
            } else {
                solrQuery.setSort("random_" + System.currentTimeMillis(), SolrQuery.ORDER.asc);
            }

            QueryResponse response = solrClient.query(properties.getCores().getStories(), solrQuery);

            return response.getResults().stream()
                    .map(this::convertToStorySearchDto)
                    .collect(Collectors.toList());

        } catch (Exception e) {
            logger.error("Random stories retrieval failed", e);
            return Collections.emptyList();
        }
    }

    // ===============================
    // FILTER APPLICATION
    // ===============================

    private void applySearchFilters(SolrQuery solrQuery, List<String> tags, String author,
            String series, Integer minWordCount, Integer maxWordCount,
            Float minRating, Boolean isRead, Boolean isFavorite,
            String createdAfter, String createdBefore,
            String lastReadAfter, String lastReadBefore,
            Boolean unratedOnly, String readingStatus,
            Boolean hasReadingProgress, Boolean hasCoverImage,
            String sourceDomain, String seriesFilter,
            Integer minTagCount, Boolean popularOnly,
            Boolean hiddenGemsOnly) {
        // Note: minTagCount, popularOnly and hiddenGemsOnly are accepted for API
        // compatibility but are not yet translated into Solr filter queries.

        List<String> filters = new ArrayList<>();

        // Tag filters - use facet field for exact matching
        if (tags != null && !tags.isEmpty()) {
            String tagFilter = tags.stream()
                    .map(tag -> "tagNames_facet:\"" + escapeQueryChars(tag) + "\"")
                    .collect(Collectors.joining(" AND "));
            filters.add("(" + tagFilter + ")");
        }

        // Author filter - use facet field for exact matching
        if (author != null && !author.trim().isEmpty()) {
            filters.add("authorName_facet:\"" + escapeQueryChars(author.trim()) + "\"");
        }

        // Series filter - use facet field for exact matching
        if (series != null && !series.trim().isEmpty()) {
            filters.add("seriesName_facet:\"" + escapeQueryChars(series.trim()) + "\"");
        }

        // Word count filters
        if (minWordCount != null && maxWordCount != null) {
            filters.add("wordCount:[" + minWordCount + " TO " + maxWordCount + "]");
        } else if (minWordCount != null) {
            filters.add("wordCount:[" + minWordCount + " TO *]");
        } else if (maxWordCount != null) {
            filters.add("wordCount:[* TO " + maxWordCount + "]");
        }

        // Rating filter
        if (minRating != null) {
            filters.add("rating:[" + minRating.intValue() + " TO *]");
        }

        // Read status filter
        if (isRead != null) {
            filters.add("isRead:" + isRead);
        }

        // Date filters - convert to ISO format for Solr
        if (createdAfter != null) {
            filters.add("createdAt:[" + formatDateForSolr(createdAfter) + " TO *]");
        }
        if (createdBefore != null) {
            filters.add("createdAt:[* TO " + formatDateForSolr(createdBefore) + "]");
        }

        // Last read date filters
        if (lastReadAfter != null) {
            filters.add("lastReadAt:[" + formatDateForSolr(lastReadAfter) + " TO *]");
        }
        if (lastReadBefore != null) {
            filters.add("lastReadAt:[* TO " + formatDateForSolr(lastReadBefore) + "]");
        }

        // Unrated filter - anchor the negation with *:* so the pure-negative
        // clause is valid inside the OR group
        if (unratedOnly != null && unratedOnly) {
            filters.add("((*:* -rating:[1 TO *]) OR rating:0)");
        }

        // Cover image filter
        if (hasCoverImage != null) {
            if (hasCoverImage) {
                filters.add("coverPath:[\"\" TO *]"); // Has non-empty value
            } else {
                filters.add("-coverPath:[\"\" TO *]"); // No value or empty
            }
        }

        // Reading status filter
        if (readingStatus != null && !readingStatus.equals("all")) {
            switch (readingStatus) {
                case "unread":
                    // Unread: isRead is false AND readingPosition is 0 (or null)
                    filters.add("isRead:false AND (readingPosition:0 OR (*:* -readingPosition:[1 TO *]))");
                    break;
                case "started":
                    // Started: has reading progress but not finished
                    filters.add("readingPosition:[1 TO *] AND isRead:false");
                    break;
                case "completed":
                    // Completed: isRead is true
                    filters.add("isRead:true");
                    break;
            }
        }

        // Reading progress filter
        if (hasReadingProgress != null) {
            if (hasReadingProgress) {
                filters.add("readingPosition:[1 TO *]"); // Has reading progress
            } else {
                filters.add("(readingPosition:0 OR (*:* -readingPosition:[1 TO *]))"); // No reading progress
            }
        }

        // Series filter
        if (seriesFilter != null && !seriesFilter.equals("all")) {
            switch (seriesFilter) {
                case "standalone":
                    // Standalone: no series (seriesId is null or empty)
                    filters.add("(*:* -seriesId:[\"\" TO *])");
                    break;
                case "series":
                    // Part of a series: has seriesId
                    filters.add("seriesId:[\"\" TO *]");
                    break;
                case "firstInSeries":
                    // First in series: has seriesId and volume is 1
                    filters.add("seriesId:[\"\" TO *] AND volume:1");
                    break;
                case "lastInSeries":
                    // Determining the last volume per series would require a join;
                    // for now just filter by having a series (can be enhanced later)
                    filters.add("seriesId:[\"\" TO *]");
                    break;
            }
        }

        // Source domain filter
        if (sourceDomain != null && !sourceDomain.trim().isEmpty()) {
            // Match the domain as a substring of the sourceUrl field
            filters.add("sourceUrl:*" + escapeQueryChars(sourceDomain.trim()) + "*");
        }

        // Apply all filters
        for (String filter : filters) {
            solrQuery.addFilterQuery(filter);
        }
    }

    public void recreateCores() throws IOException {
        logger.warn("Solr core recreation not supported through API - cores must be managed via Solr admin");
        // Note: Core recreation in Solr requires admin API calls that are typically done
        // through the Solr admin interface or by restarting with fresh cores
    }

    public void recreateIndices() throws IOException {
        recreateCores();
    }

    /**
     * Escape special characters in Solr query strings
     */
    private String escapeQueryChars(String s) {
        if (s == null) return null;
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            // These characters are part of the query syntax and must be escaped
            if (c == '\\' || c == '+' || c == '-' || c == '!' || c == '(' || c == ')' || c == ':'
                    || c == '^' || c == '[' || c == ']' || c == '\"' || c == '{' || c == '}' || c == '~'
                    || c == '*' || c == '?' || c == '|' || c == '&' || c == ';' || c == '/'
                    || Character.isWhitespace(c)) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }

    /**
     * Format date string for Solr queries
     */
    private String formatDateForSolr(String dateStr) {
        if (dateStr == null || dateStr.isEmpty()) {
            return dateStr;
        }

        try {
            // If it's already in ISO format, return as-is
            if (dateStr.contains("T") || dateStr.endsWith("Z")) {
                return dateStr;
            }

            // Convert a date string like "2025-08-23" to ISO format "2025-08-23T00:00:00Z"
            if (dateStr.matches("\\d{4}-\\d{2}-\\d{2}")) {
                return dateStr + "T00:00:00Z";
            }

            return dateStr;
        } catch (Exception e) {
            logger.warn("Failed to format date for Solr: {}", dateStr, e);
            return dateStr;
        }
    }
}
@@ -39,54 +39,46 @@ storycove:
   auth:
     password: ${APP_PASSWORD} # REQUIRED: No default password for security
   search:
-    engine: opensearch # OpenSearch is the only search engine
-    opensearch:
+    engine: solr # Apache Solr search engine
+    solr:
       # Connection settings
-      host: ${OPENSEARCH_HOST:localhost}
-      port: ${OPENSEARCH_PORT:9200}
-      scheme: ${OPENSEARCH_SCHEME:http}
-      username: ${OPENSEARCH_USERNAME:}
-      password: ${OPENSEARCH_PASSWORD:} # Empty when security is disabled
+      url: ${SOLR_URL:http://solr:8983/solr}
+      username: ${SOLR_USERNAME:}
+      password: ${SOLR_PASSWORD:}

-      # Environment-specific configuration
-      profile: ${SPRING_PROFILES_ACTIVE:development} # development, staging, production
+      # Core configuration
+      cores:
+        stories: ${SOLR_STORIES_CORE:storycove_stories}
+        authors: ${SOLR_AUTHORS_CORE:storycove_authors}

-      # Security settings
-      security:
-        ssl-verification: ${OPENSEARCH_SSL_VERIFICATION:false}
-        trust-all-certificates: ${OPENSEARCH_TRUST_ALL_CERTS:true}
-        keystore-path: ${OPENSEARCH_KEYSTORE_PATH:}
-        keystore-password: ${OPENSEARCH_KEYSTORE_PASSWORD:}
-        truststore-path: ${OPENSEARCH_TRUSTSTORE_PATH:}
-        truststore-password: ${OPENSEARCH_TRUSTSTORE_PASSWORD:}

-      # Connection pool settings
+      # Connection settings
       connection:
-        timeout: ${OPENSEARCH_CONNECTION_TIMEOUT:30000} # 30 seconds
-        socket-timeout: ${OPENSEARCH_SOCKET_TIMEOUT:60000} # 60 seconds
-        max-connections-per-route: ${OPENSEARCH_MAX_CONN_PER_ROUTE:10}
-        max-connections-total: ${OPENSEARCH_MAX_CONN_TOTAL:30}
-        retry-on-failure: ${OPENSEARCH_RETRY_ON_FAILURE:true}
-        max-retries: ${OPENSEARCH_MAX_RETRIES:3}
+        timeout: ${SOLR_CONNECTION_TIMEOUT:30000} # 30 seconds
+        socket-timeout: ${SOLR_SOCKET_TIMEOUT:60000} # 60 seconds
+        max-connections-per-route: ${SOLR_MAX_CONN_PER_ROUTE:10}
+        max-connections-total: ${SOLR_MAX_CONN_TOTAL:30}
+        retry-on-failure: ${SOLR_RETRY_ON_FAILURE:true}
+        max-retries: ${SOLR_MAX_RETRIES:3}

-      # Index settings
-      indices:
-        default-shards: ${OPENSEARCH_DEFAULT_SHARDS:1}
-        default-replicas: ${OPENSEARCH_DEFAULT_REPLICAS:0}
-        refresh-interval: ${OPENSEARCH_REFRESH_INTERVAL:1s}
+      # Query settings
+      query:
+        default-rows: ${SOLR_DEFAULT_ROWS:10}
+        max-rows: ${SOLR_MAX_ROWS:1000}
+        default-operator: ${SOLR_DEFAULT_OPERATOR:AND}
+        highlight: ${SOLR_ENABLE_HIGHLIGHT:true}
+        facets: ${SOLR_ENABLE_FACETS:true}

-      # Bulk operations
-      bulk:
-        actions: ${OPENSEARCH_BULK_ACTIONS:1000}
-        size: ${OPENSEARCH_BULK_SIZE:5242880} # 5MB
-        timeout: ${OPENSEARCH_BULK_TIMEOUT:10000} # 10 seconds
-        concurrent-requests: ${OPENSEARCH_BULK_CONCURRENT:1}
+      # Commit settings
+      commit:
+        soft-commit: ${SOLR_SOFT_COMMIT:true}
+        commit-within: ${SOLR_COMMIT_WITHIN:1000} # 1 second
+        wait-searcher: ${SOLR_WAIT_SEARCHER:false}

       # Health and monitoring
       health:
-        check-interval: ${OPENSEARCH_HEALTH_CHECK_INTERVAL:30000} # 30 seconds
-        slow-query-threshold: ${OPENSEARCH_SLOW_QUERY_THRESHOLD:5000} # 5 seconds
-        enable-metrics: ${OPENSEARCH_ENABLE_METRICS:true}
+        check-interval: ${SOLR_HEALTH_CHECK_INTERVAL:30000} # 30 seconds
+        slow-query-threshold: ${SOLR_SLOW_QUERY_THRESHOLD:5000} # 5 seconds
+        enable-metrics: ${SOLR_ENABLE_METRICS:true}
   images:
     storage-path: ${IMAGE_STORAGE_PATH:/app/images}

@@ -100,8 +92,8 @@ management:
     show-details: when-authorized
     show-components: always
   health:
-    opensearch:
-      enabled: ${OPENSEARCH_HEALTH_ENABLED:true}
+    solr:
+      enabled: ${SOLR_HEALTH_ENABLED:true}

 logging:
   level:
@@ -1,178 +0,0 @@
|
||||
# OpenSearch Configuration - Best Practices Implementation
|
||||
|
||||
## Overview
|
||||
|
||||
This directory contains a production-ready OpenSearch configuration following industry best practices for security, scalability, and maintainability.
|
||||
|
||||
## Architecture
|
||||
|
||||
### 📁 Directory Structure
|
||||
```
|
||||
opensearch/
|
||||
├── config/
|
||||
│ ├── opensearch-development.yml # Development-specific settings
|
||||
│ └── opensearch-production.yml # Production-specific settings
|
||||
├── mappings/
|
||||
│ ├── stories-mapping.json # Story index mapping
|
||||
│ ├── authors-mapping.json # Author index mapping
|
||||
│ └── collections-mapping.json # Collection index mapping
|
||||
├── templates/
|
||||
│ ├── stories-template.json # Index template for stories_*
|
||||
│ └── index-lifecycle-policy.json # ILM policy for index management
|
||||
└── README.md # This file
|
||||
```

## ✅ Best Practices Implemented

### 🔒 **Security**
- **Environment-Aware SSL Configuration**
  - Production: Full certificate validation with custom truststore support
  - Development: Optional certificate validation for local development
- **Proper Authentication**: Basic auth with secure credential management
- **Connection Security**: TLS 1.3 support with hostname verification

### 🏗️ **Configuration Management**
- **Externalized Configuration**: JSON/YAML files instead of hardcoded values
- **Environment-Specific Settings**: Different configs for dev/staging/prod
- **Type-Safe Properties**: Strongly-typed configuration classes
- **Validation**: Configuration validation at startup

### 📈 **Scalability & Performance**
- **Connection Pooling**: Configurable connection pool with timeout management
- **Environment-Aware Sharding**:
  - Development: 1 shard, 0 replicas (single node)
  - Production: 3 shards, 1 replica (high availability)
- **Bulk Operations**: Optimized bulk indexing with configurable batch sizes
- **Index Templates**: Automatic application of settings to new indexes

### 🔄 **Index Lifecycle Management**
- **Automated Index Rollover**: Based on size, document count, and age
- **Hot-Warm-Cold Architecture**: Optimized storage costs
- **Retention Policies**: Automatic cleanup of old data
- **Force Merge**: Optimization in the warm phase

### 📊 **Monitoring & Observability**
- **Health Checks**: Automatic cluster health monitoring
- **Spring Boot Actuator**: Health endpoints for monitoring systems
- **Metrics Collection**: Configurable performance metrics
- **Slow Query Detection**: Configurable thresholds for query performance

### 🛡️ **Error Handling & Resilience**
- **Connection Retry Logic**: Automatic retry with backoff
- **Circuit Breaker Pattern**: Fail-fast for unhealthy clusters
- **Graceful Degradation**: Searches degrade cleanly when OpenSearch is unavailable
- **Detailed Error Logging**: Comprehensive error tracking
|
||||
|
||||
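The retry behaviour itself lives in the Java client configuration; purely to illustrate the policy (exponential backoff, doubling the wait after each failed attempt — the concrete numbers here are illustrative, not taken from the code), a small sketch:

```shell
max_retries=3
attempt=1
delay=1

# Retry loop: double the wait after every failed attempt (exponential backoff).
until [ "$attempt" -gt "$max_retries" ]; do
  echo "attempt $attempt failed; next retry in ${delay}s"
  attempt=$((attempt + 1))
  delay=$((delay * 2))
done
```

Capping the delay (and adding jitter) is the usual refinement once several clients retry against the same cluster.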
## 🚀 Usage

### Development Environment
```yaml
# application-development.yml
storycove:
  opensearch:
    profile: development
    security:
      ssl-verification: false
      trust-all-certificates: true
    indices:
      default-shards: 1
      default-replicas: 0
```

### Production Environment
```yaml
# application-production.yml
storycove:
  opensearch:
    profile: production
    security:
      ssl-verification: true
      trust-all-certificates: false
      truststore-path: /etc/ssl/opensearch-truststore.jks
    indices:
      default-shards: 3
      default-replicas: 1
```

## 📋 Environment Variables

### Required
- `OPENSEARCH_PASSWORD`: Admin password for OpenSearch cluster

### Optional (with sensible defaults)
- `OPENSEARCH_HOST`: Cluster hostname (default: localhost)
- `OPENSEARCH_PORT`: Cluster port (default: 9200)
- `OPENSEARCH_USERNAME`: Admin username (default: admin)
- `OPENSEARCH_SSL_VERIFICATION`: Enable SSL verification (default: false for dev)
- `OPENSEARCH_MAX_CONN_TOTAL`: Max connections (default: 30 for dev, 200 for prod)
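
A startup script can mirror these defaults with shell parameter expansion; a minimal sketch (the URL composition is an assumption about how the backend builds its connection string):

```shell
# Fall back to the documented defaults when a variable is unset.
OPENSEARCH_HOST="${OPENSEARCH_HOST:-localhost}"
OPENSEARCH_PORT="${OPENSEARCH_PORT:-9200}"
OPENSEARCH_USERNAME="${OPENSEARCH_USERNAME:-admin}"

# Base URL the backend connects to.
OPENSEARCH_URL="http://${OPENSEARCH_HOST}:${OPENSEARCH_PORT}"
echo "$OPENSEARCH_URL"
```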

## 🎯 Index Templates

Index templates automatically apply configuration to new indexes:

```json
{
  "index_patterns": ["stories_*"],
  "template": {
    "settings": {
      "number_of_shards": "#{ENV_SPECIFIC}",
      "analysis": {
        "analyzer": {
          "story_analyzer": {
            "type": "standard",
            "stopwords": "_english_"
          }
        }
      }
    }
  }
}
```
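
Installing a template is a one-time `PUT` against the cluster (shown only as a comment below; the template name `stories` is an assumption). What the template buys you is the name-pattern match: any new index whose name matches `stories_*` picks up these settings, which is ordinary glob matching:

```shell
# Applying the template to a running cluster (requires the cluster; sketch only):
#   curl -X PUT "http://localhost:9200/_index_template/stories" \
#        -H 'Content-Type: application/json' -d @stories-index-template.json

# The same glob the cluster uses to decide whether the template applies:
matches_stories_template() {
  case "$1" in
    stories_*) echo "template applies" ;;
    *)         echo "no template" ;;
  esac
}

matches_stories_template "stories_library1"   # template applies
matches_stories_template "authors_library1"   # no template
```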

## 🔍 Health Monitoring

Access health information:
- **Application Health**: `/actuator/health`
- **OpenSearch Specific**: `/actuator/health/opensearch`
- **Detailed Metrics**: Available when `enable-metrics: true`
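
A deploy script or probe can consume the health endpoint directly. The sketch below parses a sample payload shaped like Spring Boot Actuator's response (live, the JSON would come from `curl -s http://localhost:8080/actuator/health`; the component breakdown shown is illustrative):

```shell
# Sample Actuator-shaped health payload (illustrative, not captured from the app).
HEALTH_JSON='{"status":"UP","components":{"opensearch":{"status":"UP"}}}'

# Extract the top-level status field (anchored at the start of the payload, no jq needed).
STATUS=$(printf '%s' "$HEALTH_JSON" | sed -n 's/^{"status":"\([A-Z_]*\)".*/\1/p')

echo "cluster status: $STATUS"
```

Actuator reports `DOWN` in the same shape, so a probe can exit nonzero on anything other than `UP`.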

## 🔄 Deployment Strategy

Recommended deployment approach:

1. **Development**: Test OpenSearch configuration locally
2. **Staging**: Validate performance and accuracy in staging environment
3. **Production**: Deploy with proper monitoring and backup procedures

## 🛠️ Troubleshooting

### Common Issues

1. **SSL Certificate Errors**
   - Development: Set `trust-all-certificates: true`
   - Production: Provide valid truststore path

2. **Connection Timeouts**
   - Increase `connection.timeout` values
   - Check network connectivity and firewall rules

3. **Index Creation Failures**
   - Verify cluster health with `/actuator/health/opensearch`
   - Check OpenSearch logs for detailed error messages

4. **Performance Issues**
   - Monitor slow queries with configurable thresholds
   - Adjust bulk operation settings
   - Review shard allocation and replica settings

## 🔮 Future Enhancements

- **Multi-Cluster Support**: Connect to multiple OpenSearch clusters
- **Advanced Security**: Integration with OpenSearch Security plugin
- **Custom Analyzers**: Domain-specific text analysis
- **Index Aliases**: Zero-downtime index updates
- **Machine Learning**: Integration with OpenSearch ML features

---

This configuration provides a solid foundation that scales from development to enterprise production environments while maintaining security, performance, and operational excellence.

@@ -1,32 +0,0 @@
# OpenSearch Development Configuration
opensearch:
  cluster:
    name: "storycove-dev"
    initial_master_nodes: ["opensearch-node"]

  # Development settings - single node, minimal resources
  indices:
    default_settings:
      number_of_shards: 1
      number_of_replicas: 0
      refresh_interval: "1s"

  # Security settings for development
  security:
    ssl_verification: false
    trust_all_certificates: true

  # Connection settings
  connection:
    timeout: "30s"
    socket_timeout: "60s"
    max_connections_per_route: 10
    max_connections_total: 30

  # Index management
  index_management:
    auto_create_templates: true
    template_patterns:
      stories: "stories_*"
      authors: "authors_*"
      collections: "collections_*"

@@ -1,60 +0,0 @@
# OpenSearch Production Configuration
opensearch:
  cluster:
    name: "storycove-prod"

  # Production settings - multi-shard, with replicas
  indices:
    default_settings:
      number_of_shards: 3
      number_of_replicas: 1
      refresh_interval: "30s"
      max_result_window: 50000

    # Index lifecycle policies
    lifecycle:
      hot_phase_duration: "7d"
      warm_phase_duration: "30d"
      cold_phase_duration: "90d"
      delete_after: "1y"

  # Security settings for production
  security:
    ssl_verification: true
    trust_all_certificates: false
    certificate_verification: true
    tls_version: "TLSv1.3"

  # Connection settings
  connection:
    timeout: "10s"
    socket_timeout: "30s"
    max_connections_per_route: 50
    max_connections_total: 200
    retry_on_failure: true
    max_retries: 3
    retry_delay: "1s"

  # Performance tuning
  performance:
    bulk_actions: 1000
    bulk_size: "5MB"
    bulk_timeout: "10s"
    concurrent_requests: 4

  # Monitoring and observability
  monitoring:
    health_check_interval: "30s"
    slow_query_threshold: "5s"
    enable_metrics: true

  # Index management
  index_management:
    auto_create_templates: true
    template_patterns:
      stories: "stories_*"
      authors: "authors_*"
      collections: "collections_*"
    retention_policy:
      enabled: true
      default_retention: "1y"

@@ -1,79 +0,0 @@
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0,
    "analysis": {
      "analyzer": {
        "name_analyzer": {
          "type": "standard",
          "stopwords": "_english_"
        },
        "autocomplete_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "edge_ngram"]
        }
      },
      "filter": {
        "edge_ngram": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 20
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "id": {
        "type": "keyword"
      },
      "name": {
        "type": "text",
        "analyzer": "name_analyzer",
        "fields": {
          "autocomplete": {
            "type": "text",
            "analyzer": "autocomplete_analyzer"
          },
          "keyword": {
            "type": "keyword"
          }
        }
      },
      "bio": {
        "type": "text",
        "analyzer": "name_analyzer"
      },
      "urls": {
        "type": "keyword"
      },
      "imageUrl": {
        "type": "keyword"
      },
      "storyCount": {
        "type": "integer"
      },
      "averageRating": {
        "type": "float"
      },
      "totalWordCount": {
        "type": "long"
      },
      "totalReadingTime": {
        "type": "integer"
      },
      "createdAt": {
        "type": "date",
        "format": "strict_date_optional_time||epoch_millis"
      },
      "updatedAt": {
        "type": "date",
        "format": "strict_date_optional_time||epoch_millis"
      },
      "libraryId": {
        "type": "keyword"
      }
    }
  }
}

@@ -1,73 +0,0 @@
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0,
    "analysis": {
      "analyzer": {
        "collection_analyzer": {
          "type": "standard",
          "stopwords": "_english_"
        },
        "autocomplete_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "edge_ngram"]
        }
      },
      "filter": {
        "edge_ngram": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 20
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "id": {
        "type": "keyword"
      },
      "name": {
        "type": "text",
        "analyzer": "collection_analyzer",
        "fields": {
          "autocomplete": {
            "type": "text",
            "analyzer": "autocomplete_analyzer"
          },
          "keyword": {
            "type": "keyword"
          }
        }
      },
      "description": {
        "type": "text",
        "analyzer": "collection_analyzer"
      },
      "storyCount": {
        "type": "integer"
      },
      "totalWordCount": {
        "type": "long"
      },
      "averageRating": {
        "type": "float"
      },
      "isPublic": {
        "type": "boolean"
      },
      "createdAt": {
        "type": "date",
        "format": "strict_date_optional_time||epoch_millis"
      },
      "updatedAt": {
        "type": "date",
        "format": "strict_date_optional_time||epoch_millis"
      },
      "libraryId": {
        "type": "keyword"
      }
    }
  }
}

@@ -1,120 +0,0 @@
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0,
    "analysis": {
      "analyzer": {
        "story_analyzer": {
          "type": "standard",
          "stopwords": "_english_"
        },
        "autocomplete_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "edge_ngram"]
        }
      },
      "filter": {
        "edge_ngram": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 20
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "id": {
        "type": "keyword"
      },
      "title": {
        "type": "text",
        "analyzer": "story_analyzer",
        "fields": {
          "autocomplete": {
            "type": "text",
            "analyzer": "autocomplete_analyzer"
          },
          "keyword": {
            "type": "keyword"
          }
        }
      },
      "content": {
        "type": "text",
        "analyzer": "story_analyzer"
      },
      "summary": {
        "type": "text",
        "analyzer": "story_analyzer"
      },
      "authorNames": {
        "type": "text",
        "analyzer": "story_analyzer",
        "fields": {
          "keyword": {
            "type": "keyword"
          }
        }
      },
      "authorIds": {
        "type": "keyword"
      },
      "tagNames": {
        "type": "keyword"
      },
      "seriesTitle": {
        "type": "text",
        "analyzer": "story_analyzer",
        "fields": {
          "keyword": {
            "type": "keyword"
          }
        }
      },
      "seriesId": {
        "type": "keyword"
      },
      "wordCount": {
        "type": "integer"
      },
      "rating": {
        "type": "float"
      },
      "readingTime": {
        "type": "integer"
      },
      "language": {
        "type": "keyword"
      },
      "status": {
        "type": "keyword"
      },
      "createdAt": {
        "type": "date",
        "format": "strict_date_optional_time||epoch_millis"
      },
      "updatedAt": {
        "type": "date",
        "format": "strict_date_optional_time||epoch_millis"
      },
      "publishedAt": {
        "type": "date",
        "format": "strict_date_optional_time||epoch_millis"
      },
      "isRead": {
        "type": "boolean"
      },
      "isFavorite": {
        "type": "boolean"
      },
      "readingProgress": {
        "type": "float"
      },
      "libraryId": {
        "type": "keyword"
      }
    }
  }
}

@@ -1,77 +0,0 @@
{
  "policy": {
    "description": "StoryCove index lifecycle policy",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [
          {
            "rollover": {
              "min_size": "50gb",
              "min_doc_count": 1000000,
              "min_age": "7d"
            }
          }
        ],
        "transitions": [
          {
            "state_name": "warm",
            "conditions": {
              "min_age": "7d"
            }
          }
        ]
      },
      {
        "name": "warm",
        "actions": [
          {
            "replica_count": {
              "number_of_replicas": 0
            }
          },
          {
            "force_merge": {
              "max_num_segments": 1
            }
          }
        ],
        "transitions": [
          {
            "state_name": "cold",
            "conditions": {
              "min_age": "30d"
            }
          }
        ]
      },
      {
        "name": "cold",
        "actions": [],
        "transitions": [
          {
            "state_name": "delete",
            "conditions": {
              "min_age": "365d"
            }
          }
        ]
      },
      {
        "name": "delete",
        "actions": [
          {
            "delete": {}
          }
        ]
      }
    ],
    "ism_template": [
      {
        "index_patterns": ["stories_*", "authors_*", "collections_*"],
        "priority": 100
      }
    ]
  }
}

@@ -1,124 +0,0 @@
{
  "index_patterns": ["stories_*"],
  "priority": 1,
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0,
      "analysis": {
        "analyzer": {
          "story_analyzer": {
            "type": "standard",
            "stopwords": "_english_"
          },
          "autocomplete_analyzer": {
            "type": "custom",
            "tokenizer": "standard",
            "filter": ["lowercase", "edge_ngram"]
          }
        },
        "filter": {
          "edge_ngram": {
            "type": "edge_ngram",
            "min_gram": 2,
            "max_gram": 20
          }
        }
      }
    },
    "mappings": {
      "properties": {
        "id": {
          "type": "keyword"
        },
        "title": {
          "type": "text",
          "analyzer": "story_analyzer",
          "fields": {
            "autocomplete": {
              "type": "text",
              "analyzer": "autocomplete_analyzer"
            },
            "keyword": {
              "type": "keyword"
            }
          }
        },
        "content": {
          "type": "text",
          "analyzer": "story_analyzer"
        },
        "summary": {
          "type": "text",
          "analyzer": "story_analyzer"
        },
        "authorNames": {
          "type": "text",
          "analyzer": "story_analyzer",
          "fields": {
            "keyword": {
              "type": "keyword"
            }
          }
        },
        "authorIds": {
          "type": "keyword"
        },
        "tagNames": {
          "type": "keyword"
        },
        "seriesTitle": {
          "type": "text",
          "analyzer": "story_analyzer",
          "fields": {
            "keyword": {
              "type": "keyword"
            }
          }
        },
        "seriesId": {
          "type": "keyword"
        },
        "wordCount": {
          "type": "integer"
        },
        "rating": {
          "type": "float"
        },
        "readingTime": {
          "type": "integer"
        },
        "language": {
          "type": "keyword"
        },
        "status": {
          "type": "keyword"
        },
        "createdAt": {
          "type": "date",
          "format": "strict_date_optional_time||epoch_millis"
        },
        "updatedAt": {
          "type": "date",
          "format": "strict_date_optional_time||epoch_millis"
        },
        "publishedAt": {
          "type": "date",
          "format": "strict_date_optional_time||epoch_millis"
        },
        "isRead": {
          "type": "boolean"
        },
        "isFavorite": {
          "type": "boolean"
        },
        "readingProgress": {
          "type": "float"
        },
        "libraryId": {
          "type": "keyword"
        }
      }
    }
  }
}

@@ -19,11 +19,14 @@ storycove:
   auth:
     password: test-password
   search:
-    engine: opensearch
-    opensearch:
+    engine: solr
+    solr:
       host: localhost
-      port: 9200
+      port: 8983
       scheme: http
+      cores:
+        stories: storycove_stories
+        authors: storycove_authors
   images:
     storage-path: /tmp/test-images

@@ -34,18 +34,10 @@ services:
       - SPRING_DATASOURCE_USERNAME=storycove
       - SPRING_DATASOURCE_PASSWORD=${DB_PASSWORD}
       - JWT_SECRET=${JWT_SECRET}
-      - OPENSEARCH_HOST=opensearch
-      - OPENSEARCH_PORT=9200
-      - OPENSEARCH_SCHEME=http
-      - OPENSEARCH_USERNAME=
-      - OPENSEARCH_PASSWORD=
-      - OPENSEARCH_SSL_VERIFICATION=false
-      - OPENSEARCH_TRUST_ALL_CERTS=true
-      - OPENSEARCH_CONNECTION_TIMEOUT=60000
-      - OPENSEARCH_SOCKET_TIMEOUT=120000
-      - OPENSEARCH_MAX_RETRIES=5
-      - OPENSEARCH_RETRY_ON_FAILURE=true
-      - SEARCH_ENGINE=${SEARCH_ENGINE:-opensearch}
+      - SOLR_HOST=solr
+      - SOLR_PORT=8983
+      - SOLR_SCHEME=http
+      - SEARCH_ENGINE=${SEARCH_ENGINE:-solr}
       - IMAGE_STORAGE_PATH=/app/images
       - APP_PASSWORD=${APP_PASSWORD}
       - STORYCOVE_CORS_ALLOWED_ORIGINS=${STORYCOVE_CORS_ALLOWED_ORIGINS:-http://localhost:3000,http://localhost:6925}
@@ -55,7 +47,7 @@ services:
     depends_on:
       postgres:
         condition: service_started
-      opensearch:
+      solr:
         condition: service_started
     networks:
       - storycove-network
@@ -75,22 +67,32 @@ services:
       - storycove-network


-  opensearch:
-    build:
-      context: .
-      dockerfile: opensearch.Dockerfile
-    # No port mapping - only accessible within the Docker network
+  solr:
+    image: solr:9.9.0
+    ports:
+      - "8983:8983" # Expose Solr Admin UI for development
     environment:
-      - cluster.name=storycove-opensearch
-      - node.name=opensearch-node
-      - discovery.type=single-node
-      - bootstrap.memory_lock=false
-      - "OPENSEARCH_JAVA_OPTS=-Xms256m -Xmx512m --add-opens=java.base/java.lang=ALL-UNNAMED -Dlucene.useVectorAPI=false -Dorg.apache.lucene.store.MMapDirectory.enableMemorySegments=false"
-      - "DISABLE_INSTALL_DEMO_CONFIG=true"
-      - "DISABLE_SECURITY_PLUGIN=true"
-      - "DISABLE_PERFORMANCE_ANALYZER_AGENT_CLI=true"
-      - "ES_TMPDIR=/tmp"
-      - "_JAVA_OPTIONS=-Djdk.net.useExclusiveBind=false"
+      - SOLR_HEAP=512m
+      - SOLR_JAVA_MEM=-Xms256m -Xmx512m
+    volumes:
+      - solr_data:/var/solr
+      - ./solr/stories:/opt/solr-9.9.0/server/solr/configsets/storycove_stories
+      - ./solr/authors:/opt/solr-9.9.0/server/solr/configsets/storycove_authors
+    command: >
+      sh -c "
+        echo 'Starting Solr...' &&
+        solr start -f &
+        SOLR_PID=$$! &&
+        echo 'Waiting for Solr to be ready...' &&
+        sleep 15 &&
+        echo 'Creating cores...' &&
+        (solr create_core -c storycove_stories -d storycove_stories || echo 'Stories core already exists') &&
+        echo 'Stories core ready' &&
+        (solr create_core -c storycove_authors -d storycove_authors || echo 'Authors core already exists') &&
+        echo 'Authors core ready' &&
+        echo 'Both cores are available' &&
+        wait $$SOLR_PID
+      "
     deploy:
       resources:
         limits:
@@ -99,39 +101,19 @@ services:
           memory: 512M
     stop_grace_period: 30s
     healthcheck:
-      test: ["CMD-SHELL", "curl -f http://localhost:9200/_cluster/health || exit 1"]
+      test: ["CMD-SHELL", "curl -f http://localhost:8983/solr/admin/ping || exit 1"]
       interval: 30s
       timeout: 10s
      retries: 5
-      start_period: 120s
-    ulimits:
-      memlock:
-        soft: -1
-        hard: -1
-      nofile:
-        soft: 65536
-        hard: 65536
-    volumes:
-      - opensearch_data:/usr/share/opensearch/data
+      start_period: 60s
     networks:
       - storycove-network
     restart: unless-stopped

-  opensearch-dashboards:
-    image: opensearchproject/opensearch-dashboards:3.2.0
-    ports:
-      - "5601:5601" # Expose OpenSearch Dashboard
-    environment:
-      - OPENSEARCH_HOSTS=http://opensearch:9200
-      - "DISABLE_SECURITY_DASHBOARDS_PLUGIN=true"
-    depends_on:
-      - opensearch
-    networks:
-      - storycove-network
-
 volumes:
   postgres_data:
-  opensearch_data:
+  solr_data:
   images_data:
   library_config:

@@ -69,11 +69,11 @@ export default function LibraryContent() {
   }, []);

   const convertFacetsToTags = (facets?: Record<string, FacetCount[]>): Tag[] => {
-    if (!facets || !facets.tagNames) {
+    if (!facets || !facets.tagNames_facet) {
       return [];
     }

-    return facets.tagNames.map(facet => {
+    return facets.tagNames_facet.map(facet => {
       // Find the full tag data by name
       const fullTag = fullTags.find(tag => tag.name.toLowerCase() === facet.value.toLowerCase());

@@ -493,11 +493,11 @@ async function processIndividualMode(

   console.log(`Bulk import completed: ${importedCount} imported, ${skippedCount} skipped, ${errorCount} errors`);

-  // Trigger OpenSearch reindex if any stories were imported
+  // Trigger Solr reindex if any stories were imported
   if (importedCount > 0) {
     try {
-      console.log('Triggering OpenSearch reindex after bulk import...');
-      const reindexUrl = `http://backend:8080/api/admin/search/opensearch/reindex`;
+      console.log('Triggering Solr reindex after bulk import...');
+      const reindexUrl = `http://backend:8080/api/admin/search/solr/reindex`;
       const reindexResponse = await fetch(reindexUrl, {
         method: 'POST',
         headers: {
@@ -508,12 +508,12 @@ async function processIndividualMode(

       if (reindexResponse.ok) {
         const reindexResult = await reindexResponse.json();
-        console.log('OpenSearch reindex completed:', reindexResult);
+        console.log('Solr reindex completed:', reindexResult);
       } else {
-        console.warn('OpenSearch reindex failed:', reindexResponse.status);
+        console.warn('Solr reindex failed:', reindexResponse.status);
       }
     } catch (error) {
-      console.warn('Failed to trigger OpenSearch reindex:', error);
+      console.warn('Failed to trigger Solr reindex:', error);
       // Don't fail the whole request if reindex fails
     }
   }

@@ -11,18 +11,18 @@ interface SystemSettingsProps {
 export default function SystemSettings({}: SystemSettingsProps) {
   const [searchEngineStatus, setSearchEngineStatus] = useState<{
     currentEngine: string;
-    openSearchAvailable: boolean;
+    solrAvailable: boolean;
     loading: boolean;
     message: string;
     success?: boolean;
   }>({
-    currentEngine: 'opensearch',
-    openSearchAvailable: false,
+    currentEngine: 'solr',
+    solrAvailable: false,
     loading: false,
     message: ''
   });

-  const [openSearchStatus, setOpenSearchStatus] = useState<{
+  const [solrStatus, setSolrStatus] = useState<{
     reindex: { loading: boolean; message: string; success?: boolean };
     recreate: { loading: boolean; message: string; success?: boolean };
   }>({
@@ -312,7 +312,7 @@ export default function SystemSettings({}: SystemSettingsProps) {
       setSearchEngineStatus(prev => ({
         ...prev,
         currentEngine: status.primaryEngine,
-        openSearchAvailable: status.openSearchAvailable,
+        solrAvailable: status.solrAvailable,
       }));
     } catch (error: any) {
       console.error('Failed to load search engine status:', error);
@@ -321,16 +321,16 @@ export default function SystemSettings({}: SystemSettingsProps) {


-  const handleOpenSearchReindex = async () => {
-    setOpenSearchStatus(prev => ({
+  const handleSolrReindex = async () => {
+    setSolrStatus(prev => ({
       ...prev,
-      reindex: { loading: true, message: 'Reindexing OpenSearch...', success: undefined }
+      reindex: { loading: true, message: 'Reindexing Solr...', success: undefined }
     }));

     try {
-      const result = await searchAdminApi.reindexOpenSearch();
+      const result = await searchAdminApi.reindexSolr();

-      setOpenSearchStatus(prev => ({
+      setSolrStatus(prev => ({
         ...prev,
         reindex: {
           loading: false,
@@ -340,13 +340,13 @@ export default function SystemSettings({}: SystemSettingsProps) {
       }));

       setTimeout(() => {
-        setOpenSearchStatus(prev => ({
+        setSolrStatus(prev => ({
           ...prev,
           reindex: { loading: false, message: '', success: undefined }
         }));
       }, 8000);
     } catch (error: any) {
-      setOpenSearchStatus(prev => ({
+      setSolrStatus(prev => ({
         ...prev,
         reindex: {
           loading: false,
@@ -356,7 +356,7 @@ export default function SystemSettings({}: SystemSettingsProps) {
       }));

       setTimeout(() => {
-        setOpenSearchStatus(prev => ({
+        setSolrStatus(prev => ({
           ...prev,
           reindex: { loading: false, message: '', success: undefined }
         }));
@@ -364,16 +364,16 @@ export default function SystemSettings({}: SystemSettingsProps) {
     }
   };

-  const handleOpenSearchRecreate = async () => {
-    setOpenSearchStatus(prev => ({
+  const handleSolrRecreate = async () => {
+    setSolrStatus(prev => ({
       ...prev,
-      recreate: { loading: true, message: 'Recreating OpenSearch indices...', success: undefined }
+      recreate: { loading: true, message: 'Recreating Solr indices...', success: undefined }
     }));

     try {
-      const result = await searchAdminApi.recreateOpenSearchIndices();
+      const result = await searchAdminApi.recreateSolrIndices();

-      setOpenSearchStatus(prev => ({
+      setSolrStatus(prev => ({
         ...prev,
         recreate: {
           loading: false,
@@ -383,13 +383,13 @@ export default function SystemSettings({}: SystemSettingsProps) {
       }));

       setTimeout(() => {
-        setOpenSearchStatus(prev => ({
+        setSolrStatus(prev => ({
           ...prev,
           recreate: { loading: false, message: '', success: undefined }
         }));
       }, 8000);
     } catch (error: any) {
-      setOpenSearchStatus(prev => ({
+      setSolrStatus(prev => ({
         ...prev,
         recreate: {
           loading: false,
@@ -399,7 +399,7 @@ export default function SystemSettings({}: SystemSettingsProps) {
       }));

       setTimeout(() => {
-        setOpenSearchStatus(prev => ({
+        setSolrStatus(prev => ({
           ...prev,
           recreate: { loading: false, message: '', success: undefined }
         }));
@@ -418,7 +418,7 @@ export default function SystemSettings({}: SystemSettingsProps) {
       <div className="theme-card theme-shadow rounded-lg p-6">
         <h2 className="text-xl font-semibold theme-header mb-4">Search Management</h2>
         <p className="theme-text mb-6">
-          Manage OpenSearch indices for stories and authors. Use these tools if search isn't returning expected results.
+          Manage Solr indices for stories and authors. Use these tools if search isn't returning expected results.
         </p>

         <div className="space-y-6">
@@ -427,9 +427,9 @@ export default function SystemSettings({}: SystemSettingsProps) {
             <h3 className="text-lg font-semibold theme-header mb-3">Search Status</h3>
             <div className="grid grid-cols-1 sm:grid-cols-2 gap-3 text-sm">
               <div className="flex justify-between">
-                <span>OpenSearch:</span>
-                <span className={`font-medium ${searchEngineStatus.openSearchAvailable ? 'text-green-600 dark:text-green-400' : 'text-red-600 dark:text-red-400'}`}>
-                  {searchEngineStatus.openSearchAvailable ? 'Available' : 'Unavailable'}
+                <span>Solr:</span>
+                <span className={`font-medium ${searchEngineStatus.solrAvailable ? 'text-green-600 dark:text-green-400' : 'text-red-600 dark:text-red-400'}`}>
+                  {searchEngineStatus.solrAvailable ? 'Available' : 'Unavailable'}
                 </span>
               </div>
             </div>
@@ -444,43 +444,43 @@ export default function SystemSettings({}: SystemSettingsProps) {

             <div className="flex flex-col sm:flex-row gap-3 mb-4">
               <Button
-                onClick={handleOpenSearchReindex}
-                disabled={openSearchStatus.reindex.loading || openSearchStatus.recreate.loading || !searchEngineStatus.openSearchAvailable}
-                loading={openSearchStatus.reindex.loading}
+                onClick={handleSolrReindex}
+                disabled={solrStatus.reindex.loading || solrStatus.recreate.loading || !searchEngineStatus.solrAvailable}
+                loading={solrStatus.reindex.loading}
                 variant="ghost"
                 className="flex-1"
               >
-                {openSearchStatus.reindex.loading ? 'Reindexing...' : '🔄 Reindex All'}
+                {solrStatus.reindex.loading ? 'Reindexing...' : '🔄 Reindex All'}
               </Button>
               <Button
-                onClick={handleOpenSearchRecreate}
-                disabled={openSearchStatus.reindex.loading || openSearchStatus.recreate.loading || !searchEngineStatus.openSearchAvailable}
-                loading={openSearchStatus.recreate.loading}
+                onClick={handleSolrRecreate}
+                disabled={solrStatus.reindex.loading || solrStatus.recreate.loading || !searchEngineStatus.solrAvailable}
+                loading={solrStatus.recreate.loading}
                 variant="secondary"
                 className="flex-1"
               >
-                {openSearchStatus.recreate.loading ? 'Recreating...' : '🏗️ Recreate Indices'}
+                {solrStatus.recreate.loading ? 'Recreating...' : '🏗️ Recreate Indices'}
               </Button>
             </div>

             {/* Status Messages */}
-            {openSearchStatus.reindex.message && (
+            {solrStatus.reindex.message && (
               <div className={`text-sm p-3 rounded mb-3 ${
-                openSearchStatus.reindex.success
+                solrStatus.reindex.success
                   ? 'bg-green-50 dark:bg-green-900/20 text-green-800 dark:text-green-200'
                   : 'bg-red-50 dark:bg-red-900/20 text-red-800 dark:text-red-200'
               }`}>
-                {openSearchStatus.reindex.message}
+                {solrStatus.reindex.message}
               </div>
             )}

-            {openSearchStatus.recreate.message && (
+            {solrStatus.recreate.message && (
               <div className={`text-sm p-3 rounded mb-3 ${
-                openSearchStatus.recreate.success
+                solrStatus.recreate.success
                   ? 'bg-green-50 dark:bg-green-900/20 text-green-800 dark:text-green-200'
                   : 'bg-red-50 dark:bg-red-900/20 text-red-800 dark:text-red-200'
               }`}>
-                {openSearchStatus.recreate.message}
+                {solrStatus.recreate.message}
               </div>
             )}
           </div>

@@ -576,7 +576,7 @@ export const searchAdminApi = {
   getStatus: async (): Promise<{
     primaryEngine: string;
     dualWrite: boolean;
-    openSearchAvailable: boolean;
+    solrAvailable: boolean;
   }> => {
     const response = await api.get('/admin/search/status');
     return response.data;
@@ -600,8 +600,8 @@ export const searchAdminApi = {
   },

   // Switch engines
-  switchToOpenSearch: async (): Promise<{ message: string }> => {
-    const response = await api.post('/admin/search/switch/opensearch');
+  switchToSolr: async (): Promise<{ message: string }> => {
+    const response = await api.post('/admin/search/switch/solr');
     return response.data;
   },

@@ -612,8 +612,8 @@ export const searchAdminApi = {
     return response.data;
   },

-  // OpenSearch operations
-  reindexOpenSearch: async (): Promise<{
+  // Solr operations
+  reindexSolr: async (): Promise<{
     success: boolean;
     message: string;
     storiesCount?: number;
@@ -621,11 +621,11 @@ export const searchAdminApi = {
     totalCount?: number;
     error?: string;
   }> => {
-    const response = await api.post('/admin/search/opensearch/reindex');
+    const response = await api.post('/admin/search/solr/reindex');
     return response.data;
   },

-  recreateOpenSearchIndices: async (): Promise<{
+  recreateSolrIndices: async (): Promise<{
     success: boolean;
     message: string;
     storiesCount?: number;
@@ -633,7 +633,7 @@ export const searchAdminApi = {
     totalCount?: number;
     error?: string;
   }> => {
-    const response = await api.post('/admin/search/opensearch/recreate');
+    const response = await api.post('/admin/search/solr/recreate');
     return response.data;
   },
 };
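After the rename, every admin search operation maps onto a `/admin/search/solr/...` route, except engine switching, which lives under `/admin/search/switch/solr`. A minimal sketch of that route mapping as a pure helper (the route strings come from the diff above; the helper function itself is illustrative, not part of the codebase):

```typescript
// Admin search operations exposed by the backend after the Solr rename.
type SolrAdminOp = 'reindex' | 'recreate' | 'switch';

// Build the REST path for a given admin operation. The route strings
// mirror the calls in searchAdminApi above; this helper is hypothetical.
function solrAdminPath(op: SolrAdminOp): string {
  // 'switch' lives under /switch/solr rather than /solr/<op>
  return op === 'switch'
    ? '/admin/search/switch/solr'
    : `/admin/search/solr/${op}`;
}

console.log(solrAdminPath('reindex')); // '/admin/search/solr/reindex'
```

This keeps the endpoint naming in one place if further engine swaps ever happen.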
@@ -51,12 +51,15 @@ RUN mkdir -p /usr/share/opensearch/data && \
     echo "-Dorg.apache.lucene.store.MMapDirectory.enableMemorySegments=false" >> /usr/share/opensearch/config/jvm.options.d/synology.options && \
     echo "-Djava.awt.headless=true" >> /usr/share/opensearch/config/jvm.options.d/synology.options && \
     echo "-XX:+UseContainerSupport" >> /usr/share/opensearch/config/jvm.options.d/synology.options && \
-    echo "-Dorg.opensearch.bootstrap.start_timeout=60s" >> /usr/share/opensearch/config/jvm.options.d/synology.options && \
+    echo "-Dorg.opensearch.bootstrap.start_timeout=300s" >> /usr/share/opensearch/config/jvm.options.d/synology.options && \
     echo "-Dopensearch.logger.level=INFO" >> /usr/share/opensearch/config/jvm.options.d/synology.options && \
+    echo "--add-opens=jdk.unsupported/sun.misc=ALL-UNNAMED" >> /usr/share/opensearch/config/jvm.options.d/synology.options && \
+    echo "--add-opens=java.base/java.util=ALL-UNNAMED" >> /usr/share/opensearch/config/jvm.options.d/synology.options && \
+    echo "--add-opens=java.base/java.lang=ALL-UNNAMED" >> /usr/share/opensearch/config/jvm.options.d/synology.options && \
+    echo "--add-modules=jdk.unsupported" >> /usr/share/opensearch/config/jvm.options.d/synology.options && \
     echo "-XX:+UnlockExperimentalVMOptions" >> /usr/share/opensearch/config/jvm.options.d/synology.options && \
     echo "-XX:-UseVectorApi" >> /usr/share/opensearch/config/jvm.options.d/synology.options && \
     echo "-Djdk.incubator.vector.VECTOR_ACCESS_OOB_CHECK=0" >> /usr/share/opensearch/config/jvm.options.d/synology.options && \
     sed -i '/javaagent/d' /usr/share/opensearch/config/jvm.options && \
     echo '#!/bin/bash' > /usr/share/opensearch/start-opensearch.sh && \
     echo 'echo "Starting OpenSearch with Java 21..."' >> /usr/share/opensearch/start-opensearch.sh && \
@@ -73,7 +76,7 @@ RUN mkdir -p /usr/share/opensearch/data && \
     echo 'cat /usr/share/opensearch/config/jvm.options.d/synology.options' >> /usr/share/opensearch/start-opensearch.sh && \
     echo 'echo "Environment OPENSEARCH_JAVA_OPTS: $OPENSEARCH_JAVA_OPTS"' >> /usr/share/opensearch/start-opensearch.sh && \
     echo 'echo "Attempting to force disable vector operations..."' >> /usr/share/opensearch/start-opensearch.sh && \
-    echo 'export OPENSEARCH_JAVA_OPTS="$OPENSEARCH_JAVA_OPTS -Dlucene.useVectorAPI=false -Dorg.apache.lucene.store.MMapDirectory.enableMemorySegments=false"' >> /usr/share/opensearch/start-opensearch.sh && \
+    echo 'export OPENSEARCH_JAVA_OPTS="$OPENSEARCH_JAVA_OPTS -Dlucene.useVectorAPI=false -Dorg.apache.lucene.store.MMapDirectory.enableMemorySegments=false --limit-modules=java.base,java.logging,java.xml,java.management,java.naming,java.desktop,java.security.jgss,jdk.unsupported"' >> /usr/share/opensearch/start-opensearch.sh && \
     echo 'echo "Final OPENSEARCH_JAVA_OPTS: $OPENSEARCH_JAVA_OPTS"' >> /usr/share/opensearch/start-opensearch.sh && \
     echo 'echo "Starting OpenSearch binary..."' >> /usr/share/opensearch/start-opensearch.sh && \
     echo 'timeout 300s /usr/share/opensearch/bin/opensearch &' >> /usr/share/opensearch/start-opensearch.sh && \
93
solr/authors/conf/managed-schema
Executable file
@@ -0,0 +1,93 @@
<?xml version="1.0" encoding="UTF-8" ?>
<!--
  Solr Schema for StoryCove Authors Core
  Based on AuthorSearchDto data model
-->
<schema name="storycove-authors" version="1.6">

  <!-- Field Types -->

  <!-- String field type for exact matching -->
  <fieldType name="string" class="solr.StrField" sortMissingLast="true" />

  <!-- Text field type for full-text search -->
  <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
      <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

  <!-- Enhanced text field for names -->
  <fieldType name="text_enhanced" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.EdgeNGramFilterFactory" minGramSize="2" maxGramSize="15"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

  <!-- Integer field type -->
  <fieldType name="pint" class="solr.IntPointField" docValues="true"/>

  <!-- Long field type -->
  <fieldType name="plong" class="solr.LongPointField" docValues="true"/>

  <!-- Double field type -->
  <fieldType name="pdouble" class="solr.DoublePointField" docValues="true"/>

  <!-- Date field type -->
  <fieldType name="pdate" class="solr.DatePointField" docValues="true"/>

  <!-- Multi-valued string for URLs -->
  <fieldType name="strings" class="solr.StrField" sortMissingLast="true" multiValued="true"/>

  <!-- Fields -->

  <!-- Required Fields -->
  <field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
  <field name="_version_" type="plong" indexed="false" stored="false"/>

  <!-- Core Author Fields -->
  <field name="name" type="text_enhanced" indexed="true" stored="true" required="true"/>
  <field name="notes" type="text_general" indexed="true" stored="true"/>
  <field name="authorRating" type="pint" indexed="true" stored="true"/>
  <field name="averageStoryRating" type="pdouble" indexed="true" stored="true"/>
  <field name="storyCount" type="pint" indexed="true" stored="true"/>
  <field name="urls" type="strings" indexed="true" stored="true"/>
  <field name="avatarImagePath" type="string" indexed="false" stored="true"/>

  <!-- Timestamp Fields -->
  <field name="createdAt" type="pdate" indexed="true" stored="true"/>
  <field name="updatedAt" type="pdate" indexed="true" stored="true"/>

  <!-- Search-specific Fields -->
  <field name="searchScore" type="plong" indexed="false" stored="true"/>

  <!-- Combined search field for general queries -->
  <field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>

  <!-- Copy Fields for comprehensive search -->
  <copyField source="name" dest="text"/>
  <copyField source="notes" dest="text"/>
  <copyField source="urls" dest="text"/>

  <!-- Default Search Field -->

  <!-- UniqueKey -->
  <uniqueKey>id</uniqueKey>

</schema>
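The `text_enhanced` type applies `EdgeNGramFilterFactory` (minGramSize=2, maxGramSize=15) at index time only, so an indexed name like `tolkien` also produces the prefixes `to`, `tol`, and so on; queries, which skip the n-gram filter, then match partial names without wildcards. A rough TypeScript sketch of what that filter emits per token (illustrative only, not Lucene's implementation):

```typescript
// Emit edge n-grams for one token, mirroring EdgeNGramFilterFactory
// with minGramSize=2 and maxGramSize=15 from the schema above.
function edgeNGrams(token: string, min = 2, max = 15): string[] {
  const grams: string[] = [];
  for (let len = min; len <= Math.min(max, token.length); len++) {
    grams.push(token.slice(0, len));
  }
  return grams;
}

console.log(edgeNGrams('tolkien'));
// ['to', 'tol', 'tolk', 'tolki', 'tolkie', 'tolkien']
```

Because the query-side analyzer omits the filter, the query term `tolk` is matched directly against the indexed gram `tolk` rather than being expanded itself.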
140
solr/authors/conf/solrconfig.xml
Executable file
@@ -0,0 +1,140 @@
<?xml version="1.0" encoding="UTF-8" ?>
<!--
  Solr Configuration for StoryCove Authors Core
  Optimized for author search with highlighting and faceting
-->
<config>
  <luceneMatchVersion>9.9.0</luceneMatchVersion>

  <!-- DataDir configuration -->
  <dataDir>${solr.data.dir:}</dataDir>

  <!-- Directory Factory -->
  <directoryFactory name="DirectoryFactory"
                    class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>

  <!-- CodecFactory -->
  <codecFactory class="solr.SchemaCodecFactory"/>

  <!-- Index Configuration -->
  <indexConfig>
    <lockType>${solr.lock.type:native}</lockType>
    <infoStream>true</infoStream>
  </indexConfig>

  <!-- JMX Configuration -->
  <jmx />

  <!-- Update Handler -->
  <updateHandler class="solr.DirectUpdateHandler2">
    <updateLog>
      <str name="dir">${solr.ulog.dir:}</str>
      <int name="numVersionBuckets">${solr.ulog.numVersionBuckets:65536}</int>
    </updateLog>

    <autoCommit>
      <maxTime>15000</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>

    <autoSoftCommit>
      <maxTime>1000</maxTime>
    </autoSoftCommit>
  </updateHandler>

  <!-- Query Configuration -->
  <query>
    <maxBooleanClauses>1024</maxBooleanClauses>
    <filterCache class="solr.CaffeineCache"
                 size="512"
                 initialSize="512"
                 autowarmCount="0"/>
    <queryResultCache class="solr.CaffeineCache"
                      size="512"
                      initialSize="512"
                      autowarmCount="0"/>
    <documentCache class="solr.CaffeineCache"
                   size="512"
                   initialSize="512"
                   autowarmCount="0"/>
    <enableLazyFieldLoading>true</enableLazyFieldLoading>
  </query>

  <!-- Request Dispatcher -->
  <requestDispatcher handleSelect="false" >
    <requestParsers enableRemoteStreaming="true"
                    multipartUploadLimitInKB="2048000"
                    formdataUploadLimitInKB="2048"
                    addHttpRequestToContext="false"/>
    <httpCaching never304="true" />
  </requestDispatcher>

  <!-- Request Handlers -->

  <!-- Standard Select Handler -->
  <requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <int name="rows">10</int>
      <str name="df">text</str>
      <str name="wt">json</str>
      <str name="indent">true</str>
      <str name="hl">true</str>
      <str name="hl.fl">name,notes</str>
      <str name="hl.simple.pre">&lt;em&gt;</str>
      <str name="hl.simple.post">&lt;/em&gt;</str>
      <str name="hl.fragsize">150</str>
      <str name="hl.maxAnalyzedChars">51200</str>
      <str name="facet">true</str>
      <str name="facet.field">authorRating</str>
      <str name="facet.range">averageStoryRating</str>
      <str name="facet.range">storyCount</str>
      <str name="facet.mincount">1</str>
      <str name="facet.sort">count</str>
    </lst>
  </requestHandler>

  <!-- Update Handler -->
  <requestHandler name="/update" class="solr.UpdateRequestHandler" />

  <!-- Admin Handlers -->
  <requestHandler name="/admin/ping" class="solr.PingRequestHandler">
    <lst name="invariants">
      <str name="q">*:*</str>
    </lst>
    <lst name="defaults">
      <str name="echoParams">all</str>
    </lst>
  </requestHandler>

  <!-- Suggester Handler -->
  <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
    <lst name="defaults">
      <str name="suggest">true</str>
      <str name="suggest.count">10</str>
    </lst>
    <arr name="components">
      <str>suggest</str>
    </arr>
  </requestHandler>

  <!-- Search Components -->
  <searchComponent name="suggest" class="solr.SuggestComponent">
    <lst name="suggester">
      <str name="name">authorSuggester</str>
      <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
      <str name="dictionaryImpl">DocumentDictionaryFactory</str>
      <str name="field">name</str>
      <str name="weightField">storyCount</str>
      <str name="suggestAnalyzerFieldType">text_general</str>
      <str name="buildOnStartup">false</str>
      <str name="buildOnCommit">false</str>
    </lst>
  </searchComponent>

  <!-- Response Writers -->
  <queryResponseWriter name="json" class="solr.JSONResponseWriter">
    <str name="content-type">application/json; charset=UTF-8</str>
  </queryResponseWriter>

</config>
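Because the `/select` handler bakes highlighting, faceting, and `rows=10` into its defaults, a client only has to send `q` plus any per-request overrides. A sketch of the resulting request URL as a pure query-string builder (the core path and parameter names follow the config above; the helper itself is hypothetical):

```typescript
// Build a query string for the /select handler configured above.
// Defaults (hl, facets, rows=10, df=text, wt=json) come from
// solrconfig.xml, so only q and explicit overrides are sent.
function selectParams(q: string, overrides: Record<string, string> = {}): string {
  const params = new URLSearchParams({ q, ...overrides });
  return `/solr/authors/select?${params.toString()}`;
}

console.log(selectParams('rowling', { rows: '5' }));
// '/solr/authors/select?q=rowling&rows=5'
```

Any parameter passed in `overrides` shadows the handler default for that one request, which is how Solr's `defaults` section is designed to be used.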
34
solr/authors/conf/stopwords.txt
Executable file
@@ -0,0 +1,34 @@
# English stopwords for author search
a
an
and
are
as
at
be
but
by
for
if
in
into
is
it
no
not
of
on
or
such
that
the
their
then
there
these
they
this
to
was
will
with
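`StopFilterFactory` with `ignoreCase="true"` drops these tokens from the stream at both index and query time. A minimal sketch of the effect (illustrative; the set below is a small subset of the file above):

```typescript
// Tokens listed in stopwords.txt are dropped case-insensitively
// (StopFilterFactory, ignoreCase=true). Subset shown for brevity.
const stopwords = new Set(['a', 'an', 'and', 'the', 'of', 'to', 'is']);

function removeStopwords(tokens: string[]): string[] {
  return tokens.filter((t) => !stopwords.has(t.toLowerCase()));
}

console.log(removeStopwords(['The', 'Lord', 'of', 'the', 'Rings']));
// ['Lord', 'Rings']
```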
9
solr/authors/conf/synonyms.txt
Executable file
@@ -0,0 +1,9 @@
# Synonyms for author search
# Format: word1,word2,word3 => synonym1,synonym2
writer,author,novelist
pen name,pseudonym,alias
prolific,productive
acclaimed,famous,renowned
bestselling,popular
contemporary,modern
classic,traditional
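`SynonymGraphFilterFactory` with `expand="true"` treats each comma-separated row as an equivalence class: a query for any member matches documents containing any other member. A rough sketch of that expansion (illustrative, not Lucene's graph implementation; rows are a subset of the file above):

```typescript
// Comma-separated synonym rows form equivalence classes; with
// expand=true a query term expands to every member of its class.
const synonymRows: string[][] = [
  ['writer', 'author', 'novelist'],
  ['acclaimed', 'famous', 'renowned'],
];

function expandTerm(term: string): string[] {
  for (const row of synonymRows) {
    if (row.includes(term)) return row;
  }
  return [term]; // no synonyms: term passes through unchanged
}

console.log(expandTerm('author')); // ['writer', 'author', 'novelist']
```

With `expand="false"` the filter would instead collapse every member to the first entry in the row; `expand="true"` was chosen here so indexed documents need no rewriting.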
129
solr/stories/conf/managed-schema
Executable file
@@ -0,0 +1,129 @@
<?xml version="1.0" encoding="UTF-8" ?>
<!--
  Solr Schema for StoryCove Stories Core
  Based on StorySearchDto data model
-->
<schema name="storycove-stories" version="1.6">

  <!-- Field Types -->

  <!-- String field type for exact matching -->
  <fieldType name="string" class="solr.StrField" sortMissingLast="true" />

  <!-- Text field type for full-text search -->
  <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
      <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

  <!-- Enhanced text field for titles and important content -->
  <fieldType name="text_enhanced" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.EdgeNGramFilterFactory" minGramSize="2" maxGramSize="15"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

  <!-- Integer field type -->
  <fieldType name="pint" class="solr.IntPointField" docValues="true"/>

  <!-- Long field type -->
  <fieldType name="plong" class="solr.LongPointField" docValues="true"/>

  <!-- Double field type -->
  <fieldType name="pdouble" class="solr.DoublePointField" docValues="true"/>

  <!-- Boolean field type -->
  <fieldType name="boolean" class="solr.BoolField" sortMissingLast="true"/>

  <!-- Date field type -->
  <fieldType name="pdate" class="solr.DatePointField" docValues="true"/>

  <!-- Multi-valued string for tags and faceting -->
  <fieldType name="strings" class="solr.StrField" sortMissingLast="true" multiValued="true" docValues="true"/>

  <!-- Single string for exact matching and faceting -->
  <fieldType name="string_facet" class="solr.StrField" sortMissingLast="true" docValues="true"/>

  <!-- Fields -->

  <!-- Required Fields -->
  <field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
  <field name="_version_" type="plong" indexed="false" stored="false"/>

  <!-- Core Story Fields -->
  <field name="title" type="text_enhanced" indexed="true" stored="true" required="true"/>
  <field name="description" type="text_general" indexed="true" stored="true"/>
  <field name="sourceUrl" type="string" indexed="true" stored="true"/>
  <field name="coverPath" type="string" indexed="false" stored="true"/>
  <field name="wordCount" type="pint" indexed="true" stored="true"/>
  <field name="rating" type="pint" indexed="true" stored="true"/>
  <field name="averageStoryRating" type="pdouble" indexed="true" stored="true"/>
  <field name="volume" type="pint" indexed="true" stored="true"/>

  <!-- Reading Status Fields -->
  <field name="isRead" type="boolean" indexed="true" stored="true"/>
  <field name="readingPosition" type="pint" indexed="true" stored="true"/>
  <field name="lastReadAt" type="pdate" indexed="true" stored="true"/>
  <field name="lastRead" type="pdate" indexed="true" stored="true"/>

  <!-- Author Fields -->
  <field name="authorId" type="string" indexed="true" stored="true"/>
  <field name="authorName" type="text_enhanced" indexed="true" stored="true"/>
  <field name="authorName_facet" type="string_facet" indexed="true" stored="false"/>

  <!-- Series Fields -->
  <field name="seriesId" type="string" indexed="true" stored="true"/>
  <field name="seriesName" type="text_enhanced" indexed="true" stored="true"/>
  <field name="seriesName_facet" type="string_facet" indexed="true" stored="false"/>

  <!-- Tag Fields -->
  <field name="tagNames" type="strings" indexed="true" stored="true"/>
  <field name="tagNames_facet" type="strings" indexed="true" stored="false"/>

  <!-- Timestamp Fields -->
  <field name="createdAt" type="pdate" indexed="true" stored="true"/>
  <field name="updatedAt" type="pdate" indexed="true" stored="true"/>
  <field name="dateAdded" type="pdate" indexed="true" stored="true"/>

  <!-- Search-specific Fields -->
  <field name="searchScore" type="pdouble" indexed="false" stored="true"/>
  <field name="highlights" type="strings" indexed="false" stored="true"/>

  <!-- Combined search field for general queries -->
  <field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>

  <!-- Copy Fields for comprehensive search -->
  <copyField source="title" dest="text"/>
  <copyField source="description" dest="text"/>
  <copyField source="authorName" dest="text"/>
  <copyField source="seriesName" dest="text"/>
  <copyField source="tagNames" dest="text"/>

  <!-- Copy Fields for faceting -->
  <copyField source="authorName" dest="authorName_facet"/>
  <copyField source="seriesName" dest="seriesName_facet"/>
  <copyField source="tagNames" dest="tagNames_facet"/>

  <!-- Default Search Field -->

  <!-- UniqueKey -->
  <uniqueKey>id</uniqueKey>

</schema>
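The stories schema funnels title, description, author, series, and tags into one unstored multiValued `text` field via `copyField`, and that field is what the handler's `df` default searches. A sketch of the flattening (the `StoryDoc` shape is inferred from the stored fields above and is illustrative, not the application's actual DTO):

```typescript
// Shape inferred from the stored fields in the stories schema above.
interface StoryDoc {
  title: string;
  description?: string;
  authorName?: string;
  seriesName?: string;
  tagNames?: string[];
}

// Mirror the copyField rules: every source value lands in the
// multiValued 'text' field that the default search field targets.
function combinedTextField(doc: StoryDoc): string[] {
  return [
    doc.title,
    doc.description,
    doc.authorName,
    doc.seriesName,
    ...(doc.tagNames ?? []),
  ].filter((v): v is string => v !== undefined);
}

console.log(combinedTextField({ title: 'The Hobbit', authorName: 'Tolkien', tagNames: ['fantasy'] }));
// ['The Hobbit', 'Tolkien', 'fantasy']
```

Solr performs this copy itself at index time; the sketch only shows why a bare `q=hobbit` query can match on author or tag as well as title.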
153
solr/stories/conf/solrconfig.xml
Executable file
@@ -0,0 +1,153 @@
<?xml version="1.0" encoding="UTF-8" ?>
<!--
  Solr Configuration for StoryCove Stories Core
  Optimized for story search with highlighting and faceting
-->
<config>
  <luceneMatchVersion>9.9.0</luceneMatchVersion>

  <!-- DataDir configuration -->
  <dataDir>${solr.data.dir:}</dataDir>

  <!-- Directory Factory -->
  <directoryFactory name="DirectoryFactory"
                    class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>

  <!-- CodecFactory -->
  <codecFactory class="solr.SchemaCodecFactory"/>

  <!-- Index Configuration -->
  <indexConfig>
    <lockType>${solr.lock.type:native}</lockType>
    <infoStream>true</infoStream>
  </indexConfig>

  <!-- JMX Configuration -->
  <jmx />

  <!-- Update Handler -->
  <updateHandler class="solr.DirectUpdateHandler2">
    <updateLog>
      <str name="dir">${solr.ulog.dir:}</str>
      <int name="numVersionBuckets">${solr.ulog.numVersionBuckets:65536}</int>
    </updateLog>

    <autoCommit>
      <maxTime>15000</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>

    <autoSoftCommit>
      <maxTime>1000</maxTime>
    </autoSoftCommit>
  </updateHandler>

  <!-- Query Configuration -->
  <query>
    <maxBooleanClauses>1024</maxBooleanClauses>
    <filterCache class="solr.CaffeineCache"
                 size="512"
                 initialSize="512"
                 autowarmCount="0"/>
    <queryResultCache class="solr.CaffeineCache"
                      size="512"
                      initialSize="512"
                      autowarmCount="0"/>
    <documentCache class="solr.CaffeineCache"
                   size="512"
                   initialSize="512"
                   autowarmCount="0"/>
    <enableLazyFieldLoading>true</enableLazyFieldLoading>
  </query>

  <!-- Request Dispatcher -->
  <requestDispatcher handleSelect="false" >
    <requestParsers enableRemoteStreaming="true"
                    multipartUploadLimitInKB="2048000"
                    formdataUploadLimitInKB="2048"
                    addHttpRequestToContext="false"/>
    <httpCaching never304="true" />
  </requestDispatcher>

  <!-- Request Handlers -->

  <!-- Standard Select Handler -->
  <requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <int name="rows">10</int>
      <str name="df">text</str>
      <str name="wt">json</str>
      <str name="indent">true</str>
      <str name="hl">true</str>
      <str name="hl.fl">title,description</str>
      <str name="hl.simple.pre">&lt;em&gt;</str>
      <str name="hl.simple.post">&lt;/em&gt;</str>
      <str name="hl.fragsize">150</str>
      <str name="hl.maxAnalyzedChars">51200</str>
      <str name="facet">true</str>
      <str name="facet.field">authorName</str>
      <str name="facet.field">tagNames</str>
      <str name="facet.field">seriesName</str>
      <str name="facet.field">rating</str>
      <str name="facet.field">isRead</str>
      <str name="facet.mincount">1</str>
      <str name="facet.sort">count</str>
    </lst>
  </requestHandler>

  <!-- Update Handler -->
  <requestHandler name="/update" class="solr.UpdateRequestHandler" />

  <!-- Admin Handlers -->
  <requestHandler name="/admin/ping" class="solr.PingRequestHandler">
    <lst name="invariants">
      <str name="q">*:*</str>
    </lst>
    <lst name="defaults">
      <str name="echoParams">all</str>
    </lst>
  </requestHandler>

  <!-- More Like This Handler -->
  <requestHandler name="/mlt" class="solr.MoreLikeThisHandler">
    <lst name="defaults">
      <str name="mlt.fl">title,description</str>
      <int name="mlt.mindf">2</int>
      <int name="mlt.mintf">2</int>
      <str name="mlt.qf">title^2.0 description^1.0</str>
      <int name="rows">5</int>
    </lst>
  </requestHandler>

  <!-- Suggester Handler -->
  <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
    <lst name="defaults">
      <str name="suggest">true</str>
      <str name="suggest.count">10</str>
    </lst>
    <arr name="components">
      <str>suggest</str>
    </arr>
  </requestHandler>

  <!-- Search Components -->
  <searchComponent name="suggest" class="solr.SuggestComponent">
    <lst name="suggester">
      <str name="name">storySuggester</str>
      <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
      <str name="dictionaryImpl">DocumentDictionaryFactory</str>
      <str name="field">title</str>
      <str name="weightField">rating</str>
      <str name="suggestAnalyzerFieldType">text_general</str>
      <str name="buildOnStartup">false</str>
      <str name="buildOnCommit">false</str>
    </lst>
  </searchComponent>

  <!-- Response Writers -->
  <queryResponseWriter name="json" class="solr.JSONResponseWriter">
    <str name="content-type">application/json; charset=UTF-8</str>
  </queryResponseWriter>

</config>
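The `storySuggester` is configured with `buildOnStartup=false` and `buildOnCommit=false`, so a client must issue one request with `suggest.build=true` before suggestions return anything. A sketch of the two request URLs a caller would construct (core path inferred from the file location; the helper is hypothetical):

```typescript
// Build a /suggest request URL for the storySuggester configured
// above. Pass build=true once after (re)indexing, since neither
// buildOnStartup nor buildOnCommit is enabled.
function suggestUrl(q: string, build = false): string {
  const params = new URLSearchParams({
    'suggest.q': q,
    'suggest.dictionary': 'storySuggester',
  });
  if (build) params.set('suggest.build', 'true');
  return `/solr/stories/suggest?${params.toString()}`;
}

console.log(suggestUrl('hob'));
// '/solr/stories/suggest?suggest.q=hob&suggest.dictionary=storySuggester'
```

Deferring the build keeps commits cheap during bulk reindexing; the trade-off is that the reindex flow has to remember the one-time build call.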
34
solr/stories/conf/stopwords.txt
Executable file
@@ -0,0 +1,34 @@
# English stopwords for story search
a
an
and
are
as
at
be
but
by
for
if
in
into
is
it
no
not
of
on
or
such
that
the
their
then
there
these
they
this
to
was
will
with
16
solr/stories/conf/synonyms.txt
Executable file
@@ -0,0 +1,16 @@
# Synonyms for story search
# Format: word1,word2,word3 => synonym1,synonym2
fantasy,magical,magic
sci-fi,science fiction,scifi
romance,romantic,love
mystery,detective,crime
adventure,action
horror,scary,frightening
drama,dramatic
comedy,funny,humor
thriller,suspense
historical,history
contemporary,modern
short,brief
novel,book
story,tale,narrative