MongoDB Bulk Operations and Performance Optimization: Advanced Batch Processing and High-Throughput Data Management

High-performance data processing applications require sophisticated bulk operation strategies that can handle large volumes of data efficiently while maintaining consistency and performance under varying load conditions. Traditional row-by-row database operations become prohibitively slow when processing thousands or millions of records, leading to application bottlenecks, extended processing times, and resource exhaustion in production environments.

MongoDB provides comprehensive bulk operation capabilities that enable high-throughput batch processing for insertions, updates, and deletions through optimized write strategies and intelligent batching mechanisms. Unlike traditional databases that require complex stored procedures or application-level batching logic, MongoDB's bulk operations leverage server-side optimization, write concern management, and atomic operation guarantees to deliver superior performance for large-scale data processing scenarios.
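
As a quick illustration before diving deeper, the core primitive behind these capabilities is the driver-level bulkWrite() call, which sends a mixed list of writes to the server in a single round trip. The short sketch below assumes a local deployment with an illustrative shop database and products collection; the documents and filters are placeholders.

// Minimal sketch of the native bulk API, assuming a local deployment and
// illustrative collection/field names
const { MongoClient } = require('mongodb');

async function quickBulkExample() {
  const client = new MongoClient('mongodb://localhost:27017');
  try {
    await client.connect();
    const products = client.db('shop').collection('products');

    // One bulkWrite can mix inserts, updates, and deletes; with ordered: false
    // the server continues past individual failures and reports them together
    const result = await products.bulkWrite([
      { insertOne: { document: { sku: 'SKU-001', price: 19.99 } } },
      { updateOne: { filter: { sku: 'SKU-002' }, update: { $inc: { stock: 25 } }, upsert: true } },
      { deleteMany: { filter: { status: 'discontinued' } } }
    ], { ordered: false, writeConcern: { w: 'majority' } });

    console.log(result.insertedCount, result.modifiedCount, result.deletedCount);
  } finally {
    await client.close();
  }
}

With ordered: false, the server keeps processing the remaining operations when an individual write fails and aggregates the errors in the result, which is the behavior the rest of this article builds on.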

The Traditional Batch Processing Challenge

Conventional database batch processing approaches often struggle with performance and complexity:

-- Traditional PostgreSQL batch processing - limited throughput and complex error handling

-- Basic batch insert approach with poor performance characteristics
CREATE OR REPLACE FUNCTION batch_insert_products(
    product_data JSONB[]
) RETURNS TABLE(
    inserted_count INTEGER,
    failed_count INTEGER,
    processing_time_ms INTEGER,
    error_details JSONB
) AS $$
DECLARE
    product_record JSONB;
    insert_count INTEGER := 0;
    error_count INTEGER := 0;
    start_time TIMESTAMP := clock_timestamp();
    current_error TEXT;
    error_list JSONB := '[]'::JSONB;
BEGIN

    -- Individual row processing (extremely inefficient for large datasets)
    FOREACH product_record IN ARRAY product_data
    LOOP
        BEGIN
            INSERT INTO products (
                product_name,
                category,
                price,
                stock_quantity,
                supplier_id,
                created_at,
                updated_at,

                -- Basic validation during insertion
                sku,
                description,
                weight_kg,
                dimensions_cm,

                -- Limited metadata support
                tags,
                attributes
            )
            VALUES (
                product_record->>'product_name',
                product_record->>'category',
                (product_record->>'price')::DECIMAL(10,2),
                (product_record->>'stock_quantity')::INTEGER,
                (product_record->>'supplier_id')::UUID,
                CURRENT_TIMESTAMP,
                CURRENT_TIMESTAMP,

                -- Manual data extraction and validation
                product_record->>'sku',
                product_record->>'description',
                (product_record->>'weight_kg')::DECIMAL(8,3),
                product_record->>'dimensions_cm',

                -- Limited JSON processing capabilities
                string_to_array(product_record->>'tags', ','),
                product_record->'attributes'
            );

            insert_count := insert_count + 1;

        EXCEPTION 
            WHEN unique_violation THEN
                error_count := error_count + 1;
                error_list := error_list || jsonb_build_object(
                    'sku', product_record->>'sku',
                    'error', 'Duplicate SKU violation',
                    'error_code', 'UNIQUE_VIOLATION'
                );
            WHEN check_violation THEN
                error_count := error_count + 1;
                error_list := error_list || jsonb_build_object(
                    'sku', product_record->>'sku',
                    'error', 'Data validation failed',
                    'error_code', 'CHECK_VIOLATION'
                );
            WHEN OTHERS THEN
                error_count := error_count + 1;
                GET STACKED DIAGNOSTICS current_error = MESSAGE_TEXT;
                error_list := error_list || jsonb_build_object(
                    'sku', product_record->>'sku',
                    'error', current_error,
                    'error_code', 'GENERAL_ERROR'
                );
        END;
    END LOOP;

    RETURN QUERY SELECT 
        insert_count,
        error_count,
        (EXTRACT(EPOCH FROM clock_timestamp() - start_time) * 1000)::INTEGER,
        error_list;
END;
$$ LANGUAGE plpgsql;

-- Batch update operation with limited optimization
CREATE OR REPLACE FUNCTION batch_update_inventory(
    updates JSONB[]
) RETURNS TABLE(
    updated_count INTEGER,
    not_found_count INTEGER,
    error_count INTEGER,
    processing_details JSONB
) AS $$
DECLARE
    update_record JSONB;
    updated_rows INTEGER := 0;
    not_found_rows INTEGER := 0;
    error_rows INTEGER := 0;
    temp_table_name TEXT := 'temp_inventory_updates_' || extract(epoch from now())::INTEGER;
    processing_stats JSONB := '{}'::JSONB;
BEGIN

    -- Create temporary table for batch processing (complex setup)
    EXECUTE format('
        CREATE TEMP TABLE %I (
            sku VARCHAR(100),
            stock_adjustment INTEGER,
            price_adjustment DECIMAL(10,2),
            update_reason VARCHAR(200),
            batch_id UUID DEFAULT gen_random_uuid()
        )', temp_table_name);

    -- Insert updates into temporary table
    FOREACH update_record IN ARRAY updates
    LOOP
        EXECUTE format('
            INSERT INTO %I (sku, stock_adjustment, price_adjustment, update_reason)
            VALUES ($1, $2, $3, $4)', 
            temp_table_name
        ) USING 
            update_record->>'sku',
            (update_record->>'stock_adjustment')::INTEGER,
            (update_record->>'price_adjustment')::DECIMAL(10,2),
            update_record->>'update_reason';
    END LOOP;

    -- Perform batch update with limited atomicity
    EXECUTE format('
        WITH update_results AS (
            UPDATE products p
            SET 
                stock_quantity = p.stock_quantity + t.stock_adjustment,
                price = CASE 
                    WHEN t.price_adjustment IS NOT NULL THEN p.price + t.price_adjustment
                    ELSE p.price
                END,
                updated_at = CURRENT_TIMESTAMP,
                last_update_reason = t.update_reason
            FROM %I t
            WHERE p.sku = t.sku
            RETURNING p.sku, p.stock_quantity, p.price
        ),
        stats AS (
            SELECT COUNT(*) as updated_count FROM update_results
        )
        SELECT updated_count FROM stats', 
        temp_table_name
    ) INTO updated_rows;

    -- Calculate not found items (complex logic)
    EXECUTE format('
        SELECT COUNT(*)
        FROM %I t
        WHERE NOT EXISTS (
            SELECT 1 FROM products p WHERE p.sku = t.sku
        )', temp_table_name
    ) INTO not_found_rows;

    -- Cleanup temporary table
    EXECUTE format('DROP TABLE %I', temp_table_name);

    processing_stats := jsonb_build_object(
        'total_processed', array_length(updates, 1),
        'success_rate', CASE 
            WHEN array_length(updates, 1) > 0 THEN 
                ROUND((updated_rows::DECIMAL / array_length(updates, 1)) * 100, 2)
            ELSE 0
        END
    );

    RETURN QUERY SELECT 
        updated_rows,
        not_found_rows,
        error_rows,
        processing_stats;
END;
$$ LANGUAGE plpgsql;

-- Complex batch delete with limited performance optimization
WITH batch_delete_products AS (
    -- Identify products to delete based on complex criteria
    SELECT 
        product_id,
        sku,
        category,
        last_sold_date,
        stock_quantity,

        -- Complex deletion logic
        CASE 
            WHEN stock_quantity = 0 AND last_sold_date < CURRENT_DATE - INTERVAL '365 days' THEN 'discontinued'
            WHEN category = 'seasonal' AND EXTRACT(MONTH FROM CURRENT_DATE) NOT BETWEEN 6 AND 8 THEN 'seasonal_cleanup'
            WHEN supplier_id IN (
                SELECT supplier_id FROM suppliers WHERE status = 'inactive'
            ) THEN 'supplier_inactive'
            ELSE 'no_delete'
        END as delete_reason

    FROM products
    WHERE 
        -- Multi-condition filtering
        (stock_quantity = 0 AND last_sold_date < CURRENT_DATE - INTERVAL '365 days')
        OR (category = 'seasonal' AND EXTRACT(MONTH FROM CURRENT_DATE) NOT BETWEEN 6 AND 8)
        OR supplier_id IN (
            SELECT supplier_id FROM suppliers WHERE status = 'inactive'
        )
),
deletion_validation AS (
    -- Validate deletion constraints (complex dependency checking)
    SELECT 
        bdp.*,
        CASE 
            WHEN EXISTS (
                SELECT 1 FROM order_items oi 
                WHERE oi.product_id = bdp.product_id 
                AND oi.order_date > CURRENT_DATE - INTERVAL '90 days'
            ) THEN 'recent_orders_exist'
            WHEN EXISTS (
                SELECT 1 FROM shopping_carts sc 
                WHERE sc.product_id = bdp.product_id
            ) THEN 'in_shopping_carts'
            WHEN EXISTS (
                SELECT 1 FROM wishlists w 
                WHERE w.product_id = bdp.product_id
            ) THEN 'in_wishlists'
            ELSE 'safe_to_delete'
        END as validation_status

    FROM batch_delete_products bdp
    WHERE bdp.delete_reason != 'no_delete'
),
safe_deletions AS (
    -- Only proceed with safe deletions
    SELECT product_id, sku, delete_reason
    FROM deletion_validation
    WHERE validation_status = 'safe_to_delete'
),
delete_execution AS (
    -- Perform the actual deletion (limited batch efficiency)
    DELETE FROM products
    WHERE product_id IN (
        SELECT product_id FROM safe_deletions
    )
    RETURNING product_id, sku
)
SELECT 
    COUNT(*) as deleted_count,

    -- Limited statistics and reporting
    json_agg(
        json_build_object(
            'sku', de.sku,
            'delete_reason', sd.delete_reason
        )
    ) as deleted_items,

    -- Processing summary
    (
        SELECT COUNT(*) 
        FROM batch_delete_products 
        WHERE delete_reason != 'no_delete'
    ) as candidates_identified,

    (
        SELECT COUNT(*) 
        FROM deletion_validation 
        WHERE validation_status != 'safe_to_delete'
    ) as unsafe_deletions_blocked

FROM delete_execution de
JOIN safe_deletions sd ON de.product_id = sd.product_id;

-- Problems with traditional batch processing approaches:
-- 1. Poor performance due to row-by-row processing instead of set-based operations
-- 2. Complex error handling that doesn't scale with data volume
-- 3. Limited transaction management and rollback capabilities for batch operations
-- 4. No built-in support for partial failures and retry mechanisms
-- 5. Difficulty in maintaining data consistency during large batch operations
-- 6. Complex temporary table management and cleanup requirements
-- 7. Limited monitoring and progress tracking capabilities
-- 8. No native support for ordered vs unordered bulk operations
-- 9. Inefficient memory usage and connection management for large batches
-- 10. Lack of automatic optimization based on operation types and data patterns

-- Attempt at optimized bulk insert (still limited)
INSERT INTO products (
    product_name, category, price, stock_quantity, 
    supplier_id, sku, description, created_at, updated_at
)
SELECT 
    batch_data.product_name,
    batch_data.category,
    batch_data.price::DECIMAL(10,2),
    batch_data.stock_quantity::INTEGER,
    batch_data.supplier_id::UUID,
    batch_data.sku,
    batch_data.description,
    CURRENT_TIMESTAMP,
    CURRENT_TIMESTAMP
FROM (
    VALUES 
        ('Product A', 'Electronics', '299.99', '100', '123e4567-e89b-12d3-a456-426614174000', 'SKU001', 'Description A'),
        ('Product B', 'Electronics', '199.99', '50', '123e4567-e89b-12d3-a456-426614174000', 'SKU002', 'Description B')
    -- Limited to small static datasets
) AS batch_data(product_name, category, price, stock_quantity, supplier_id, sku, description)
ON CONFLICT (sku) DO UPDATE SET
    stock_quantity = products.stock_quantity + EXCLUDED.stock_quantity,
    price = EXCLUDED.price,
    updated_at = CURRENT_TIMESTAMP;

-- Traditional approach limitations:
-- 1. No dynamic batch size optimization based on system resources
-- 2. Limited support for complex document structures and nested data
-- 3. Poor error reporting and partial failure handling
-- 4. No built-in retry logic for transient failures
-- 5. Complex application logic required for batch orchestration
-- 6. Limited write concern and consistency level management
-- 7. No automatic performance monitoring and optimization
-- 8. Difficulty in handling mixed operation types (insert, update, delete) efficiently
-- 9. No native support for bulk operations with custom validation logic
-- 10. Limited scalability for distributed database deployments

MongoDB addresses these limitations with native bulk write APIs, intelligent batching, and rich error reporting:

// MongoDB Advanced Bulk Operations - high-performance batch processing with intelligent optimization
const { MongoClient, ObjectId } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017');
const db = client.db('advanced_bulk_operations');

// Comprehensive MongoDB Bulk Operations Manager
class AdvancedBulkOperationsManager {
  constructor(db, config = {}) {
    this.db = db;
    this.collections = {
      products: db.collection('products'),
      inventory: db.collection('inventory'),
      orders: db.collection('orders'),
      customers: db.collection('customers'),
      bulkOperationLogs: db.collection('bulk_operation_logs'),
      performanceMetrics: db.collection('performance_metrics')
    };

    // Advanced bulk operation configuration
    this.config = {
      defaultBatchSize: config.defaultBatchSize || 1000,
      maxBatchSize: config.maxBatchSize || 10000,
      maxRetries: config.maxRetries || 3,
      retryDelay: config.retryDelay || 1000,

      // Performance optimization settings
      enableOptimisticBatching: config.enableOptimisticBatching !== false,
      enableAdaptiveBatchSize: config.enableAdaptiveBatchSize !== false,
      enablePerformanceMonitoring: config.enablePerformanceMonitoring !== false,
      enableErrorAggregation: config.enableErrorAggregation !== false,

      // Write concern and consistency settings
      writeConcern: config.writeConcern || {
        w: 'majority',
        j: true,
        wtimeout: 30000
      },

      // Bulk operation strategies
      unorderedOperations: config.unorderedOperations !== false,
      enablePartialFailures: config.enablePartialFailures !== false,
      enableTransactionalBulk: config.enableTransactionalBulk || false,

      // Memory and resource management
      maxMemoryUsage: config.maxMemoryUsage || '1GB',
      enableGarbageCollection: config.enableGarbageCollection !== false,
      parallelOperations: config.parallelOperations || 4
    };

    // Performance tracking
    this.performanceMetrics = {
      operationsPerSecond: new Map(),
      averageBatchTime: new Map(),
      errorRates: new Map(),
      throughputHistory: []
    };

    this.initializeBulkOperations();
  }

  async initializeBulkOperations() {
    console.log('Initializing advanced bulk operations system...');

    try {
      // Create optimized indexes for bulk operations
      await this.setupOptimizedIndexes();

      // Initialize performance monitoring
      if (this.config.enablePerformanceMonitoring) {
        await this.setupPerformanceMonitoring();
      }

      // Setup bulk operation logging
      await this.setupBulkOperationLogging();

      console.log('Bulk operations system initialized successfully');

    } catch (error) {
      console.error('Error initializing bulk operations:', error);
      throw error;
    }
  }

  async setupOptimizedIndexes() {
    console.log('Setting up indexes optimized for bulk operations...');

    try {
      // Product collection indexes for efficient bulk operations
      await this.collections.products.createIndexes([
        { key: { sku: 1 }, unique: true, background: true },
        { key: { category: 1, createdAt: -1 }, background: true },
        { key: { supplier_id: 1, status: 1 }, background: true },
        { key: { 'pricing.lastUpdated': -1 }, background: true, sparse: true },
        { key: { tags: 1 }, background: true },
        { key: { 'inventory.lastStockUpdate': -1 }, background: true, sparse: true }
      ]);

      // Inventory collection indexes
      await this.collections.inventory.createIndexes([
        { key: { product_id: 1, warehouse_id: 1 }, unique: true, background: true },
        { key: { lastUpdated: -1 }, background: true },
        { key: { quantity: 1, status: 1 }, background: true }
      ]);

      console.log('Bulk operation indexes created successfully');

    } catch (error) {
      console.error('Error creating bulk operation indexes:', error);
      throw error;
    }
  }
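
  // setupPerformanceMonitoring() and setupBulkOperationLogging() are invoked from
  // initializeBulkOperations() above but not shown in full; the minimal sketches
  // below show one plausible implementation. The collection layout and the TTL
  // window are assumptions, not fixed APIs.
  async setupPerformanceMonitoring() {
    // Seed a baseline metrics document that later samples can be appended to
    await this.collections.performanceMetrics.updateOne(
      { _id: 'bulk_operations_baseline' },
      { $setOnInsert: { createdAt: new Date(), samples: [] } },
      { upsert: true }
    );
  }

  async setupBulkOperationLogging() {
    // A TTL index keeps the operation log bounded (30-day retention assumed)
    await this.collections.bulkOperationLogs.createIndex(
      { timestamp: 1 },
      { expireAfterSeconds: 60 * 60 * 24 * 30 }
    );
  }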

  async performAdvancedBulkInsert(documents, options = {}) {
    console.log(`Performing advanced bulk insert for ${documents.length} documents...`);
    const startTime = Date.now();

    try {
      // Validate and prepare documents for bulk insertion
      const preparedDocuments = await this.prepareDocumentsForInsertion(documents, options);

      // Determine optimal batch configuration
      const batchConfig = this.calculateOptimalBatchConfiguration(preparedDocuments, 'insert');

      // Execute bulk insert with advanced error handling
      const insertResults = await this.executeBulkInsertBatches(preparedDocuments, batchConfig, options);

      // Process and aggregate results
      const aggregatedResults = await this.aggregateBulkResults(insertResults, 'insert');

      // Log operation performance
      await this.logBulkOperation('bulk_insert', {
        documentCount: documents.length,
        batchConfiguration: batchConfig,
        results: aggregatedResults,
        processingTime: Date.now() - startTime
      });

      return {
        operation: 'bulk_insert',
        totalDocuments: documents.length,
        successful: aggregatedResults.successfulInserts,
        failed: aggregatedResults.failedInserts,

        // Detailed results
        insertedIds: aggregatedResults.insertedIds,
        errors: aggregatedResults.errors,
        duplicates: aggregatedResults.duplicateErrors,

        // Performance metrics
        processingTime: Date.now() - startTime,
        documentsPerSecond: Math.round((aggregatedResults.successfulInserts / (Date.now() - startTime)) * 1000),
        batchesProcessed: insertResults.length,
        averageBatchTime: insertResults.reduce((sum, r) => sum + r.processingTime, 0) / insertResults.length,

        // Configuration used
        batchConfiguration: batchConfig,

        // Quality metrics
        successRate: (aggregatedResults.successfulInserts / documents.length) * 100,
        errorRate: (aggregatedResults.failedInserts / documents.length) * 100
      };

    } catch (error) {
      console.error('Bulk insert operation failed:', error);

      // Log failed operation
      await this.logBulkOperation('bulk_insert_failed', {
        documentCount: documents.length,
        error: error.message,
        processingTime: Date.now() - startTime
      });

      throw error;
    }
  }

  async prepareDocumentsForInsertion(documents, options = {}) {
    console.log('Preparing documents for bulk insertion with validation and enhancement...');

    const preparedDocuments = [];
    const validationErrors = [];

    for (let i = 0; i < documents.length; i++) {
      const document = documents[i];

      try {
        // Document validation and standardization
        const preparedDoc = {
          ...document,

          // Ensure consistent ObjectId handling
          _id: document._id || new ObjectId(),

          // Standardize timestamps
          createdAt: document.createdAt || new Date(),
          updatedAt: document.updatedAt || new Date(),

          // Add bulk operation metadata
          bulkOperationMetadata: {
            batchId: options.batchId || new ObjectId(),
            sourceOperation: 'bulk_insert',
            insertionIndex: i,
            processingTimestamp: new Date()
          }
        };

        // Enhanced document preparation for specific collections
        // (applied before any custom transform so the enhancements are not lost)
        if (options.collection === 'products') {
          preparedDoc.searchKeywords = this.generateSearchKeywords(preparedDoc);
          preparedDoc.categoryHierarchy = this.buildCategoryHierarchy(preparedDoc.category);
          preparedDoc.pricingTiers = this.calculatePricingTiers(preparedDoc.price);
        }

        // Apply custom document transformations if provided
        if (options.documentTransform) {
          const transformedDoc = await options.documentTransform(preparedDoc, i);
          preparedDocuments.push(transformedDoc);
        } else {
          preparedDocuments.push(preparedDoc);
        }

      } catch (validationError) {
        validationErrors.push({
          index: i,
          document: document,
          error: validationError.message
        });
      }
    }

    if (validationErrors.length > 0 && !options.allowPartialFailures) {
      throw new Error(`Document validation failed for ${validationErrors.length} documents`);
    }

    return {
      documents: preparedDocuments,
      validationErrors: validationErrors
    };
  }

  calculateOptimalBatchConfiguration(preparedDocuments, operationType) {
    console.log(`Calculating optimal batch configuration for ${operationType}...`);

    const documentCount = (preparedDocuments.documents || preparedDocuments.operations || preparedDocuments).length;
    const avgDocumentSize = this.estimateAverageDocumentSize(preparedDocuments);

    // Adaptive batch sizing based on document characteristics
    let optimalBatchSize = this.config.defaultBatchSize;

    // Adjust based on document size
    if (avgDocumentSize > 100000) { // Large documents (>100KB)
      optimalBatchSize = Math.min(100, this.config.defaultBatchSize);
    } else if (avgDocumentSize > 10000) { // Medium documents (>10KB)
      optimalBatchSize = Math.min(500, this.config.defaultBatchSize);
    } else { // Small documents
      optimalBatchSize = Math.min(this.config.maxBatchSize, documentCount);
    }

    // Adjust based on operation type
    const operationMultiplier = {
      'insert': 1.0,
      'update': 0.8,
      'delete': 1.2,
      'upsert': 0.7
    };

    optimalBatchSize = Math.round(optimalBatchSize * (operationMultiplier[operationType] || 1.0));

    // Calculate number of batches
    const numberOfBatches = Math.ceil(documentCount / optimalBatchSize);

    return {
      batchSize: optimalBatchSize,
      numberOfBatches: numberOfBatches,
      estimatedDocumentSize: avgDocumentSize,
      operationType: operationType,

      // Advanced configuration
      unordered: this.config.unorderedOperations,
      writeConcern: this.config.writeConcern,
      maxTimeMS: 30000,

      // Parallel processing configuration
      parallelBatches: Math.min(this.config.parallelOperations, numberOfBatches)
    };
  }

  async executeBulkInsertBatches(preparedDocuments, batchConfig, options = {}) {
    console.log(`Executing ${batchConfig.numberOfBatches} bulk insert batches...`);

    const documents = preparedDocuments.documents || preparedDocuments;
    const batchResults = [];
    const batches = this.createBatches(documents, batchConfig.batchSize);

    // Execute batches with parallel processing
    if (batchConfig.parallelBatches > 1) {
      const batchGroups = this.createBatchGroups(batches, batchConfig.parallelBatches);

      for (const batchGroup of batchGroups) {
        const groupResults = await Promise.all(
          batchGroup.map(batch => this.executeSingleInsertBatch(batch, batchConfig, options))
        );
        batchResults.push(...groupResults);
      }
    } else {
      // Sequential execution for ordered operations
      for (const batch of batches) {
        const result = await this.executeSingleInsertBatch(batch, batchConfig, options);
        batchResults.push(result);
      }
    }

    return batchResults;
  }

  async executeSingleInsertBatch(batchDocuments, batchConfig, options = {}) {
    const batchStartTime = Date.now();

    try {
      // Create collection reference
      const collection = options.collection ? this.db.collection(options.collection) : this.collections.products;

      // Configure bulk insert operation
      const insertOptions = {
        ordered: !batchConfig.unordered,
        writeConcern: batchConfig.writeConcern,
        maxTimeMS: batchConfig.maxTimeMS,
        bypassDocumentValidation: options.bypassValidation || false
      };

      // Execute bulk insert
      const insertResult = await collection.insertMany(batchDocuments, insertOptions);

      return {
        success: true,
        batchSize: batchDocuments.length,
        insertedCount: insertResult.insertedCount,
        insertedIds: insertResult.insertedIds,
        processingTime: Date.now() - batchStartTime,
        errors: [],

        // Performance metrics
        documentsPerSecond: Math.round((insertResult.insertedCount / (Date.now() - batchStartTime)) * 1000),
        avgDocumentProcessingTime: (Date.now() - batchStartTime) / batchDocuments.length
      };

    } catch (error) {
      console.error('Batch insert failed:', error);

      // Handle bulk write errors with detailed analysis
      if (error.name === 'BulkWriteError' || error.name === 'MongoBulkWriteError') {
        return this.processBulkWriteError(error, batchDocuments, batchStartTime);
      }

      return {
        success: false,
        batchSize: batchDocuments.length,
        insertedCount: 0,
        insertedIds: {},
        processingTime: Date.now() - batchStartTime,
        errors: [{
          error: error.message,
          errorCode: error.code,
          batchIndex: 0
        }]
      };
    }
  }

  processBulkWriteError(bulkError, batchDocuments, startTime) {
    console.log('Processing bulk write error with detailed analysis...');

    const processedResults = {
      success: false,
      batchSize: batchDocuments.length,
      insertedCount: bulkError.result?.insertedCount || 0,
      insertedIds: bulkError.result?.insertedIds || {},
      processingTime: Date.now() - startTime,
      errors: []
    };

    // Process individual write errors
    if (bulkError.writeErrors) {
      for (const writeError of bulkError.writeErrors) {
        processedResults.errors.push({
          index: writeError.index,
          error: writeError.errmsg,
          errorCode: writeError.code,
          document: batchDocuments[writeError.index]
        });
      }
    }

    // Process write concern errors
    if (bulkError.writeConcernErrors) {
      for (const wcError of bulkError.writeConcernErrors) {
        processedResults.errors.push({
          error: wcError.errmsg,
          errorCode: wcError.code,
          type: 'write_concern_error'
        });
      }
    }

    return processedResults;
  }

  async performAdvancedBulkUpdate(updates, options = {}) {
    console.log(`Performing advanced bulk update for ${updates.length} operations...`);
    const startTime = Date.now();

    try {
      // Prepare update operations
      const preparedUpdates = await this.prepareUpdateOperations(updates, options);

      // Calculate optimal batching strategy
      const batchConfig = this.calculateOptimalBatchConfiguration(preparedUpdates, 'update');

      // Execute bulk updates with error handling
      const updateResults = await this.executeBulkUpdateBatches(preparedUpdates, batchConfig, options);

      // Aggregate and analyze results
      const aggregatedResults = await this.aggregateBulkResults(updateResults, 'update');

      return {
        operation: 'bulk_update',
        totalOperations: updates.length,
        successful: aggregatedResults.successfulUpdates,
        failed: aggregatedResults.failedUpdates,
        modified: aggregatedResults.modifiedCount,
        matched: aggregatedResults.matchedCount,
        upserted: aggregatedResults.upsertedCount,

        // Detailed results
        errors: aggregatedResults.errors,
        upsertedIds: aggregatedResults.upsertedIds,

        // Performance metrics
        processingTime: Date.now() - startTime,
        operationsPerSecond: Math.round((aggregatedResults.successfulUpdates / (Date.now() - startTime)) * 1000),
        batchesProcessed: updateResults.length,

        // Update-specific metrics
        updateEfficiency: aggregatedResults.modifiedCount / Math.max(aggregatedResults.matchedCount, 1),

        batchConfiguration: batchConfig
      };

    } catch (error) {
      console.error('Bulk update operation failed:', error);
      throw error;
    }
  }

  async prepareUpdateOperations(updates, options = {}) {
    console.log('Preparing update operations with validation and optimization...');

    const preparedOperations = [];

    for (let i = 0; i < updates.length; i++) {
      const update = updates[i];

      // Standardize update operation structure
      const preparedOperation = {
        updateOne: {
          filter: update.filter || { _id: update._id },
          update: {
            $set: {
              ...update.$set,
              updatedAt: new Date(),
              'bulkOperationMetadata.lastBulkUpdate': new Date(),
              'bulkOperationMetadata.updateIndex': i
            },
            ...(update.$inc && { $inc: update.$inc }),
            ...(update.$unset && { $unset: update.$unset }),
            ...(update.$push && { $push: update.$push }),
            ...(update.$pull && { $pull: update.$pull })
          },
          upsert: update.upsert || options.upsert || false,
          arrayFilters: update.arrayFilters,
          hint: update.hint
        }
      };

      // Add conditional updates based on operation type
      if (options.operationType === 'inventory_update') {
        preparedOperation.updateOne.update.$set.lastStockUpdate = new Date();

        // Prevent negative inventory: $inc and $max cannot target the same field
        // in one update document, so enforce the floor through the filter instead
        // (decrements only match documents with sufficient quantity)
        if (update.$inc && update.$inc.quantity < 0) {
          preparedOperation.updateOne.filter = {
            ...preparedOperation.updateOne.filter,
            quantity: { $gte: Math.abs(update.$inc.quantity) }
          };
        }
      }

      preparedOperations.push(preparedOperation);
    }

    return preparedOperations;
  }

  async executeBulkUpdateBatches(operations, batchConfig, options = {}) {
    console.log(`Executing ${batchConfig.numberOfBatches} bulk update batches...`);

    const collection = options.collection ? this.db.collection(options.collection) : this.collections.products;
    const batches = this.createBatches(operations, batchConfig.batchSize);
    const batchResults = [];

    for (const batch of batches) {
      const batchStartTime = Date.now();

      try {
        // Execute bulk write operations
        const bulkResult = await collection.bulkWrite(batch, {
          ordered: !batchConfig.unordered,
          writeConcern: batchConfig.writeConcern,
          maxTimeMS: batchConfig.maxTimeMS
        });

        batchResults.push({
          success: true,
          batchSize: batch.length,
          matchedCount: bulkResult.matchedCount,
          modifiedCount: bulkResult.modifiedCount,
          upsertedCount: bulkResult.upsertedCount,
          upsertedIds: bulkResult.upsertedIds,
          processingTime: Date.now() - batchStartTime,
          errors: []
        });

      } catch (error) {
        console.error('Bulk update batch failed:', error);

        if (error.name === 'BulkWriteError' || error.name === 'MongoBulkWriteError') {
          batchResults.push(this.processBulkWriteError(error, batch, batchStartTime));
        } else {
          batchResults.push({
            success: false,
            batchSize: batch.length,
            matchedCount: 0,
            modifiedCount: 0,
            processingTime: Date.now() - batchStartTime,
            errors: [{ error: error.message, errorCode: error.code }]
          });
        }
      }
    }

    return batchResults;
  }

  async performAdvancedBulkDelete(deletions, options = {}) {
    console.log(`Performing advanced bulk delete for ${deletions.length} operations...`);
    const startTime = Date.now();

    try {
      // Prepare deletion operations with safety checks
      const preparedDeletions = await this.prepareDeletionOperations(deletions, options);

      // Calculate optimal batching
      const batchConfig = this.calculateOptimalBatchConfiguration(preparedDeletions, 'delete');

      // Execute bulk deletions
      const deleteResults = await this.executeBulkDeleteBatches(preparedDeletions, batchConfig, options);

      // Aggregate results
      const aggregatedResults = await this.aggregateBulkResults(deleteResults, 'delete');

      return {
        operation: 'bulk_delete',
        totalOperations: deletions.length,
        successful: aggregatedResults.successfulDeletes,
        failed: aggregatedResults.failedDeletes,
        deletedCount: aggregatedResults.deletedCount,

        // Safety and audit information
        safeguardsApplied: preparedDeletions.safeguards || [],
        blockedDeletions: preparedDeletions.blocked || [],

        // Performance metrics
        processingTime: Date.now() - startTime,
        operationsPerSecond: Math.round((aggregatedResults.successfulDeletes / (Date.now() - startTime)) * 1000),

        errors: aggregatedResults.errors,
        batchConfiguration: batchConfig
      };

    } catch (error) {
      console.error('Bulk delete operation failed:', error);
      throw error;
    }
  }

  async prepareDeletionOperations(deletions, options = {}) {
    console.log('Preparing deletion operations with safety validations...');

    const preparedOperations = [];
    const blockedDeletions = [];
    const appliedSafeguards = [];

    for (const deletion of deletions) {
      // Apply safety checks for deletion operations
      const safetyCheck = await this.validateDeletionSafety(deletion, options);

      if (safetyCheck.safe) {
        preparedOperations.push({
          deleteOne: {
            filter: deletion.filter || { _id: deletion._id },
            hint: deletion.hint,
            collation: deletion.collation
          }
        });
      } else {
        blockedDeletions.push({
          operation: deletion,
          reason: safetyCheck.reason,
          dependencies: safetyCheck.dependencies
        });
      }

      if (safetyCheck.safeguards) {
        appliedSafeguards.push(...safetyCheck.safeguards);
      }
    }

    return {
      operations: preparedOperations,
      blocked: blockedDeletions,
      safeguards: appliedSafeguards
    };
  }

  async validateDeletionSafety(deletion, options = {}) {
    // Implement comprehensive safety checks for deletion operations
    const safeguards = [];
    const dependencies = [];

    // Check for referential integrity
    if (options.checkReferences !== false) {
      const refCheck = await this.checkReferentialIntegrity(deletion.filter);
      if (refCheck.hasReferences) {
        dependencies.push(...refCheck.references);
      }
    }

    // Check for recent activity
    if (options.checkRecentActivity !== false) {
      const activityCheck = await this.checkRecentActivity(deletion.filter);
      if (activityCheck.hasRecentActivity) {
        safeguards.push('recent_activity_detected');
      }
    }

    // Determine if deletion is safe
    const safe = dependencies.length === 0 && (!options.requireConfirmation || deletion.confirmed);

    return {
      safe: safe,
      reason: safe ? null : `Dependencies found: ${dependencies.join(', ')}`,
      dependencies: dependencies,
      safeguards: safeguards
    };
  }
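
  // executeBulkDeleteBatches() is called from performAdvancedBulkDelete() above
  // but not shown; this minimal sketch mirrors the update path, and the exact
  // result shape is an assumption kept consistent with aggregateBulkResults()
  async executeBulkDeleteBatches(preparedDeletions, batchConfig, options = {}) {
    const collection = options.collection ? this.db.collection(options.collection) : this.collections.products;
    const batches = this.createBatches(preparedDeletions.operations, batchConfig.batchSize);
    const batchResults = [];

    for (const batch of batches) {
      const batchStartTime = Date.now();

      try {
        // Each batch is a list of { deleteOne: { filter, ... } } operations
        const bulkResult = await collection.bulkWrite(batch, {
          ordered: !batchConfig.unordered,
          writeConcern: batchConfig.writeConcern,
          maxTimeMS: batchConfig.maxTimeMS
        });

        batchResults.push({
          success: true,
          batchSize: batch.length,
          deletedCount: bulkResult.deletedCount,
          processingTime: Date.now() - batchStartTime,
          errors: []
        });

      } catch (error) {
        if (error.name === 'BulkWriteError' || error.name === 'MongoBulkWriteError') {
          batchResults.push(this.processBulkWriteError(error, batch, batchStartTime));
        } else {
          batchResults.push({
            success: false,
            batchSize: batch.length,
            deletedCount: 0,
            processingTime: Date.now() - batchStartTime,
            errors: [{ error: error.message, errorCode: error.code }]
          });
        }
      }
    }

    return batchResults;
  }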

  // Utility methods for batch processing and optimization

  createBatches(items, batchSize) {
    const batches = [];
    for (let i = 0; i < items.length; i += batchSize) {
      batches.push(items.slice(i, i + batchSize));
    }
    return batches;
  }

  createBatchGroups(batches, groupSize) {
    const groups = [];
    for (let i = 0; i < batches.length; i += groupSize) {
      groups.push(batches.slice(i, i + groupSize));
    }
    return groups;
  }

  estimateAverageDocumentSize(documents) {
    // Accept either a raw array or the { documents } / { operations } wrappers
    // produced by the preparation helpers above
    const docs = Array.isArray(documents)
      ? documents
      : (documents && (documents.documents || documents.operations)) || [];

    if (docs.length === 0) return 1000; // Default estimate

    const sampleSize = Math.min(10, docs.length);
    const sample = docs.slice(0, sampleSize);
    const totalSize = sample.reduce((size, doc) => {
      return size + JSON.stringify(doc).length;
    }, 0);

    return Math.round(totalSize / sampleSize);
  }

  async aggregateBulkResults(batchResults, operationType) {
    console.log(`Aggregating results for ${batchResults.length} batches...`);

    const aggregated = {
      successfulOperations: 0,
      failedOperations: 0,
      errors: [],
      totalProcessingTime: 0
    };

    // Operation-specific aggregation
    switch (operationType) {
      case 'insert':
        aggregated.successfulInserts = 0;
        aggregated.failedInserts = 0;
        aggregated.insertedIds = []; // collected as an array so per-batch index keys cannot collide
        aggregated.duplicateErrors = [];
        break;
      case 'update':
        aggregated.successfulUpdates = 0;
        aggregated.failedUpdates = 0;
        aggregated.matchedCount = 0;
        aggregated.modifiedCount = 0;
        aggregated.upsertedCount = 0;
        aggregated.upsertedIds = {};
        break;
      case 'delete':
        aggregated.successfulDeletes = 0;
        aggregated.failedDeletes = 0;
        aggregated.deletedCount = 0;
        break;
    }

    // Aggregate results from all batches
    for (const batchResult of batchResults) {
      aggregated.totalProcessingTime += batchResult.processingTime;

      if (batchResult.success) {
        switch (operationType) {
          case 'insert':
            aggregated.successfulInserts += batchResult.insertedCount;
            aggregated.insertedIds.push(...Object.values(batchResult.insertedIds || {}));
            break;
          case 'update':
            aggregated.successfulUpdates += batchResult.batchSize;
            aggregated.matchedCount += batchResult.matchedCount;
            aggregated.modifiedCount += batchResult.modifiedCount;
            aggregated.upsertedCount += batchResult.upsertedCount || 0;
            Object.assign(aggregated.upsertedIds, batchResult.upsertedIds || {});
            break;
          case 'delete':
            aggregated.successfulDeletes += batchResult.batchSize;
            aggregated.deletedCount += batchResult.deletedCount || batchResult.batchSize;
            break;
        }
      } else {
        switch (operationType) {
          case 'insert':
            aggregated.failedInserts += batchResult.batchSize - (batchResult.insertedCount || 0);
            break;
          case 'update':
            aggregated.failedUpdates += batchResult.batchSize - (batchResult.matchedCount || 0);
            break;
          case 'delete':
            aggregated.failedDeletes += batchResult.batchSize;
            break;
        }
      }

      // Aggregate errors
      if (batchResult.errors && batchResult.errors.length > 0) {
        aggregated.errors.push(...batchResult.errors);
      }
    }

    return aggregated;
  }

  async logBulkOperation(operationType, operationData) {
    try {
      const logEntry = {
        operationType: operationType,
        timestamp: new Date(),
        ...operationData,

        // System context
        systemMetrics: {
          memoryUsage: process.memoryUsage(),
          nodeVersion: process.version
        }
      };

      await this.collections.bulkOperationLogs.insertOne(logEntry);

    } catch (error) {
      console.error('Error logging bulk operation:', error);
      // Don't throw - logging shouldn't break bulk operations
    }
  }

  // Additional utility methods for comprehensive bulk operations

  generateSearchKeywords(document) {
    const keywords = [];

    if (document.title) {
      keywords.push(...document.title.toLowerCase().split(/\s+/));
    }

    if (document.description) {
      keywords.push(...document.description.toLowerCase().split(/\s+/));
    }

    if (document.tags) {
      keywords.push(...document.tags.map(tag => tag.toLowerCase()));
    }

    // Remove duplicates and filter short words
    return [...new Set(keywords)].filter(word => word.length > 2);
  }

  buildCategoryHierarchy(category) {
    if (!category) return [];

    const hierarchy = category.split('/');
    const hierarchyPath = [];

    for (let i = 0; i < hierarchy.length; i++) {
      hierarchyPath.push(hierarchy.slice(0, i + 1).join('/'));
    }

    return hierarchyPath;
  }

  calculatePricingTiers(price) {
    if (!price) return {};

    return {
      tier: price < 50 ? 'budget' : price < 200 ? 'mid-range' : 'premium',
      priceRange: {
        min: Math.floor(price / 50) * 50,
        max: Math.ceil(price / 50) * 50
      }
    };
  }

  async checkReferentialIntegrity(filter) {
    // Simplified referential integrity check
    // In production, implement comprehensive relationship checking
    return {
      hasReferences: false,
      references: []
    };
  }

  async checkRecentActivity(filter) {
    // Simplified activity check
    // In production, check recent orders, updates, etc.
    return {
      hasRecentActivity: false,
      lastActivity: null
    };
  }
}

// Benefits of MongoDB Advanced Bulk Operations:
// - High-performance batch processing with intelligent batch size optimization
// - Comprehensive error handling and partial failure recovery
// - Advanced write concern and consistency management
// - Optimized memory usage and resource management
// - Built-in performance monitoring and metrics collection
// - Sophisticated validation and safety checks for data integrity
// - Parallel processing capabilities for maximum throughput
// - Transaction support for atomic multi-document operations
// - Automatic retry logic with exponential backoff
// - SQL-compatible bulk operations through QueryLeaf integration

module.exports = {
  AdvancedBulkOperationsManager
};
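
Putting the manager to work is straightforward. The following usage sketch assumes a local deployment and a pair of illustrative product documents; the connection string, the require path, and the option values are placeholders rather than requirements.

// Illustrative usage of AdvancedBulkOperationsManager (names and values are placeholders)
const { MongoClient } = require('mongodb');
const { AdvancedBulkOperationsManager } = require('./advanced-bulk-operations-manager'); // hypothetical module path

async function runBulkImport() {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();

  const bulkManager = new AdvancedBulkOperationsManager(client.db('advanced_bulk_operations'), {
    defaultBatchSize: 2000,
    parallelOperations: 4,
    unorderedOperations: true
  });

  const documents = [
    { sku: 'SKU-1001', product_name: 'Widget', category: 'tools/hand', price: 24.99, stock_quantity: 150 },
    { sku: 'SKU-1002', product_name: 'Gadget', category: 'tools/power', price: 189.0, stock_quantity: 40 }
  ];

  const report = await bulkManager.performAdvancedBulkInsert(documents, {
    collection: 'products',
    allowPartialFailures: true
  });

  console.log(`Inserted ${report.successful}/${report.totalDocuments} documents ` +
              `(~${report.documentsPerSecond} docs/sec across ${report.batchesProcessed} batches)`);

  await client.close();
}

runBulkImport().catch(console.error);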

Understanding MongoDB Bulk Operations Architecture

Advanced Batch Processing and Performance Optimization Strategies

Implement sophisticated bulk operation patterns for production MongoDB deployments:

// Production-ready MongoDB bulk operations with advanced optimization and monitoring
class ProductionBulkProcessor extends AdvancedBulkOperationsManager {
  constructor(db, productionConfig) {
    super(db, productionConfig);

    this.productionConfig = {
      ...productionConfig,
      enableDistributedProcessing: true,
      enableLoadBalancing: true,
      enableFailoverHandling: true,
      enableCapacityPlanning: true,
      enableAutomaticOptimization: true,
      enableComplianceAuditing: true
    };

    this.setupProductionOptimizations();
    this.initializeDistributedProcessing();
    this.setupCapacityPlanning();
  }

  async implementDistributedBulkProcessing(operations, distributionStrategy) {
    console.log('Implementing distributed bulk processing across multiple nodes...');

    const distributedStrategy = {
      // Sharding-aware distribution
      shardAwareDistribution: {
        enableShardKeyOptimization: true,
        balanceAcrossShards: true,
        minimizeCrossShardOperations: true,
        optimizeForShardKey: distributionStrategy.shardKey
      },

      // Load balancing strategies
      loadBalancing: {
        dynamicBatchSizing: true,
        nodeCapacityAware: true,
        latencyOptimized: true,
        throughputMaximization: true
      },

      // Fault tolerance and recovery
      faultTolerance: {
        automaticFailover: true,
        retryFailedBatches: true,
        partialFailureRecovery: true,
        deadlockDetection: true
      }
    };

    return await this.executeDistributedBulkOperations(operations, distributedStrategy);
  }

  async setupAdvancedBulkOptimization() {
    console.log('Setting up advanced bulk operation optimization...');

    const optimizationStrategies = {
      // Write optimization patterns
      writeOptimization: {
        journalingSyncOptimization: true,
        writeBufferOptimization: true,
        concurrencyControlOptimization: true,
        lockMinimizationStrategies: true
      },

      // Memory management optimization
      memoryOptimization: {
        documentBatching: true,
        memoryPooling: true,
        garbageCollectionOptimization: true,
        cacheOptimization: true
      },

      // Network optimization
      networkOptimization: {
        compressionOptimization: true,
        connectionPoolingOptimization: true,
        batchTransmissionOptimization: true,
        networkLatencyMinimization: true
      }
    };

    return await this.deployOptimizationStrategies(optimizationStrategies);
  }

  async implementAdvancedErrorHandlingAndRecovery() {
    console.log('Implementing advanced error handling and recovery mechanisms...');

    const errorHandlingStrategy = {
      // Error classification and handling
      errorClassification: {
        transientErrors: ['NetworkTimeout', 'TemporaryUnavailable'],
        permanentErrors: ['ValidationError', 'DuplicateKey'],
        retriableErrors: ['WriteConflict', 'LockTimeout'],
        fatalErrors: ['OutOfMemory', 'DiskFull']
      },

      // Recovery strategies
      recoveryStrategies: {
        automaticRetry: {
          maxRetries: 5,
          exponentialBackoff: true,
          jitterRandomization: true
        },
        partialFailureHandling: {
          isolateFailedOperations: true,
          continueWithSuccessful: true,
          generateFailureReport: true
        },
        circuitBreaker: {
          failureThreshold: 10,
          recoveryTimeout: 60000,
          halfOpenRetryCount: 3
        }
      }
    };

    return await this.deployErrorHandlingStrategy(errorHandlingStrategy);
  }
}
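
The errorClassification and recoveryStrategies objects above are configuration descriptions rather than executable logic. A concrete retry helper consistent with that configuration might look like the sketch below; the helper name and the retriability test are illustrative assumptions.

// Hedged sketch: retry a batch operation with exponential backoff and jitter
async function retryWithBackoff(operation, { maxRetries = 5, baseDelayMs = 1000 } = {}) {
  // Treat only errors MongoDB marks as transient as retriable
  // (e.g. retryable write label or WriteConflict, code 112)
  const isRetriable = (err) =>
    err && (err.hasErrorLabel?.('RetryableWriteError') || err.code === 112);

  let attempt = 0;
  while (true) {
    try {
      return await operation(attempt);
    } catch (error) {
      attempt += 1;
      if (attempt > maxRetries || !isRetriable(error)) throw error;
      // Exponential backoff with full jitter to avoid synchronized retries
      const delay = Math.random() * baseDelayMs * 2 ** (attempt - 1);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}

In practice, each executeSingle*Batch call would be wrapped in a helper like this, with repeated failures feeding the circuit-breaker thresholds defined in the strategy above.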

SQL-Style Bulk Operations with QueryLeaf

QueryLeaf provides familiar SQL syntax for MongoDB bulk operations and high-throughput data processing:

-- QueryLeaf advanced bulk operations with SQL-familiar syntax for MongoDB

-- Configure bulk operation settings
SET bulk_operation_batch_size = 1000;
SET bulk_operation_parallel_batches = 4;
SET bulk_operation_write_concern = 'majority';
SET bulk_operation_ordered = false;
SET bulk_operation_bypass_validation = false;

-- Advanced bulk insert with comprehensive error handling and performance optimization
WITH product_data_preparation AS (
  SELECT 
    -- Prepare product data with validation and enhancement
    product_id,
    product_name,
    category,
    CAST(price AS DECIMAL(10,2)) as validated_price,
    CAST(stock_quantity AS INTEGER) as validated_stock,
    supplier_id,

    -- Generate enhanced metadata for optimal MongoDB storage
    ARRAY[
      LOWER(product_name),
      LOWER(category),
      LOWER(supplier_name)
    ] as search_keywords,

    -- Build category hierarchy for efficient querying
    STRING_TO_ARRAY(category, '/') as category_hierarchy,

    -- Calculate pricing tiers for analytics
    CASE 
      WHEN price < 50 THEN 'budget'
      WHEN price < 200 THEN 'mid-range' 
      ELSE 'premium'
    END as pricing_tier,

    -- Add bulk operation metadata
    JSON_OBJECT(
      'batch_id', GENERATE_UUID(),
      'source_system', 'inventory_import',
      'import_timestamp', CURRENT_TIMESTAMP,
      'validation_status', 'passed'
    ) as bulk_metadata,

    -- Standard timestamps
    CURRENT_TIMESTAMP as created_at,
    CURRENT_TIMESTAMP as updated_at,

    -- Data quality scoring
    (
      CASE WHEN product_name IS NOT NULL AND LENGTH(TRIM(product_name)) > 0 THEN 1 ELSE 0 END +
      CASE WHEN category IS NOT NULL AND LENGTH(TRIM(category)) > 0 THEN 1 ELSE 0 END +
      CASE WHEN price > 0 THEN 1 ELSE 0 END +
      CASE WHEN stock_quantity >= 0 THEN 1 ELSE 0 END +
      CASE WHEN supplier_id IS NOT NULL THEN 1 ELSE 0 END
    ) / 5.0 as data_quality_score

  FROM staging_products sp
  JOIN suppliers s ON sp.supplier_id = s.supplier_id
  WHERE 
    -- Data validation filters
    sp.product_name IS NOT NULL 
    AND TRIM(sp.product_name) != ''
    AND sp.price > 0
    AND sp.stock_quantity >= 0
    AND s.status = 'active'
),

bulk_insert_configuration AS (
  SELECT 
    COUNT(*) as total_documents,

    -- Calculate optimal batch configuration
    CASE 
      WHEN AVG(LENGTH(product_name::TEXT) + LENGTH(COALESCE(description, '')::TEXT)) > 10000 THEN 500
      WHEN AVG(LENGTH(product_name::TEXT) + LENGTH(COALESCE(description, '')::TEXT)) > 1000 THEN 1000
      ELSE 2000
    END as optimal_batch_size,

    -- Parallel processing configuration
    LEAST(4, CEIL(COUNT(*) / 1000.0)) as parallel_batches,

    -- Performance prediction
    CASE 
      WHEN COUNT(*) < 1000 THEN 'fast'
      WHEN COUNT(*) < 10000 THEN 'moderate'
      ELSE 'extended'
    END as expected_processing_time

  FROM product_data_preparation
  WHERE data_quality_score >= 0.8
)

-- Execute advanced bulk insert operation
INSERT INTO products (
  product_id,
  product_name,
  category,
  category_hierarchy,
  price,
  pricing_tier,
  stock_quantity,
  supplier_id,
  search_keywords,
  bulk_operation_metadata,
  created_at,
  updated_at,
  data_quality_score
)
SELECT 
  pdp.product_id,
  pdp.product_name,
  pdp.category,
  pdp.category_hierarchy,
  pdp.validated_price,
  pdp.pricing_tier,
  pdp.validated_stock,
  pdp.supplier_id,
  pdp.search_keywords,
  pdp.bulk_metadata,
  pdp.created_at,
  pdp.updated_at,
  pdp.data_quality_score
FROM product_data_preparation pdp
CROSS JOIN bulk_insert_configuration bic
WHERE pdp.data_quality_score >= 0.8

-- Advanced bulk insert configuration
WITH (
  batch_size = (SELECT optimal_batch_size FROM bulk_insert_configuration),
  parallel_batches = (SELECT parallel_batches FROM bulk_insert_configuration),
  write_concern = 'majority',
  ordered_operations = false,

  -- Error handling configuration
  continue_on_error = true,
  duplicate_key_handling = 'skip',
  validation_bypass = false,

  -- Performance optimization
  enable_compression = true,
  connection_pooling = true,
  write_buffer_size = '64MB',

  -- Monitoring and logging
  enable_performance_monitoring = true,
  log_detailed_errors = true,
  track_operation_metrics = true
);

-- Advanced bulk update with intelligent batching and conflict resolution
WITH inventory_updates AS (
  SELECT 
    product_id,
    warehouse_id,
    quantity_adjustment,
    price_adjustment,
    update_reason,
    source_system,

    -- Calculate update priority
    CASE 
      WHEN ABS(quantity_adjustment) > 1000 THEN 'high'
      WHEN ABS(quantity_adjustment) > 100 THEN 'medium'  
      ELSE 'low'
    END as update_priority,

    -- Validate adjustments
    CASE 
      WHEN quantity_adjustment < 0 THEN 
        -- Ensure we don't create negative inventory
        GREATEST(quantity_adjustment, -ci.stock_quantity)
      ELSE quantity_adjustment
    END as safe_quantity_adjustment,

    -- Add update metadata
    JSON_OBJECT(
      'update_batch_id', GENERATE_UUID(),
      'update_timestamp', CURRENT_TIMESTAMP,
      'update_source', source_system,
      'validation_status', 'approved'
    ) as update_metadata

  FROM staging_inventory_updates siu
  JOIN current_inventory ci ON siu.product_id = ci.product_id 
    AND siu.warehouse_id = ci.warehouse_id
  WHERE 
    -- Update validation
    ABS(siu.quantity_adjustment) <= 10000  -- Prevent massive adjustments
    AND (siu.price_adjustment IS NULL OR ABS(siu.price_adjustment) <= ci.price * 0.5)  -- Max 50% price change
),

conflict_resolution AS (
  -- Handle potential update conflicts
  SELECT 
    iu.*,

    -- Detect conflicting updates
    CASE 
      WHEN EXISTS (
        SELECT 1 FROM recent_inventory_updates riu 
        WHERE riu.product_id = iu.product_id 
        AND riu.warehouse_id = iu.warehouse_id
        AND riu.update_timestamp > CURRENT_TIMESTAMP - INTERVAL '5 minutes'
      ) THEN 'potential_conflict'
      ELSE 'safe_to_update'
    END as conflict_status,

    -- Calculate final values
    ci.stock_quantity + iu.safe_quantity_adjustment as final_stock_quantity,
    COALESCE(ci.price + iu.price_adjustment, ci.price) as final_price

  FROM inventory_updates iu
  JOIN current_inventory ci ON iu.product_id = ci.product_id 
    AND iu.warehouse_id = ci.warehouse_id
)

-- Execute bulk update with advanced error handling
UPDATE products 
SET 
  -- Core field updates
  stock_quantity = cr.final_stock_quantity,
  price = cr.final_price,
  updated_at = CURRENT_TIMESTAMP,

  -- Audit trail updates
  last_inventory_update = CURRENT_TIMESTAMP,
  inventory_update_reason = cr.update_reason,
  inventory_update_source = cr.source_system,

  -- Metadata updates
  bulk_operation_metadata = JSON_SET(
    COALESCE(bulk_operation_metadata, '{}'),
    '$.last_bulk_update', CURRENT_TIMESTAMP,
    '$.update_batch_info', cr.update_metadata
  ),

  -- Analytics updates
  total_adjustments = COALESCE(total_adjustments, 0) + 1,
  cumulative_quantity_adjustments = COALESCE(cumulative_quantity_adjustments, 0) + cr.safe_quantity_adjustment

FROM conflict_resolution cr
WHERE products.product_id = cr.product_id
  AND cr.conflict_status = 'safe_to_update'
  AND cr.final_stock_quantity >= 0  -- Additional safety check

-- Bulk update configuration
WITH (
  batch_size = 1500,
  parallel_batches = 3,
  write_concern = 'majority',
  max_time_ms = 30000,

  -- Conflict handling
  retry_on_conflict = true,
  max_retries = 3,
  backoff_strategy = 'exponential',

  -- Validation and safety
  enable_pre_update_validation = true,
  enable_post_update_validation = true,
  rollback_on_validation_failure = true,

  -- Performance optimization
  hint_index = 'product_warehouse_compound',
  bypass_document_validation = false
);

-- Advanced bulk upsert operation combining insert and update logic
WITH product_sync_data AS (
  SELECT 
    external_product_id,
    product_name,
    category,
    price,
    stock_quantity,
    supplier_code,
    last_modified_external,

    -- Determine if this should be insert or update
    CASE 
      WHEN EXISTS (
        SELECT 1 FROM products p 
        WHERE p.external_product_id = spd.external_product_id
      ) THEN 'update'
      ELSE 'insert'
    END as operation_type,

    -- Calculate data freshness
    EXTRACT(DAYS FROM CURRENT_TIMESTAMP - last_modified_external) as days_since_modified,

    -- Prepare upsert metadata
    JSON_OBJECT(
      'sync_batch_id', GENERATE_UUID(),
      'sync_timestamp', CURRENT_TIMESTAMP,
      'source_system', 'external_catalog',
      'operation_type', 'upsert',
      'data_freshness_days', EXTRACT(DAYS FROM CURRENT_TIMESTAMP - last_modified_external)
    ) as upsert_metadata

  FROM staging_product_data spd
  WHERE spd.last_modified_external > CURRENT_TIMESTAMP - INTERVAL '7 days'  -- Only sync recent changes
),

upsert_validation AS (
  SELECT 
    psd.*,

    -- Validate data quality for upsert
    (
      CASE WHEN product_name IS NOT NULL AND LENGTH(TRIM(product_name)) > 0 THEN 1 ELSE 0 END +
      CASE WHEN category IS NOT NULL THEN 1 ELSE 0 END +
      CASE WHEN price > 0 THEN 1 ELSE 0 END +
      CASE WHEN supplier_code IS NOT NULL THEN 1 ELSE 0 END
    ) / 4.0 as validation_score,

    -- Check for significant changes (for updates)
    CASE 
      WHEN psd.operation_type = 'update' THEN
        COALESCE(
          (SELECT 
            CASE 
              WHEN ABS(p.price - psd.price) > p.price * 0.1 OR  -- 10% price change
                   ABS(p.stock_quantity - psd.stock_quantity) > 10 OR  -- Stock change > 10
                   p.product_name != psd.product_name  -- Name change
              THEN 'significant_changes'
              ELSE 'minor_changes'
            END
           FROM products p 
           WHERE p.external_product_id = psd.external_product_id), 
          'new_record'
        )
      ELSE 'new_record'
    END as change_significance

  FROM product_sync_data psd
)

-- Execute bulk upsert operation
INSERT INTO products (
  external_product_id,
  product_name,
  category,
  price,
  stock_quantity,
  supplier_code,
  bulk_operation_metadata,
  created_at,
  updated_at,
  data_validation_score,
  sync_status
)
SELECT 
  uv.external_product_id,
  uv.product_name,
  uv.category,
  uv.price,
  uv.stock_quantity,
  uv.supplier_code,
  uv.upsert_metadata,
  CASE WHEN uv.operation_type = 'insert' THEN CURRENT_TIMESTAMP ELSE NULL END,
  CURRENT_TIMESTAMP,
  uv.validation_score,
  'synchronized'
FROM upsert_validation uv
WHERE uv.validation_score >= 0.75

-- Handle conflicts with upsert logic
ON CONFLICT (external_product_id) 
DO UPDATE SET
  product_name = CASE 
    WHEN EXCLUDED.change_significance = 'significant_changes' THEN EXCLUDED.product_name
    ELSE products.product_name
  END,

  category = EXCLUDED.category,

  price = CASE 
    WHEN ABS(EXCLUDED.price - products.price) > products.price * 0.05  -- 5% threshold
    THEN EXCLUDED.price
    ELSE products.price
  END,

  stock_quantity = EXCLUDED.stock_quantity,

  updated_at = CURRENT_TIMESTAMP,
  last_sync_timestamp = CURRENT_TIMESTAMP,
  sync_status = 'synchronized',

  -- Update metadata with merge information
  bulk_operation_metadata = JSON_SET(
    COALESCE(products.bulk_operation_metadata, '{}'),
    '$.last_upsert_operation', EXCLUDED.bulk_operation_metadata,
    '$.upsert_history', JSON_ARRAY_APPEND(
      COALESCE(JSON_EXTRACT(products.bulk_operation_metadata, '$.upsert_history'), '[]'),
      '$', JSON_OBJECT(
        'timestamp', CURRENT_TIMESTAMP,
        'changes_applied', EXCLUDED.change_significance
      )
    )
  )

-- Upsert operation configuration  
WITH (
  batch_size = 800,  -- Smaller batches for upsert complexity
  parallel_batches = 2,
  write_concern = 'majority',

  -- Upsert-specific configuration
  conflict_resolution = 'merge_strategy',
  enable_change_detection = true,
  preserve_existing_metadata = true,

  -- Performance optimization for upsert
  enable_index_hints = true,
  optimize_for_update_heavy = true
);

-- Advanced bulk delete with comprehensive safety checks and audit trail
WITH deletion_candidates AS (
  SELECT 
    product_id,
    product_name,
    category,
    created_at,
    last_sold_date,
    stock_quantity,

    -- Determine deletion reason and safety
    CASE 
      WHEN stock_quantity = 0 AND last_sold_date < CURRENT_DATE - INTERVAL '2 years' THEN 'discontinued_product'
      WHEN category IN ('seasonal', 'limited_edition') AND created_at < CURRENT_DATE - INTERVAL '1 year' THEN 'seasonal_cleanup'
      WHEN supplier_id IN (SELECT supplier_id FROM suppliers WHERE status = 'inactive') THEN 'inactive_supplier'
      ELSE 'no_deletion'
    END as deletion_reason,

    -- Safety checks
    NOT EXISTS (
      SELECT 1 FROM order_items oi 
      WHERE oi.product_id = p.product_id 
      AND oi.order_date > CURRENT_DATE - INTERVAL '6 months'
    ) as no_recent_orders,

    NOT EXISTS (
      SELECT 1 FROM shopping_carts sc 
      WHERE sc.product_id = p.product_id
    ) as not_in_carts,

    NOT EXISTS (
      SELECT 1 FROM pending_shipments ps 
      WHERE ps.product_id = p.product_id
    ) as no_pending_shipments

  FROM products p
  WHERE p.status IN ('discontinued', 'inactive', 'marked_for_deletion')
),

safe_deletions AS (
  SELECT 
    dc.*,

    -- Overall safety assessment
    (dc.no_recent_orders AND dc.not_in_carts AND dc.no_pending_shipments) as safe_to_delete,

    -- Create audit record
    JSON_OBJECT(
      'deletion_batch_id', GENERATE_UUID(),
      'deletion_timestamp', CURRENT_TIMESTAMP,
      'deletion_reason', dc.deletion_reason,
      'safety_checks_passed', (dc.no_recent_orders AND dc.not_in_carts AND dc.no_pending_shipments),
      'product_snapshot', JSON_OBJECT(
        'product_id', dc.product_id,
        'product_name', dc.product_name,
        'category', dc.category,
        'last_sold_date', dc.last_sold_date,
        'stock_quantity', dc.stock_quantity
      )
    ) as audit_record

  FROM deletion_candidates dc
  WHERE dc.deletion_reason != 'no_deletion'
)

-- Create audit trail before deletion
INSERT INTO product_deletion_audit (
  product_id,
  deletion_reason,
  audit_record,
  deleted_at
)
SELECT 
  sd.product_id,
  sd.deletion_reason,
  sd.audit_record,
  CURRENT_TIMESTAMP
FROM safe_deletions sd
WHERE sd.safe_to_delete = true;

-- Execute bulk delete operation
DELETE FROM products 
WHERE product_id IN (
  SELECT sd.product_id 
  FROM safe_deletions sd 
  WHERE sd.safe_to_delete = true
)

-- Bulk delete configuration
WITH (
  batch_size = 500,  -- Conservative batch size for deletes
  parallel_batches = 2,
  write_concern = 'majority',

  -- Safety configuration
  enable_referential_integrity_check = true,
  enable_audit_trail = true,
  require_confirmation = true,

  -- Performance and safety balance
  max_deletions_per_batch = 500,
  enable_soft_delete = false,  -- True deletion for cleanup
  create_backup_before_delete = true
);

-- Comprehensive bulk operation monitoring and analytics
WITH bulk_operation_performance AS (
  SELECT 
    operation_type,
    DATE_TRUNC('hour', operation_timestamp) as hour_bucket,

    -- Volume metrics
    COUNT(*) as total_operations,
    SUM(documents_processed) as total_documents_processed,
    SUM(successful_operations) as total_successful,
    SUM(failed_operations) as total_failed,

    -- Performance metrics
    AVG(processing_time_ms) as avg_processing_time,
    PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY processing_time_ms) as p95_processing_time,
    AVG(documents_per_second) as avg_throughput,
    MAX(documents_per_second) as peak_throughput,

    -- Error analysis
    AVG(CASE WHEN failed_operations > 0 THEN (failed_operations * 100.0 / documents_processed) ELSE 0 END) as avg_error_rate,

    -- Resource utilization
    AVG(batch_size_used) as avg_batch_size,
    AVG(parallel_batches_used) as avg_parallel_batches,
    AVG(memory_usage_mb) as avg_memory_usage,

    -- Configuration analysis  
    MODE() WITHIN GROUP (ORDER BY write_concern) as most_common_write_concern,
    AVG(CASE WHEN ordered_operations THEN 1 ELSE 0 END) as ordered_operations_ratio

  FROM bulk_operation_logs
  WHERE operation_timestamp >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
  GROUP BY operation_type, DATE_TRUNC('hour', operation_timestamp)
),

performance_trends AS (
  SELECT 
    bop.*,

    -- Trend analysis
    LAG(avg_throughput) OVER (
      PARTITION BY operation_type 
      ORDER BY hour_bucket
    ) as prev_hour_throughput,

    LAG(avg_error_rate) OVER (
      PARTITION BY operation_type 
      ORDER BY hour_bucket
    ) as prev_hour_error_rate,

    -- Performance classification
    CASE 
      WHEN avg_throughput > 1000 THEN 'high_performance'
      WHEN avg_throughput > 500 THEN 'good_performance'  
      WHEN avg_throughput > 100 THEN 'adequate_performance'
      ELSE 'low_performance'
    END as performance_classification,

    -- Optimization recommendations
    CASE 
      WHEN avg_error_rate > 5 THEN 'investigate_error_patterns'
      WHEN p95_processing_time > avg_processing_time * 2 THEN 'optimize_batch_sizing'
      WHEN avg_memory_usage > 500 THEN 'optimize_memory_usage'
      WHEN avg_throughput < 100 THEN 'review_indexing_strategy'
      ELSE 'performance_optimal'
    END as optimization_recommendation

  FROM bulk_operation_performance bop
)

SELECT 
  operation_type,
  hour_bucket,

  -- Volume summary
  total_operations,
  total_documents_processed,
  ROUND((total_successful * 100.0 / NULLIF(total_documents_processed, 0)), 2) as success_rate_percent,

  -- Performance summary
  ROUND(avg_processing_time, 1) as avg_processing_time_ms,
  ROUND(p95_processing_time, 1) as p95_processing_time_ms,
  ROUND(avg_throughput, 0) as avg_documents_per_second,
  ROUND(peak_throughput, 0) as peak_documents_per_second,

  -- Trend indicators
  CASE 
    WHEN prev_hour_throughput IS NOT NULL THEN
      ROUND(((avg_throughput - prev_hour_throughput) / prev_hour_throughput * 100), 1)
    ELSE NULL
  END as throughput_change_percent,

  CASE 
    WHEN prev_hour_error_rate IS NOT NULL THEN
      ROUND((avg_error_rate - prev_hour_error_rate), 2)
    ELSE NULL
  END as error_rate_change,

  -- Configuration insights
  ROUND(avg_batch_size, 0) as optimal_batch_size,
  ROUND(avg_parallel_batches, 1) as avg_parallelization,
  most_common_write_concern,

  -- Performance assessment
  performance_classification,
  optimization_recommendation,

  -- Detailed recommendations
  CASE optimization_recommendation
    WHEN 'investigate_error_patterns' THEN 'Review error logs and implement better validation'
    WHEN 'optimize_batch_sizing' THEN 'Reduce batch size or increase timeout thresholds'  
    WHEN 'optimize_memory_usage' THEN 'Implement memory pooling and document streaming'
    WHEN 'review_indexing_strategy' THEN 'Add missing indexes for bulk operation filters'
    ELSE 'Continue current configuration - performance is optimal'
  END as detailed_recommendation

FROM performance_trends
WHERE total_operations > 0
ORDER BY operation_type, hour_bucket DESC;

-- QueryLeaf provides comprehensive bulk operation capabilities:
-- 1. Advanced batch processing with intelligent sizing and parallelization
-- 2. Sophisticated error handling and partial failure recovery  
-- 3. Comprehensive data validation and quality scoring
-- 4. Built-in audit trails and compliance tracking
-- 5. Performance monitoring and optimization recommendations
-- 6. Advanced conflict resolution and upsert strategies
-- 7. Safety checks and referential integrity validation
-- 8. Production-ready bulk operations with monitoring and alerting
-- 9. SQL-familiar syntax for complex bulk operation workflows
-- 10. Integration with MongoDB's native bulk operation optimizations
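
For reference, the upsert stage of the SQL above maps conceptually onto the Node.js driver's bulkWrite() with updateOne operations and upsert: true. The sketch below is illustrative only: the connection string, database and collection names, and the shape of the staging records are assumptions carried over from the example, not QueryLeaf's actual translation.

// Minimal native-driver sketch of the bulk upsert pattern shown above
const { MongoClient } = require('mongodb');

async function syncProducts(stagingRecords) {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  const products = client.db('inventory').collection('products');

  // One upsert operation per staging record, keyed on the external identifier
  const operations = stagingRecords.map(record => ({
    updateOne: {
      filter: { external_product_id: record.external_product_id },
      update: {
        $set: {
          product_name: record.product_name,
          category: record.category,
          price: record.price,
          stock_quantity: record.stock_quantity,
          supplier_code: record.supplier_code,
          updated_at: new Date(),
          sync_status: 'synchronized'
        },
        $setOnInsert: { created_at: new Date() }
      },
      upsert: true
    }
  }));

  // ordered: false lets independent documents proceed past individual failures
  const result = await products.bulkWrite(operations, {
    ordered: false,
    writeConcern: { w: 'majority' }
  });

  console.log(`matched=${result.matchedCount} upserted=${result.upsertedCount}`);
  await client.close();
}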

Best Practices for Production Bulk Operations

Performance Optimization and Batch Strategy

Essential principles for effective MongoDB bulk operation deployment:

  1. Batch Size Optimization: Calculate optimal batch sizes based on document size, operation type, and system resources (illustrated in the sketch after this list)
  2. Write Concern Management: Configure appropriate write concerns balancing performance with durability requirements
  3. Error Handling Strategy: Implement comprehensive error classification and recovery mechanisms for production resilience
  4. Validation and Safety: Design robust validation pipelines to ensure data quality and prevent harmful operations
  5. Performance Monitoring: Track operation metrics, throughput, and resource utilization for continuous optimization
  6. Resource Management: Monitor memory usage, connection pooling, and system resources during bulk operations
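
As a concrete reference for points 1 and 2, the sketch below splits a document array into fixed-size batches and writes each batch with an explicit write concern. It assumes an already-connected collection handle; the 1,000-document batch size and the w: 'majority' setting are illustrative defaults, not universal recommendations.

// Minimal batching sketch for bulk inserts
async function insertInBatches(collection, documents, batchSize = 1000) {
  let inserted = 0;
  for (let i = 0; i < documents.length; i += batchSize) {
    const batch = documents.slice(i, i + batchSize);
    const result = await collection.bulkWrite(
      batch.map(doc => ({ insertOne: { document: doc } })),
      { ordered: false, writeConcern: { w: 'majority' } }
    );
    inserted += result.insertedCount;
  }
  return inserted;
}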

Scalability and Production Deployment

Optimize bulk operations for enterprise-scale requirements:

  1. Distributed Processing: Implement shard-aware batch distribution for optimal performance across MongoDB clusters
  2. Load Balancing: Design intelligent load balancing strategies that consider node capacity and network latency
  3. Fault Tolerance: Implement automatic failover and retry mechanisms for resilient bulk operation processing (see the retry sketch after this list)
  4. Capacity Planning: Monitor historical patterns and predict resource requirements for bulk operation scaling
  5. Compliance Integration: Ensure bulk operations meet audit, security, and compliance requirements
  6. Operational Integration: Integrate bulk operations with existing monitoring, alerting, and operational workflows
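
For the fault-tolerance point above (item 3), a minimal retry wrapper might look like the following. The error-label check and the backoff parameters are assumptions for illustration; a production implementation would also inspect the partial results the driver reports on bulk write errors before re-submitting operations.

// Minimal retry-with-backoff sketch for transient bulk write failures
async function bulkWriteWithRetry(collection, operations, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await collection.bulkWrite(operations, { ordered: false });
    } catch (error) {
      const retryable = typeof error.hasErrorLabel === 'function' &&
        error.hasErrorLabel('RetryableWriteError');
      if (!retryable || attempt === maxRetries) throw error;
      // Exponential backoff: 200ms, 400ms, 800ms, ...
      await new Promise(resolve => setTimeout(resolve, 200 * 2 ** attempt));
    }
  }
}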

Conclusion

MongoDB bulk operations provide comprehensive high-performance batch processing capabilities that enable efficient handling of large-scale data operations through intelligent batching, advanced error handling, and sophisticated optimization strategies. The native bulk operation support ensures that batch processing benefits from MongoDB's write optimization, consistency guarantees, and scalability features.

Key MongoDB Bulk Operations benefits include:

  • High-Performance Processing: Optimized batch processing with intelligent sizing and parallel execution capabilities
  • Advanced Error Management: Comprehensive error handling with partial failure recovery and retry mechanisms
  • Data Quality Assurance: Built-in validation and safety checks to ensure data integrity during bulk operations
  • Resource Optimization: Intelligent memory management and resource utilization for optimal system performance
  • Production Readiness: Enterprise-ready bulk operations with monitoring, auditing, and compliance features
  • SQL Accessibility: Familiar SQL-style bulk operations through QueryLeaf for accessible high-throughput data management

Whether you're handling data imports, batch updates, inventory synchronization, or large-scale data cleanup operations, MongoDB bulk operations with QueryLeaf's familiar SQL interface provide the foundation for efficient, reliable, and scalable batch processing.

QueryLeaf Integration: QueryLeaf automatically optimizes MongoDB bulk operations while providing SQL-familiar syntax for batch processing, error handling, and performance monitoring. Advanced bulk operation patterns, validation strategies, and optimization techniques are seamlessly handled through familiar SQL constructs, making high-performance batch processing accessible to SQL-oriented development teams.

The combination of MongoDB's robust bulk operation capabilities with SQL-style batch processing makes MongoDB an ideal platform for applications requiring both high-throughput data processing and familiar database management patterns, ensuring your bulk operations can scale efficiently while maintaining data quality and operational reliability.

MongoDB Schema Validation and Data Integrity: Advanced Document Validation for Robust Database Design

Modern applications require robust data validation mechanisms to ensure data quality, maintain business rules, and prevent data corruption in production databases. Traditional NoSQL databases often sacrifice data validation for flexibility, leading to inconsistent data structures and difficult-to-debug application issues. MongoDB's document validation capabilities provide comprehensive schema enforcement while preserving the flexibility that makes document databases powerful for evolving applications.

MongoDB Schema Validation offers sophisticated document validation rules that can enforce field types, value constraints, required fields, and complex business logic at the database level. Unlike application-level validation that can be bypassed or inconsistently applied, database-level validation ensures data integrity regardless of how data enters the system, providing a critical safety net for production applications.
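
At its core, the mechanism is a JSON Schema validator attached to a collection. As a minimal preview (assuming a connected db handle inside an async context; the collection name and rules here are illustrative only), a validated collection can be created like this before we look at the full system later in the article:

// Minimal sketch: attach a $jsonSchema validator at collection creation time
await db.createCollection('users', {
  validator: {
    $jsonSchema: {
      bsonType: 'object',
      required: ['email', 'username'],
      properties: {
        email: {
          bsonType: 'string',
          pattern: '^[^@]+@[^@]+\\.[^@]+$',
          description: 'must be a string matching a basic email pattern'
        },
        username: {
          bsonType: 'string',
          minLength: 3,
          description: 'must be a string of at least 3 characters'
        },
        age: {
          bsonType: 'int',
          minimum: 13,
          description: 'must be an integer >= 13 when present'
        }
      }
    }
  },
  validationLevel: 'strict',   // apply validation to all inserts and updates
  validationAction: 'error'    // reject documents that fail validation
});

// An insert missing `username` or with age below 13 is rejected with a
// "Document failed validation" error before it reaches the collection.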

The Traditional Data Validation Challenge

Conventional approaches to data validation in both SQL and NoSQL systems have significant limitations:

-- Traditional relational database constraints - rigid but limited flexibility

-- PostgreSQL table with basic constraints
CREATE TABLE user_profiles (
    user_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    email VARCHAR(320) NOT NULL,
    username VARCHAR(50) NOT NULL,
    full_name VARCHAR(200),
    age INTEGER,
    account_status VARCHAR(20) DEFAULT 'active',
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    -- Basic constraints
    CONSTRAINT ck_email_format CHECK (email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$'),
    CONSTRAINT ck_age_valid CHECK (age >= 0 AND age <= 150),
    CONSTRAINT ck_status_valid CHECK (account_status IN ('active', 'inactive', 'suspended', 'pending')),
    CONSTRAINT ck_username_length CHECK (char_length(username) >= 3),

    -- Unique constraints
    UNIQUE(email),
    UNIQUE(username)
);

-- User preferences table with limited JSON validation
CREATE TABLE user_preferences (
    user_id UUID PRIMARY KEY REFERENCES user_profiles(user_id) ON DELETE CASCADE,
    preferences JSONB NOT NULL DEFAULT '{}',
    notification_settings JSONB,
    privacy_settings JSONB,

    -- Basic JSON structure validation (limited)
    CONSTRAINT ck_preferences_not_empty CHECK (jsonb_typeof(preferences) = 'object'),
    CONSTRAINT ck_notifications_structure CHECK (
        notification_settings IS NULL OR 
        (jsonb_typeof(notification_settings) = 'object' AND 
         notification_settings ? 'email' AND 
         notification_settings ? 'push')
    )
);

-- Product catalog with rigid structure
CREATE TABLE products (
    product_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(500) NOT NULL,
    description TEXT,
    category VARCHAR(100) NOT NULL,
    price DECIMAL(10,2) NOT NULL,
    currency VARCHAR(3) NOT NULL DEFAULT 'USD',
    availability_status VARCHAR(20) NOT NULL DEFAULT 'available',

    -- Product specifications (limited flexibility)
    specifications JSONB,
    dimensions JSONB,
    weight_grams INTEGER,

    -- Basic validation constraints
    CONSTRAINT ck_price_positive CHECK (price > 0),
    CONSTRAINT ck_currency_code CHECK (currency ~ '^[A-Z]{3}$'),
    CONSTRAINT ck_availability CHECK (availability_status IN ('available', 'out_of_stock', 'discontinued')),
    CONSTRAINT ck_weight_positive CHECK (weight_grams > 0),

    -- Limited JSON validation
    CONSTRAINT ck_specifications_object CHECK (
        specifications IS NULL OR jsonb_typeof(specifications) = 'object'
    ),

    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Attempting complex validation with triggers (maintenance overhead)
CREATE OR REPLACE FUNCTION validate_user_preferences()
RETURNS TRIGGER AS $$
BEGIN
    -- Manual JSON validation logic
    IF NEW.notification_settings IS NOT NULL THEN
        IF NOT (NEW.notification_settings ? 'email' AND 
                NEW.notification_settings ? 'push' AND
                NEW.notification_settings ? 'sms') THEN
            RAISE EXCEPTION 'notification_settings must contain email, push, and sms keys';
        END IF;

        -- Validate nested structure
        IF NOT (jsonb_typeof(NEW.notification_settings->'email') = 'object' AND
                NEW.notification_settings->'email' ? 'enabled' AND
                jsonb_typeof(NEW.notification_settings->'email'->'enabled') = 'boolean') THEN
            RAISE EXCEPTION 'notification_settings.email must have enabled boolean field';
        END IF;
    END IF;

    -- Privacy settings validation
    IF NEW.privacy_settings IS NOT NULL THEN
        IF NOT (NEW.privacy_settings ? 'profile_visibility' AND
                NEW.privacy_settings->>'profile_visibility' IN ('public', 'private', 'friends')) THEN
            RAISE EXCEPTION 'privacy_settings.profile_visibility must be public, private, or friends';
        END IF;
    END IF;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER validate_preferences_trigger
    BEFORE INSERT OR UPDATE ON user_preferences
    FOR EACH ROW EXECUTE FUNCTION validate_user_preferences();

-- Complex business rule validation (difficult to maintain)
CREATE OR REPLACE FUNCTION validate_product_business_rules()
RETURNS TRIGGER AS $$
BEGIN
    -- Price validation based on category
    IF NEW.category = 'electronics' AND NEW.price < 10.00 THEN
        RAISE EXCEPTION 'Electronics products must have minimum price of $10.00';
    END IF;

    IF NEW.category = 'luxury' AND NEW.price < 100.00 THEN
        RAISE EXCEPTION 'Luxury products must have minimum price of $100.00';
    END IF;

    -- Specifications validation by category
    IF NEW.category = 'electronics' THEN
        IF NEW.specifications IS NULL OR 
           NOT (NEW.specifications ? 'brand' AND NEW.specifications ? 'model') THEN
            RAISE EXCEPTION 'Electronics products must specify brand and model in specifications';
        END IF;
    END IF;

    -- Weight requirements
    IF NEW.category IN ('furniture', 'appliances') AND NEW.weight_grams IS NULL THEN
        RAISE EXCEPTION 'Furniture and appliances must specify weight';
    END IF;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER validate_product_rules_trigger
    BEFORE INSERT OR UPDATE ON products
    FOR EACH ROW EXECUTE FUNCTION validate_product_business_rules();

-- Attempt to query with validation checks (complex and inefficient)
WITH validation_summary AS (
    SELECT 
        'user_profiles' as table_name,
        COUNT(*) as total_records,
        COUNT(*) FILTER (WHERE email !~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$') as invalid_emails,
        COUNT(*) FILTER (WHERE age < 0 OR age > 150) as invalid_ages,
        COUNT(*) FILTER (WHERE account_status NOT IN ('active', 'inactive', 'suspended', 'pending')) as invalid_statuses,
        NULL::BIGINT as invalid_prices,
        NULL::BIGINT as invalid_currencies,
        NULL::BIGINT as invalid_specs
    FROM user_profiles

    UNION ALL

    SELECT 
        'products' as table_name,
        COUNT(*) as total_records,
        NULL::BIGINT as invalid_emails,
        NULL::BIGINT as invalid_ages,
        NULL::BIGINT as invalid_statuses,
        COUNT(*) FILTER (WHERE price <= 0) as invalid_prices,
        COUNT(*) FILTER (WHERE currency !~ '^[A-Z]{3}$') as invalid_currencies,
        COUNT(*) FILTER (WHERE specifications IS NOT NULL AND jsonb_typeof(specifications) != 'object') as invalid_specs
    FROM products
)
SELECT 
    table_name,
    total_records,
    invalid_emails,
    invalid_ages,
    invalid_statuses,
    invalid_prices,
    invalid_currencies,
    invalid_specs,

    -- Overall data quality score
    CASE 
        WHEN table_name = 'user_profiles' THEN
            (total_records - COALESCE(invalid_emails, 0) - COALESCE(invalid_ages, 0) - COALESCE(invalid_statuses, 0))::float / total_records * 100
        ELSE 
            (total_records - COALESCE(invalid_prices, 0) - COALESCE(invalid_currencies, 0) - COALESCE(invalid_specs, 0))::float / total_records * 100
    END as data_quality_percent

FROM validation_summary;

-- Problems with traditional validation approaches:
-- 1. Limited flexibility for evolving schemas and nested structures
-- 2. Complex trigger logic that's difficult to maintain and debug
-- 3. Performance overhead from extensive validation triggers
-- 4. Limited support for conditional validation based on document context
-- 5. No built-in support for array validation and nested object constraints
-- 6. Difficulty enforcing business rules that span multiple fields
-- 7. Poor integration with application development workflows
-- 8. Limited error messaging and validation feedback
-- 9. Complex migration procedures when validation rules change
-- 10. No support for schema versioning and gradual migration strategies

MongoDB Schema Validation provides comprehensive document validation capabilities:

// MongoDB Advanced Schema Validation - comprehensive document validation system
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017');
const db = client.db('advanced_validation_platform');

// Advanced MongoDB Schema Validation System
class MongoDBSchemaValidator {
  constructor(db, options = {}) {
    this.db = db;
    this.options = {
      validationLevel: options.validationLevel || 'strict', // strict, moderate
      validationAction: options.validationAction || 'error', // error, warn
      enableVersioning: options.enableVersioning !== false, // default true, allow explicit opt-out
      enableMetrics: options.enableMetrics !== false,       // default true, allow explicit opt-out
      customValidators: options.customValidators || new Map(),
      ...options
    };

    this.validationSchemas = new Map();
    this.validationMetrics = {
      validationsPassed: 0,
      validationsFailed: 0,
      validationErrors: [],
      lastUpdated: new Date()
    };

    this.setupValidationCollections();
  }

  async setupValidationCollections() {
    console.log('Setting up advanced schema validation system...');

    try {
      // User profiles with comprehensive validation
      await this.createValidatedCollection('user_profiles', {
        $jsonSchema: {
          bsonType: 'object',
          title: 'User Profile Validation Schema',
          required: ['email', 'username', 'profile_type', 'created_at'],
          additionalProperties: false,

          properties: {
            _id: {
              bsonType: 'objectId',
              description: 'Unique identifier for user profile'
            },

            // Basic user information with comprehensive validation
            email: {
              bsonType: 'string',
              pattern: '^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$',
              maxLength: 320,
              description: 'Valid email address following RFC 5322 standard'
            },

            username: {
              bsonType: 'string',
              pattern: '^[a-zA-Z0-9_-]{3,30}$',
              description: 'Username: 3-30 characters, alphanumeric, underscore, or dash only'
            },

            full_name: {
              bsonType: 'string',
              minLength: 2,
              maxLength: 200,
              description: 'Full name: 2-200 characters'
            },

            profile_type: {
              enum: ['individual', 'business', 'organization', 'developer'],
              description: 'Type of user profile'
            },

            // Age validation with business rules
            age: {
              bsonType: 'int',
              minimum: 13, // Minimum age for account creation
              maximum: 150,
              description: 'User age: must be between 13 and 150'
            },

            // Account status with workflow validation
            account_status: {
              bsonType: 'object',
              required: ['status', 'last_updated'],
              additionalProperties: false,
              properties: {
                status: {
                  enum: ['active', 'inactive', 'suspended', 'pending_verification', 'closed'],
                  description: 'Current account status'
                },
                last_updated: {
                  bsonType: 'date',
                  description: 'When status was last updated'
                },
                reason: {
                  bsonType: 'string',
                  maxLength: 500,
                  description: 'Reason for status change (optional)'
                },
                updated_by: {
                  bsonType: 'objectId',
                  description: 'ID of user/admin who updated status'
                }
              }
            },

            // Contact information with regional validation
            contact_info: {
              bsonType: 'object',
              additionalProperties: false,
              properties: {
                phone: {
                  bsonType: 'object',
                  properties: {
                    country_code: {
                      bsonType: 'string',
                      pattern: '^\\+[1-9][0-9]{0,3}$',
                      description: 'Country code with + prefix'
                    },
                    number: {
                      bsonType: 'string',
                      pattern: '^[0-9]{7,15}$',
                      description: 'Phone number: 7-15 digits'
                    },
                    verified: {
                      bsonType: 'bool',
                      description: 'Whether phone number is verified'
                    },
                    verified_at: {
                      bsonType: 'date',
                      description: 'When phone was verified'
                    }
                  },
                  required: ['country_code', 'number', 'verified']
                },

                address: {
                  bsonType: 'object',
                  properties: {
                    street: { bsonType: 'string', maxLength: 200 },
                    city: { bsonType: 'string', maxLength: 100 },
                    state_province: { bsonType: 'string', maxLength: 100 },
                    postal_code: { bsonType: 'string', maxLength: 20 },
                    country: {
                      bsonType: 'string',
                      pattern: '^[A-Z]{2}$', // ISO 3166-1 alpha-2 country codes
                      description: 'Two-letter country code (ISO 3166-1)'
                    }
                  },
                  required: ['city', 'country']
                }
              }
            },

            // Nested preferences with conditional validation
            preferences: {
              bsonType: 'object',
              additionalProperties: false,
              properties: {
                notifications: {
                  bsonType: 'object',
                  required: ['email', 'push', 'sms'],
                  additionalProperties: false,
                  properties: {
                    email: {
                      bsonType: 'object',
                      required: ['enabled'],
                      properties: {
                        enabled: { bsonType: 'bool' },
                        frequency: {
                          enum: ['immediate', 'daily', 'weekly', 'never'],
                          description: 'Email notification frequency'
                        },
                        categories: {
                          bsonType: 'array',
                          items: {
                            enum: ['security', 'marketing', 'product_updates', 'billing']
                          },
                          uniqueItems: true,
                          description: 'Notification categories to receive'
                        }
                      }
                    },
                    push: {
                      bsonType: 'object',
                      required: ['enabled'],
                      properties: {
                        enabled: { bsonType: 'bool' },
                        quiet_hours: {
                          bsonType: 'object',
                          properties: {
                            enabled: { bsonType: 'bool' },
                            start_time: {
                              bsonType: 'string',
                              pattern: '^([01]?[0-9]|2[0-3]):[0-5][0-9]$',
                              description: 'Start time in HH:MM format'
                            },
                            end_time: {
                              bsonType: 'string',
                              pattern: '^([01]?[0-9]|2[0-3]):[0-5][0-9]$',
                              description: 'End time in HH:MM format'
                            }
                          },
                          required: ['enabled']
                        }
                      }
                    },
                    sms: {
                      bsonType: 'object',
                      required: ['enabled'],
                      properties: {
                        enabled: { bsonType: 'bool' },
                        emergency_only: {
                          bsonType: 'bool',
                          description: 'Only send SMS for emergency notifications'
                        }
                      }
                    }
                  }
                },

                privacy: {
                  bsonType: 'object',
                  required: ['profile_visibility', 'data_processing_consent'],
                  properties: {
                    profile_visibility: {
                      enum: ['public', 'friends_only', 'private'],
                      description: 'Who can view this profile'
                    },
                    search_visibility: {
                      bsonType: 'bool',
                      description: 'Whether profile appears in search results'
                    },
                    data_processing_consent: {
                      bsonType: 'object',
                      required: ['analytics', 'marketing', 'given_at'],
                      properties: {
                        analytics: { bsonType: 'bool' },
                        marketing: { bsonType: 'bool' },
                        third_party_sharing: { bsonType: 'bool' },
                        given_at: { bsonType: 'date' },
                        ip_address: { bsonType: 'string' },
                        user_agent: { bsonType: 'string' }
                      }
                    }
                  }
                },

                // User interface preferences
                ui_preferences: {
                  bsonType: 'object',
                  properties: {
                    theme: {
                      enum: ['light', 'dark', 'auto'],
                      description: 'User interface theme preference'
                    },
                    language: {
                      bsonType: 'string',
                      pattern: '^[a-z]{2}(-[A-Z]{2})?$',
                      description: 'Language code (ISO 639-1 with optional country)'
                    },
                    timezone: {
                      bsonType: 'string',
                      description: 'IANA timezone identifier'
                    },
                    date_format: {
                      enum: ['MM/DD/YYYY', 'DD/MM/YYYY', 'YYYY-MM-DD'],
                      description: 'Preferred date display format'
                    }
                  }
                }
              }
            },

            // Security settings with validation
            security: {
              bsonType: 'object',
              properties: {
                two_factor_enabled: { bsonType: 'bool' },
                backup_codes: {
                  bsonType: 'array',
                  maxItems: 10,
                  items: {
                    bsonType: 'string',
                    pattern: '^[A-Z0-9]{8}$',
                    description: '8-character backup codes'
                  },
                  uniqueItems: true
                },
                security_questions: {
                  bsonType: 'array',
                  maxItems: 5,
                  items: {
                    bsonType: 'object',
                    required: ['question', 'answer_hash'],
                    properties: {
                      question: {
                        bsonType: 'string',
                        maxLength: 200
                      },
                      answer_hash: {
                        bsonType: 'string',
                        description: 'Hashed security question answer'
                      },
                      created_at: { bsonType: 'date' }
                    }
                  }
                },
                login_restrictions: {
                  bsonType: 'object',
                  properties: {
                    allowed_countries: {
                      bsonType: 'array',
                      items: {
                        bsonType: 'string',
                        pattern: '^[A-Z]{2}$'
                      },
                      description: 'ISO country codes where login is allowed'
                    },
                    require_device_verification: { bsonType: 'bool' }
                  }
                }
              }
            },

            // Audit trail information
            created_at: {
              bsonType: 'date',
              description: 'Account creation timestamp'
            },

            updated_at: {
              bsonType: 'date',
              description: 'Last profile update timestamp'
            },

            created_by: {
              bsonType: 'objectId',
              description: 'ID of user/system that created this profile'
            },

            // Schema versioning
            schema_version: {
              bsonType: 'string',
              pattern: '^\\d+\\.\\d+\\.\\d+$',
              description: 'Schema version (semantic versioning)'
            }
          }
        }
      }, {
        validationLevel: 'strict',
        validationAction: 'error'
      });

      // Products collection with complex business rule validation
      await this.createValidatedCollection('products', {
        $jsonSchema: {
          bsonType: 'object',
          title: 'Product Validation Schema',
          required: ['name', 'category', 'pricing', 'availability', 'created_at'],
          additionalProperties: false,

          properties: {
            _id: { bsonType: 'objectId' },

            // Basic product information
            name: {
              bsonType: 'string',
              minLength: 2,
              maxLength: 500,
              description: 'Product name: 2-500 characters'
            },

            description: {
              bsonType: 'string',
              maxLength: 5000,
              description: 'Product description: max 5000 characters'
            },

            sku: {
              bsonType: 'string',
              pattern: '^[A-Z0-9]{3,20}$',
              description: 'Stock Keeping Unit: 3-20 uppercase alphanumeric characters'
            },

            category: {
              bsonType: 'object',
              required: ['primary', 'path'],
              properties: {
                primary: {
                  enum: ['electronics', 'clothing', 'home_garden', 'books', 'sports', 'automotive', 'health', 'toys'],
                  description: 'Primary product category'
                },
                secondary: {
                  bsonType: 'string',
                  maxLength: 100,
                  description: 'Secondary category classification'
                },
                path: {
                  bsonType: 'array',
                  items: { bsonType: 'string' },
                  minItems: 1,
                  maxItems: 5,
                  description: 'Category hierarchy path'
                },
                tags: {
                  bsonType: 'array',
                  items: {
                    bsonType: 'string',
                    pattern: '^[a-z0-9_-]+$',
                    maxLength: 50
                  },
                  maxItems: 20,
                  uniqueItems: true,
                  description: 'Product tags for search and filtering'
                }
              }
            },

            // Complex pricing structure with conditional validation
            pricing: {
              bsonType: 'object',
              required: ['base_price', 'currency', 'pricing_model'],
              additionalProperties: false,
              properties: {
                base_price: {
                  bsonType: 'decimal',
                  minimum: 0.01,
                  description: 'Base price must be positive'
                },
                currency: {
                  bsonType: 'string',
                  pattern: '^[A-Z]{3}$',
                  description: 'ISO 4217 currency code'
                },
                pricing_model: {
                  enum: ['fixed', 'tiered', 'subscription', 'auction', 'negotiable'],
                  description: 'Product pricing model'
                },

                // Conditional pricing based on model
                tier_pricing: {
                  bsonType: 'array',
                  items: {
                    bsonType: 'object',
                    required: ['min_quantity', 'price_per_unit'],
                    properties: {
                      min_quantity: {
                        bsonType: 'int',
                        minimum: 1
                      },
                      price_per_unit: {
                        bsonType: 'decimal',
                        minimum: 0.01
                      },
                      description: { bsonType: 'string', maxLength: 200 }
                    }
                  },
                  description: 'Tiered pricing structure (required if pricing_model is tiered)'
                },

                subscription_options: {
                  bsonType: 'object',
                  properties: {
                    billing_cycles: {
                      bsonType: 'array',
                      items: {
                        enum: ['monthly', 'quarterly', 'annually', 'biennial']
                      },
                      minItems: 1
                    },
                    trial_period_days: {
                      bsonType: 'int',
                      minimum: 0,
                      maximum: 365
                    }
                  },
                  required: ['billing_cycles'],
                  description: 'Subscription details (required if pricing_model is subscription)'
                },

                discounts: {
                  bsonType: 'array',
                  items: {
                    bsonType: 'object',
                    required: ['type', 'value', 'valid_from', 'valid_until'],
                    properties: {
                      type: {
                        enum: ['percentage', 'fixed_amount', 'buy_x_get_y'],
                        description: 'Type of discount'
                      },
                      value: {
                        bsonType: 'decimal',
                        minimum: 0,
                        description: 'Discount value (percentage or amount)'
                      },
                      min_purchase_amount: {
                        bsonType: 'decimal',
                        minimum: 0
                      },
                      valid_from: { bsonType: 'date' },
                      valid_until: { bsonType: 'date' },
                      max_uses: {
                        bsonType: 'int',
                        minimum: 1
                      },
                      code: {
                        bsonType: 'string',
                        pattern: '^[A-Z0-9]{4,20}$'
                      }
                    }
                  },
                  maxItems: 10
                }
              }
            },

            // Availability and inventory
            availability: {
              bsonType: 'object',
              required: ['status', 'stock_tracking'],
              properties: {
                status: {
                  enum: ['available', 'out_of_stock', 'discontinued', 'coming_soon', 'back_order'],
                  description: 'Product availability status'
                },
                stock_tracking: {
                  bsonType: 'object',
                  required: ['enabled'],
                  properties: {
                    enabled: { bsonType: 'bool' },
                    current_stock: {
                      bsonType: 'int',
                      minimum: 0,
                      description: 'Current stock quantity (required if tracking enabled)'
                    },
                    reserved_stock: {
                      bsonType: 'int',
                      minimum: 0,
                      description: 'Stock reserved for pending orders'
                    },
                    low_stock_threshold: {
                      bsonType: 'int',
                      minimum: 0,
                      description: 'Threshold for low stock alerts'
                    },
                    max_order_quantity: {
                      bsonType: 'int',
                      minimum: 1,
                      description: 'Maximum quantity per order'
                    }
                  }
                },
                estimated_delivery: {
                  bsonType: 'object',
                  properties: {
                    min_days: { bsonType: 'int', minimum: 0 },
                    max_days: { bsonType: 'int', minimum: 0 },
                    shipping_regions: {
                      bsonType: 'array',
                      items: {
                        bsonType: 'string',
                        pattern: '^[A-Z]{2}$'
                      }
                    }
                  }
                }
              }
            },

            // Product specifications with category-specific validation
            specifications: {
              bsonType: 'object',
              properties: {
                dimensions: {
                  bsonType: 'object',
                  required: ['unit'],
                  properties: {
                    length: { bsonType: 'decimal', minimum: 0 },
                    width: { bsonType: 'decimal', minimum: 0 },
                    height: { bsonType: 'decimal', minimum: 0 },
                    weight: { bsonType: 'decimal', minimum: 0 },
                    unit: {
                      enum: ['metric', 'imperial'],
                      description: 'Measurement unit system'
                    }
                  }
                },

                materials: {
                  bsonType: 'array',
                  items: {
                    bsonType: 'object',
                    required: ['name', 'percentage'],
                    properties: {
                      name: { bsonType: 'string', maxLength: 100 },
                      percentage: {
                        bsonType: 'decimal',
                        minimum: 0,
                        maximum: 100
                      },
                      certified: { bsonType: 'bool' },
                      certification: { bsonType: 'string', maxLength: 200 }
                    }
                  }
                },

                care_instructions: {
                  bsonType: 'array',
                  items: { bsonType: 'string', maxLength: 200 },
                  maxItems: 10
                },

                warranty: {
                  bsonType: 'object',
                  properties: {
                    duration_months: {
                      bsonType: 'int',
                      minimum: 0,
                      maximum: 600 // 50 years max
                    },
                    type: {
                      enum: ['manufacturer', 'store', 'extended', 'none']
                    },
                    coverage: {
                      bsonType: 'array',
                      items: {
                        enum: ['defects', 'wear_and_tear', 'accidental_damage', 'theft']
                      }
                    }
                  }
                },

                // Category-specific specifications (conditional validation)
                electronics: {
                  bsonType: 'object',
                  properties: {
                    brand: {
                      bsonType: 'string',
                      minLength: 2,
                      maxLength: 100,
                      description: 'Electronics must have a brand'
                    },
                    model: {
                      bsonType: 'string',
                      minLength: 1,
                      maxLength: 100,
                      description: 'Electronics must have a model'
                    },
                    power_requirements: {
                      bsonType: 'object',
                      properties: {
                        voltage: { bsonType: 'int', minimum: 1 },
                        wattage: { bsonType: 'int', minimum: 1 },
                        frequency: { bsonType: 'int', minimum: 50, maximum: 60 }
                      }
                    },
                    connectivity: {
                      bsonType: 'array',
                      items: {
                        enum: ['wifi', 'bluetooth', 'ethernet', 'usb', 'hdmi', 'aux', 'nfc']
                      }
                    }
                  }
                }
              }
            },

            // Quality and compliance
            quality_control: {
              bsonType: 'object',
              properties: {
                certifications: {
                  bsonType: 'array',
                  items: {
                    bsonType: 'object',
                    required: ['name', 'issuing_body', 'valid_until'],
                    properties: {
                      name: { bsonType: 'string', maxLength: 200 },
                      issuing_body: { bsonType: 'string', maxLength: 200 },
                      certificate_number: { bsonType: 'string', maxLength: 100 },
                      valid_until: { bsonType: 'date' },
                      document_url: { bsonType: 'string', maxLength: 500 }
                    }
                  }
                },
                safety_warnings: {
                  bsonType: 'array',
                  items: {
                    bsonType: 'object',
                    required: ['type', 'description'],
                    properties: {
                      type: {
                        enum: ['choking_hazard', 'electrical', 'chemical', 'fire', 'sharp_edges', 'other']
                      },
                      description: { bsonType: 'string', maxLength: 500 },
                      age_restriction: { bsonType: 'int', minimum: 0, maximum: 21 }
                    }
                  }
                }
              }
            },

            // Audit and metadata
            created_at: { bsonType: 'date' },
            updated_at: { bsonType: 'date' },
            created_by: { bsonType: 'objectId' },
            last_modified_by: { bsonType: 'objectId' },
            schema_version: {
              bsonType: 'string',
              pattern: '^\\d+\\.\\d+\\.\\d+$'
            }
          }
        }
      }, {
        validationLevel: 'strict',
        validationAction: 'error'
      });

      // Order validation with complex business rules
      await this.createValidatedCollection('orders', {
        $jsonSchema: {
          bsonType: 'object',
          title: 'Order Validation Schema',
          required: ['customer_id', 'items', 'totals', 'status', 'created_at'],
          additionalProperties: false,

          properties: {
            _id: { bsonType: 'objectId' },

            order_number: {
              bsonType: 'string',
              pattern: '^ORD-[0-9]{8}-[A-Z]{3}$',
              description: 'Order number format: ORD-12345678-ABC'
            },

            customer_id: {
              bsonType: 'objectId',
              description: 'Reference to customer profile'
            },

            // Order items with validation
            items: {
              bsonType: 'array',
              minItems: 1,
              maxItems: 100,
              items: {
                bsonType: 'object',
                required: ['product_id', 'quantity', 'unit_price', 'total_price'],
                additionalProperties: false,
                properties: {
                  product_id: { bsonType: 'objectId' },
                  product_name: { bsonType: 'string', maxLength: 500 },
                  sku: { bsonType: 'string' },
                  quantity: {
                    bsonType: 'int',
                    minimum: 1,
                    maximum: 1000
                  },
                  unit_price: {
                    bsonType: 'decimal',
                    minimum: 0
                  },
                  total_price: {
                    bsonType: 'decimal',
                    minimum: 0
                  },
                  discounts_applied: {
                    bsonType: 'array',
                    items: {
                      bsonType: 'object',
                      required: ['type', 'amount'],
                      properties: {
                        type: { bsonType: 'string' },
                        amount: { bsonType: 'decimal' },
                        code: { bsonType: 'string' }
                      }
                    }
                  },
                  customizations: {
                    bsonType: 'object',
                    description: 'Product customization options'
                  }
                }
              },
              description: 'Order must contain 1-100 items'
            },

            // Order totals with validation
            totals: {
              bsonType: 'object',
              required: ['subtotal', 'tax_amount', 'shipping_cost', 'total_amount', 'currency'],
              additionalProperties: false,
              properties: {
                subtotal: {
                  bsonType: 'decimal',
                  minimum: 0,
                  description: 'Subtotal before taxes and shipping'
                },
                tax_amount: {
                  bsonType: 'decimal',
                  minimum: 0,
                  description: 'Total tax amount'
                },
                tax_breakdown: {
                  bsonType: 'array',
                  items: {
                    bsonType: 'object',
                    required: ['type', 'rate', 'amount'],
                    properties: {
                      type: { bsonType: 'string', maxLength: 50 },
                      rate: { bsonType: 'decimal', minimum: 0, maximum: 1 },
                      amount: { bsonType: 'decimal', minimum: 0 }
                    }
                  }
                },
                shipping_cost: {
                  bsonType: 'decimal',
                  minimum: 0,
                  description: 'Shipping and handling cost'
                },
                discount_amount: {
                  bsonType: 'decimal',
                  minimum: 0,
                  description: 'Total discount amount'
                },
                total_amount: {
                  bsonType: 'decimal',
                  minimum: 0.01,
                  description: 'Final order total'
                },
                currency: {
                  bsonType: 'string',
                  pattern: '^[A-Z]{3}$',
                  description: 'ISO 4217 currency code'
                }
              }
            },

            // Order status workflow
            status: {
              bsonType: 'object',
              required: ['current', 'history'],
              additionalProperties: false,
              properties: {
                current: {
                  enum: ['pending', 'confirmed', 'processing', 'shipped', 'delivered', 'cancelled', 'refunded'],
                  description: 'Current order status'
                },
                history: {
                  bsonType: 'array',
                  minItems: 1,
                  items: {
                    bsonType: 'object',
                    required: ['status', 'timestamp'],
                    properties: {
                      status: {
                        enum: ['pending', 'confirmed', 'processing', 'shipped', 'delivered', 'cancelled', 'refunded']
                      },
                      timestamp: { bsonType: 'date' },
                      notes: { bsonType: 'string', maxLength: 1000 },
                      updated_by: { bsonType: 'objectId' }
                    }
                  }
                }
              }
            },

            // Shipping information
            shipping: {
              bsonType: 'object',
              required: ['method', 'address'],
              properties: {
                method: {
                  bsonType: 'object',
                  required: ['carrier', 'service_type', 'estimated_delivery'],
                  properties: {
                    carrier: { bsonType: 'string', maxLength: 100 },
                    service_type: { bsonType: 'string', maxLength: 100 },
                    tracking_number: { bsonType: 'string', maxLength: 100 },
                    estimated_delivery: { bsonType: 'date' },
                    actual_delivery: { bsonType: 'date' }
                  }
                },
                address: {
                  bsonType: 'object',
                  required: ['recipient_name', 'street_address', 'city', 'country'],
                  properties: {
                    recipient_name: { bsonType: 'string', maxLength: 200 },
                    street_address: { bsonType: 'string', maxLength: 500 },
                    city: { bsonType: 'string', maxLength: 100 },
                    state_province: { bsonType: 'string', maxLength: 100 },
                    postal_code: { bsonType: 'string', maxLength: 20 },
                    country: {
                      bsonType: 'string',
                      pattern: '^[A-Z]{2}$',
                      description: 'ISO 3166-1 alpha-2 country code'
                    },
                    special_instructions: { bsonType: 'string', maxLength: 500 }
                  }
                }
              }
            },

            // Payment information
            payment: {
              bsonType: 'object',
              required: ['method', 'status'],
              properties: {
                method: {
                  enum: ['credit_card', 'debit_card', 'paypal', 'bank_transfer', 'digital_wallet', 'cryptocurrency', 'cash_on_delivery'],
                  description: 'Payment method used'
                },
                status: {
                  enum: ['pending', 'authorized', 'captured', 'failed', 'refunded', 'partially_refunded'],
                  description: 'Payment processing status'
                },
                transaction_id: { bsonType: 'string', maxLength: 200 },
                authorization_code: { bsonType: 'string', maxLength: 100 },
                payment_processor: { bsonType: 'string', maxLength: 100 },
                processed_at: { bsonType: 'date' },
                failure_reason: { bsonType: 'string', maxLength: 500 },
                refund_details: {
                  bsonType: 'array',
                  items: {
                    bsonType: 'object',
                    required: ['amount', 'reason', 'processed_at'],
                    properties: {
                      amount: { bsonType: 'decimal', minimum: 0 },
                      reason: { bsonType: 'string', maxLength: 500 },
                      processed_at: { bsonType: 'date' },
                      refund_id: { bsonType: 'string' }
                    }
                  }
                }
              }
            },

            // Audit trail
            created_at: { bsonType: 'date' },
            updated_at: { bsonType: 'date' },
            schema_version: {
              bsonType: 'string',
              pattern: '^\\d+\\.\\d+\\.\\d+$'
            }
          }
        }
      }, {
        validationLevel: 'strict',
        validationAction: 'error'
      });

      console.log('Advanced validation schemas created successfully');
      return true;

    } catch (error) {
      console.error('Error setting up validation collections:', error);
      throw error;
    }
  }

  async createValidatedCollection(collectionName, validationSchema, options = {}) {
    console.log(`Creating validated collection: ${collectionName}`);

    try {
      // Check if collection already exists
      const collections = await this.db.listCollections({ name: collectionName }).toArray();

      if (collections.length > 0) {
        console.log(`Collection ${collectionName} already exists, updating validation`);

        // Update existing collection validation
        await this.db.command({
          collMod: collectionName,
          validator: validationSchema,
          validationLevel: options.validationLevel || this.options.validationLevel,
          validationAction: options.validationAction || this.options.validationAction
        });
      } else {
        // Create new collection with validation
        await this.db.createCollection(collectionName, {
          validator: validationSchema,
          validationLevel: options.validationLevel || this.options.validationLevel,
          validationAction: options.validationAction || this.options.validationAction
        });
      }

      // Store schema for versioning
      this.validationSchemas.set(collectionName, {
        schema: validationSchema,
        version: options.version || '1.0.0',
        createdAt: new Date(),
        ...options
      });

      console.log(`Validation schema applied to collection: ${collectionName}`);
      return true;

    } catch (error) {
      console.error(`Error creating validated collection ${collectionName}:`, error);
      throw error;
    }
  }

  async validateDocument(collectionName, document) {
    console.log(`Validating document for collection: ${collectionName}`);

    try {
      const schema = this.validationSchemas.get(collectionName);
      if (!schema) {
        throw new Error(`No validation schema found for collection: ${collectionName}`);
      }

      // Perform pre-validation checks
      const preValidationResult = await this.performPreValidation(collectionName, document);
      if (!preValidationResult.valid) {
        this.validationMetrics.validationsFailed++;
        return {
          valid: false,
          errors: preValidationResult.errors,
          warnings: preValidationResult.warnings || []
        };
      }

      // Test document against schema by attempting insertion with validation
      const testCollection = this.db.collection(collectionName);

      try {
        // Use a transaction so validation runs server-side without persisting
        // the document; sessions must come from the MongoClient instance and
        // transactions require a replica set or sharded cluster.
        const session = this.db.client.startSession();

        try {
          session.startTransaction();
          await testCollection.insertOne(document, { session });
          // Abort the transaction so the test document is never committed
          await session.abortTransaction();
        } finally {
          await session.endSession();
        }

        this.validationMetrics.validationsPassed++;
        return {
          valid: true,
          errors: [],
          warnings: preValidationResult.warnings || []
        };

      } catch (validationError) {
        this.validationMetrics.validationsFailed++;
        this.validationMetrics.validationErrors.push({
          collection: collectionName,
          error: validationError.message,
          document: document,
          timestamp: new Date()
        });

        return {
          valid: false,
          errors: [this.parseValidationError(validationError)],
          warnings: preValidationResult.warnings || []
        };
      }

    } catch (error) {
      console.error(`Error validating document:`, error);
      return {
        valid: false,
        errors: [`Validation system error: ${error.message}`],
        warnings: []
      };
    }
  }

  async performPreValidation(collectionName, document) {
    // Custom pre-validation logic for business rules
    const warnings = [];
    const errors = [];

    if (collectionName === 'products') {
      // Category-specific validation
      if (document.category?.primary === 'electronics' && !document.specifications?.electronics) {
        errors.push('Electronics products must include electronics specifications');
      }

      // Pricing model validation
      if (document.pricing?.pricing_model === 'tiered' && !document.pricing?.tier_pricing) {
        errors.push('Tiered pricing model requires tier_pricing configuration');
      }

      if (document.pricing?.pricing_model === 'subscription' && !document.pricing?.subscription_options) {
        errors.push('Subscription pricing model requires subscription_options configuration');
      }

      // Stock validation
      if (document.availability?.stock_tracking?.enabled && 
          document.availability?.stock_tracking?.current_stock === undefined) {
        errors.push('Stock tracking enabled but current_stock not provided');
      }

      // Price validation by category
      if (document.category?.primary === 'electronics' && document.pricing?.base_price < 1.00) {
        warnings.push('Electronics products with price below $1.00 are unusual');
      }

      // Warranty validation
      if (document.specifications?.warranty?.duration_months > 120) {
        warnings.push('Warranty period over 10 years is unusual');
      }
    }

    if (collectionName === 'orders') {
      // Order total validation
      const itemsTotal = document.items?.reduce((sum, item) => sum + parseFloat(item.total_price), 0) || 0;
      const calculatedTotal = itemsTotal + parseFloat(document.totals?.tax_amount || 0) + 
                             parseFloat(document.totals?.shipping_cost || 0) - 
                             parseFloat(document.totals?.discount_amount || 0);

      if (Math.abs(calculatedTotal - parseFloat(document.totals?.total_amount || 0)) > 0.01) {
        errors.push('Order total does not match items plus tax and shipping minus discounts');
      }

      // Status workflow validation
      if (document.status?.current === 'delivered' && !document.shipping?.method?.actual_delivery) {
        warnings.push('Order marked as delivered but no actual delivery date provided');
      }

      if (document.payment?.status === 'failed' && document.status?.current !== 'cancelled') {
        errors.push('Order with failed payment must be cancelled');
      }
    }

    if (collectionName === 'user_profiles') {
      // Age and contact validation
      if (document.age && document.age < 18 && document.contact_info?.phone) {
        warnings.push('Phone contact for users under 18 may require parental consent');
      }

      // Privacy compliance validation
      if (document.preferences?.privacy?.data_processing_consent?.marketing && 
          !document.preferences?.privacy?.data_processing_consent?.given_at) {
        errors.push('Marketing consent requires timestamp when consent was given');
      }

      // Security settings validation
      if (document.security?.two_factor_enabled && !document.security?.backup_codes) {
        warnings.push('Two-factor authentication enabled but no backup codes provided');
      }
    }

    return {
      valid: errors.length === 0,
      errors: errors,
      warnings: warnings
    };
  }

  parseValidationError(error) {
    // Parse MongoDB validation error messages into user-friendly format
    let message = error.message;

    // Extract specific field errors from MongoDB validation messages
    const fieldMatch = message.match(/Document failed validation.*properties\.(\w+)/);
    if (fieldMatch) {
      const field = fieldMatch[1];
      return `Validation failed for field '${field}': ${message}`;
    }

    // Extract type errors
    const typeMatch = message.match(/Expected type (\w+) but found (\w+)/);
    if (typeMatch) {
      return `Type mismatch: Expected ${typeMatch[1]} but received ${typeMatch[2]}`;
    }

    // Extract pattern errors
    const patternMatch = message.match(/String does not match regex pattern/);
    if (patternMatch) {
      return 'Value does not match required format pattern';
    }

    return message;
  }

  async getValidationMetrics(collectionName = null) {
    const metrics = {
      ...this.validationMetrics,
      collectionsWithValidation: this.validationSchemas.size,
      schemas: {}
    };

    // Add schema-specific metrics
    for (const [name, schema] of this.validationSchemas.entries()) {
      if (!collectionName || name === collectionName) {
        metrics.schemas[name] = {
          version: schema.version,
          createdAt: schema.createdAt,
          validationLevel: schema.validationLevel,
          validationAction: schema.validationAction
        };
      }
    }

    // Add recent validation errors
    if (collectionName) {
      metrics.recentErrors = this.validationMetrics.validationErrors
        .filter(error => error.collection === collectionName)
        .slice(-10);
    } else {
      metrics.recentErrors = this.validationMetrics.validationErrors.slice(-20);
    }

    return metrics;
  }

  async updateValidationSchema(collectionName, newSchema, version) {
    console.log(`Updating validation schema for collection: ${collectionName}`);

    try {
      // Backup current schema
      const currentSchema = this.validationSchemas.get(collectionName);
      if (currentSchema) {
        await this.backupSchema(collectionName, currentSchema);
      }

      // Update collection validation
      await this.db.command({
        collMod: collectionName,
        validator: newSchema,
        validationLevel: this.options.validationLevel,
        validationAction: this.options.validationAction
      });

      // Update stored schema
      this.validationSchemas.set(collectionName, {
        schema: newSchema,
        version: version,
        createdAt: new Date(),
        previousVersion: currentSchema?.version
      });

      console.log(`Schema updated for collection: ${collectionName} to version: ${version}`);
      return true;

    } catch (error) {
      console.error(`Error updating schema for ${collectionName}:`, error);
      throw error;
    }
  }

  async backupSchema(collectionName, schema) {
    // Store schema backup for rollback purposes
    const backupCollection = this.db.collection('_schema_backups');

    await backupCollection.insertOne({
      collectionName: collectionName,
      schema: schema,
      backedUpAt: new Date()
    });

    console.log(`Schema backed up for collection: ${collectionName}`);
  }

  async generateValidationReport() {
    console.log('Generating comprehensive validation report...');

    const report = {
      reportId: require('crypto').randomUUID(),
      generatedAt: new Date(),

      // Overall metrics
      overview: {
        totalCollectionsWithValidation: this.validationSchemas.size,
        totalValidationsPassed: this.validationMetrics.validationsPassed,
        totalValidationsFailed: this.validationMetrics.validationsFailed,
        successRate: this.validationMetrics.validationsPassed + this.validationMetrics.validationsFailed > 0 ?
          (this.validationMetrics.validationsPassed / (this.validationMetrics.validationsPassed + this.validationMetrics.validationsFailed) * 100).toFixed(2) :
          0,
        lastUpdated: this.validationMetrics.lastUpdated
      },

      // Collection-specific details
      collections: {},

      // Error analysis
      errorAnalysis: {
        totalErrors: this.validationMetrics.validationErrors.length,
        errorsByCollection: {},
        commonErrors: {},
        recentErrors: this.validationMetrics.validationErrors.slice(-10)
      },

      // Recommendations
      recommendations: []
    };

    // Analyze each collection
    for (const [collectionName, schema] of this.validationSchemas.entries()) {
      const collectionErrors = this.validationMetrics.validationErrors
        .filter(error => error.collection === collectionName);

      report.collections[collectionName] = {
        schemaVersion: schema.version,
        validationLevel: schema.validationLevel,
        validationAction: schema.validationAction,
        errorCount: collectionErrors.length,
        lastError: collectionErrors.length > 0 ? collectionErrors[collectionErrors.length - 1] : null
      };

      report.errorAnalysis.errorsByCollection[collectionName] = collectionErrors.length;

      // Generate recommendations
      if (collectionErrors.length > 10) {
        report.recommendations.push({
          type: 'high_error_rate',
          collection: collectionName,
          message: `Collection ${collectionName} has ${collectionErrors.length} validation errors. Consider reviewing schema requirements.`
        });
      }

      if (schema.validationLevel === 'moderate') {
        report.recommendations.push({
          type: 'validation_level',
          collection: collectionName,
          message: `Collection ${collectionName} uses moderate validation. Consider upgrading to strict for better data integrity.`
        });
      }
    }

    // Analyze common error patterns
    const errorMessages = this.validationMetrics.validationErrors.map(error => error.error);
    const errorCounts = {};
    errorMessages.forEach(msg => {
      const key = msg.substring(0, 50) + '...';
      errorCounts[key] = (errorCounts[key] || 0) + 1;
    });

    report.errorAnalysis.commonErrors = Object.entries(errorCounts)
      .sort(([,a], [,b]) => b - a)
      .slice(0, 10)
      .reduce((obj, [key, count]) => ({ ...obj, [key]: count }), {});

    return report;
  }
}

// Example usage and testing
const validationSystem = new MongoDBSchemaValidator(db, {
  validationLevel: 'strict',
  validationAction: 'error',
  enableVersioning: true,
  enableMetrics: true
});
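
// Hypothetical usage sketch: validate a draft document and generate a report.
// Assumes `db` is an already-connected Db instance and that a 'products'
// validation schema (like the one shown earlier) has been registered.
async function runValidationExample() {
  const draftProduct = {
    name: 'Wireless Headphones',
    category: { primary: 'electronics', path: ['electronics', 'audio'] },
    pricing: { base_price: 79.99, currency: 'USD', pricing_model: 'fixed' },
    specifications: { electronics: { brand: 'Acme', model: 'WH-100' } }
  };

  const result = await validationSystem.validateDocument('products', draftProduct);
  if (!result.valid) {
    console.warn('Validation failed:', result.errors);
  }

  const report = await validationSystem.generateValidationReport();
  console.log(`Validation success rate: ${report.overview.successRate}%`);
}

// runValidationExample().catch(console.error);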

// Benefits of MongoDB Schema Validation:
// - Database-level data integrity enforcement
// - Flexible validation rules with conditional logic
// - Support for complex nested document validation
// - Real-time validation with detailed error reporting
// - Schema versioning and migration capabilities
// - Business rule enforcement at the database level
// - Integration with application development workflows
// - Comprehensive validation metrics and reporting
// - Support for gradual migration and validation levels
// - Advanced error handling and user-friendly feedback

module.exports = {
  MongoDBSchemaValidator
};

Understanding MongoDB Schema Validation Architecture

Advanced Validation Patterns and Business Rule Enforcement

Implement sophisticated validation strategies for production MongoDB deployments:

// Production-ready MongoDB Schema Validation with advanced business rules
class ProductionSchemaValidationManager extends MongoDBSchemaValidator {
  constructor(db, productionConfig) {
    super(db, productionConfig);

    this.productionConfig = {
      ...productionConfig,
      enableConditionalValidation: true,
      enableCrossCollectionValidation: true,
      enableDataMigration: true,
      enableComplianceValidation: true,
      enablePerformanceOptimization: true
    };

    this.setupProductionValidationFeatures();
    this.initializeComplianceFrameworks();
    this.setupValidationMiddleware();
  }

  async implementAdvancedValidationPatterns() {
    console.log('Implementing advanced validation patterns...');

    // Conditional validation based on document context
    const conditionalValidationRules = {
      // User profile validation based on account type
      userProfileConditional: {
        $or: [
          {
            profile_type: 'individual',
            $and: [
              { age: { $gte: 13 } },
              { full_name: { $exists: true } }
            ]
          },
          {
            profile_type: 'business',
            $and: [
              { business_info: { $exists: true } },
              { 'business_info.registration_number': { $exists: true } },
              { 'business_info.tax_id': { $exists: true } }
            ]
          },
          {
            profile_type: 'organization',
            $and: [
              { organization_info: { $exists: true } },
              { 'organization_info.type': { $in: ['nonprofit', 'government', 'educational'] } }
            ]
          }
        ]
      },

      // Product validation based on category
      productCategoryConditional: {
        $or: [
          {
            'category.primary': 'electronics',
            $and: [
              { 'specifications.electronics.brand': { $exists: true } },
              { 'specifications.electronics.model': { $exists: true } },
              { 'specifications.warranty.duration_months': { $gte: 12 } }
            ]
          },
          {
            'category.primary': 'clothing',
            $and: [
              { 'specifications.materials': { $exists: true } },
              { 'specifications.care_instructions': { $exists: true } }
            ]
          },
          {
            'category.primary': { $in: ['food', 'supplements'] },
            $and: [
              { 'specifications.nutrition_facts': { $exists: true } },
              { 'specifications.allergen_info': { $exists: true } },
              { expiration_date: { $exists: true } }
            ]
          }
        ]
      }
    };

    return await this.deployConditionalValidationRules(conditionalValidationRules);
  }

  async setupComplianceValidationFrameworks() {
    console.log('Setting up compliance validation frameworks...');

    const complianceFrameworks = {
      // GDPR compliance validation
      gdprCompliance: {
        userDataProcessing: {
          $and: [
            { 'preferences.privacy.data_processing_consent.given_at': { $exists: true } },
            { 'preferences.privacy.data_processing_consent.ip_address': { $exists: true } },
            { 'preferences.privacy.data_processing_consent.analytics': { $type: 'bool' } },
            { 'preferences.privacy.data_processing_consent.marketing': { $type: 'bool' } }
          ]
        },
        dataRetention: {
          $or: [
            { account_status: 'active' },
            { 
              $and: [
                { account_status: 'closed' },
                { data_retention_expiry: { $gte: new Date() } }
              ]
            }
          ]
        }
      },

      // PCI DSS compliance for payment data
      pciCompliance: {
        paymentDataHandling: {
          $and: [
            { 'payment.card_number': { $exists: false } }, // No plain text card numbers
            { 'payment.cvv': { $exists: false } }, // No CVV storage
            { 'payment.transaction_id': { $exists: true } },
            { 'payment.payment_processor': { $exists: true } }
          ]
        }
      },

      // SOX compliance for financial records
      soxCompliance: {
        financialRecordIntegrity: {
          $and: [
            { audit_trail: { $exists: true } },
            { 'audit_trail.created_by': { $exists: true } },
            { 'audit_trail.last_modified_by': { $exists: true } },
            { 'audit_trail.approval_chain': { $exists: true } }
          ]
        }
      }
    };

    return await this.implementComplianceFrameworks(complianceFrameworks);
  }

  async performCrossCollectionValidation(collectionName, document) {
    console.log(`Performing cross-collection validation for: ${collectionName}`);

    const crossValidationRules = [];

    if (collectionName === 'orders') {
      // Validate customer exists
      const customer = await this.db.collection('user_profiles')
        .findOne({ _id: document.customer_id });

      if (!customer) {
        crossValidationRules.push({
          field: 'customer_id',
          error: 'Customer does not exist'
        });
      } else if (customer.account_status?.status !== 'active') {
        crossValidationRules.push({
          field: 'customer_id',
          error: 'Customer account is not active'
        });
      }

      // Validate products exist and are available
      for (const item of document.items || []) {
        const product = await this.db.collection('products')
          .findOne({ _id: item.product_id });

        if (!product) {
          crossValidationRules.push({
            field: `items.product_id`,
            error: `Product ${item.product_id} does not exist`
          });
        } else {
          // Check product availability
          if (product.availability?.status !== 'available') {
            crossValidationRules.push({
              field: `items.product_id`,
              error: `Product ${product.name} is not available`
            });
          }

          // Check stock if tracking is enabled
          if (product.availability?.stock_tracking?.enabled) {
            const availableStock = product.availability.stock_tracking.current_stock - 
                                  (product.availability.stock_tracking.reserved_stock || 0);

            if (item.quantity > availableStock) {
              crossValidationRules.push({
                field: `items.quantity`,
                error: `Insufficient stock for ${product.name}. Available: ${availableStock}, Requested: ${item.quantity}`
              });
            }
          }

          // Validate pricing consistency
          if (Math.abs(parseFloat(item.unit_price) - parseFloat(product.pricing.base_price)) > 0.01) {
            crossValidationRules.push({
              field: `items.unit_price`,
              warning: `Unit price for ${product.name} may be outdated`
            });
          }
        }
      }
    }

    if (collectionName === 'user_profiles') {
      // Check for duplicate email addresses
      const existingUser = await this.db.collection('user_profiles')
        .findOne({ 
          email: document.email,
          _id: { $ne: document._id }
        });

      if (existingUser) {
        crossValidationRules.push({
          field: 'email',
          error: 'Email address is already registered'
        });
      }

      // Check for duplicate usernames
      const existingUsername = await this.db.collection('user_profiles')
        .findOne({ 
          username: document.username,
          _id: { $ne: document._id }
        });

      if (existingUsername) {
        crossValidationRules.push({
          field: 'username',
          error: 'Username is already taken'
        });
      }
    }

    return {
      valid: crossValidationRules.filter(rule => rule.error).length === 0,
      errors: crossValidationRules.filter(rule => rule.error),
      warnings: crossValidationRules.filter(rule => rule.warning)
    };
  }

  async implementDataMigrationValidation() {
    console.log('Implementing data migration validation strategies...');

    const migrationStrategies = {
      // Gradual validation rollout
      gradualValidation: {
        phase1: { validationLevel: 'off' }, // No validation
        phase2: { validationLevel: 'moderate', validationAction: 'warn' }, // Warnings only
        phase3: { validationLevel: 'moderate', validationAction: 'error' }, // Moderate validation
        phase4: { validationLevel: 'strict', validationAction: 'error' } // Full validation
      },

      // Schema version migration
      schemaVersioning: {
        v1_to_v2: {
          transformationRules: {
            'old_field': 'new_field',
            'deprecated_structure': 'new_structure'
          },
          validationOverrides: {
            allowMissingFields: ['optional_new_field'],
            temporaryRules: {
              'legacy_format': { $exists: true }
            }
          }
        }
      },

      // Data quality improvement
      dataQualityEnforcement: {
        cleanupRules: [
          { field: 'email', action: 'trim_and_lowercase' },
          { field: 'phone', action: 'normalize_format' },
          { field: 'tags', action: 'remove_duplicates' }
        ],
        enrichmentRules: [
          { field: 'created_at', action: 'set_if_missing', value: new Date() },
          { field: 'schema_version', action: 'set_current_version' }
        ]
      }
    };

    return await this.deployMigrationStrategies(migrationStrategies);
  }
}

SQL-Style Schema Validation with QueryLeaf

QueryLeaf provides familiar SQL syntax for MongoDB schema validation and data integrity operations:

-- QueryLeaf advanced schema validation with SQL-familiar syntax

-- Create collections with comprehensive validation rules
CREATE COLLECTION user_profiles
WITH VALIDATION (
  -- Basic field requirements and types
  email VARCHAR(320) NOT NULL UNIQUE 
    PATTERN '^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$',
  username VARCHAR(30) NOT NULL UNIQUE 
    PATTERN '^[a-zA-Z0-9_-]{3,30}$',
  full_name VARCHAR(200) NOT NULL,
  profile_type ENUM('individual', 'business', 'organization', 'developer') NOT NULL,
  age INT CHECK (age >= 13 AND age <= 150),

  -- Complex nested object validation
  account_status OBJECT (
    status ENUM('active', 'inactive', 'suspended', 'pending_verification', 'closed') NOT NULL,
    last_updated DATETIME NOT NULL,
    reason VARCHAR(500),
    updated_by OBJECTID
  ) NOT NULL,

  -- Contact information with regional validation
  contact_info OBJECT (
    phone OBJECT (
      country_code VARCHAR(5) PATTERN '^\+[1-9][0-9]{0,3}$' NOT NULL,
      number VARCHAR(15) PATTERN '^[0-9]{7,15}$' NOT NULL,
      verified BOOLEAN NOT NULL,
      verified_at DATETIME
    ),
    address OBJECT (
      street VARCHAR(200),
      city VARCHAR(100) NOT NULL,
      state_province VARCHAR(100),
      postal_code VARCHAR(20),
      country CHAR(2) PATTERN '^[A-Z]{2}$' NOT NULL
    )
  ),

  -- Nested preferences with conditional validation
  preferences OBJECT (
    notifications OBJECT (
      email OBJECT (
        enabled BOOLEAN NOT NULL,
        frequency ENUM('immediate', 'daily', 'weekly', 'never'),
        categories ARRAY OF ENUM('security', 'marketing', 'product_updates', 'billing') UNIQUE
      ) NOT NULL,
      push OBJECT (
        enabled BOOLEAN NOT NULL,
        quiet_hours OBJECT (
          enabled BOOLEAN NOT NULL,
          start_time TIME PATTERN '^([01]?[0-9]|2[0-3]):[0-5][0-9]$',
          end_time TIME PATTERN '^([01]?[0-9]|2[0-3]):[0-5][0-9]$'
        )
      ) NOT NULL,
      sms OBJECT (
        enabled BOOLEAN NOT NULL,
        emergency_only BOOLEAN
      ) NOT NULL
    ),
    privacy OBJECT (
      profile_visibility ENUM('public', 'friends_only', 'private') NOT NULL,
      search_visibility BOOLEAN,
      data_processing_consent OBJECT (
        analytics BOOLEAN NOT NULL,
        marketing BOOLEAN NOT NULL,
        third_party_sharing BOOLEAN,
        given_at DATETIME NOT NULL,
        ip_address VARCHAR(45),
        user_agent TEXT
      ) NOT NULL
    ) NOT NULL
  ),

  -- Security settings
  security OBJECT (
    two_factor_enabled BOOLEAN,
    backup_codes ARRAY OF VARCHAR(8) PATTERN '^[A-Z0-9]{8}$' MAX_SIZE 10 UNIQUE,
    security_questions ARRAY OF OBJECT (
      question VARCHAR(200) NOT NULL,
      answer_hash VARCHAR(255) NOT NULL,
      created_at DATETIME
    ) MAX_SIZE 5
  ),

  -- Audit fields
  created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
  updated_at DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE,
  created_by OBJECTID,
  schema_version VARCHAR(10) PATTERN '^\d+\.\d+\.\d+$'

) WITH (
  validation_level = 'strict',
  validation_action = 'error',
  additional_properties = false
);

-- Product collection with category-specific conditional validation
CREATE COLLECTION products
WITH VALIDATION (
  name VARCHAR(500) NOT NULL,
  description TEXT MAX_LENGTH 5000,
  sku VARCHAR(20) PATTERN '^[A-Z0-9]{3,20}$' UNIQUE,

  -- Category with hierarchical structure
  category OBJECT (
    primary ENUM('electronics', 'clothing', 'home_garden', 'books', 'sports', 'automotive', 'health', 'toys') NOT NULL,
    secondary VARCHAR(100),
    path ARRAY OF VARCHAR(100) MIN_SIZE 1 MAX_SIZE 5 NOT NULL,
    tags ARRAY OF VARCHAR(50) PATTERN '^[a-z0-9_-]+$' MAX_SIZE 20 UNIQUE
  ) NOT NULL,

  -- Complex pricing structure
  pricing OBJECT (
    base_price DECIMAL(10,2) CHECK (base_price > 0) NOT NULL,
    currency CHAR(3) PATTERN '^[A-Z]{3}$' NOT NULL,
    pricing_model ENUM('fixed', 'tiered', 'subscription', 'auction', 'negotiable') NOT NULL,

    -- Conditional validation based on pricing model
    tier_pricing ARRAY OF OBJECT (
      min_quantity INT CHECK (min_quantity >= 1) NOT NULL,
      price_per_unit DECIMAL(10,2) CHECK (price_per_unit > 0) NOT NULL,
      description VARCHAR(200)
    ) -- Required when pricing_model = 'tiered'
    CHECK (
      (pricing_model != 'tiered') OR 
      (pricing_model = 'tiered' AND tier_pricing IS NOT NULL AND ARRAY_LENGTH(tier_pricing) > 0)
    ),

    subscription_options OBJECT (
      billing_cycles ARRAY OF ENUM('monthly', 'quarterly', 'annually', 'biennial') MIN_SIZE 1 NOT NULL,
      trial_period_days INT CHECK (trial_period_days >= 0 AND trial_period_days <= 365)
    ) -- Required when pricing_model = 'subscription'
    CHECK (
      (pricing_model != 'subscription') OR 
      (pricing_model = 'subscription' AND subscription_options IS NOT NULL)
    ),

    discounts ARRAY OF OBJECT (
      type ENUM('percentage', 'fixed_amount', 'buy_x_get_y') NOT NULL,
      value DECIMAL(8,2) CHECK (value >= 0) NOT NULL,
      min_purchase_amount DECIMAL(10,2) CHECK (min_purchase_amount >= 0),
      valid_from DATETIME NOT NULL,
      valid_until DATETIME NOT NULL,
      max_uses INT CHECK (max_uses >= 1),
      code VARCHAR(20) PATTERN '^[A-Z0-9]{4,20}$',
      CHECK (valid_until > valid_from)
    ) MAX_SIZE 10
  ) NOT NULL,

  -- Availability and inventory
  availability OBJECT (
    status ENUM('available', 'out_of_stock', 'discontinued', 'coming_soon', 'back_order') NOT NULL,
    stock_tracking OBJECT (
      enabled BOOLEAN NOT NULL,
      current_stock INT CHECK (current_stock >= 0), -- Required when enabled = true
      reserved_stock INT CHECK (reserved_stock >= 0),
      low_stock_threshold INT CHECK (low_stock_threshold >= 0),
      max_order_quantity INT CHECK (max_order_quantity >= 1),
      CHECK (
        (enabled = false) OR 
        (enabled = true AND current_stock IS NOT NULL)
      )
    ) NOT NULL
  ) NOT NULL,

  -- Category-specific specifications with conditional validation
  specifications OBJECT (
    -- Electronics-specific fields (required when category.primary = 'electronics')
    electronics OBJECT (
      brand VARCHAR(100) NOT NULL,
      model VARCHAR(100) NOT NULL,
      power_requirements OBJECT (
        voltage INT CHECK (voltage > 0),
        wattage INT CHECK (wattage > 0),
        frequency INT CHECK (frequency IN (50, 60))
      ),
      connectivity ARRAY OF ENUM('wifi', 'bluetooth', 'ethernet', 'usb', 'hdmi', 'aux', 'nfc')
    ) CHECK (
      (category.primary != 'electronics') OR 
      (category.primary = 'electronics' AND electronics IS NOT NULL)
    ),

    -- Clothing-specific fields (required when category.primary = 'clothing')
    clothing OBJECT (
      sizes ARRAY OF VARCHAR(10) MIN_SIZE 1 NOT NULL,
      colors ARRAY OF VARCHAR(50) MIN_SIZE 1 NOT NULL,
      materials ARRAY OF OBJECT (
        name VARCHAR(100) NOT NULL,
        percentage DECIMAL(5,2) CHECK (percentage > 0 AND percentage <= 100) NOT NULL
      ) NOT NULL,
      care_instructions ARRAY OF VARCHAR(200) MAX_SIZE 10
    ) CHECK (
      (category.primary != 'clothing') OR 
      (category.primary = 'clothing' AND clothing IS NOT NULL)
    ),

    -- Common specifications for all products
    dimensions OBJECT (
      length DECIMAL(8,2) CHECK (length > 0),
      width DECIMAL(8,2) CHECK (width > 0),
      height DECIMAL(8,2) CHECK (height > 0),
      weight DECIMAL(8,2) CHECK (weight > 0),
      unit ENUM('metric', 'imperial') NOT NULL
    ),

    warranty OBJECT (
      duration_months INT CHECK (duration_months >= 0 AND duration_months <= 600),
      type ENUM('manufacturer', 'store', 'extended', 'none'),
      coverage ARRAY OF ENUM('defects', 'wear_and_tear', 'accidental_damage', 'theft')
    )
  ),

  -- Quality and compliance
  quality_control OBJECT (
    certifications ARRAY OF OBJECT (
      name VARCHAR(200) NOT NULL,
      issuing_body VARCHAR(200) NOT NULL,
      certificate_number VARCHAR(100),
      valid_until DATETIME NOT NULL,
      document_url TEXT
    ),
    safety_warnings ARRAY OF OBJECT (
      type ENUM('choking_hazard', 'electrical', 'chemical', 'fire', 'sharp_edges', 'other') NOT NULL,
      description VARCHAR(500) NOT NULL,
      age_restriction INT CHECK (age_restriction >= 0 AND age_restriction <= 21)
    )
  ),

  -- Audit trail
  created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
  updated_at DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE,
  created_by OBJECTID,
  last_modified_by OBJECTID,
  schema_version VARCHAR(10) PATTERN '^\d+\.\d+\.\d+$'

) WITH (
  validation_level = 'strict',
  validation_action = 'error'
);

-- Order collection with complex business rule validation
CREATE COLLECTION orders
WITH VALIDATION (
  order_number VARCHAR(20) PATTERN '^ORD-[0-9]{8}-[A-Z]{3}$' UNIQUE,
  customer_id OBJECTID NOT NULL REFERENCES user_profiles(_id),

  -- Order items with item-level validation
  items ARRAY OF OBJECT (
    product_id OBJECTID NOT NULL REFERENCES products(_id),
    product_name VARCHAR(500),
    sku VARCHAR(20),
    quantity INT CHECK (quantity >= 1 AND quantity <= 1000) NOT NULL,
    unit_price DECIMAL(10,2) CHECK (unit_price >= 0) NOT NULL,
    total_price DECIMAL(10,2) CHECK (total_price >= 0) NOT NULL,

    -- Validate that total_price = quantity * unit_price
    CHECK (ABS(total_price - (quantity * unit_price)) < 0.01),

    discounts_applied ARRAY OF OBJECT (
      type VARCHAR(50) NOT NULL,
      amount DECIMAL(8,2) NOT NULL,
      code VARCHAR(20)
    ),
    customizations OBJECT
  ) MIN_SIZE 1 MAX_SIZE 100 NOT NULL,

  -- Order totals with cross-field validation
  totals OBJECT (
    subtotal DECIMAL(10,2) CHECK (subtotal >= 0) NOT NULL,
    tax_amount DECIMAL(10,2) CHECK (tax_amount >= 0) NOT NULL,
    shipping_cost DECIMAL(10,2) CHECK (shipping_cost >= 0) NOT NULL,
    discount_amount DECIMAL(10,2) CHECK (discount_amount >= 0),
    total_amount DECIMAL(10,2) CHECK (total_amount >= 0.01) NOT NULL,
    currency CHAR(3) PATTERN '^[A-Z]{3}$' NOT NULL,

    tax_breakdown ARRAY OF OBJECT (
      type VARCHAR(50) NOT NULL,
      rate DECIMAL(6,4) CHECK (rate >= 0 AND rate <= 1) NOT NULL,
      amount DECIMAL(10,2) CHECK (amount >= 0) NOT NULL
    ),

    -- Validate total calculation
    CHECK (
      ABS(total_amount - (subtotal + tax_amount + shipping_cost - COALESCE(discount_amount, 0))) < 0.01
    )
  ) NOT NULL,

  -- Order status with workflow validation
  status OBJECT (
    current ENUM('pending', 'confirmed', 'processing', 'shipped', 'delivered', 'cancelled', 'refunded') NOT NULL,
    history ARRAY OF OBJECT (
      status ENUM('pending', 'confirmed', 'processing', 'shipped', 'delivered', 'cancelled', 'refunded') NOT NULL,
      timestamp DATETIME NOT NULL,
      notes TEXT,
      updated_by OBJECTID
    ) MIN_SIZE 1 NOT NULL
  ) NOT NULL,

  -- Shipping information
  shipping OBJECT (
    method OBJECT (
      carrier VARCHAR(100) NOT NULL,
      service_type VARCHAR(100) NOT NULL,
      tracking_number VARCHAR(100),
      estimated_delivery DATETIME NOT NULL,
      actual_delivery DATETIME,
      CHECK (actual_delivery IS NULL OR actual_delivery >= estimated_delivery)
    ) NOT NULL,
    address OBJECT (
      recipient_name VARCHAR(200) NOT NULL,
      street_address VARCHAR(500) NOT NULL,
      city VARCHAR(100) NOT NULL,
      state_province VARCHAR(100),
      postal_code VARCHAR(20),
      country CHAR(2) PATTERN '^[A-Z]{2}$' NOT NULL,
      special_instructions TEXT
    ) NOT NULL
  ),

  -- Payment information with validation
  payment OBJECT (
    method ENUM('credit_card', 'debit_card', 'paypal', 'bank_transfer', 'digital_wallet', 'cryptocurrency', 'cash_on_delivery') NOT NULL,
    status ENUM('pending', 'authorized', 'captured', 'failed', 'refunded', 'partially_refunded') NOT NULL,
    transaction_id VARCHAR(200),
    authorization_code VARCHAR(100),
    payment_processor VARCHAR(100),
    processed_at DATETIME,
    failure_reason TEXT,

    refund_details ARRAY OF OBJECT (
      amount DECIMAL(10,2) CHECK (amount > 0) NOT NULL,
      reason TEXT NOT NULL,
      processed_at DATETIME NOT NULL,
      refund_id VARCHAR(100)
    ),

    -- Business rule: failed payments must result in cancelled orders
    -- (the unqualified status below is payment.status; status.current refers
    --  to the top-level order status object)
    CHECK (
      (status != 'failed') OR 
      (status = 'failed' AND status.current = 'cancelled')
    )
  ),

  -- Audit fields
  created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
  updated_at DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE,
  schema_version VARCHAR(10) PATTERN '^\d+\.\d+\.\d+$'

) WITH (
  validation_level = 'strict',
  validation_action = 'error'
);

-- Data validation analysis and reporting queries

-- Comprehensive validation status report
WITH validation_metrics AS (
  SELECT 
    collection_name,
    validation_level,
    validation_action,
    schema_version,

    -- Document count and validation statistics
    COUNT(*) as total_documents,
    COUNT(*) FILTER (WHERE validation_passed = true) as valid_documents,
    COUNT(*) FILTER (WHERE validation_passed = false) as invalid_documents,

    -- Calculate data quality score
    (COUNT(*) FILTER (WHERE validation_passed = true)::numeric / COUNT(*)) * 100 as data_quality_percent,

    -- Validation error analysis
    COUNT(DISTINCT validation_error_type) as unique_error_types,
    MODE() WITHIN GROUP (ORDER BY validation_error_type) as most_common_error,

    -- Recent validation trends
    COUNT(*) FILTER (WHERE validated_at >= CURRENT_TIMESTAMP - INTERVAL '24 hours') as validations_last_24h,
    COUNT(*) FILTER (WHERE validation_passed = false AND validated_at >= CURRENT_TIMESTAMP - INTERVAL '24 hours') as errors_last_24h

  FROM VALIDATION_RESULTS()
  GROUP BY collection_name, validation_level, validation_action, schema_version
),

validation_error_details AS (
  SELECT 
    collection_name,
    validation_error_type,
    validation_error_field,
    COUNT(*) as error_frequency,
    AVG(EXTRACT(EPOCH FROM (CURRENT_TIMESTAMP - first_occurred))) as avg_age_seconds,
    array_agg(
      json_build_object(
        'document_id', document_id,
        'error_message', validation_error_message,
        'occurred_at', occurred_at
      ) ORDER BY occurred_at DESC
    )[1:5] as recent_examples

  FROM VALIDATION_ERRORS()
  WHERE occurred_at >= CURRENT_TIMESTAMP - INTERVAL '7 days'
  GROUP BY collection_name, validation_error_type, validation_error_field
),

collection_health_assessment AS (
  SELECT 
    vm.collection_name,
    vm.total_documents,
    vm.data_quality_percent,
    vm.validations_last_24h,
    vm.errors_last_24h,

    -- Health status determination
    CASE 
      WHEN vm.data_quality_percent >= 99.5 THEN 'EXCELLENT'
      WHEN vm.data_quality_percent >= 95.0 THEN 'GOOD'
      WHEN vm.data_quality_percent >= 90.0 THEN 'FAIR'
      WHEN vm.data_quality_percent >= 80.0 THEN 'POOR'
      ELSE 'CRITICAL'
    END as health_status,

    -- Trending analysis
    CASE 
      WHEN vm.errors_last_24h = 0 THEN 'STABLE'
      WHEN vm.errors_last_24h <= vm.total_documents * 0.01 THEN 'MINOR_ISSUES'
      WHEN vm.errors_last_24h <= vm.total_documents * 0.05 THEN 'MODERATE_ISSUES'
      ELSE 'SIGNIFICANT_ISSUES'
    END as trend_status,

    -- Top error types
    array_agg(
      json_build_object(
        'error_type', ved.validation_error_type,
        'field', ved.validation_error_field,
        'frequency', ved.error_frequency,
        'avg_age_hours', ROUND(ved.avg_age_seconds / 3600.0, 1)
      ) ORDER BY ved.error_frequency DESC
    )[1:3] as top_errors

  FROM validation_metrics vm
  LEFT JOIN validation_error_details ved ON vm.collection_name = ved.collection_name
  GROUP BY vm.collection_name, vm.total_documents, vm.data_quality_percent, 
           vm.validations_last_24h, vm.errors_last_24h
)

SELECT 
  collection_name,
  total_documents,
  ROUND(data_quality_percent, 2) as data_quality_pct,
  health_status,
  trend_status,
  validations_last_24h,
  errors_last_24h,
  top_errors,

  -- Recommendations based on health status
  CASE health_status
    WHEN 'CRITICAL' THEN 'URGENT: Review validation rules and fix data quality issues immediately'
    WHEN 'POOR' THEN 'Review validation errors and implement data cleanup procedures'
    WHEN 'FAIR' THEN 'Monitor validation trends and address recurring error patterns'
    WHEN 'GOOD' THEN 'Continue monitoring and maintain current validation standards'
    ELSE 'Data quality is excellent - consider sharing best practices'
  END as recommendation,

  -- Priority level for remediation
  CASE 
    WHEN health_status IN ('CRITICAL', 'POOR') AND trend_status = 'SIGNIFICANT_ISSUES' THEN 'P0_CRITICAL'
    WHEN health_status = 'POOR' OR trend_status = 'SIGNIFICANT_ISSUES' THEN 'P1_HIGH'
    WHEN health_status = 'FAIR' AND trend_status = 'MODERATE_ISSUES' THEN 'P2_MEDIUM'
    WHEN trend_status = 'MINOR_ISSUES' THEN 'P3_LOW'
    ELSE 'P4_MONITORING'
  END as priority_level,

  CURRENT_TIMESTAMP as report_generated_at

FROM collection_health_assessment
ORDER BY 
  CASE health_status
    WHEN 'CRITICAL' THEN 1
    WHEN 'POOR' THEN 2 
    WHEN 'FAIR' THEN 3
    WHEN 'GOOD' THEN 4
    ELSE 5
  END,
  errors_last_24h DESC;

-- Advanced validation rule analysis
WITH validation_rule_effectiveness AS (
  SELECT 
    vr.collection_name,
    vr.rule_name,
    vr.rule_type,
    vr.field_path,

    -- Rule utilization metrics
    COUNT(DISTINCT ve.document_id) as documents_validated,
    COUNT(*) FILTER (WHERE ve.validation_passed = false) as violations_caught,
    COUNT(*) FILTER (WHERE ve.validation_passed = true) as validations_passed,

    -- Effectiveness calculation
    CASE 
      WHEN COUNT(*) > 0 THEN
        (COUNT(*) FILTER (WHERE ve.validation_passed = false)::numeric / COUNT(*)) * 100
      ELSE 0
    END as violation_rate_percent,

    -- Performance impact
    AVG(ve.validation_duration_ms) as avg_validation_time_ms,
    MAX(ve.validation_duration_ms) as max_validation_time_ms,

    -- Rule complexity assessment
    CASE vr.rule_type
      WHEN 'simple_type_check' THEN 1
      WHEN 'pattern_match' THEN 2
      WHEN 'range_check' THEN 2
      WHEN 'conditional_logic' THEN 4
      WHEN 'cross_field_validation' THEN 5
      WHEN 'cross_collection_validation' THEN 8
      ELSE 3
    END as complexity_score

  FROM VALIDATION_RULES() vr
  LEFT JOIN VALIDATION_EVENTS() ve ON (
    vr.collection_name = ve.collection_name AND 
    vr.rule_name = ve.rule_triggered
  )
  WHERE ve.validated_at >= CURRENT_TIMESTAMP - INTERVAL '30 days'
  GROUP BY vr.collection_name, vr.rule_name, vr.rule_type, vr.field_path
),

rule_optimization_analysis AS (
  SELECT 
    vre.*,

    -- Performance classification
    CASE 
      WHEN avg_validation_time_ms > 1000 THEN 'SLOW'
      WHEN avg_validation_time_ms > 100 THEN 'MODERATE'
      ELSE 'FAST'
    END as performance_class,

    -- Effectiveness classification  
    CASE 
      WHEN violation_rate_percent > 10 THEN 'HIGH_VIOLATION'
      WHEN violation_rate_percent > 5 THEN 'MODERATE_VIOLATION'
      WHEN violation_rate_percent > 0 THEN 'LOW_VIOLATION'
      ELSE 'NO_VIOLATIONS'
    END as effectiveness_class,

    -- Optimization recommendations
    CASE 
      WHEN avg_validation_time_ms > 1000 AND violation_rate_percent = 0 THEN 'Consider removing or simplifying unused rule'
      WHEN avg_validation_time_ms > 500 AND violation_rate_percent < 1 THEN 'Rule may be too strict or complex'
      WHEN violation_rate_percent > 15 THEN 'High violation rate indicates data quality issues'
      WHEN complexity_score > 6 AND avg_validation_time_ms > 100 THEN 'Complex rule impacting performance'
      ELSE 'Rule is operating within normal parameters'
    END as optimization_recommendation

  FROM validation_rule_effectiveness vre
)

SELECT 
  collection_name,
  rule_name,
  rule_type,
  field_path,
  documents_validated,
  violations_caught,
  ROUND(violation_rate_percent, 2) as violation_rate_pct,
  ROUND(avg_validation_time_ms, 2) as avg_validation_ms,
  complexity_score,
  performance_class,
  effectiveness_class,
  optimization_recommendation,

  -- Priority for optimization
  CASE 
    WHEN performance_class = 'SLOW' AND effectiveness_class = 'NO_VIOLATIONS' THEN 'HIGH_PRIORITY'
    WHEN performance_class = 'SLOW' AND violation_rate_percent < 1 THEN 'MEDIUM_PRIORITY'
    WHEN effectiveness_class = 'HIGH_VIOLATION' THEN 'DATA_QUALITY_ISSUE'
    ELSE 'LOW_PRIORITY'
  END as optimization_priority

FROM rule_optimization_analysis
WHERE documents_validated > 0
ORDER BY 
  CASE optimization_priority
    WHEN 'HIGH_PRIORITY' THEN 1
    WHEN 'DATA_QUALITY_ISSUE' THEN 2
    WHEN 'MEDIUM_PRIORITY' THEN 3
    ELSE 4
  END,
  avg_validation_time_ms DESC;

-- QueryLeaf provides comprehensive MongoDB schema validation capabilities:
-- 1. SQL-familiar validation syntax with complex nested object support
-- 2. Conditional validation rules based on document context and business logic
-- 3. Cross-field and cross-collection validation for referential integrity
-- 4. Advanced pattern matching and constraint enforcement
-- 5. Comprehensive validation reporting and error analysis
-- 6. Performance monitoring and rule optimization recommendations
-- 7. Schema versioning and migration support with gradual enforcement
-- 8. Compliance framework integration for regulatory requirements
-- 9. Real-time validation metrics and health monitoring
-- 10. Production-ready validation management with automated optimization

Best Practices for Production Schema Validation

Validation Strategy Design

Essential principles for effective MongoDB schema validation implementation:

  1. Incremental Implementation: Start with moderate validation levels and gradually increase strictness (see the rollout sketch after this list)
  2. Business Rule Alignment: Ensure validation rules reflect actual business requirements and constraints
  3. Performance Consideration: Balance comprehensive validation with acceptable performance overhead
  4. Error Handling: Implement user-friendly error messages and validation feedback systems
  5. Schema Evolution: Plan for schema changes and maintain backwards compatibility during transitions
  6. Monitoring and Alerting: Continuously monitor validation effectiveness and data quality metrics
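
The incremental rollout in point 1 can be driven entirely with collMod; a minimal sketch, assuming an existing orders collection, a connected db handle, and the order validation schema shown earlier (the phase timing belongs to your own migration plan):

// Phase 2 of a gradual rollout: validate inserts and updates but only warn,
// so existing workloads keep running while violations are logged and measured.
await db.command({
  collMod: 'orders',
  validator: orderValidationSchema,   // the $jsonSchema validator shown earlier
  validationLevel: 'moderate',        // don't block updates to pre-existing non-conforming documents
  validationAction: 'warn'            // log violations instead of rejecting writes
});

// Phase 4: once the violation logs are clean, enforce validation strictly.
await db.command({
  collMod: 'orders',
  validationLevel: 'strict',
  validationAction: 'error'
});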

Compliance and Data Integrity

Implement validation frameworks for regulatory and business compliance:

  1. Regulatory Compliance: Integrate validation rules for GDPR, PCI DSS, SOX, and industry-specific requirements (a consent-rule sketch follows this list)
  2. Data Quality Enforcement: Establish validation rules that maintain high data quality standards
  3. Audit Trail Maintenance: Ensure all validation events and changes are properly logged and tracked
  4. Cross-System Validation: Implement validation that works across multiple applications and data sources
  5. Documentation Standards: Maintain comprehensive documentation of validation rules and business logic
  6. Testing Procedures: Establish thorough testing procedures for validation rule changes and updates
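
A hedged sketch of a consent rule in the spirit of point 1, combining $jsonSchema with a query-operator clause so that marketing consent must carry a timestamp (the collection and field names here are illustrative, not part of the schemas above):

await db.createCollection('consent_records', {
  validator: {
    $and: [
      {
        $jsonSchema: {
          bsonType: 'object',
          required: ['user_id', 'marketing_consent'],
          properties: {
            user_id: { bsonType: 'objectId' },
            marketing_consent: { bsonType: 'bool' },
            consent_given_at: { bsonType: 'date' }
          }
        }
      },
      // Business rule: consenting to marketing requires a consent timestamp
      {
        $or: [
          { marketing_consent: false },
          { consent_given_at: { $type: 'date' } }
        ]
      }
    ]
  },
  validationLevel: 'strict',
  validationAction: 'error'
});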

Conclusion

MongoDB Schema Validation provides comprehensive document validation capabilities that ensure data integrity, enforce business rules, and maintain data quality at the database level. Unlike application-level validation that can be bypassed or inconsistently applied, MongoDB's validation system provides a reliable foundation for data governance and compliance in production environments.

Key MongoDB Schema Validation benefits include:

  • Database-Level Integrity: Enforcement of data validation rules regardless of application or data source
  • Flexible Rule Definition: Support for complex nested validation, conditional logic, and business rule enforcement
  • Real-Time Validation: Immediate validation feedback with detailed error reporting and user guidance
  • Schema Evolution: Support for gradual migration strategies and schema versioning for evolving applications
  • Performance Optimization: Efficient validation processing with minimal impact on application performance
  • Compliance Support: Built-in frameworks for regulatory compliance and data governance requirements

Whether you're building new applications with strict data requirements, migrating existing systems to enforce better data quality, or implementing compliance frameworks, MongoDB Schema Validation with QueryLeaf's familiar SQL interface provides the foundation for robust data integrity management.

QueryLeaf Integration: QueryLeaf automatically translates SQL-style validation rules into MongoDB's native JSON Schema validation, making advanced document validation accessible through familiar SQL constraint syntax. Complex nested object validation, conditional business rules, and cross-collection integrity checks are expressed with the same SQL patterns, enabling sophisticated data validation without requiring deep MongoDB expertise.

The combination of MongoDB's powerful validation capabilities with SQL-style rule definition makes it an ideal platform for applications requiring both flexible document storage and rigorous data integrity enforcement, ensuring your data remains consistent and reliable as your application scales and evolves.

MongoDB Performance Monitoring and Diagnostics: Advanced Optimization Techniques for Production Database Management

Production MongoDB deployments require comprehensive performance monitoring and optimization strategies to maintain optimal query response times, efficient resource utilization, and predictable application performance under varying workload conditions. Traditional database monitoring approaches often struggle with MongoDB's document-oriented structure, dynamic schema capabilities, and distributed architecture patterns, making specialized monitoring tools and techniques essential for effective performance management.

MongoDB provides sophisticated built-in performance monitoring capabilities including query profiling, execution statistics, index utilization analysis, and comprehensive metrics collection that enable deep insights into database performance characteristics. Unlike relational databases that rely primarily on table-level statistics, MongoDB's monitoring encompasses collection-level metrics, document-level analysis, aggregation pipeline performance, and shard-level resource utilization patterns.
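
For instance, the profiler and per-index usage counters mentioned above are available directly from any driver or shell session; a minimal sketch (the 100 ms threshold and the orders collection are illustrative):

// Record operations slower than 100 ms in the capped system.profile collection
await db.command({ profile: 1, slowms: 100 });

// Inspect the slowest recent operations captured by the profiler
const slowOps = await db.collection('system.profile')
  .find({ millis: { $gte: 100 } })
  .sort({ ts: -1 })
  .limit(10)
  .toArray();

// Per-index usage counters accumulated since the last server restart
const indexUsage = await db.collection('orders')
  .aggregate([{ $indexStats: {} }])
  .toArray();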

The Traditional Database Monitoring Challenge

Conventional database monitoring approaches often lack the granularity and flexibility needed for MongoDB environments:

-- Traditional PostgreSQL performance monitoring - limited insight into document-level operations

-- Basic query performance analysis with limited MongoDB-style insights
SELECT 
  schemaname,
  tablename,
  attname,
  n_distinct,
  correlation,
  most_common_vals,
  most_common_freqs,

  -- Basic statistics available in PostgreSQL
  pg_stat_get_live_tuples(c.oid) as live_tuples,
  pg_stat_get_dead_tuples(c.oid) as dead_tuples,
  pg_stat_get_tuples_inserted(c.oid) as tuples_inserted,
  pg_stat_get_tuples_updated(c.oid) as tuples_updated,
  pg_stat_get_tuples_deleted(c.oid) as tuples_deleted,

  -- Table scan statistics
  pg_stat_get_numscans(c.oid) as table_scans,
  pg_stat_get_tuples_returned(c.oid) as tuples_returned,
  pg_stat_get_tuples_fetched(c.oid) as tuples_fetched,

  -- Index usage statistics (limited compared to MongoDB index insights)
  pg_stat_get_blocks_fetched(c.oid) as blocks_fetched,
  pg_stat_get_blocks_hit(c.oid) as blocks_hit

FROM pg_stats ps
JOIN pg_class c ON ps.tablename = c.relname
JOIN pg_namespace n ON c.relnamespace = n.oid
WHERE ps.schemaname = 'public'
ORDER BY pg_stat_get_live_tuples(c.oid) DESC;

-- Query performance analysis with limited flexibility for document operations
WITH slow_queries AS (
  SELECT 
    query,
    calls,
    total_time,
    mean_time,
    stddev_time,
    min_time,
    max_time,
    rows,

    -- Limited insight into query complexity and document operations
    100.0 * shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0) AS hit_percent,

    -- Basic classification limited to SQL operations
    CASE 
      WHEN query LIKE 'SELECT%' THEN 'read'
      WHEN query LIKE 'INSERT%' THEN 'write'
      WHEN query LIKE 'UPDATE%' THEN 'update'
      WHEN query LIKE 'DELETE%' THEN 'delete'
      ELSE 'other'
    END as query_type

  FROM pg_stat_statements
  WHERE calls > 100  -- Focus on frequently executed queries
)
SELECT 
  query_type,
  COUNT(*) as query_count,
  SUM(calls) as total_calls,
  AVG(mean_time) as avg_response_time,
  SUM(total_time) as total_execution_time,
  AVG(hit_percent) as avg_cache_hit_rate,

  -- Limited aggregation capabilities compared to MongoDB aggregation insights
  percentile_cont(0.95) WITHIN GROUP (ORDER BY mean_time) as p95_response_time,
  percentile_cont(0.99) WITHIN GROUP (ORDER BY mean_time) as p99_response_time

FROM slow_queries
GROUP BY query_type
ORDER BY total_execution_time DESC;

-- Problems with traditional monitoring approaches:
-- 1. Limited understanding of document-level operations and nested field access
-- 2. No insight into aggregation pipeline performance and optimization
-- 3. Lack of collection-level and field-level usage statistics
-- 4. No support for analyzing dynamic schema evolution and performance impact
-- 5. Limited index utilization analysis for compound and sparse indexes
-- 6. No understanding of MongoDB-specific operations like upserts and bulk operations
-- 7. Inability to analyze shard key distribution and query routing efficiency
-- 8. No support for analyzing replica set read preference impact on performance
-- 9. Limited insight into connection pooling and driver-level optimization opportunities
-- 10. No understanding of MongoDB-specific caching behavior and working set analysis

-- Manual index analysis with limited insights into MongoDB index strategies
SELECT 
  schemaname,
  tablename,
  indexname,
  idx_tup_read,
  idx_tup_fetch,
  idx_blks_read,
  idx_blks_hit,

  -- Basic index efficiency calculation (limited compared to MongoDB index metrics)
  CASE 
    WHEN idx_tup_read > 0 THEN 
      ROUND(100.0 * idx_tup_fetch / idx_tup_read, 2)
    ELSE 0 
  END as index_efficiency_percent,

  -- Cache hit ratio (basic compared to MongoDB's comprehensive cache analysis)
  CASE 
    WHEN (idx_blks_read + idx_blks_hit) > 0 THEN
      ROUND(100.0 * idx_blks_hit / (idx_blks_read + idx_blks_hit), 2)
    ELSE 0
  END as cache_hit_percent

FROM pg_stat_user_indexes
ORDER BY idx_tup_read DESC;

-- Limitations of traditional approaches:
-- 1. No understanding of MongoDB's document structure impact on performance
-- 2. Limited aggregation pipeline analysis and optimization insights  
-- 3. No collection-level sharding and distribution analysis
-- 4. Lack of real-time profiling capabilities for individual operations
-- 5. No support for analyzing GridFS performance and large document handling
-- 6. Limited understanding of MongoDB's memory management and working set optimization
-- 7. No insight into oplog performance and replica set optimization
-- 8. Inability to analyze change streams and real-time operation performance
-- 9. Limited connection and driver optimization analysis
-- 10. No support for analyzing MongoDB Atlas-specific performance metrics

MongoDB provides comprehensive performance monitoring and optimization capabilities:

// MongoDB Advanced Performance Monitoring and Optimization System
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017');
const db = client.db('production_performance_monitoring');

// Comprehensive MongoDB Performance Monitoring and Diagnostics Manager
class AdvancedMongoPerformanceMonitor {
  constructor(db, config = {}) {
    this.db = db;
    this.adminDb = db.admin();
    this.collections = {
      performanceMetrics: db.collection('performance_metrics'),
      slowQueries: db.collection('slow_queries'),
      indexAnalysis: db.collection('index_analysis'),
      collectionStats: db.collection('collection_stats'),
      profilingData: db.collection('profiling_data'),
      optimizationRecommendations: db.collection('optimization_recommendations')
    };

    // Advanced monitoring configuration
    this.config = {
      profilingLevel: config.profilingLevel || 2, // Profile all operations
      slowOperationThreshold: config.slowOperationThreshold || 100, // 100ms
      samplingRate: config.samplingRate || 1.0, // Sample all operations
      metricsCollectionInterval: config.metricsCollectionInterval || 60000, // 1 minute
      indexAnalysisInterval: config.indexAnalysisInterval || 300000, // 5 minutes
      performanceReportInterval: config.performanceReportInterval || 900000, // 15 minutes

      // Advanced monitoring features
      enableOperationProfiling: config.enableOperationProfiling !== false,
      enableIndexAnalysis: config.enableIndexAnalysis !== false,
      enableCollectionStats: config.enableCollectionStats !== false,
      enableQueryOptimization: config.enableQueryOptimization !== false,
      enableRealTimeAlerts: config.enableRealTimeAlerts !== false,
      enablePerformanceBaseline: config.enablePerformanceBaseline !== false,

      // Alerting thresholds
      alertThresholds: {
        avgResponseTime: config.alertThresholds?.avgResponseTime || 500, // 500ms
        connectionCount: config.alertThresholds?.connectionCount || 1000,
        indexHitRatio: config.alertThresholds?.indexHitRatio || 0.95,
        replicationLag: config.alertThresholds?.replicationLag || 5000, // 5 seconds
        diskUtilization: config.alertThresholds?.diskUtilization || 0.8, // 80%
        memoryUtilization: config.alertThresholds?.memoryUtilization || 0.85 // 85%
      },

      // Optimization settings
      optimizationRules: {
        enableAutoIndexSuggestions: true,
        enableQueryRewriting: false,
        enableCollectionCompaction: false,
        enableShardKeyAnalysis: true
      }
    };

    // Performance metrics storage
    this.metrics = {
      operationCounts: new Map(),
      responseTimes: new Map(),
      indexUsage: new Map(),
      collectionMetrics: new Map()
    };

    // Initialize monitoring systems
    this.initializePerformanceMonitoring();
    this.setupRealTimeProfiler();
    this.startPerformanceCollection();
  }

  async initializePerformanceMonitoring() {
    console.log('Initializing comprehensive MongoDB performance monitoring...');

    try {
      // Enable database profiling with advanced configuration
      await this.enableAdvancedProfiling();

      // Setup performance metrics collection
      await this.setupMetricsCollection();

      // Initialize index analysis
      await this.initializeIndexAnalysis();

      // Setup collection statistics monitoring
      await this.setupCollectionStatsMonitoring();

      // Initialize performance baseline
      if (this.config.enablePerformanceBaseline) {
        await this.initializePerformanceBaseline();
      }

      console.log('Performance monitoring system initialized successfully');

    } catch (error) {
      console.error('Error initializing performance monitoring:', error);
      throw error;
    }
  }

  async enableAdvancedProfiling() {
    console.log('Enabling advanced database profiling...');

    try {
      // Enable profiling for all operations with detailed analysis
      const profilingResult = await this.db.command({
        profile: this.config.profilingLevel,
        slowms: this.config.slowOperationThreshold,
        sampleRate: this.config.samplingRate,

        // Advanced profiling options (the filter option requires MongoDB 4.4.2 or newer)
        filter: {
          // Profile operations based on specific criteria
          $or: [
            { ts: { $gte: new Date(Date.now() - 3600000) } }, // Last hour
            { millis: { $gte: this.config.slowOperationThreshold } }, // Slow operations
            { planSummary: { $regex: 'COLLSCAN' } }, // Collection scans
            { 'locks.Global.acquireCount.r': { $exists: true } } // Lock-intensive operations
          ]
        }
      });

      console.log('Database profiling enabled:', profilingResult);

      // Configure profiler collection size for optimal performance
      await this.configureProfilerCollection();

    } catch (error) {
      console.error('Error enabling profiling:', error);
      throw error;
    }
  }

  async configureProfilerCollection() {
    try {
      // Ensure profiler collection is appropriately sized
      const profilerCollStats = await this.db.collection('system.profile').stats();

      if (profilerCollStats.capped && profilerCollStats.maxSize < 100 * 1024 * 1024) {
        console.log('Recreating profiler collection with larger size...');

        // Profiling must be disabled before system.profile can be dropped and recreated
        await this.db.command({ profile: 0 });

        await this.db.collection('system.profile').drop();
        await this.db.createCollection('system.profile', {
          capped: true,
          size: 100 * 1024 * 1024, // 100MB
          max: 1000000 // 1M documents
        });

        // Restore the configured profiling level
        await this.db.command({
          profile: this.config.profilingLevel,
          slowms: this.config.slowOperationThreshold
        });
      }

    } catch (error) {
      console.warn('Could not configure profiler collection:', error.message);
    }
  }

  async collectComprehensivePerformanceMetrics() {
    console.log('Collecting comprehensive performance metrics...');

    try {
      const startTime = Date.now();

      // Collect server status metrics
      const serverStatus = await this.adminDb.command({ serverStatus: 1 });

      // Collect database statistics
      const dbStats = await this.db.stats();

      // Collect profiling data
      const profilingData = await this.analyzeProfilingData();

      // Collect index usage statistics
      const indexStats = await this.analyzeIndexUsage();

      // Collect collection-level metrics
      const collectionMetrics = await this.collectCollectionMetrics();

      // Collect operation metrics
      const operationMetrics = await this.analyzeOperationMetrics();

      // Collect connection metrics
      const connectionMetrics = this.extractConnectionMetrics(serverStatus);

      // Collect memory and resource metrics
      const resourceMetrics = this.extractResourceMetrics(serverStatus);

      // Collect replication metrics (if applicable)
      const replicationMetrics = await this.collectReplicationMetrics();

      // Collect sharding metrics (if applicable)  
      const shardingMetrics = await this.collectShardingMetrics();

      // Assemble comprehensive performance report
      const performanceReport = {
        timestamp: new Date(),
        collectionTime: Date.now() - startTime,

        // Core performance metrics
        serverStatus: {
          uptime: serverStatus.uptime,
          version: serverStatus.version,
          process: serverStatus.process,
          pid: serverStatus.pid,
          host: serverStatus.host
        },

        // Database-level metrics
        database: {
          collections: dbStats.collections,
          objects: dbStats.objects,
          avgObjSize: dbStats.avgObjSize,
          dataSize: dbStats.dataSize,
          storageSize: dbStats.storageSize,
          indexes: dbStats.indexes,
          indexSize: dbStats.indexSize,

          // Efficiency metrics
          dataToIndexRatio: dbStats.indexSize > 0 ? dbStats.dataSize / dbStats.indexSize : 0,
          storageEfficiency: dbStats.dataSize / dbStats.storageSize,
          avgDocumentSize: dbStats.avgObjSize
        },

        // Operation performance metrics
        operations: operationMetrics,

        // Query performance analysis
        queryPerformance: profilingData,

        // Index performance analysis
        indexPerformance: indexStats,

        // Collection-level metrics
        collections: collectionMetrics,

        // Connection and concurrency metrics
        connections: connectionMetrics,

        // Resource utilization metrics
        resources: resourceMetrics,

        // Replication metrics
        replication: replicationMetrics,

        // Sharding metrics (if applicable)
        sharding: shardingMetrics,

        // Performance analysis
        analysis: await this.generatePerformanceAnalysis({
          serverStatus,
          dbStats,
          profilingData,
          indexStats,
          collectionMetrics,
          operationMetrics,
          connectionMetrics,
          resourceMetrics
        }),

        // Optimization recommendations
        recommendations: await this.generateOptimizationRecommendations({
          profilingData,
          indexStats,
          collectionMetrics,
          operationMetrics
        })
      };

      // Store performance metrics
      await this.collections.performanceMetrics.insertOne(performanceReport);

      // Update real-time metrics
      this.updateRealTimeMetrics(performanceReport);

      // Check for performance alerts
      await this.checkPerformanceAlerts(performanceReport);

      return performanceReport;

    } catch (error) {
      console.error('Error collecting performance metrics:', error);
      throw error;
    }
  }

  async analyzeProfilingData(timeWindow = 300000) {
    console.log('Analyzing profiling data for query performance insights...');

    try {
      const cutoffTime = new Date(Date.now() - timeWindow);

      // Aggregate profiling data with comprehensive analysis
      const profilingAnalysis = await this.db.collection('system.profile').aggregate([
        {
          $match: {
            ts: { $gte: cutoffTime },
            ns: { $regex: `^${this.db.databaseName}\\.` } // Current database only (escaped dot matches literally)
          }
        },
        {
          $addFields: {
            // Categorize operations
            operationType: {
              $switch: {
                branches: [
                  { case: { $ne: ['$command.find', null] }, then: 'find' },
                  { case: { $ne: ['$command.aggregate', null] }, then: 'aggregate' },
                  { case: { $ne: ['$command.insert', null] }, then: 'insert' },
                  { case: { $ne: ['$command.update', null] }, then: 'update' },
                  { case: { $ne: ['$command.delete', null] }, then: 'delete' },
                  { case: { $ne: ['$command.count', null] }, then: 'count' },
                  { case: { $ne: ['$command.distinct', null] }, then: 'distinct' }
                ],
                default: 'other'
              }
            },

            // Analyze execution efficiency
            executionEfficiency: {
              $cond: {
                if: { $and: [{ $gt: ['$docsExamined', 0] }, { $gt: ['$nreturned', 0] }] },
                then: { $divide: ['$nreturned', '$docsExamined'] },
                else: 0
              }
            },

            // Categorize response times
            responseTimeCategory: {
              $switch: {
                branches: [
                  { case: { $lt: ['$millis', 10] }, then: 'very_fast' },
                  { case: { $lt: ['$millis', 100] }, then: 'fast' },
                  { case: { $lt: ['$millis', 500] }, then: 'moderate' },
                  { case: { $lt: ['$millis', 2000] }, then: 'slow' }
                ],
                default: 'very_slow'
              }
            },

            // Index usage analysis
            indexUsageType: {
              $cond: {
                if: { $regexMatch: { input: { $ifNull: ['$planSummary', ''] }, regex: 'IXSCAN' } },
                then: 'index_scan',
                else: {
                  $cond: {
                    if: { $regexMatch: { input: { $ifNull: ['$planSummary', ''] }, regex: 'COLLSCAN' } },
                    then: 'collection_scan',
                    else: 'other'
                  }
                }
              }
            }
          }
        },
        {
          $group: {
            _id: {
              collection: { $arrayElemAt: [{ $split: ['$ns', '.'] }, -1] },
              operationType: '$operationType',
              indexUsageType: '$indexUsageType'
            },

            // Performance statistics
            totalOperations: { $sum: 1 },
            avgResponseTime: { $avg: '$millis' },
            minResponseTime: { $min: '$millis' },
            maxResponseTime: { $max: '$millis' },
            // $percentile (MongoDB 7.0+) requires an explicit method
            p95ResponseTime: { $percentile: { input: '$millis', p: [0.95], method: 'approximate' } },
            p99ResponseTime: { $percentile: { input: '$millis', p: [0.99], method: 'approximate' } },

            // Document examination efficiency
            totalDocsExamined: { $sum: { $ifNull: ['$docsExamined', 0] } },
            totalDocsReturned: { $sum: { $ifNull: ['$nreturned', 0] } },
            avgExecutionEfficiency: { $avg: '$executionEfficiency' },

            // Response time distribution
            veryFastOps: { $sum: { $cond: [{ $eq: ['$responseTimeCategory', 'very_fast'] }, 1, 0] } },
            fastOps: { $sum: { $cond: [{ $eq: ['$responseTimeCategory', 'fast'] }, 1, 0] } },
            moderateOps: { $sum: { $cond: [{ $eq: ['$responseTimeCategory', 'moderate'] }, 1, 0] } },
            slowOps: { $sum: { $cond: [{ $eq: ['$responseTimeCategory', 'slow'] }, 1, 0] } },
            verySlowOps: { $sum: { $cond: [{ $eq: ['$responseTimeCategory', 'very_slow'] }, 1, 0] } },

            // Sample queries for analysis
            sampleQueries: { $push: { 
              command: '$command',
              millis: '$millis',
              planSummary: '$planSummary',
              ts: '$ts'
            } }
          }
        },
        {
          $addFields: {
            // Calculate efficiency metrics
            overallEfficiency: {
              $cond: {
                if: { $gt: ['$totalDocsExamined', 0] },
                then: { $divide: ['$totalDocsReturned', '$totalDocsExamined'] },
                else: 1
              }
            },

            // Calculate performance score
            performanceScore: {
              $multiply: [
                // Response time component (lower is better)
                { $subtract: [1, { $min: [{ $divide: ['$avgResponseTime', 2000] }, 1] }] },
                // Efficiency component (higher is better)
                { $multiply: ['$avgExecutionEfficiency', 100] }
              ]
            }
          }
        },
        {
          // Performance classification runs in a separate stage because $addFields
          // cannot reference a field computed earlier in the same stage
          $addFields: {
            performanceClass: {
              $switch: {
                branches: [
                  { case: { $gte: ['$performanceScore', 80] }, then: 'excellent' },
                  { case: { $gte: ['$performanceScore', 60] }, then: 'good' },
                  { case: { $gte: ['$performanceScore', 40] }, then: 'fair' },
                  { case: { $gte: ['$performanceScore', 20] }, then: 'poor' }
                ],
                default: 'critical'
              }
            }
          }
        },
        {
          $project: {
            collection: '$_id.collection',
            operationType: '$_id.operationType',
            indexUsageType: '$_id.indexUsageType',

            // Core metrics
            totalOperations: 1,
            avgResponseTime: { $round: ['$avgResponseTime', 2] },
            minResponseTime: 1,
            maxResponseTime: 1,
            p95ResponseTime: { $round: [{ $arrayElemAt: ['$p95ResponseTime', 0] }, 2] },
            p99ResponseTime: { $round: [{ $arrayElemAt: ['$p99ResponseTime', 0] }, 2] },

            // Efficiency metrics
            totalDocsExamined: 1,
            totalDocsReturned: 1,
            overallEfficiency: { $round: ['$overallEfficiency', 4] },
            avgExecutionEfficiency: { $round: ['$avgExecutionEfficiency', 4] },

            // Performance distribution
            responseTimeDistribution: {
              veryFast: '$veryFastOps',
              fast: '$fastOps',
              moderate: '$moderateOps',
              slow: '$slowOps',
              verySlow: '$verySlowOps'
            },

            // Performance scoring
            performanceScore: { $round: ['$performanceScore', 2] },
            performanceClass: 1,

            // Sample queries (limit to 3 most recent)
            sampleQueries: { $slice: [{ $sortArray: { input: '$sampleQueries', sortBy: { ts: -1 } } }, 3] }
          }
        },
        { $sort: { avgResponseTime: -1 } }
      ]).toArray();

      return {
        analysisTimeWindow: timeWindow,
        totalProfiledOperations: profilingAnalysis.reduce((sum, item) => sum + item.totalOperations, 0),
        collections: profilingAnalysis,

        // Summary statistics
        summary: {
          avgResponseTimeOverall: profilingAnalysis.reduce((sum, item) => sum + (item.avgResponseTime * item.totalOperations), 0) / 
                                 Math.max(profilingAnalysis.reduce((sum, item) => sum + item.totalOperations, 0), 1),

          slowOperationsCount: profilingAnalysis.reduce((sum, item) => sum + item.responseTimeDistribution.slow + item.responseTimeDistribution.verySlow, 0),

          collectionScansCount: profilingAnalysis.filter(item => item.indexUsageType === 'collection_scan')
                                               .reduce((sum, item) => sum + item.totalOperations, 0),

          inefficientOperationsCount: profilingAnalysis.filter(item => item.overallEfficiency < 0.1)
                                                      .reduce((sum, item) => sum + item.totalOperations, 0)
        }
      };

    } catch (error) {
      console.error('Error analyzing profiling data:', error);
      return { error: error.message, collections: [] };
    }
  }

  async analyzeIndexUsage() {
    console.log('Analyzing index usage and efficiency...');

    try {
      const collections = await this.db.listCollections().toArray();
      const indexAnalysis = [];

      for (const collInfo of collections) {
        const collection = this.db.collection(collInfo.name);

        try {
          // Get index statistics
          const indexStats = await collection.aggregate([
            { $indexStats: {} }
          ]).toArray();

          // Get collection statistics for context
          const collStats = await collection.stats();

          // Analyze each index ($indexStats reports the index definition under
          // "spec"; per-index sizes come from the collection stats indexSizes map)
          for (const index of indexStats) {
            const indexKey = (index.spec && index.spec.key) || index.key || {};
            const indexSize = (collStats.indexSizes && collStats.indexSizes[index.name]) || 0;

            const indexAnalysisItem = {
              collection: collInfo.name,
              indexName: index.name,
              indexSpec: index.spec,

              // Usage statistics
              accesses: {
                ops: index.accesses?.ops || 0,
                since: index.accesses?.since || new Date()
              },

              // Index characteristics
              indexSize: indexSize,
              isUnique: !!(index.spec && index.spec.unique),
              isSparse: !!(index.spec && index.spec.sparse),
              isPartial: !!(index.spec && index.spec.partialFilterExpression),
              isCompound: Object.keys(indexKey).length > 1,

              // Calculate index efficiency metrics
              collectionDocuments: collStats.count,
              collectionSize: collStats.size,
              indexToCollectionRatio: collStats.size > 0 ? indexSize / collStats.size : 0,

              // Usage analysis
              usageCategory: this.categorizeIndexUsage(index.accesses?.ops || 0, collStats.count),

              // Performance metrics
              avgDocumentSize: collStats.avgObjSize || 0,
              indexSelectivity: this.estimateIndexSelectivity(indexKey, collStats.count)
            };

            indexAnalysis.push(indexAnalysisItem);
          }

        } catch (collError) {
          console.warn(`Error analyzing indexes for collection ${collInfo.name}:`, collError.message);
        }
      }

      // Generate index usage report
      return {
        totalIndexes: indexAnalysis.length,
        indexes: indexAnalysis,

        // Index usage summary
        usageSummary: {
          highUsage: indexAnalysis.filter(idx => idx.usageCategory === 'high').length,
          mediumUsage: indexAnalysis.filter(idx => idx.usageCategory === 'medium').length,
          lowUsage: indexAnalysis.filter(idx => idx.usageCategory === 'low').length,
          unused: indexAnalysis.filter(idx => idx.usageCategory === 'unused').length
        },

        // Index type distribution
        typeDistribution: {
          simple: indexAnalysis.filter(idx => !idx.isCompound).length,
          compound: indexAnalysis.filter(idx => idx.isCompound).length,
          unique: indexAnalysis.filter(idx => idx.isUnique).length,
          sparse: indexAnalysis.filter(idx => idx.isSparse).length,
          partial: indexAnalysis.filter(idx => idx.isPartial).length
        },

        // Performance insights
        performanceInsights: {
          totalIndexSize: indexAnalysis.reduce((sum, idx) => sum + idx.indexSize, 0),
          avgIndexToCollectionRatio: indexAnalysis.reduce((sum, idx) => sum + idx.indexToCollectionRatio, 0) / indexAnalysis.length,
          potentiallyRedundantIndexes: indexAnalysis.filter(idx => idx.usageCategory === 'unused' && idx.indexName !== '_id_'),
          oversizedIndexes: indexAnalysis.filter(idx => idx.indexToCollectionRatio > 0.5)
        }
      };

    } catch (error) {
      console.error('Error analyzing index usage:', error);
      return { error: error.message, indexes: [] };
    }
  }

  categorizeIndexUsage(accessCount, collectionDocuments) {
    if (accessCount === 0) return 'unused';
    if (accessCount < collectionDocuments * 0.01) return 'low';
    if (accessCount < collectionDocuments * 0.1) return 'medium';
    return 'high';
  }

  estimateIndexSelectivity(indexSpec, collectionDocuments) {
    // Simple estimation - in practice, would need sampling
    if (!indexSpec || collectionDocuments === 0) return 1;

    // Compound indexes generally more selective
    if (Object.keys(indexSpec).length > 1) return 0.1;

    // Simple heuristic based on field types
    return 0.5; // Default moderate selectivity
  }

  async collectCollectionMetrics() {
    console.log('Collecting detailed collection-level metrics...');

    try {
      const collections = await this.db.listCollections().toArray();
      const collectionMetrics = [];

      for (const collInfo of collections) {
        try {
          const collection = this.db.collection(collInfo.name);
          const stats = await collection.stats();

          // Calculate additional metrics
          const avgDocSize = stats.avgObjSize || 0;
          const storageEfficiency = stats.size > 0 ? stats.size / stats.storageSize : 0;
          const indexOverhead = stats.size > 0 ? stats.totalIndexSize / stats.size : 0;

          const collectionMetric = {
            name: collInfo.name,
            type: collInfo.type,

            // Core statistics
            documentCount: stats.count,
            dataSize: stats.size,
            storageSize: stats.storageSize,
            avgDocumentSize: avgDocSize,

            // Index statistics
            indexCount: stats.nindexes,
            totalIndexSize: stats.totalIndexSize,

            // Efficiency metrics
            storageEfficiency: storageEfficiency,
            indexOverhead: indexOverhead,
            fragmentationRatio: stats.storageSize > 0 ? 1 - (stats.size / stats.storageSize) : 0,

            // Performance characteristics
            performanceCategory: this.categorizeCollectionPerformance({
              documentCount: stats.count,
              avgDocumentSize: avgDocSize,
              indexOverhead: indexOverhead,
              storageEfficiency: storageEfficiency
            }),

            // Optimization opportunities
            optimizationFlags: {
              highFragmentation: (1 - storageEfficiency) > 0.3,
              excessiveIndexing: indexOverhead > 1.0,
              largeDocs: avgDocSize > 16384, // 16KB
              noIndexes: stats.nindexes <= 1 // Only _id index
            },

            timestamp: new Date()
          };

          collectionMetrics.push(collectionMetric);

        } catch (collError) {
          console.warn(`Error collecting stats for collection ${collInfo.name}:`, collError.message);
        }
      }

      return {
        collections: collectionMetrics,
        summary: {
          totalCollections: collectionMetrics.length,
          totalDocuments: collectionMetrics.reduce((sum, c) => sum + c.documentCount, 0),
          totalDataSize: collectionMetrics.reduce((sum, c) => sum + c.dataSize, 0),
          totalStorageSize: collectionMetrics.reduce((sum, c) => sum + c.storageSize, 0),
          totalIndexSize: collectionMetrics.reduce((sum, c) => sum + c.totalIndexSize, 0),
          avgStorageEfficiency: collectionMetrics.reduce((sum, c) => sum + c.storageEfficiency, 0) / collectionMetrics.length
        }
      };

    } catch (error) {
      console.error('Error collecting collection metrics:', error);
      return { error: error.message, collections: [] };
    }
  }

  categorizeCollectionPerformance({ documentCount, avgDocumentSize, indexOverhead, storageEfficiency }) {
    let score = 0;

    // Document count efficiency
    if (documentCount < 10000) score += 10;
    else if (documentCount < 1000000) score += 5;

    // Document size efficiency
    if (avgDocumentSize < 1024) score += 10; // < 1KB
    else if (avgDocumentSize < 16384) score += 5; // < 16KB

    // Index efficiency
    if (indexOverhead < 0.2) score += 10;
    else if (indexOverhead < 0.5) score += 5;

    // Storage efficiency
    if (storageEfficiency > 0.8) score += 10;
    else if (storageEfficiency > 0.6) score += 5;

    if (score >= 30) return 'excellent';
    if (score >= 20) return 'good';
    if (score >= 10) return 'fair';
    return 'poor';
  }

  async generateOptimizationRecommendations(performanceData) {
    console.log('Generating performance optimization recommendations...');

    const recommendations = [];

    try {
      // Analyze profiling data for query optimization
      if (performanceData.profilingData?.collections) {
        for (const collection of performanceData.profilingData.collections) {
          // Recommend indexes for collection scans
          if (collection.indexUsageType === 'collection_scan' && collection.totalOperations > 100) {
            recommendations.push({
              type: 'index_recommendation',
              priority: 'high',
              collection: collection.collection,
              title: 'Add index to eliminate collection scans',
              description: `Collection "${collection.collection}" has ${collection.totalOperations} collection scans with average response time of ${collection.avgResponseTime}ms`,
              recommendation: `Consider adding an index on frequently queried fields for ${collection.operationType} operations`,
              impact: 'high',
              effort: 'medium',
              estimatedImprovement: '60-90% response time reduction'
            });
          }

          // Recommend query optimization for slow operations
          if (collection.avgResponseTime > 1000) {
            recommendations.push({
              type: 'query_optimization',
              priority: 'high',
              collection: collection.collection,
              title: 'Optimize slow queries',
              description: `Queries on "${collection.collection}" average ${collection.avgResponseTime}ms response time`,
              recommendation: 'Review query patterns and consider compound indexes or query restructuring',
              impact: 'high',
              effort: 'medium',
              estimatedImprovement: '40-70% response time reduction'
            });
          }

          // Recommend efficiency improvements
          if (collection.overallEfficiency < 0.1) {
            recommendations.push({
              type: 'efficiency_improvement',
              priority: 'medium',
              collection: collection.collection,
              title: 'Improve query efficiency',
              description: `Queries examine ${collection.totalDocsExamined} documents but return only ${collection.totalDocsReturned} (${Math.round(collection.overallEfficiency * 100)}% efficiency)`,
              recommendation: 'Add more selective indexes or modify query patterns to reduce document examination',
              impact: 'medium',
              effort: 'medium',
              estimatedImprovement: '30-50% efficiency improvement'
            });
          }
        }
      }

      // Analyze index usage for recommendations
      if (performanceData.indexStats?.indexes) {
        for (const index of performanceData.indexStats.indexes) {
          // Recommend removing unused indexes
          if (index.usageCategory === 'unused' && index.indexName !== '_id_') {
            recommendations.push({
              type: 'index_removal',
              priority: 'low',
              collection: index.collection,
              title: 'Remove unused index',
              description: `Index "${index.indexName}" on collection "${index.collection}" is unused`,
              recommendation: 'Consider removing this index to reduce storage overhead and improve write performance',
              impact: 'low',
              effort: 'low',
              estimatedImprovement: 'Reduced storage usage and faster writes'
            });
          }

          // Recommend index optimization for oversized indexes
          if (index.indexToCollectionRatio > 0.5) {
            recommendations.push({
              type: 'index_optimization',
              priority: 'medium',
              collection: index.collection,
              title: 'Optimize oversized index',
              description: `Index "${index.indexName}" size is ${Math.round(index.indexToCollectionRatio * 100)}% of collection size`,
              recommendation: 'Review index design and consider using sparse or partial indexes',
              impact: 'medium',
              effort: 'medium',
              estimatedImprovement: '20-40% storage reduction'
            });
          }
        }
      }

      // Analyze collection metrics for recommendations
      if (performanceData.collectionMetrics?.collections) {
        for (const collection of performanceData.collectionMetrics.collections) {
          // Recommend addressing fragmentation
          if (collection.optimizationFlags.highFragmentation) {
            recommendations.push({
              type: 'storage_optimization',
              priority: 'medium',
              collection: collection.name,
              title: 'Address storage fragmentation',
              description: `Collection "${collection.name}" has ${Math.round(collection.fragmentationRatio * 100)}% fragmentation`,
              recommendation: 'Consider running compact command or rebuilding indexes during maintenance window',
              impact: 'medium',
              effort: 'high',
              estimatedImprovement: '15-30% storage efficiency improvement'
            });
          }

          // Recommend index strategy for collections with no custom indexes
          if (collection.optimizationFlags.noIndexes && collection.documentCount > 1000) {
            recommendations.push({
              type: 'index_strategy',
              priority: 'medium',
              collection: collection.name,
              title: 'Implement indexing strategy',
              description: `Collection "${collection.name}" has ${collection.documentCount} documents but no custom indexes`,
              recommendation: 'Analyze query patterns and add appropriate indexes for common queries',
              impact: 'high',
              effort: 'medium',
              estimatedImprovement: '50-80% query performance improvement'
            });
          }
        }
      }

      // Sort recommendations by priority and impact
      recommendations.sort((a, b) => {
        const priorityOrder = { high: 3, medium: 2, low: 1 };
        const impactOrder = { high: 3, medium: 2, low: 1 };

        const priorityDiff = priorityOrder[b.priority] - priorityOrder[a.priority];
        if (priorityDiff !== 0) return priorityDiff;

        return impactOrder[b.impact] - impactOrder[a.impact];
      });

      return {
        totalRecommendations: recommendations.length,
        recommendations: recommendations,

        // Summary by type
        summaryByType: {
          indexRecommendations: recommendations.filter(r => r.type.includes('index')).length,
          queryOptimizations: recommendations.filter(r => r.type === 'query_optimization').length,
          storageOptimizations: recommendations.filter(r => r.type === 'storage_optimization').length,
          efficiencyImprovements: recommendations.filter(r => r.type === 'efficiency_improvement').length
        },

        // Priority distribution
        priorityDistribution: {
          high: recommendations.filter(r => r.priority === 'high').length,
          medium: recommendations.filter(r => r.priority === 'medium').length,
          low: recommendations.filter(r => r.priority === 'low').length
        },

        generatedAt: new Date()
      };

    } catch (error) {
      console.error('Error generating optimization recommendations:', error);
      return { error: error.message, recommendations: [] };
    }
  }

  async generatePerformanceReport() {
    console.log('Generating comprehensive performance report...');

    try {
      // Collect all performance metrics
      const performanceData = await this.collectComprehensivePerformanceMetrics();

      // Generate executive summary
      const executiveSummary = this.generateExecutiveSummary(performanceData);

      // Create comprehensive report
      const performanceReport = {
        reportId: require('crypto').randomUUID(),
        generatedAt: new Date(),
        reportPeriod: {
          start: new Date(Date.now() - 3600000), // Last hour
          end: new Date()
        },

        // Executive summary
        executiveSummary: executiveSummary,

        // Detailed performance data
        performanceData: performanceData,

        // Key performance indicators
        kpis: {
          avgResponseTime: performanceData.queryPerformance?.summary?.avgResponseTimeOverall || 0,
          slowQueriesCount: performanceData.queryPerformance?.summary?.slowOperationsCount || 0,
          collectionScansCount: performanceData.queryPerformance?.summary?.collectionScansCount || 0,
          indexEfficiency: this.calculateOverallIndexEfficiency(performanceData.indexPerformance),
          storageEfficiency: performanceData.collections?.summary?.avgStorageEfficiency || 0,
          connectionUtilization: performanceData.connections?.utilizationPercent || 0
        },

        // Performance trends (if baseline available)
        trends: await this.calculatePerformanceTrends(),

        // Optimization recommendations
        recommendations: performanceData.recommendations,

        // Action items
        actionItems: this.generateActionItems(performanceData.recommendations),

        // Health score
        overallHealthScore: this.calculateOverallHealthScore(performanceData)
      };

      // Store report
      await this.collections.performanceMetrics.insertOne(performanceReport);

      return performanceReport;

    } catch (error) {
      console.error('Error generating performance report:', error);
      throw error;
    }
  }

  generateExecutiveSummary(performanceData) {
    const issues = [];
    const highlights = [];

    // Identify key issues
    if (performanceData.queryPerformance?.summary?.avgResponseTimeOverall > 500) {
      issues.push(`Average query response time is ${Math.round(performanceData.queryPerformance.summary.avgResponseTimeOverall)}ms (target: <100ms)`);
    }

    if (performanceData.queryPerformance?.summary?.collectionScansCount > 0) {
      issues.push(`${performanceData.queryPerformance.summary.collectionScansCount} queries are performing collection scans`);
    }

    if (performanceData.collections?.summary?.avgStorageEfficiency < 0.7) {
      issues.push(`Storage efficiency is ${Math.round(performanceData.collections.summary.avgStorageEfficiency * 100)}% (target: >80%)`);
    }

    // Identify highlights
    if (performanceData.queryPerformance?.summary?.avgResponseTimeOverall < 100) {
      highlights.push('Query performance is excellent with average response time under 100ms');
    }

    if (performanceData.indexPerformance?.usageSummary?.unused < 2) {
      highlights.push('Index usage is well optimized with minimal unused indexes');
    }

    return {
      status: issues.length === 0 ? 'healthy' : issues.length < 3 ? 'warning' : 'critical',
      keyIssues: issues,
      highlights: highlights,
      recommendationsCount: performanceData.recommendations?.totalRecommendations || 0,
      criticalRecommendations: performanceData.recommendations?.priorityDistribution?.high || 0
    };
  }

  calculateOverallIndexEfficiency(indexPerformance) {
    if (!indexPerformance?.indexes || indexPerformance.indexes.length === 0) return 0;

    const usedIndexes = indexPerformance.indexes.filter(idx => idx.usageCategory !== 'unused').length;
    return usedIndexes / indexPerformance.indexes.length;
  }

  generateActionItems(recommendations) {
    if (!recommendations?.recommendations) return [];

    return recommendations.recommendations
      .filter(rec => rec.priority === 'high')
      .slice(0, 5) // Top 5 high-priority items
      .map(rec => ({
        title: rec.title,
        collection: rec.collection,
        action: rec.recommendation,
        estimatedEffort: rec.effort,
        expectedImpact: rec.estimatedImprovement
      }));
  }

  calculateOverallHealthScore(performanceData) {
    let score = 100;

    // Query performance impact
    const avgResponseTime = performanceData.queryPerformance?.summary?.avgResponseTimeOverall || 0;
    if (avgResponseTime > 1000) score -= 30;
    else if (avgResponseTime > 500) score -= 20;
    else if (avgResponseTime > 100) score -= 10;

    // Collection scans impact
    const collectionScans = performanceData.queryPerformance?.summary?.collectionScansCount || 0;
    if (collectionScans > 100) score -= 25;
    else if (collectionScans > 10) score -= 15;
    else if (collectionScans > 0) score -= 5;

    // Storage efficiency impact
    const storageEfficiency = performanceData.collections?.summary?.avgStorageEfficiency || 1;
    if (storageEfficiency < 0.5) score -= 20;
    else if (storageEfficiency < 0.7) score -= 10;

    // Index efficiency impact
    const indexEfficiency = this.calculateOverallIndexEfficiency(performanceData.indexPerformance);
    if (indexEfficiency < 0.7) score -= 15;
    else if (indexEfficiency < 0.9) score -= 5;

    return Math.max(0, score);
  }

  // Additional helper methods for comprehensive monitoring

  extractConnectionMetrics(serverStatus) {
    const connections = serverStatus.connections || {};
    const network = serverStatus.network || {};

    return {
      current: connections.current || 0,
      available: connections.available || 0,
      totalCreated: connections.totalCreated || 0,
      utilizationPercent: connections.available > 0 ? 
        (connections.current / (connections.current + connections.available)) * 100 : 0,

      // Network metrics
      bytesIn: network.bytesIn || 0,
      bytesOut: network.bytesOut || 0,
      numRequests: network.numRequests || 0
    };
  }

  extractResourceMetrics(serverStatus) {
    const mem = serverStatus.mem || {};
    const extra_info = serverStatus.extra_info || {};

    return {
      // Memory usage
      residentMemoryMB: mem.resident || 0,
      virtualMemoryMB: mem.virtual || 0,
      mappedMemoryMB: mem.mapped || 0,

      // System metrics
      pageFaults: extra_info.page_faults || 0,
      heapUsageMB: mem.heap_usage_bytes ? mem.heap_usage_bytes / (1024 * 1024) : 0,

      // CPU and system load would require additional system commands
      cpuUsagePercent: 0, // Would need external monitoring
      diskIOPS: 0 // Would need external monitoring
    };
  }

  async collectReplicationMetrics() {
    try {
      const replSetStatus = await this.adminDb.command({ replSetGetStatus: 1 });

      if (!replSetStatus.ok) {
        return { replicated: false };
      }

      const primary = replSetStatus.members.find(m => m.state === 1);
      const secondaries = replSetStatus.members.filter(m => m.state === 2);

      return {
        replicated: true,
        setName: replSetStatus.set,
        primary: primary ? {
          name: primary.name,
          health: primary.health,
          uptime: primary.uptime
        } : null,
        secondaries: secondaries.map(s => ({
          name: s.name,
          health: s.health,
          lag: primary && s.optimeDate ? primary.optimeDate - s.optimeDate : 0,
          uptime: s.uptime
        })),
        totalMembers: replSetStatus.members.length
      };
    } catch (error) {
      return { replicated: false, error: error.message };
    }
  }

  async collectShardingMetrics() {
    try {
      // isdbgrid succeeds only against a mongos router; on a plain mongod it
      // throws "no such command", which the catch below treats as not sharded
      const shardingStatus = await this.adminDb.command({ isdbgrid: 1 });

      if (!shardingStatus.isdbgrid) {
        return { sharded: false };
      }

      // Use the listShards admin command rather than reading the config database
      // directly, which would require a separate client handle; chunk-level detail
      // (config.chunks) is omitted here for the same reason
      const shardList = await this.adminDb.command({ listShards: 1 });

      return {
        sharded: true,
        shardCount: shardList.shards.length,
        shards: shardList.shards.map(s => ({
          id: s._id,
          host: s.host,
          state: s.state
        }))
      };
    } catch (error) {
      return { sharded: false, error: error.message };
    }
  }

  async startPerformanceCollection() {
    console.log('Starting continuous performance metrics collection...');

    // Collect metrics at regular intervals
    setInterval(async () => {
      try {
        await this.collectComprehensivePerformanceMetrics();
      } catch (error) {
        console.error('Error in scheduled performance collection:', error);
      }
    }, this.config.metricsCollectionInterval);

    // Generate reports at longer intervals
    setInterval(async () => {
      try {
        await this.generatePerformanceReport();
      } catch (error) {
        console.error('Error in scheduled report generation:', error);
      }
    }, this.config.performanceReportInterval);
  }

  updateRealTimeMetrics(performanceData) {
    // Update in-memory metrics for real-time dashboard
    this.metrics.operationCounts.set('current', performanceData.operations);
    this.metrics.responseTimes.set('current', performanceData.queryPerformance);
    this.metrics.indexUsage.set('current', performanceData.indexPerformance);
    this.metrics.collectionMetrics.set('current', performanceData.collections);
  }

  async checkPerformanceAlerts(performanceData) {
    const alerts = [];

    // Check response time thresholds
    const avgResponseTime = performanceData.queryPerformance?.summary?.avgResponseTimeOverall || 0;
    if (avgResponseTime > this.config.alertThresholds.avgResponseTime) {
      alerts.push({
        type: 'high_response_time',
        severity: 'warning',
        message: `Average response time ${avgResponseTime}ms exceeds threshold ${this.config.alertThresholds.avgResponseTime}ms`
      });
    }

    // Check collection scans
    const collectionScans = performanceData.queryPerformance?.summary?.collectionScansCount || 0;
    if (collectionScans > 0) {
      alerts.push({
        type: 'collection_scans',
        severity: 'warning',
        message: `${collectionScans} queries performing collection scans`
      });
    }

    // Process alerts if any
    if (alerts.length > 0 && this.config.enableRealTimeAlerts) {
      await this.processPerformanceAlerts(alerts);
    }
  }

  async processPerformanceAlerts(alerts) {
    for (const alert of alerts) {
      console.warn(`⚠️ Performance Alert [${alert.severity}]: ${alert.message}`);

      // Store alert for historical tracking
      await this.collections.performanceMetrics.insertOne({
        type: 'alert',
        alert: alert,
        timestamp: new Date()
      });

      // Trigger external alerting systems here
      // (email, Slack, PagerDuty, etc.)
    }
  }
}

// Benefits of MongoDB Advanced Performance Monitoring:
// - Comprehensive query profiling with detailed execution analysis
// - Advanced index usage analysis and optimization recommendations
// - Collection-level performance metrics and storage efficiency tracking
// - Real-time performance monitoring with automated alerting
// - Intelligent optimization recommendations based on actual usage patterns
// - Integration with MongoDB's native profiling and statistics capabilities
// - Production-ready monitoring suitable for large-scale deployments
// - Historical performance trend analysis and baseline establishment
// - Automated performance report generation with executive summaries
// - SQL-compatible monitoring operations through QueryLeaf integration

module.exports = {
  AdvancedMongoPerformanceMonitor
};
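
As a rough usage sketch, the monitor above can be wired into an application entry point as follows. The module path ('./advanced-mongo-performance-monitor') and connection string are placeholders, and a profiling level of 1 (slow operations only) is shown as a more production-friendly default than profiling every operation.

// Hypothetical wiring for the monitor class above (module path and URI are placeholders)
const { MongoClient } = require('mongodb');
const { AdvancedMongoPerformanceMonitor } = require('./advanced-mongo-performance-monitor');

async function startMonitoring() {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();

  const db = client.db('production_performance_monitoring');
  const monitor = new AdvancedMongoPerformanceMonitor(db, {
    profilingLevel: 1,             // profile only slow operations in production
    slowOperationThreshold: 100,   // milliseconds
    metricsCollectionInterval: 300000
  });

  // Generate one on-demand report in addition to the scheduled collection
  // intervals, which keep running for the lifetime of the process
  const report = await monitor.generatePerformanceReport();

  console.log('Overall health score:', report.overallHealthScore);
  console.log('Status:', report.executiveSummary.status);
  for (const item of report.actionItems) {
    console.log(`- [${item.collection}] ${item.title}: ${item.action}`);
  }
}

startMonitoring().catch(console.error);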

Understanding MongoDB Performance Monitoring Architecture

Advanced Profiling and Optimization Strategies

Implement sophisticated monitoring patterns for production MongoDB deployments:

// Production-ready MongoDB performance monitoring with advanced optimization patterns
class ProductionPerformanceOptimizer extends AdvancedMongoPerformanceMonitor {
  constructor(db, productionConfig) {
    super(db, productionConfig);

    this.productionConfig = {
      ...productionConfig,
      enablePredictiveAnalytics: true,
      enableAutomaticOptimization: false, // Require manual approval
      enableCapacityPlanning: true,
      enablePerformanceBaseline: true,
      enableAnomalyDetection: true,
      enableCostOptimization: true
    };

    // These initialization hooks are extension points; concrete implementations
    // are expected to be supplied alongside the strategy methods below
    this.setupProductionOptimizations();
    this.initializePredictiveAnalytics();
    this.setupCapacityPlanningModels();
  }

  async implementAdvancedQueryOptimization(optimizationConfig) {
    console.log('Implementing advanced query optimization strategies...');

    const optimizationStrategies = {
      // Intelligent index recommendations
      indexOptimization: {
        compoundIndexAnalysis: true,
        partialIndexOptimization: true,
        sparseIndexRecommendations: true,
        indexIntersectionAnalysis: true
      },

      // Query pattern analysis
      queryOptimization: {
        aggregationPipelineOptimization: true,
        queryShapeAnalysis: true,
        executionPlanOptimization: true,
        sortOptimization: true
      },

      // Schema optimization
      schemaOptimization: {
        documentStructureAnalysis: true,
        fieldUsageAnalysis: true,
        embeddingVsReferencingAnalysis: true,
        denormalizationRecommendations: true
      },

      // Resource optimization
      resourceOptimization: {
        connectionPoolOptimization: true,
        memoryUsageOptimization: true,
        diskIOOptimization: true,
        networkOptimization: true
      }
    };

    return await this.executeOptimizationStrategies(optimizationStrategies);
  }

  async setupCapacityPlanningModels(planningRequirements) {
    console.log('Setting up capacity planning and growth prediction models...');

    const planningModels = {
      // Growth prediction models
      growthPrediction: {
        documentGrowthRate: await this.analyzeDocumentGrowthRate(),
        storageGrowthProjection: await this.projectStorageGrowth(),
        queryVolumeProjection: await this.projectQueryVolumeGrowth(),
        indexGrowthAnalysis: await this.analyzeIndexGrowthPatterns()
      },

      // Resource requirement models
      resourcePlanning: {
        cpuRequirements: await this.calculateCPURequirements(),
        memoryRequirements: await this.calculateMemoryRequirements(),
        storageRequirements: await this.calculateStorageRequirements(),
        networkRequirements: await this.calculateNetworkRequirements()
      },

      // Scaling recommendations
      scalingStrategy: {
        verticalScaling: await this.analyzeVerticalScalingNeeds(),
        horizontalScaling: await this.analyzeHorizontalScalingNeeds(),
        shardingRecommendations: await this.analyzeShardingRequirements(),
        replicaSetOptimization: await this.analyzeReplicaSetOptimization()
      }
    };

    return await this.implementCapacityPlanningModels(planningModels);
  }

  async enableAnomalyDetection(detectionConfig) {
    console.log('Enabling performance anomaly detection system...');

    const anomalyDetectionSystem = {
      // Statistical anomaly detection
      statisticalDetection: {
        responseTimeAnomalies: true,
        queryVolumeAnomalies: true,
        indexUsageAnomalies: true,
        resourceUsageAnomalies: true
      },

      // Machine learning based detection
      mlDetection: {
        queryPatternAnomalies: true,
        performanceDegradationPrediction: true,
        capacityThresholdPrediction: true,
        failurePatternRecognition: true
      },

      // Business logic anomalies
      businessLogicDetection: {
        unexpectedDataPatterns: true,
        unusualApplicationBehavior: true,
        securityAnomalies: true,
        complianceViolations: true
      }
    };

    return await this.implementAnomalyDetectionSystem(anomalyDetectionSystem);
  }
}
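
Because several of the methods the production optimizer calls (setupProductionOptimizations, initializePredictiveAnalytics, executeOptimizationStrategies, and the capacity planning analyzers) are extension points rather than part of the class shown above, a concrete deployment would subclass it and supply its own implementations. The sketch below is illustrative only and assumes the db handle from the earlier examples.

// Illustrative subclass that supplies minimal implementations for the
// extension points so the optimizer skeleton above can be exercised
class ExampleProductionOptimizer extends ProductionPerformanceOptimizer {
  // No-op initialization hooks for this example
  setupProductionOptimizations() {}
  initializePredictiveAnalytics() {}
  async setupCapacityPlanningModels() {}

  // Minimal strategy executor: report which optimization checks are enabled
  async executeOptimizationStrategies(strategies) {
    return Object.entries(strategies).map(([area, flags]) => ({
      area,
      enabledChecks: Object.keys(flags).filter(flag => flags[flag])
    }));
  }
}

const optimizer = new ExampleProductionOptimizer(db, { profilingLevel: 1 });
optimizer.implementAdvancedQueryOptimization()
  .then(summary => console.log('Optimization areas evaluated:', summary))
  .catch(console.error);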

SQL-Style Performance Monitoring with QueryLeaf

QueryLeaf provides familiar SQL syntax for MongoDB performance monitoring and optimization operations:

-- QueryLeaf advanced performance monitoring and optimization with SQL-familiar syntax

-- Enable comprehensive database profiling with advanced configuration
CONFIGURE PROFILING 
SET profiling_level = 2,
    slow_operation_threshold = 100,
    sample_rate = 1.0,
    filter_criteria = {
      include_slow_ops: true,
      include_collection_scans: true,
      include_lock_operations: true,
      include_index_analysis: true
    },
    collection_size = '100MB',
    max_documents = 1000000;

-- Comprehensive performance metrics analysis with detailed insights
WITH performance_analysis AS (
  SELECT 
    -- Operation characteristics
    operation_type,
    collection_name,
    execution_time_ms,
    documents_examined,
    documents_returned,
    index_keys_examined,
    execution_plan,

    -- Efficiency calculations
    CASE 
      WHEN documents_examined > 0 THEN 
        CAST(documents_returned AS FLOAT) / documents_examined
      ELSE 1.0
    END as query_efficiency,

    -- Performance categorization
    CASE 
      WHEN execution_time_ms < 10 THEN 'very_fast'
      WHEN execution_time_ms < 100 THEN 'fast'
      WHEN execution_time_ms < 500 THEN 'moderate'
      WHEN execution_time_ms < 2000 THEN 'slow'
      ELSE 'very_slow'
    END as performance_category,

    -- Index usage analysis
    CASE 
      WHEN execution_plan LIKE '%IXSCAN%' THEN 'index_scan'
      WHEN execution_plan LIKE '%COLLSCAN%' THEN 'collection_scan'
      ELSE 'other'
    END as index_usage_type,

    -- Lock analysis
    locks_acquired,
    lock_wait_time_ms,

    -- Resource usage
    cpu_time_ms,
    memory_usage_bytes,

    -- Timestamp for trend analysis
    DATE_TRUNC('minute', operation_timestamp) as time_bucket

  FROM PROFILE_DATA
  WHERE operation_timestamp >= CURRENT_TIMESTAMP - INTERVAL '1 hour'
    AND database_name = CURRENT_DATABASE()
),

aggregated_metrics AS (
  SELECT 
    collection_name,
    operation_type,
    index_usage_type,
    time_bucket,

    -- Operation volume metrics
    COUNT(*) as operation_count,

    -- Performance metrics
    AVG(execution_time_ms) as avg_response_time,
    PERCENTILE_CONT(0.50) WITHIN GROUP (ORDER BY execution_time_ms) as median_response_time,
    PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY execution_time_ms) as p95_response_time,
    PERCENTILE_CONT(0.99) WITHIN GROUP (ORDER BY execution_time_ms) as p99_response_time,
    MIN(execution_time_ms) as min_response_time,
    MAX(execution_time_ms) as max_response_time,

    -- Efficiency metrics
    AVG(query_efficiency) as avg_efficiency,
    SUM(documents_examined) as total_docs_examined,
    SUM(documents_returned) as total_docs_returned,
    SUM(index_keys_examined) as total_index_keys_examined,

    -- Performance distribution
    COUNT(*) FILTER (WHERE performance_category = 'very_fast') as very_fast_ops,
    COUNT(*) FILTER (WHERE performance_category = 'fast') as fast_ops,
    COUNT(*) FILTER (WHERE performance_category = 'moderate') as moderate_ops,
    COUNT(*) FILTER (WHERE performance_category = 'slow') as slow_ops,
    COUNT(*) FILTER (WHERE performance_category = 'very_slow') as very_slow_ops,

    -- Resource utilization
    AVG(cpu_time_ms) as avg_cpu_time,
    AVG(memory_usage_bytes) as avg_memory_usage,
    SUM(lock_wait_time_ms) as total_lock_wait_time,

    -- Index efficiency
    COUNT(*) FILTER (WHERE index_usage_type = 'collection_scan') as collection_scan_count,
    COUNT(*) FILTER (WHERE index_usage_type = 'index_scan') as index_scan_count,

    -- Calculate performance score
    (
      -- Response time component (lower is better)
      (1000 - LEAST(AVG(execution_time_ms), 1000)) / 1000 * 40 +

      -- Efficiency component (higher is better)  
      AVG(query_efficiency) * 30 +

      -- Index usage component (index scans preferred)
      CASE 
        WHEN COUNT(*) FILTER (WHERE index_usage_type = 'index_scan') > 
             COUNT(*) FILTER (WHERE index_usage_type = 'collection_scan') THEN 20
        ELSE 0
      END +

      -- Volume stability component
      LEAST(COUNT(*) / 100.0, 1.0) * 10

    ) as performance_score

  FROM performance_analysis
  GROUP BY collection_name, operation_type, index_usage_type, time_bucket
),

performance_trends AS (
  SELECT 
    am.*,

    -- Trend analysis with window functions
    LAG(avg_response_time) OVER (
      PARTITION BY collection_name, operation_type, index_usage_type
      ORDER BY time_bucket
    ) as prev_response_time,

    LAG(operation_count) OVER (
      PARTITION BY collection_name, operation_type, index_usage_type  
      ORDER BY time_bucket
    ) as prev_operation_count,

    -- Moving averages for smoothing
    AVG(avg_response_time) OVER (
      PARTITION BY collection_name, operation_type, index_usage_type
      ORDER BY time_bucket
      ROWS BETWEEN 4 PRECEDING AND CURRENT ROW
    ) as moving_avg_response_time,

    AVG(performance_score) OVER (
      PARTITION BY collection_name, operation_type, index_usage_type
      ORDER BY time_bucket  
      ROWS BETWEEN 4 PRECEDING AND CURRENT ROW
    ) as moving_avg_performance_score

  FROM aggregated_metrics am
)

SELECT 
  collection_name,
  operation_type,
  index_usage_type,
  time_bucket,

  -- Core performance metrics
  operation_count,
  ROUND(avg_response_time::NUMERIC, 2) as avg_response_time_ms,
  ROUND(median_response_time::NUMERIC, 2) as median_response_time_ms,
  ROUND(p95_response_time::NUMERIC, 2) as p95_response_time_ms,
  ROUND(p99_response_time::NUMERIC, 2) as p99_response_time_ms,

  -- Efficiency metrics
  ROUND((avg_efficiency * 100)::NUMERIC, 2) as efficiency_percentage,
  total_docs_examined,
  total_docs_returned,

  -- Performance distribution
  JSON_OBJECT(
    'very_fast', very_fast_ops,
    'fast', fast_ops, 
    'moderate', moderate_ops,
    'slow', slow_ops,
    'very_slow', very_slow_ops
  ) as performance_distribution,

  -- Index usage analysis
  collection_scan_count,
  index_scan_count,
  ROUND(
    (index_scan_count::FLOAT / NULLIF(collection_scan_count + index_scan_count, 0) * 100)::NUMERIC, 
    2
  ) as index_usage_percentage,

  -- Performance scoring
  ROUND(performance_score::NUMERIC, 2) as performance_score,
  CASE 
    WHEN performance_score >= 90 THEN 'excellent'
    WHEN performance_score >= 75 THEN 'good'
    WHEN performance_score >= 60 THEN 'fair'
    WHEN performance_score >= 40 THEN 'poor'
    ELSE 'critical'
  END as performance_grade,

  -- Trend analysis
  CASE 
    WHEN prev_response_time IS NOT NULL THEN
      ROUND(((avg_response_time - prev_response_time) / prev_response_time * 100)::NUMERIC, 2)
    ELSE NULL
  END as response_time_change_percent,

  CASE 
    WHEN prev_operation_count IS NOT NULL THEN
      ROUND(((operation_count - prev_operation_count)::FLOAT / prev_operation_count * 100)::NUMERIC, 2)
    ELSE NULL
  END as volume_change_percent,

  -- Moving averages for trend smoothing
  ROUND(moving_avg_response_time::NUMERIC, 2) as trend_response_time,
  ROUND(moving_avg_performance_score::NUMERIC, 2) as trend_performance_score,

  -- Resource utilization
  ROUND(avg_cpu_time::NUMERIC, 2) as avg_cpu_time_ms,
  ROUND((avg_memory_usage / 1024.0 / 1024)::NUMERIC, 2) as avg_memory_usage_mb,
  total_lock_wait_time as total_lock_wait_ms,

  -- Alert indicators
  CASE 
    WHEN avg_response_time > 1000 THEN 'high_response_time'
    WHEN collection_scan_count > index_scan_count THEN 'excessive_collection_scans'
    WHEN avg_efficiency < 0.1 THEN 'low_efficiency'
    WHEN total_lock_wait_time > 1000 THEN 'lock_contention'
    ELSE 'normal'
  END as alert_status,

  CURRENT_TIMESTAMP as analysis_timestamp

FROM performance_trends
WHERE operation_count > 0  -- Filter out empty buckets
ORDER BY 
  performance_score ASC,  -- Show problematic areas first
  avg_response_time DESC,
  collection_name,
  operation_type;

-- Advanced index analysis and optimization recommendations
WITH index_statistics AS (
  SELECT 
    collection_name,
    index_name,
    index_spec,
    index_size_bytes,

    -- Usage statistics
    access_count,
    last_access_time,

    -- Index characteristics
    is_unique,
    is_sparse, 
    is_partial,
    is_compound,

    -- Calculate metrics
    EXTRACT(DAY FROM CURRENT_TIMESTAMP - last_access_time) as days_since_access,

    -- Index type classification
    CASE 
      WHEN access_count = 0 THEN 'unused'
      WHEN access_count < 100 THEN 'low_usage'
      WHEN access_count < 10000 THEN 'medium_usage'
      ELSE 'high_usage'
    END as usage_category,

    -- Get collection statistics for context
    (SELECT document_count FROM COLLECTION_STATS cs WHERE cs.collection_name = idx.collection_name) as collection_doc_count,
    (SELECT total_size_bytes FROM COLLECTION_STATS cs WHERE cs.collection_name = idx.collection_name) as collection_size_bytes

  FROM INDEX_STATS idx
  WHERE database_name = CURRENT_DATABASE()
),

index_analysis AS (
  SELECT 
    *,

    -- Calculate index efficiency metrics
    CASE 
      WHEN collection_size_bytes > 0 THEN 
        CAST(index_size_bytes AS FLOAT) / collection_size_bytes
      ELSE 0
    END as size_ratio,

    -- Usage intensity
    CASE 
      WHEN collection_doc_count > 0 THEN
        CAST(access_count AS FLOAT) / collection_doc_count
      ELSE 0
    END as usage_intensity,

    -- ROI calculation (simplified)
    CASE 
      WHEN index_size_bytes > 0 THEN
        CAST(access_count AS FLOAT) / (index_size_bytes / 1024 / 1024)  -- accesses per MB
      ELSE 0
    END as access_per_mb,

    -- Optimization opportunity scoring
    -- (ratio expressions are repeated here because column aliases defined in the
    --  same SELECT list cannot be referenced by other expressions at this level)
    CASE 
      WHEN access_count = 0 AND index_name != '_id_' THEN 100  -- Remove unused
      WHEN access_count < 10 AND days_since_access > 30 THEN 80  -- Consider removal
      WHEN collection_size_bytes > 0 
           AND CAST(index_size_bytes AS FLOAT) / collection_size_bytes > 0.5 THEN 60  -- Oversized index
      WHEN is_compound = false 
           AND collection_doc_count > 0 
           AND CAST(access_count AS FLOAT) / collection_doc_count < 0.01 THEN 40  -- Underutilized single field
      ELSE 0
    END as optimization_priority

  FROM index_statistics
),

optimization_recommendations AS (
  SELECT 
    collection_name,
    index_name,
    usage_category,

    -- Current metrics
    access_count,
    ROUND((index_size_bytes / 1024.0 / 1024)::NUMERIC, 2) as index_size_mb,
    ROUND((size_ratio * 100)::NUMERIC, 2) as size_ratio_percent,
    days_since_access,

    -- Optimization recommendations
    CASE 
      WHEN optimization_priority >= 100 THEN 
        JSON_OBJECT(
          'action', 'remove_index',
          'reason', 'Index is unused and consuming storage',
          'impact', 'Reduced storage usage and faster writes',
          'priority', 'high'
        )
      WHEN optimization_priority >= 80 THEN
        JSON_OBJECT(
          'action', 'consider_removal',
          'reason', 'Index has very low usage and is stale',
          'impact', 'Potential storage savings with minimal risk',
          'priority', 'medium'
        )
      WHEN optimization_priority >= 60 THEN
        JSON_OBJECT(
          'action', 'optimize_index',
          'reason', 'Index size is disproportionately large',
          'impact', 'Consider sparse or partial index options',
          'priority', 'medium'
        )
      WHEN optimization_priority >= 40 THEN
        JSON_OBJECT(
          'action', 'review_usage',
          'reason', 'Single field index with low utilization',
          'impact', 'Evaluate if compound index would be more effective',
          'priority', 'low'
        )
      ELSE
        JSON_OBJECT(
          'action', 'maintain',
          'reason', 'Index appears to be well utilized',
          'impact', 'No immediate action required',
          'priority', 'none'
        )
    END as recommendation,

    -- Performance impact estimation
    CASE 
      WHEN optimization_priority >= 80 THEN
        JSON_OBJECT(
          'storage_savings_mb', ROUND((index_size_bytes / 1024.0 / 1024)::NUMERIC, 2),
          'write_performance_improvement', '5-15%',
          'query_performance_impact', 'minimal'
        )
      WHEN optimization_priority >= 40 THEN
        JSON_OBJECT(
          'storage_savings_mb', ROUND((index_size_bytes / 1024.0 / 1024 * 0.3)::NUMERIC, 2),
          'write_performance_improvement', '2-8%', 
          'query_performance_impact', 'requires_analysis'
        )
      ELSE
        JSON_OBJECT(
          'storage_savings_mb', 0,
          'write_performance_improvement', '0%',
          'query_performance_impact', 'none'
        )
    END as impact_estimate,

    optimization_priority

  FROM index_analysis
  WHERE optimization_priority > 0
)

SELECT 
  collection_name,
  index_name,
  usage_category,
  access_count,
  index_size_mb,
  size_ratio_percent,
  days_since_access,

  -- Recommendation details
  JSON_EXTRACT(recommendation, '$.action') as recommended_action,
  JSON_EXTRACT(recommendation, '$.reason') as recommendation_reason,
  JSON_EXTRACT(recommendation, '$.impact') as expected_impact,
  JSON_EXTRACT(recommendation, '$.priority') as priority_level,

  -- Impact estimation
  CAST(JSON_EXTRACT(impact_estimate, '$.storage_savings_mb') AS DECIMAL(10,2)) as potential_storage_savings_mb,
  JSON_EXTRACT(impact_estimate, '$.write_performance_improvement') as write_performance_gain,
  JSON_EXTRACT(impact_estimate, '$.query_performance_impact') as query_impact_assessment,

  -- Implementation guidance
  CASE 
    WHEN JSON_EXTRACT(recommendation, '$.action') = 'remove_index' THEN
      'DROP INDEX ' || index_name || ' ON ' || collection_name
    WHEN JSON_EXTRACT(recommendation, '$.action') = 'optimize_index' THEN
      'Review index definition and consider sparse/partial options'
    ELSE 'Monitor usage patterns before taking action'
  END as implementation_command,

  optimization_priority,
  CURRENT_TIMESTAMP as analysis_date

FROM optimization_recommendations
ORDER BY optimization_priority DESC, index_size_mb DESC;

-- Real-time performance monitoring dashboard query
CREATE VIEW real_time_performance_dashboard AS
WITH current_metrics AS (
  SELECT 
    -- Time-based grouping for real-time updates
    DATE_TRUNC('minute', CURRENT_TIMESTAMP) as current_minute,

    -- Operation volume in last minute
    (SELECT COUNT(*) FROM PROFILE_DATA 
     WHERE operation_timestamp >= CURRENT_TIMESTAMP - INTERVAL '1 minute') as ops_per_minute,

    -- Average response time in last minute  
    (SELECT AVG(execution_time_ms) FROM PROFILE_DATA
     WHERE operation_timestamp >= CURRENT_TIMESTAMP - INTERVAL '1 minute') as avg_response_time_1m,

    -- Collection scans in last minute
    (SELECT COUNT(*) FROM PROFILE_DATA
     WHERE operation_timestamp >= CURRENT_TIMESTAMP - INTERVAL '1 minute'
     AND execution_plan LIKE '%COLLSCAN%') as collection_scans_1m,

    -- Slow queries in last minute (>500ms)
    (SELECT COUNT(*) FROM PROFILE_DATA  
     WHERE operation_timestamp >= CURRENT_TIMESTAMP - INTERVAL '1 minute'
     AND execution_time_ms > 500) as slow_queries_1m,

    -- Connection statistics
    (SELECT current_connections FROM CONNECTION_STATS) as current_connections,
    (SELECT max_connections FROM CONNECTION_STATS) as max_connections,

    -- Memory usage
    (SELECT resident_memory_mb FROM MEMORY_STATS) as memory_usage_mb,
    (SELECT cache_hit_ratio FROM MEMORY_STATS) as cache_hit_ratio,

    -- Storage metrics
    (SELECT SUM(data_size_bytes) FROM COLLECTION_STATS) as total_data_size_bytes,
    (SELECT SUM(storage_size_bytes) FROM COLLECTION_STATS) as total_storage_size_bytes,
    (SELECT SUM(index_size_bytes) FROM COLLECTION_STATS) as total_index_size_bytes
),

health_indicators AS (
  SELECT 
    cm.*,

    -- Calculate health scores
    CASE 
      WHEN avg_response_time_1m > 1000 THEN 'critical'
      WHEN avg_response_time_1m > 500 THEN 'warning' 
      WHEN avg_response_time_1m > 100 THEN 'ok'
      ELSE 'excellent'
    END as response_time_health,

    CASE 
      WHEN collection_scans_1m > 10 THEN 'critical'
      WHEN collection_scans_1m > 5 THEN 'warning'
      WHEN collection_scans_1m > 0 THEN 'ok'
      ELSE 'excellent'  
    END as index_usage_health,

    CASE 
      WHEN current_connections::FLOAT / NULLIF(max_connections, 0) > 0.9 THEN 'critical'
      WHEN current_connections::FLOAT / NULLIF(max_connections, 0) > 0.8 THEN 'warning'
      WHEN current_connections::FLOAT / NULLIF(max_connections, 0) > 0.7 THEN 'ok'
      ELSE 'excellent'
    END as connection_health,

    CASE 
      WHEN cache_hit_ratio < 0.8 THEN 'critical'
      WHEN cache_hit_ratio < 0.9 THEN 'warning'
      WHEN cache_hit_ratio < 0.95 THEN 'ok'
      ELSE 'excellent'
    END as memory_health

  FROM current_metrics cm
)

SELECT 
  current_minute,

  -- Real-time performance metrics
  ops_per_minute,
  ROUND(avg_response_time_1m::NUMERIC, 2) as avg_response_time_ms,
  collection_scans_1m,
  slow_queries_1m,

  -- Health indicators
  response_time_health,
  index_usage_health, 
  connection_health,
  memory_health,

  -- Overall health score
  CASE 
    WHEN response_time_health = 'critical' OR index_usage_health = 'critical' OR 
         connection_health = 'critical' OR memory_health = 'critical' THEN 'critical'
    WHEN response_time_health = 'warning' OR index_usage_health = 'warning' OR
         connection_health = 'warning' OR memory_health = 'warning' THEN 'warning'  
    WHEN response_time_health = 'ok' OR index_usage_health = 'ok' OR
         connection_health = 'ok' OR memory_health = 'ok' THEN 'ok'
    ELSE 'excellent'
  END as overall_health,

  -- Resource utilization
  current_connections,
  max_connections,
  ROUND((current_connections::FLOAT / NULLIF(max_connections, 0) * 100)::NUMERIC, 2) as connection_usage_percent,

  memory_usage_mb,
  ROUND((cache_hit_ratio * 100)::NUMERIC, 2) as cache_hit_percent,

  -- Storage information
  ROUND((total_data_size_bytes / 1024.0 / 1024 / 1024)::NUMERIC, 2) as total_data_gb,
  ROUND((total_storage_size_bytes / 1024.0 / 1024 / 1024)::NUMERIC, 2) as total_storage_gb,
  ROUND((total_index_size_bytes / 1024.0 / 1024 / 1024)::NUMERIC, 2) as total_index_gb,

  -- Efficiency metrics
  ROUND((total_data_size_bytes::FLOAT / NULLIF(total_storage_size_bytes, 0))::NUMERIC, 4) as storage_efficiency,
  ROUND((total_index_size_bytes::FLOAT / NULLIF(total_data_size_bytes, 0))::NUMERIC, 4) as index_to_data_ratio,

  -- Alert conditions
  CASE 
    WHEN ops_per_minute = 0 THEN 'no_activity'
    WHEN slow_queries_1m > ops_per_minute * 0.1 THEN 'high_slow_query_ratio'
    WHEN collection_scans_1m > ops_per_minute * 0.05 THEN 'excessive_collection_scans'
    ELSE 'normal'
  END as alert_condition,

  -- Recommendations
  ARRAY_REMOVE(ARRAY[
    CASE WHEN response_time_health IN ('critical', 'warning') THEN 'Review slow queries and indexing strategy' END,
    CASE WHEN index_usage_health IN ('critical', 'warning') THEN 'Add indexes to eliminate collection scans' END, 
    CASE WHEN connection_health IN ('critical', 'warning') THEN 'Monitor connection pooling and usage patterns' END,
    CASE WHEN memory_health IN ('critical', 'warning') THEN 'Review memory allocation and cache settings' END
  ]::TEXT[], NULL) as immediate_recommendations  -- drop NULL entries so only applicable recommendations remain

FROM health_indicators;

-- QueryLeaf provides comprehensive MongoDB performance monitoring capabilities:
-- 1. SQL-familiar syntax for MongoDB profiling configuration and analysis
-- 2. Advanced performance metrics collection with detailed execution insights  
-- 3. Real-time index usage analysis and optimization recommendations
-- 4. Comprehensive query performance analysis with efficiency scoring
-- 5. Production-ready monitoring dashboards with health indicators
-- 6. Automated optimization recommendations based on actual usage patterns
-- 7. Trend analysis and performance baseline establishment
-- 8. Integration with MongoDB's native profiling and statistics systems
-- 9. Advanced alerting and anomaly detection capabilities
-- 10. Capacity planning and resource optimization insights

Best Practices for Production MongoDB Performance Monitoring

Monitoring Strategy Implementation

Essential principles for effective MongoDB performance monitoring and optimization:

  1. Profiling Configuration: Configure appropriate profiling levels and sampling rates to balance insight with performance impact (see the sketch after this list)
  2. Metrics Collection: Implement comprehensive metrics collection covering queries, indexes, resources, and business operations
  3. Baseline Establishment: Establish performance baselines to enable meaningful trend analysis and anomaly detection
  4. Alert Strategy: Design intelligent alerting that focuses on actionable issues rather than metric noise
  5. Optimization Workflow: Implement systematic optimization workflows with testing and validation procedures
  6. Capacity Planning: Utilize historical data and growth patterns for proactive capacity planning and scaling decisions
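
To make the first point concrete, here is a minimal sketch of configuring the profiler from application code with the Node.js driver. The database name and thresholds are illustrative assumptions, not recommendations.

// Minimal sketch: setting the profiler level and sampling rate with the Node.js driver
const { MongoClient } = require('mongodb');

async function configureProfiling(uri) {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const db = client.db('app_database'); // hypothetical database name

    // Level 1 captures only operations slower than `slowms`; `sampleRate` (MongoDB 4.2+)
    // limits profiling overhead by sampling a fraction of those slow operations
    await db.command({ profile: 1, slowms: 100, sampleRate: 0.5 });

    // Profiled operations are written to the capped `system.profile` collection
    const recentSlowOps = await db.collection('system.profile')
      .find({ millis: { $gt: 100 } })
      .sort({ ts: -1 })
      .limit(10)
      .toArray();

    console.log(`Captured ${recentSlowOps.length} recent slow operations`);
  } finally {
    await client.close();
  }
}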

Production Deployment Optimization

Optimize MongoDB monitoring deployments for enterprise environments:

  1. Automated Analysis: Implement automated performance analysis and recommendation generation to reduce manual overhead
  2. Integration Ecosystem: Integrate monitoring with existing observability platforms and operational workflows
  3. Cost Optimization: Balance monitoring comprehensiveness with resource costs and performance impact
  4. Scalability Design: Design monitoring systems that scale effectively with database growth and complexity
  5. Security Integration: Ensure monitoring systems comply with security requirements and access control policies
  6. Documentation Standards: Maintain comprehensive documentation of monitoring configurations, thresholds, and procedures

Conclusion

MongoDB performance monitoring and optimization requires sophisticated tooling and methodologies that understand the unique characteristics of document databases, distributed architectures, and dynamic schema patterns. Advanced monitoring capabilities including query profiling, index analysis, resource tracking, and automated optimization recommendations enable proactive performance management that prevents issues before they impact application users.

Key MongoDB Performance Monitoring benefits include:

  • Comprehensive Profiling: Deep insights into query execution, index usage, and resource utilization patterns
  • Intelligent Optimization: Automated analysis and recommendations based on actual usage patterns and performance data
  • Real-time Monitoring: Continuous performance tracking with proactive alerting and anomaly detection
  • Capacity Planning: Data-driven insights for scaling decisions and resource optimization
  • Production Integration: Enterprise-ready monitoring that integrates with existing operational workflows
  • SQL Accessibility: Familiar SQL-style monitoring operations through QueryLeaf for accessible performance management

Whether you're managing development environments, production deployments, or large-scale distributed MongoDB systems, comprehensive performance monitoring with QueryLeaf's familiar SQL interface provides the foundation for optimal database performance and reliability.

QueryLeaf Integration: QueryLeaf automatically translates SQL-style monitoring queries into MongoDB's native profiling and statistics operations, making advanced performance analysis accessible to SQL-oriented teams. Complex profiling configurations, index analysis, and optimization recommendations are seamlessly handled through familiar SQL constructs, enabling sophisticated performance management without requiring deep MongoDB expertise.
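
For readers who want to see the native side of this translation, the profiling data these SQL-style queries ultimately draw on lives in MongoDB's system.profile collection and can be summarized with a plain aggregation. The sketch below is a rough illustration; the field names follow the profiler's output schema and the time window is an assumption.

// Rough sketch: summarizing profiler output directly from system.profile with the Node.js driver
async function summarizeSlowOperations(db) {
  return db.collection('system.profile').aggregate([
    // Only look at operations profiled in the last hour (illustrative window)
    { $match: { ts: { $gte: new Date(Date.now() - 60 * 60 * 1000) } } },
    {
      $group: {
        _id: { ns: '$ns', op: '$op', plan: '$planSummary' },
        operationCount: { $sum: 1 },
        avgMillis: { $avg: '$millis' },
        totalDocsExamined: { $sum: { $ifNull: ['$docsExamined', 0] } },
        totalDocsReturned: { $sum: { $ifNull: ['$nreturned', 0] } }
      }
    },
    { $sort: { avgMillis: -1 } },
    { $limit: 20 }
  ]).toArray();
}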

The combination of MongoDB's robust performance monitoring capabilities with SQL-style analysis operations makes it an ideal platform for applications requiring both advanced performance optimization and familiar database management patterns, ensuring your MongoDB deployments maintain optimal performance as they scale and evolve.

MongoDB Vector Search and AI Applications: Building Semantic Search and Similarity Systems for Modern AI-Powered Applications

Modern artificial intelligence applications require sophisticated search capabilities that understand semantic meaning beyond traditional keyword matching, enabling natural language queries, content recommendation systems, and intelligent document retrieval based on conceptual similarity rather than exact text matches. Traditional full-text search approaches struggle with understanding context, synonyms, and conceptual relationships, limiting their effectiveness for AI-powered applications that need to comprehend user intent and content meaning.

MongoDB Vector Search provides comprehensive vector similarity capabilities that enable semantic search, recommendation engines, and AI-powered content discovery through high-dimensional vector embeddings and advanced similarity algorithms. Unlike traditional search systems that rely on exact keyword matching, MongoDB Vector Search leverages machine learning embeddings to understand content semantics, enabling applications to find conceptually similar documents, perform natural language search, and power intelligent recommendation systems.
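
Before looking at the traditional approach, here is a brief preview of the core building block the rest of this article revolves around: the $vectorSearch aggregation stage, which ranks documents by similarity between a stored embedding and a query embedding. The sketch assumes an Atlas Vector Search index already exists on the embedding field; the index and collection names are illustrative.

// Preview: a minimal $vectorSearch pipeline (assumes an Atlas Vector Search index on `embedding`)
async function findSimilarDocuments(db, queryEmbedding) {
  return db.collection('documents').aggregate([
    {
      $vectorSearch: {
        index: 'document_vector_index',   // illustrative index name
        path: 'embedding',                // field holding each document's embedding vector
        queryVector: queryEmbedding,      // embedding generated from the user's query text
        numCandidates: 200,               // candidates considered before final ranking
        limit: 10
      }
    },
    { $project: { title: 1, score: { $meta: 'vectorSearchScore' } } }
  ]).toArray();
}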

The Traditional Search Limitation Challenge

Conventional text-based search approaches have significant limitations for modern AI applications:

-- Traditional PostgreSQL full-text search - limited semantic understanding and context awareness

-- Basic full-text search setup with limited semantic capabilities
CREATE TABLE documents (
    document_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    title VARCHAR(500) NOT NULL,
    content TEXT NOT NULL,
    category VARCHAR(100),
    author VARCHAR(200),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    -- Traditional metadata
    tags TEXT[],
    keywords VARCHAR(1000),
    summary TEXT,
    document_type VARCHAR(50),
    language VARCHAR(10) DEFAULT 'en',

    -- Basic search vectors (limited functionality)
    search_vector tsvector GENERATED ALWAYS AS (
        setweight(to_tsvector('english', coalesce(title, '')), 'A') ||
        setweight(to_tsvector('english', coalesce(content, '')), 'B') ||
        setweight(to_tsvector('english', coalesce(summary, '')), 'C') ||
        setweight(to_tsvector('english', array_to_string(coalesce(tags, '{}'), ' ')), 'D')
    ) STORED
);

-- Additional tables for recommendation attempts
CREATE TABLE user_interactions (
    interaction_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID NOT NULL,
    document_id UUID NOT NULL REFERENCES documents(document_id),
    interaction_type VARCHAR(50) NOT NULL, -- 'view', 'like', 'share', 'download'
    interaction_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    duration_seconds INTEGER,
    rating INTEGER CHECK (rating BETWEEN 1 AND 5)
);

CREATE TABLE document_similarity (
    document_id_1 UUID NOT NULL REFERENCES documents(document_id),
    document_id_2 UUID NOT NULL REFERENCES documents(document_id),
    similarity_score DECIMAL(5,4) NOT NULL CHECK (similarity_score BETWEEN 0 AND 1),
    similarity_type VARCHAR(50) NOT NULL, -- 'keyword', 'category', 'manual'
    calculated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (document_id_1, document_id_2)
);

-- Traditional keyword-based search with limited semantic understanding
WITH search_query AS (
    SELECT 
        'machine learning artificial intelligence neural networks deep learning' as query_text,
        to_tsquery('english', 
            'machine & learning & artificial & intelligence & neural & networks & deep & learning'
        ) as search_tsquery
),
basic_search AS (
    SELECT 
        d.document_id,
        d.title,
        d.content,
        d.category,
        d.author,
        d.created_at,
        d.tags,

        -- Basic relevance scoring (limited effectiveness)
        ts_rank(d.search_vector, sq.search_tsquery) as basic_relevance,
        ts_rank_cd(d.search_vector, sq.search_tsquery) as weighted_relevance,

        -- Simple keyword matching: count distinct query words appearing in the document text
        -- (the `&` intersection operator is not defined for text[], so use a scalar subquery)
        (
            SELECT COUNT(DISTINCT qw.word)
            FROM unnest(string_to_array(lower(sq.query_text), ' ')) AS qw(word)
            WHERE regexp_replace(
                      lower(d.title || ' ' || d.content),
                      '[^a-z0-9\s]', ' ', 'g'
                  ) LIKE '%' || qw.word || '%'
        ) as keyword_matches,

        -- Category-based scoring
        CASE d.category 
            WHEN 'AI/ML' THEN 2.0
            WHEN 'Technology' THEN 1.5 
            WHEN 'Science' THEN 1.2
            ELSE 1.0 
        END as category_boost,

        -- Recency scoring
        CASE 
            WHEN d.created_at > CURRENT_DATE - INTERVAL '30 days' THEN 1.5
            WHEN d.created_at > CURRENT_DATE - INTERVAL '90 days' THEN 1.2  
            WHEN d.created_at > CURRENT_DATE - INTERVAL '365 days' THEN 1.1
            ELSE 1.0
        END as recency_boost

    FROM documents d
    CROSS JOIN search_query sq
    WHERE d.search_vector @@ sq.search_tsquery
),
popularity_metrics AS (
    SELECT 
        ui.document_id,
        COUNT(*) as interaction_count,
        COUNT(*) FILTER (WHERE ui.interaction_type = 'view') as view_count,
        COUNT(*) FILTER (WHERE ui.interaction_type = 'like') as like_count,
        COUNT(*) FILTER (WHERE ui.interaction_type = 'share') as share_count,
        AVG(ui.rating) FILTER (WHERE ui.rating IS NOT NULL) as avg_rating,
        AVG(ui.duration_seconds) FILTER (WHERE ui.duration_seconds IS NOT NULL) as avg_duration
    FROM user_interactions ui
    WHERE ui.interaction_timestamp > CURRENT_DATE - INTERVAL '90 days'
    GROUP BY ui.document_id
),
similarity_expansion AS (
    -- Manual similarity relationships (limited and maintenance-heavy)
    SELECT DISTINCT
        ds.document_id_2 as document_id,
        MAX(ds.similarity_score) as max_similarity,
        COUNT(*) as similar_document_count
    FROM basic_search bs
    JOIN document_similarity ds ON bs.document_id = ds.document_id_1
    WHERE ds.similarity_score > 0.3
    GROUP BY ds.document_id_2
),
final_search_results AS (
    SELECT 
        bs.document_id,
        bs.title,
        SUBSTRING(bs.content, 1, 300) || '...' as content_preview,
        bs.category,
        bs.author,
        bs.created_at,
        bs.tags,

        -- Complex relevance calculation (limited effectiveness)
        (
            (bs.basic_relevance * 10) +
            (bs.weighted_relevance * 15) + 
            (COALESCE(bs.keyword_matches, 0) * 5) +
            (bs.category_boost * 3) +
            (bs.recency_boost * 2) +
            (COALESCE(pm.like_count, 0) * 0.1) +
            (COALESCE(pm.avg_rating, 0) * 2) +
            (COALESCE(se.max_similarity, 0) * 8)
        ) as final_relevance_score,

        -- Metrics for debugging
        bs.basic_relevance,
        bs.keyword_matches,
        pm.interaction_count,
        pm.like_count,
        pm.avg_rating,
        se.similar_document_count,

        -- Formatted information
        CASE 
            WHEN pm.interaction_count > 100 THEN 'Popular'
            WHEN pm.interaction_count > 50 THEN 'Moderately Popular'
            WHEN pm.interaction_count > 10 THEN 'Some Interest'
            ELSE 'New/Limited Interest'
        END as popularity_status

    FROM basic_search bs
    LEFT JOIN popularity_metrics pm ON bs.document_id = pm.document_id
    LEFT JOIN similarity_expansion se ON bs.document_id = se.document_id
)
SELECT 
    document_id,
    title,
    content_preview,
    category,
    author,
    created_at,
    tags,
    ROUND(final_relevance_score::numeric, 2) as relevance_score,
    popularity_status,

    -- Limited recommendation capability
    CASE 
        WHEN final_relevance_score > 25 THEN 'Highly Relevant'
        WHEN final_relevance_score > 15 THEN 'Relevant' 
        WHEN final_relevance_score > 8 THEN 'Potentially Relevant'
        ELSE 'Low Relevance'
    END as relevance_category

FROM final_search_results
WHERE final_relevance_score > 5  -- Filter low-relevance results
ORDER BY final_relevance_score DESC
LIMIT 20;

-- Problems with traditional search approaches:
-- 1. No semantic understanding - "ML" vs "machine learning" treated as completely different
-- 2. Limited context awareness - cannot understand conceptual relationships
-- 3. Poor synonym handling - requires manual synonym dictionaries
-- 4. No natural language query support - requires exact keyword matching
-- 5. Complex manual similarity calculations that don't scale
-- 6. No understanding of document embeddings or vector representations
-- 7. Limited recommendation capabilities based on simple collaborative filtering
-- 8. Poor handling of multilingual content and cross-language search
-- 9. No support for image, audio, or multi-modal content search
-- 10. Maintenance-heavy similarity relationships that become stale

-- Attempt at content-based recommendation (ineffective)
CREATE OR REPLACE FUNCTION calculate_basic_similarity(doc1_id UUID, doc2_id UUID)
RETURNS DECIMAL AS $$
DECLARE
    doc1_vector tsvector;
    doc2_vector tsvector;
    similarity_score DECIMAL;
BEGIN
    SELECT search_vector INTO doc1_vector FROM documents WHERE document_id = doc1_id;
    SELECT search_vector INTO doc2_vector FROM documents WHERE document_id = doc2_id;

    -- Extremely limited similarity calculation
    SELECT ts_rank(doc1_vector, plainto_tsquery('english', 
        array_to_string(
            string_to_array(
                regexp_replace(doc2_vector::text, '[^a-zA-Z0-9\s]', ' ', 'g'), 
                ' '
            ), 
            ' '
        )
    )) INTO similarity_score;

    RETURN COALESCE(similarity_score, 0);
END;
$$ LANGUAGE plpgsql;

-- Manual batch similarity calculation (expensive and inaccurate)
INSERT INTO document_similarity (document_id_1, document_id_2, similarity_score, similarity_type)
SELECT 
    d1.document_id,
    d2.document_id,
    calculate_basic_similarity(d1.document_id, d2.document_id),
    'keyword'
FROM documents d1
CROSS JOIN documents d2
WHERE d1.document_id != d2.document_id
  AND d1.category = d2.category  -- Only calculate within same category
  AND NOT EXISTS (
    SELECT 1 FROM document_similarity ds 
    WHERE ds.document_id_1 = d1.document_id 
    AND ds.document_id_2 = d2.document_id
  )
LIMIT 10000; -- Batch processing required due to computational cost

-- Traditional approach limitations:
-- 1. No understanding of semantic meaning or context
-- 2. Poor performance with large document collections
-- 3. Manual maintenance of similarity relationships
-- 4. Limited multilingual and cross-domain search capabilities  
-- 5. No support for natural language queries or conversational search
-- 6. Inability to handle synonyms and conceptual relationships
-- 7. No integration with modern AI/ML embedding models
-- 8. Poor recommendation quality based on simple keyword overlap
-- 9. No support for multi-modal content (images, videos, audio)
-- 10. Scalability issues with growing content collections

MongoDB Vector Search provides sophisticated AI-powered semantic capabilities:

// MongoDB Vector Search - advanced AI-powered semantic search with comprehensive embedding management
const { MongoClient } = require('mongodb');
const { OpenAI } = require('openai');
const tf = require('@tensorflow/tfjs-node');

const client = new MongoClient('mongodb+srv://username:password@cluster.mongodb.net');
const db = client.db('advanced_ai_search_platform');

// Advanced AI-powered search and recommendation engine
class AdvancedVectorSearchEngine {
  constructor(db, aiConfig = {}) {
    this.db = db;
    this.collections = {
      documents: db.collection('documents'),
      embeddings: db.collection('document_embeddings'),
      userProfiles: db.collection('user_profiles'), 
      searchLogs: db.collection('search_logs'),
      recommendations: db.collection('recommendations'),
      modelMetadata: db.collection('model_metadata')
    };

    // AI model configuration
    this.aiConfig = {
      embeddingModel: aiConfig.embeddingModel || 'text-embedding-3-large',
      embeddingDimensions: aiConfig.embeddingDimensions || 3072,
      maxTokens: aiConfig.maxTokens || 8191,
      batchSize: aiConfig.batchSize || 50,
      similarityThreshold: aiConfig.similarityThreshold || 0.7,

      // Advanced AI configurations
      useMultimodalEmbeddings: aiConfig.useMultimodalEmbeddings || false,
      enableSemanticCaching: aiConfig.enableSemanticCaching ?? true,   // ?? keeps an explicit `false` from being overridden
      enableQueryExpansion: aiConfig.enableQueryExpansion ?? true,
      enablePersonalization: aiConfig.enablePersonalization ?? true,

      // Model providers
      openaiApiKey: aiConfig.openaiApiKey || process.env.OPENAI_API_KEY,
      huggingFaceApiKey: aiConfig.huggingFaceApiKey || process.env.HUGGINGFACE_API_KEY,
      cohereApiKey: aiConfig.cohereApiKey || process.env.COHERE_API_KEY
    };

    // Initialize AI clients
    this.openai = new OpenAI({ apiKey: this.aiConfig.openaiApiKey });
    this.embeddingCache = new Map();
    this.searchCache = new Map();

    this.setupVectorSearchIndexes();
    this.initializeEmbeddingModels();
  }

  async setupVectorSearchIndexes() {
    console.log('Setting up MongoDB Vector Search indexes...');

    try {
      // Primary document embedding index
      await this.collections.documents.createSearchIndex({
        name: 'document_vector_index',
        type: 'vectorSearch', // vector indexes must be created with the vectorSearch type
        definition: {
          fields: [
            {
              type: 'vector',
              path: 'embedding',
              numDimensions: this.aiConfig.embeddingDimensions,
              similarity: 'cosine'
            },
            {
              type: 'filter',
              path: 'category'
            },
            {
              type: 'filter', 
              path: 'language'
            },
            {
              type: 'filter',
              path: 'contentType'
            },
            {
              type: 'filter',
              path: 'accessLevel'
            },
            {
              type: 'filter',
              path: 'createdAt'
            }
          ]
        }
      });

      // Multi-modal content index for images and multimedia
      await this.collections.documents.createSearchIndex({
        name: 'multimodal_vector_index',
        type: 'vectorSearch',
        definition: {
          fields: [
            {
              type: 'vector',
              path: 'multimodalEmbedding',
              numDimensions: 1536, // Different dimension for multi-modal models
              similarity: 'cosine'
            },
            {
              type: 'filter',
              path: 'mediaType'
            }
          ]
        }
      });

      // User profile vector index for personalization
      await this.collections.userProfiles.createSearchIndex({
        name: 'user_profile_vector_index',
        type: 'vectorSearch',
        definition: {
          fields: [
            {
              type: 'vector',
              path: 'interestEmbedding',
              numDimensions: this.aiConfig.embeddingDimensions,
              similarity: 'cosine'
            }
          ]
        }
      });

      console.log('Vector Search indexes created successfully');
    } catch (error) {
      console.error('Error setting up Vector Search indexes:', error);
      throw error;
    }
  }

  async generateDocumentEmbedding(document, options = {}) {
    console.log(`Generating embeddings for document: ${document.title}`);

    try {
      // Prepare content for embedding generation
      const embeddingContent = this.prepareContentForEmbedding(document, options);

      // Check cache first
      const cacheKey = this.generateCacheKey(embeddingContent);
      if (this.embeddingCache.has(cacheKey) && this.aiConfig.enableSemanticCaching) {
        console.log('Using cached embedding');
        return this.embeddingCache.get(cacheKey);
      }

      // Generate embedding using OpenAI
      const embeddingResponse = await this.openai.embeddings.create({
        model: this.aiConfig.embeddingModel,
        input: embeddingContent,
        dimensions: this.aiConfig.embeddingDimensions
      });

      const embedding = embeddingResponse.data[0].embedding;

      // Cache the embedding
      if (this.aiConfig.enableSemanticCaching) {
        this.embeddingCache.set(cacheKey, embedding);
      }

      // Store embedding with comprehensive metadata
      const embeddingDocument = {
        documentId: document._id,
        embedding: embedding,

        // Embedding metadata
        model: this.aiConfig.embeddingModel,
        dimensions: this.aiConfig.embeddingDimensions,
        contentLength: embeddingContent.length,
        tokensUsed: embeddingResponse.usage?.total_tokens || 0,

        // Content characteristics
        contentType: document.contentType || 'text',
        language: document.language || 'en',
        category: document.category,

        // Processing metadata
        generatedAt: new Date(),
        modelVersion: embeddingResponse.model,
        processingTime: Date.now() - (options.startTime || Date.now()),

        // Quality metrics
        contentQuality: this.assessContentQuality(document),
        embeddingNorm: this.calculateVectorNorm(embedding),

        // Optimization metadata
        batchProcessed: options.batchProcessed || false,
        cacheHit: false
      };

      // Store in embedding collection for tracking
      await this.collections.embeddings.insertOne(embeddingDocument);

      // Update main document with embedding
      await this.collections.documents.updateOne(
        { _id: document._id },
        {
          $set: {
            embedding: embedding,
            embeddingMetadata: {
              model: this.aiConfig.embeddingModel,
              generatedAt: new Date(),
              dimensions: this.aiConfig.embeddingDimensions,
              contentHash: this.generateContentHash(embeddingContent)
            }
          }
        }
      );

      return embedding;

    } catch (error) {
      console.error(`Error generating embedding for document ${document._id}:`, error);
      throw error;
    }
  }

  prepareContentForEmbedding(document, options = {}) {
    // Intelligent content preparation for optimal embedding generation
    let content = '';

    // Title with higher weight
    if (document.title) {
      content += `Title: ${document.title}\n\n`;
    }

    // Summary if available
    if (document.summary) {
      content += `Summary: ${document.summary}\n\n`;
    }

    // Main content with intelligent truncation
    if (document.content) {
      // Use the token limit as a conservative character budget (tokens average ~4 characters),
      // reserving roughly 30% of the space for title and metadata
      const maxContentLength = this.aiConfig.maxTokens * 0.7;
      let mainContent = document.content;

      if (mainContent.length > maxContentLength) {
        // Intelligent content truncation - keep beginning and key sections
        const beginningChunk = mainContent.substring(0, maxContentLength * 0.6);
        const endingChunk = mainContent.substring(mainContent.length - maxContentLength * 0.2);

        mainContent = beginningChunk + '\n...\n' + endingChunk;
      }

      content += `Content: ${mainContent}\n\n`;
    }

    // Metadata context
    if (document.category) {
      content += `Category: ${document.category}\n`;
    }

    if (document.tags && document.tags.length > 0) {
      content += `Tags: ${document.tags.join(', ')}\n`;
    }

    if (document.keywords) {
      content += `Keywords: ${document.keywords}\n`;
    }

    return content.trim();
  }

  async performSemanticSearch(query, options = {}) {
    console.log(`Performing semantic search for: "${query}"`);
    const startTime = Date.now();

    try {
      // Generate query embedding
      const queryEmbedding = await this.generateQueryEmbedding(query, options);

      // Build comprehensive search pipeline
      const searchPipeline = await this.buildSemanticSearchPipeline(queryEmbedding, query, options);

      // Execute vector search with MongoDB Atlas Vector Search
      const searchResults = await this.collections.documents.aggregate(searchPipeline).toArray();

      // Post-process and enhance results
      const enhancedResults = await this.enhanceSearchResults(searchResults, query, options);

      // Log search for analytics and improvement
      await this.logSearchQuery(query, queryEmbedding, enhancedResults, options);

      // Generate personalized recommendations if user context available
      let personalizedRecommendations = [];
      if (options.userId && this.aiConfig.enablePersonalization) {
        personalizedRecommendations = await this.generatePersonalizedRecommendations(
          options.userId, 
          enhancedResults.slice(0, 5),
          options
        );
      }

      return {
        query: query,
        results: enhancedResults,
        personalizedRecommendations: personalizedRecommendations,

        // Search metadata
        metadata: {
          totalResults: enhancedResults.length,
          searchTime: Date.now() - startTime,
          queryEmbeddingDimensions: queryEmbedding.length,
          embeddingModel: this.aiConfig.embeddingModel,
          similarityThreshold: options.similarityThreshold || this.aiConfig.similarityThreshold,
          filtersApplied: this.extractAppliedFilters(options),
          personalizationEnabled: this.aiConfig.enablePersonalization && !!options.userId
        },

        // Query insights
        insights: {
          queryComplexity: this.assessQueryComplexity(query),
          semanticCategories: this.identifySemanticCategories(enhancedResults),
          resultDiversity: this.calculateResultDiversity(enhancedResults),
          averageSimilarity: this.calculateAverageSimilarity(enhancedResults)
        },

        // Related queries and suggestions
        relatedQueries: await this.generateRelatedQueries(query, enhancedResults),
        searchSuggestions: await this.generateSearchSuggestions(query, options)
      };

    } catch (error) {
      console.error(`Semantic search error for query "${query}":`, error);
      throw error;
    }
  }

  async buildSemanticSearchPipeline(queryEmbedding, query, options = {}) {
    const pipeline = [];

    // Stage 1: Vector similarity search
    pipeline.push({
      $vectorSearch: {
        index: options.multimodal ? 'multimodal_vector_index' : 'document_vector_index',
        path: options.multimodal ? 'multimodalEmbedding' : 'embedding',
        queryVector: queryEmbedding,
        numCandidates: options.numCandidates || 1000,
        limit: options.vectorSearchLimit || 100,

        // Apply filters for performance and relevance
        filter: this.buildSearchFilters(options)
      }
    });

    // Stage 2: Add similarity score and metadata
    pipeline.push({
      $addFields: {
        vectorSimilarityScore: { $meta: 'vectorSearchScore' },
        searchMetadata: {
          searchTime: new Date(),
          searchQuery: query,
          searchModel: this.aiConfig.embeddingModel
        }
      }
    });

    // Stage 3: Hybrid scoring combining vector similarity with text relevance
    if (options.enableHybridSearch !== false) {
      pipeline.push({
        $addFields: {
          // Text match scoring for hybrid approach
          textMatchScore: {
            $cond: {
              if: { $regexMatch: { input: '$title', regex: query, options: 'i' } },
              then: 0.3,
              else: {
                $cond: {
                  if: { $regexMatch: { input: '$content', regex: query, options: 'i' } },
                  then: 0.2,
                  else: 0
                }
              }
            }
          },

          // Recency scoring
          recencyScore: {
            $switch: {
              branches: [
                {
                  case: { $gte: ['$createdAt', new Date(Date.now() - 30 * 24 * 60 * 60 * 1000)] },
                  then: 0.1
                },
                {
                  case: { $gte: ['$createdAt', new Date(Date.now() - 90 * 24 * 60 * 60 * 1000)] },
                  then: 0.05
                }
              ],
              default: 0
            }
          },

          // Popularity scoring based on user interactions
          popularityScore: {
            $multiply: [
              { $log10: { $add: [{ $ifNull: ['$metrics.viewCount', 0] }, 1] } },
              0.05
            ]
          },

          // Content quality scoring
          qualityScore: {
            $multiply: [
              { $divide: [{ $strLenCP: { $ifNull: ['$content', ''] } }, 10000] },
              0.02
            ]
          }
        }
      });

      // Combined hybrid score
      pipeline.push({
        $addFields: {
          hybridScore: {
            $add: [
              { $multiply: ['$vectorSimilarityScore', 0.7] }, // Vector similarity weight
              '$textMatchScore',
              '$recencyScore', 
              '$popularityScore',
              '$qualityScore'
            ]
          }
        }
      });
    }

    // Stage 4: Apply similarity threshold filtering
    pipeline.push({
      $match: {
        vectorSimilarityScore: { 
          $gte: options.similarityThreshold || this.aiConfig.similarityThreshold 
        }
      }
    });

    // Stage 5: Lookup related collections for rich context
    pipeline.push({
      $lookup: {
        from: 'users',
        localField: 'createdBy',
        foreignField: '_id',
        as: 'authorInfo',
        pipeline: [
          { $project: { name: 1, avatar: 1, expertise: 1, reputation: 1 } }
        ]
      }
    });

    // Stage 6: Add computed fields for result enhancement
    pipeline.push({
      $addFields: {
        // Content preview generation
        contentPreview: {
          $cond: {
            if: { $gt: [{ $strLenCP: { $ifNull: ['$content', ''] } }, 300] },
            then: { $concat: [{ $substr: ['$content', 0, 300] }, '...'] },
            else: '$content'
          }
        },

        // Relevance category
        relevanceCategory: {
          $switch: {
            branches: [
              { case: { $gte: ['$vectorSimilarityScore', 0.9] }, then: 'Highly Relevant' },
              { case: { $gte: ['$vectorSimilarityScore', 0.8] }, then: 'Very Relevant' },
              { case: { $gte: ['$vectorSimilarityScore', 0.7] }, then: 'Relevant' },
              { case: { $gte: ['$vectorSimilarityScore', 0.6] }, then: 'Moderately Relevant' }
            ],
            default: 'Potentially Relevant'
          }
        },

        // Author information
        authorName: { $arrayElemAt: ['$authorInfo.name', 0] },
        authorExpertise: { $arrayElemAt: ['$authorInfo.expertise', 0] },

        // Formatted metadata
        formattedCreatedAt: {
          $dateToString: {
            format: '%Y-%m-%d',
            date: '$createdAt'
          }
        }
      }
    });

    // Stage 7: Final projection for clean output
    pipeline.push({
      $project: {
        _id: 1,
        title: 1,
        contentPreview: 1,
        category: 1,
        tags: 1,
        language: 1,
        contentType: 1,
        createdAt: 1,
        formattedCreatedAt: 1,

        // Scoring information
        vectorSimilarityScore: { $round: ['$vectorSimilarityScore', 4] },
        hybridScore: { $round: [{ $ifNull: ['$hybridScore', '$vectorSimilarityScore'] }, 4] },
        relevanceCategory: 1,

        // Author information
        authorName: 1,
        authorExpertise: 1,

        // Access and metadata
        accessLevel: 1,
        downloadUrl: { $concat: ['/api/documents/', { $toString: '$_id' }] },

        // Analytics metadata
        metrics: {
          viewCount: { $ifNull: ['$metrics.viewCount', 0] },
          likeCount: { $ifNull: ['$metrics.likeCount', 0] },
          shareCount: { $ifNull: ['$metrics.shareCount', 0] }
        },

        searchMetadata: 1
      }
    });

    // Stage 8: Sort by hybrid score or vector similarity
    const sortField = options.enableHybridSearch !== false ? 'hybridScore' : 'vectorSimilarityScore';
    pipeline.push({ $sort: { [sortField]: -1 } });

    // Stage 9: Apply final limit
    pipeline.push({ $limit: options.limit || 20 });

    return pipeline;
  }

  buildSearchFilters(options) {
    const filters = {};

    // Category filtering
    if (options.category) {
      filters.category = { $eq: options.category };
    }

    // Language filtering
    if (options.language) {
      filters.language = { $eq: options.language };
    }

    // Content type filtering
    if (options.contentType) {
      filters.contentType = { $eq: options.contentType };
    }

    // Access level filtering
    if (options.accessLevel) {
      filters.accessLevel = { $eq: options.accessLevel };
    }

    // Date range filtering
    if (options.dateFrom || options.dateTo) {
      filters.createdAt = {};
      if (options.dateFrom) filters.createdAt.$gte = new Date(options.dateFrom);
      if (options.dateTo) filters.createdAt.$lte = new Date(options.dateTo);
    }

    // Author filtering
    if (options.authorId) {
      filters.createdBy = { $eq: options.authorId };
    }

    // Tags filtering
    if (options.tags && options.tags.length > 0) {
      filters.tags = { $in: options.tags };
    }

    return filters;
  }

  async generateQueryEmbedding(query, options = {}) {
    console.log(`Generating query embedding for: "${query}"`);

    try {
      // Enhance query with expansion if enabled
      let enhancedQuery = query;

      if (this.aiConfig.enableQueryExpansion && options.expandQuery !== false) {
        enhancedQuery = await this.expandQuery(query, options);
      }

      // Generate embedding
      const embeddingResponse = await this.openai.embeddings.create({
        model: this.aiConfig.embeddingModel,
        input: enhancedQuery,
        dimensions: this.aiConfig.embeddingDimensions
      });

      return embeddingResponse.data[0].embedding;

    } catch (error) {
      console.error(`Error generating query embedding for "${query}":`, error);
      throw error;
    }
  }

  async expandQuery(query, options = {}) {
    console.log(`Expanding query: "${query}"`);

    try {
      // Use GPT to expand the query with related terms and concepts
      const expansionPrompt = `
        Given the search query: "${query}"

        Generate an expanded version that includes:
        1. Synonyms and related terms
        2. Alternative phrasings
        3. Conceptually related topics
        4. Common variations and abbreviations

        Keep the expansion focused and relevant. Return only the expanded query text.

        Original query: ${query}
        Expanded query:`;

      const completion = await this.openai.chat.completions.create({
        model: 'gpt-4',
        messages: [{ role: 'user', content: expansionPrompt }],
        max_tokens: 150,
        temperature: 0.3
      });

      const expandedQuery = completion.choices[0].message.content.trim();
      console.log(`Query expanded to: "${expandedQuery}"`);

      return expandedQuery;

    } catch (error) {
      console.error(`Error expanding query "${query}":`, error);
      return query; // Fall back to original query
    }
  }

  async generatePersonalizedRecommendations(userId, searchResults, options = {}) {
    console.log(`Generating personalized recommendations for user: ${userId}`);

    try {
      // Get user profile and interaction history
      const userProfile = await this.collections.userProfiles.findOne({ userId: userId });
      if (!userProfile) {
        console.log('No user profile found, returning general recommendations');
        return this.generateGeneralRecommendations(searchResults, options);
      }

      // Generate personalized recommendations based on user interests
      const recommendationPipeline = [
        {
          $vectorSearch: {
            index: 'document_vector_index',
            path: 'embedding', 
            queryVector: userProfile.interestEmbedding,
            numCandidates: 500,
            limit: 50,
            filter: {
              accessLevel: { $in: ['public', 'user'] }
            }
          }
        },
        {
          // Exclude documents already shown in the current results; $vectorSearch filters
          // only cover fields indexed with type 'filter', so the _id exclusion is applied here
          $match: { _id: { $nin: searchResults.map(r => r._id) } }
        },
        {
          $addFields: {
            personalizedScore: { $meta: 'vectorSearchScore' },
            recommendationReason: 'Based on your interests and reading history'
          }
        },
        {
          $lookup: {
            from: 'users',
            localField: 'createdBy',
            foreignField: '_id',
            as: 'authorInfo',
            pipeline: [{ $project: { name: 1, expertise: 1 } }]
          }
        },
        {
          $project: {
            _id: 1,
            title: 1,
            category: 1,
            tags: 1,
            createdAt: 1,
            personalizedScore: { $round: ['$personalizedScore', 4] },
            recommendationReason: 1,
            authorName: { $arrayElemAt: ['$authorInfo.name', 0] },
            downloadUrl: { $concat: ['/api/documents/', { $toString: '$_id' }] }
          }
        },
        { $sort: { personalizedScore: -1 } },
        { $limit: options.recommendationLimit || 10 }
      ];

      const recommendations = await this.collections.documents.aggregate(recommendationPipeline).toArray();

      return recommendations;

    } catch (error) {
      console.error(`Error generating personalized recommendations for user ${userId}:`, error);
      return [];
    }
  }

  async enhanceSearchResults(results, query, options = {}) {
    console.log(`Enhancing ${results.length} search results`);

    try {
      // Add result enhancements
      const enhancedResults = await Promise.all(results.map(async (result, index) => {
        // Calculate additional metadata
        const enhancedResult = {
          ...result,

          // Result ranking
          rank: index + 1,

          // Enhanced content preview with query highlighting
          highlightedPreview: this.highlightQueryInText(result.contentPreview || '', query),

          // Semantic category classification
          semanticCategory: await this.classifyContentSemantics(result),

          // Reading time estimation
          estimatedReadingTime: this.estimateReadingTime(result.content || result.contentPreview || ''),

          // Related concepts extraction
          extractedConcepts: this.extractKeyConcepts(result.title + ' ' + (result.contentPreview || '')),

          // Confidence scoring
          confidenceScore: this.calculateConfidenceScore(result),

          // Access recommendations
          accessRecommendation: this.generateAccessRecommendation(result, options)
        };

        return enhancedResult;
      }));

      return enhancedResults;

    } catch (error) {
      console.error('Error enhancing search results:', error);
      return results; // Return original results if enhancement fails
    }
  }

  highlightQueryInText(text, query) {
    if (!text || !query) return text;

    // Simple highlighting - in production, use more sophisticated highlighting
    const queryWords = query.toLowerCase().split(/\s+/);
    let highlightedText = text;

    queryWords.forEach(word => {
      if (word.length > 2) { // Only highlight words longer than 2 characters
        const escaped = word.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); // escape regex metacharacters
        const regex = new RegExp(`\\b${escaped}\\b`, 'gi');
        highlightedText = highlightedText.replace(regex, match => `**${match}**`); // preserve original casing
      }
    });

    return highlightedText;
  }

  estimateReadingTime(text) {
    const wordsPerMinute = 250; // Average reading speed
    const wordCount = text.split(/\s+/).length;
    const readingTime = Math.ceil(wordCount / wordsPerMinute);

    return {
      minutes: readingTime,
      wordCount: wordCount,
      formattedTime: readingTime === 1 ? '1 minute' : `${readingTime} minutes`
    };
  }

  extractKeyConcepts(text) {
    // Simple concept extraction - in production, use NLP libraries
    const concepts = [];
    const words = text.toLowerCase().split(/\s+/);

    // Technical terms and concepts (simplified approach)
    const technicalTerms = [
      'artificial intelligence', 'machine learning', 'deep learning', 'neural networks',
      'data science', 'analytics', 'algorithm', 'optimization', 'automation',
      'cloud computing', 'blockchain', 'cybersecurity', 'api', 'database'
    ];

    technicalTerms.forEach(term => {
      if (text.toLowerCase().includes(term)) {
        concepts.push(term);
      }
    });

    return concepts.slice(0, 5); // Return top 5 concepts
  }

  calculateConfidenceScore(result) {
    // Multi-factor confidence calculation
    let confidence = result.vectorSimilarityScore * 0.6; // Base similarity

    // Content length factor
    const contentLength = (result.content || result.contentPreview || '').length;
    if (contentLength > 1000) confidence += 0.1;
    if (contentLength > 3000) confidence += 0.1;

    // Metadata completeness factor
    if (result.category) confidence += 0.05;
    if (result.tags && result.tags.length > 0) confidence += 0.05;
    if (result.authorName) confidence += 0.05;

    // Popularity factor
    if (result.metrics.viewCount > 100) confidence += 0.05;
    if (result.metrics.likeCount > 10) confidence += 0.05;

    return Math.min(confidence, 1.0); // Cap at 1.0
  }

  generateAccessRecommendation(result, options) {
    // Generate recommendations for how to use/access the content
    const recommendations = [];

    if (result.vectorSimilarityScore > 0.9) {
      recommendations.push('Highly recommended - very relevant to your search');
    }

    if (result.metrics.viewCount > 1000) {
      recommendations.push('Popular content - frequently viewed by users');
    }

    if (result.estimatedReadingTime && result.estimatedReadingTime.minutes <= 5) {
      recommendations.push('Quick read - can be completed in a few minutes');
    }

    if (result.category === 'tutorial') {
      recommendations.push('Step-by-step guidance available');
    }

    return recommendations;
  }

  async logSearchQuery(query, queryEmbedding, results, options) {
    try {
      const searchLog = {
        query: query,
        queryEmbedding: queryEmbedding,
        userId: options.userId || null,
        sessionId: options.sessionId || null,

        // Search configuration
        searchConfig: {
          model: this.aiConfig.embeddingModel,
          similarityThreshold: options.similarityThreshold || this.aiConfig.similarityThreshold,
          limit: options.limit || 20,
          enableHybridSearch: options.enableHybridSearch !== false,
          enablePersonalization: this.aiConfig.enablePersonalization && !!options.userId
        },

        // Results metadata
        resultsMetadata: {
          totalResults: results.length,
          averageSimilarity: results.length > 0 ? 
            results.reduce((sum, r) => sum + r.vectorSimilarityScore, 0) / results.length : 0,
          topCategories: this.extractTopCategories(results),
          searchTime: Date.now() - (options.startTime || Date.now())
        },

        // User context
        userContext: {
          ipAddress: options.ipAddress,
          userAgent: options.userAgent,
          referrer: options.referrer
        },

        timestamp: new Date()
      };

      await this.collections.searchLogs.insertOne(searchLog);

    } catch (error) {
      console.error('Error logging search query:', error);
      // Don't throw - logging shouldn't break search
    }
  }

  extractTopCategories(results) {
    const categoryCount = {};
    results.forEach(result => {
      if (result.category) {
        categoryCount[result.category] = (categoryCount[result.category] || 0) + 1;
      }
    });

    return Object.entries(categoryCount)
      .sort(([,a], [,b]) => b - a)
      .slice(0, 5)
      .map(([category, count]) => ({ category, count }));
  }

  // Additional utility methods for comprehensive vector search functionality

  generateCacheKey(content) {
    const crypto = require('crypto');
    return crypto.createHash('sha256').update(content).digest('hex');
  }

  generateContentHash(content) {
    const crypto = require('crypto');
    return crypto.createHash('md5').update(content).digest('hex');
  }

  calculateVectorNorm(vector) {
    return Math.sqrt(vector.reduce((sum, val) => sum + val * val, 0));
  }

  assessContentQuality(document) {
    let qualityScore = 0;

    // Length factor
    const contentLength = (document.content || '').length;
    if (contentLength > 1000) qualityScore += 0.3;
    if (contentLength > 5000) qualityScore += 0.2;

    // Metadata completeness
    if (document.title) qualityScore += 0.1;
    if (document.summary) qualityScore += 0.1;
    if (document.tags && document.tags.length > 0) qualityScore += 0.1;
    if (document.category) qualityScore += 0.1;

    // Structure indicators
    if (document.content && document.content.includes('\n\n')) qualityScore += 0.1; // Paragraphs

    return Math.min(qualityScore, 1.0);
  }
}

// Benefits of MongoDB Vector Search for AI Applications:
// - Native vector similarity search with cosine similarity
// - Seamless integration with embedding models (OpenAI, Hugging Face, etc.)
// - High-performance vector indexing and retrieval at scale
// - Advanced filtering and hybrid search capabilities
// - Built-in support for multi-modal content (text, images, audio)
// - Personalization through user profile vector matching
// - Real-time search with low-latency vector operations
// - Comprehensive search analytics and query optimization
// - Integration with MongoDB's document model for rich metadata
// - Production-ready scalability with sharding and replication

module.exports = {
  AdvancedVectorSearchEngine
};

Understanding MongoDB Vector Search Architecture

Advanced AI Integration Patterns and Semantic Search Optimization

Implement sophisticated vector search strategies for production AI applications:

// Production-ready MongoDB Vector Search with advanced AI integration and optimization patterns
class ProductionVectorSearchPlatform extends AdvancedVectorSearchEngine {
  constructor(db, productionConfig) {
    super(db, productionConfig);

    this.productionConfig = {
      ...productionConfig,
      multiModelSupport: true,
      realtimeIndexing: true,
      distributedEmbedding: true,
      autoOptimization: true,
      advancedAnalytics: true,
      contentModeration: true
    };

    this.setupProductionOptimizations();
    this.initializeAdvancedFeatures();
    this.setupMonitoringAndAlerts();
  }

  async implementAdvancedSemanticCapabilities() {
    console.log('Implementing advanced semantic capabilities...');

    // Multi-model embedding strategy
    const embeddingStrategy = {
      textEmbeddings: {
        primary: 'text-embedding-3-large',
        fallback: 'text-embedding-ada-002',
        specialized: {
          code: 'code-search-babbage-code-001',
          legal: 'text-similarity-curie-001',
          medical: 'text-search-curie-doc-001'
        }
      },

      multimodalEmbeddings: {
        imageText: 'clip-vit-base-patch32',
        audioText: 'wav2vec2-base-960h', 
        videoText: 'video-text-retrieval'
      },

      domainSpecific: {
        scientific: 'scibert-scivocab-uncased',
        financial: 'finbert-base-uncased',
        biomedical: 'biobert-base-cased'
      }
    };

    return await this.deployEmbeddingStrategy(embeddingStrategy);
  }

  async setupRealtimeSemanticIndexing() {
    console.log('Setting up real-time semantic indexing...');

    const indexingPipeline = {
      // Change stream monitoring for real-time updates
      changeStreams: [
        {
          collection: 'documents',
          pipeline: [
            { $match: { 'operationType': { $in: ['insert', 'update'] } } }
          ],
          handler: this.processDocumentChange.bind(this)
        }
      ],

      // Batch processing for bulk operations
      batchProcessor: {
        batchSize: 100,
        maxWaitTime: 30000, // 30 seconds
        retryLogic: true,
        errorHandling: 'resilient'
      },

      // Quality assurance pipeline
      qualityChecks: [
        'contentValidation',
        'languageDetection', 
        'duplicateDetection',
        'contentModeration'
      ]
    };

    return await this.deployIndexingPipeline(indexingPipeline);
  }

  async implementAdvancedRecommendationEngine() {
    console.log('Implementing advanced recommendation engine...');

    const recommendationStrategies = {
      // Collaborative filtering with vector embeddings
      collaborative: {
        userSimilarity: 'cosine',
        itemSimilarity: 'cosine',
        hybridWeighting: {
          contentBased: 0.6,
          collaborative: 0.4
        }
      },

      // Content-based recommendations
      contentBased: {
        semanticSimilarity: true,
        categoryWeighting: true,
        temporalDecay: true,
        diversityOptimization: true
      },

      // Deep learning recommendations
      deepLearning: {
        neuralCollaborativeFiltering: true,
        sequentialRecommendations: true,
        multiTaskLearning: true
      }
    };

    return await this.deployRecommendationStrategies(recommendationStrategies);
  }

  async optimizeVectorSearchPerformance() {
    console.log('Optimizing vector search performance...');

    const optimizations = {
      // Index optimization strategies
      indexOptimization: {
        approximateNearestNeighbor: true,
        hierarchicalNavigableSmallWorld: true,
        productQuantization: true,
        localitySensitiveHashing: true
      },

      // Query optimization
      queryOptimization: {
        queryExpansion: true,
        queryRewriting: true,
        candidatePrefiltering: true,
        adaptiveSimilarityThresholds: true
      },

      // Caching strategies
      cachingStrategy: {
        embeddingCache: '10GB',
        resultCache: '5GB',
        queryCache: '2GB',
        indexCache: '20GB'
      }
    };

    return await this.implementOptimizations(optimizations);
  }
}
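
The platform class above wires these capabilities together in its constructor. A minimal bootstrap sketch follows, assuming a connected MongoDB database handle and that the setup and deploy helpers referenced by the class are implemented in the full engine; the database name and configuration values are illustrative:

// Hypothetical bootstrap for the production vector search platform
const { MongoClient } = require('mongodb');

async function bootstrapVectorSearchPlatform() {
  const client = new MongoClient(process.env.MONGODB_URI || 'mongodb://localhost:27017');
  await client.connect();
  const db = client.db('ai_search_platform'); // illustrative database name

  // Configuration fields mirror the options the engine classes reference
  const platform = new ProductionVectorSearchPlatform(db, {
    embeddingModel: 'text-embedding-3-large',
    similarityThreshold: 0.75,
    enablePersonalization: true
  });

  // Roll out the advanced capabilities step by step
  await platform.implementAdvancedSemanticCapabilities();
  await platform.setupRealtimeSemanticIndexing();
  await platform.implementAdvancedRecommendationEngine();
  await platform.optimizeVectorSearchPerformance();

  return platform;
}

bootstrapVectorSearchPlatform().catch(console.error);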

SQL-Style Vector Search Operations with QueryLeaf

QueryLeaf provides familiar SQL syntax for MongoDB Vector Search operations and AI-powered semantic queries:

-- QueryLeaf advanced vector search and AI operations with SQL-familiar syntax

-- Create vector search indexes for different content types and embedding models
CREATE VECTOR INDEX document_semantic_index 
ON documents (
  embedding VECTOR(3072) USING COSINE_SIMILARITY,
  category,
  language,
  contentType,
  accessLevel,
  createdAt
)
WITH (
  model = 'text-embedding-3-large',
  auto_update = true,
  optimization_level = 'performance',

  -- Advanced index configuration
  approximate_nn = true,
  candidate_multiplier = 10,
  ef_construction = 200,
  m_connections = 16
);

CREATE VECTOR INDEX multimodal_content_index
ON documents (
  multimodalEmbedding VECTOR(512) USING COSINE_SIMILARITY,
  mediaType,
  contentFormat
)
WITH (
  model = 'clip-vit-base-patch32',
  multimodal = true
);

-- Advanced semantic search with vector similarity and hybrid scoring
WITH semantic_search AS (
  SELECT 
    d.*,
    -- Vector similarity search using embeddings
    VECTOR_SEARCH(
      d.embedding,
      GENERATE_EMBEDDING(
        'Find research papers about machine learning applications in healthcare diagnostics',
        'text-embedding-3-large'
      ),
      'COSINE'
    ) as vector_similarity,

    -- Hybrid scoring combining vector and traditional text search
    (
      VECTOR_SEARCH(
        d.embedding,
        GENERATE_EMBEDDING(
          'machine learning healthcare diagnostics medical AI',
          'text-embedding-3-large'
        ),
        'COSINE'
      ) * 0.7 +

      MATCH_SCORE(d.title || ' ' || d.content, 'machine learning healthcare diagnostics') * 0.2 +

      -- Recency boost
      CASE 
        WHEN d.createdAt > CURRENT_DATE - INTERVAL '30 days' THEN 0.1
        WHEN d.createdAt > CURRENT_DATE - INTERVAL '90 days' THEN 0.05
        ELSE 0
      END +

      -- Quality and popularity boost
      (LOG(d.metrics.citationCount + 1) * 0.02) +
      (d.metrics.averageRating / 5.0 * 0.03)

    ) as hybrid_score

  FROM documents d
  WHERE 
    -- Vector similarity threshold
    VECTOR_SEARCH(
      d.embedding,
      GENERATE_EMBEDDING(
        'machine learning healthcare diagnostics',
        'text-embedding-3-large'
      ),
      'COSINE'
    ) > 0.75

    -- Additional filters for precision
    AND d.category IN ('research', 'academic', 'medical')
    AND d.language = 'en'
    AND d.accessLevel IN ('public', 'academic')
    AND d.contentType = 'research_paper'
),

-- Enhanced search with semantic category classification and concept extraction
enriched_results AS (
  SELECT 
    ss.*,

    -- Semantic category classification using AI
    AI_CLASSIFY_CATEGORY(
      ss.title || ' ' || SUBSTRING(ss.content, 1, 1000),
      ['machine_learning', 'healthcare', 'diagnostics', 'medical_imaging', 'clinical_ai']
    ) as semantic_categories,

    -- Key concept extraction
    AI_EXTRACT_CONCEPTS(
      ss.title || ' ' || ss.abstract,
      10 -- top 10 concepts
    ) as key_concepts,

    -- Content summary generation
    AI_SUMMARIZE(
      ss.content,
      max_length => 200,
      style => 'academic'
    ) as ai_summary,

    -- Reading difficulty assessment
    AI_ASSESS_DIFFICULTY(
      ss.content,
      domain => 'medical'
    ) as reading_difficulty,

    -- Related research identification
    FIND_SIMILAR_DOCUMENTS(
      ss.embedding,
      limit => 5,
      exclude_ids => ARRAY[ss.document_id],
      similarity_threshold => 0.8
    ) as related_research,

    -- Citation and reference analysis
    ANALYZE_CITATIONS(ss.content) as citation_analysis,

    -- Author expertise scoring
    u.expertise_score,
    u.h_index,
    u.research_domains,

    -- Impact metrics
    CALCULATE_IMPACT_SCORE(
      ss.metrics.citationCount,
      ss.metrics.downloadCount,
      ss.metrics.viewCount,
      ss.createdAt
    ) as impact_score

  FROM semantic_search ss
  JOIN users u ON ss.createdBy = u.user_id
  WHERE ss.vector_similarity > 0.7
),

-- Personalized recommendations based on user research interests
personalized_recommendations AS (
  SELECT 
    er.*,

    -- User interest alignment scoring
    VECTOR_SIMILARITY(
      er.embedding,
      (SELECT interest_embedding FROM user_profiles WHERE user_id = CURRENT_USER_ID()),
      'COSINE'
    ) as interest_alignment,

    -- Reading history similarity
    CALCULATE_READING_HISTORY_SIMILARITY(
      CURRENT_USER_ID(),
      er.document_id,
      window_days => 180
    ) as reading_history_similarity,

    -- Collaborative filtering score
    COLLABORATIVE_FILTERING_SCORE(
      CURRENT_USER_ID(),
      er.document_id,
      algorithm => 'neural_collaborative_filtering'
    ) as collaborative_score,

    -- Personalized relevance scoring
    (
      er.hybrid_score * 0.5 +
      interest_alignment * 0.3 +
      reading_history_similarity * 0.1 +
      collaborative_score * 0.1
    ) as personalized_relevance

  FROM enriched_results er
  WHERE interest_alignment > 0.6
),

-- Advanced analytics and search insights
search_analytics AS (
  SELECT 
    COUNT(*) as total_results,
    AVG(pr.vector_similarity) as avg_similarity,
    AVG(pr.hybrid_score) as avg_hybrid_score,
    AVG(pr.personalized_relevance) as avg_personalized_relevance,

    -- Category distribution analysis
    JSON_OBJECT_AGG(
      pr.category,
      COUNT(*)
    ) as category_distribution,

    -- Semantic category insights
    FLATTEN_ARRAY(
      ARRAY_AGG(pr.semantic_categories)
    ) as all_semantic_categories,

    -- Concept frequency analysis
    AI_ANALYZE_CONCEPT_TRENDS(
      ARRAY_AGG(pr.key_concepts),
      time_window => '30 days'
    ) as concept_trends,

    -- Research domain coverage
    CALCULATE_DOMAIN_COVERAGE(
      ARRAY_AGG(pr.research_domains)
    ) as domain_coverage,

    -- Quality distribution
    JSON_OBJECT(
      'high_impact', COUNT(*) FILTER (WHERE pr.impact_score > 80),
      'medium_impact', COUNT(*) FILTER (WHERE pr.impact_score BETWEEN 50 AND 80),
      'emerging', COUNT(*) FILTER (WHERE pr.impact_score BETWEEN 20 AND 50),
      'new_research', COUNT(*) FILTER (WHERE pr.impact_score < 20)
    ) as quality_distribution

  FROM personalized_recommendations pr
)

-- Final comprehensive search results with analytics and recommendations
SELECT 
  -- Document information
  pr.document_id,
  pr.title,
  pr.ai_summary,
  pr.category,
  pr.semantic_categories,
  pr.key_concepts,
  pr.reading_difficulty,
  pr.createdAt,

  -- Author information
  JSON_OBJECT(
    'name', u.name,
    'expertise_score', pr.expertise_score,
    'h_index', pr.h_index,
    'research_domains', pr.research_domains
  ) as author_info,

  -- Relevance scoring
  ROUND(pr.vector_similarity, 4) as semantic_similarity,
  ROUND(pr.hybrid_score, 4) as hybrid_relevance,
  ROUND(pr.personalized_relevance, 4) as personalized_score,
  ROUND(pr.interest_alignment, 4) as interest_match,

  -- Content characteristics
  pr.reading_difficulty,
  pr.impact_score,
  pr.citation_analysis,

  -- Related content
  pr.related_research,

  -- Access information
  CASE pr.accessLevel
    WHEN 'public' THEN 'Open Access'
    WHEN 'academic' THEN 'Academic Access Required'
    WHEN 'subscription' THEN 'Subscription Required'
    ELSE 'Restricted Access'
  END as access_type,

  -- Download and interaction URLs
  CONCAT('/api/documents/', pr.document_id, '/download') as download_url,
  CONCAT('/api/documents/', pr.document_id, '/cite') as citation_url,
  CONCAT('/api/documents/', pr.document_id, '/related') as related_url,

  -- Recommendation metadata
  JSON_OBJECT(
    'recommendation_reason', CASE 
      WHEN pr.interest_alignment > 0.9 THEN 'Highly aligned with your research interests'
      WHEN pr.collaborative_score > 0.8 THEN 'Recommended by researchers with similar interests'
      WHEN pr.reading_history_similarity > 0.7 THEN 'Similar to your recent reading patterns'
      ELSE 'Semantically relevant to your search'
    END,
    'confidence_level', CASE
      WHEN pr.personalized_relevance > 0.9 THEN 'Very High'
      WHEN pr.personalized_relevance > 0.8 THEN 'High'
      WHEN pr.personalized_relevance > 0.7 THEN 'Medium'
      ELSE 'Low'
    END
  ) as recommendation_metadata,

  -- Search analytics (same for all results)
  (SELECT ROW_TO_JSON(sa.*) FROM search_analytics sa) as search_insights

FROM personalized_recommendations pr
JOIN users u ON pr.createdBy = u.user_id
WHERE pr.personalized_relevance > 0.6
ORDER BY pr.personalized_relevance DESC
LIMIT 20;

-- Advanced vector operations for content discovery and analysis

-- Find conceptually similar documents across different languages
WITH multilingual_search AS (
  SELECT 
    d.document_id,
    d.title,
    d.language,
    d.category,

    -- Cross-language semantic similarity
    VECTOR_SEARCH(
      d.embedding,
      GENERATE_MULTILINGUAL_EMBEDDING(
        'intelligence artificielle apprentissage automatique', -- French query
        source_language => 'fr',
        target_embedding_language => 'en'
      ),
      'COSINE'
    ) as cross_language_similarity

  FROM documents d
  WHERE d.language IN ('en', 'fr', 'de', 'es', 'zh')
    AND VECTOR_SEARCH(
      d.embedding,
      GENERATE_MULTILINGUAL_EMBEDDING(
        'intelligence artificielle apprentissage automatique',
        source_language => 'fr',
        target_embedding_language => 'en'
      ),
      'COSINE'
    ) > 0.8
)
SELECT * FROM multilingual_search
ORDER BY cross_language_similarity DESC;

-- Content recommendation based on user behavior patterns
CREATE VIEW personalized_content_feed AS
WITH user_interaction_embedding AS (
  SELECT 
    ui.user_id,

    -- Generate user interest embedding from interaction history
    AGGREGATE_EMBEDDINGS(
      ARRAY_AGG(d.embedding),
      weights => ARRAY_AGG(
        CASE ui.interaction_type
          WHEN 'download' THEN 1.0
          WHEN 'like' THEN 0.8
          WHEN 'share' THEN 0.9
          WHEN 'view' THEN 0.3
          ELSE 0.1
        END * 
        -- Temporal decay
        GREATEST(0.1, 1.0 - EXTRACT(DAYS FROM CURRENT_DATE - ui.interaction_timestamp) / 365.0)
      ),
      aggregation_method => 'weighted_average'
    ) as interest_embedding

  FROM user_interactions ui
  JOIN documents d ON ui.document_id = d.document_id
  WHERE ui.interaction_timestamp > CURRENT_DATE - INTERVAL '1 year'
  GROUP BY ui.user_id
),
content_recommendations AS (
  SELECT 
    uie.user_id,
    d.document_id,
    d.title,
    d.category,
    d.createdAt,

    -- Interest-based similarity
    VECTOR_SIMILARITY(
      d.embedding,
      uie.interest_embedding,
      'COSINE'
    ) as interest_similarity,

    -- Trending factor
    CALCULATE_TRENDING_SCORE(
      d.document_id,
      time_window => '7 days'
    ) as trending_score,

    -- Novelty factor (encourages discovery)
    CALCULATE_NOVELTY_SCORE(
      uie.user_id,
      d.document_id,
      d.category
    ) as novelty_score,

    -- Combined recommendation score
    (
      VECTOR_SIMILARITY(d.embedding, uie.interest_embedding, 'COSINE') * 0.6 +
      CALCULATE_TRENDING_SCORE(d.document_id, time_window => '7 days') * 0.2 +
      CALCULATE_NOVELTY_SCORE(uie.user_id, d.document_id, d.category) * 0.2
    ) as recommendation_score

  FROM user_interaction_embedding uie
  CROSS JOIN documents d
  WHERE NOT EXISTS (
    -- Exclude already interacted content
    SELECT 1 FROM user_interactions ui2 
    WHERE ui2.user_id = uie.user_id 
    AND ui2.document_id = d.document_id
  )
  AND VECTOR_SIMILARITY(d.embedding, uie.interest_embedding, 'COSINE') > 0.7
)
SELECT 
  user_id,
  document_id,
  title,
  category,
  ROUND(interest_similarity, 4) as interest_match,
  ROUND(trending_score, 4) as trending_score,
  ROUND(novelty_score, 4) as discovery_potential,
  ROUND(recommendation_score, 4) as overall_score,

  -- Recommendation explanation
  CASE 
    WHEN interest_similarity > 0.9 THEN 'Perfect match for your interests'
    WHEN trending_score > 0.8 THEN 'Trending content in your area'
    WHEN novelty_score > 0.7 THEN 'New topic for you to explore'
    ELSE 'Related to your reading patterns'
  END as recommendation_reason

FROM content_recommendations
WHERE recommendation_score > 0.75
ORDER BY user_id, recommendation_score DESC;

-- Advanced analytics for content optimization and performance monitoring
WITH vector_search_analytics AS (
  SELECT 
    -- Search performance metrics
    sl.query,
    COUNT(*) as search_frequency,
    AVG(sl.resultsMetadata.totalResults) as avg_results_count,
    AVG(sl.resultsMetadata.averageSimilarity) as avg_similarity_score,
    AVG(sl.resultsMetadata.searchTime) as avg_search_time_ms,

    -- Query characteristics
    AI_ANALYZE_QUERY_INTENT(sl.query) as query_intent,
    AI_EXTRACT_ENTITIES(sl.query) as query_entities,
    LENGTH(sl.query) as query_length,

    -- Result quality metrics
    AVG(
      (SELECT COUNT(*) FROM JSON_ARRAY_ELEMENTS_TEXT(sl.resultsMetadata.topCategories))
    ) as category_diversity,

    -- User engagement with results
    COALESCE(
      (
        SELECT AVG(ui.rating) 
        FROM user_interactions ui
        WHERE ui.session_id = sl.sessionId
        AND ui.interaction_timestamp >= sl.timestamp
        AND ui.interaction_timestamp <= sl.timestamp + INTERVAL '1 hour'
      ), 0
    ) as result_satisfaction_score

  FROM search_logs sl
  WHERE sl.timestamp >= CURRENT_DATE - INTERVAL '30 days'
  GROUP BY sl.query, AI_ANALYZE_QUERY_INTENT(sl.query), AI_EXTRACT_ENTITIES(sl.query), LENGTH(sl.query)
),
content_performance_analysis AS (
  SELECT 
    d.document_id,
    d.title,
    d.category,
    d.createdAt,

    -- Discoverability metrics
    COUNT(sl.query) as times_found_in_search,
    AVG(sl.resultsMetadata.averageSimilarity) as avg_search_relevance,

    -- Engagement metrics
    COUNT(ui.interaction_id) as total_interactions,
    COUNT(ui.interaction_id) FILTER (WHERE ui.interaction_type = 'view') as view_count,
    COUNT(ui.interaction_id) FILTER (WHERE ui.interaction_type = 'download') as download_count,
    AVG(ui.rating) FILTER (WHERE ui.rating IS NOT NULL) as avg_rating,

    -- Content optimization recommendations
    CASE 
      WHEN COUNT(sl.query) < 5 THEN 'Low discoverability - consider SEO optimization'
      WHEN AVG(sl.resultsMetadata.averageSimilarity) < 0.7 THEN 'Low relevance - review content structure'
      WHEN COUNT(ui.interaction_id) FILTER (WHERE ui.interaction_type = 'download') / 
           NULLIF(COUNT(ui.interaction_id) FILTER (WHERE ui.interaction_type = 'view'), 0) < 0.1 
        THEN 'Low conversion - improve content value proposition'
      ELSE 'Performance within normal parameters'
    END as optimization_recommendation

  FROM documents d
  LEFT JOIN search_logs sl ON d.document_id = ANY(
    SELECT JSON_ARRAY_ELEMENTS_TEXT(sl.resultsMetadata.resultIds)::UUID
  )
  LEFT JOIN user_interactions ui ON d.document_id = ui.document_id
  WHERE d.createdAt >= CURRENT_DATE - INTERVAL '90 days'
  GROUP BY d.document_id, d.title, d.category, d.createdAt
)
SELECT 
  -- Search analytics summary
  vsa.query,
  vsa.search_frequency,
  vsa.query_intent,
  vsa.query_entities,
  ROUND(vsa.avg_similarity_score, 3) as avg_relevance,
  ROUND(vsa.avg_search_time_ms, 1) as avg_response_time_ms,
  ROUND(vsa.result_satisfaction_score, 2) as user_satisfaction,

  -- Content performance insights
  cpa.title as top_performing_content,
  cpa.times_found_in_search,
  cpa.total_interactions,
  cpa.optimization_recommendation,

  -- Improvement recommendations
  CASE 
    WHEN vsa.avg_search_time_ms > 1000 THEN 'Consider index optimization'
    WHEN vsa.avg_similarity_score < 0.7 THEN 'Review embedding model performance'
    WHEN vsa.result_satisfaction_score < 3.0 THEN 'Improve result quality and relevance'
    ELSE 'Search performance is optimal'
  END as search_optimization_recommendation

FROM vector_search_analytics vsa
LEFT JOIN content_performance_analysis cpa ON true
WHERE vsa.search_frequency > 10  -- Focus on frequently searched queries
ORDER BY vsa.search_frequency DESC, vsa.result_satisfaction_score DESC
LIMIT 50;

-- QueryLeaf provides comprehensive vector search capabilities:
-- 1. Native vector similarity search with advanced embedding models
-- 2. Hybrid scoring combining semantic and traditional text search
-- 3. Personalized recommendations based on user interest embeddings  
-- 4. Multi-language semantic search with cross-language understanding
-- 5. Real-time content recommendations and discovery systems
-- 6. Advanced analytics for search optimization and content performance
-- 7. AI-powered content classification and concept extraction
-- 8. Production-ready vector indexing with performance optimization
-- 9. Comprehensive search logging and user behavior analysis
-- 10. SQL-familiar syntax for complex vector operations and AI workflows

Best Practices for Production Vector Search Implementation

Embedding Strategy and Model Selection

Essential principles for effective MongoDB Vector Search deployment (a query sketch follows the list):

  1. Model Selection: Choose appropriate embedding models based on content type, domain, and language requirements
  2. Embedding Quality: Implement comprehensive content preparation and preprocessing for optimal embedding generation
  3. Index Optimization: Configure vector indexes with appropriate similarity metrics and performance parameters
  4. Hybrid Approach: Combine vector similarity with traditional text search for comprehensive relevance scoring
  5. Personalization: Implement user profile embeddings for personalized search and recommendation experiences
  6. Performance Monitoring: Track search performance, result quality, and user satisfaction metrics continuously
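
As a concrete illustration of points 1, 3, and 4, the sketch below runs an Atlas $vectorSearch aggregation against a pre-built vector index and blends the similarity score with a simple metadata-driven boost. The index name, field paths, and the generateEmbedding() helper are illustrative assumptions, and the filter assumes language and accessLevel are defined as filter fields in the index:

// Hedged sketch: hybrid query using the Atlas $vectorSearch aggregation stage
async function searchDocuments(db, queryText, generateEmbedding) {
  // generateEmbedding() is an assumed helper that calls the chosen embedding model
  const queryVector = await generateEmbedding(queryText, 'text-embedding-3-large');

  return db.collection('documents').aggregate([
    {
      $vectorSearch: {
        index: 'document_semantic_index',  // assumed Atlas vector index name
        path: 'embedding',
        queryVector: queryVector,
        numCandidates: 200,                // oversample candidates for better recall
        limit: 20,
        filter: { language: 'en', accessLevel: { $in: ['public', 'academic'] } }
      }
    },
    // Surface the similarity score returned by the vector search stage
    { $addFields: { vectorScore: { $meta: 'vectorSearchScore' } } },
    // Blend vector similarity with a small quality boost for hybrid relevance
    {
      $addFields: {
        hybridScore: {
          $add: [
            { $multiply: ['$vectorScore', 0.8] },
            { $multiply: [{ $ifNull: ['$metrics.averageRating', 0] }, 0.04] }
          ]
        }
      }
    },
    { $sort: { hybridScore: -1 } }
  ]).toArray();
}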

Scalability and Performance Optimization

Optimize vector search deployments for production-scale requirements (a caching and batching sketch follows the list):

  1. Index Strategy: Design efficient vector indexes with appropriate dimensionality and similarity algorithms
  2. Caching Implementation: Implement multi-tier caching for embeddings, queries, and search results
  3. Batch Processing: Optimize embedding generation and indexing through intelligent batch processing
  4. Query Optimization: Implement query expansion, rewriting, and adaptive similarity thresholds
  5. Resource Management: Monitor and optimize computational resources for embedding generation and vector operations
  6. Distribution Strategy: Design sharding and replication strategies for large-scale vector collections
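
A lightweight sketch of points 2 and 3: embeddings are cached by content hash and generated in fixed-size batches. The embedTexts client passed to the constructor is a hypothetical async (texts) => vectors function standing in for whatever embedding API is in use:

// Hedged sketch: content-hash embedding cache plus batched embedding generation
const crypto = require('crypto');

class EmbeddingBatcher {
  constructor(embedTexts, { batchSize = 100 } = {}) {
    this.embedTexts = embedTexts;  // assumed async (texts) => vectors client
    this.batchSize = batchSize;
    this.cache = new Map();        // contentHash -> embedding vector
  }

  hash(text) {
    return crypto.createHash('sha256').update(text).digest('hex');
  }

  async embedAll(texts) {
    const results = new Array(texts.length);
    const pending = [];            // entries not found in the cache

    texts.forEach((text, index) => {
      const cached = this.cache.get(this.hash(text));
      if (cached) results[index] = cached;
      else pending.push({ index, text });
    });

    // Generate missing embeddings in fixed-size batches to bound API payload sizes
    for (let i = 0; i < pending.length; i += this.batchSize) {
      const batch = pending.slice(i, i + this.batchSize);
      const vectors = await this.embedTexts(batch.map(item => item.text));
      batch.forEach((item, j) => {
        this.cache.set(this.hash(item.text), vectors[j]);
        results[item.index] = vectors[j];
      });
    }

    return results;
  }
}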

Conclusion

MongoDB Vector Search provides comprehensive AI-powered semantic search capabilities that enable natural language queries, intelligent content discovery, and sophisticated recommendation systems through high-dimensional vector embeddings and advanced similarity algorithms. The native MongoDB integration ensures that vector search benefits from the same scalability, performance, and operational features as traditional database operations.

Key MongoDB Vector Search benefits include:

  • Semantic Understanding: AI-powered semantic search that understands meaning and context beyond keyword matching
  • Advanced Similarity: Sophisticated vector similarity algorithms with cosine similarity and approximate nearest neighbor search
  • Hybrid Capabilities: Seamless integration of vector similarity with traditional text search and metadata filtering
  • Personalization: User profile embeddings for personalized search results and intelligent recommendations
  • Multi-Modal Support: Vector search across text, images, audio, and multi-modal content with unified similarity operations
  • Production Ready: High-performance vector indexing with automatic optimization and comprehensive analytics

Whether you're building AI-powered search applications, recommendation engines, content discovery platforms, or intelligent document retrieval systems, MongoDB Vector Search with QueryLeaf's familiar SQL interface provides the foundation for sophisticated semantic capabilities.

QueryLeaf Integration: QueryLeaf automatically manages MongoDB Vector Search operations while providing SQL-familiar syntax for vector similarity queries, embedding generation, and AI-powered content discovery. Advanced vector search patterns, personalization algorithms, and semantic analytics are seamlessly handled through familiar SQL constructs, making sophisticated AI capabilities accessible to SQL-oriented development teams.

The combination of MongoDB's robust vector search capabilities with SQL-style AI operations makes it an ideal platform for modern AI applications that require both advanced semantic understanding and familiar database management patterns, ensuring your AI-powered search solutions can scale efficiently while remaining maintainable and feature-rich.

MongoDB Capped Collections and Circular Buffers: High-Performance Logging, Event Streaming, and Fixed-Size Data Management for High-Throughput Applications

High-throughput applications require specialized data storage patterns that can handle massive write volumes while maintaining predictable performance characteristics and managing storage space efficiently. Traditional relational database approaches to logging and event streaming often struggle with write scalability, storage growth management, and query performance under extreme load conditions, particularly when dealing with time-series data, application logs, and real-time event streams.

MongoDB's capped collections provide a purpose-built solution for these scenarios: fixed-size collections that preserve insertion order and manage storage automatically by overwriting the oldest documents once capacity limits are reached. Unlike traditional log rotation mechanisms, which depend on external processes and can introduce performance bottlenecks, capped collections deliver built-in circular buffer behavior with native MongoDB integration, tailable cursors for real-time streaming, and optimized write performance, making them ideal for high-throughput logging, event processing, and time-sensitive data scenarios.
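
To make the circular buffer behavior concrete, here is a minimal sketch using the Node.js driver; the connection string, database, and collection names are illustrative:

// Minimal capped collection sketch: fixed-size log buffer plus a tailable cursor
const { MongoClient } = require('mongodb');

async function tailApplicationLogs() {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  const db = client.db('logging_demo');

  // Circular buffer: ~64MB or 100,000 documents, whichever limit is reached first
  const exists = await db.listCollections({ name: 'app_logs' }).hasNext();
  if (!exists) {
    await db.createCollection('app_logs', {
      capped: true,
      size: 64 * 1024 * 1024,
      max: 100000
    });
  }
  const logs = db.collection('app_logs');

  // Writes are appended in insertion order; the oldest entries are overwritten automatically
  await logs.insertOne({ level: 'INFO', message: 'service started', timestamp: new Date() });

  // Tailable cursor streams new documents as they arrive, similar to `tail -f`
  const cursor = logs.find({}, { tailable: true, awaitData: true });
  for await (const doc of cursor) {
    console.log(`${doc.timestamp.toISOString()} [${doc.level}] ${doc.message}`);
  }
}

tailApplicationLogs().catch(console.error);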

The Traditional High-Volume Logging Challenge

Conventional database approaches to high-volume logging and event streaming face significant scalability and management challenges:

-- Traditional PostgreSQL high-volume logging - storage growth and performance challenges

-- Application log table with typical structure
CREATE TABLE application_logs (
  log_id BIGSERIAL PRIMARY KEY,
  timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  application_name VARCHAR(100) NOT NULL,
  environment VARCHAR(20) DEFAULT 'production',
  log_level VARCHAR(10) NOT NULL, -- DEBUG, INFO, WARN, ERROR, FATAL
  message TEXT NOT NULL,
  user_id UUID,
  session_id VARCHAR(100),
  request_id VARCHAR(100),

  -- Contextual information
  source_ip INET,
  user_agent TEXT,
  request_method VARCHAR(10),
  request_url TEXT,
  response_status INTEGER,
  response_time_ms INTEGER,

  -- Structured data fields
  metadata JSONB,
  tags TEXT[],

  -- Performance tracking
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

  -- Indexing for queries
  CONSTRAINT valid_log_level CHECK (log_level IN ('DEBUG', 'INFO', 'WARN', 'ERROR', 'FATAL'))
);

-- Indexes for log querying (expensive to maintain with high write volume)
CREATE INDEX idx_application_logs_timestamp ON application_logs USING BTREE (timestamp DESC);
CREATE INDEX idx_application_logs_application ON application_logs (application_name, timestamp DESC);
CREATE INDEX idx_application_logs_level ON application_logs (log_level, timestamp DESC) WHERE log_level IN ('ERROR', 'FATAL');
CREATE INDEX idx_application_logs_user ON application_logs (user_id, timestamp DESC) WHERE user_id IS NOT NULL;
CREATE INDEX idx_application_logs_session ON application_logs (session_id, timestamp DESC) WHERE session_id IS NOT NULL;
CREATE INDEX idx_application_logs_request ON application_logs (request_id) WHERE request_id IS NOT NULL;

-- Partitioning strategy for managing large datasets (complex setup)
CREATE TABLE application_logs_y2024m01 PARTITION OF application_logs
FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

CREATE TABLE application_logs_y2024m02 PARTITION OF application_logs  
FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');

CREATE TABLE application_logs_y2024m03 PARTITION OF application_logs
FOR VALUES FROM ('2024-03-01') TO ('2024-04-01');

-- Complex log rotation and cleanup procedures (operational overhead)
-- Daily log cleanup procedure
CREATE OR REPLACE PROCEDURE cleanup_old_logs()
AS $$
DECLARE
  cutoff_date TIMESTAMP;
  affected_rows BIGINT;
  partition_name TEXT;
  partition_start DATE;
  partition_end DATE;
BEGIN
  -- Keep logs for 30 days
  cutoff_date := CURRENT_TIMESTAMP - INTERVAL '30 days';

  -- Delete old logs in batches to avoid lock contention
  LOOP
    DELETE FROM application_logs 
    WHERE timestamp < cutoff_date 
    AND log_id IN (
      SELECT log_id FROM application_logs 
      WHERE timestamp < cutoff_date 
      LIMIT 10000
    );

    GET DIAGNOSTICS affected_rows = ROW_COUNT;
    EXIT WHEN affected_rows = 0;

    -- Commit batch and pause to reduce system impact
    COMMIT;
    PERFORM pg_sleep(0.1);
  END LOOP;

  -- Drop old partitions if using partitioning
  FOR partition_name, partition_start, partition_end IN
    SELECT schemaname||'.'||tablename, 
           split_part(split_part(pg_get_expr(c.relpartbound, c.oid), '''', 2), '''', 1)::date,
           split_part(split_part(pg_get_expr(c.relpartbound, c.oid), '''', 4), '''', 1)::date
    FROM pg_tables pt
    JOIN pg_class c ON c.relname = pt.tablename
    WHERE pt.tablename LIKE 'application_logs_y%'
      AND split_part(split_part(pg_get_expr(c.relpartbound, c.oid), '''', 4), '''', 1)::date < CURRENT_DATE - INTERVAL '30 days'
  LOOP
    EXECUTE format('DROP TABLE IF EXISTS %s', partition_name);
  END LOOP;

END;
$$ LANGUAGE plpgsql;

-- Schedule daily cleanup (requires external scheduler)
-- 0 2 * * * /usr/bin/psql -d myapp -c "CALL cleanup_old_logs();"

-- Complex high-volume log insertion with batching
WITH log_batch AS (
  INSERT INTO application_logs (
    application_name, environment, log_level, message, user_id, 
    session_id, request_id, source_ip, user_agent, request_method, 
    request_url, response_status, response_time_ms, metadata, tags
  ) VALUES 
  ('web-api', 'production', 'INFO', 'User login successful', 
   '550e8400-e29b-41d4-a716-446655440000', 'sess_abc123', 'req_xyz789',
   '192.168.1.100', 'Mozilla/5.0...', 'POST', '/api/auth/login', 200, 150,
   '{"login_method": "email", "ip_geolocation": "US-CA"}', ARRAY['auth', 'login']
  ),
  ('web-api', 'production', 'WARN', 'Rate limit threshold reached', 
   '550e8400-e29b-41d4-a716-446655440001', 'sess_def456', 'req_abc123',
   '192.168.1.101', 'PostmanRuntime/7.29.0', 'POST', '/api/data/upload', 429, 50,
   '{"rate_limit": "100_per_minute", "current_count": 101}', ARRAY['rate_limiting', 'api']
  ),
  ('background-worker', 'production', 'ERROR', 'Database connection timeout', 
   NULL, NULL, 'job_456789',
   NULL, NULL, NULL, NULL, NULL, 5000,
   '{"error_code": "DB_TIMEOUT", "retry_attempt": 3, "queue_size": 1500}', ARRAY['database', 'error', 'timeout']
  ),
  ('web-api', 'production', 'DEBUG', 'Cache miss for user preferences', 
   '550e8400-e29b-41d4-a716-446655440002', 'sess_ghi789', 'req_def456',
   '192.168.1.102', 'React Native App', 'GET', '/api/user/preferences', 200, 85,
   '{"cache_key": "user_prefs_12345", "cache_ttl": 300}', ARRAY['cache', 'performance']
  )
  RETURNING log_id, timestamp, application_name, log_level
)
SELECT 
  COUNT(*) as logs_inserted,
  MIN(timestamp) as first_log_time,
  MAX(timestamp) as last_log_time,
  string_agg(DISTINCT application_name, ', ') as applications,
  string_agg(DISTINCT log_level, ', ') as log_levels
FROM log_batch;

-- Complex log analysis queries (expensive on large datasets)
WITH hourly_log_stats AS (
  SELECT 
    date_trunc('hour', timestamp) as hour_bucket,
    application_name,
    log_level,
    COUNT(*) as log_count,

    -- Error rate calculation
    COUNT(*) FILTER (WHERE log_level IN ('ERROR', 'FATAL')) as error_count,
    COUNT(*) FILTER (WHERE log_level IN ('ERROR', 'FATAL'))::float / COUNT(*) * 100 as error_rate_percent,

    -- Response time statistics
    AVG(response_time_ms) FILTER (WHERE response_time_ms IS NOT NULL) as avg_response_time,
    PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY response_time_ms) FILTER (WHERE response_time_ms IS NOT NULL) as p95_response_time,

    -- Request statistics
    COUNT(DISTINCT user_id) FILTER (WHERE user_id IS NOT NULL) as unique_users,
    COUNT(DISTINCT session_id) FILTER (WHERE session_id IS NOT NULL) as unique_sessions,

    -- Top error messages
    mode() WITHIN GROUP (ORDER BY message) FILTER (WHERE log_level IN ('ERROR', 'FATAL')) as most_common_error,

    -- Resource utilization indicators
    COUNT(*) FILTER (WHERE response_time_ms > 1000) as slow_requests,
    COUNT(*) FILTER (WHERE response_status >= 400) as client_errors,
    COUNT(*) FILTER (WHERE response_status >= 500) as server_errors

  FROM application_logs
  WHERE timestamp >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
    AND timestamp < CURRENT_TIMESTAMP
  GROUP BY date_trunc('hour', timestamp), application_name, log_level
),

trend_analysis AS (
  SELECT 
    hour_bucket,
    application_name,
    log_level,
    log_count,
    error_rate_percent,
    avg_response_time,

    -- Hour-over-hour trend analysis
    LAG(log_count) OVER (
      PARTITION BY application_name, log_level 
      ORDER BY hour_bucket
    ) as prev_hour_count,

    LAG(error_rate_percent) OVER (
      PARTITION BY application_name, log_level 
      ORDER BY hour_bucket
    ) as prev_hour_error_rate,

    LAG(avg_response_time) OVER (
      PARTITION BY application_name, log_level 
      ORDER BY hour_bucket
    ) as prev_hour_response_time,

    -- Calculate trends
    CASE 
      WHEN LAG(log_count) OVER (PARTITION BY application_name, log_level ORDER BY hour_bucket) IS NOT NULL THEN
        ((log_count - LAG(log_count) OVER (PARTITION BY application_name, log_level ORDER BY hour_bucket))::float / 
         LAG(log_count) OVER (PARTITION BY application_name, log_level ORDER BY hour_bucket) * 100)
      ELSE NULL
    END as log_count_change_percent,

    -- Moving averages
    AVG(log_count) OVER (
      PARTITION BY application_name, log_level 
      ORDER BY hour_bucket 
      ROWS BETWEEN 3 PRECEDING AND CURRENT ROW
    ) as rolling_4h_avg_count,

    AVG(error_rate_percent) OVER (
      PARTITION BY application_name, log_level 
      ORDER BY hour_bucket 
      ROWS BETWEEN 3 PRECEDING AND CURRENT ROW  
    ) as rolling_4h_avg_error_rate

  FROM hourly_log_stats
)

SELECT 
  hour_bucket,
  application_name,
  log_level,
  log_count,
  ROUND(error_rate_percent::numeric, 2) as error_rate_percent,
  ROUND(avg_response_time::numeric, 2) as avg_response_time_ms,

  -- Trend indicators
  ROUND(log_count_change_percent::numeric, 1) as hourly_change_percent,
  ROUND(rolling_4h_avg_count::numeric, 0) as rolling_4h_avg,
  ROUND(rolling_4h_avg_error_rate::numeric, 2) as rolling_4h_avg_error_rate,

  -- Alert conditions
  CASE 
    WHEN error_rate_percent > rolling_4h_avg_error_rate * 2 AND error_rate_percent > 1 THEN 'HIGH_ERROR_RATE'
    WHEN log_count_change_percent > 100 THEN 'TRAFFIC_SPIKE'
    WHEN log_count_change_percent < -50 AND log_count < rolling_4h_avg * 0.5 THEN 'TRAFFIC_DROP'
    WHEN avg_response_time > 1000 THEN 'HIGH_LATENCY'
    ELSE 'NORMAL'
  END as alert_condition,

  CURRENT_TIMESTAMP as analysis_time

FROM trend_analysis
ORDER BY hour_bucket DESC, application_name, log_level;

-- Problems with traditional PostgreSQL logging approaches:
-- 1. Unlimited storage growth requiring complex rotation strategies
-- 2. Index maintenance overhead degrading write performance
-- 3. Partitioning complexity for managing large datasets
-- 4. Expensive cleanup operations impacting production performance
-- 5. Limited real-time streaming capabilities for log analysis
-- 6. Complex batching logic required for high-volume insertions
-- 7. Vacuum and maintenance operations required for table health
-- 8. Query performance degradation as table size grows
-- 9. Storage space reclamation challenges after log deletion
-- 10. Manual operational overhead for log management and cleanup

-- Additional complications:
-- - WAL (Write-Ahead Log) bloat from high-volume insertions
-- - Lock contention during peak logging periods
-- - Backup complexity due to large log table sizes
-- - Replication lag caused by high write volume
-- - Statistics staleness affecting query plan optimization
-- - Complex monitoring required for log system health
-- - Difficulty implementing real-time log streaming
-- - Storage I/O bottlenecks during cleanup operations

MongoDB capped collections provide elegant solutions for high-volume logging:

// MongoDB Capped Collections - efficient high-volume logging with built-in circular buffer functionality
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017');
const db = client.db('high_throughput_logging_platform');

// Advanced MongoDB Capped Collections manager for high-performance logging and event streaming
class AdvancedCappedCollectionManager {
  constructor(db) {
    this.db = db;
    this.collections = new Map();
    this.tailable_cursors = new Map();
    this.streaming_handlers = new Map();

    // Configuration for different log types and use cases
    this.cappedConfigurations = {
      // Application logs with high write volume
      application_logs: {
        size: 1024 * 1024 * 1024, // 1GB
        max: 10000000, // 10 million documents
        indexing: ['timestamp', 'application', 'level'],
        streaming: true,
        compression: true,
        retention_hours: 72
      },

      // Real-time event stream
      event_stream: {
        size: 512 * 1024 * 1024, // 512MB
        max: 5000000, // 5 million events
        indexing: ['timestamp', 'event_type'],
        streaming: true,
        compression: false, // For lowest latency
        retention_hours: 24
      },

      // System metrics collection
      system_metrics: {
        size: 2048 * 1024 * 1024, // 2GB
        max: 20000000, // 20 million metrics
        indexing: ['timestamp', 'metric_type', 'host'],
        streaming: true,
        compression: true,
        retention_hours: 168 // 7 days
      },

      // User activity tracking
      user_activity: {
        size: 256 * 1024 * 1024, // 256MB
        max: 2000000, // 2 million activities
        indexing: ['timestamp', 'user_id', 'activity_type'],
        streaming: false,
        compression: true,
        retention_hours: 24
      },

      // Error and exception tracking
      error_logs: {
        size: 128 * 1024 * 1024, // 128MB
        max: 500000, // 500k errors
        indexing: ['timestamp', 'application', 'error_type', 'severity'],
        streaming: true,
        compression: false, // For immediate processing
        retention_hours: 168 // 7 days
      }
    };

    // Performance monitoring
    this.stats = {
      writes_per_second: new Map(),
      collection_sizes: new Map(),
      streaming_clients: new Map()
    };
  }

  async initializeCappedCollections() {
    console.log('Initializing advanced capped collections for high-throughput logging...');

    const initializationResults = [];

    for (const [collectionName, config] of Object.entries(this.cappedConfigurations)) {
      try {
        console.log(`Setting up capped collection: ${collectionName}`);

        // Create capped collection with optimal configuration
        const collection = await this.createOptimizedCappedCollection(collectionName, config);

        // Setup indexing strategy
        await this.setupCollectionIndexes(collection, config.indexing);

        // Initialize real-time streaming if enabled
        if (config.streaming) {
          await this.setupRealTimeStreaming(collectionName, collection);
        }

        // Setup monitoring and statistics
        await this.setupCollectionMonitoring(collectionName, collection, config);

        this.collections.set(collectionName, {
          collection: collection,
          config: config,
          created_at: new Date(),
          stats: {
            documents_written: 0,
            bytes_written: 0,
            last_write: null,
            streaming_clients: 0
          }
        });

        initializationResults.push({
          collection: collectionName,
          status: 'success',
          size_mb: Math.round(config.size / (1024 * 1024)),
          max_documents: config.max,
          streaming_enabled: config.streaming
        });

      } catch (error) {
        console.error(`Failed to initialize capped collection ${collectionName}:`, error);
        initializationResults.push({
          collection: collectionName,
          status: 'error',
          error: error.message
        });
      }
    }

    console.log(`Initialized ${initializationResults.filter(r => r.status === 'success').length} capped collections`);
    return initializationResults;
  }

  async createOptimizedCappedCollection(name, config) {
    console.log(`Creating capped collection '${name}' with ${Math.round(config.size / (1024 * 1024))}MB capacity`);

    try {
      // Check if collection already exists
      const existingCollections = await this.db.listCollections({ name: name }).toArray();

      if (existingCollections.length > 0) {
        const existing = existingCollections[0];

        if (existing.options?.capped) {
          console.log(`Capped collection '${name}' already exists`);
          return this.db.collection(name);
        } else {
          throw new Error(`Collection '${name}' exists but is not capped`);
        }
      }

      // Create new capped collection with optimized settings
      const createOptions = {
        capped: true,
        size: config.size,
        max: config.max
      };

      await this.db.createCollection(name, createOptions);
      const collection = this.db.collection(name);

      console.log(`Created capped collection '${name}' successfully`);
      return collection;

    } catch (error) {
      console.error(`Error creating capped collection '${name}':`, error);
      throw error;
    }
  }

  async setupCollectionIndexes(collection, indexFields) {
    console.log(`Setting up indexes for capped collection: ${collection.collectionName}`);

    try {
      // Create indexes for efficient querying (note: capped collections have limitations on indexing)
      const indexPromises = indexFields.map(async (field) => {
        const indexSpec = {};
        indexSpec[field] = 1;

        try {
          await collection.createIndex(indexSpec, { 
            background: true,
            name: `idx_${field}`
          });
          console.log(`Created index on field: ${field}`);
        } catch (error) {
          // Some indexes may not be supported on capped collections
          console.warn(`Could not create index on ${field}: ${error.message}`);
        }
      });

      await Promise.allSettled(indexPromises);

    } catch (error) {
      console.error(`Error setting up indexes for ${collection.collectionName}:`, error);
    }
  }

  async setupRealTimeStreaming(collectionName, collection) {
    console.log(`Setting up real-time streaming for: ${collectionName}`);

    try {
      // Create tailable cursor for real-time document streaming
      const tailableCursor = collection.find({}, {
        tailable: true,
        awaitData: true,
        noCursorTimeout: true,
        maxTimeMS: 0
      });

      this.tailable_cursors.set(collectionName, tailableCursor);

      // Setup streaming event handlers
      const streamingHandler = async (document) => {
        await this.processStreamingDocument(collectionName, document);
      };

      this.streaming_handlers.set(collectionName, streamingHandler);

      // Start streaming in background
      this.startTailableStreaming(collectionName, tailableCursor, streamingHandler);

      console.log(`Real-time streaming enabled for: ${collectionName}`);

    } catch (error) {
      console.error(`Error setting up streaming for ${collectionName}:`, error);
    }
  }

  async startTailableStreaming(collectionName, cursor, handler) {
    console.log(`Starting tailable streaming for: ${collectionName}`);

    try {
      // Process documents as they are inserted
      for await (const document of cursor) {
        try {
          await handler(document);
          this.updateStreamingStats(collectionName, 'document_streamed');
        } catch (error) {
          console.error(`Error processing streamed document in ${collectionName}:`, error);
          this.updateStreamingStats(collectionName, 'streaming_error');
        }
      }
    } catch (error) {
      console.error(`Tailable streaming error for ${collectionName}:`, error);

      // A tailable cursor is dead once it errors (or the capped collection was empty),
      // so recreate the cursor before restarting the stream after a short delay
      setTimeout(() => {
        console.log(`Restarting tailable streaming for: ${collectionName}`);
        const freshCursor = this.db.collection(collectionName).find({}, {
          tailable: true,
          awaitData: true,
          noCursorTimeout: true
        });
        this.tailable_cursors.set(collectionName, freshCursor);
        this.startTailableStreaming(collectionName, freshCursor, handler);
      }, 5000);
    }
  }

  async processStreamingDocument(collectionName, document) {
    // Process real-time document based on collection type
    switch (collectionName) {
      case 'application_logs':
        await this.processLogDocument(document);
        break;
      case 'event_stream':
        await this.processEventDocument(document);
        break;
      case 'system_metrics':
        await this.processMetricDocument(document);
        break;
      case 'error_logs':
        await this.processErrorDocument(document);
        break;
      default:
        console.log(`Streamed document from ${collectionName}:`, document._id);
    }
  }

  async processLogDocument(logDocument) {
    // Real-time log processing
    console.log(`Processing log: ${logDocument.level} - ${logDocument.application}`);

    // Alert on critical errors
    if (logDocument.level === 'ERROR' || logDocument.level === 'FATAL') {
      await this.triggerErrorAlert(logDocument);
    }

    // Update real-time metrics
    await this.updateLogMetrics(logDocument);
  }

  async processEventDocument(eventDocument) {
    // Real-time event processing
    console.log(`Processing event: ${eventDocument.event_type}`);

    // Update event counters
    await this.updateEventCounters(eventDocument);

    // Trigger event-based workflows
    if (eventDocument.event_type === 'user_purchase') {
      await this.triggerPurchaseWorkflow(eventDocument);
    }
  }

  async processMetricDocument(metricDocument) {
    // Real-time metrics processing
    console.log(`Processing metric: ${metricDocument.metric_type} = ${metricDocument.value}`);

    // Check thresholds
    if (metricDocument.metric_type === 'cpu_usage' && metricDocument.value > 80) {
      await this.triggerHighCPUAlert(metricDocument);
    }
  }

  async processErrorDocument(errorDocument) {
    // Real-time error processing
    console.log(`Processing error: ${errorDocument.error_type} in ${errorDocument.application}`);

    // Immediate alerting for critical errors
    if (errorDocument.severity === 'CRITICAL') {
      await this.triggerCriticalErrorAlert(errorDocument);
    }
  }

  async logEvent(collectionName, eventData, options = {}) {
    console.log(`Logging event to ${collectionName}`);

    try {
      const collectionData = this.collections.get(collectionName);
      if (!collectionData) {
        throw new Error(`Capped collection ${collectionName} not initialized`);
      }

      const collection = collectionData.collection;

      // Enhance event data with standard fields
      const enhancedEvent = {
        ...eventData,
        timestamp: new Date(),
        _logged_at: new Date(),
        _collection_type: collectionName,

        // Add metadata if specified
        ...(options.metadata && { _metadata: options.metadata }),

        // Add correlation ID for tracking
        ...(options.correlationId && { _correlation_id: options.correlationId })
      };

      // High-performance insert (no acknowledgment waiting for maximum throughput)
      const insertOptions = {
        writeConcern: options.writeConcern || { w: 0 }, // No acknowledgment for maximum speed
        ordered: false // Allow out-of-order inserts for better performance
      };

      const result = await collection.insertOne(enhancedEvent, insertOptions);

      // Update statistics
      this.updateWriteStats(collectionName, enhancedEvent);

      return {
        success: true,
        insertedId: result.insertedId,
        collection: collectionName,
        timestamp: enhancedEvent.timestamp
      };

    } catch (error) {
      console.error(`Error logging to ${collectionName}:`, error);

      // Update error statistics
      this.updateWriteStats(collectionName, null, error);

      throw error;
    }
  }

  async logEventBatch(collectionName, events, options = {}) {
    console.log(`Logging batch of ${events.length} events to ${collectionName}`);

    try {
      const collectionData = this.collections.get(collectionName);
      if (!collectionData) {
        throw new Error(`Capped collection ${collectionName} not initialized`);
      }

      const collection = collectionData.collection;
      const timestamp = new Date();

      // Enhance all events with standard fields
      const enhancedEvents = events.map((eventData, index) => ({
        ...eventData,
        timestamp: new Date(timestamp.getTime() + index), // Ensure unique timestamps
        _logged_at: timestamp,
        _collection_type: collectionName,
        _batch_id: options.batchId || require('crypto').randomUUID(),

        // Add metadata if specified
        ...(options.metadata && { _metadata: options.metadata })
      }));

      // High-performance batch insert
      const insertOptions = {
        writeConcern: options.writeConcern || { w: 0 }, // No acknowledgment
        ordered: false // Allow out-of-order inserts
      };

      const result = await collection.insertMany(enhancedEvents, insertOptions);

      // Update statistics for all events
      enhancedEvents.forEach(event => this.updateWriteStats(collectionName, event));

      return {
        success: true,
        insertedCount: result.insertedCount,
        insertedIds: result.insertedIds,
        collection: collectionName,
        batchSize: events.length,
        timestamp: timestamp
      };

    } catch (error) {
      console.error(`Error batch logging to ${collectionName}:`, error);

      // Update error statistics
      this.updateWriteStats(collectionName, null, error);

      throw error;
    }
  }

  async queryRecentEvents(collectionName, query = {}, options = {}) {
    console.log(`Querying recent events from ${collectionName}`);

    try {
      const collectionData = this.collections.get(collectionName);
      if (!collectionData) {
        throw new Error(`Capped collection ${collectionName} not initialized`);
      }

      const collection = collectionData.collection;

      // Build query with time-based filtering
      const timeFilter = {};
      if (options.since) {
        timeFilter.timestamp = { $gte: options.since };
      }
      if (options.until) {
        timeFilter.timestamp = { ...timeFilter.timestamp, $lte: options.until };
      }

      const finalQuery = {
        ...query,
        ...timeFilter
      };

      // Query options for efficient retrieval
      const queryOptions = {
        sort: options.sort || { $natural: -1 }, // Natural order for capped collections
        limit: options.limit || 1000,
        projection: options.projection || {}
      };

      const cursor = collection.find(finalQuery, queryOptions);
      const results = await cursor.toArray();

      console.log(`Retrieved ${results.length} events from ${collectionName}`);

      return {
        collection: collectionName,
        count: results.length,
        events: results,
        query: finalQuery,
        options: queryOptions
      };

    } catch (error) {
      console.error(`Error querying ${collectionName}:`, error);
      throw error;
    }
  }

  async getCollectionStats(collectionName) {
    console.log(`Getting statistics for capped collection: ${collectionName}`);

    try {
      const collectionData = this.collections.get(collectionName);
      if (!collectionData) {
        throw new Error(`Capped collection ${collectionName} not initialized`);
      }

      const collection = collectionData.collection;

      // Get collection statistics
      const stats = await this.db.command({ collStats: collectionName });
      const recentStats = collectionData.stats;

      // Calculate performance metrics
      const now = Date.now();
      const timeSinceLastWrite = recentStats.last_write ? 
        (now - recentStats.last_write.getTime()) / 1000 : null;

      const writesPerSecond = this.stats.writes_per_second.get(collectionName) || 0;

      return {
        collection_name: collectionName,

        // MongoDB collection stats
        is_capped: stats.capped,
        max_size: stats.maxSize,
        max_documents: stats.max,
        current_size: stats.size,
        storage_size: stats.storageSize,
        document_count: stats.count,
        average_document_size: stats.avgObjSize,

        // Usage statistics
        size_utilization_percent: (stats.size / stats.maxSize * 100).toFixed(2),
        document_utilization_percent: stats.max ? (stats.count / stats.max * 100).toFixed(2) : null,

        // Performance metrics
        writes_per_second: writesPerSecond,
        documents_written: recentStats.documents_written,
        bytes_written: recentStats.bytes_written,
        last_write: recentStats.last_write,
        time_since_last_write_seconds: timeSinceLastWrite,

        // Streaming statistics
        streaming_enabled: collectionData.config.streaming,
        streaming_clients: recentStats.streaming_clients,

        // Configuration
        retention_hours: collectionData.config.retention_hours,
        compression_enabled: collectionData.config.compression,

        // Timestamps
        created_at: collectionData.created_at,
        stats_generated_at: new Date()
      };

    } catch (error) {
      console.error(`Error getting stats for ${collectionName}:`, error);
      throw error;
    }
  }

  async getAllCollectionStats() {
    console.log('Getting comprehensive statistics for all capped collections');

    const allStats = {};
    const promises = Array.from(this.collections.keys()).map(async (collectionName) => {
      try {
        const stats = await this.getCollectionStats(collectionName);
        allStats[collectionName] = stats;
      } catch (error) {
        allStats[collectionName] = { error: error.message };
      }
    });

    await Promise.all(promises);

    // Calculate aggregate statistics
    const aggregateStats = {
      total_collections: Object.keys(allStats).length,
      total_documents: 0,
      total_size_bytes: 0,
      total_writes_per_second: 0,
      collections_with_streaming: 0,
      average_utilization_percent: 0
    };

    let validCollections = 0;
    for (const [name, stats] of Object.entries(allStats)) {
      if (!stats.error) {
        validCollections++;
        aggregateStats.total_documents += stats.document_count || 0;
        aggregateStats.total_size_bytes += stats.current_size || 0;
        aggregateStats.total_writes_per_second += stats.writes_per_second || 0;
        if (stats.streaming_enabled) aggregateStats.collections_with_streaming++;
        aggregateStats.average_utilization_percent += parseFloat(stats.size_utilization_percent) || 0;
      }
    }

    if (validCollections > 0) {
      aggregateStats.average_utilization_percent /= validCollections;
    }

    return {
      individual_collections: allStats,
      aggregate_statistics: aggregateStats,
      generated_at: new Date()
    };
  }

  // Real-time streaming client management
  createTailableStream(collectionName, filter = {}, options = {}) {
    console.log(`Creating tailable stream for: ${collectionName}`);

    const collectionData = this.collections.get(collectionName);
    if (!collectionData || !collectionData.config.streaming) {
      throw new Error(`Collection ${collectionName} is not configured for streaming`);
    }

    const collection = collectionData.collection;

    // Create tailable cursor with real-time options
    const tailableOptions = {
      tailable: true,
      awaitData: true,
      noCursorTimeout: true,
      maxTimeMS: 0,
      ...options
    };

    const cursor = collection.find(filter, tailableOptions);

    // Update streaming client count
    this.updateStreamingStats(collectionName, 'client_connected');

    return cursor;
  }

  // Utility methods for statistics and monitoring
  updateWriteStats(collectionName, eventData, error = null) {
    const collectionData = this.collections.get(collectionName);
    if (!collectionData) return;

    if (error) {
      // Handle error statistics
      collectionData.stats.errors = (collectionData.stats.errors || 0) + 1;
    } else {
      // Update write statistics
      collectionData.stats.documents_written++;
      collectionData.stats.last_write = new Date();

      if (eventData) {
        const eventSize = JSON.stringify(eventData).length;
        collectionData.stats.bytes_written += eventSize;
      }
    }

    // Update writes per second
    this.updateWritesPerSecond(collectionName);
  }

  updateWritesPerSecond(collectionName) {
    if (!this.stats.writes_per_second.has(collectionName)) {
      this.stats.writes_per_second.set(collectionName, 0);
    }

    // Count this write toward a rolling one-second window
    this.stats.writes_per_second.set(
      collectionName, 
      this.stats.writes_per_second.get(collectionName) + 1
    );

    // Remove this write's contribution after one second so the counter
    // approximates writes per second instead of being reset to zero
    setTimeout(() => {
      const current = this.stats.writes_per_second.get(collectionName) || 0;
      this.stats.writes_per_second.set(collectionName, Math.max(0, current - 1));
    }, 1000);
  }

  updateStreamingStats(collectionName, action) {
    const collectionData = this.collections.get(collectionName);
    if (!collectionData) return;

    switch (action) {
      case 'client_connected':
        collectionData.stats.streaming_clients++;
        break;
      case 'client_disconnected':
        collectionData.stats.streaming_clients = Math.max(0, collectionData.stats.streaming_clients - 1);
        break;
      case 'document_streamed':
        collectionData.stats.documents_streamed = (collectionData.stats.documents_streamed || 0) + 1;
        break;
      case 'streaming_error':
        collectionData.stats.streaming_errors = (collectionData.stats.streaming_errors || 0) + 1;
        break;
    }
  }

  // Alert and notification methods
  async triggerErrorAlert(logDocument) {
    console.log(`🚨 ERROR ALERT: ${logDocument.application} - ${logDocument.message}`);
    // Implement alerting logic (email, Slack, PagerDuty, etc.)
  }

  async triggerCriticalErrorAlert(errorDocument) {
    console.log(`🔥 CRITICAL ERROR: ${errorDocument.application} - ${errorDocument.error_type}`);
    // Implement critical alerting logic
  }

  async triggerHighCPUAlert(metricDocument) {
    console.log(`⚠️ HIGH CPU: ${metricDocument.host} - ${metricDocument.value}%`);
    // Implement system monitoring alerts
  }

  // Workflow triggers
  async triggerPurchaseWorkflow(eventDocument) {
    console.log(`💰 Purchase Event: User ${eventDocument.user_id} - Amount ${eventDocument.amount}`);
    // Implement purchase-related workflows
  }

  // Metrics updating methods
  async updateLogMetrics(logDocument) {
    // Update aggregated log metrics in real-time
    const metricsUpdate = {
      $inc: {
        [`hourly_logs.${new Date().getHours()}.${logDocument.level.toLowerCase()}`]: 1,
        [`application_logs.${logDocument.application}.${logDocument.level.toLowerCase()}`]: 1
      },
      $set: {
        last_updated: new Date()
      }
    };

    await this.db.collection('log_metrics').updateOne(
      { _id: 'real_time_metrics' },
      metricsUpdate,
      { upsert: true }
    );
  }

  async updateEventCounters(eventDocument) {
    // Update real-time event counters
    const counterUpdate = {
      $inc: {
        [`event_counts.${eventDocument.event_type}`]: 1,
        'total_events': 1
      },
      $set: {
        last_event: new Date(),
        last_event_type: eventDocument.event_type
      }
    };

    await this.db.collection('event_metrics').updateOne(
      { _id: 'real_time_counters' },
      counterUpdate,
      { upsert: true }
    );
  }

  async setupCollectionMonitoring(collectionName, collection, config) {
    // Setup monitoring for collection health and performance
    setInterval(async () => {
      try {
        const stats = await this.getCollectionStats(collectionName);

        // Check for potential issues
        if (stats.size_utilization_percent > 90) {
          console.warn(`⚠️ Collection ${collectionName} is ${stats.size_utilization_percent}% full`);
        }

        if (stats.writes_per_second === 0 && config.retention_hours < 24) {
          console.warn(`⚠️ No recent writes to ${collectionName}`);
        }

      } catch (error) {
        console.error(`Error monitoring ${collectionName}:`, error);
      }
    }, 60000); // Check every minute
  }
}

// Example usage: High-performance logging system setup
async function setupHighPerformanceLogging() {
  console.log('Setting up comprehensive high-performance logging system with capped collections...');

  const cappedManager = new AdvancedCappedCollectionManager(db);

  // Initialize all capped collections
  await cappedManager.initializeCappedCollections();

  // Example: High-volume application logging
  const logEntries = [
    {
      application: 'web-api',
      level: 'INFO',
      message: 'User authentication successful',
      user_id: '507f1f77bcf86cd799439011',
      session_id: 'sess_abc123',
      request_id: 'req_xyz789',
      source_ip: '192.168.1.100',
      user_agent: 'Mozilla/5.0...',
      request_method: 'POST',
      request_url: '/api/auth/login',
      response_status: 200,
      response_time_ms: 150,
      metadata: {
        login_method: 'email',
        ip_geolocation: 'US-CA',
        device_type: 'desktop'
      }
    },
    {
      application: 'background-worker',
      level: 'ERROR',
      message: 'Database connection timeout',
      error_type: 'DatabaseTimeout',
      error_code: 'DB_CONN_TIMEOUT',
      stack_trace: 'Error: Connection timeout...',
      job_id: 'job_456789',
      queue_name: 'high_priority',
      retry_attempt: 3,
      metadata: {
        connection_pool_size: 10,
        active_connections: 9,
        queue_size: 1500
      }
    },
    {
      application: 'payment-service',
      level: 'WARN',
      message: 'Payment processing took longer than expected',
      user_id: '507f1f77bcf86cd799439012',
      transaction_id: 'txn_abc456',
      payment_method: 'credit_card',
      amount: 99.99,
      currency: 'USD',
      processing_time_ms: 5500,
      metadata: {
        gateway: 'stripe',
        gateway_response_time: 4800,
        fraud_check_time: 700
      }
    }
  ];

  // Batch insert logs for maximum performance
  await cappedManager.logEventBatch('application_logs', logEntries, {
    batchId: 'batch_001',
    metadata: { source: 'demo', environment: 'production' }
  });

  // Example: Real-time event streaming
  const events = [
    {
      event_type: 'user_signup',
      user_id: '507f1f77bcf86cd799439013',
      email: 'user@example.com',
      signup_method: 'google_oauth',
      referrer: 'organic_search',
      metadata: {
        utm_source: 'google',
        utm_medium: 'organic',
        landing_page: '/pricing'
      }
    },
    {
      event_type: 'user_purchase',
      user_id: '507f1f77bcf86cd799439013',
      order_id: '507f1f77bcf86cd799439014',
      amount: 299.99,
      currency: 'USD',
      product_ids: ['prod_001', 'prod_002'],
      payment_method: 'stripe',
      metadata: {
        discount_applied: 50.00,
        coupon_code: 'SAVE50',
        affiliate_id: 'aff_123'
      }
    }
  ];

  await cappedManager.logEventBatch('event_stream', events);

  // Example: System metrics collection
  const metrics = [
    {
      metric_type: 'cpu_usage',
      host: 'web-server-01',
      value: 78.5,
      unit: 'percent',
      tags: ['production', 'web-tier']
    },
    {
      metric_type: 'memory_usage',
      host: 'web-server-01', 
      value: 6.2,
      unit: 'gb',
      tags: ['production', 'web-tier']
    },
    {
      metric_type: 'disk_io',
      host: 'db-server-01',
      value: 1250,
      unit: 'ops_per_second',
      tags: ['production', 'database-tier']
    }
  ];

  await cappedManager.logEventBatch('system_metrics', metrics);

  // Query recent events
  const recentErrors = await cappedManager.queryRecentEvents('error_logs', 
    { level: 'ERROR' }, 
    { limit: 100, since: new Date(Date.now() - 60 * 60 * 1000) } // Last hour
  );

  console.log(`Found ${recentErrors.count} recent errors`);

  // Get comprehensive statistics
  const stats = await cappedManager.getAllCollectionStats();
  console.log('Capped Collections System Status:', JSON.stringify(stats.aggregate_statistics, null, 2));

  // Setup real-time streaming
  const logStream = cappedManager.createTailableStream('application_logs', 
    { level: { $in: ['ERROR', 'FATAL'] } }
  );

  console.log('Real-time error log streaming started...');

  return cappedManager;
}

// Benefits of MongoDB Capped Collections:
// - Fixed-size collections with automatic space management
// - Built-in circular buffer functionality for efficient storage utilization
// - Optimized for high-throughput write operations with minimal overhead
// - Tailable cursors for real-time streaming and event processing
// - Natural insertion order preservation without additional indexing
// - No fragmentation issues compared to traditional log rotation
// - Automatic old document removal without manual cleanup processes
// - Superior performance for append-only workloads like logging
// - Built-in MongoDB integration with replication and sharding support
// - SQL-compatible operations through QueryLeaf for familiar management

module.exports = {
  AdvancedCappedCollectionManager,
  setupHighPerformanceLogging
};

Understanding MongoDB Capped Collections Architecture

Advanced Circular Buffer Implementation and High-Throughput Patterns

Implement sophisticated capped collection patterns for production-scale logging systems:

// Production-grade capped collection patterns for enterprise logging infrastructure
class EnterpriseCappedCollectionManager extends AdvancedCappedCollectionManager {
  constructor(db, enterpriseConfig) {
    super(db);

    this.enterpriseConfig = {
      multiTenant: enterpriseConfig.multiTenant || false,
      distributedLogging: enterpriseConfig.distributedLogging || false,
      compressionEnabled: enterpriseConfig.compressionEnabled !== false,
      retentionPolicies: enterpriseConfig.retentionPolicies || {},
      alertingIntegration: enterpriseConfig.alertingIntegration || {},
      metricsExport: enterpriseConfig.metricsExport || {}
    };

    this.setupEnterpriseIntegrations();
  }

  async setupMultiTenantCappedCollections(tenants) {
    console.log('Setting up multi-tenant capped collection architecture...');

    const tenantCollections = new Map();

    for (const [tenantId, tenantConfig] of Object.entries(tenants)) {
      const tenantCollectionName = `logs_tenant_${tenantId}`;

      // Create tenant-specific capped collection
      const cappedConfig = {
        size: tenantConfig.logQuotaBytes || 128 * 1024 * 1024, // 128MB default
        max: tenantConfig.maxDocuments || 1000000,
        indexing: ['timestamp', 'level', 'application'],
        streaming: tenantConfig.streamingEnabled || false,
        compression: true,
        retention_hours: tenantConfig.retentionHours || 72
      };

      this.cappedConfigurations[tenantCollectionName] = cappedConfig;
      tenantCollections.set(tenantId, tenantCollectionName);
    }

    await this.initializeCappedCollections();
    return tenantCollections;
  }

  async setupDistributedLoggingAggregation(nodeConfigs) {
    console.log('Setting up distributed logging aggregation...');

    const aggregationStreams = {};

    for (const [nodeId, nodeConfig] of Object.entries(nodeConfigs)) {
      // Create aggregation stream for each distributed node
      aggregationStreams[`node_${nodeId}_aggregation`] = {
        sourceCollections: nodeConfig.sourceCollections,
        aggregationPipeline: [
          {
            $match: {
              timestamp: { $gte: new Date(Date.now() - 60000) }, // Last minute
              node_id: nodeId
            }
          },
          {
            $group: {
              _id: {
                minute: { $dateToString: { format: "%Y-%m-%d %H:%M", date: "$timestamp" } },
                level: "$level",
                application: "$application"
              },
              count: { $sum: 1 },
              first_occurrence: { $min: "$timestamp" },
              last_occurrence: { $max: "$timestamp" },
              sample_message: { $first: "$message" }
            }
          }
        ],
        targetCollection: `distributed_log_summary`,
        refreshInterval: 60000 // 1 minute
      };
    }

    return await this.implementAggregationStreams(aggregationStreams);
  }

  async setupLogRetentionPolicies(policies) {
    console.log('Setting up automated log retention policies...');

    const retentionTasks = {};

    for (const [collectionName, policy] of Object.entries(policies)) {
      retentionTasks[collectionName] = {
        retentionDays: policy.retentionDays,
        archiveToS3: policy.archiveToS3 || false,
        compressionLevel: policy.compressionLevel || 'standard',
        schedule: policy.schedule || '0 2 * * *', // Daily at 2 AM

        cleanupFunction: async () => {
          await this.executeRetentionPolicy(collectionName, policy);
        }
      };
    }

    return await this.scheduleRetentionTasks(retentionTasks);
  }

  async implementAdvancedStreaming(streamingConfigs) {
    console.log('Implementing advanced streaming capabilities...');

    const streamingServices = {};

    for (const [streamName, config] of Object.entries(streamingConfigs)) {
      streamingServices[streamName] = {
        sourceCollection: config.sourceCollection,
        filterPipeline: config.filterPipeline,
        transformFunction: config.transformFunction,
        destinations: config.destinations, // Kafka, Redis, WebSockets, etc.
        bufferSize: config.bufferSize || 1000,
        flushInterval: config.flushInterval || 1000,

        processor: async (documents) => {
          await this.processStreamingBatch(streamName, documents, config);
        }
      };
    }

    return await this.activateStreamingServices(streamingServices);
  }
}

SQL-Style Capped Collection Management with QueryLeaf

QueryLeaf provides familiar SQL syntax for MongoDB capped collection operations and management:

-- QueryLeaf capped collection management with SQL-familiar syntax

-- Create high-performance capped collection for application logging
CREATE CAPPED COLLECTION application_logs (
  size = '1GB',
  max_documents = 10000000,

  -- Document structure (for documentation)
  timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  application VARCHAR(100) NOT NULL,
  environment VARCHAR(20) DEFAULT 'production',
  level VARCHAR(10) NOT NULL CHECK (level IN ('DEBUG', 'INFO', 'WARN', 'ERROR', 'FATAL')),
  message TEXT NOT NULL,
  user_id UUID,
  session_id VARCHAR(100),
  request_id VARCHAR(100),

  -- Performance and context fields
  source_ip INET,
  user_agent TEXT,
  request_method VARCHAR(10),
  request_url TEXT,
  response_status INTEGER,
  response_time_ms INTEGER,

  -- Flexible metadata
  metadata JSONB,
  tags TEXT[]
)
WITH OPTIONS (
  write_concern = { w: 0 }, -- Maximum write performance
  tailable_cursors = true,  -- Enable real-time streaming
  compression = true,       -- Enable document compression
  streaming_enabled = true
);

-- Create real-time event stream capped collection
CREATE CAPPED COLLECTION event_stream (
  size = '512MB',
  max_documents = 5000000,

  -- Event structure
  event_type VARCHAR(100) NOT NULL,
  user_id UUID,
  session_id VARCHAR(100),
  event_data JSONB,

  -- Event context
  timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  source VARCHAR(50),
  environment VARCHAR(20),

  -- Event metadata
  correlation_id UUID,
  trace_id UUID,
  metadata JSONB
)
WITH OPTIONS (
  write_concern = { w: 0 },
  tailable_cursors = true,
  compression = false, -- Low latency over storage efficiency
  streaming_enabled = true,
  retention_hours = 24
);

-- Create system metrics capped collection
CREATE CAPPED COLLECTION system_metrics (
  size = '2GB', 
  max_documents = 20000000,

  -- Metrics structure
  metric_type VARCHAR(100) NOT NULL,
  host VARCHAR(100) NOT NULL,
  value DECIMAL(15,6) NOT NULL,
  unit VARCHAR(20),
  tags TEXT[],

  -- Timing information
  timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  collected_at TIMESTAMP,

  -- Metric metadata
  labels JSONB,
  metadata JSONB
)
WITH OPTIONS (
  write_concern = { w: 0 },
  tailable_cursors = true,
  compression = true,
  streaming_enabled = true,
  retention_hours = 168 -- 7 days
);

-- High-volume log insertion with optimized batch processing
INSERT INTO application_logs (
  application, environment, level, message, user_id, session_id, 
  request_id, source_ip, user_agent, request_method, request_url, 
  response_status, response_time_ms, metadata, tags
) VALUES 
-- Batch insert for maximum throughput
('web-api', 'production', 'INFO', 'User login successful', 
 '550e8400-e29b-41d4-a716-446655440000', 'sess_abc123', 'req_xyz789',
 '192.168.1.100', 'Mozilla/5.0...', 'POST', '/api/auth/login', 200, 150,
 JSON_OBJECT('login_method', 'email', 'ip_geolocation', 'US-CA', 'device_type', 'desktop'),
 ARRAY['authentication', 'user_activity']
),
('payment-service', 'production', 'WARN', 'Payment processing delay detected', 
 '550e8400-e29b-41d4-a716-446655440001', 'sess_def456', 'req_abc123',
 '192.168.1.101', 'Mobile App/1.2.3', 'POST', '/api/payments/process', 200, 3500,
 JSON_OBJECT('gateway', 'stripe', 'amount', 99.99, 'currency', 'USD', 'delay_reason', 'gateway_latency'),
 ARRAY['payments', 'performance', 'warning']
),
('background-worker', 'production', 'ERROR', 'Queue processing failure', 
 NULL, NULL, 'job_456789',
 NULL, NULL, NULL, NULL, NULL, NULL,
 JSON_OBJECT('queue_name', 'high_priority', 'job_type', 'email_sender', 'error_code', 'SMTP_TIMEOUT', 'retry_count', 3),
 ARRAY['background_job', 'error', 'email']
),
('web-api', 'production', 'DEBUG', 'Cache hit for user preferences', 
 '550e8400-e29b-41d4-a716-446655440002', 'sess_ghi789', 'req_def456',
 '192.168.1.102', 'React Native/1.0.0', 'GET', '/api/user/preferences', 200, 25,
 JSON_OBJECT('cache_key', 'user_prefs_12345', 'cache_ttl', 3600, 'hit_rate', 0.85),
 ARRAY['cache', 'performance', 'optimization']
)
WITH WRITE_OPTIONS (
  acknowledge = false,  -- No write acknowledgment for maximum throughput
  ordered = false,     -- Allow out-of-order inserts
  batch_size = 1000    -- Optimize batch size
);

-- Real-time event streaming insertion
INSERT INTO event_stream (
  event_type, user_id, session_id, event_data, source, 
  environment, correlation_id, trace_id, metadata
) VALUES 
('user_signup', '550e8400-e29b-41d4-a716-446655440003', 'sess_new123',
 JSON_OBJECT('email', 'newuser@example.com', 'signup_method', 'google_oauth', 'referrer', 'organic_search'),
 'web-application', 'production', UUID(), UUID(),
 JSON_OBJECT('utm_source', 'google', 'utm_medium', 'organic', 'landing_page', '/pricing')
),
('purchase_completed', '550e8400-e29b-41d4-a716-446655440003', 'sess_new123',
 JSON_OBJECT('order_id', '550e8400-e29b-41d4-a716-446655440004', 'amount', 299.99, 'currency', 'USD', 'items', 2),
 'web-application', 'production', UUID(), UUID(),
 JSON_OBJECT('payment_method', 'stripe', 'discount_applied', 50.00, 'coupon_code', 'SAVE50')
),
('api_call', '550e8400-e29b-41d4-a716-446655440005', 'sess_api789',
 JSON_OBJECT('endpoint', '/api/data/export', 'method', 'GET', 'response_size_bytes', 1048576),
 'mobile-app', 'production', UUID(), UUID(),
 JSON_OBJECT('app_version', '2.1.0', 'os', 'iOS', 'device_model', 'iPhone13')
);

-- System metrics batch insertion
INSERT INTO system_metrics (
  metric_type, host, value, unit, tags, collected_at, labels, metadata
) VALUES 
('cpu_usage', 'web-server-01', 78.5, 'percent', ARRAY['production', 'web-tier'], CURRENT_TIMESTAMP,
 JSON_OBJECT('instance_type', 'm5.large', 'az', 'us-east-1a'),
 JSON_OBJECT('cores', 2, 'architecture', 'x86_64')
),
('memory_usage', 'web-server-01', 6.2, 'gb', ARRAY['production', 'web-tier'], CURRENT_TIMESTAMP,
 JSON_OBJECT('total_memory', '8gb', 'instance_type', 'm5.large'),
 JSON_OBJECT('swap_usage', '0.1gb', 'buffer_cache', '1.2gb')
),
('disk_io_read', 'db-server-01', 1250, 'ops_per_second', ARRAY['production', 'database-tier'], CURRENT_TIMESTAMP,
 JSON_OBJECT('disk_type', 'ssd', 'size', '500gb'),
 JSON_OBJECT('queue_depth', 32, 'utilization', 0.85)
),
('network_throughput', 'web-server-01', 45.8, 'mbps', ARRAY['production', 'web-tier'], CURRENT_TIMESTAMP,
 JSON_OBJECT('interface', 'eth0', 'max_bandwidth', '1000mbps'),
 JSON_OBJECT('packets_per_second', 15000, 'error_rate', 0.001)
);

-- Advanced querying with natural ordering (capped collections maintain insertion order)
SELECT 
  timestamp,
  application,
  level,
  message,
  user_id,
  request_id,
  response_time_ms,

  -- Extract specific metadata fields
  JSON_EXTRACT(metadata, '$.login_method') as login_method,
  JSON_EXTRACT(metadata, '$.error_code') as error_code,
  JSON_EXTRACT(metadata, '$.gateway') as payment_gateway,

  -- Categorize response times
  CASE 
    WHEN response_time_ms IS NULL THEN 'N/A'
    WHEN response_time_ms <= 100 THEN 'fast'
    WHEN response_time_ms <= 500 THEN 'acceptable' 
    WHEN response_time_ms <= 2000 THEN 'slow'
    ELSE 'very_slow'
  END as performance_category,

  -- Extract tags as comma-separated string
  ARRAY_TO_STRING(tags, ', ') as tag_list

FROM application_logs
WHERE 
  -- Query recent logs (capped collections are optimized for recent data)
  timestamp >= CURRENT_TIMESTAMP - INTERVAL '1 hour'

  -- Filter by log level
  AND level IN ('ERROR', 'WARN', 'FATAL')

  -- Filter by application
  AND application IN ('web-api', 'payment-service', 'background-worker')

  -- Filter by performance issues
  AND (response_time_ms > 1000 OR level = 'ERROR')

ORDER BY 
  -- Timestamp order matches natural insertion order in capped collections
  timestamp DESC

LIMIT 1000;

-- Real-time streaming query with tailable cursor
SELECT 
  event_type,
  user_id,
  session_id,
  timestamp,

  -- Extract event-specific data
  JSON_EXTRACT(event_data, '$.email') as user_email,
  JSON_EXTRACT(event_data, '$.amount') as transaction_amount,
  JSON_EXTRACT(event_data, '$.order_id') as order_id,
  JSON_EXTRACT(event_data, '$.endpoint') as api_endpoint,

  -- Extract metadata
  JSON_EXTRACT(metadata, '$.utm_source') as traffic_source,
  JSON_EXTRACT(metadata, '$.payment_method') as payment_method,
  JSON_EXTRACT(metadata, '$.app_version') as app_version,

  -- Event categorization
  CASE 
    WHEN event_type LIKE '%signup%' THEN 'user_acquisition'
    WHEN event_type LIKE '%purchase%' THEN 'monetization'
    WHEN event_type LIKE '%api%' THEN 'api_usage'
    ELSE 'other'
  END as event_category

FROM event_stream
WHERE 
  -- Real-time event processing (last few minutes)
  timestamp >= CURRENT_TIMESTAMP - INTERVAL '5 minutes'

  -- Focus on high-value events
  AND (
    event_type IN ('user_signup', 'purchase_completed', 'subscription_upgraded')
    OR JSON_EXTRACT(event_data, '$.amount')::DECIMAL > 100
  )

ORDER BY timestamp DESC

-- Enable tailable cursor for real-time streaming
WITH CURSOR_OPTIONS (
  tailable = true,
  await_data = true,
  no_cursor_timeout = true
);

-- Aggregated metrics analysis from system_metrics capped collection
WITH recent_metrics AS (
  SELECT 
    metric_type,
    host,
    value,
    unit,
    timestamp,
    JSON_EXTRACT(labels, '$.instance_type') as instance_type,
    JSON_EXTRACT(labels, '$.az') as availability_zone,

    -- Time bucketing for aggregation
    DATE_TRUNC('minute', timestamp) as minute_bucket

  FROM system_metrics
  WHERE timestamp >= CURRENT_TIMESTAMP - INTERVAL '30 minutes'
),

aggregated_metrics AS (
  SELECT 
    minute_bucket,
    metric_type,
    host,
    instance_type,
    availability_zone,

    -- Statistical aggregations
    COUNT(*) as sample_count,
    AVG(value) as avg_value,
    MIN(value) as min_value,
    MAX(value) as max_value,
    STDDEV(value) as stddev_value,

    -- Percentile calculations
    PERCENTILE_CONT(0.50) WITHIN GROUP (ORDER BY value) as p50_value,
    PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY value) as p95_value,
    PERCENTILE_CONT(0.99) WITHIN GROUP (ORDER BY value) as p99_value,

    -- Trend analysis
    AVG(value) - LAG(AVG(value)) OVER (
      PARTITION BY metric_type, host 
      ORDER BY minute_bucket
    ) as change_from_previous,

    -- Moving averages
    AVG(AVG(value)) OVER (
      PARTITION BY metric_type, host 
      ORDER BY minute_bucket 
      ROWS BETWEEN 4 PRECEDING AND CURRENT ROW
    ) as rolling_5min_avg

  FROM recent_metrics
  GROUP BY minute_bucket, metric_type, host, instance_type, availability_zone
)

SELECT 
  minute_bucket,
  metric_type,
  host,
  instance_type,
  availability_zone,

  -- Formatted metrics
  ROUND(avg_value::NUMERIC, 2) as avg_value,
  ROUND(p95_value::NUMERIC, 2) as p95_value,
  ROUND(rolling_5min_avg::NUMERIC, 2) as rolling_avg,

  -- Alert conditions
  CASE 
    WHEN metric_type = 'cpu_usage' AND avg_value > 80 THEN 'HIGH_CPU'
    WHEN metric_type = 'memory_usage' AND avg_value > 7 THEN 'HIGH_MEMORY'  
    WHEN metric_type = 'disk_io_read' AND avg_value > 2000 THEN 'HIGH_DISK_IO'
    WHEN metric_type = 'network_throughput' AND avg_value > 800 THEN 'HIGH_NETWORK'
    ELSE 'NORMAL'
  END as alert_status,

  -- Trend indicators
  CASE 
    WHEN change_from_previous > rolling_5min_avg * 0.2 THEN 'INCREASING'
    WHEN change_from_previous < rolling_5min_avg * -0.2 THEN 'DECREASING'
    ELSE 'STABLE'
  END as trend,

  sample_count,
  CURRENT_TIMESTAMP as analysis_time

FROM aggregated_metrics
ORDER BY minute_bucket DESC, metric_type, host;

-- Capped collection maintenance and monitoring
SELECT 
  collection_name,
  is_capped,
  max_size_bytes,
  max_documents,
  current_size_bytes,
  current_document_count,

  -- Utilization calculations
  ROUND((current_size_bytes::FLOAT / max_size_bytes * 100)::NUMERIC, 2) as size_utilization_percent,
  ROUND((current_document_count::FLOAT / max_documents * 100)::NUMERIC, 2) as document_utilization_percent,

  -- Efficiency metrics
  ROUND((current_size_bytes::FLOAT / current_document_count)::NUMERIC, 0) as avg_document_size_bytes,

  -- Storage projections
  CASE 
    WHEN size_utilization_percent > 90 THEN 'NEAR_CAPACITY'
    WHEN size_utilization_percent > 75 THEN 'HIGH_UTILIZATION'
    WHEN size_utilization_percent > 50 THEN 'MODERATE_UTILIZATION'  
    ELSE 'LOW_UTILIZATION'
  END as capacity_status,

  -- Recommendations
  CASE 
    WHEN size_utilization_percent > 95 THEN 'Consider increasing collection size'
    WHEN document_utilization_percent > 95 THEN 'Consider increasing max document limit'
    WHEN size_utilization_percent < 25 AND current_document_count > 1000 THEN 'Collection may be over-provisioned'
    ELSE 'Optimal configuration'
  END as recommendation

FROM INFORMATION_SCHEMA.CAPPED_COLLECTIONS
WHERE collection_name IN ('application_logs', 'event_stream', 'system_metrics')
ORDER BY size_utilization_percent DESC;

-- Real-time log analysis with streaming aggregation
CREATE STREAMING VIEW log_error_rates AS
SELECT 
  application,
  level,
  DATE_TRUNC('minute', timestamp) as minute_bucket,

  -- Error rate calculations
  COUNT(*) as total_logs,
  COUNT(*) FILTER (WHERE level IN ('ERROR', 'FATAL')) as error_count,
  ROUND(
    (COUNT(*) FILTER (WHERE level IN ('ERROR', 'FATAL'))::FLOAT / COUNT(*) * 100)::NUMERIC, 
    2
  ) as error_rate_percent,

  -- Performance metrics  
  AVG(response_time_ms) FILTER (WHERE response_time_ms IS NOT NULL) as avg_response_time,
  PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY response_time_ms) FILTER (WHERE response_time_ms IS NOT NULL) as p95_response_time,

  -- Request analysis
  COUNT(DISTINCT user_id) FILTER (WHERE user_id IS NOT NULL) as unique_users,
  COUNT(DISTINCT session_id) FILTER (WHERE session_id IS NOT NULL) as unique_sessions,

  -- Status code distribution
  COUNT(*) FILTER (WHERE response_status BETWEEN 200 AND 299) as success_count,
  COUNT(*) FILTER (WHERE response_status BETWEEN 400 AND 499) as client_error_count,
  COUNT(*) FILTER (WHERE response_status BETWEEN 500 AND 599) as server_error_count,

  CURRENT_TIMESTAMP as computed_at

FROM application_logs
WHERE timestamp >= CURRENT_TIMESTAMP - INTERVAL '5 minutes'
GROUP BY application, level, minute_bucket
WITH REFRESH INTERVAL 30 SECONDS;

-- Cleanup and maintenance operations for capped collections
-- Note: Capped collections automatically manage space, but monitoring is still important

-- Monitor capped collection health
WITH collection_health AS (
  SELECT 
    'application_logs' as collection_name,
    COUNT(*) as current_documents,
    MIN(timestamp) as oldest_document,
    MAX(timestamp) as newest_document,
    MAX(timestamp) - MIN(timestamp) as time_span,
    AVG(LENGTH(CAST(message AS TEXT))) as avg_message_length
  FROM application_logs

  UNION ALL

  SELECT 
    'event_stream' as collection_name,
    COUNT(*) as current_documents,
    MIN(timestamp) as oldest_document, 
    MAX(timestamp) as newest_document,
    MAX(timestamp) - MIN(timestamp) as time_span,
    AVG(LENGTH(CAST(event_data AS TEXT))) as avg_event_size
  FROM event_stream

  UNION ALL

  SELECT 
    'system_metrics' as collection_name,
    COUNT(*) as current_documents,
    MIN(timestamp) as oldest_document,
    MAX(timestamp) as newest_document, 
    MAX(timestamp) - MIN(timestamp) as time_span,
    AVG(LENGTH(CAST(labels AS TEXT))) as avg_label_size
  FROM system_metrics
)

SELECT 
  collection_name,
  current_documents,
  oldest_document,
  newest_document,

  -- Time span analysis
  EXTRACT(DAYS FROM time_span) as retention_days,
  EXTRACT(HOURS FROM time_span) as retention_hours,

  -- Document characteristics
  ROUND(avg_message_length::NUMERIC, 0) as avg_content_size,

  -- Health indicators
  CASE 
    WHEN oldest_document > CURRENT_TIMESTAMP - INTERVAL '24 hours' THEN 'HIGH_TURNOVER'
    WHEN oldest_document > CURRENT_TIMESTAMP - INTERVAL '7 days' THEN 'NORMAL_TURNOVER'  
    ELSE 'LOW_TURNOVER'
  END as turnover_rate,

  -- Efficiency assessment
  CASE 
    WHEN current_documents < 1000 THEN 'UNDERUTILIZED'
    WHEN EXTRACT(HOURS FROM time_span) < 1 THEN 'VERY_HIGH_VOLUME'
    WHEN EXTRACT(HOURS FROM time_span) < 12 THEN 'HIGH_VOLUME'
    ELSE 'NORMAL_VOLUME'
  END as volume_assessment

FROM collection_health
ORDER BY current_documents DESC;

-- QueryLeaf provides comprehensive capped collection capabilities:
-- 1. SQL-familiar syntax for MongoDB capped collection creation and management
-- 2. High-performance batch insertion with optimized write concerns
-- 3. Real-time streaming queries with tailable cursor support
-- 4. Advanced aggregation and analytics on circular buffer data
-- 5. Automated monitoring and health assessment of capped collections
-- 6. Streaming materialized views for real-time log analysis
-- 7. Natural insertion order querying without additional indexing overhead
-- 8. Integrated alerting and threshold monitoring for operational intelligence
-- 9. Multi-tenant and enterprise-scale capped collection management
-- 10. Built-in space management with circular buffer efficiency patterns

Best Practices for Capped Collection Implementation

High-Performance Logging Design

Essential principles for effective capped collection deployment (a sizing and write-concern sketch follows this list):

  1. Size Planning: Calculate appropriate collection sizes based on expected throughput and retention requirements
  2. Write Optimization: Use unacknowledged writes (w:0) for maximum throughput in logging scenarios
  3. Query Patterns: Leverage natural insertion order for efficient time-based queries
  4. Streaming Integration: Implement tailable cursors for real-time log processing and analysis
  5. Monitoring Strategy: Track collection utilization and performance metrics continuously
  6. Retention Management: Design retention policies that align with business and compliance requirements

Production Deployment Strategies

Optimize capped collection deployments for enterprise environments (a recovery sketch follows this list):

  1. Capacity Planning: Model storage requirements based on peak logging volumes and retention needs
  2. Performance Tuning: Configure appropriate write concerns and batch sizes for optimal throughput
  3. Monitoring Integration: Implement comprehensive monitoring for collection health and performance
  4. Backup Strategy: Design backup approaches that account for continuous data rotation
  5. Multi-tenant Architecture: Implement tenant isolation strategies for shared logging infrastructure
  6. Disaster Recovery: Plan for collection recreation and historical data restoration procedures
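
As a companion to the disaster recovery point above, the following brief sketch recreates a capped collection on another cluster from its collStats output and backfills the most recent documents. The helper name and the 10,000-document backfill limit are assumptions for illustration.

// Minimal sketch: recreate a capped collection elsewhere using its collStats parameters
async function recreateCappedCollection(sourceDb, recoveryDb, name) {
  // Capture the capped configuration from the source collection
  const stats = await sourceDb.command({ collStats: name });
  if (!stats.capped) {
    throw new Error(`${name} is not a capped collection`);
  }

  // Recreate with identical capped parameters on the recovery cluster
  const options = { capped: true, size: stats.maxSize };
  if (stats.max) options.max = stats.max;
  await recoveryDb.createCollection(name, options);

  // Optionally backfill recent documents; reverse natural order returns newest first
  const recent = await sourceDb.collection(name)
    .find()
    .sort({ $natural: -1 })
    .limit(10000)
    .toArray();

  if (recent.length > 0) {
    // Re-insert oldest-first so the recovered collection preserves insertion order
    await recoveryDb.collection(name).insertMany(recent.reverse(), { ordered: true });
  }
}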

Conclusion

MongoDB capped collections provide an elegant and efficient solution for high-throughput logging, event streaming, and fixed-size data management scenarios where traditional database approaches struggle with performance and storage management complexity. The built-in circular buffer functionality, combined with optimized write performance and real-time streaming capabilities, makes capped collections ideal for modern applications requiring high-volume data ingestion with predictable storage characteristics.

Key MongoDB Capped Collections benefits include:

  • Fixed-Size Storage: Automatic space management with predictable storage utilization
  • High-Throughput Writes: Optimized for append-only workloads with minimal performance overhead
  • Natural Ordering: Preservation of insertion order without additional indexing requirements
  • Real-time Streaming: Native tailable cursor support for live data processing
  • Circular Buffer Efficiency: Automatic old document removal without manual maintenance processes
  • SQL Compatibility: Familiar SQL-style operations through QueryLeaf integration for accessible management

Whether you're building high-performance logging systems, real-time event processing platforms, system monitoring solutions, or any application requiring efficient circular buffer functionality, MongoDB capped collections with QueryLeaf's familiar SQL interface provide the foundation for scalable, maintainable data management.

QueryLeaf Integration: QueryLeaf automatically manages MongoDB capped collection operations while providing SQL-familiar syntax for collection creation, high-volume data insertion, real-time streaming, and monitoring. Advanced capped collection patterns, tailable cursors, and circular buffer management are seamlessly accessible through familiar SQL constructs, making high-performance logging both powerful and approachable for SQL-oriented development teams.

The combination of MongoDB's robust capped collection capabilities with SQL-style operations makes it an ideal platform for applications requiring high-throughput data ingestion and efficient storage management, ensuring your logging and event streaming solutions can scale effectively while maintaining predictable performance characteristics as data volumes grow.

MongoDB GridFS and Binary Data Management: Advanced File Storage Patterns for Scalable Document-Based Applications

Modern applications increasingly require sophisticated file storage capabilities that can handle diverse binary data types, massive file sizes, and complex metadata requirements while providing seamless integration with application data models. Traditional file storage approaches often create architectural complexity, separate storage silos, and synchronization challenges that become problematic as applications scale and evolve.

MongoDB GridFS provides a comprehensive solution for storing and managing large files within MongoDB itself, enabling seamless integration between file storage and document data while supporting advanced features like streaming, chunking, versioning, and metadata management. Unlike external file storage systems that require complex coordination mechanisms, GridFS offers native MongoDB integration with automatic sharding, replication, and backup capabilities that ensure file storage scales with application requirements.
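
Before examining the traditional approach below, a minimal GridFS round trip illustrates the core upload and download flow. The connection string, database name, bucket name, and file paths here are assumptions for this sketch, not part of the larger example that follows later.

// Minimal sketch of a GridFS upload/download round trip (names and paths are assumptions)
const fs = require('fs');
const { MongoClient, GridFSBucket } = require('mongodb');

async function gridfsRoundTrip() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const db = client.db('file_demo');
  const bucket = new GridFSBucket(db, { bucketName: 'uploads' });

  // Upload: stream a local file into GridFS along with application metadata
  await new Promise((resolve, reject) => {
    fs.createReadStream('./report.pdf')
      .pipe(bucket.openUploadStream('report.pdf', {
        metadata: { contentType: 'application/pdf', uploadedBy: 'demo-user' }
      }))
      .on('error', reject)
      .on('finish', resolve);
  });

  // Download: stream the stored file back out by filename
  await new Promise((resolve, reject) => {
    bucket.openDownloadStreamByName('report.pdf')
      .pipe(fs.createWriteStream('./report-copy.pdf'))
      .on('error', reject)
      .on('finish', resolve);
  });

  await client.close();
}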

The Traditional File Storage Challenge

Conventional file storage approaches often struggle with scalability, consistency, and integration complexity:

-- Traditional PostgreSQL file storage with external file system coordination challenges

-- File metadata table with limited integration capabilities  
CREATE TABLE file_metadata (
  file_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  filename VARCHAR(255) NOT NULL,
  original_filename VARCHAR(255) NOT NULL,
  file_path TEXT NOT NULL,
  file_size BIGINT NOT NULL,
  content_type VARCHAR(100),
  upload_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  uploaded_by UUID REFERENCES users(user_id),
  file_hash VARCHAR(64),

  -- Basic metadata fields
  storage_location VARCHAR(50) DEFAULT 'local',
  is_public BOOLEAN DEFAULT FALSE,
  access_level VARCHAR(20) DEFAULT 'private',

  -- Versioning attempt (complex to manage)
  version_number INTEGER DEFAULT 1,
  parent_file_id UUID REFERENCES file_metadata(file_id),

  -- Status tracking
  processing_status VARCHAR(20) DEFAULT 'uploaded',
  last_accessed TIMESTAMP
);

-- Separate table for file associations (loose coupling problems)
CREATE TABLE document_files (
  document_id UUID NOT NULL,
  file_id UUID NOT NULL,
  relationship_type VARCHAR(50) NOT NULL, -- attachment, image, document, etc.
  display_order INTEGER,
  is_primary BOOLEAN DEFAULT FALSE,
  added_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (document_id, file_id)
);

-- Complex query requiring multiple joins and external file system coordination
SELECT 
  d.document_id,
  d.title,
  d.content,
  d.created_at,

  -- File information requires complex aggregation
  json_agg(
    json_build_object(
      'file_id', fm.file_id,
      'filename', fm.filename,
      'original_filename', fm.original_filename,
      'file_size', fm.file_size,
      'content_type', fm.content_type,
      'file_path', fm.file_path, -- External file system path
      'relationship_type', df.relationship_type,
      'display_order', df.display_order,
      'is_primary', df.is_primary,
      'file_exists', (
        -- Expensive file system check required for each file
        CASE WHEN pg_stat_file(fm.file_path) IS NOT NULL 
             THEN TRUE ELSE FALSE END
      ),
      'accessible', (
        -- Additional access control complexity
        CASE WHEN fm.access_level = 'public' OR fm.uploaded_by = $2
             THEN TRUE ELSE FALSE END
      )
    ) ORDER BY df.display_order
  ) FILTER (WHERE fm.file_id IS NOT NULL) as files,

  -- File statistics aggregation
  COUNT(fm.file_id) as file_count,
  SUM(fm.file_size) as total_file_size,
  MAX(fm.upload_timestamp) as latest_file_upload

FROM documents d
LEFT JOIN document_files df ON d.document_id = df.document_id
LEFT JOIN file_metadata fm ON df.file_id = fm.file_id 
  AND fm.processing_status = 'completed'
WHERE d.user_id = $1 
  AND d.status = 'active'
  AND (fm.access_level = 'public' OR fm.uploaded_by = $1 OR $1 IN (
    SELECT user_id FROM document_permissions 
    WHERE document_id = d.document_id AND permission_level IN ('read', 'write', 'admin')
  ))
GROUP BY d.document_id, d.title, d.content, d.created_at
ORDER BY d.created_at DESC
LIMIT 50;

-- Problems with traditional file storage approaches:
-- 1. File system and database synchronization complexity
-- 2. Backup and replication coordination between file system and database
-- 3. Transactional integrity challenges across file system and database operations
-- 4. Complex access control implementation across multiple storage layers
-- 5. Difficulty implementing file versioning and history tracking
-- 6. Storage location management and migration complexity
-- 7. Limited file metadata search and indexing capabilities
-- 8. Performance bottlenecks with large numbers of files
-- 9. Scalability challenges with distributed file storage
-- 10. Complex error handling and recovery across multiple storage systems

-- File upload handling complexity (application layer)
/*
const multer = require('multer');
const path = require('path');
const fs = require('fs').promises;
const crypto = require('crypto');

class TraditionalFileStorage {
  constructor(storageConfig) {
    this.storagePath = storageConfig.storagePath;
    this.maxFileSize = storageConfig.maxFileSize || 10 * 1024 * 1024; // 10MB
    this.allowedTypes = storageConfig.allowedTypes || [];

    // Complex storage configuration
    this.storage = multer.diskStorage({
      destination: async (req, file, cb) => {
        const userDir = path.join(this.storagePath, req.user.id.toString());

        try {
          await fs.mkdir(userDir, { recursive: true });
          cb(null, userDir);
        } catch (error) {
          cb(error);
        }
      },

      filename: (req, file, cb) => {
        const uniqueSuffix = Date.now() + '-' + Math.round(Math.random() * 1E9);
        const fileExtension = path.extname(file.originalname);
        cb(null, file.fieldname + '-' + uniqueSuffix + fileExtension);
      }
    });
  }

  async handleFileUpload(req, res, next) {
    const upload = multer({
      storage: this.storage,
      limits: { fileSize: this.maxFileSize },
      fileFilter: (req, file, cb) => {
        if (this.allowedTypes.length > 0 && 
            !this.allowedTypes.includes(file.mimetype)) {
          cb(new Error('File type not allowed'));
          return;
        }
        cb(null, true);
      }
    }).array('files', 10);

    upload(req, res, async (err) => {
      if (err) {
        console.error('File upload error:', err);
        return res.status(400).json({ error: err.message });
      }

      try {
        // Complex file processing and database coordination
        const filePromises = req.files.map(async (file) => {
          // Calculate file hash
          const fileBuffer = await fs.readFile(file.path);
          const fileHash = crypto.createHash('sha256')
            .update(fileBuffer).digest('hex');

          // Database transaction complexity
          const client = await pool.connect();
          try {
            await client.query('BEGIN');

            // Insert file metadata
            const fileResult = await client.query(`
              INSERT INTO file_metadata (
                filename, original_filename, file_path, file_size, 
                content_type, uploaded_by, file_hash
              ) VALUES ($1, $2, $3, $4, $5, $6, $7)
              RETURNING file_id
            `, [
              file.filename,
              file.originalname,
              file.path,
              file.size,
              file.mimetype,
              req.user.id,
              fileHash
            ]);

            // Associate with document if specified
            if (req.body.document_id) {
              await client.query(`
                INSERT INTO document_files (
                  document_id, file_id, relationship_type, display_order
                ) VALUES ($1, $2, $3, $4)
              `, [
                req.body.document_id,
                fileResult.rows[0].file_id,
                req.body.relationship_type || 'attachment',
                parseInt(req.body.display_order) || 0
              ]);
            }

            await client.query('COMMIT');
            return {
              file_id: fileResult.rows[0].file_id,
              filename: file.filename,
              original_filename: file.originalname,
              file_size: file.size,
              content_type: file.mimetype
            };

          } catch (dbError) {
            await client.query('ROLLBACK');

            // Cleanup file on database error
            try {
              await fs.unlink(file.path);
            } catch (cleanupError) {
              console.error('File cleanup error:', cleanupError);
            }

            throw dbError;
          } finally {
            client.release();
          }
        });

        const uploadedFiles = await Promise.all(filePromises);
        res.json({ 
          success: true, 
          files: uploadedFiles,
          message: `${uploadedFiles.length} files uploaded successfully`
        });

      } catch (error) {
        console.error('File processing error:', error);
        res.status(500).json({ 
          error: 'File processing failed',
          details: error.message 
        });
      }
    });
  }

  async downloadFile(req, res) {
    try {
      const { file_id } = req.params;

      // Complex authorization and file access logic
      const fileQuery = `
        SELECT fm.*, dp.permission_level
        FROM file_metadata fm
        LEFT JOIN document_files df ON fm.file_id = df.file_id
        LEFT JOIN document_permissions dp ON df.document_id = dp.document_id 
          AND dp.user_id = $2
        WHERE fm.file_id = $1 
          AND (
            fm.is_public = true 
            OR fm.uploaded_by = $2 
            OR dp.permission_level IN ('read', 'write', 'admin')
          )
      `;

      const result = await pool.query(fileQuery, [file_id, req.user.id]);

      if (result.rows.length === 0) {
        return res.status(404).json({ error: 'File not found or access denied' });
      }

      const fileMetadata = result.rows[0];

      // Check if file exists on file system
      try {
        await fs.access(fileMetadata.file_path);
      } catch (error) {
        console.error('File missing from file system:', error);
        return res.status(404).json({ 
          error: 'File not found on storage system' 
        });
      }

      // Update access tracking
      await pool.query(
        'UPDATE file_metadata SET last_accessed = CURRENT_TIMESTAMP WHERE file_id = $1',
        [file_id]
      );

      // Set response headers
      res.setHeader('Content-Type', fileMetadata.content_type);
      res.setHeader('Content-Length', fileMetadata.file_size);
      res.setHeader('Content-Disposition', 
        `inline; filename="${fileMetadata.original_filename}"`);

      // Stream file
      const fileStream = require('fs').createReadStream(fileMetadata.file_path);
      fileStream.pipe(res);

      fileStream.on('error', (error) => {
        console.error('File streaming error:', error);
        if (!res.headersSent) {
          res.status(500).json({ error: 'File streaming failed' });
        }
      });

    } catch (error) {
      console.error('File download error:', error);
      res.status(500).json({ 
        error: 'File download failed',
        details: error.message 
      });
    }
  }
}

// Issues with traditional approach:
// 1. Complex file system and database coordination
// 2. Manual transaction management across storage layers
// 3. File cleanup complexity on errors
// 4. Limited streaming and chunking capabilities
// 5. Difficult backup and replication coordination
// 6. Complex access control across multiple systems
// 7. Manual file existence validation required
// 8. Limited metadata search capabilities
// 9. Scalability bottlenecks with large file counts
// 10. Error-prone manual file path management
*/

MongoDB GridFS provides comprehensive file storage with seamless integration:

// MongoDB GridFS - comprehensive file storage with advanced patterns and seamless MongoDB integration
const { MongoClient, GridFSBucket } = require('mongodb');
const { Readable } = require('stream');
const { EventEmitter } = require('events');
const crypto = require('crypto');
const mime = require('mime-types');

const client = new MongoClient('mongodb://localhost:27017');
const db = client.db('advanced_file_management_platform');

// Advanced MongoDB GridFS file storage and management system
// Extends EventEmitter so upload progress events can be emitted to callers
class AdvancedGridFSManager extends EventEmitter {
  constructor(db, options = {}) {
    super();
    this.db = db;
    this.collections = {
      documents: db.collection('documents'),
      users: db.collection('users'),
      fileAccess: db.collection('file_access_logs'),
      fileVersions: db.collection('file_versions'),
      fileSharing: db.collection('file_sharing')
    };

    // GridFS configuration for different file types and use cases
    this.gridFSBuckets = {
      // General file storage
      files: new GridFSBucket(db, { 
        bucketName: 'files',
        chunkSizeBytes: 255 * 1024 // 255KB chunks for optimal performance
      }),

      // Image storage with different chunk size for better streaming
      images: new GridFSBucket(db, {
        bucketName: 'images', 
        chunkSizeBytes: 512 * 1024 // 512KB chunks for large images
      }),

      // Video storage optimized for streaming
      videos: new GridFSBucket(db, {
        bucketName: 'videos',
        chunkSizeBytes: 1024 * 1024 // 1MB chunks for video streaming
      }),

      // Document storage for PDFs, Office files, etc.
      documents: new GridFSBucket(db, {
        bucketName: 'documents',
        chunkSizeBytes: 128 * 1024 // 128KB chunks for document files
      }),

      // Archive storage for compressed files
      archives: new GridFSBucket(db, {
        bucketName: 'archives',
        chunkSizeBytes: 2048 * 1024 // 2MB chunks for large archives
      })
    };

    // Advanced file management configuration
    this.config = {
      maxFileSize: options.maxFileSize || 100 * 1024 * 1024, // 100MB default
      allowedMimeTypes: options.allowedMimeTypes || [],
      enableVersioning: options.enableVersioning !== false,
      enableThumbnails: options.enableThumbnails !== false,
      enableVirusScanning: options.enableVirusScanning || false,
      compressionEnabled: options.compressionEnabled || false,
      encryptionEnabled: options.encryptionEnabled || false,
      accessLogging: options.accessLogging !== false,
      automaticCleanup: options.automaticCleanup !== false
    };

    // File processing pipelines
    this.processors = new Map();
    this.thumbnailGenerators = new Map();
    this.metadataExtractors = new Map();

    this.setupFileProcessors();
    this.setupMetadataExtractors();
    this.setupThumbnailGenerators();
  }

  async uploadFile(fileStream, metadata, options = {}) {
    console.log(`Uploading file: ${metadata.filename}`);

    try {
      // Validate file and metadata
      const validationResult = await this.validateFileUpload(fileStream, metadata, options);
      if (!validationResult.valid) {
        throw new Error(`File validation failed: ${validationResult.errors.join(', ')}`);
      }

      // Determine appropriate GridFS bucket based on file type
      const bucketName = this.determineBucket(metadata.contentType);
      const gridFSBucket = this.gridFSBuckets[bucketName];

      // Enhanced metadata with comprehensive file information
      const enhancedMetadata = {
        // Original metadata
        filename: metadata.filename,
        contentType: metadata.contentType || mime.lookup(metadata.filename) || 'application/octet-stream',

        // File characteristics
        originalSize: metadata.size,
        uploadedAt: new Date(),
        uploadedBy: metadata.uploadedBy,

        // Advanced metadata
        fileHash: null, // Will be calculated during upload
        bucketName: bucketName,
        chunkSize: gridFSBucket.s.options.chunkSizeBytes,

        // Application context
        documentId: metadata.documentId,
        projectId: metadata.projectId,
        organizationId: metadata.organizationId,

        // Access control and permissions
        accessLevel: metadata.accessLevel || 'private',
        permissions: metadata.permissions || {},
        isPublic: metadata.isPublic || false,

        // File relationships and organization
        category: metadata.category || 'general',
        tags: metadata.tags || [],
        description: metadata.description,

        // Versioning information
        version: options.version || 1,
        parentFileId: metadata.parentFileId,
        versionHistory: [],

        // Processing status
        processingStatus: 'uploading',
        processingSteps: [],

        // Performance and optimization
        compressionApplied: false,
        encryptionApplied: false,
        thumbnailGenerated: false,
        metadataExtracted: false,

        // Usage tracking
        downloadCount: 0,
        lastAccessed: null,
        lastModified: new Date(),

        // Storage optimization
        storageOptimized: false,
        deduplicationChecked: false,
        duplicateOf: null,

        // Custom metadata fields
        customMetadata: metadata.customMetadata || {}
      };

      // Create upload stream with comprehensive error handling
      const uploadStream = gridFSBucket.openUploadStream(metadata.filename, {
        metadata: enhancedMetadata,

        // Advanced GridFS options
        chunkSizeBytes: gridFSBucket.s.options.chunkSizeBytes,
        disableMD5: false, // Enable MD5 for file integrity
      });

      // File processing pipeline setup
      let fileHash = crypto.createHash('sha256');
      let totalBytes = 0;
      let compressionStream = null;
      let encryptionStream = null;

      // Setup processing streams if enabled
      if (this.config.compressionEnabled && this.shouldCompress(metadata.contentType)) {
        compressionStream = this.createCompressionStream();
      }

      if (this.config.encryptionEnabled && metadata.encrypted) {
        encryptionStream = this.createEncryptionStream(metadata.encryptionKey);
      }

      // Promise-based upload handling with comprehensive progress tracking
      return new Promise((resolve, reject) => {
        let processingChain = fileStream;

        // Build processing chain
        if (compressionStream) {
          processingChain = processingChain.pipe(compressionStream);
        }

        if (encryptionStream) {
          processingChain = processingChain.pipe(encryptionStream);
        }

        // File data processing during upload
        processingChain.on('data', (chunk) => {
          fileHash.update(chunk);
          totalBytes += chunk.length;

          // Emit progress events
          this.emit('uploadProgress', {
            filename: metadata.filename,
            bytesUploaded: totalBytes,
            totalBytes: metadata.size,
            progress: metadata.size ? (totalBytes / metadata.size) * 100 : 0
          });
        });

        // Stream to GridFS
        processingChain.pipe(uploadStream);

        uploadStream.on('error', async (error) => {
          console.error(`GridFS upload error for ${metadata.filename}:`, error);

          // Cleanup partial upload
          try {
            await this.cleanupFailedUpload(uploadStream.id, bucketName);
          } catch (cleanupError) {
            console.error('Upload cleanup error:', cleanupError);
          }

          reject(error);
        });

        uploadStream.on('finish', async () => {
          try {
            console.log(`File upload completed: ${metadata.filename} (${uploadStream.id})`);

            // Update file metadata with calculated information
            const calculatedHash = fileHash.digest('hex');

            // GridFS keeps file documents in the `<bucket>.files` collection
            const updateResult = await this.db.collection(`${bucketName}.files`).updateOne(
              { _id: uploadStream.id },
              {
                $set: {
                  'metadata.fileHash': calculatedHash,
                  'metadata.actualSize': totalBytes,
                  'metadata.compressionApplied': compressionStream !== null,
                  'metadata.encryptionApplied': encryptionStream !== null,
                  'metadata.processingStatus': 'uploaded',
                  'metadata.uploadCompletedAt': new Date()
                }
              }
            );

            // Check for duplicate files based on hash
            const duplicateCheck = await this.checkForDuplicates(calculatedHash, uploadStream.id);

            // Post-upload processing
            const postProcessingTasks = [
              this.extractFileMetadata(uploadStream.id, bucketName, metadata.contentType),
              this.generateThumbnails(uploadStream.id, bucketName, metadata.contentType),
              this.performVirusScanning(uploadStream.id, bucketName),
              this.logFileAccess(uploadStream.id, 'upload', metadata.uploadedBy),
              this.updateFileVersionHistory(uploadStream.id, metadata.parentFileId),
              this.triggerFileWebhooks(uploadStream.id, 'file.uploaded')
            ];

            // Execute post-processing tasks
            const processingResults = await Promise.allSettled(postProcessingTasks);

            // Track processing failures
            const failedProcessing = processingResults
              .filter(result => result.status === 'rejected')
              .map(result => result.reason);

            if (failedProcessing.length > 0) {
              console.warn(`Post-processing warnings for ${metadata.filename}:`, failedProcessing);
            }

            // Final file information
            const fileInfo = {
              fileId: uploadStream.id,
              filename: metadata.filename,
              contentType: metadata.contentType,
              size: totalBytes,
              uploadedAt: new Date(),
              fileHash: calculatedHash,
              bucketName: bucketName,
              gridFSId: uploadStream.id,

              // Processing results
              processingStatus: failedProcessing.length === 0 ? 'completed' : 'completed_with_warnings',
              processingWarnings: failedProcessing,

              // Duplication information
              duplicateInfo: duplicateCheck,

              // URLs for file access
              downloadUrl: this.generateDownloadUrl(uploadStream.id, bucketName),
              streamUrl: this.generateStreamUrl(uploadStream.id, bucketName),
              thumbnailUrl: metadata.contentType.startsWith('image/') ? 
                this.generateThumbnailUrl(uploadStream.id) : null,

              // Metadata
              metadata: enhancedMetadata
            };

            // Update processing status
            await this.collections.files.updateOne(
              { _id: uploadStream.id },
              {
                $set: {
                  'metadata.processingStatus': fileInfo.processingStatus,
                  'metadata.processingCompletedAt': new Date(),
                  'metadata.processingWarnings': failedProcessing
                }
              }
            );

            resolve(fileInfo);

          } catch (error) {
            console.error(`Post-upload processing error for ${metadata.filename}:`, error);
            reject(error);
          }
        });
      });

    } catch (error) {
      console.error(`File upload error for ${metadata.filename}:`, error);
      throw error;
    }
  }

  async downloadFile(fileId, options = {}) {
    console.log(`Downloading file: ${fileId}`);

    try {
      // Get file information
      const fileInfo = await this.getFileInfo(fileId);
      if (!fileInfo) {
        throw new Error(`File not found: ${fileId}`);
      }

      // Authorization check
      if (options.userId) {
        const authorized = await this.checkFileAccess(fileId, options.userId, 'read');
        if (!authorized) {
          throw new Error('Access denied');
        }
      }

      // Determine appropriate bucket
      const bucketName = fileInfo.metadata?.bucketName || 'files';
      const gridFSBucket = this.gridFSBuckets[bucketName];

      // Create download stream
      const downloadStream = gridFSBucket.openDownloadStream(fileId);

      // Setup stream processing if needed
      let processingChain = downloadStream;

      if (fileInfo.metadata?.encryptionApplied && options.decryptionKey) {
        const decryptionStream = this.createDecryptionStream(options.decryptionKey);
        processingChain = processingChain.pipe(decryptionStream);
      }

      if (fileInfo.metadata?.compressionApplied) {
        const decompressionStream = this.createDecompressionStream();
        processingChain = processingChain.pipe(decompressionStream);
      }

      // Log file access
      if (options.userId) {
        await this.logFileAccess(fileId, 'download', options.userId);

        // Update access statistics
        await this.collections.files.updateOne(
          { _id: fileId },
          {
            $inc: { 'metadata.downloadCount': 1 },
            $set: { 'metadata.lastAccessed': new Date() }
          }
        );
      }

      // Return stream with file information
      return {
        stream: processingChain,
        fileInfo: fileInfo,
        contentType: fileInfo.contentType,
        contentLength: fileInfo.length,
        filename: fileInfo.filename,

        // Additional headers for HTTP response
        headers: {
          'Content-Type': fileInfo.contentType,
          'Content-Length': fileInfo.length,
          'Content-Disposition': `${options.disposition || 'inline'}; filename="${fileInfo.filename}"`,
          'Cache-Control': options.cacheControl || 'private, max-age=3600',
          'ETag': fileInfo.metadata?.fileHash,
          'Last-Modified': fileInfo.uploadDate.toUTCString()
        }
      };

    } catch (error) {
      console.error(`File download error for ${fileId}:`, error);
      throw error;
    }
  }

  async searchFiles(query, options = {}) {
    console.log(`Searching files with query:`, query);

    try {
      // Build comprehensive search pipeline
      const searchPipeline = [
        // Stage 1: Initial filtering based on search criteria
        {
          $match: {
            ...this.buildFileSearchFilter(query, options),
            // Ensure we're searching in the files collection
            filename: { $exists: true }
          }
        },

        // Stage 2: Add computed fields for search relevance
        {
          $addFields: {
            // Text search relevance scoring
            textScore: {
              $cond: {
                if: { $ne: [query.text, null] },
                then: {
                  $add: [
                    // Filename match weight
                    { $cond: { 
                      if: { $regexMatch: { input: '$filename', regex: query.text, options: 'i' } },
                      then: 10, else: 0 
                    }},
                    // Description match weight
                    { $cond: { 
                      if: { $regexMatch: { input: '$metadata.description', regex: query.text, options: 'i' } },
                      then: 5, else: 0 
                    }},
                    // Tags match weight
                    { $cond: { 
                      if: { $in: [query.text, '$metadata.tags'] },
                      then: 8, else: 0 
                    }},
                    // Category match weight
                    { $cond: { 
                      if: { $regexMatch: { input: '$metadata.category', regex: query.text, options: 'i' } },
                      then: 3, else: 0 
                    }}
                  ]
                },
                else: 0
              }
            },

            // Recency scoring (newer files get higher scores)
            recencyScore: {
              $divide: [
                { $subtract: [new Date(), '$uploadDate'] },
                86400000 // Convert to days
              ]
            },

            // Popularity scoring based on download count
            popularityScore: {
              $multiply: [
                { $log10: { $add: ['$metadata.downloadCount', 1] } },
                2
              ]
            },

            // Size category for filtering
            sizeCategory: {
              $switch: {
                branches: [
                  { case: { $lt: ['$length', 1024 * 1024] }, then: 'small' }, // < 1MB
                  { case: { $lt: ['$length', 10 * 1024 * 1024] }, then: 'medium' }, // < 10MB
                  { case: { $lt: ['$length', 100 * 1024 * 1024] }, then: 'large' }, // < 100MB
                ],
                default: 'very_large'
              }
            }
          }
        },

        // Stage 3: Apply advanced filtering
        {
          $match: {
            ...(query.sizeCategory && { sizeCategory: query.sizeCategory }),
            ...(query.minScore && { textScore: { $gte: query.minScore } })
          }
        },

        // Stage 4: Lookup related document information if file is associated
        {
          $lookup: {
            from: 'documents',
            localField: 'metadata.documentId',
            foreignField: '_id',
            as: 'documentInfo',
            pipeline: [
              {
                $project: {
                  title: 1,
                  status: 1,
                  createdBy: 1,
                  projectId: 1
                }
              }
            ]
          }
        },

        // Stage 5: Lookup user information for uploaded_by
        {
          $lookup: {
            from: 'users',
            localField: 'metadata.uploadedBy',
            foreignField: '_id',
            as: 'uploaderInfo',
            pipeline: [
              {
                $project: {
                  name: 1,
                  email: 1,
                  avatar: 1
                }
              }
            ]
          }
        },

        // Stage 6: Calculate final relevance score
        {
          $addFields: {
            relevanceScore: {
              $add: [
                '$textScore',
                { $divide: ['$popularityScore', 4] },
                { $cond: { if: { $lt: ['$recencyScore', 30] }, then: 5, else: 0 } }, // Bonus for files < 30 days old
                { $cond: { if: { $gt: [{ $size: '$documentInfo' }, 0] }, then: 2, else: 0 } } // Bonus for associated files
              ]
            }
          }
        },

        // Stage 7: Project final result structure
        {
          $project: {
            fileId: '$_id',
            filename: 1,
            contentType: 1,
            length: 1,
            uploadDate: 1,

            // Metadata information
            category: '$metadata.category',
            tags: '$metadata.tags',
            description: '$metadata.description',
            accessLevel: '$metadata.accessLevel',
            isPublic: '$metadata.isPublic',

            // File characteristics
            fileHash: '$metadata.fileHash',
            bucketName: '$metadata.bucketName',
            downloadCount: '$metadata.downloadCount',
            lastAccessed: '$metadata.lastAccessed',

            // Processing status
            processingStatus: '$metadata.processingStatus',
            thumbnailGenerated: '$metadata.thumbnailGenerated',

            // Computed scores
            textScore: 1,
            popularityScore: 1,
            relevanceScore: 1,
            sizeCategory: 1,

            // Related information
            documentInfo: { $arrayElemAt: ['$documentInfo', 0] },
            uploaderInfo: { $arrayElemAt: ['$uploaderInfo', 0] },

            // Access URLs
            downloadUrl: {
              $concat: [
                '/api/files/',
                { $toString: '$_id' },
                '/download'
              ]
            },

            thumbnailUrl: {
              $cond: {
                if: { $eq: ['$metadata.thumbnailGenerated', true] },
                then: {
                  $concat: [
                    '/api/files/',
                    { $toString: '$_id' },
                    '/thumbnail'
                  ]
                },
                else: null
              }
            },

            // Formatted file information
            formattedSize: {
              $switch: {
                branches: [
                  { 
                    case: { $lt: ['$length', 1024] },
                    then: { $concat: [{ $toString: '$length' }, ' bytes'] }
                  },
                  { 
                    case: { $lt: ['$length', 1024 * 1024] },
                    then: { 
                      $concat: [
                        { $toString: { $round: [{ $divide: ['$length', 1024] }, 1] } },
                        ' KB'
                      ]
                    }
                  },
                  { 
                    case: { $lt: ['$length', 1024 * 1024 * 1024] },
                    then: { 
                      $concat: [
                        { $toString: { $round: [{ $divide: ['$length', 1024 * 1024] }, 1] } },
                        ' MB'
                      ]
                    }
                  }
                ],
                default: { 
                  $concat: [
                    { $toString: { $round: [{ $divide: ['$length', 1024 * 1024 * 1024] }, 1] } },
                    ' GB'
                  ]
                }
              }
            }
          }
        },

        // Stage 8: Sort by relevance and apply pagination
        { $sort: this.buildSearchSort(options.sortBy, options.sortOrder) },
        { $skip: options.skip || 0 },
        { $limit: options.limit || 20 }
      ];

      // Execute search pipeline
      const searchResults = await this.db.collection('fs.files').aggregate(searchPipeline).toArray();

      // Get total count for pagination
      const totalCountPipeline = [
        { $match: this.buildFileSearchFilter(query, options) },
        { $count: 'total' }
      ];

      const countResult = await this.db.collection('fs.files').aggregate(totalCountPipeline).toArray();
      const totalCount = countResult.length > 0 ? countResult[0].total : 0;

      return {
        files: searchResults,
        pagination: {
          total: totalCount,
          page: Math.floor((options.skip || 0) / (options.limit || 20)) + 1,
          limit: options.limit || 20,
          pages: Math.ceil(totalCount / (options.limit || 20))
        },
        query: query,
        searchTime: Date.now() - (options.startTime || Date.now()),

        // Search analytics
        analytics: {
          averageRelevanceScore: searchResults.length > 0 ? 
            searchResults.reduce((sum, file) => sum + file.relevanceScore, 0) / searchResults.length : 0,
          categoryDistribution: this.analyzeCategoryDistribution(searchResults),
          sizeDistribution: this.analyzeSizeDistribution(searchResults),
          contentTypeDistribution: this.analyzeContentTypeDistribution(searchResults)
        }
      };

    } catch (error) {
      console.error('File search error:', error);
      throw error;
    }
  }

  async manageFileVersions(fileId, operation, options = {}) {
    console.log(`Managing file versions for ${fileId}, operation: ${operation}`);

    try {
      switch (operation) {
        case 'create_version':
          return await this.createFileVersion(fileId, options);

        case 'list_versions':
          return await this.listFileVersions(fileId, options);

        case 'restore_version':
          return await this.restoreFileVersion(fileId, options.versionId);

        case 'delete_version':
          return await this.deleteFileVersion(fileId, options.versionId);

        case 'compare_versions':
          return await this.compareFileVersions(fileId, options.versionId1, options.versionId2);

        default:
          throw new Error(`Unknown version operation: ${operation}`);
      }
    } catch (error) {
      console.error(`File version management error for ${fileId}:`, error);
      throw error;
    }
  }

  async createFileVersion(originalFileId, options) {
    console.log(`Creating new version for file: ${originalFileId}`);

    // Sessions are created on the MongoClient (assumed reachable here via this.db.client)
    const session = this.db.client.startSession();
    let newVersionInfo;

    try {
      await session.withTransaction(async () => {
        // Get original file information
        const originalFile = await this.getFileInfo(originalFileId);
        if (!originalFile) {
          throw new Error(`Original file not found: ${originalFileId}`);
        }

        // Create new version metadata
        const versionMetadata = {
          ...originalFile.metadata,
          version: (originalFile.metadata?.version || 1) + 1,
          parentFileId: originalFileId,
          versionCreatedAt: new Date(),
          versionCreatedBy: options.userId,
          versionNotes: options.versionNotes,
          isCurrentVersion: true
        };

        // Upload the new version (note: uploadFile streams to GridFS outside this session)
        newVersionInfo = await this.uploadFile(options.fileStream, versionMetadata, {
          version: versionMetadata.version
        });

        // Update original file to mark as not current
        await this.collections.files.updateOne(
          { _id: originalFileId },
          {
            $set: { 'metadata.isCurrentVersion': false },
            $push: {
              'metadata.versionHistory': {
                versionId: newVersionInfo.fileId,
                version: versionMetadata.version,
                createdAt: versionMetadata.versionCreatedAt,
                createdBy: versionMetadata.versionCreatedBy,
                notes: versionMetadata.versionNotes
              }
            }
          },
          { session }
        );

        // Update version references in related documents
        if (originalFile.metadata?.documentId) {
          await this.collections.documents.updateMany(
            { 'files.fileId': originalFileId },
            {
              $set: { 'files.$.fileId': newVersionInfo.fileId },
              $push: {
                'files.$.versionHistory': {
                  previousFileId: originalFileId,
                  newFileId: newVersionInfo.fileId,
                  versionedAt: new Date(),
                  versionedBy: options.userId
                }
              }
            },
            { session }
          );
        }

      });

      return newVersionInfo;

    } catch (error) {
      console.error(`File version creation error for ${originalFileId}:`, error);
      throw error;
    } finally {
      await session.endSession();
    }
  }

  // Helper methods for advanced file processing

  determineBucket(contentType) {
    if (contentType.startsWith('image/')) return 'images';
    if (contentType.startsWith('video/')) return 'videos';
    if (contentType.includes('pdf') || 
        contentType.includes('document') || 
        contentType.includes('word') || 
        contentType.includes('excel') || 
        contentType.includes('powerpoint')) return 'documents';
    if (contentType.includes('zip') || 
        contentType.includes('tar') || 
        contentType.includes('compress')) return 'archives';
    return 'files';
  }

  async validateFileUpload(fileStream, metadata, options) {
    const errors = [];

    // Size validation
    if (metadata.size > this.config.maxFileSize) {
      errors.push(`File size ${metadata.size} exceeds maximum ${this.config.maxFileSize}`);
    }

    // MIME type validation
    if (this.config.allowedMimeTypes.length > 0 && 
        !this.config.allowedMimeTypes.includes(metadata.contentType)) {
      errors.push(`Content type ${metadata.contentType} is not allowed`);
    }

    // Filename validation
    if (!metadata.filename || metadata.filename.trim().length === 0) {
      errors.push('Filename is required');
    }

    return { valid: errors.length === 0, errors };
  }

  buildFileSearchFilter(query, options) {
    const filter = {};

    // Text search across multiple fields
    if (query.text) {
      filter.$or = [
        { filename: { $regex: query.text, $options: 'i' } },
        { 'metadata.description': { $regex: query.text, $options: 'i' } },
        { 'metadata.tags': { $in: [new RegExp(query.text, 'i')] } },
        { 'metadata.category': { $regex: query.text, $options: 'i' } }
      ];
    }

    // Content type filtering
    if (query.contentType) {
      if (Array.isArray(query.contentType)) {
        filter.contentType = { $in: query.contentType };
      } else {
        filter.contentType = { $regex: query.contentType, $options: 'i' };
      }
    }

    // Date range filtering
    if (query.uploadedAfter || query.uploadedBefore) {
      filter.uploadDate = {};
      if (query.uploadedAfter) filter.uploadDate.$gte = new Date(query.uploadedAfter);
      if (query.uploadedBefore) filter.uploadDate.$lte = new Date(query.uploadedBefore);
    }

    // Size range filtering
    if (query.minSize || query.maxSize) {
      filter.length = {};
      if (query.minSize) filter.length.$gte = query.minSize;
      if (query.maxSize) filter.length.$lte = query.maxSize;
    }

    // Category filtering
    if (query.category) {
      filter['metadata.category'] = query.category;
    }

    // Tags filtering
    if (query.tags) {
      const tags = Array.isArray(query.tags) ? query.tags : [query.tags];
      filter['metadata.tags'] = { $in: tags };
    }

    // Uploader filtering
    if (query.uploadedBy) {
      filter['metadata.uploadedBy'] = query.uploadedBy;
    }

    // Access level filtering
    if (query.accessLevel) {
      filter['metadata.accessLevel'] = query.accessLevel;
    }

    // Document association filtering
    if (query.documentId) {
      filter['metadata.documentId'] = query.documentId;
    }

    // Processing status filtering
    if (query.processingStatus) {
      filter['metadata.processingStatus'] = query.processingStatus;
    }

    return filter;
  }

  buildSearchSort(sortBy = 'relevance', sortOrder = 'desc') {
    const sortDirection = sortOrder === 'desc' ? -1 : 1;

    switch (sortBy) {
      case 'relevance':
        return { relevanceScore: -1, uploadDate: -1 };
      case 'name':
        return { filename: sortDirection };
      case 'size':
        return { length: sortDirection };
      case 'date':
        return { uploadDate: sortDirection };
      case 'popularity':
        return { popularityScore: -1, uploadDate: -1 };
      case 'type':
        return { contentType: sortDirection, filename: 1 };
      default:
        return { relevanceScore: -1, uploadDate: -1 };
    }
  }

  setupFileProcessors() {
    // Image processing
    this.processors.set('image', async (fileId, contentType) => {
      // Implement image processing (resize, optimize, etc.)
      console.log(`Processing image: ${fileId}`);
    });

    // Document processing
    this.processors.set('document', async (fileId, contentType) => {
      // Implement document processing (text extraction, etc.)
      console.log(`Processing document: ${fileId}`);
    });

    // Video processing
    this.processors.set('video', async (fileId, contentType) => {
      // Implement video processing (thumbnail, compression, etc.)
      console.log(`Processing video: ${fileId}`);
    });
  }

  setupThumbnailGenerators() {
    // Image thumbnail generation
    this.thumbnailGenerators.set('image', async (fileId) => {
      console.log(`Generating image thumbnail for: ${fileId}`);
      // Implement image thumbnail generation
    });

    // PDF thumbnail generation
    this.thumbnailGenerators.set('pdf', async (fileId) => {
      console.log(`Generating PDF thumbnail for: ${fileId}`);
      // Implement PDF thumbnail generation
    });
  }

  setupMetadataExtractors() {
    // Image metadata extraction (EXIF, etc.)
    this.metadataExtractors.set('image', async (fileId) => {
      console.log(`Extracting image metadata for: ${fileId}`);
      // Implement EXIF and other metadata extraction
    });

    // Document metadata extraction
    this.metadataExtractors.set('document', async (fileId) => {
      console.log(`Extracting document metadata for: ${fileId}`);
      // Implement document properties extraction
    });
  }
}

// Benefits of MongoDB GridFS Advanced File Management:
// - Seamless integration with MongoDB data model and queries
// - Automatic file chunking and streaming for large files
// - Built-in file versioning and history tracking
// - Comprehensive metadata management and search capabilities
// - Advanced file processing pipelines and thumbnail generation
// - Integrated access control and permission management
// - Automatic backup and replication with MongoDB cluster
// - Sophisticated file search with relevance scoring
// - Real-time file access logging and analytics
// - SQL-compatible file operations through QueryLeaf integration

module.exports = {
  AdvancedGridFSManager
};

Understanding MongoDB GridFS Architecture

Advanced File Storage Patterns and Integration Strategies

Implement sophisticated GridFS patterns for production-scale applications:

// Production-ready GridFS management with advanced patterns and optimization
class ProductionGridFSManager extends AdvancedGridFSManager {
  constructor(db, productionConfig) {
    super(db);

    this.productionConfig = {
      ...productionConfig,
      replicationEnabled: true,
      shardingOptimized: true,
      compressionEnabled: true,
      encryptionEnabled: productionConfig.encryptionEnabled || false,
      cdnIntegration: productionConfig.cdnIntegration || false,
      virusScanning: productionConfig.virusScanning || false,
      contentDelivery: productionConfig.contentDelivery || false
    };

    this.setupProductionOptimizations();
    this.setupMonitoringAndAlerts();
    this.setupCDNIntegration();
  }

  async implementFileStorageStrategy(storageRequirements) {
    console.log('Implementing production file storage strategy...');

    const strategy = {
      storageDistribution: await this.designStorageDistribution(storageRequirements),
      performanceOptimization: await this.implementPerformanceOptimizations(storageRequirements),
      securityMeasures: await this.implementSecurityMeasures(storageRequirements),
      monitoringSetup: await this.setupComprehensiveMonitoring(storageRequirements),
      backupStrategy: await this.designBackupStrategy(storageRequirements)
    };

    return {
      strategy: strategy,
      implementation: await this.executeStorageStrategy(strategy),
      validation: await this.validateStorageImplementation(strategy),
      documentation: this.generateStorageDocumentation(strategy)
    };
  }

  async setupAdvancedFileCaching(cachingConfig) {
    console.log('Setting up advanced file caching system...');

    const cachingStrategy = {
      // Multi-tier caching
      tiers: [
        {
          name: 'memory',
          type: 'redis',
          capacity: '2GB',
          ttl: 3600, // 1 hour
          priority: ['images', 'thumbnails', 'frequently_accessed']
        },
        {
          name: 'disk',
          type: 'filesystem',
          capacity: '100GB',
          ttl: 86400, // 24 hours
          priority: ['documents', 'archives', 'medium_access']
        },
        {
          name: 'cdn',
          type: 'cloudfront',
          capacity: 'unlimited',
          ttl: 604800, // 7 days
          priority: ['public_files', 'static_content']
        }
      ],

      // Intelligent prefetching
      prefetchingRules: [
        {
          condition: 'user_documents',
          action: 'prefetch_related_files',
          priority: 'high'
        },
        {
          condition: 'popular_content',
          action: 'cache_preemptively',
          priority: 'medium'
        }
      ],

      // Cache invalidation strategies
      invalidationRules: [
        {
          trigger: 'file_updated',
          action: 'invalidate_all_versions',
          scope: 'global'
        },
        {
          trigger: 'permission_changed',
          action: 'invalidate_user_cache',
          scope: 'user_specific'
        }
      ]
    };

    return await this.implementCachingStrategy(cachingStrategy);
  }

  async manageFileLifecycle(lifecycleConfig) {
    console.log('Managing file lifecycle policies...');

    const lifecyclePolicies = {
      // Automatic archival policies
      archival: [
        {
          name: 'inactive_files',
          condition: { lastAccessed: { $lt: new Date(Date.now() - 90 * 24 * 60 * 60 * 1000) } },
          action: 'move_to_cold_storage',
          schedule: 'daily'
        },
        {
          name: 'large_old_files',
          condition: { 
            uploadDate: { $lt: new Date(Date.now() - 365 * 24 * 60 * 60 * 1000) },
            length: { $gt: 100 * 1024 * 1024 }
          },
          action: 'compress_and_archive',
          schedule: 'weekly'
        }
      ],

      // Cleanup policies
      cleanup: [
        {
          name: 'temp_files',
          condition: { 
            'metadata.category': 'temporary',
            uploadDate: { $lt: new Date(Date.now() - 24 * 60 * 60 * 1000) }
          },
          action: 'delete',
          schedule: 'hourly'
        },
        {
          name: 'orphaned_versions',
          condition: { 'metadata.parentFileId': { $exists: true, $nin: [] } },
          action: 'cleanup_orphaned',
          schedule: 'daily'
        }
      ],

      // Optimization policies
      optimization: [
        {
          name: 'duplicate_detection',
          condition: { 'metadata.deduplicationChecked': { $ne: true } },
          action: 'check_duplicates',
          schedule: 'continuous'
        },
        {
          name: 'compression_optimization',
          condition: { 
            contentType: { $in: ['image/png', 'image/jpeg', 'image/tiff'] },
            'metadata.compressionApplied': { $ne: true },
            length: { $gt: 1024 * 1024 }
          },
          action: 'apply_compression',
          schedule: 'daily'
        }
      ]
    };

    return await this.implementLifecyclePolicies(lifecyclePolicies);
  }
}

SQL-Style GridFS Management with QueryLeaf

QueryLeaf provides familiar SQL syntax for MongoDB GridFS operations and file management:

-- QueryLeaf advanced file storage and GridFS management with SQL-familiar syntax

-- Create file storage with GridFS configuration
CREATE FILE_STORAGE advanced_file_system 
USING GRIDFS (
  bucket_name = 'application_files',
  chunk_size = 255 * 1024, -- 255KB chunks for optimal performance

  -- Advanced GridFS configuration
  enable_md5 = true,
  enable_compression = true,
  compression_algorithm = 'zlib',
  encryption_enabled = false,

  -- Storage optimization
  auto_deduplication = true,
  thumbnail_generation = true,
  metadata_extraction = true,

  -- Performance tuning
  read_preference = 'secondaryPreferred',
  write_concern = { w: 'majority', j: true },
  max_time_ms = 30000
);

-- Upload files with comprehensive metadata management
INSERT INTO files (
  filename,
  content_type,
  file_data,
  metadata
) VALUES (
  'project_proposal.pdf',
  'application/pdf',
  LOAD_FILE('/path/to/project_proposal.pdf'),
  {
    category: 'documents',
    tags: ['project', 'proposal', 'business'],
    description: 'Q4 project proposal document',
    access_level: 'team',
    project_id: '507f1f77bcf86cd799439011',
    uploaded_by: '507f1f77bcf86cd799439012',

    -- Custom metadata fields
    document_type: 'proposal',
    confidentiality: 'internal',
    review_required: true,
    expiry_date: DATE_ADD(CURRENT_DATE, INTERVAL 1 YEAR),

    -- Processing options
    generate_thumbnail: true,
    extract_text: true,
    enable_versioning: true,
    compression_level: 'medium'
  }
),
(
  'user_avatar.jpg',
  'image/jpeg', 
  LOAD_FILE('/path/to/avatar.jpg'),
  {
    category: 'images',
    tags: ['avatar', 'profile', 'user'],
    description: 'User profile avatar image',
    access_level: 'public',
    user_id: '507f1f77bcf86cd799439013',
    uploaded_by: '507f1f77bcf86cd799439013',

    -- Image-specific metadata
    image_type: 'avatar',
    max_width: 200,
    max_height: 200,
    quality: 85,

    -- Processing options  
    generate_thumbnails: ['small', 'medium', 'large'],
    extract_exif: true,
    auto_optimize: true
  }
);

-- Advanced file search with comprehensive filtering and relevance scoring
SELECT 
  f.file_id,
  f.filename,
  f.content_type,
  f.file_size,
  f.upload_date,
  f.download_count,
  f.metadata,

  -- File access URLs
  CONCAT('/api/files/', f.file_id, '/download') as download_url,
  CONCAT('/api/files/', f.file_id, '/stream') as stream_url,

  -- Conditional thumbnail URL
  CASE 
    WHEN f.content_type LIKE 'image/%' AND f.metadata.thumbnail_generated = true THEN
      CONCAT('/api/files/', f.file_id, '/thumbnail')
    ELSE NULL
  END as thumbnail_url,

  -- File size formatting
  CASE 
    WHEN f.file_size < 1024 THEN CONCAT(f.file_size, ' bytes')
    WHEN f.file_size < 1024 * 1024 THEN CONCAT(ROUND(f.file_size / 1024.0, 1), ' KB')
    WHEN f.file_size < 1024 * 1024 * 1024 THEN CONCAT(ROUND(f.file_size / (1024.0 * 1024), 1), ' MB')
    ELSE CONCAT(ROUND(f.file_size / (1024.0 * 1024 * 1024), 1), ' GB')
  END as formatted_size,

  -- Search relevance scoring
  (
    -- Filename match weight (highest)
    CASE WHEN f.filename ILIKE '%proposal%' THEN 10 ELSE 0 END +

    -- Description match weight
    CASE WHEN f.metadata.description ILIKE '%proposal%' THEN 5 ELSE 0 END +

    -- Tags match weight  
    CASE WHEN 'proposal' = ANY(f.metadata.tags) THEN 8 ELSE 0 END +

    -- Category match weight
    CASE WHEN f.metadata.category ILIKE '%proposal%' THEN 3 ELSE 0 END +

    -- Recency bonus (files uploaded within last 30 days)
    CASE WHEN f.upload_date > CURRENT_DATE - INTERVAL '30 days' THEN 5 ELSE 0 END +

    -- Popularity bonus (files with high download count)
    LEAST(LOG(f.download_count + 1) * 2, 10) +

    -- Access level bonus (public files get slight boost)
    CASE WHEN f.metadata.access_level = 'public' THEN 2 ELSE 0 END

  ) as relevance_score,

  -- File status and health
  CASE 
    WHEN f.metadata.processing_status = 'completed' THEN 'ready'
    WHEN f.metadata.processing_status = 'processing' THEN 'processing'  
    WHEN f.metadata.processing_status = 'failed' THEN 'error'
    ELSE 'unknown'
  END as file_status,

  -- Associated document information
  d.title as document_title,
  d.project_id,

  -- Uploader information
  u.name as uploaded_by_name,
  u.email as uploaded_by_email

FROM files f
LEFT JOIN documents d ON f.metadata.document_id = d.document_id
LEFT JOIN users u ON f.metadata.uploaded_by = u.user_id

WHERE 
  -- Text search across multiple fields
  (
    f.filename ILIKE '%proposal%' 
    OR f.metadata.description ILIKE '%proposal%'
    OR 'proposal' = ANY(f.metadata.tags)
    OR f.metadata.category ILIKE '%proposal%'
  )

  -- Content type filtering
  AND f.content_type IN ('application/pdf', 'application/msword', 'text/plain')

  -- Date range filtering
  AND f.upload_date >= CURRENT_DATE - INTERVAL '1 year'

  -- Size filtering (between 1KB and 50MB)
  AND f.file_size BETWEEN 1024 AND 50 * 1024 * 1024

  -- Access level filtering (user can access)
  AND (
    f.metadata.access_level = 'public'
    OR f.metadata.uploaded_by = CURRENT_USER_ID()
    OR CURRENT_USER_ID() IN (
      SELECT user_id FROM file_permissions 
      WHERE file_id = f.file_id AND permission_level IN ('read', 'write', 'admin')
    )
  )

  -- Processing status filtering
  AND f.metadata.processing_status = 'completed'

  -- Project-based filtering (if specified)
  AND (f.metadata.project_id = '507f1f77bcf86cd799439011' OR f.metadata.project_id IS NULL)

ORDER BY 
  relevance_score DESC,
  f.download_count DESC,
  f.upload_date DESC

LIMIT 20 OFFSET 0;

-- File versioning management with comprehensive history tracking
WITH file_versions AS (
  SELECT 
    f.file_id,
    f.filename,
    f.content_type,
    f.file_size,
    f.upload_date,
    f.metadata,

    -- Version information
    f.metadata.version as version_number,
    f.metadata.parent_file_id,
    f.metadata.is_current_version,
    f.metadata.version_notes,

    -- Version relationships
    LAG(f.file_id) OVER (
      PARTITION BY COALESCE(f.metadata.parent_file_id, f.file_id)
      ORDER BY f.metadata.version
    ) as previous_version_id,

    LEAD(f.file_id) OVER (
      PARTITION BY COALESCE(f.metadata.parent_file_id, f.file_id)
      ORDER BY f.metadata.version  
    ) as next_version_id,

    -- Version statistics
    COUNT(*) OVER (
      PARTITION BY COALESCE(f.metadata.parent_file_id, f.file_id)
    ) as total_versions,

    ROW_NUMBER() OVER (
      PARTITION BY COALESCE(f.metadata.parent_file_id, f.file_id)
      ORDER BY f.metadata.version DESC
    ) as version_rank

  FROM files f
  WHERE f.metadata.version IS NOT NULL
),

version_changes AS (
  SELECT 
    fv.*,

    -- Size change analysis
    fv.file_size - LAG(fv.file_size) OVER (
      PARTITION BY COALESCE(fv.metadata.parent_file_id, fv.file_id)
      ORDER BY fv.version_number
    ) as size_change,

    -- Time between versions
    fv.upload_date - LAG(fv.upload_date) OVER (
      PARTITION BY COALESCE(fv.metadata.parent_file_id, fv.file_id)  
      ORDER BY fv.version_number
    ) as time_since_previous_version,

    -- Version change type
    CASE 
      WHEN LAG(fv.file_size) OVER (
        PARTITION BY COALESCE(fv.metadata.parent_file_id, fv.file_id)
        ORDER BY fv.version_number
      ) IS NULL THEN 'initial'
      WHEN fv.file_size > LAG(fv.file_size) OVER (
        PARTITION BY COALESCE(fv.metadata.parent_file_id, fv.file_id)
        ORDER BY fv.version_number
      ) THEN 'expansion'
      WHEN fv.file_size < LAG(fv.file_size) OVER (
        PARTITION BY COALESCE(fv.metadata.parent_file_id, fv.file_id)
        ORDER BY fv.version_number
      ) THEN 'reduction'
      ELSE 'maintenance'
    END as change_type

  FROM file_versions fv
)

SELECT 
  vc.file_id,
  vc.filename,
  vc.version_number,
  vc.upload_date,
  vc.file_size,
  vc.metadata.version_notes,
  vc.is_current_version,

  -- Version navigation
  vc.previous_version_id,
  vc.next_version_id,
  vc.total_versions,
  vc.version_rank,

  -- Change analysis
  vc.size_change,
  vc.time_since_previous_version,
  vc.change_type,

  -- Formatted information
  CASE 
    WHEN vc.size_change > 0 THEN CONCAT('+', vc.size_change, ' bytes')
    WHEN vc.size_change < 0 THEN CONCAT(vc.size_change, ' bytes')
    ELSE 'No size change'
  END as formatted_size_change,

  CASE 
    WHEN vc.time_since_previous_version IS NULL THEN 'Initial version'
    WHEN EXTRACT(DAYS FROM vc.time_since_previous_version) > 0 THEN 
      CONCAT(EXTRACT(DAYS FROM vc.time_since_previous_version), ' days ago')
    WHEN EXTRACT(HOURS FROM vc.time_since_previous_version) > 0 THEN 
      CONCAT(EXTRACT(HOURS FROM vc.time_since_previous_version), ' hours ago')
    ELSE 'Less than an hour ago'
  END as formatted_time_diff,

  -- Version actions
  CASE vc.is_current_version
    WHEN true THEN 'Current Version'
    ELSE 'Restore This Version'
  END as version_action,

  -- Download URLs for each version
  CONCAT('/api/files/', vc.file_id, '/download') as download_url,
  CONCAT('/api/files/', vc.file_id, '/compare/', vc.previous_version_id) as compare_url

FROM version_changes vc
WHERE COALESCE(vc.metadata.parent_file_id, vc.file_id) = '507f1f77bcf86cd799439015'
ORDER BY vc.version_number DESC;

-- Advanced file analytics and usage reporting
WITH file_analytics AS (
  SELECT 
    f.file_id,
    f.filename,
    f.content_type,
    f.file_size,
    f.upload_date,
    f.metadata,

    -- Usage statistics
    f.download_count,
    f.metadata.last_accessed,

    -- File age and activity metrics
    EXTRACT(DAYS FROM CURRENT_TIMESTAMP - f.upload_date) as age_days,
    EXTRACT(DAYS FROM CURRENT_TIMESTAMP - f.metadata.last_accessed) as days_since_access,

    -- Usage intensity calculation
    CASE 
      WHEN EXTRACT(DAYS FROM CURRENT_TIMESTAMP - f.upload_date) > 0 THEN
        f.download_count::float / EXTRACT(DAYS FROM CURRENT_TIMESTAMP - f.upload_date)
      ELSE f.download_count::float
    END as downloads_per_day,

    -- Storage cost calculation (simplified)
    f.file_size / (1024.0 * 1024 * 1024) * 0.12 as monthly_storage_cost_usd, -- $0.12 per GB per month

    -- File category classification
    CASE 
      WHEN f.content_type LIKE 'image/%' THEN 'Images'
      WHEN f.content_type LIKE 'video/%' THEN 'Videos' 
      WHEN f.content_type LIKE 'audio/%' THEN 'Audio'
      WHEN f.content_type IN ('application/pdf', 'application/msword', 'text/plain') THEN 'Documents'
      WHEN f.content_type LIKE 'application/%zip%' OR f.content_type LIKE '%compress%' THEN 'Archives'
      ELSE 'Other'
    END as file_category,

    -- Size category
    CASE 
      WHEN f.file_size < 1024 * 1024 THEN 'Small (<1MB)'
      WHEN f.file_size < 10 * 1024 * 1024 THEN 'Medium (1-10MB)'
      WHEN f.file_size < 100 * 1024 * 1024 THEN 'Large (10-100MB)'
      ELSE 'Very Large (>100MB)'
    END as size_category,

    -- Activity classification
    CASE 
      WHEN f.metadata.last_accessed > CURRENT_DATE - INTERVAL '7 days' THEN 'Hot'
      WHEN f.metadata.last_accessed > CURRENT_DATE - INTERVAL '30 days' THEN 'Warm'  
      WHEN f.metadata.last_accessed > CURRENT_DATE - INTERVAL '90 days' THEN 'Cool'
      ELSE 'Cold'
    END as access_temperature

  FROM files f
  WHERE f.upload_date >= CURRENT_DATE - INTERVAL '1 year'
),

aggregated_analytics AS (
  SELECT 
    -- Overall file statistics
    COUNT(*) as total_files,
    SUM(fa.file_size) as total_storage_bytes,
    AVG(fa.file_size) as avg_file_size,
    SUM(fa.download_count) as total_downloads,
    AVG(fa.download_count) as avg_downloads_per_file,
    SUM(fa.monthly_storage_cost_usd) as total_monthly_cost_usd,

    -- Category breakdown
    COUNT(*) FILTER (WHERE fa.file_category = 'Images') as image_count,
    COUNT(*) FILTER (WHERE fa.file_category = 'Documents') as document_count,
    COUNT(*) FILTER (WHERE fa.file_category = 'Videos') as video_count,
    COUNT(*) FILTER (WHERE fa.file_category = 'Archives') as archive_count,

    -- Size distribution
    COUNT(*) FILTER (WHERE fa.size_category = 'Small (<1MB)') as small_files,
    COUNT(*) FILTER (WHERE fa.size_category = 'Medium (1-10MB)') as medium_files,
    COUNT(*) FILTER (WHERE fa.size_category = 'Large (10-100MB)') as large_files,
    COUNT(*) FILTER (WHERE fa.size_category = 'Very Large (>100MB)') as very_large_files,

    -- Activity distribution
    COUNT(*) FILTER (WHERE fa.access_temperature = 'Hot') as hot_files,
    COUNT(*) FILTER (WHERE fa.access_temperature = 'Warm') as warm_files,
    COUNT(*) FILTER (WHERE fa.access_temperature = 'Cool') as cool_files,
    COUNT(*) FILTER (WHERE fa.access_temperature = 'Cold') as cold_files,

    -- Storage optimization opportunities
    SUM(fa.file_size) FILTER (WHERE fa.access_temperature = 'Cold') as cold_storage_bytes,
    COUNT(*) FILTER (WHERE fa.download_count = 0 AND fa.age_days > 90) as unused_files,

    -- Performance metrics
    AVG(fa.downloads_per_day) as avg_downloads_per_day,
    MAX(fa.downloads_per_day) as max_downloads_per_day,

    -- Trend analysis
    COUNT(*) FILTER (WHERE fa.upload_date >= CURRENT_DATE - INTERVAL '30 days') as files_last_30_days,
    COUNT(*) FILTER (WHERE fa.upload_date >= CURRENT_DATE - INTERVAL '7 days') as files_last_7_days

  FROM file_analytics fa
)

SELECT 
  -- Storage summary
  total_files,
  ROUND((total_storage_bytes / 1024.0 / 1024 / 1024)::numeric, 2) as total_storage_gb,
  ROUND((avg_file_size / 1024.0 / 1024)::numeric, 2) as avg_file_size_mb,
  ROUND(total_monthly_cost_usd::numeric, 2) as monthly_cost_usd,

  -- Usage summary
  total_downloads,
  ROUND(avg_downloads_per_file::numeric, 1) as avg_downloads_per_file,
  ROUND(avg_downloads_per_day::numeric, 2) as avg_downloads_per_day,

  -- Category distribution (percentages)
  ROUND((image_count::float / total_files * 100)::numeric, 1) as image_percentage,
  ROUND((document_count::float / total_files * 100)::numeric, 1) as document_percentage,
  ROUND((video_count::float / total_files * 100)::numeric, 1) as video_percentage,

  -- Size distribution (percentages)
  ROUND((small_files::float / total_files * 100)::numeric, 1) as small_files_percentage,
  ROUND((medium_files::float / total_files * 100)::numeric, 1) as medium_files_percentage,
  ROUND((large_files::float / total_files * 100)::numeric, 1) as large_files_percentage,
  ROUND((very_large_files::float / total_files * 100)::numeric, 1) as very_large_files_percentage,

  -- Activity distribution
  hot_files,
  warm_files, 
  cool_files,
  cold_files,

  -- Optimization opportunities
  ROUND((cold_storage_bytes / 1024.0 / 1024 / 1024)::numeric, 2) as cold_storage_gb,
  unused_files,
  ROUND((unused_files::float / total_files * 100)::numeric, 1) as unused_files_percentage,

  -- Growth trends
  files_last_30_days,
  files_last_7_days,
  ROUND(((files_last_30_days::float / GREATEST(total_files - files_last_30_days, 1)) * 100)::numeric, 1) as monthly_growth_rate,

  -- Recommendations
  CASE 
    WHEN unused_files::float / total_files > 0.2 THEN 'High cleanup potential - consider archiving unused files'
    WHEN cold_storage_bytes::float / total_storage_bytes > 0.5 THEN 'Cold storage optimization recommended'
    WHEN files_last_7_days::float / files_last_30_days > 0.5 THEN 'High recent activity - monitor storage growth'
    ELSE 'File storage appears optimized'
  END as optimization_recommendation

FROM aggregated_analytics;

-- File cleanup and maintenance operations
DELETE FROM files 
WHERE 
  -- Remove temporary files older than 24 hours
  (metadata.category = 'temporary' AND upload_date < CURRENT_TIMESTAMP - INTERVAL '24 hours')

  OR 

  -- Remove unused files older than 1 year with no downloads
  (download_count = 0 AND upload_date < CURRENT_TIMESTAMP - INTERVAL '1 year')

  OR

  -- Remove orphaned file versions (parent file no longer exists)
  (metadata.parent_file_id IS NOT NULL AND 
   metadata.parent_file_id NOT IN (SELECT file_id FROM files WHERE metadata.is_current_version = true));

-- QueryLeaf provides comprehensive GridFS capabilities:
-- 1. SQL-familiar syntax for MongoDB GridFS file storage and management  
-- 2. Advanced file upload with comprehensive metadata and processing options
-- 3. Sophisticated file search with relevance scoring and multi-field filtering
-- 4. Complete file versioning system with history tracking and comparison
-- 5. Real-time file analytics and usage reporting with optimization recommendations
-- 6. Automated file lifecycle management and cleanup operations
-- 7. Integration with MongoDB's native GridFS chunking and streaming capabilities
-- 8. Advanced access control and permission management for file security
-- 9. Performance optimization through intelligent caching and storage distribution
-- 10. Production-ready file management with monitoring, alerts, and maintenance automation

Best Practices for Production GridFS Implementation

File Storage Strategy

Essential principles for effective MongoDB GridFS deployment and management:

  1. Bucket Organization: Design separate GridFS buckets for different file types and access patterns so each bucket can be tuned and indexed independently (see the sketch after this list)
  2. Chunk Size Optimization: Configure optimal chunk sizes based on file types and access patterns for storage efficiency
  3. Metadata Design: Implement comprehensive metadata schemas for search, categorization, and lifecycle management
  4. Access Control Integration: Design robust permission systems that integrate with application authentication and authorization
  5. Performance Monitoring: Implement comprehensive monitoring for file access patterns, storage growth, and system performance
  6. Backup and Recovery: Design complete backup strategies that ensure file integrity and availability
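
As a concrete illustration of the first three principles, the sketch below (assuming the MongoDB Node.js driver and an already-connected db handle; bucket names, chunk sizes, and index fields are illustrative choices, not prescriptions) sets up purpose-specific GridFS buckets and indexes the metadata fields that search queries actually filter on:

// Illustrative bucket setup and metadata indexing - names and values are assumptions
const { GridFSBucket } = require('mongodb');

function setupGridFSBuckets(db) {
  // Separate buckets per workload so chunk sizing and indexes can be tuned independently
  return {
    images: new GridFSBucket(db, { bucketName: 'images', chunkSizeBytes: 255 * 1024 }),
    videos: new GridFSBucket(db, { bucketName: 'videos', chunkSizeBytes: 1024 * 1024 }), // larger chunks for sequential streaming
    documents: new GridFSBucket(db, { bucketName: 'documents', chunkSizeBytes: 255 * 1024 })
  };
}

async function ensureMetadataIndexes(db) {
  // Index only the metadata fields the application filters and sorts on
  await db.collection('documents.files').createIndexes([
    { key: { 'metadata.category': 1, uploadDate: -1 } },
    { key: { 'metadata.tags': 1 } },
    { key: { 'metadata.uploadedBy': 1, uploadDate: -1 } },
    { key: { 'metadata.fileHash': 1 } } // supports deduplication lookups
  ]);
}

Larger chunks reduce per-chunk overhead for media that is streamed sequentially, while the 255 KB default remains a sensible choice for smaller files that are typically read in full.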

Scalability and Performance Optimization

Optimize GridFS deployments for production-scale requirements:

  1. Sharding Strategy: Choose shard keys that keep each file's chunks together while distributing files evenly across the cluster (see the sketch after this list)
  2. Index Optimization: Create optimal indexes for file metadata queries and search operations
  3. Caching Implementation: Implement multi-tier caching strategies for frequently accessed files
  4. Content Delivery: Integrate with CDN services for optimal file delivery performance
  5. Storage Optimization: Implement automated archival, compression, and deduplication strategies
  6. Resource Management: Monitor and optimize storage utilization, network bandwidth, and processing resources
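
For the sharding item above, here is a minimal sketch (placeholder database and bucket names; assumes a sharded cluster reachable through a connected MongoClient) that shards a high-volume bucket's chunks collection on the compound key MongoDB's documentation suggests for GridFS:

// Illustrative sharding setup - 'media' and 'videos' are placeholder names
async function shardVideoBucket(client) {
  const admin = client.db('admin');

  // Enable sharding for the database that holds the GridFS bucket
  await admin.command({ enableSharding: 'media' });

  // { files_id: 1, n: 1 } keeps all chunks of one file on a single shard, in order,
  // while different files are distributed across shards
  await admin.command({
    shardCollection: 'media.videos.chunks',
    key: { files_id: 1, n: 1 }
  });
}

The files collection itself is usually small enough to leave unsharded; it is the chunks collection that carries the bulk of the data.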

Conclusion

MongoDB GridFS provides a comprehensive solution for storing and managing large files within MongoDB applications, offering seamless integration between file storage and document data while supporting advanced features like streaming, versioning, metadata management, and scalable storage patterns. The native MongoDB integration ensures that file storage benefits from the same replication, sharding, and backup capabilities as application data.

Key MongoDB GridFS benefits include:

  • Seamless Integration: Native MongoDB integration with automatic replication, sharding, and backup capabilities
  • Advanced File Management: Comprehensive versioning, metadata extraction, thumbnail generation, and processing pipelines
  • Scalable Architecture: Automatic file chunking and streaming support for files far beyond the 16 MB BSON document limit
  • Sophisticated Search: Rich metadata-based search with relevance scoring and advanced filtering capabilities
  • Production Features: Built-in access control, lifecycle management, monitoring, and optimization capabilities
  • SQL Compatibility: Familiar SQL-style file operations through QueryLeaf integration for accessible file management

Whether you're building document management systems, media applications, content platforms, or any application requiring sophisticated file storage, MongoDB GridFS with QueryLeaf's familiar SQL interface provides the foundation for robust, scalable file management.

QueryLeaf Integration: QueryLeaf automatically manages MongoDB GridFS operations while providing SQL-familiar syntax for file upload, download, search, and management. Advanced GridFS patterns, metadata management, and file processing capabilities are seamlessly handled through familiar SQL constructs, making sophisticated file storage both powerful and accessible to SQL-oriented development teams.

The combination of MongoDB's robust GridFS capabilities with SQL-style file operations makes it an ideal platform for applications requiring both advanced file storage and familiar database management patterns, ensuring your file storage solutions can scale efficiently while remaining maintainable and feature-rich as they evolve.

MongoDB Transactions and ACID Compliance: Advanced Multi-Document Operations for Distributed Application Consistency

Modern distributed applications require sophisticated transaction management capabilities that can guarantee data consistency across multiple documents, collections, and database operations while maintaining high performance and availability. Traditional approaches to consistency in NoSQL systems often rely on complex application-level coordination, eventual consistency patterns, or outright sacrifices of atomicity guarantees, all of which become increasingly problematic as business logic grows more complex.

MongoDB's multi-document ACID transactions provide comprehensive support for complex business operations that span multiple documents and collections while maintaining strict consistency guarantees. Unlike traditional NoSQL systems that sacrifice consistency for scalability, MongoDB transactions offer full ACID compliance with distributed transaction support, enabling sophisticated financial applications, inventory management systems, and complex workflow automation that requires atomic operations across multiple data entities.
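
Before examining what this replaces, here is a minimal sketch of the model in practice, assuming a replica set or sharded cluster and the Node.js driver; the shop database, collection names, and fields are illustrative rather than taken from the example that follows. A stock decrement and an order insert either both commit or both roll back:

// Minimal multi-document transaction sketch (illustrative names and schema)
async function placeOrderAtomically(client, order) {
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      const db = client.db('shop');

      // Decrement stock only if enough is available
      const result = await db.collection('inventory').updateOne(
        { _id: order.productId, stock: { $gte: order.quantity } },
        { $inc: { stock: -order.quantity } },
        { session }
      );
      if (result.modifiedCount === 0) {
        throw new Error('Insufficient stock'); // aborting here rolls back the whole transaction
      }

      // Committed atomically with the stock decrement
      await db.collection('orders').insertOne(
        { ...order, status: 'confirmed', createdAt: new Date() },
        { session }
      );
    }, { readConcern: { level: 'snapshot' }, writeConcern: { w: 'majority' } });
  } finally {
    await session.endSession();
  }
}

Throwing inside the callback aborts the transaction, and withTransaction automatically retries on transient transaction errors, which is exactly the coordination burden the manual approach below has to shoulder itself.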

The Traditional NoSQL Transaction Challenge

Conventional NoSQL transaction approaches suffer from significant limitations for complex business operations:

// Traditional NoSQL approaches - complex application-level coordination and consistency challenges

// Approach 1: Application-level two-phase commit (error-prone and complex)
class TraditionalOrderProcessor {
  constructor(databases) {
    this.userDB = databases.users;
    this.inventoryDB = databases.inventory;
    this.orderDB = databases.orders;
    this.paymentDB = databases.payments;
    this.auditDB = databases.audit;

    // Complex state tracking for manual coordination
    this.pendingTransactions = new Map();
    this.compensationLog = [];
    this.retryQueue = [];
  }

  async processComplexOrder(orderData) {
    const transactionId = require('crypto').randomUUID();
    const operationLog = [];
    let rollbackOperations = [];

    try {
      // Phase 1: Prepare all operations
      console.log('Phase 1: Preparing distributed operations...');

      // Step 1: Validate user account and credit limit
      const user = await this.userDB.findOne({ _id: orderData.userId });
      if (!user) {
        throw new Error('User not found');
      }

      if (user.creditLimit < orderData.totalAmount) {
        throw new Error('Insufficient credit limit');
      }

      // Step 2: Reserve inventory across multiple items
      const inventoryReservations = [];
      const inventoryUpdates = [];

      for (const item of orderData.items) {
        const product = await this.inventoryDB.findOne({ 
          _id: item.productId,
          availableQuantity: { $gte: item.quantity }
        });

        if (!product) {
          // Manual rollback required
          await this.rollbackInventoryReservations(inventoryReservations);
          throw new Error(`Insufficient inventory for product ${item.productId}`);
        }

        // Manual inventory reservation (not atomic)
        const reservationResult = await this.inventoryDB.updateOne(
          { 
            _id: item.productId,
            availableQuantity: { $gte: item.quantity }
          },
          {
            $inc: { 
              availableQuantity: -item.quantity,
              reservedQuantity: item.quantity
            },
            $push: {
              reservations: {
                orderId: transactionId,
                quantity: item.quantity,
                timestamp: new Date(),
                status: 'pending'
              }
            }
          }
        );

        if (reservationResult.modifiedCount === 0) {
          // Race condition occurred, need to rollback
          await this.rollbackInventoryReservations(inventoryReservations);
          throw new Error(`Race condition: inventory changed for product ${item.productId}`);
        }

        inventoryReservations.push({
          productId: item.productId,
          quantity: item.quantity,
          reservationId: `${transactionId}_${item.productId}`
        });

        rollbackOperations.push({
          type: 'inventory_rollback',
          operation: () => this.inventoryDB.updateOne(
            { _id: item.productId },
            {
              $inc: {
                availableQuantity: item.quantity,
                reservedQuantity: -item.quantity
              },
              $pull: {
                reservations: { orderId: transactionId }
              }
            }
          )
        });
      }

      // Step 3: Process payment authorization
      const paymentAuth = await this.processPaymentAuthorization(orderData);
      if (!paymentAuth.success) {
        await this.rollbackInventoryReservations(inventoryReservations);
        throw new Error(`Payment authorization failed: ${paymentAuth.error}`);
      }

      rollbackOperations.push({
        type: 'payment_rollback',
        operation: () => this.voidPaymentAuthorization(paymentAuth.authId)
      });

      // Step 4: Update user account balance and credit
      const userUpdateResult = await this.userDB.updateOne(
        { 
          _id: orderData.userId,
          creditUsed: { $lte: user.creditLimit - orderData.totalAmount }
        },
        {
          $inc: {
            creditUsed: orderData.totalAmount,
            totalOrderValue: orderData.totalAmount,
            orderCount: 1
          },
          $set: {
            lastOrderDate: new Date()
          }
        }
      );

      if (userUpdateResult.modifiedCount === 0) {
        // User account changed during processing
        await this.executeRollbackOperations(rollbackOperations);
        throw new Error('User account state changed during processing');
      }

      rollbackOperations.push({
        type: 'user_rollback', 
        operation: () => this.userDB.updateOne(
          { _id: orderData.userId },
          {
            $inc: {
              creditUsed: -orderData.totalAmount,
              totalOrderValue: -orderData.totalAmount,
              orderCount: -1
            }
          }
        )
      });

      // Phase 2: Commit all operations
      console.log('Phase 2: Committing distributed transaction...');

      // Create the order document
      const orderDocument = {
        _id: transactionId,
        userId: orderData.userId,
        items: orderData.items,
        totalAmount: orderData.totalAmount,
        paymentAuthId: paymentAuth.authId,
        inventoryReservations: inventoryReservations,
        status: 'processing',
        createdAt: new Date(),
        transactionLog: operationLog
      };

      const orderResult = await this.orderDB.insertOne(orderDocument);
      if (!orderResult.insertedId) {
        await this.executeRollbackOperations(rollbackOperations);
        throw new Error('Failed to create order document');
      }

      // Confirm inventory reservations
      for (const reservation of inventoryReservations) {
        await this.inventoryDB.updateOne(
          { 
            _id: reservation.productId,
            'reservations.orderId': transactionId
          },
          {
            $set: {
              'reservations.$.status': 'confirmed',
              'reservations.$.confirmedAt': new Date()
            }
          }
        );
      }

      // Capture payment
      const paymentCapture = await this.capturePayment(paymentAuth.authId);
      if (!paymentCapture.success) {
        await this.executeRollbackOperations(rollbackOperations);
        throw new Error(`Payment capture failed: ${paymentCapture.error}`);
      }

      // Record payment transaction
      await this.paymentDB.insertOne({
        _id: `payment_${transactionId}`,
        orderId: transactionId,
        userId: orderData.userId,
        amount: orderData.totalAmount,
        authId: paymentAuth.authId,
        captureId: paymentCapture.captureId,
        status: 'captured',
        capturedAt: new Date()
      });

      // Update order status
      await this.orderDB.updateOne(
        { _id: transactionId },
        {
          $set: {
            status: 'confirmed',
            confirmedAt: new Date(),
            paymentCaptureId: paymentCapture.captureId
          }
        }
      );

      // Audit log entry
      await this.auditDB.insertOne({
        _id: `audit_${transactionId}`,
        transactionId: transactionId,
        operationType: 'order_processing',
        userId: orderData.userId,
        amount: orderData.totalAmount,
        operations: operationLog,
        status: 'success',
        completedAt: new Date()
      });

      console.log(`Transaction ${transactionId} completed successfully`);
      return {
        success: true,
        transactionId: transactionId,
        orderId: transactionId,
        operationsCompleted: operationLog.length
      };

    } catch (error) {
      console.error(`Transaction ${transactionId} failed:`, error.message);

      // Execute rollback operations in reverse order
      await this.executeRollbackOperations(rollbackOperations.reverse());

      // Log failure for investigation
      await this.auditDB.insertOne({
        _id: `audit_failed_${transactionId}`,
        transactionId: transactionId,
        operationType: 'order_processing',
        userId: orderData.userId,
        amount: orderData.totalAmount,
        error: error.message,
        rollbackOperations: rollbackOperations.length,
        status: 'failed',
        failedAt: new Date()
      });

      return {
        success: false,
        transactionId: transactionId,
        error: error.message,
        rollbacksExecuted: rollbackOperations.length
      };
    }
  }

  async rollbackInventoryReservations(reservations) {
    const rollbackPromises = reservations.map(async (reservation) => {
      try {
        await this.inventoryDB.updateOne(
          { _id: reservation.productId },
          {
            $inc: {
              availableQuantity: reservation.quantity,
              reservedQuantity: -reservation.quantity
            },
            $pull: {
              reservations: { orderId: reservation.orderId }
            }
          }
        );
      } catch (rollbackError) {
        console.error(`Rollback failed for product ${reservation.productId}:`, rollbackError);
        // In production, this would need sophisticated error handling
        // and potentially manual intervention
      }
    });

    await Promise.allSettled(rollbackPromises);
  }

  async executeRollbackOperations(rollbackOperations) {
    for (const rollback of rollbackOperations) {
      try {
        await rollback.operation();
        console.log(`Rollback completed: ${rollback.type}`);
      } catch (rollbackError) {
        console.error(`Rollback failed: ${rollback.type}`, rollbackError);
        // This is where things get really complicated - failed rollbacks
        // require manual intervention and complex recovery procedures
      }
    }
  }

  async processPaymentAuthorization(orderData) {
    // Simulate payment authorization
    return new Promise((resolve) => {
      setTimeout(() => {
        if (Math.random() > 0.1) { // 90% success rate
          resolve({
            success: true,
            authId: `auth_${require('crypto').randomUUID()}`,
            amount: orderData.totalAmount,
            authorizedAt: new Date()
          });
        } else {
          resolve({
            success: false,
            error: 'Payment authorization declined'
          });
        }
      }, 100);
    });
  }

  async capturePayment(authId) {
    // Simulate payment capture
    return new Promise((resolve) => {
      setTimeout(() => {
        if (Math.random() > 0.05) { // 95% success rate
          resolve({
            success: true,
            captureId: `capture_${require('crypto').randomUUID()}`,
            capturedAt: new Date()
          });
        } else {
          resolve({
            success: false,
            error: 'Payment capture failed'
          });
        }
      }, 150);
    });
  }
}

// Problems with traditional NoSQL transaction approaches:
// 1. Complex application-level coordination requiring extensive error handling
// 2. Race conditions and consistency issues between operations
// 3. Manual rollback implementation prone to failures and partial states
// 4. No atomicity guarantees - partial failures leave system in inconsistent state
// 5. Difficult debugging and troubleshooting of transaction failures
// 6. Poor performance due to multiple round-trips and coordination overhead
// 7. Scalability limitations as transaction complexity increases
// 8. No isolation guarantees - concurrent transactions can interfere
// 9. Limited durability guarantees without complex persistence coordination
// 10. Operational complexity for monitoring and maintaining distributed state

// Approach 2: Eventual consistency with compensation patterns (Saga pattern)
class SagaOrderProcessor {
  constructor(eventStore, commandHandlers) {
    this.eventStore = eventStore;
    this.commandHandlers = commandHandlers;
    this.sagaState = new Map();
  }

  async processOrderSaga(orderData) {
    const sagaId = require('crypto').randomUUID();
    const saga = {
      id: sagaId,
      status: 'started',
      steps: [
        { name: 'validate_user', status: 'pending', compensate: 'none' },
        { name: 'reserve_inventory', status: 'pending', compensate: 'release_inventory' },
        { name: 'process_payment', status: 'pending', compensate: 'refund_payment' },
        { name: 'create_order', status: 'pending', compensate: 'cancel_order' },
        { name: 'update_user_account', status: 'pending', compensate: 'revert_user_account' }
      ],
      currentStep: 0,
      compensationNeeded: false,
      orderData: orderData,
      createdAt: new Date()
    };

    this.sagaState.set(sagaId, saga);

    try {
      await this.executeSagaSteps(saga);
      return { success: true, sagaId: sagaId, status: 'completed' };
    } catch (error) {
      await this.executeCompensation(saga, error);
      return { success: false, sagaId: sagaId, error: error.message, status: 'compensated' };
    }
  }

  async executeSagaSteps(saga) {
    for (let i = saga.currentStep; i < saga.steps.length; i++) {
      const step = saga.steps[i];
      console.log(`Executing saga step: ${step.name}`);

      try {
        const stepResult = await this.executeStep(step.name, saga.orderData);
        step.status = 'completed';
        step.result = stepResult;
        saga.currentStep = i + 1;

        // Save saga state after each step
        await this.saveSagaState(saga);

      } catch (stepError) {
        console.error(`Saga step ${step.name} failed:`, stepError);
        step.status = 'failed';
        step.error = stepError.message;
        saga.compensationNeeded = true;
        throw stepError;
      }
    }

    saga.status = 'completed';
    await this.saveSagaState(saga);
  }

  async executeCompensation(saga, originalError) {
    console.log(`Executing compensation for saga ${saga.id}`);
    saga.status = 'compensating';

    // Execute compensation in reverse order of completed steps
    for (let i = saga.currentStep - 1; i >= 0; i--) {
      const step = saga.steps[i];

      if (step.status === 'completed' && step.compensate !== 'none') {
        try {
          console.log(`Compensating step: ${step.name}`);
          // Invoke the registered compensation handler for this step
          await this.commandHandlers[step.compensate](step.result, saga.orderData);
          step.compensationStatus = 'completed';
        } catch (compensationError) {
          console.error(`Compensation failed for ${step.name}:`, compensationError);
          step.compensationStatus = 'failed';
          step.compensationError = compensationError.message;

          // In a real system, this would require manual intervention
          // or sophisticated retry and escalation mechanisms
        }
      }
    }

    saga.status = 'compensated';
    saga.originalError = originalError.message;
    await this.saveSagaState(saga);
  }

  // Saga pattern problems:
  // 1. Complex state management and coordination across services
  // 2. No isolation - other transactions can see intermediate states
  // 3. Compensation logic complexity increases exponentially with steps
  // 4. Potential for cascading failures during compensation
  // 5. Debugging and troubleshooting distributed saga state is difficult
  // 6. Performance overhead from state persistence and coordination
  // 7. Limited consistency guarantees during saga execution
  // 8. Operational complexity for monitoring and error recovery
  // 9. No built-in support for complex business rules and constraints
  // 10. Scalability challenges as saga complexity and concurrency increase
}
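
To make the contrast concrete before the full implementation that follows, here is a minimal sketch of the same reserve-inventory-then-create-order flow expressed as a single driver transaction. The collection and field names are illustrative assumptions carried over from the examples above; the point is that one withTransaction() callback replaces all of the manual rollback bookkeeping, because any error thrown inside the callback aborts every write made within it.

// Minimal sketch (assumed collection and field names): one transaction instead of manual rollbacks
const { MongoClient } = require('mongodb');

async function reserveAndCreateOrderAtomically(orderData) {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  const db = client.db('ecommerce_platform');
  const session = client.startSession();

  try {
    await session.withTransaction(async () => {
      for (const item of orderData.items) {
        // Conditional decrement and reservation; commits or aborts together with the order insert
        const result = await db.collection('inventory').updateOne(
          { _id: item.productId, availableQuantity: { $gte: item.quantity } },
          { $inc: { availableQuantity: -item.quantity, reservedQuantity: item.quantity } },
          { session }
        );

        if (result.modifiedCount === 0) {
          // Throwing here aborts the entire transaction, so no compensation logic is required
          throw new Error(`Insufficient inventory for product ${item.productId}`);
        }
      }

      await db.collection('orders').insertOne(
        {
          userId: orderData.userId,
          items: orderData.items,
          status: 'confirmed',
          createdAt: new Date()
        },
        { session }
      );
    });
  } finally {
    await session.endSession();
    await client.close();
  }
}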

MongoDB provides comprehensive ACID transactions with multi-document support:

// MongoDB Multi-Document ACID Transactions - comprehensive atomic operations with full consistency guarantees
const { MongoClient, ClientSession } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017');
const db = client.db('ecommerce_platform');

// Advanced MongoDB Transaction Management System
class MongoTransactionManager {
  constructor(db) {
    this.db = db;
    this.collections = {
      users: db.collection('users'),
      products: db.collection('products'),
      inventory: db.collection('inventory'), 
      orders: db.collection('orders'),
      payments: db.collection('payments'),
      audit: db.collection('audit'),
      promotions: db.collection('promotions'),
      loyalty: db.collection('loyalty_points')
    };

    // Transaction configuration
    this.transactionConfig = {
      readConcern: { level: 'snapshot' },
      writeConcern: { w: 'majority', j: true },
      readPreference: 'primary',
      maxTimeMS: 60000, // 1 minute timeout
      maxCommitTimeMS: 30000 // 30 second commit timeout
    };

    this.retryConfig = {
      maxRetries: 3,
      retryDelayMs: 100,
      backoffFactor: 2
    };
  }

  async processComplexOrderTransaction(orderData, options = {}) {
    console.log(`Starting complex order transaction for user: ${orderData.userId}`);

    const session = client.startSession();
    const transactionResults = {
      transactionId: require('crypto').randomUUID(),
      success: false,
      operations: [],
      metrics: {
        startTime: new Date(),
        endTime: null,
        durationMs: 0,
        documentsModified: 0,
        collectionsAffected: 0
      },
      rollbackExecuted: false,
      error: null
    };

    try {
      // Start transaction with ACID guarantees
      await session.withTransaction(async () => {
        console.log('Beginning atomic transaction...');

        // Operation 1: Validate user account and apply business rules
        const userValidation = await this.validateAndUpdateUserAccount(
          orderData.userId, 
          orderData.totalAmount, 
          session,
          transactionResults
        );

        if (!userValidation.valid) {
          throw new Error(`User validation failed: ${userValidation.reason}`);
        }

        // Operation 2: Apply promotional codes and calculate discounts
        const promotionResult = await this.applyPromotionsAndDiscounts(
          orderData,
          userValidation.user,
          session,
          transactionResults
        );

        // Update order total with promotions
        orderData.originalTotal = orderData.totalAmount;
        orderData.totalAmount = promotionResult.finalAmount;
        orderData.discountsApplied = promotionResult.discountsApplied;

        // Operation 3: Reserve inventory with complex allocation logic
        const inventoryReservation = await this.reserveInventoryWithAllocation(
          orderData.items,
          transactionResults.transactionId,
          session,
          transactionResults
        );

        if (!inventoryReservation.success) {
          throw new Error(`Inventory reservation failed: ${inventoryReservation.reason}`);
        }

        // Operation 4: Process payment with fraud detection
        const paymentResult = await this.processPaymentWithFraudDetection(
          orderData,
          userValidation.user,
          session,
          transactionResults
        );

        if (!paymentResult.success) {
          throw new Error(`Payment processing failed: ${paymentResult.reason}`);
        }

        // Operation 5: Create comprehensive order document
        const orderCreation = await this.createComprehensiveOrder(
          orderData,
          userValidation.user,
          inventoryReservation,
          paymentResult,
          promotionResult,
          session,
          transactionResults
        );

        // Operation 6: Update user loyalty points and tier status
        await this.updateUserLoyaltyAndTier(
          orderData.userId,
          orderData.totalAmount,
          orderData.items,
          session,
          transactionResults
        );

        // Operation 7: Create audit trail with comprehensive tracking
        await this.createComprehensiveAuditTrail(
          transactionResults.transactionId,
          orderData,
          userValidation.user,
          paymentResult,
          inventoryReservation,
          promotionResult,
          session,
          transactionResults
        );

        // All operations completed successfully within transaction
        console.log(`Transaction ${transactionResults.transactionId} completed with ${transactionResults.operations.length} operations`);

      }, this.transactionConfig);

      // Transaction committed successfully
      transactionResults.success = true;
      transactionResults.metrics.endTime = new Date();
      transactionResults.metrics.durationMs = transactionResults.metrics.endTime - transactionResults.metrics.startTime;

      console.log(`Order transaction completed successfully in ${transactionResults.metrics.durationMs}ms`);
      console.log(`${transactionResults.metrics.documentsModified} documents modified across ${transactionResults.metrics.collectionsAffected} collections`);

    } catch (error) {
      console.error(`Transaction ${transactionResults.transactionId} failed:`, error.message);
      transactionResults.success = false;
      transactionResults.error = {
        message: error.message,
        code: error.code,
        codeName: error.codeName,
        stack: error.stack
      };
      transactionResults.rollbackExecuted = true;
      transactionResults.metrics.endTime = new Date();
      transactionResults.metrics.durationMs = transactionResults.metrics.endTime - transactionResults.metrics.startTime;

      // MongoDB automatically handles rollback for failed transactions
      console.log(`Automatic rollback executed for transaction ${transactionResults.transactionId}`);

    } finally {
      await session.endSession();
    }

    return transactionResults;
  }

  async validateAndUpdateUserAccount(userId, orderAmount, session, transactionResults) {
    console.log(`Validating user account: ${userId}`);

    const user = await this.collections.users.findOne(
      { _id: userId },
      { session }
    );

    if (!user) {
      return { valid: false, reason: 'User not found' };
    }

    if (user.status !== 'active') {
      return { valid: false, reason: 'User account is not active' };
    }

    // Complex business rules validation
    const availableCredit = user.creditLimit - user.creditUsed;
    const dailySpendingLimit = user.dailySpendingLimit || user.creditLimit * 0.3;
    const todaySpending = user.dailySpending?.find(d => 
      d.date.toDateString() === new Date().toDateString()
    )?.amount || 0;

    if (orderAmount > availableCredit) {
      return { 
        valid: false, 
        reason: `Insufficient credit: available ${availableCredit}, required ${orderAmount}` 
      };
    }

    if (todaySpending + orderAmount > dailySpendingLimit) {
      return { 
        valid: false, 
        reason: `Daily spending limit exceeded: limit ${dailySpendingLimit}, current ${todaySpending}, requested ${orderAmount}` 
      };
    }

    // Update user account within transaction
    const updateResult = await this.collections.users.updateOne(
      { _id: userId },
      {
        $inc: {
          creditUsed: orderAmount,
          totalOrderValue: orderAmount,
          orderCount: 1
        },
        $set: {
          lastOrderDate: new Date(),
          lastActivityAt: new Date()
        },
        $push: {
          dailySpending: {
            $each: [{
              date: new Date(),
              amount: todaySpending + orderAmount
            }],
            $slice: -30 // Keep last 30 days
          }
        }
      },
      { session }
    );

    this.updateTransactionMetrics(transactionResults, 'users', 'validateAndUpdateUserAccount', updateResult);

    return { 
      valid: true, 
      user: user,
      creditUsed: orderAmount,
      remainingCredit: availableCredit - orderAmount
    };
  }

  async applyPromotionsAndDiscounts(orderData, user, session, transactionResults) {
    console.log('Applying promotions and discounts...');

    let finalAmount = orderData.totalAmount;
    let discountsApplied = [];

    // Find applicable promotions
    const applicablePromotions = await this.collections.promotions.find({
      status: 'active',
      startDate: { $lte: new Date() },
      endDate: { $gte: new Date() },
      $or: [
        { applicableToUsers: user._id },
        { applicableToUserTiers: user.tier },
        { applicableToAll: true }
      ]
    }, { session }).toArray();

    for (const promotion of applicablePromotions) {
      let discountAmount = 0;
      let applicable = false;

      // Validate promotion conditions
      if (promotion.minimumOrderAmount && orderData.totalAmount < promotion.minimumOrderAmount) {
        continue;
      }

      if (promotion.applicableProducts && promotion.applicableProducts.length > 0) {
        const hasApplicableProducts = orderData.items.some(item => 
          promotion.applicableProducts.includes(item.productId)
        );
        if (!hasApplicableProducts) continue;
      }

      // Calculate discount based on promotion type
      switch (promotion.type) {
        case 'percentage':
          discountAmount = finalAmount * (promotion.discountPercentage / 100);
          if (promotion.maxDiscount) {
            discountAmount = Math.min(discountAmount, promotion.maxDiscount);
          }
          applicable = true;
          break;

        case 'fixed_amount':
          discountAmount = Math.min(promotion.discountAmount, finalAmount);
          applicable = true;
          break;

        case 'buy_x_get_y':
          const qualifyingItems = orderData.items.filter(item => 
            promotion.buyProducts.includes(item.productId)
          );
          const totalQualifyingQuantity = qualifyingItems.reduce((sum, item) => sum + item.quantity, 0);

          if (totalQualifyingQuantity >= promotion.buyQuantity) {
            const freeQuantity = Math.floor(totalQualifyingQuantity / promotion.buyQuantity) * promotion.getQuantity;
            const averagePrice = qualifyingItems.reduce((sum, item) => sum + item.price, 0) / qualifyingItems.length;
            discountAmount = freeQuantity * averagePrice;
            applicable = true;
          }
          break;
      }

      if (applicable && discountAmount > 0) {
        finalAmount -= discountAmount;
        discountsApplied.push({
          promotionId: promotion._id,
          promotionName: promotion.name,
          discountAmount: discountAmount,
          appliedAt: new Date()
        });

        // Update promotion usage
        await this.collections.promotions.updateOne(
          { _id: promotion._id },
          {
            $inc: { usageCount: 1 },
            $push: {
              recentUsage: {
                userId: user._id,
                orderId: transactionResults.transactionId,
                discountAmount: discountAmount,
                usedAt: new Date()
              }
            }
          },
          { session }
        );

        this.updateTransactionMetrics(transactionResults, 'promotions', 'applyPromotionsAndDiscounts');
      }
    }

    console.log(`Applied ${discountsApplied.length} promotions, total discount: ${orderData.totalAmount - finalAmount}`);

    return {
      finalAmount: Math.max(finalAmount, 0), // Ensure non-negative
      discountsApplied: discountsApplied,
      totalDiscount: orderData.totalAmount - finalAmount
    };
  }

  async reserveInventoryWithAllocation(orderItems, transactionId, session, transactionResults) {
    console.log(`Reserving inventory for ${orderItems.length} items...`);

    const reservationResults = [];
    const allocationStrategy = 'fifo'; // First-In-First-Out allocation

    for (const item of orderItems) {
      // Find available inventory with complex allocation logic
      const inventoryRecords = await this.collections.inventory.find({
        productId: item.productId,
        availableQuantity: { $gt: 0 },
        status: 'active'
      }, { session })
      .sort({ createdAt: 1 }) // FIFO allocation
      .toArray();

      let remainingQuantity = item.quantity;
      const allocatedFrom = [];

      for (const inventoryRecord of inventoryRecords) {
        if (remainingQuantity <= 0) break;

        const allocateQuantity = Math.min(remainingQuantity, inventoryRecord.availableQuantity);

        // Reserve inventory from this record
        const reservationResult = await this.collections.inventory.updateOne(
          { 
            _id: inventoryRecord._id,
            availableQuantity: { $gte: allocateQuantity }
          },
          {
            $inc: { 
              availableQuantity: -allocateQuantity,
              reservedQuantity: allocateQuantity
            },
            $push: {
              reservations: {
                reservationId: `${transactionId}_${item.productId}_${inventoryRecord._id}`,
                orderId: transactionId,
                quantity: allocateQuantity,
                reservedAt: new Date(),
                expiresAt: new Date(Date.now() + 30 * 60 * 1000), // 30 minutes
                status: 'active'
              }
            }
          },
          { session }
        );

        if (reservationResult.modifiedCount === 1) {
          allocatedFrom.push({
            inventoryId: inventoryRecord._id,
            warehouseLocation: inventoryRecord.location,
            quantity: allocateQuantity,
            unitCost: inventoryRecord.unitCost
          });
          remainingQuantity -= allocateQuantity;

          this.updateTransactionMetrics(transactionResults, 'inventory', 'reserveInventoryWithAllocation', reservationResult);
        }
      }

      if (remainingQuantity > 0) {
        return {
          success: false,
          reason: `Insufficient inventory for product ${item.productId}: requested ${item.quantity}, available ${item.quantity - remainingQuantity}`
        };
      }

      reservationResults.push({
        productId: item.productId,
        requestedQuantity: item.quantity,
        allocatedFrom: allocatedFrom,
        totalCost: allocatedFrom.reduce((sum, alloc) => sum + (alloc.quantity * alloc.unitCost), 0)
      });
    }

    console.log(`Successfully reserved inventory for all ${orderItems.length} items`);

    return {
      success: true,
      reservationId: transactionId,
      reservations: reservationResults,
      totalReservedItems: reservationResults.reduce((sum, res) => sum + res.requestedQuantity, 0)
    };
  }

  async processPaymentWithFraudDetection(orderData, user, session, transactionResults) {
    console.log(`Processing payment with fraud detection for order amount: ${orderData.totalAmount}`);

    // Fraud detection analysis within transaction
    const fraudScore = await this.calculateFraudScore(orderData, user, session);

    if (fraudScore > 0.8) {
      return {
        success: false,
        reason: `Transaction flagged for fraud (score: ${fraudScore})`,
        fraudScore: fraudScore
      };
    }

    // Process payment (in real system, this would integrate with payment gateway)
    const paymentRecord = {
      _id: `payment_${transactionResults.transactionId}`,
      orderId: transactionResults.transactionId,
      userId: user._id,
      amount: orderData.totalAmount,
      originalAmount: orderData.originalTotal || orderData.totalAmount,
      paymentMethod: orderData.paymentMethod,
      fraudScore: fraudScore,

      // Payment processing details
      authorizationId: `auth_${require('crypto').randomUUID()}`,
      captureId: `capture_${require('crypto').randomUUID()}`,

      status: 'completed',
      processedAt: new Date(),

      // Enhanced payment metadata
      riskAssessment: {
        score: fraudScore,
        factors: await this.getFraudFactors(orderData, user),
        recommendation: fraudScore > 0.5 ? 'review' : 'approve'
      },

      processingFees: {
        gatewayFee: orderData.totalAmount * 0.029 + 0.30, // Typical payment gateway fee
        fraudProtectionFee: 0.05
      }
    };

    const insertResult = await this.collections.payments.insertOne(paymentRecord, { session });

    this.updateTransactionMetrics(transactionResults, 'payments', 'processPaymentWithFraudDetection', insertResult);

    console.log(`Payment processed successfully: ${paymentRecord._id}`);

    return {
      success: true,
      paymentId: paymentRecord._id,
      authorizationId: paymentRecord.authorizationId,
      captureId: paymentRecord.captureId,
      fraudScore: fraudScore,
      processingFees: paymentRecord.processingFees
    };
  }

  async createComprehensiveOrder(orderData, user, inventoryReservation, paymentResult, promotionResult, session, transactionResults) {
    console.log('Creating comprehensive order document...');

    const orderDocument = {
      _id: transactionResults.transactionId,
      orderNumber: `ORD-${Date.now()}-${Math.random().toString(36).slice(2, 8).toUpperCase()}`,

      // Customer information
      customer: {
        userId: user._id,
        email: user.email,
        tier: user.tier,
        isReturningCustomer: user.orderCount > 0
      },

      // Order details
      items: orderData.items.map(item => ({
        ...item,
        allocation: inventoryReservation.reservations.find(r => r.productId === item.productId)?.allocatedFrom || []
      })),

      // Financial details
      pricing: {
        subtotal: orderData.originalTotal || orderData.totalAmount,
        discounts: promotionResult.discountsApplied || [],
        totalDiscount: promotionResult.totalDiscount || 0,
        finalAmount: orderData.totalAmount,
        tax: orderData.tax || 0,
        shipping: orderData.shipping || 0,
        total: orderData.totalAmount
      },

      // Payment information
      payment: {
        paymentId: paymentResult.paymentId,
        method: orderData.paymentMethod,
        status: 'completed',
        fraudScore: paymentResult.fraudScore,
        processedAt: new Date()
      },

      // Inventory allocation
      inventory: {
        reservationId: inventoryReservation.reservationId,
        totalItemsReserved: inventoryReservation.totalReservedItems,
        reservationDetails: inventoryReservation.reservations
      },

      // Order lifecycle
      status: 'confirmed',
      lifecycle: {
        createdAt: new Date(),
        confirmedAt: new Date(),
        estimatedFulfillmentDate: new Date(Date.now() + 2 * 24 * 60 * 60 * 1000), // 2 days
        estimatedDeliveryDate: new Date(Date.now() + 7 * 24 * 60 * 60 * 1000) // 1 week
      },

      // Shipping information
      shipping: {
        address: orderData.shippingAddress,
        method: orderData.shippingMethod || 'standard',
        trackingNumber: null, // Will be updated when shipped
        carrier: orderData.carrier || 'fedex'
      },

      // Transaction metadata
      transaction: {
        transactionId: transactionResults.transactionId,
        sessionId: session.id ? session.id.toString() : null,
        source: orderData.source || 'web',
        channel: orderData.channel || 'direct'
      }
    };

    const insertResult = await this.collections.orders.insertOne(orderDocument, { session });

    this.updateTransactionMetrics(transactionResults, 'orders', 'createComprehensiveOrder', insertResult);

    console.log(`Order created successfully: ${orderDocument.orderNumber}`);

    return orderDocument;
  }

  async updateUserLoyaltyAndTier(userId, orderAmount, orderItems, session, transactionResults) {
    console.log(`Updating loyalty points and tier for user: ${userId}`);

    // Calculate loyalty points based on complex rules
    const basePoints = Math.floor(orderAmount); // 1 point per dollar
    const bonusPoints = this.calculateBonusPoints(orderItems, orderAmount);
    const totalPoints = basePoints + bonusPoints;

    // Update loyalty points
    const loyaltyUpdate = await this.collections.loyalty.updateOne(
      { userId: userId },
      {
        $inc: {
          totalPointsEarned: totalPoints,
          availablePoints: totalPoints,
          lifetimeValue: orderAmount
        },
        $push: {
          pointsHistory: {
            orderId: transactionResults.transactionId,
            pointsEarned: totalPoints,
            reason: 'order_purchase',
            earnedAt: new Date()
          }
        },
        $set: {
          lastActivityAt: new Date()
        }
      },
      { 
        upsert: true,
        session 
      }
    );

    // Check for tier upgrades
    const loyaltyRecord = await this.collections.loyalty.findOne(
      { userId: userId },
      { session }
    );

    if (loyaltyRecord) {
      const newTier = this.calculateUserTier(loyaltyRecord.lifetimeValue, loyaltyRecord.totalPointsEarned);

      if (newTier !== loyaltyRecord.currentTier) {
        await this.collections.users.updateOne(
          { _id: userId },
          {
            $set: { tier: newTier },
            $push: {
              tierHistory: {
                previousTier: loyaltyRecord.currentTier,
                newTier: newTier,
                upgradedAt: new Date(),
                triggeredBy: transactionResults.transactionId
              }
            }
          },
          { session }
        );

        await this.collections.loyalty.updateOne(
          { userId: userId },
          { $set: { currentTier: newTier } },
          { session }
        );
      }
    }

    this.updateTransactionMetrics(transactionResults, 'loyalty', 'updateUserLoyaltyAndTier', loyaltyUpdate);

    console.log(`Awarded ${totalPoints} loyalty points to user ${userId}`);

    return {
      pointsAwarded: totalPoints,
      basePoints: basePoints,
      bonusPoints: bonusPoints,
      // Recalculate so the returned tier reflects any upgrade applied above
      newTier: loyaltyRecord
        ? this.calculateUserTier(loyaltyRecord.lifetimeValue, loyaltyRecord.totalPointsEarned)
        : 'bronze'
    };
  }

  async createComprehensiveAuditTrail(transactionId, orderData, user, paymentResult, inventoryReservation, promotionResult, session, transactionResults) {
    console.log('Creating comprehensive audit trail...');

    const auditRecord = {
      _id: `audit_${transactionId}`,
      transactionId: transactionId,
      auditType: 'order_processing',

      // Transaction context
      context: {
        userId: user._id,
        userEmail: user.email,
        userTier: user.tier,
        sessionId: session.id ? session.id.toString() : null,
        source: orderData.source || 'web',
        userAgent: orderData.userAgent,
        ipAddress: orderData.ipAddress
      },

      // Detailed operation log
      operations: transactionResults.operations.map(op => ({
        ...op,
        timestamp: new Date()
      })),

      // Financial audit trail
      financial: {
        originalAmount: orderData.originalTotal || orderData.totalAmount,
        finalAmount: orderData.totalAmount,
        discountsApplied: promotionResult.discountsApplied || [],
        totalDiscount: promotionResult.totalDiscount || 0,
        paymentMethod: orderData.paymentMethod,
        fraudScore: paymentResult.fraudScore,
        processingFees: paymentResult.processingFees
      },

      // Inventory audit trail
      inventory: {
        reservationId: inventoryReservation.reservationId,
        itemsReserved: inventoryReservation.totalReservedItems,
        allocationDetails: inventoryReservation.reservations
      },

      // Compliance and regulatory data
      compliance: {
        dataProcessingConsent: orderData.dataProcessingConsent || false,
        marketingConsent: orderData.marketingConsent || false,
        privacyPolicyVersion: orderData.privacyPolicyVersion || '1.0',
        termsOfServiceVersion: orderData.termsOfServiceVersion || '1.0'
      },

      // Transaction metrics
      performance: {
        transactionDurationMs: transactionResults.metrics.durationMs || 0,
        documentsModified: transactionResults.metrics.documentsModified,
        collectionsAffected: transactionResults.metrics.collectionsAffected,
        operationsExecuted: transactionResults.operations.length
      },

      // Audit metadata
      auditedAt: new Date(),
      retentionDate: new Date(Date.now() + 7 * 365 * 24 * 60 * 60 * 1000), // 7 years
      status: 'completed'
    };

    const insertResult = await this.collections.audit.insertOne(auditRecord, { session });

    this.updateTransactionMetrics(transactionResults, 'audit', 'createComprehensiveAuditTrail', insertResult);

    console.log(`Audit trail created: ${auditRecord._id}`);

    return auditRecord;
  }

  // Helper methods for transaction processing

  async calculateFraudScore(orderData, user, session) {
    // Simplified fraud scoring algorithm
    let fraudScore = 0.0;

    // Velocity checks
    const recentOrderCount = await this.collections.orders.countDocuments({
      'customer.userId': user._id,
      'lifecycle.createdAt': { $gte: new Date(Date.now() - 24 * 60 * 60 * 1000) }
    }, { session });

    if (recentOrderCount > 5) fraudScore += 0.3;

    // Amount-based risk
    if (orderData.totalAmount > user.averageOrderValue * 3) {
      fraudScore += 0.2;
    }

    // Time-based patterns
    const hour = new Date().getHours();
    if (hour >= 2 && hour <= 6) fraudScore += 0.1; // Unusual hours

    // Geographic risk (simplified)
    if (orderData.ipCountry !== user.country) {
      fraudScore += 0.15;
    }

    return Math.min(fraudScore, 1.0);
  }

  async getFraudFactors(orderData, user) {
    return [
      { factor: 'velocity_check', weight: 0.3 },
      { factor: 'amount_anomaly', weight: 0.2 },
      { factor: 'time_pattern', weight: 0.1 },
      { factor: 'geographic_risk', weight: 0.15 }
    ];
  }

  calculateBonusPoints(orderItems, orderAmount) {
    let bonusPoints = 0;

    // Category-based bonus points
    for (const item of orderItems) {
      if (item.category === 'electronics') bonusPoints += item.quantity * 2;
      else if (item.category === 'premium') bonusPoints += item.quantity * 3;
    }

    // Order size bonus
    if (orderAmount > 500) bonusPoints += 50;
    else if (orderAmount > 200) bonusPoints += 20;

    return bonusPoints;
  }

  calculateUserTier(lifetimeValue, totalPoints) {
    if (lifetimeValue > 10000 && totalPoints > 5000) return 'platinum';
    else if (lifetimeValue > 5000 && totalPoints > 2500) return 'gold';
    else if (lifetimeValue > 1000 && totalPoints > 500) return 'silver';
    else return 'bronze';
  }

  updateTransactionMetrics(transactionResults, collection, operation, result = {}) {
    // Normalize counts across updateOne, insertOne, and bulkWrite result shapes
    const modifiedCount = (result.modifiedCount || 0) + (result.upsertedCount || 0);
    const insertedCount = result.insertedCount || (result.insertedId ? 1 : 0);
    const documentsModified = modifiedCount + insertedCount || 1;

    transactionResults.operations.push({
      collection: collection,
      operation: operation,
      documentsModified: documentsModified,
      timestamp: new Date()
    });

    transactionResults.metrics.documentsModified += documentsModified;

    const uniqueCollections = new Set(transactionResults.operations.map(op => op.collection));
    transactionResults.metrics.collectionsAffected = uniqueCollections.size;
  }

  // Advanced transaction patterns and error handling

  async executeWithRetry(transactionFunction, maxRetries = 3) {
    let lastError;

    for (let attempt = 1; attempt <= maxRetries; attempt++) {
      try {
        return await transactionFunction();
      } catch (error) {
        lastError = error;

        // Check if error is retryable
        if (this.isRetryableError(error) && attempt < maxRetries) {
          const delay = this.retryConfig.retryDelayMs * Math.pow(this.retryConfig.backoffFactor, attempt - 1);
          console.log(`Transaction attempt ${attempt} failed, retrying in ${delay}ms: ${error.message}`);
          await new Promise(resolve => setTimeout(resolve, delay));
          continue;
        }

        throw error;
      }
    }

    throw lastError;
  }

  isRetryableError(error) {
    // Transient error labels attached by the MongoDB driver to retryable transaction errors
    const retryableErrorLabels = [
      'TransientTransactionError',
      'UnknownTransactionCommitResult'
    ];

    // Server error code names that typically indicate a retryable write conflict
    const retryableCodeNames = ['WriteConflict', 'LockTimeout'];

    const hasRetryableLabel = typeof error.hasErrorLabel === 'function' &&
      retryableErrorLabels.some(label => error.hasErrorLabel(label));

    return hasRetryableLabel || retryableCodeNames.includes(error.codeName);
  }

  async getTransactionStatus(transactionId) {
    // Check transaction completion status across collections
    const collections = ['orders', 'payments', 'audit'];
    const status = {};

    for (const collectionName of collections) {
      const collection = this.collections[collectionName];
      const document = await collection.findOne({ 
        $or: [
          { _id: transactionId },
          { transactionId: transactionId },
          { orderId: transactionId }
        ]
      });

      status[collectionName] = document ? 'completed' : 'missing';
    }

    return status;
  }

  async close() {
    // Close database connections
    if (client) {
      await client.close();
    }
  }
}

// Benefits of MongoDB Multi-Document ACID Transactions:
// - Full ACID compliance with automatic rollback on transaction failure
// - Multi-document atomicity across collections within single database
// - Strong consistency guarantees with configurable read and write concerns
// - Built-in retry logic for transient errors and network issues
// - Automatic deadlock detection and resolution
// - Snapshot isolation preventing dirty reads and write conflicts
// - Comprehensive transaction state management without application complexity
// - Performance optimization through write batching and connection pooling
// - Cross-shard transaction support in sharded environments
// - SQL-compatible transaction management through QueryLeaf integration

module.exports = {
  MongoTransactionManager
};
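
A hypothetical invocation of the exported manager might look like the following. The orderData fields mirror what processComplexOrderTransaction reads (userId, items, totalAmount, paymentMethod, shippingAddress), but every value shown is an illustrative assumption, and the client, db, and class defined in the block above are assumed to be in scope.

// Illustrative usage of MongoTransactionManager (all values are assumptions)
async function main() {
  await client.connect();
  const manager = new MongoTransactionManager(db);

  const result = await manager.processComplexOrderTransaction({
    userId: 'user_12345',
    items: [
      { productId: 'prod_001', quantity: 2, price: 49.99, category: 'electronics' }
    ],
    totalAmount: 99.98,
    paymentMethod: 'credit_card',
    shippingAddress: { line1: '123 Main St', city: 'Springfield', country: 'US' },
    source: 'web'
  });

  if (result.success) {
    console.log(`Order ${result.transactionId} committed in ${result.metrics.durationMs}ms`);
  } else {
    console.error(`Order failed and was rolled back: ${result.error.message}`);
  }

  await manager.close();
}

main().catch(console.error);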

Understanding MongoDB Transaction Architecture

Advanced Transaction Patterns and Error Handling

Implement sophisticated transaction management for production applications:

// Production-ready transaction patterns with advanced error handling and monitoring
class ProductionTransactionManager extends MongoTransactionManager {
  constructor(db, config = {}) {
    super(db);

    // Additional collections referenced by the production transaction patterns below
    this.collections.transfers = db.collection('transfers');
    this.collections.bulk_operations = db.collection('bulk_operations');
    this.collections.transaction_metrics = db.collection('transaction_metrics');
    this.collections.transaction_failures = db.collection('transaction_failures');

    this.productionConfig = {
      ...config,
      transactionTimeoutMs: config.transactionTimeoutMs || 60000,
      maxConcurrentTransactions: config.maxConcurrentTransactions || 100,
      deadlockDetectionEnabled: true,
      performanceMonitoringEnabled: true,
      automaticRetryEnabled: true
    };

    this.activeTransactions = new Map();
    this.transactionMetrics = new Map();
    this.deadlockDetector = new DeadlockDetector();
  }

  async executeBusinessTransaction(transactionType, transactionData, options = {}) {
    console.log(`Executing ${transactionType} business transaction...`);

    const transactionContext = {
      id: require('crypto').randomUUID(),
      type: transactionType,
      data: transactionData,
      options: options,
      startTime: new Date(),
      status: 'started',
      retryCount: 0,
      operations: [],
      checkpoints: []
    };

    // Register active transaction
    this.activeTransactions.set(transactionContext.id, transactionContext);

    try {
      // Execute transaction with comprehensive error handling
      const result = await this.executeWithComprehensiveRetry(async () => {
        return await this.executeTransactionByType(transactionContext);
      }, transactionContext);

      transactionContext.status = 'completed';
      transactionContext.endTime = new Date();
      transactionContext.durationMs = transactionContext.endTime - transactionContext.startTime;

      // Record performance metrics
      await this.recordTransactionMetrics(transactionContext, result);

      console.log(`Transaction ${transactionContext.id} completed in ${transactionContext.durationMs}ms`);
      return result;

    } catch (error) {
      transactionContext.status = 'failed';
      transactionContext.endTime = new Date();
      transactionContext.error = error;

      // Record failure metrics
      await this.recordTransactionFailure(transactionContext, error);

      throw error;
    } finally {
      // Clean up active transaction
      this.activeTransactions.delete(transactionContext.id);
    }
  }

  async executeTransactionByType(transactionContext) {
    const { type, data, options } = transactionContext;

    switch (type) {
      case 'order_processing':
        return await this.processComplexOrderTransaction(data, options);

      case 'inventory_transfer':
        return await this.executeInventoryTransfer(data, transactionContext);

      case 'bulk_user_update':
        return await this.executeBulkUserUpdate(data, transactionContext);

      case 'financial_reconciliation':
        return await this.executeFinancialReconciliation(data, transactionContext);

      default:
        throw new Error(`Unknown transaction type: ${type}`);
    }
  }

  async executeInventoryTransfer(transferData, transactionContext) {
    const session = client.startSession();
    const transferResult = {
      transferId: transactionContext.id,
      sourceWarehouse: transferData.sourceWarehouse,
      targetWarehouse: transferData.targetWarehouse,
      itemsTransferred: [],
      success: false
    };

    try {
      await session.withTransaction(async () => {
        // Validate source warehouse inventory
        for (const item of transferData.items) {
          const sourceInventory = await this.collections.inventory.findOne({
            warehouseId: transferData.sourceWarehouse,
            productId: item.productId,
            availableQuantity: { $gte: item.quantity }
          }, { session });

          if (!sourceInventory) {
            throw new Error(`Insufficient inventory in source warehouse for product ${item.productId}`);
          }

          // Remove from source warehouse
          await this.collections.inventory.updateOne(
            { 
              _id: sourceInventory._id,
              availableQuantity: { $gte: item.quantity }
            },
            {
              $inc: { 
                availableQuantity: -item.quantity,
                transferOutQuantity: item.quantity
              },
              $push: {
                transferHistory: {
                  transferId: transactionContext.id,
                  type: 'outbound',
                  quantity: item.quantity,
                  targetWarehouse: transferData.targetWarehouse,
                  transferredAt: new Date()
                }
              }
            },
            { session }
          );

          // Add to target warehouse
          await this.collections.inventory.updateOne(
            {
              warehouseId: transferData.targetWarehouse,
              productId: item.productId
            },
            {
              $inc: { 
                availableQuantity: item.quantity,
                transferInQuantity: item.quantity
              },
              $push: {
                transferHistory: {
                  transferId: transactionContext.id,
                  type: 'inbound',
                  quantity: item.quantity,
                  sourceWarehouse: transferData.sourceWarehouse,
                  transferredAt: new Date()
                }
              }
            },
            { 
              upsert: true,
              session 
            }
          );

          transferResult.itemsTransferred.push({
            productId: item.productId,
            quantity: item.quantity,
            transferredAt: new Date()
          });
        }

        // Create transfer record
        await this.collections.transfers.insertOne({
          _id: transactionContext.id,
          sourceWarehouse: transferData.sourceWarehouse,
          targetWarehouse: transferData.targetWarehouse,
          items: transferResult.itemsTransferred,
          status: 'completed',
          transferredAt: new Date(),
          transferredBy: transferData.transferredBy
        }, { session });

      }, this.transactionConfig);

      transferResult.success = true;
      return transferResult;

    } finally {
      await session.endSession();
    }
  }

  async executeBulkUserUpdate(updateData, transactionContext) {
    const session = client.startSession();
    const updateResult = {
      updateId: transactionContext.id,
      usersUpdated: 0,
      updatesFailed: 0,
      success: false
    };

    try {
      await session.withTransaction(async () => {
        const bulkOperations = [];

        // Build bulk operations
        for (const userUpdate of updateData.updates) {
          bulkOperations.push({
            updateOne: {
              filter: { _id: userUpdate.userId },
              update: {
                $set: userUpdate.updates,
                $push: {
                  updateHistory: {
                    updateId: transactionContext.id,
                    updates: userUpdate.updates,
                    updatedAt: new Date(),
                    updatedBy: updateData.updatedBy
                  }
                }
              }
            }
          });
        }

        // Execute bulk operation within transaction
        const bulkResult = await this.collections.users.bulkWrite(
          bulkOperations,
          { session, ordered: false }
        );

        updateResult.usersUpdated = bulkResult.modifiedCount;
        updateResult.updatesFailed = updateData.updates.length - bulkResult.modifiedCount;

        // Log bulk update
        await this.collections.bulk_operations.insertOne({
          _id: transactionContext.id,
          operationType: 'bulk_user_update',
          targetCount: updateData.updates.length,
          successCount: bulkResult.modifiedCount,
          failureCount: updateResult.updatesFailed,
          executedAt: new Date(),
          executedBy: updateData.updatedBy
        }, { session });

      }, this.transactionConfig);

      updateResult.success = true;
      return updateResult;

    } finally {
      await session.endSession();
    }
  }

  async executeWithComprehensiveRetry(transactionFunction, transactionContext) {
    let lastError;
    const maxRetries = this.productionConfig.maxRetries || 3;

    for (let attempt = 1; attempt <= maxRetries; attempt++) {
      try {
        transactionContext.retryCount = attempt - 1;
        return await transactionFunction();
      } catch (error) {
        lastError = error;

        // Analyze error and determine retry strategy
        const retryDecision = await this.analyzeErrorForRetry(error, attempt, maxRetries, transactionContext);

        if (retryDecision.shouldRetry) {
          console.log(`Transaction ${transactionContext.id} attempt ${attempt} failed, retrying: ${error.message}`);
          await this.executeRetryDelay(retryDecision.delayMs);
          continue;
        }

        // Error is not retryable or max retries reached
        break;
      }
    }

    // All retries exhausted
    console.error(`Transaction ${transactionContext.id} failed after ${maxRetries} attempts`);
    throw lastError;
  }

  async analyzeErrorForRetry(error, attempt, maxRetries, transactionContext) {
    const retryableErrors = [
      'TransientTransactionError',
      'UnknownTransactionCommitResult',
      'WriteConflict',
      'TemporarilyUnavailable'
    ];

    const isTransientError = error.hasErrorLabel && 
      retryableErrors.some(label => error.hasErrorLabel(label));

    const isTimeoutError = error.code === 50 || error.codeName === 'MaxTimeMSExpired';
    const isNetworkError = error.name === 'MongoNetworkError';

    // Check for deadlock
    const isDeadlock = await this.deadlockDetector.isDeadlock(error, transactionContext);
    if (isDeadlock) {
      await this.resolveDeadlock(transactionContext);
    }

    const shouldRetry = (isTransientError || isTimeoutError || isNetworkError || isDeadlock) && 
                       attempt < maxRetries;

    let delayMs = 100;
    if (shouldRetry) {
      // Exponential backoff with jitter
      const baseDelay = this.retryConfig.retryDelayMs || 100;
      const backoffFactor = this.retryConfig.backoffFactor || 2;
      delayMs = baseDelay * Math.pow(backoffFactor, attempt - 1);

      // Add jitter to prevent thundering herd
      delayMs += Math.random() * 50;
    }

    return {
      shouldRetry: shouldRetry,
      delayMs: delayMs,
      errorType: isTransientError ? 'transient' : 
                isTimeoutError ? 'timeout' : 
                isNetworkError ? 'network' : 
                isDeadlock ? 'deadlock' : 'permanent'
    };
  }

  async executeRetryDelay(delayMs) {
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }

  async recordTransactionMetrics(transactionContext, result) {
    const metrics = {
      transactionId: transactionContext.id,
      transactionType: transactionContext.type,
      durationMs: transactionContext.durationMs,
      retryCount: transactionContext.retryCount,
      operationCount: transactionContext.operations.length,
      documentsModified: result.metrics?.documentsModified || 0,
      collectionsAffected: result.metrics?.collectionsAffected || 0,
      success: true,
      recordedAt: new Date()
    };

    await this.collections.transaction_metrics.insertOne(metrics);

    // Update running averages
    this.updateRunningMetrics(transactionContext.type, metrics);
  }

  async recordTransactionFailure(transactionContext, error) {
    const failureMetrics = {
      transactionId: transactionContext.id,
      transactionType: transactionContext.type,
      durationMs: transactionContext.endTime - transactionContext.startTime,
      retryCount: transactionContext.retryCount,
      errorType: error.name,
      errorCode: error.code,
      errorMessage: error.message,
      success: false,
      recordedAt: new Date()
    };

    await this.collections.transaction_failures.insertOne(failureMetrics);
  }

  updateRunningMetrics(transactionType, metrics) {
    if (!this.transactionMetrics.has(transactionType)) {
      this.transactionMetrics.set(transactionType, {
        totalTransactions: 0,
        totalDurationMs: 0,
        successfulTransactions: 0,
        averageDurationMs: 0
      });
    }

    const typeMetrics = this.transactionMetrics.get(transactionType);
    typeMetrics.totalTransactions++;
    typeMetrics.totalDurationMs += metrics.durationMs;

    if (metrics.success) {
      typeMetrics.successfulTransactions++;
    }

    typeMetrics.averageDurationMs = typeMetrics.totalDurationMs / typeMetrics.totalTransactions;
  }

  getTransactionMetrics(transactionType = null) {
    if (transactionType) {
      return this.transactionMetrics.get(transactionType) || null;
    }

    return Object.fromEntries(this.transactionMetrics);
  }

  async resolveDeadlock(transactionContext) {
    console.log(`Resolving deadlock for transaction ${transactionContext.id}`);

    // Implement deadlock resolution strategy
    // This could involve backing off, reordering operations, or other strategies
    const delayMs = Math.random() * 1000; // Random delay to break deadlock
    await this.executeRetryDelay(delayMs);
  }
}

// Deadlock detection system
class DeadlockDetector {
  constructor() {
    this.waitForGraph = new Map();
    this.transactionLocks = new Map();
  }

  async isDeadlock(error, transactionContext) {
    // Simplified deadlock detection based on error patterns
    const deadlockIndicators = [
      'LockTimeout',
      'WriteConflict', 
      'DeadlockDetected'
    ];

    return error.codeName && deadlockIndicators.includes(error.codeName);
  }

  async detectDeadlockCycle(transactionId) {
    // Implement cycle detection in wait-for graph
    // This is a simplified implementation
    const visited = new Set();
    const recursionStack = new Set();

    const hasCycle = (node) => {
      visited.add(node);
      recursionStack.add(node);

      const dependencies = this.waitForGraph.get(node) || [];
      for (const dependency of dependencies) {
        if (!visited.has(dependency)) {
          if (hasCycle(dependency)) return true;
        } else if (recursionStack.has(dependency)) {
          return true;
        }
      }

      recursionStack.delete(node);
      return false;
    };

    return hasCycle(transactionId);
  }
}
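
A short usage sketch shows the intended call pattern for the production manager. The transfer payload fields (sourceWarehouse, targetWarehouse, items, transferredBy) mirror what executeInventoryTransfer expects, while the warehouse IDs, product IDs, and config values are illustrative assumptions; the connected client and db from the earlier block are assumed to be in scope.

// Illustrative usage of ProductionTransactionManager (all values are assumptions)
async function runInventoryTransfer() {
  const manager = new ProductionTransactionManager(db, {
    transactionTimeoutMs: 30000,
    maxRetries: 3
  });

  const transferResult = await manager.executeBusinessTransaction('inventory_transfer', {
    sourceWarehouse: 'wh_east_01',
    targetWarehouse: 'wh_west_02',
    transferredBy: 'ops_user_42',
    items: [
      { productId: 'prod_001', quantity: 25 },
      { productId: 'prod_002', quantity: 10 }
    ]
  });

  console.log(`Transfer ${transferResult.transferId} moved ${transferResult.itemsTransferred.length} line items`);
  console.log('Running metrics:', manager.getTransactionMetrics('inventory_transfer'));
}

runInventoryTransfer().catch(console.error);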

SQL-Style Transaction Management with QueryLeaf

QueryLeaf provides familiar SQL syntax for MongoDB transaction management and ACID operations:

-- QueryLeaf transaction management with SQL-familiar syntax

-- Begin complex multi-document transaction with ACID guarantees
BEGIN TRANSACTION order_processing WITH (
  isolation_level = 'snapshot',
  write_concern = { w = 'majority', j = true },
  read_concern = { level = 'majority' },
  timeout = '60 seconds',
  retry_policy = {
    max_attempts = 3,
    backoff_strategy = 'exponential',
    base_delay = '100ms'
  }
);

-- Transaction Operation 1: Validate and update user account
UPDATE users 
SET 
  credit_used = credit_used + @order_total,
  total_order_value = total_order_value + @order_total,
  order_count = order_count + 1,
  last_order_date = CURRENT_TIMESTAMP,
  daily_spending = ARRAY_APPEND(
    daily_spending,
    DOCUMENT(
      'date', CURRENT_DATE,
      'amount', @order_total
    )
  )
WHERE _id = @user_id 
  AND credit_limit - credit_used >= @order_total
  AND (
    SELECT amount 
    FROM UNNEST(daily_spending) AS ds 
    WHERE ds.date = CURRENT_DATE
  ) + @order_total <= daily_spending_limit;

-- Verify user update succeeded
IF @@ROWCOUNT = 0 THEN
  ROLLBACK TRANSACTION;
  THROW 'INSUFFICIENT_CREDIT', 'User does not have sufficient credit or daily limit exceeded';
END IF;

-- Transaction Operation 2: Apply promotions and calculate discounts
WITH applicable_promotions AS (
  SELECT 
    p._id as promotion_id,
    p.name as promotion_name,
    p.type as discount_type,
    p.discount_percentage,
    p.discount_amount,
    p.max_discount,

    -- Calculate discount amount based on promotion type
    CASE p.type
      WHEN 'percentage' THEN 
        LEAST(@order_total * p.discount_percentage / 100, COALESCE(p.max_discount, @order_total))
      WHEN 'fixed_amount' THEN 
        LEAST(p.discount_amount, @order_total)
      ELSE 0
    END as calculated_discount

  FROM promotions p
  WHERE p.status = 'active'
    AND p.start_date <= CURRENT_TIMESTAMP
    AND p.end_date >= CURRENT_TIMESTAMP
    AND (@order_total >= p.minimum_order_amount OR p.minimum_order_amount IS NULL)
    AND (
      p.applicable_to_all = true OR
      @user_id = ANY(p.applicable_to_users) OR
      @user_tier = ANY(p.applicable_to_user_tiers)
    )
  ORDER BY calculated_discount DESC
  LIMIT 3  -- Apply maximum 3 promotions
)

UPDATE promotions 
SET 
  usage_count = usage_count + 1,
  recent_usage = ARRAY_APPEND(
    recent_usage,
    DOCUMENT(
      'user_id', @user_id,
      'order_id', @transaction_id,
      'discount_amount', ap.calculated_discount,
      'used_at', CURRENT_TIMESTAMP
    )
  )
FROM applicable_promotions ap
WHERE promotions._id = ap.promotion_id;

-- Calculate final order amount after discounts
SET @final_order_total = @order_total - (
  SELECT COALESCE(SUM(calculated_discount), 0) 
  FROM applicable_promotions
);

-- Transaction Operation 3: Reserve inventory with FIFO allocation
WITH inventory_allocation AS (
  SELECT 
    i._id as inventory_id,
    i.product_id,
    i.warehouse_location,
    i.available_quantity,
    i.unit_cost,
    oi.requested_quantity,

    -- Calculate allocation using FIFO
    ROW_NUMBER() OVER (
      PARTITION BY i.product_id 
      ORDER BY i.created_at ASC
    ) as allocation_order,

    -- Running total for allocation
    SUM(i.available_quantity) OVER (
      PARTITION BY i.product_id 
      ORDER BY i.created_at ASC 
      ROWS UNBOUNDED PRECEDING
    ) as cumulative_available

  FROM inventory i
  JOIN UNNEST(@order_items) AS oi ON i.product_id = oi.product_id
  WHERE i.available_quantity > 0 
    AND i.status = 'active'
),

allocation_plan AS (
  SELECT 
    inventory_id,
    product_id,
    warehouse_location,
    requested_quantity,

    -- Calculate exact quantity to allocate from each inventory record
    CASE 
      WHEN cumulative_available - available_quantity >= requested_quantity THEN 0
      WHEN cumulative_available >= requested_quantity THEN 
        requested_quantity - (cumulative_available - available_quantity)
      ELSE available_quantity
    END as quantity_to_allocate,

    unit_cost

  FROM inventory_allocation
  WHERE cumulative_available > 
    LAG(cumulative_available, 1, 0) OVER (PARTITION BY product_id ORDER BY allocation_order)
)

-- Execute inventory reservations
UPDATE inventory 
SET 
  available_quantity = available_quantity - ap.quantity_to_allocate,
  reserved_quantity = reserved_quantity + ap.quantity_to_allocate,
  reservations = ARRAY_APPEND(
    reservations,
    DOCUMENT(
      'reservation_id', CONCAT(@transaction_id, '_', ap.product_id, '_', ap.inventory_id),
      'order_id', @transaction_id,
      'quantity', ap.quantity_to_allocate,
      'reserved_at', CURRENT_TIMESTAMP,
      'expires_at', CURRENT_TIMESTAMP + INTERVAL '30 minutes',
      'status', 'active'
    )
  )
FROM allocation_plan ap
WHERE inventory._id = ap.inventory_id
  AND inventory.available_quantity >= ap.quantity_to_allocate;

-- Verify all inventory was successfully reserved
IF (
  SELECT SUM(quantity_to_allocate) FROM allocation_plan
) != (
  SELECT SUM(requested_quantity) FROM UNNEST(@order_items)
) THEN
  ROLLBACK TRANSACTION;
  THROW 'INSUFFICIENT_INVENTORY', 'Unable to reserve sufficient inventory for all items';
END IF;

-- Transaction Operation 4: Process payment with fraud detection
WITH fraud_assessment AS (
  SELECT 
    @user_id as user_id,
    @final_order_total as order_amount,

    -- Calculate fraud score based on multiple factors
    CASE
      -- Velocity check: orders in last 24 hours
      WHEN (
        SELECT COUNT(*) 
        FROM orders 
        WHERE customer.user_id = @user_id 
          AND lifecycle.created_at >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
      ) > 5 THEN 0.3
      ELSE 0.0
    END +

    -- Amount anomaly check
    CASE
      WHEN @final_order_total > (
        SELECT AVG(pricing.final_amount) * 3 
        FROM orders 
        WHERE customer.user_id = @user_id
      ) THEN 0.2
      ELSE 0.0
    END +

    -- Time pattern check (unusual hours)
    CASE
      WHEN EXTRACT(HOUR FROM CURRENT_TIMESTAMP) BETWEEN 2 AND 6 THEN 0.1
      ELSE 0.0
    END +

    -- Geographic risk check
    CASE
      WHEN @ip_country != (SELECT country FROM users WHERE _id = @user_id) THEN 0.15
      ELSE 0.0
    END as fraud_score
)

-- Insert payment record with fraud assessment
INSERT INTO payments (
  _id,
  order_id,
  user_id,
  amount,
  original_amount,
  payment_method,
  authorization_id,
  capture_id,
  fraud_score,
  risk_assessment,
  status,
  processed_at
)
SELECT 
  CONCAT('payment_', @transaction_id),
  @transaction_id,
  @user_id,
  @final_order_total,
  @order_total,
  @payment_method,
  CONCAT('auth_', GENERATE_UUID()),
  CONCAT('capture_', GENERATE_UUID()),
  fa.fraud_score,
  DOCUMENT(
    'score', fa.fraud_score,
    'factors', ARRAY[
      'velocity_check',
      'amount_anomaly', 
      'time_pattern',
      'geographic_risk'
    ],
    'recommendation', 
    CASE WHEN fa.fraud_score > 0.5 THEN 'review' ELSE 'approve' END
  ),
  'completed',
  CURRENT_TIMESTAMP
FROM fraud_assessment fa
WHERE fa.fraud_score <= 0.8; -- Reject transactions with high fraud scores

-- Verify payment was processed (not rejected for fraud)
IF @@ROWCOUNT = 0 THEN
  ROLLBACK TRANSACTION;
  THROW 'FRAUD_DETECTED', 'Transaction flagged for potential fraud and rejected';
END IF;

-- Transaction Operation 5: Create comprehensive order document
INSERT INTO orders (
  _id,
  order_number,

  -- Customer information
  customer,

  -- Order items with inventory allocation
  items,

  -- Pricing breakdown  
  pricing,

  -- Payment information
  payment,

  -- Inventory allocation details
  inventory,

  -- Order lifecycle tracking
  status,
  lifecycle,

  -- Shipping information
  shipping,

  -- Transaction metadata
  transaction
)
VALUES (
  @transaction_id,
  CONCAT('ORD-', UNIX_TIMESTAMP(), '-', UPPER(RANDOM_STRING(6))),

  -- Customer document
  DOCUMENT(
    'user_id', @user_id,
    'email', (SELECT email FROM users WHERE _id = @user_id),
    'tier', (SELECT tier FROM users WHERE _id = @user_id),
    'is_returning_customer', (SELECT order_count > 0 FROM users WHERE _id = @user_id)
  ),

  -- Items with allocation details
  (
    SELECT ARRAY_AGG(
      DOCUMENT(
        'product_id', oi.product_id,
        'quantity', oi.quantity,
        'price', oi.price,
        'allocation', (
          SELECT ARRAY_AGG(
            DOCUMENT(
              'inventory_id', ap.inventory_id,
              'warehouse_location', ap.warehouse_location,
              'quantity', ap.quantity_to_allocate,
              'unit_cost', ap.unit_cost
            )
          )
          FROM allocation_plan ap
          WHERE ap.product_id = oi.product_id
        )
      )
    )
    FROM UNNEST(@order_items) AS oi
  ),

  -- Pricing breakdown document
  DOCUMENT(
    'subtotal', @order_total,
    'discounts', (
      SELECT ARRAY_AGG(
        DOCUMENT(
          'promotion_id', promotion_id,
          'promotion_name', promotion_name,
          'discount_amount', calculated_discount,
          'applied_at', CURRENT_TIMESTAMP
        )
      )
      FROM applicable_promotions
    ),
    'total_discount', @order_total - @final_order_total,
    'final_amount', @final_order_total,
    'tax', @tax_amount,
    'shipping', @shipping_cost,
    'total', @final_order_total + @tax_amount + @shipping_cost
  ),

  -- Payment document
  DOCUMENT(
    'payment_id', CONCAT('payment_', @transaction_id),
    'method', @payment_method,
    'status', 'completed',
    'fraud_score', (SELECT fraud_score FROM fraud_assessment),
    'processed_at', CURRENT_TIMESTAMP
  ),

  -- Inventory allocation document
  DOCUMENT(
    'reservation_id', @transaction_id,
    'total_items_reserved', (SELECT SUM(quantity_to_allocate) FROM allocation_plan),
    'reservation_details', (
      SELECT ARRAY_AGG(
        DOCUMENT(
          'product_id', product_id,
          'requested_quantity', requested_quantity,
          'allocated_from', ARRAY_AGG(
            DOCUMENT(
              'inventory_id', inventory_id,
              'warehouse_location', warehouse_location,
              'quantity', quantity_to_allocate,
              'unit_cost', unit_cost
            )
          )
        )
      )
      FROM allocation_plan
      GROUP BY product_id, requested_quantity
    )
  ),

  -- Order status and lifecycle
  'confirmed',
  DOCUMENT(
    'created_at', CURRENT_TIMESTAMP,
    'confirmed_at', CURRENT_TIMESTAMP,
    'estimated_fulfillment_date', CURRENT_TIMESTAMP + INTERVAL '2 days',
    'estimated_delivery_date', CURRENT_TIMESTAMP + INTERVAL '7 days'
  ),

  -- Shipping information
  DOCUMENT(
    'address', @shipping_address,
    'method', COALESCE(@shipping_method, 'standard'),
    'carrier', COALESCE(@carrier, 'fedex'),
    'tracking_number', NULL
  ),

  -- Transaction metadata
  DOCUMENT(
    'transaction_id', @transaction_id,
    'source', COALESCE(@order_source, 'web'),
    'channel', COALESCE(@order_channel, 'direct'),
    'user_agent', @user_agent,
    'ip_address', @ip_address
  )
);

-- Transaction Operation 6: Update loyalty points and tier status
WITH loyalty_calculation AS (
  SELECT 
    @user_id as user_id,
    FLOOR(@final_order_total) as base_points, -- 1 point per dollar

    -- Calculate bonus points based on items and categories
    (
      SELECT COALESCE(SUM(
        CASE 
          WHEN oi.category = 'electronics' THEN oi.quantity * 2
          WHEN oi.category = 'premium' THEN oi.quantity * 3
          ELSE 0
        END
      ), 0)
      FROM UNNEST(@order_items) AS oi
    ) +

    -- Order size bonus
    CASE 
      WHEN @final_order_total > 500 THEN 50
      WHEN @final_order_total > 200 THEN 20
      ELSE 0
    END as bonus_points
),

tier_calculation AS (
  SELECT 
    lc.user_id,
    lc.base_points + lc.bonus_points as total_points_earned,

    -- Calculate new tier based on lifetime value and points
    CASE
      WHEN (
        SELECT lifetime_value + @final_order_total FROM loyalty WHERE user_id = @user_id
      ) > 10000 AND (
        SELECT total_points_earned + (lc.base_points + lc.bonus_points) FROM loyalty WHERE user_id = @user_id
      ) > 5000 THEN 'platinum'

      WHEN (
        SELECT lifetime_value + @final_order_total FROM loyalty WHERE user_id = @user_id
      ) > 5000 AND (
        SELECT total_points_earned + (lc.base_points + lc.bonus_points) FROM loyalty WHERE user_id = @user_id
      ) > 2500 THEN 'gold'

      WHEN (
        SELECT lifetime_value + @final_order_total FROM loyalty WHERE user_id = @user_id
      ) > 1000 AND (
        SELECT total_points_earned + (lc.base_points + lc.bonus_points) FROM loyalty WHERE user_id = @user_id
      ) > 500 THEN 'silver'

      ELSE 'bronze'
    END as new_tier

  FROM loyalty_calculation lc
)

-- Update loyalty points
INSERT INTO loyalty (
  user_id,
  total_points_earned,
  available_points,
  lifetime_value,
  current_tier,
  points_history,
  last_activity_at
)
SELECT 
  tc.user_id,
  tc.total_points_earned,
  tc.total_points_earned,
  @final_order_total,
  tc.new_tier,
  ARRAY[
    DOCUMENT(
      'order_id', @transaction_id,
      'points_earned', tc.total_points_earned,
      'reason', 'order_purchase',
      'earned_at', CURRENT_TIMESTAMP
    )
  ],
  CURRENT_TIMESTAMP
FROM tier_calculation tc
ON DUPLICATE KEY UPDATE
  total_points_earned = total_points_earned + tc.total_points_earned,
  available_points = available_points + tc.total_points_earned,
  lifetime_value = lifetime_value + @final_order_total,
  current_tier = tc.new_tier,
  points_history = ARRAY_APPEND(
    points_history,
    DOCUMENT(
      'order_id', @transaction_id,
      'points_earned', tc.total_points_earned,
      'reason', 'order_purchase',
      'earned_at', CURRENT_TIMESTAMP
    )
  ),
  last_activity_at = CURRENT_TIMESTAMP;

-- Update user tier if changed
UPDATE users 
SET 
  tier = tc.new_tier,
  tier_history = ARRAY_APPEND(
    tier_history,
    DOCUMENT(
      'previous_tier', (SELECT current_tier FROM loyalty WHERE user_id = @user_id),
      'new_tier', tc.new_tier,
      'upgraded_at', CURRENT_TIMESTAMP,
      'triggered_by', @transaction_id
    )
  )
FROM tier_calculation tc
WHERE users._id = @user_id 
  AND users.tier != tc.new_tier;

-- Transaction Operation 7: Create comprehensive audit trail
INSERT INTO audit (
  _id,
  transaction_id,
  audit_type,

  -- Transaction context
  context,

  -- Detailed operation log  
  operations,

  -- Financial audit trail
  financial,

  -- Inventory audit trail
  inventory_audit,

  -- Compliance data
  compliance,

  -- Performance metrics
  performance,

  -- Audit metadata
  audited_at,
  retention_date,
  status
)
VALUES (
  CONCAT('audit_', @transaction_id),
  @transaction_id,
  'order_processing',

  -- Context document
  DOCUMENT(
    'user_id', @user_id,
    'user_email', (SELECT email FROM users WHERE _id = @user_id),
    'user_tier', (SELECT tier FROM users WHERE _id = @user_id),
    'source', @order_source,
    'user_agent', @user_agent,
    'ip_address', @ip_address
  ),

  -- Operations log
  ARRAY[
    DOCUMENT('collection', 'users', 'operation', 'validateAndUpdateUserAccount', 'timestamp', CURRENT_TIMESTAMP),
    DOCUMENT('collection', 'promotions', 'operation', 'applyPromotionsAndDiscounts', 'timestamp', CURRENT_TIMESTAMP),
    DOCUMENT('collection', 'inventory', 'operation', 'reserveInventoryWithAllocation', 'timestamp', CURRENT_TIMESTAMP),
    DOCUMENT('collection', 'payments', 'operation', 'processPaymentWithFraudDetection', 'timestamp', CURRENT_TIMESTAMP),
    DOCUMENT('collection', 'orders', 'operation', 'createComprehensiveOrder', 'timestamp', CURRENT_TIMESTAMP),
    DOCUMENT('collection', 'loyalty', 'operation', 'updateUserLoyaltyAndTier', 'timestamp', CURRENT_TIMESTAMP)
  ],

  -- Financial audit
  DOCUMENT(
    'original_amount', @order_total,
    'final_amount', @final_order_total,
    'discounts_applied', (SELECT COALESCE(COUNT(*), 0) FROM applicable_promotions),
    'total_discount', @order_total - @final_order_total,
    'payment_method', @payment_method,
    'fraud_score', (SELECT fraud_score FROM fraud_assessment)
  ),

  -- Inventory audit
  DOCUMENT(
    'reservation_id', @transaction_id,
    'items_reserved', (SELECT SUM(quantity_to_allocate) FROM allocation_plan),
    'allocation_details', (
      SELECT ARRAY_AGG(
        DOCUMENT(
          'product_id', product_id,
          'quantity_allocated', SUM(quantity_to_allocate),
          'warehouse_locations', ARRAY_AGG(DISTINCT warehouse_location)
        )
      )
      FROM allocation_plan
      GROUP BY product_id
    )
  ),

  -- Compliance information
  DOCUMENT(
    'data_processing_consent', COALESCE(@data_processing_consent, false),
    'marketing_consent', COALESCE(@marketing_consent, false),
    'privacy_policy_version', COALESCE(@privacy_policy_version, '1.0'),
    'terms_of_service_version', COALESCE(@terms_of_service_version, '1.0')
  ),

  -- Performance tracking
  DOCUMENT(
    'operations_executed', 7,
    'collections_affected', 6,
    'documents_modified', @@TOTAL_DOCUMENTS_MODIFIED
  ),

  -- Audit metadata
  CURRENT_TIMESTAMP,
  CURRENT_TIMESTAMP + INTERVAL '7 years', -- Retention period
  'completed'
);

-- Commit the entire transaction atomically
COMMIT TRANSACTION order_processing;

-- Advanced transaction monitoring and analysis queries
WITH transaction_performance_analysis AS (
  SELECT 
    DATE_TRUNC('hour', audited_at) as hour_bucket,
    audit_type as transaction_type,

    -- Performance metrics
    COUNT(*) as transaction_count,
    AVG(CAST(performance->>'operations_executed' AS INTEGER)) as avg_operations,
    AVG(CAST(performance->>'collections_affected' AS INTEGER)) as avg_collections,
    AVG(CAST(performance->>'documents_modified' AS INTEGER)) as avg_documents_modified,

    -- Financial metrics
    AVG(CAST(financial->>'final_amount' AS DECIMAL)) as avg_transaction_amount,
    SUM(CAST(financial->>'final_amount' AS DECIMAL)) as total_transaction_volume,
    AVG(CAST(financial->>'fraud_score' AS DECIMAL)) as avg_fraud_score,

    -- Success rate calculation
    COUNT(*) FILTER (WHERE status = 'completed') as successful_transactions,
    COUNT(*) FILTER (WHERE status != 'completed') as failed_transactions,
    ROUND(
      COUNT(*) FILTER (WHERE status = 'completed') * 100.0 / COUNT(*), 2
    ) as success_rate_pct

  FROM audit
  WHERE audited_at >= CURRENT_TIMESTAMP - INTERVAL '24 hours'
  GROUP BY DATE_TRUNC('hour', audited_at), audit_type
),

fraud_analysis AS (
  SELECT 
    DATE_TRUNC('day', audited_at) as day_bucket,

    -- Fraud detection metrics
    COUNT(*) as total_transactions,
    COUNT(*) FILTER (WHERE CAST(financial->>'fraud_score' AS DECIMAL) > 0.5) as high_risk_transactions,
    COUNT(*) FILTER (WHERE CAST(financial->>'fraud_score' AS DECIMAL) > 0.8) as rejected_transactions,
    AVG(CAST(financial->>'fraud_score' AS DECIMAL)) as avg_fraud_score,
    MAX(CAST(financial->>'fraud_score' AS DECIMAL)) as max_fraud_score,

    -- Risk distribution
    COUNT(*) FILTER (WHERE CAST(financial->>'fraud_score' AS DECIMAL) < 0.2) as low_risk,
    COUNT(*) FILTER (WHERE CAST(financial->>'fraud_score' AS DECIMAL) >= 0.2 AND CAST(financial->>'fraud_score' AS DECIMAL) < 0.5) as medium_risk,
    COUNT(*) FILTER (WHERE CAST(financial->>'fraud_score' AS DECIMAL) >= 0.5 AND CAST(financial->>'fraud_score' AS DECIMAL) <= 0.8) as high_risk,
    COUNT(*) FILTER (WHERE CAST(financial->>'fraud_score' AS DECIMAL) > 0.8) as critical_risk

  FROM audit
  WHERE audited_at >= CURRENT_TIMESTAMP - INTERVAL '30 days'
    AND audit_type = 'order_processing'
  GROUP BY DATE_TRUNC('day', audited_at)
),

inventory_impact_analysis AS (
  SELECT 
    JSON_EXTRACT(inv_detail.value, '$.product_id') as product_id,

    -- Inventory allocation metrics
    SUM(CAST(JSON_EXTRACT(inv_detail.value, '$.quantity_allocated') AS INTEGER)) as total_allocated,
    COUNT(DISTINCT transaction_id) as allocation_transactions,
    AVG(CAST(JSON_EXTRACT(inv_detail.value, '$.quantity_allocated') AS INTEGER)) as avg_allocation_per_transaction,

    -- Warehouse distribution
    COUNT(DISTINCT JSON_EXTRACT(loc.value, '$')) as warehouses_used,
    JSON_ARRAYAGG(DISTINCT JSON_EXTRACT(loc.value, '$')) as warehouse_list

  FROM audit,
    JSON_TABLE(
      inventory_audit->'$.allocation_details', '$[*]'
      COLUMNS (
        value JSON PATH '$'
      )
    ) as inv_detail,
    JSON_TABLE(
      JSON_EXTRACT(inv_detail.value, '$.warehouse_locations'), '$[*]'
      COLUMNS (
        value JSON PATH '$'
      )
    ) as loc
  WHERE audited_at >= CURRENT_TIMESTAMP - INTERVAL '7 days'
    AND audit_type = 'order_processing'
  GROUP BY JSON_EXTRACT(inv_detail.value, '$.product_id')
  ORDER BY total_allocated DESC
  LIMIT 20
)

-- Comprehensive transaction monitoring dashboard
SELECT 
  'PERFORMANCE_SUMMARY' as metric_type,
  tpa.hour_bucket,
  tpa.transaction_type,
  tpa.transaction_count,
  tpa.avg_operations,
  tpa.avg_transaction_amount,
  tpa.success_rate_pct,

  -- Performance grading
  CASE 
    WHEN tpa.success_rate_pct >= 99.5 AND tpa.avg_operations <= 10 THEN 'EXCELLENT'
    WHEN tpa.success_rate_pct >= 99.0 AND tpa.avg_operations <= 15 THEN 'GOOD'
    WHEN tpa.success_rate_pct >= 95.0 THEN 'ACCEPTABLE'
    ELSE 'NEEDS_IMPROVEMENT'
  END as performance_grade

FROM transaction_performance_analysis tpa

UNION ALL

SELECT 
  'FRAUD_SUMMARY' as metric_type,
  fa.day_bucket::timestamp,
  'fraud_analysis',
  fa.total_transactions,
  fa.avg_fraud_score,
  fa.max_fraud_score,
  ROUND(fa.rejected_transactions * 100.0 / fa.total_transactions, 2) as rejection_rate_pct,

  -- Risk level assessment
  CASE
    WHEN fa.avg_fraud_score < 0.2 THEN 'LOW_RISK'
    WHEN fa.avg_fraud_score < 0.5 THEN 'MEDIUM_RISK'  
    WHEN fa.avg_fraud_score < 0.8 THEN 'HIGH_RISK'
    ELSE 'CRITICAL_RISK'
  END as risk_level

FROM fraud_analysis fa

UNION ALL

SELECT 
  'INVENTORY_SUMMARY' as metric_type,
  CURRENT_TIMESTAMP,
  'inventory_allocation',
  iia.allocation_transactions,
  iia.total_allocated,
  iia.avg_allocation_per_transaction,
  iia.warehouses_used,

  -- Allocation efficiency
  CASE
    WHEN iia.warehouses_used = 1 THEN 'SINGLE_WAREHOUSE'
    WHEN iia.warehouses_used <= 3 THEN 'EFFICIENT_DISTRIBUTION'
    ELSE 'FRAGMENTED_ALLOCATION'
  END as allocation_pattern

FROM inventory_impact_analysis iia
ORDER BY metric_type, hour_bucket DESC;

-- Real-time transaction health monitoring
CREATE MATERIALIZED VIEW transaction_health_dashboard AS
WITH real_time_metrics AS (
  SELECT 
    DATE_TRUNC('minute', audited_at) as minute_bucket,
    audit_type,

    -- Real-time performance metrics
    COUNT(*) as transactions_per_minute,
    AVG(CAST(performance->>'operations_executed' AS INTEGER)) as avg_operations,
    COUNT(*) FILTER (WHERE status = 'completed') as successful_transactions,
    COUNT(*) FILTER (WHERE status != 'completed') as failed_transactions,

    -- Financial metrics
    SUM(CAST(financial->>'final_amount' AS DECIMAL)) as revenue_per_minute,
    AVG(CAST(financial->>'fraud_score' AS DECIMAL)) as avg_fraud_score,

    -- Operational metrics
    AVG(CAST(performance->>'documents_modified' AS INTEGER)) as avg_documents_per_transaction

  FROM audit
  WHERE audited_at >= CURRENT_TIMESTAMP - INTERVAL '1 hour'
  GROUP BY DATE_TRUNC('minute', audited_at), audit_type
),

health_indicators AS (
  SELECT 
    minute_bucket,
    audit_type,
    transactions_per_minute,
    successful_transactions,
    failed_transactions,
    revenue_per_minute,
    avg_fraud_score,

    -- Calculate success rate
    CASE WHEN transactions_per_minute > 0 THEN
      ROUND(successful_transactions * 100.0 / transactions_per_minute, 2)
    ELSE 0 END as success_rate,

    -- Detect anomalies
    CASE 
      WHEN failed_transactions > successful_transactions THEN 'CRITICAL_FAILURE_RATE'
      WHEN successful_transactions = 0 AND transactions_per_minute > 0 THEN 'COMPLETE_FAILURE'
      WHEN avg_fraud_score > 0.6 THEN 'HIGH_FRAUD_ACTIVITY' 
      WHEN transactions_per_minute > 100 THEN 'HIGH_VOLUME_ALERT'
      WHEN transactions_per_minute = 0 AND EXTRACT(HOUR FROM CURRENT_TIMESTAMP) BETWEEN 9 AND 21 THEN 'NO_TRANSACTIONS_ALERT'
      ELSE 'NORMAL'
    END as health_status,

    -- Performance trend
    LAG(successful_transactions) OVER (
      PARTITION BY audit_type 
      ORDER BY minute_bucket
    ) as prev_minute_success,

    LAG(failed_transactions) OVER (
      PARTITION BY audit_type 
      ORDER BY minute_bucket  
    ) as prev_minute_failures

  FROM real_time_metrics
)

SELECT 
  minute_bucket,
  audit_type,
  transactions_per_minute,
  success_rate,
  revenue_per_minute,
  health_status,
  avg_fraud_score,

  -- Trend analysis
  CASE 
    WHEN prev_minute_success IS NOT NULL THEN
      successful_transactions - prev_minute_success
    ELSE 0
  END as success_trend,

  CASE 
    WHEN prev_minute_failures IS NOT NULL THEN  
      failed_transactions - prev_minute_failures
    ELSE 0
  END as failure_trend,

  -- Alert priority
  CASE health_status
    WHEN 'COMPLETE_FAILURE' THEN 1
    WHEN 'CRITICAL_FAILURE_RATE' THEN 2
    WHEN 'HIGH_FRAUD_ACTIVITY' THEN 3
    WHEN 'HIGH_VOLUME_ALERT' THEN 4
    WHEN 'NO_TRANSACTIONS_ALERT' THEN 5
    ELSE 10
  END as alert_priority,

  -- Recommendations
  CASE health_status
    WHEN 'COMPLETE_FAILURE' THEN 'IMMEDIATE: Check system connectivity and database status'
    WHEN 'CRITICAL_FAILURE_RATE' THEN 'HIGH: Review error logs and investigate transaction failures'
    WHEN 'HIGH_FRAUD_ACTIVITY' THEN 'MEDIUM: Review fraud detection rules and recent transactions'
    WHEN 'HIGH_VOLUME_ALERT' THEN 'LOW: Monitor system resources and scaling capabilities'
    WHEN 'NO_TRANSACTIONS_ALERT' THEN 'MEDIUM: Check application availability and user access'
    ELSE 'Continue monitoring'
  END as recommendation

FROM health_indicators
WHERE minute_bucket >= CURRENT_TIMESTAMP - INTERVAL '15 minutes'
ORDER BY alert_priority ASC, minute_bucket DESC;

-- QueryLeaf provides comprehensive transaction management:
-- 1. SQL-familiar syntax for complex MongoDB multi-document transactions
-- 2. Full ACID compliance with automatic rollback on failure
-- 3. Advanced business logic integration within transactional contexts
-- 4. Comprehensive audit trail generation with regulatory compliance
-- 5. Real-time fraud detection and risk assessment within transactions
-- 6. Sophisticated inventory allocation and reservation management
-- 7. Dynamic promotions and loyalty points calculation in transactions
-- 8. Performance monitoring and alerting for transaction health
-- 9. Automated retry logic and error handling for transient failures
-- 10. Production-ready transaction patterns with comprehensive monitoring

Best Practices for MongoDB Transaction Implementation

Transaction Design Principles

Essential guidelines for effective MongoDB transaction usage (a minimal driver-level sketch follows the list):

  1. Minimize Transaction Scope: Keep transactions as short as possible to reduce lock contention and improve performance
  2. Idempotent Operations: Design transaction operations to be safely retryable in case of transient failures
  3. Proper Error Handling: Implement comprehensive error handling with appropriate retry logic for transient errors
  4. Read and Write Concerns: Configure appropriate read and write concerns for consistency requirements
  5. Timeout Management: Set reasonable timeouts to prevent long-running transactions from blocking resources
  6. Performance Monitoring: Monitor transaction performance and identify bottlenecks or long-running operations
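
The following is a minimal, driver-level sketch of several of these principles in one place: a short transaction scope, a retryable withTransaction callback, explicit read and write concerns, and a commit timeout. The billing database, accounts collection, and field names are illustrative assumptions rather than part of the guidelines above.

const { MongoClient } = require('mongodb');

async function transferCredit(uri, fromAccountId, toAccountId, amount) {
  const client = new MongoClient(uri);
  await client.connect();

  const accounts = client.db('billing').collection('accounts');
  const session = client.startSession();

  try {
    // withTransaction retries the callback on transient transaction errors
    // and retries the commit on unknown commit results
    await session.withTransaction(async () => {
      const debit = await accounts.updateOne(
        { _id: fromAccountId, balance: { $gte: amount } },
        { $inc: { balance: -amount } },
        { session }
      );

      if (debit.modifiedCount !== 1) {
        throw new Error('Insufficient balance'); // throwing aborts the transaction
      }

      await accounts.updateOne(
        { _id: toAccountId },
        { $inc: { balance: amount } },
        { session }
      );
    }, {
      readConcern: { level: 'majority' },
      writeConcern: { w: 'majority' },
      maxCommitTimeMS: 5000 // keep the transaction short-lived
    });
  } finally {
    await session.endSession();
    await client.close();
  }
}

Keeping every read and write inside the callback, and nothing else, is what keeps the transaction scope minimal and safely retryable.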

Production Optimization Strategies

Optimize MongoDB transactions for production environments (see the sketch after this list):

  1. Connection Pooling: Use connection pooling to efficiently manage database connections across transaction sessions
  2. Index Optimization: Ensure proper indexing for all queries within transactions to minimize lock duration
  3. Batch Operations: Use bulk operations where possible to reduce the number of round trips and improve performance
  4. Monitoring and Alerting: Implement comprehensive monitoring for transaction success rates, latency, and error patterns
  5. Capacity Planning: Plan for transaction concurrency and ensure sufficient resources for peak transaction loads
  6. Testing and Validation: Regularly test transaction logic under load to identify potential issues before production
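
As a rough illustration of the connection pooling and batch operation points above, the sketch below reuses one pooled client per process and folds many inventory adjustments into a single bulkWrite round trip. The pool sizes, database, and collection names are placeholders rather than tuning recommendations.

const { MongoClient } = require('mongodb');

// One client per process: the driver maintains the connection pool internally.
const client = new MongoClient('mongodb://localhost:27017', {
  maxPoolSize: 50, // cap concurrent connections under peak transaction load
  minPoolSize: 5   // keep a few warm connections for latency-sensitive paths
});

async function applyInventoryAdjustments(adjustments) {
  const inventory = client.db('ecommerce_platform').collection('inventory');

  // Build one unordered bulk operation instead of issuing one update per item
  const operations = adjustments.map(adj => ({
    updateOne: {
      filter: { product_id: adj.productId },
      update: { $inc: { available_quantity: adj.delta } }
    }
  }));

  const result = await inventory.bulkWrite(operations, { ordered: false });
  return result.modifiedCount;
}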

Conclusion

MongoDB's multi-document ACID transactions provide comprehensive atomic operations that eliminate the complexity and consistency challenges of traditional NoSQL coordination approaches. The sophisticated transaction management, automatic retry logic, and comprehensive error handling ensure reliable business operations while maintaining the flexibility and scalability benefits of MongoDB's document model.

Key MongoDB Transaction benefits include:

  • Full ACID Compliance: Complete atomicity, consistency, isolation, and durability guarantees across multiple documents
  • Automatic Rollback: Built-in rollback functionality eliminates complex application-level coordination requirements
  • Cross-Collection Atomicity: Multi-document operations spanning multiple collections, and even databases, within a single transaction
  • Retry Logic: Intelligent retry mechanisms for transient errors and network issues
  • Performance Optimization: Advanced transaction management with connection pooling and batch operations
  • Comprehensive Monitoring: Built-in transaction metrics and monitoring capabilities for production environments

Whether you're building financial applications, e-commerce platforms, or complex workflow systems, MongoDB's ACID transactions with QueryLeaf's familiar SQL interface provide the foundation for reliable, consistent, and scalable multi-document operations.

QueryLeaf Integration: QueryLeaf automatically manages MongoDB transaction operations while providing SQL-familiar syntax for complex multi-document business logic, comprehensive error handling, and advanced transaction patterns. ACID compliance, automatic retry logic, and production monitoring capabilities are seamlessly handled through familiar SQL constructs, making sophisticated transactional applications both powerful and accessible to SQL-oriented development teams.

The combination of MongoDB's robust transaction capabilities with SQL-style operations makes it an ideal platform for applications requiring both NoSQL flexibility and traditional database transaction guarantees, ensuring your business operations maintain consistency and reliability as they scale and evolve.

MongoDB Change Streams and Real-time Event Processing: Advanced Microservices Architecture Patterns for Event-Driven Applications

Modern distributed applications require sophisticated event-driven architectures that can process real-time data changes, coordinate microservices communication, and maintain system consistency across complex distributed topologies. Traditional polling-based approaches to change detection introduce latency, resource waste, and scaling challenges that become increasingly problematic as application complexity and data volumes grow.

MongoDB Change Streams provide a powerful, efficient mechanism for building reactive applications that respond to data changes in real-time without the overhead and complexity of traditional change detection patterns. Unlike database triggers or polling-based solutions that require complex infrastructure and introduce performance bottlenecks, Change Streams offer a scalable, resumable, and ordered stream of change events that enables sophisticated event-driven architectures, microservices coordination, and real-time analytics.
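
Before contrasting this with traditional approaches, it is worth seeing how little code a basic change stream requires. The sketch below watches order inserts and updates and attaches the full post-image of each changed document; the database and collection names are illustrative.

const { MongoClient } = require('mongodb');

async function watchOrders() {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();

  const orders = client.db('ecommerce_platform').collection('orders');

  // Only surface inserts and updates, with the current version of the document attached
  const pipeline = [
    { $match: { operationType: { $in: ['insert', 'update'] } } }
  ];
  const changeStream = orders.watch(pipeline, { fullDocument: 'updateLookup' });

  for await (const change of changeStream) {
    // change._id is the resume token; persisting it allows resuming after restarts
    console.log(change.operationType, change.documentKey._id);
  }
}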

The Traditional Change Detection Challenge

Conventional change detection approaches suffer from significant limitations for real-time application requirements:

-- Traditional PostgreSQL change detection with LISTEN/NOTIFY - limited scalability and functionality

-- Basic trigger-based notification system
CREATE OR REPLACE FUNCTION notify_order_changes()
RETURNS TRIGGER AS $$
BEGIN
  IF TG_OP = 'INSERT' THEN
    PERFORM pg_notify('order_created', json_build_object(
      'operation', 'INSERT',
      'order_id', NEW.order_id,
      'user_id', NEW.user_id,
      'total_amount', NEW.total_amount,
      'timestamp', NOW()
    )::text);
    RETURN NEW;
  ELSIF TG_OP = 'UPDATE' THEN
    PERFORM pg_notify('order_updated', json_build_object(
      'operation', 'UPDATE',
      'order_id', NEW.order_id,
      'old_status', OLD.status,
      'new_status', NEW.status,
      'timestamp', NOW()
    )::text);
    RETURN NEW;
  ELSIF TG_OP = 'DELETE' THEN
    PERFORM pg_notify('order_deleted', json_build_object(
      'operation', 'DELETE',
      'order_id', OLD.order_id,
      'user_id', OLD.user_id,
      'timestamp', NOW()
    )::text);
    RETURN OLD;
  END IF;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- Attach triggers to orders table
CREATE TRIGGER order_changes_trigger
  AFTER INSERT OR UPDATE OR DELETE ON orders
  FOR EACH ROW EXECUTE FUNCTION notify_order_changes();

-- Client-side change listening with significant limitations
-- Node.js example showing polling approach complexity

const { Client } = require('pg');
const EventEmitter = require('events');

class PostgreSQLChangeListener extends EventEmitter {
  constructor(connectionConfig) {
    super();
    this.connectionConfig = connectionConfig; // retained for reconnection (see handleReconnection)
    this.client = new Client(connectionConfig);
    this.isListening = false;
    this.reconnectAttempts = 0;
    this.maxReconnectAttempts = 5;
    this.lastProcessedId = null;

    // Complex connection management required
    this.setupErrorHandlers();
  }

  async startListening() {
    try {
      await this.client.connect();

      // Listen to specific channels
      await this.client.query('LISTEN order_created');
      await this.client.query('LISTEN order_updated');
      await this.client.query('LISTEN order_deleted');
      await this.client.query('LISTEN user_activity');

      this.isListening = true;
      console.log('Started listening for database changes...');

      // Handle incoming notifications
      this.client.on('notification', async (msg) => {
        try {
          const changeData = JSON.parse(msg.payload);
          await this.processChange(msg.channel, changeData);
        } catch (error) {
          console.error('Error processing notification:', error);
          this.emit('error', error);
        }
      });

      // Poll for missed changes during disconnection
      this.startMissedChangePolling();

    } catch (error) {
      console.error('Failed to start listening:', error);
      await this.handleReconnection();
    }
  }

  async processChange(channel, changeData) {
    console.log(`Processing ${channel} change:`, changeData);

    // Complex event processing logic
    switch (channel) {
      case 'order_created':
        await this.handleOrderCreated(changeData);
        break;
      case 'order_updated':
        await this.handleOrderUpdated(changeData);
        break;
      case 'order_deleted':
        await this.handleOrderDeleted(changeData);
        break;
      default:
        console.warn(`Unknown channel: ${channel}`);
    }

    // Update processing checkpoint
    this.lastProcessedId = changeData.order_id;
  }

  async handleOrderCreated(orderData) {
    // Microservice coordination complexity
    const coordinationTasks = [
      this.notifyInventoryService(orderData),
      this.notifyPaymentService(orderData),
      this.notifyShippingService(orderData),
      this.notifyAnalyticsService(orderData),
      this.updateCustomerProfile(orderData)
    ];

    try {
      await Promise.all(coordinationTasks);
      console.log(`Successfully coordinated order creation: ${orderData.order_id}`);
    } catch (error) {
      console.error('Coordination failed:', error);
      // Complex error handling and retry logic required
      await this.handleCoordinationFailure(orderData, error);
    }
  }

  async startMissedChangePolling() {
    // Polling fallback for missed changes during disconnection
    setInterval(async () => {
      if (!this.isListening) return;

      try {
        const query = `
          SELECT 
            o.order_id,
            o.user_id,
            o.status,
            o.total_amount,
            o.created_at,
            o.updated_at,
            'order' as entity_type,
            CASE 
              WHEN o.created_at > NOW() - INTERVAL '5 minutes' THEN 'created'
              WHEN o.updated_at > NOW() - INTERVAL '5 minutes' THEN 'updated'
            END as change_type
          FROM orders o
          WHERE (o.created_at > NOW() - INTERVAL '5 minutes' 
                 OR o.updated_at > NOW() - INTERVAL '5 minutes')
            AND o.order_id > $1
          ORDER BY o.order_id
          LIMIT 1000
        `;

        const result = await this.client.query(query, [this.lastProcessedId || 0]);

        for (const row of result.rows) {
          await this.processChange(`order_${row.change_type}`, row);
        }

      } catch (error) {
        console.error('Polling error:', error);
      }
    }, 30000); // Poll every 30 seconds
  }

  async handleReconnection() {
    if (this.reconnectAttempts >= this.maxReconnectAttempts) {
      console.error('Max reconnection attempts reached');
      this.emit('fatal_error', new Error('Connection permanently lost'));
      return;
    }

    this.reconnectAttempts++;
    const delay = Math.pow(2, this.reconnectAttempts) * 1000; // Exponential backoff

    console.log(`Attempting reconnection ${this.reconnectAttempts}/${this.maxReconnectAttempts} in ${delay}ms`);

    setTimeout(async () => {
      try {
        await this.client.end();
        this.client = new Client(this.connectionConfig);
        this.setupErrorHandlers();
        await this.startListening();
        this.reconnectAttempts = 0;
      } catch (error) {
        console.error('Reconnection failed:', error);
        await this.handleReconnection();
      }
    }, delay);
  }

  setupErrorHandlers() {
    this.client.on('error', async (error) => {
      console.error('PostgreSQL connection error:', error);
      this.isListening = false;
      await this.handleReconnection();
    });

    this.client.on('end', () => {
      console.log('PostgreSQL connection ended');
      this.isListening = false;
    });
  }
}

// Problems with traditional PostgreSQL LISTEN/NOTIFY approach:
// 1. Limited payload size (8000 bytes) restricts change data detail
// 2. No guaranteed delivery - notifications lost during disconnection
// 3. No ordering guarantees across multiple channels
// 4. Complex reconnection and missed change handling logic required
// 5. Limited filtering capabilities - all listeners receive all notifications
// 6. No built-in support for change resumption from specific points
// 7. Scalability limitations with many concurrent listeners
// 8. Manual coordination required for microservices communication
// 9. Complex error handling and retry mechanisms needed
// 10. No native support for document-level change tracking

-- MySQL limitations are even more restrictive
-- MySQL basic replication events (limited functionality)
SHOW MASTER STATUS;
SHOW SLAVE STATUS;

-- MySQL binary log parsing (complex and fragile)
-- Requires external tools like Maxwell or Debezium
-- Limited change event structure and filtering
-- Complex setup and operational overhead
-- No native application-level change streams
-- Poor support for real-time event processing

MongoDB Change Streams provide comprehensive real-time change processing:

// MongoDB Change Streams - comprehensive real-time event processing with advanced patterns
const { MongoClient } = require('mongodb');
const EventEmitter = require('events');

const client = new MongoClient('mongodb://localhost:27017');
const db = client.db('ecommerce_platform');

// Advanced MongoDB Change Streams manager for microservices architecture
class MongoChangeStreamManager extends EventEmitter {
  constructor(db) {
    super();
    this.db = db;
    this.collections = {
      orders: db.collection('orders'),
      users: db.collection('users'),
      products: db.collection('products'),
      inventory: db.collection('inventory'),
      payments: db.collection('payments'),
      analytics: db.collection('analytics'),               // used by updateRealTimeMetrics()
      engagement_queue: db.collection('engagement_queue')  // used by triggerPostPurchaseEngagement()
    };

    this.changeStreams = new Map();
    this.eventProcessors = new Map();
    this.resumeTokens = new Map();
    this.processingStats = new Map();

    // Advanced configuration for production use
    this.streamConfig = {
      batchSize: 100,
      maxAwaitTimeMS: 1000,
      fullDocument: 'updateLookup',
      fullDocumentBeforeChange: 'whenAvailable',
      startAtOperationTime: null,
      resumeAfter: null
    };

    // Event processing pipeline
    this.eventQueue = [];
    this.isProcessing = false;
    this.maxQueueSize = 10000;

    this.setupEventProcessors();
  }

  async initializeChangeStreams(streamConfigurations) {
    console.log('Initializing MongoDB Change Streams for microservices architecture...');

    for (const [streamName, config] of Object.entries(streamConfigurations)) {
      try {
        console.log(`Setting up change stream: ${streamName}`);
        await this.createChangeStream(streamName, config);
      } catch (error) {
        console.error(`Failed to create change stream ${streamName}:`, error);
        this.emit('stream_error', { streamName, error });
      }
    }

    // Start event processing
    this.startEventProcessing();

    console.log(`${this.changeStreams.size} change streams initialized successfully`);
    return this.getStreamStatus();
  }

  async createChangeStream(streamName, config) {
    const {
      collection,
      pipeline = [],
      options = {},
      processor,
      resumeToken = null
    } = config;

    // Build comprehensive change stream pipeline
    const changeStreamPipeline = [
      // Stage 1: Filter by operation types if specified
      ...(config.operationTypes ? [
        { $match: { operationType: { $in: config.operationTypes } } }
      ] : []),

      // Stage 2: Document-level filtering
      ...(config.documentFilter ? [
        { $match: config.documentFilter }
      ] : []),

      // Stage 3: Field-level filtering for efficiency
      ...(config.fieldFilter ? [
        { $project: config.fieldFilter }
      ] : []),

      // Custom pipeline stages
      ...pipeline
    ];

    const streamOptions = {
      ...this.streamConfig,
      ...options,
      ...(resumeToken && { resumeAfter: resumeToken })
    };

    const targetCollection = this.collections[collection] || this.db.collection(collection);
    const changeStream = targetCollection.watch(changeStreamPipeline, streamOptions);

    // Configure change stream event handlers
    this.setupChangeStreamHandlers(streamName, changeStream, processor);

    this.changeStreams.set(streamName, {
      stream: changeStream,
      collection: collection,
      processor: processor,
      config: config,
      stats: {
        eventsProcessed: 0,
        errors: 0,
        lastEventTime: null,
        startTime: new Date()
      }
    });

    console.log(`Change stream '${streamName}' created for collection '${collection}'`);
    return changeStream;
  }

  setupChangeStreamHandlers(streamName, changeStream, processor) {
    changeStream.on('change', async (changeDoc) => {
      try {
        // Extract resume token for fault tolerance
        this.resumeTokens.set(streamName, changeDoc._id);

        // Add comprehensive change metadata
        const enhancedChange = {
          ...changeDoc,
          streamName: streamName,
          receivedAt: new Date(),
          processingMetadata: {
            retryCount: 0,
            priority: this.calculateEventPriority(changeDoc),
            correlationId: this.generateCorrelationId(changeDoc),
            traceId: this.generateTraceId()
          }
        };

        // Queue for processing
        await this.queueChangeEvent(enhancedChange, processor);

        // Update statistics
        this.updateStreamStats(streamName, 'event_received');

      } catch (error) {
        console.error(`Error handling change in stream ${streamName}:`, error);
        this.updateStreamStats(streamName, 'error');
        this.emit('change_error', { streamName, error, changeDoc });
      }
    });

    changeStream.on('error', async (error) => {
      console.error(`Change stream ${streamName} error:`, error);
      this.updateStreamStats(streamName, 'stream_error');

      // Attempt to resume from last known position
      if (error.code === 40585 || error.code === 136) { // Resume token expired or invalid
        console.log(`Attempting to resume change stream ${streamName}...`);
        await this.resumeChangeStream(streamName);
      } else {
        this.emit('stream_error', { streamName, error });
      }
    });

    changeStream.on('close', () => {
      console.log(`Change stream ${streamName} closed`);
      this.emit('stream_closed', { streamName });
    });
  }

  async queueChangeEvent(changeEvent, processor) {
    // Prevent queue overflow
    if (this.eventQueue.length >= this.maxQueueSize) {
      console.warn('Event queue at capacity, dropping oldest events');
      this.eventQueue.splice(0, Math.floor(this.maxQueueSize * 0.1)); // Drop 10% of oldest
    }

    // Add event to processing queue with priority ordering
    this.eventQueue.push({ changeEvent, processor });
    this.eventQueue.sort((a, b) => 
      b.changeEvent.processingMetadata.priority - a.changeEvent.processingMetadata.priority
    );

    // Start processing if not already running
    if (!this.isProcessing) {
      setImmediate(() => this.processEventQueue());
    }
  }

  async processEventQueue() {
    if (this.isProcessing || this.eventQueue.length === 0) return;

    this.isProcessing = true;

    try {
      while (this.eventQueue.length > 0) {
        const { changeEvent, processor } = this.eventQueue.shift();

        try {
          const startTime = Date.now();
          await this.processChangeEvent(changeEvent, processor);
          const processingTime = Date.now() - startTime;

          // Update processing metrics
          this.updateProcessingMetrics(changeEvent.streamName, processingTime, true);

        } catch (error) {
          console.error('Event processing failed:', error);

          // Implement retry logic
          if (changeEvent.processingMetadata.retryCount < 3) {
            changeEvent.processingMetadata.retryCount++;
            changeEvent.processingMetadata.priority -= 1; // Lower priority for retries
            this.eventQueue.unshift({ changeEvent, processor });
          } else {
            console.error('Max retries reached for event:', changeEvent._id);
            this.emit('event_failed', { changeEvent, error });
          }

          this.updateProcessingMetrics(changeEvent.streamName, 0, false);
        }
      }
    } finally {
      this.isProcessing = false;
    }
  }

  async processChangeEvent(changeEvent, processor) {
    const { operationType, fullDocument, documentKey, updateDescription } = changeEvent;

    console.log(`Processing ${operationType} event for ${changeEvent.streamName}`);

    // Execute processor function with comprehensive context
    const processingContext = {
      operation: operationType,
      document: fullDocument,
      documentKey: documentKey,
      updateDescription: updateDescription,
      timestamp: changeEvent.clusterTime,
      metadata: changeEvent.processingMetadata,

      // Utility functions
      isInsert: () => operationType === 'insert',
      isUpdate: () => operationType === 'update',
      isDelete: () => operationType === 'delete',
      isReplace: () => operationType === 'replace',

      // Field change utilities
      hasFieldChanged: (fieldName) => {
        return updateDescription?.updatedFields?.hasOwnProperty(fieldName) ||
               updateDescription?.removedFields?.includes(fieldName);
      },

      getFieldChange: (fieldName) => {
        return updateDescription?.updatedFields?.[fieldName];
      },

      // Document utilities
      getDocumentId: () => documentKey._id,
      getFullDocument: () => fullDocument
    };

    // Execute the processor
    await processor(processingContext);
  }

  setupEventProcessors() {
    // Order lifecycle management processor
    this.eventProcessors.set('orderLifecycle', async (context) => {
      const { operation, document, hasFieldChanged } = context;

      switch (operation) {
        case 'insert':
          await this.handleOrderCreated(document);
          break;

        case 'update':
          if (hasFieldChanged('status')) {
            await this.handleOrderStatusChange(document, context.getFieldChange('status'));
          }
          if (hasFieldChanged('payment_status')) {
            await this.handlePaymentStatusChange(document, context.getFieldChange('payment_status'));
          }
          if (hasFieldChanged('shipping_status')) {
            await this.handleShippingStatusChange(document, context.getFieldChange('shipping_status'));
          }
          break;

        case 'delete':
          await this.handleOrderCancelled(context.getDocumentId());
          break;
      }
    });

    // Inventory management processor
    this.eventProcessors.set('inventorySync', async (context) => {
      const { operation, document, hasFieldChanged } = context;

      if (operation === 'insert' && document.items) {
        // New order - reserve inventory
        await this.reserveInventoryForOrder(document);
      } else if (operation === 'update' && hasFieldChanged('status')) {
        const newStatus = context.getFieldChange('status');

        if (newStatus === 'cancelled') {
          await this.releaseInventoryReservation(document);
        } else if (newStatus === 'shipped') {
          await this.confirmInventoryConsumption(document);
        }
      }
    });

    // Real-time analytics processor
    this.eventProcessors.set('realTimeAnalytics', async (context) => {
      const { operation, document, timestamp } = context;

      // Update real-time metrics
      const analyticsEvent = {
        eventType: `order_${operation}`,
        timestamp: timestamp,
        data: {
          orderId: context.getDocumentId(),
          customerId: document?.user_id,
          amount: document?.total_amount,
          region: document?.shipping_address?.region,
          products: document?.items?.map(item => item.product_id)
        }
      };

      await this.updateRealTimeMetrics(analyticsEvent);
    });

    // Customer engagement processor
    this.eventProcessors.set('customerEngagement', async (context) => {
      const { operation, document, hasFieldChanged } = context;

      if (operation === 'insert') {
        // New order - update customer profile
        await this.updateCustomerOrderHistory(document.user_id, document);

        // Trigger post-purchase engagement
        await this.triggerPostPurchaseEngagement(document);

      } else if (operation === 'update' && hasFieldChanged('status')) {
        const newStatus = context.getFieldChange('status');

        if (newStatus === 'delivered') {
          // Order delivered - trigger review request
          await this.triggerReviewRequest(document);
        }
      }
    });
  }

  async handleOrderCreated(orderDocument) {
    console.log(`Processing new order: ${orderDocument._id}`);

    // Coordinate microservices for order creation
    const coordinationTasks = [
      this.notifyPaymentService({
        action: 'process_payment',
        orderId: orderDocument._id,
        amount: orderDocument.total_amount,
        paymentMethod: orderDocument.payment_method
      }),

      this.notifyInventoryService({
        action: 'reserve_inventory',
        orderId: orderDocument._id,
        items: orderDocument.items
      }),

      this.notifyShippingService({
        action: 'calculate_shipping',
        orderId: orderDocument._id,
        shippingAddress: orderDocument.shipping_address,
        items: orderDocument.items
      }),

      this.notifyCustomerService({
        action: 'order_confirmation',
        orderId: orderDocument._id,
        customerId: orderDocument.user_id
      })
    ];

    // Execute coordination with error handling
    const results = await Promise.allSettled(coordinationTasks);

    // Check for coordination failures
    const failures = results.filter(result => result.status === 'rejected');
    if (failures.length > 0) {
      console.error(`Order coordination failures for ${orderDocument._id}:`, failures);

      // Trigger compensation workflow
      await this.triggerCompensationWorkflow(orderDocument._id, failures);
    }
  }

  async handleOrderStatusChange(orderDocument, newStatus) {
    console.log(`Order ${orderDocument._id} status changed to: ${newStatus}`);

    const statusHandlers = {
      'confirmed': async () => {
        await this.notifyFulfillmentService({
          action: 'prepare_order',
          orderId: orderDocument._id
        });
      },

      'shipped': async () => {
        await this.notifyCustomerService({
          action: 'shipping_notification',
          orderId: orderDocument._id,
          trackingNumber: orderDocument.tracking_number
        });

        // Update inventory
        await this.confirmInventoryConsumption(orderDocument);
      },

      'delivered': async () => {
        // Trigger post-delivery workflows
        await Promise.all([
          this.triggerReviewRequest(orderDocument),
          this.updateCustomerLoyaltyPoints(orderDocument),
          this.analyzeReorderProbability(orderDocument)
        ]);
      },

      'cancelled': async () => {
        // Execute cancellation compensation
        await this.executeOrderCancellation(orderDocument);
      }
    };

    const handler = statusHandlers[newStatus];
    if (handler) {
      await handler();
    }
  }

  async reserveInventoryForOrder(orderDocument) {
    console.log(`Reserving inventory for order: ${orderDocument._id}`);

    const inventoryOperations = orderDocument.items.map(item => ({
      updateOne: {
        filter: {
          product_id: item.product_id,
          available_quantity: { $gte: item.quantity }
        },
        update: {
          $inc: {
            available_quantity: -item.quantity,
            reserved_quantity: item.quantity
          },
          $push: {
            reservations: {
              order_id: orderDocument._id,
              quantity: item.quantity,
              reserved_at: new Date(),
              expires_at: new Date(Date.now() + 30 * 60 * 1000) // 30 minutes
            }
          }
        }
      }
    }));

    try {
      const result = await this.collections.inventory.bulkWrite(inventoryOperations);
      console.log(`Inventory reserved for ${result.modifiedCount} items`);

      // Check for insufficient inventory
      if (result.modifiedCount < orderDocument.items.length) {
        await this.handleInsufficientInventory(orderDocument, result);
      }

    } catch (error) {
      console.error(`Inventory reservation failed for order ${orderDocument._id}:`, error);
      throw error;
    }
  }

  async updateRealTimeMetrics(analyticsEvent) {
    console.log(`Updating real-time metrics for: ${analyticsEvent.eventType}`);

    const metricsUpdate = {
      $inc: {
        [`hourly_metrics.${new Date().getHours()}.${analyticsEvent.eventType}`]: 1
      },
      $push: {
        recent_events: {
          $each: [analyticsEvent],
          $slice: -1000 // Keep last 1000 events
        }
      },
      $set: {
        last_updated: new Date()
      }
    };

    // Update regional metrics
    if (analyticsEvent.data.region) {
      metricsUpdate.$inc[`regional_metrics.${analyticsEvent.data.region}.${analyticsEvent.eventType}`] = 1;
    }

    await this.collections.analytics.updateOne(
      { _id: 'real_time_metrics' },
      metricsUpdate,
      { upsert: true }
    );
  }

  async triggerPostPurchaseEngagement(orderDocument) {
    console.log(`Triggering post-purchase engagement for order: ${orderDocument._id}`);

    // Schedule engagement activities
    const engagementTasks = [
      {
        type: 'order_confirmation_email',
        scheduledFor: new Date(Date.now() + 5 * 60 * 1000), // 5 minutes
        recipient: orderDocument.user_id,
        data: { orderId: orderDocument._id }
      },
      {
        type: 'shipping_updates_subscription',
        scheduledFor: new Date(Date.now() + 60 * 60 * 1000), // 1 hour
        recipient: orderDocument.user_id,
        data: { orderId: orderDocument._id }
      },
      {
        type: 'product_recommendations',
        scheduledFor: new Date(Date.now() + 24 * 60 * 60 * 1000), // 24 hours
        recipient: orderDocument.user_id,
        data: { 
          orderId: orderDocument._id,
          purchasedProducts: orderDocument.items.map(item => item.product_id)
        }
      }
    ];

    await this.collections.engagement_queue.insertMany(engagementTasks);
  }

  // Microservice communication methods
  async notifyPaymentService(message) {
    // In production, this would use message queues (RabbitMQ, Apache Kafka, etc.)
    console.log('Notifying Payment Service:', message);

    // Simulate service call
    return new Promise((resolve) => {
      setTimeout(() => {
        console.log(`Payment service processed: ${message.action}`);
        resolve({ status: 'success', processedAt: new Date() });
      }, 100);
    });
  }

  async notifyInventoryService(message) {
    console.log('Notifying Inventory Service:', message);

    return new Promise((resolve) => {
      setTimeout(() => {
        console.log(`Inventory service processed: ${message.action}`);
        resolve({ status: 'success', processedAt: new Date() });
      }, 150);
    });
  }

  async notifyShippingService(message) {
    console.log('Notifying Shipping Service:', message);

    return new Promise((resolve) => {
      setTimeout(() => {
        console.log(`Shipping service processed: ${message.action}`);
        resolve({ status: 'success', processedAt: new Date() });
      }, 200);
    });
  }

  async notifyCustomerService(message) {
    console.log('Notifying Customer Service:', message);

    return new Promise((resolve) => {
      setTimeout(() => {
        console.log(`Customer service processed: ${message.action}`);
        resolve({ status: 'success', processedAt: new Date() });
      }, 75);
    });
  }

  // Utility methods
  calculateEventPriority(changeDoc) {
    // Priority scoring based on operation type and document characteristics
    const basePriority = {
      'insert': 10,
      'update': 5,
      'delete': 15,
      'replace': 8
    };

    let priority = basePriority[changeDoc.operationType] || 1;

    // Boost priority for high-value orders
    if (changeDoc.fullDocument?.total_amount > 1000) {
      priority += 5;
    }

    // Boost priority for status changes
    if (changeDoc.updateDescription?.updatedFields?.status) {
      priority += 3;
    }

    return priority;
  }

  generateCorrelationId(changeDoc) {
    return `${changeDoc.operationType}-${changeDoc.documentKey._id}-${Date.now()}`;
  }

  generateTraceId() {
    return require('crypto').randomUUID();
  }

  updateStreamStats(streamName, event) {
    const streamData = this.changeStreams.get(streamName);
    if (streamData) {
      streamData.stats.lastEventTime = new Date();

      switch (event) {
        case 'event_received':
          streamData.stats.eventsProcessed++;
          break;
        case 'error':
        case 'stream_error':
          streamData.stats.errors++;
          break;
      }
    }
  }

  updateProcessingMetrics(streamName, processingTime, success) {
    if (!this.processingStats.has(streamName)) {
      this.processingStats.set(streamName, {
        totalProcessed: 0,
        totalErrors: 0,
        totalProcessingTime: 0,
        avgProcessingTime: 0
      });
    }

    const stats = this.processingStats.get(streamName);

    if (success) {
      stats.totalProcessed++;
      stats.totalProcessingTime += processingTime;
      stats.avgProcessingTime = stats.totalProcessingTime / stats.totalProcessed;
    } else {
      stats.totalErrors++;
    }
  }

  getStreamStatus() {
    const status = {
      activeStreams: this.changeStreams.size,
      totalEventsProcessed: 0,
      totalErrors: 0,
      streams: {}
    };

    for (const [streamName, streamData] of this.changeStreams) {
      status.totalEventsProcessed += streamData.stats.eventsProcessed;
      status.totalErrors += streamData.stats.errors;

      status.streams[streamName] = {
        collection: streamData.collection,
        eventsProcessed: streamData.stats.eventsProcessed,
        errors: streamData.stats.errors,
        uptime: Date.now() - streamData.stats.startTime.getTime(),
        lastEventTime: streamData.stats.lastEventTime
      };
    }

    return status;
  }

  async resumeChangeStream(streamName) {
    const streamData = this.changeStreams.get(streamName);
    if (!streamData) return;

    console.log(`Resuming change stream: ${streamName}`);

    try {
      // Close current stream
      await streamData.stream.close();

      // Create new stream with resume token
      const resumeToken = this.resumeTokens.get(streamName);
      const config = {
        ...streamData.config,
        resumeToken: resumeToken
      };

      await this.createChangeStream(streamName, config);
      console.log(`Change stream ${streamName} resumed successfully`);

    } catch (error) {
      console.error(`Failed to resume change stream ${streamName}:`, error);
      this.emit('resume_failed', { streamName, error });
    }
  }

  async close() {
    console.log('Closing all change streams...');

    for (const [streamName, streamData] of this.changeStreams) {
      try {
        await streamData.stream.close();
        console.log(`Closed change stream: ${streamName}`);
      } catch (error) {
        console.error(`Error closing stream ${streamName}:`, error);
      }
    }

    this.changeStreams.clear();
    this.resumeTokens.clear();
    console.log('All change streams closed');
  }
}

// Example usage: Complete microservices coordination system
async function setupEcommerceEventProcessing() {
  console.log('Setting up comprehensive e-commerce event processing system...');

  const changeStreamManager = new MongoChangeStreamManager(db);

  // Configure change streams for different aspects of the system
  const streamConfigurations = {
    // Order lifecycle management
    orderEvents: {
      collection: 'orders',
      operationTypes: ['insert', 'update', 'delete'],
      processor: changeStreamManager.eventProcessors.get('orderLifecycle'),
      options: {
        fullDocument: 'updateLookup',
        fullDocumentBeforeChange: 'whenAvailable'
      }
    },

    // Inventory synchronization
    inventorySync: {
      collection: 'orders',
      operationTypes: ['insert', 'update'],
      documentFilter: {
        $or: [
          { operationType: 'insert' },
          { 'updateDescription.updatedFields.status': { $exists: true } }
        ]
      },
      processor: changeStreamManager.eventProcessors.get('inventorySync')
    },

    // Real-time analytics
    analyticsEvents: {
      collection: 'orders',
      processor: changeStreamManager.eventProcessors.get('realTimeAnalytics'),
      options: {
        fullDocument: 'updateLookup'
      }
    },

    // Customer engagement
    customerEngagement: {
      collection: 'orders',
      operationTypes: ['insert', 'update'],
      processor: changeStreamManager.eventProcessors.get('customerEngagement'),
      options: {
        fullDocument: 'updateLookup'
      }
    },

    // User profile updates
    userProfileSync: {
      collection: 'users',
      operationTypes: ['update'],
      documentFilter: {
        $or: [
          { 'updateDescription.updatedFields.email': { $exists: true } },
          { 'updateDescription.updatedFields.profile': { $exists: true } },
          { 'updateDescription.updatedFields.preferences': { $exists: true } }
        ]
      },
      processor: async (context) => {
        console.log(`User profile updated: ${context.getDocumentId()}`);
        // Sync profile changes across microservices
        await changeStreamManager.notifyCustomerService({
          action: 'profile_sync',
          userId: context.getDocumentId(),
          changes: context.updateDescription.updatedFields
        });
      }
    }
  };

  // Initialize all change streams
  await changeStreamManager.initializeChangeStreams(streamConfigurations);

  // Monitor system health
  setInterval(() => {
    const status = changeStreamManager.getStreamStatus();
    console.log('Change Stream System Status:', JSON.stringify(status, null, 2));
  }, 30000); // Every 30 seconds

  return changeStreamManager;
}

// Benefits of MongoDB Change Streams:
// - Real-time, ordered change events with guaranteed delivery
// - Resume capability from any point using resume tokens
// - Rich filtering and transformation capabilities through aggregation pipelines
// - Automatic failover and reconnection handling
// - Document-level granularity with full document context
// - Cluster-wide change tracking across replica sets and sharded clusters
// - Built-in support for microservices coordination patterns
// - Efficient resource utilization without polling overhead
// - Comprehensive event metadata and processing context
// - SQL-compatible change processing through QueryLeaf integration

module.exports = {
  MongoChangeStreamManager,
  setupEcommerceEventProcessing
};

Understanding MongoDB Change Streams Architecture

Advanced Event-Driven Patterns and Microservices Coordination

Implement sophisticated change stream patterns for production-scale event processing:

// Production-grade change stream patterns for enterprise applications
class EnterpriseChangeStreamManager extends MongoChangeStreamManager {
  constructor(db, enterpriseConfig) {
    super(db);

    this.enterpriseConfig = {
      messageQueue: enterpriseConfig.messageQueue, // RabbitMQ, Kafka, etc.
      distributedTracing: enterpriseConfig.distributedTracing,
      metricsCollector: enterpriseConfig.metricsCollector,
      errorReporting: enterpriseConfig.errorReporting,
      circuitBreaker: enterpriseConfig.circuitBreaker
    };

    this.setupEnterpriseIntegrations();
  }

  async setupMultiTenantChangeStreams(tenantConfigurations) {
    console.log('Setting up multi-tenant change stream architecture...');

    const tenantStreams = new Map();

    for (const [tenantId, config] of Object.entries(tenantConfigurations)) {
      const tenantStreamConfig = {
        ...config,
        pipeline: [
          { $match: { 'fullDocument.tenant_id': tenantId } },
          ...(config.pipeline || [])
        ],
        processor: this.createTenantProcessor(tenantId, config.processor)
      };

      const streamName = `tenant_${tenantId}_${config.name}`;
      tenantStreams.set(streamName, tenantStreamConfig);
    }

    await this.initializeChangeStreams(Object.fromEntries(tenantStreams));
    return tenantStreams;
  }

  createTenantProcessor(tenantId, baseProcessor) {
    return async (context) => {
      // Add tenant context
      const tenantContext = {
        ...context,
        tenantId: tenantId,
        tenantConfig: await this.getTenantConfig(tenantId)
      };

      // Execute with tenant-specific error handling
      try {
        await baseProcessor(tenantContext);
      } catch (error) {
        await this.handleTenantError(tenantId, error, context);
      }
    };
  }

  async implementEventSourcingPattern(aggregateConfigs) {
    console.log('Implementing event sourcing pattern with change streams...');

    const eventSourcingStreams = {};

    for (const [aggregateName, config] of Object.entries(aggregateConfigs)) {
      eventSourcingStreams[`${aggregateName}_events`] = {
        collection: config.collection,
        operationTypes: ['insert', 'update', 'delete'],
        processor: async (context) => {
          const event = await this.buildDomainEvent(aggregateName, context);

          // Store in event store
          await this.appendToEventStore(event);

          // Update projections
          await this.updateProjections(aggregateName, event);

          // Publish to event bus
          await this.publishDomainEvent(event);
        },
        options: {
          fullDocument: 'updateLookup',
          fullDocumentBeforeChange: 'whenAvailable'
        }
      };
    }

    return eventSourcingStreams;
  }

  async buildDomainEvent(aggregateName, context) {
    const { operation, document, documentKey, updateDescription, timestamp } = context;

    return {
      eventId: require('crypto').randomUUID(),
      eventType: `${aggregateName}.${operation}`,
      aggregateId: documentKey._id,
      aggregateType: aggregateName,
      eventData: {
        before: context.fullDocumentBeforeChange,
        after: document,
        changes: updateDescription
      },
      eventMetadata: {
        timestamp: timestamp,
        causationId: context.metadata.correlationId,
        correlationId: context.metadata.traceId,
        userId: document?.user_id || 'system',
        version: await this.getAggregateVersion(aggregateName, documentKey._id)
      }
    };
  }

  async setupCQRSIntegration(cqrsConfig) {
    console.log('Setting up CQRS integration with change streams...');

    const cqrsStreams = {};

    // Command side - write model changes
    for (const [commandModel, config] of Object.entries(cqrsConfig.commandModels)) {
      cqrsStreams[`${commandModel}_commands`] = {
        collection: config.collection,
        processor: async (context) => {
          // Update read models
          await this.updateReadModels(commandModel, context);

          // Invalidate caches
          await this.invalidateReadModelCaches(commandModel, context.getDocumentId());

          // Publish integration events
          await this.publishIntegrationEvents(commandModel, context);
        }
      };
    }

    return cqrsStreams;
  }

  async setupDistributedSagaCoordination(sagaConfigurations) {
    console.log('Setting up distributed saga coordination...');

    const sagaStreams = {};

    for (const [sagaName, config] of Object.entries(sagaConfigurations)) {
      sagaStreams[`${sagaName}_saga`] = {
        collection: config.triggerCollection,
        documentFilter: config.triggerFilter,
        processor: async (context) => {
          const sagaInstance = await this.createSagaInstance(sagaName, context);
          await this.executeSagaStep(sagaInstance, context);
        }
      };
    }

    return sagaStreams;
  }

  async createSagaInstance(sagaName, triggerContext) {
    const sagaInstance = {
      sagaId: require('crypto').randomUUID(),
      sagaType: sagaName,
      status: 'started',
      currentStep: 0,
      triggerEvent: {
        aggregateId: triggerContext.getDocumentId(),
        eventData: triggerContext.document
      },
      compensation: [],
      createdAt: new Date()
    };

    await this.db.collection('saga_instances').insertOne(sagaInstance);
    return sagaInstance;
  }

  async setupAdvancedMonitoring() {
    console.log('Setting up advanced change stream monitoring...');

    const monitoringConfig = {
      healthChecks: {
        streamLiveness: true,
        processingLatency: true,
        errorRates: true,
        throughput: true
      },

      alerting: {
        streamFailure: { threshold: 1, window: '1m' },
        highLatency: { threshold: 5000, window: '5m' },
        errorRate: { threshold: 0.05, window: '10m' },
        lowThroughput: { threshold: 10, window: '5m' }
      },

      metrics: {
        prometheus: true,
        cloudwatch: false,
        datadog: false
      }
    };

    return this.initializeMonitoring(monitoringConfig);
  }
}

SQL-Style Change Stream Processing with QueryLeaf

QueryLeaf provides familiar SQL syntax for MongoDB change stream configuration and event processing:

-- QueryLeaf change stream management with SQL-familiar patterns

-- Create comprehensive change stream for order processing
CREATE CHANGE STREAM order_processing_stream ON orders
WATCH FOR (INSERT, UPDATE, DELETE)
WHERE 
  status IN ('pending', 'confirmed', 'shipped', 'delivered', 'cancelled')
  AND total_amount > 0
WITH OPTIONS (
  full_document = 'updateLookup',
  full_document_before_change = 'whenAvailable',
  batch_size = 100,
  max_await_time = 1000,
  start_at_operation_time = CURRENT_TIMESTAMP - INTERVAL '1 hour'
)
PROCESS WITH order_lifecycle_handler;

-- Advanced change stream with complex filtering and transformation
CREATE CHANGE STREAM high_value_order_stream ON orders
WATCH FOR (INSERT, UPDATE)
WHERE 
  (operationType = 'insert' AND fullDocument.total_amount >= 1000)
  OR (operationType = 'update' AND updateDescription.updatedFields.status EXISTS)
WITH PIPELINE (
  -- Stage 1: Additional filtering
  {
    $match: {
      $or: [
        { 
          operationType: 'insert',
          'fullDocument.customer_tier': { $in: ['gold', 'platinum'] }
        },
        {
          operationType: 'update',
          'fullDocument.total_amount': { $gte: 1000 }
        }
      ]
    }
  },

  -- Stage 2: Enrich with customer data
  {
    $lookup: {
      from: 'users',
      localField: 'fullDocument.user_id',
      foreignField: '_id',
      as: 'customer_data',
      pipeline: [
        {
          $project: {
            email: 1,
            customer_tier: 1,
            lifetime_value: 1,
            preferences: 1
          }
        }
      ]
    }
  },

  -- Stage 3: Calculate priority score
  {
    $addFields: {
      processing_priority: {
        $switch: {
          branches: [
            { 
              case: { $gte: ['$fullDocument.total_amount', 5000] }, 
              then: 'critical' 
            },
            { 
              case: { $gte: ['$fullDocument.total_amount', 2000] }, 
              then: 'high' 
            },
            { 
              case: { $gte: ['$fullDocument.total_amount', 1000] }, 
              then: 'medium' 
            }
          ],
          default: 'normal'
        }
      }
    }
  }
)
PROCESS WITH vip_order_processor;

-- Real-time analytics change stream with aggregation
CREATE MATERIALIZED CHANGE STREAM real_time_order_metrics ON orders
WATCH FOR (INSERT, UPDATE, DELETE)
WITH AGGREGATION (
  -- Group by time buckets for real-time metrics
  GROUP BY (
    DATE_TRUNC('minute', clusterTime, 5) as time_bucket,
    fullDocument.region as region
  )
  SELECT 
    time_bucket,
    region,

    -- Real-time KPIs
    COUNT(*) FILTER (WHERE operationType = 'insert') as new_orders,
    COUNT(*) FILTER (WHERE operationType = 'update' AND updateDescription.updatedFields.status = 'shipped') as orders_shipped,
    COUNT(*) FILTER (WHERE operationType = 'delete') as orders_cancelled,

    -- Revenue metrics
    SUM(fullDocument.total_amount) FILTER (WHERE operationType = 'insert') as new_revenue,
    AVG(fullDocument.total_amount) FILTER (WHERE operationType = 'insert') as avg_order_value,

    -- Customer metrics
    COUNT(DISTINCT fullDocument.user_id) as unique_customers,

    -- Performance indicators
    COUNT(*) / 5.0 as events_per_minute,
    CURRENT_TIMESTAMP as computed_at

  WINDOW (
    ORDER BY time_bucket
    ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
  )
  ADD (
    AVG(new_orders) OVER window as rolling_avg_orders,
    AVG(new_revenue) OVER window as rolling_avg_revenue,

    -- Trend detection
    CASE 
      WHEN new_orders > rolling_avg_orders * 1.2 THEN 'surge'
      WHEN new_orders < rolling_avg_orders * 0.8 THEN 'decline'
      ELSE 'stable'
    END as order_trend
  )
)
REFRESH EVERY 5 SECONDS
PROCESS WITH analytics_event_handler;

-- Customer segmentation change stream with RFM analysis
CREATE CHANGE STREAM customer_behavior_analysis ON orders
WATCH FOR (INSERT, UPDATE)
WHERE fullDocument.status IN ('completed', 'delivered')
WITH CUSTOMER_SEGMENTATION (
  -- Calculate RFM metrics from change events
  SELECT 
    fullDocument.user_id as customer_id,

    -- Recency calculation
    EXTRACT(DAYS FROM CURRENT_TIMESTAMP - MAX(fullDocument.order_date)) as recency_days,

    -- Frequency calculation  
    COUNT(*) FILTER (WHERE operationType = 'insert') as order_frequency,

    -- Monetary calculation
    SUM(fullDocument.total_amount) as total_monetary_value,
    AVG(fullDocument.total_amount) as avg_order_value,

    -- Advanced behavior metrics
    COUNT(DISTINCT fullDocument.product_categories) as category_diversity,
    AVG(ARRAY_LENGTH(fullDocument.items)) as avg_items_per_order,

    -- Engagement patterns
    COUNT(*) FILTER (WHERE EXTRACT(DOW FROM fullDocument.order_date) IN (0, 6)) / COUNT(*)::float as weekend_preference,

    -- RFM scoring
    NTILE(5) OVER (ORDER BY recency_days DESC) as recency_score,
    NTILE(5) OVER (ORDER BY order_frequency ASC) as frequency_score,  
    NTILE(5) OVER (ORDER BY total_monetary_value ASC) as monetary_score,

    -- Customer segment classification
    CASE 
      WHEN NTILE(5) OVER (ORDER BY recency_days DESC) >= 4 
           AND NTILE(5) OVER (ORDER BY order_frequency ASC) >= 4 
           AND NTILE(5) OVER (ORDER BY total_monetary_value ASC) >= 4 THEN 'champions'
      WHEN NTILE(5) OVER (ORDER BY recency_days DESC) >= 3 
           AND NTILE(5) OVER (ORDER BY order_frequency ASC) >= 3 
           AND NTILE(5) OVER (ORDER BY total_monetary_value ASC) >= 3 THEN 'loyal_customers'
      WHEN NTILE(5) OVER (ORDER BY recency_days DESC) >= 4 
           AND NTILE(5) OVER (ORDER BY order_frequency ASC) <= 2 THEN 'potential_loyalists'
      WHEN NTILE(5) OVER (ORDER BY recency_days DESC) >= 4 
           AND NTILE(5) OVER (ORDER BY order_frequency ASC) <= 1 THEN 'new_customers'
      WHEN NTILE(5) OVER (ORDER BY recency_days DESC) <= 2 
           AND NTILE(5) OVER (ORDER BY order_frequency ASC) >= 3 THEN 'at_risk'
      ELSE 'needs_attention'
    END as customer_segment,

    -- Predictive metrics
    total_monetary_value / GREATEST(recency_days / 30.0, 1) * order_frequency as predicted_clv,

    CURRENT_TIMESTAMP as analyzed_at

  GROUP BY fullDocument.user_id
  WINDOW customer_analysis AS (
    PARTITION BY fullDocument.user_id
    ORDER BY fullDocument.order_date
    RANGE BETWEEN INTERVAL '365 days' PRECEDING AND CURRENT ROW
  )
)
PROCESS WITH customer_segmentation_handler;

-- Inventory synchronization change stream
CREATE CHANGE STREAM inventory_sync_stream ON orders  
WATCH FOR (INSERT, UPDATE, DELETE)
WHERE 
  operationType = 'insert' 
  OR (operationType = 'update' AND updateDescription.updatedFields.status EXISTS)
  OR operationType = 'delete'
WITH EVENT_PROCESSING (
  CASE operationType
    WHEN 'insert' THEN 
      CALL reserve_inventory(fullDocument.items, fullDocument._id)
    WHEN 'update' THEN
      CASE updateDescription.updatedFields.status
        WHEN 'cancelled' THEN 
          CALL release_inventory_reservation(fullDocument._id)
        WHEN 'shipped' THEN 
          CALL confirm_inventory_consumption(fullDocument._id)
        WHEN 'returned' THEN 
          CALL restore_inventory(fullDocument.items, fullDocument._id)
      END
    WHEN 'delete' THEN
      CALL cleanup_inventory_reservations(documentKey._id)
  END
)
WITH OPTIONS (
  retry_policy = {
    max_attempts: 3,
    backoff_strategy: 'exponential',
    base_delay: '1 second'
  },
  dead_letter_queue = 'inventory_sync_dlq',
  processing_timeout = '30 seconds'
)
PROCESS WITH inventory_coordination_handler;

-- Microservices event coordination with saga pattern
CREATE DISTRIBUTED SAGA order_fulfillment_saga 
TRIGGERED BY orders.insert
WHERE fullDocument.status = 'pending' AND fullDocument.total_amount > 0
WITH STEPS (
  -- Step 1: Payment processing
  {
    service: 'payment-service',
    action: 'process_payment',
    input: {
      order_id: NEW.documentKey._id,
      amount: NEW.fullDocument.total_amount,
      payment_method: NEW.fullDocument.payment_method
    },
    compensation: {
      service: 'payment-service', 
      action: 'refund_payment',
      input: { payment_id: '${payment_result.payment_id}' }
    },
    timeout: '30 seconds'
  },

  -- Step 2: Inventory reservation
  {
    service: 'inventory-service',
    action: 'reserve_products',
    input: {
      order_id: NEW.documentKey._id,
      items: NEW.fullDocument.items
    },
    compensation: {
      service: 'inventory-service',
      action: 'release_reservation', 
      input: { reservation_id: '${inventory_result.reservation_id}' }
    },
    timeout: '15 seconds'
  },

  -- Step 3: Shipping calculation
  {
    service: 'shipping-service',
    action: 'calculate_shipping',
    input: {
      order_id: NEW.documentKey._id,
      shipping_address: NEW.fullDocument.shipping_address,
      items: NEW.fullDocument.items
    },
    compensation: {
      service: 'shipping-service',
      action: 'cancel_shipping',
      input: { shipping_id: '${shipping_result.shipping_id}' }
    },
    timeout: '10 seconds'
  },

  -- Step 4: Order confirmation
  {
    service: 'notification-service',
    action: 'send_confirmation',
    input: {
      order_id: NEW.documentKey._id,
      customer_email: NEW.fullDocument.customer_email,
      order_details: NEW.fullDocument
    },
    timeout: '5 seconds'
  }
)
WITH SAGA_OPTIONS (
  max_retry_attempts = 3,
  compensation_timeout = '60 seconds',
  saga_timeout = '5 minutes'
);

-- Event sourcing pattern with change streams
CREATE EVENT STORE order_events
FROM CHANGE STREAM orders.*
WITH EVENT_MAPPING (
  event_type = CONCAT('Order.', TITLE_CASE(operationType)),
  aggregate_id = documentKey._id,
  aggregate_type = 'Order',
  event_data = {
    before: fullDocumentBeforeChange,
    after: fullDocument,
    changes: updateDescription
  },
  event_metadata = {
    timestamp: clusterTime,
    causation_id: correlation_id,
    correlation_id: trace_id,
    user_id: COALESCE(fullDocument.user_id, 'system'),
    version: aggregate_version + 1
  }
)
WITH PROJECTIONS (
  -- Order summary projection
  order_summary = {
    aggregate_id: aggregate_id,
    current_status: event_data.after.status,
    total_amount: event_data.after.total_amount,
    created_at: event_data.after.created_at,
    last_updated: event_metadata.timestamp,
    version: event_metadata.version
  },

  -- Customer order history projection  
  customer_orders = {
    customer_id: event_data.after.user_id,
    order_id: aggregate_id,
    order_amount: event_data.after.total_amount,
    order_date: event_data.after.created_at,
    status: event_data.after.status
  }
);

-- Advanced monitoring and alerting for change streams
CREATE CHANGE STREAM MONITOR comprehensive_monitoring
WITH METRICS (
  -- Stream health metrics
  stream_uptime,
  events_processed_per_second,
  processing_latency_p95,
  error_rate,
  resume_token_age,

  -- Business metrics
  high_value_orders_per_minute,
  average_processing_time,
  failed_event_count,

  -- System resource metrics
  memory_usage,
  cpu_utilization,
  network_throughput
)
WITH ALERTS (
  -- Critical alerts
  stream_disconnected = {
    condition: stream_uptime = 0,
    severity: 'critical',
    notification: ['pager', 'slack:#ops-critical']
  },

  high_error_rate = {
    condition: error_rate > 0.05 FOR 5 MINUTES,
    severity: 'high', 
    notification: ['email:ops-team@company.com', 'slack:#database-alerts']
  },

  processing_latency = {
    condition: processing_latency_p95 > 5000 FOR 3 MINUTES,
    severity: 'medium',
    notification: ['slack:#performance-alerts']
  },

  -- Business alerts
  revenue_drop = {
    condition: high_value_orders_per_minute < 10 FOR 10 MINUTES DURING BUSINESS_HOURS,
    severity: 'high',
    notification: ['email:business-ops@company.com']
  }
);

-- QueryLeaf provides comprehensive change stream capabilities:
-- 1. SQL-familiar syntax for MongoDB change stream creation and management
-- 2. Advanced filtering and transformation through aggregation pipelines
-- 3. Real-time analytics and materialized views from change events
-- 4. Customer segmentation and behavioral analysis integration
-- 5. Microservices coordination with distributed saga patterns
-- 6. Event sourcing and CQRS implementation support
-- 7. Comprehensive monitoring and alerting for production environments
-- 8. Inventory synchronization and business process automation
-- 9. Multi-tenant and enterprise-grade change stream management
-- 10. Integration with external message queues and event systems

Best Practices for Change Stream Implementation

Event-Driven Architecture Design

Essential principles for building robust change stream-based systems:

  1. Resume Token Management: Always store resume tokens for fault tolerance and recovery (see the combined sketch after this list)
  2. Event Processing Idempotency: Design event processors to handle duplicate events gracefully
  3. Error Handling Strategy: Implement comprehensive error handling with retry policies and dead letter queues
  4. Filtering Optimization: Use early filtering in change stream pipelines to reduce processing overhead
  5. Resource Management: Monitor and manage memory usage for long-running change streams
  6. Monitoring Integration: Implement comprehensive monitoring for stream health and processing metrics
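
The first two principles above (and the early-filtering guidance in point 4) can be combined in a small amount of driver code. The sketch below is a minimal illustration using the native Node.js driver under assumed names: the stream_checkpoints and processed_events collections and the handleChange() function are hypothetical placeholders, not part of the MongoChangeStreamManager class shown earlier:

// Minimal sketch: resumable, idempotent change stream consumption with early filtering
const { MongoClient } = require('mongodb');

async function watchOrdersWithCheckpointing(uri) {
  const client = new MongoClient(uri);
  await client.connect();
  const db = client.db('ecommerce');

  // Load the last persisted resume token, if any
  const checkpoint = await db.collection('stream_checkpoints').findOne({ _id: 'orders_stream' });

  // Early filtering in the pipeline reduces the events the processor has to handle
  const pipeline = [{ $match: { operationType: { $in: ['insert', 'update'] } } }];

  const stream = db.collection('orders').watch(pipeline, {
    fullDocument: 'updateLookup',
    ...(checkpoint?.resumeToken && { resumeAfter: checkpoint.resumeToken })
  });

  for await (const change of stream) {
    // Idempotency: the resume token uniquely identifies the event, so record it once processed
    const eventKey = JSON.stringify(change._id);
    const alreadyProcessed = await db.collection('processed_events').findOne({ _id: eventKey });

    if (!alreadyProcessed) {
      await handleChange(change); // application-specific processing
      await db.collection('processed_events').insertOne({ _id: eventKey, processedAt: new Date() });
    }

    // Persist the resume token after each event so a restart can pick up where it left off
    await db.collection('stream_checkpoints').updateOne(
      { _id: 'orders_stream' },
      { $set: { resumeToken: change._id, updatedAt: new Date() } },
      { upsert: true }
    );
  }
}

async function handleChange(change) {
  console.log(`Processing ${change.operationType} for ${JSON.stringify(change.documentKey)}`);
}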

Production Deployment Strategies

Optimize change stream deployments for production-scale environments:

  1. High Availability: Deploy change stream processors across multiple instances with proper load balancing
  2. Scaling Patterns: Implement horizontal scaling strategies for high-throughput scenarios
  3. Performance Monitoring: Track processing latency, throughput, and error rates continuously (see the lifecycle sketch after this list)
  4. Security Considerations: Ensure proper authentication and authorization for change stream access
  5. Backup and Recovery: Implement comprehensive backup strategies for resume tokens and processing state
  6. Integration Testing: Thoroughly test change stream integrations with downstream systems
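
A minimal sketch of points 1 and 3, assuming the setupEcommerceEventProcessing() helper and database connection defined earlier are in scope; the 15-second interval and the specific shutdown signals are illustrative choices rather than requirements:

// Minimal sketch: worker lifecycle around the change stream manager defined earlier
async function runChangeStreamWorker() {
  const manager = await setupEcommerceEventProcessing();

  // Periodic health reporting in addition to the built-in status logging
  const healthTimer = setInterval(() => {
    const status = manager.getStreamStatus();
    if (status.totalErrors > 0) {
      console.warn(`Change stream errors detected: ${status.totalErrors}`);
    }
  }, 15000);

  // Graceful shutdown: close streams so another instance can take over and resume from stored tokens
  const shutdown = async (signal) => {
    console.log(`Received ${signal}, closing change streams...`);
    clearInterval(healthTimer);
    await manager.close();
    process.exit(0);
  };

  process.on('SIGTERM', () => shutdown('SIGTERM'));
  process.on('SIGINT', () => shutdown('SIGINT'));
}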

Conclusion

MongoDB Change Streams provide a powerful foundation for building sophisticated event-driven architectures that enable real-time data processing, microservices coordination, and reactive application patterns. The ordered, resumable stream of change events eliminates the complexity and limitations of traditional change detection approaches while providing comprehensive filtering, transformation, and integration capabilities.

Key MongoDB Change Streams benefits include:

  • Real-time Processing: Immediate notification of data changes without polling overhead
  • Fault Tolerance: Resume capability from any point using resume tokens with guaranteed delivery
  • Rich Context: Complete document context with before/after states for comprehensive processing
  • Scalable Architecture: Horizontal scaling support for high-throughput event processing scenarios
  • Microservices Integration: Native support for distributed system coordination and communication patterns
  • Flexible Filtering: Advanced aggregation pipeline integration for sophisticated event filtering and transformation

Whether you're building real-time analytics platforms, microservices architectures, event sourcing systems, or reactive applications, MongoDB Change Streams with QueryLeaf's familiar SQL interface provide the foundation for modern event-driven development.

QueryLeaf Integration: QueryLeaf automatically manages MongoDB change stream operations while providing SQL-familiar syntax for event processing, microservices coordination, and real-time analytics. Advanced change stream patterns, saga orchestration, and event sourcing capabilities are seamlessly accessible through familiar SQL constructs, making sophisticated event-driven architectures both powerful and approachable for SQL-oriented development teams.

The combination of MongoDB's robust change stream capabilities with SQL-style operations makes it an ideal platform for modern applications requiring real-time responsiveness and distributed system coordination, ensuring your event-driven architectures can scale efficiently while maintaining consistency and reliability across complex distributed topologies.

MongoDB Indexing Strategies and Compound Indexes: Advanced Performance Optimization for Scalable Database Operations

Database performance at scale depends heavily on effective indexing strategies that can efficiently support diverse query patterns while minimizing storage overhead and maintenance costs. Poor indexing decisions lead to slow query performance, excessive resource consumption, and degraded user experience that becomes increasingly problematic as data volumes and application complexity grow.

MongoDB's sophisticated indexing system provides comprehensive support for simple and compound indexes, partial indexes, text search indexes, and specialized data type indexes that enable developers to optimize query performance for complex application requirements. Unlike traditional database systems with rigid indexing constraints, MongoDB's flexible indexing architecture supports dynamic schema requirements while providing powerful optimization capabilities through compound indexes, index intersection, and advanced filtering strategies.

The Traditional Database Indexing Limitations

Conventional database indexing approaches often struggle with complex query patterns and multi-dimensional data access requirements:

-- Traditional PostgreSQL indexing with limited flexibility and optimization challenges

-- Basic single-column indexes with poor compound query support
CREATE INDEX idx_users_email ON users (email);
CREATE INDEX idx_users_status ON users (status);
CREATE INDEX idx_users_created_at ON users (created_at);
CREATE INDEX idx_users_country ON users (country);

-- Simple compound index with fixed column order limitations
CREATE INDEX idx_users_status_country ON users (status, country);

-- Complex query requiring multiple index scans and poor optimization
SELECT 
  u.user_id,
  u.email,
  u.first_name,
  u.last_name,
  u.status,
  u.country,
  u.created_at,
  u.last_login_at,
  COUNT(o.order_id) as order_count,
  SUM(o.total_amount) as total_spent,
  MAX(o.order_date) as last_order_date
FROM users u
LEFT JOIN orders o ON u.user_id = o.user_id
WHERE u.status IN ('active', 'premium', 'trial')
  AND u.country IN ('US', 'CA', 'UK', 'AU', 'DE', 'FR')
  AND u.created_at >= CURRENT_DATE - INTERVAL '2 years'
  AND u.last_login_at >= CURRENT_DATE - INTERVAL '30 days'
  AND (u.email LIKE '%@gmail.com' OR u.email LIKE '%@hotmail.com')
  AND u.subscription_tier IS NOT NULL
GROUP BY u.user_id, u.email, u.first_name, u.last_name, u.status, u.country, u.created_at, u.last_login_at
HAVING COUNT(o.order_id) > 0
ORDER BY total_spent DESC, last_order_date DESC
LIMIT 100;

-- PostgreSQL EXPLAIN showing inefficient index usage:
-- 
-- Limit  (cost=45234.67..45234.92 rows=100 width=128) (actual time=1247.123..1247.189 rows=100 loops=1)
--   ->  Sort  (cost=45234.67..45789.23 rows=221824 width=128) (actual time=1247.121..1247.156 rows=100 loops=1)
--         Sort Key: (sum(o.total_amount)) DESC, (max(o.order_date)) DESC
--         Sort Method: top-N heapsort  Memory: 67kB
--         ->  HashAggregate  (cost=38234.56..40456.80 rows=221824 width=128) (actual time=1156.789..1201.234 rows=12789 loops=1)
--               Group Key: u.user_id, u.email, u.first_name, u.last_name, u.status, u.country, u.created_at, u.last_login_at
--               ->  Hash Left Join  (cost=12345.67..32890.45 rows=221824 width=96) (actual time=89.456..567.123 rows=87645 loops=1)
--                     Hash Cond: (u.user_id = o.user_id)
--                     ->  Bitmap Heap Scan on users u  (cost=3456.78..8901.23 rows=45678 width=88) (actual time=34.567..123.456 rows=23456 loops=1)
--                           Recheck Cond: ((status = ANY ('{active,premium,trial}'::text[])) AND 
--                                         (country = ANY ('{US,CA,UK,AU,DE,FR}'::text[])) AND 
--                                         (created_at >= (CURRENT_DATE - '2 years'::interval)) AND 
--                                         (last_login_at >= (CURRENT_DATE - '30 days'::interval)))
--                           Filter: ((subscription_tier IS NOT NULL) AND 
--                                   ((email ~~ '%@gmail.com'::text) OR (email ~~ '%@hotmail.com'::text)))
--                           Rows Removed by Filter: 12789
--                           Heap Blocks: exact=1234 lossy=234
--                           ->  BitmapOr  (cost=3456.78..3456.78 rows=45678 width=0) (actual time=33.890..33.891 rows=0 loops=1)
--                                 ->  Bitmap Index Scan on idx_users_status_country  (cost=0.00..1234.56 rows=15678 width=0) (actual time=12.345..12.345 rows=18901 loops=1)
--                                       Index Cond: ((status = ANY ('{active,premium,trial}'::text[])) AND 
--                                                   (country = ANY ('{US,CA,UK,AU,DE,FR}'::text[])))
--                                 ->  Bitmap Index Scan on idx_users_created_at  (cost=0.00..1890.23 rows=25678 width=0) (actual time=18.234..18.234 rows=34567 loops=1)
--                                       Index Cond: (created_at >= (CURRENT_DATE - '2 years'::interval))
--                                 ->  Bitmap Index Scan on idx_users_last_login  (cost=0.00..331.99 rows=4322 width=0) (actual time=3.311..3.311 rows=8765 loops=1)
--                                       Index Cond: (last_login_at >= (CURRENT_DATE - '30 days'::interval))
--                     ->  Hash  (cost=7890.45..7890.45 rows=234567 width=24) (actual time=54.889..54.889 rows=198765 loops=1)
--                           Buckets: 262144  Batches: 1  Memory Usage: 11234kB
--                           ->  Seq Scan on orders o  (cost=0.00..7890.45 rows=234567 width=24) (actual time=0.234..28.901 rows=198765 loops=1)
-- Planning Time: 4.567 ms
-- Execution Time: 1247.567 ms

-- Problems with traditional PostgreSQL indexing:
-- 1. Multiple bitmap index scans required due to lack of comprehensive compound index
-- 2. Expensive BitmapOr operations combining multiple index results
-- 3. Large number of rows removed by filter conditions not supported by indexes
-- 4. Complex compound indexes difficult to design for multiple query patterns
-- 5. Index bloat and maintenance overhead with many single-column indexes
-- 6. Poor support for partial indexes and conditional filtering
-- 7. Limited flexibility in query optimization and index selection
-- 8. Difficulty optimizing for mixed equality/range/pattern matching conditions

-- Attempt to create better compound index
CREATE INDEX idx_users_comprehensive ON users (
  status, country, created_at, last_login_at, subscription_tier, email
);

-- Problems with large compound indexes:
-- 1. Index becomes very large and expensive to maintain
-- 2. Only efficient for queries that follow exact prefix patterns
-- 3. Wasted space for queries that don't use all index columns
-- 4. Update performance degradation due to large index maintenance
-- 5. Limited effectiveness for partial field matching (email patterns)
-- 6. Poor selectivity when early columns have low cardinality

-- MySQL limitations are even more restrictive
CREATE INDEX idx_users_limited ON users (status, country, created_at);
-- MySQL compound index limitations:
-- - Maximum 16 columns per compound index
-- - 767 to 3072 byte index key length limits depending on InnoDB row format
-- - Poor optimization for range queries on non-leading columns
-- - Limited partial index support
-- - Inefficient covering index implementation
-- - Basic query optimizer with limited compound index utilization

-- Alternative approach with covering indexes (PostgreSQL)
CREATE INDEX idx_users_covering ON users (status, country, created_at) 
INCLUDE (email, first_name, last_name, last_login_at, subscription_tier);

-- Covering index problems:
-- 1. Large storage overhead for included columns
-- 2. Still limited by leading column selectivity
-- 3. Expensive maintenance operations
-- 4. Complex index design decisions
-- 5. Poor performance for non-matching query patterns

MongoDB provides sophisticated compound indexing with flexible optimization:

// MongoDB Advanced Indexing Strategies - comprehensive compound index management and optimization
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017');
const db = client.db('advanced_ecommerce_platform');

// Advanced MongoDB indexing strategy and compound index optimization system
class MongoIndexOptimizer {
  constructor(db) {
    this.db = db;
    this.collections = {
      users: db.collection('users'),
      orders: db.collection('orders'),
      products: db.collection('products'),
      analytics: db.collection('analytics'),
      sessions: db.collection('sessions')
    };

    // Index optimization configuration
    this.indexingStrategies = {
      equalityFirst: true,        // ESR pattern - Equality, Sort, Range
      sortOptimization: true,     // Optimize for sort operations
      partialIndexes: true,       // Use partial indexes for selective filtering
      coveringIndexes: true,      // Create covering indexes where beneficial
      textSearchIndexes: true,    // Advanced text search capabilities
      geospatialIndexes: true,    // Location-based indexing
      ttlIndexes: true           // Time-based data expiration
    };

    this.performanceTargets = {
      maxQueryTimeMs: 100,
      minIndexSelectivity: 0.1,
      maxIndexSizeMB: 500,
      maxIndexesPerCollection: 10
    };

    this.indexAnalytics = new Map();
  }
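
  // Illustrative helper (an assumption, not part of the original optimizer): shows how the
  // ESR (Equality, Sort, Range) strategy configured above maps to a concrete compound index
  // for a typical query such as:
  //   find({ status: 'active', country: 'US', created_at: { $gte: cutoff } }).sort({ last_login_at: -1 })
  exampleESRIndexSpecification() {
    // Equality fields first, then the sort field (keeping its direction), then the range field
    return { status: 1, country: 1, last_login_at: -1, created_at: 1 };
  }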

  async implementComprehensiveIndexingStrategy(collectionName, queryPatterns) {
    console.log(`Implementing comprehensive indexing strategy for ${collectionName}...`);

    const collection = this.collections[collectionName];
    const existingIndexes = await collection.listIndexes().toArray();

    const indexingPlan = {
      collection: collectionName,
      queryPatterns: queryPatterns,
      existingIndexes: existingIndexes,
      recommendedIndexes: [],
      optimizationActions: [],
      performanceProjections: {}
    };

    // Analyze query patterns for optimal index design
    const queryAnalysis = await this.analyzeQueryPatterns(queryPatterns);

    // Generate compound index recommendations
    const compoundIndexes = await this.generateCompoundIndexes(queryAnalysis);

    // Design partial indexes for selective filtering
    const partialIndexes = await this.generatePartialIndexes(queryAnalysis);

    // Create covering indexes for frequently accessed projections
    const coveringIndexes = await this.generateCoveringIndexes(queryAnalysis);

    // Specialized indexes for specific data types and operations
    const specializedIndexes = await this.generateSpecializedIndexes(queryAnalysis);

    indexingPlan.recommendedIndexes = [
      ...compoundIndexes,
      ...partialIndexes, 
      ...coveringIndexes,
      ...specializedIndexes
    ];

    // Validate index recommendations against performance targets
    const validatedPlan = await this.validateIndexingPlan(collection, indexingPlan);

    // Execute index creation with comprehensive monitoring
    const implementationResult = await this.executeIndexingPlan(collection, validatedPlan);

    // Performance validation and optimization
    const performanceValidation = await this.validateIndexPerformance(collection, validatedPlan, queryPatterns);

    return {
      plan: validatedPlan,
      implementation: implementationResult,
      performance: performanceValidation,
      summary: {
        totalIndexes: validatedPlan.recommendedIndexes.length,
        compoundIndexes: compoundIndexes.length,
        partialIndexes: partialIndexes.length,
        coveringIndexes: coveringIndexes.length,
        specializedIndexes: specializedIndexes.length,
        estimatedPerformanceImprovement: this.calculatePerformanceImprovement(validatedPlan)
      }
    };
  }

  async analyzeQueryPatterns(queryPatterns) {
    console.log(`Analyzing ${queryPatterns.length} query patterns for index optimization...`);

    const analysis = {
      fieldUsage: new Map(),           // How often each field is used
      fieldCombinations: new Map(),    // Common field combinations
      filterTypes: new Map(),          // Types of filters (equality, range, etc.)
      sortPatterns: new Map(),         // Sort field combinations
      projectionPatterns: new Map(),   // Frequently requested projections
      selectivityEstimates: new Map()  // Estimated field selectivity
    };

    for (const pattern of queryPatterns) {
      // Analyze filter conditions
      this.analyzeFilterConditions(pattern.filter || {}, analysis);

      // Analyze sort requirements
      this.analyzeSortPatterns(pattern.sort || {}, analysis);

      // Analyze projection requirements
      this.analyzeProjectionPatterns(pattern.projection || {}, analysis);

      // Track query frequency for weighting
      const frequency = pattern.frequency || 1;
      this.updateFrequencyWeights(analysis, frequency);
    }

    // Calculate field selectivity estimates
    await this.estimateFieldSelectivity(analysis);

    // Identify optimal field combinations
    const optimalCombinations = this.identifyOptimalFieldCombinations(analysis);

    return {
      ...analysis,
      optimalCombinations: optimalCombinations,
      indexingRecommendations: this.generateIndexingRecommendations(analysis, optimalCombinations)
    };
  }

  analyzeFilterConditions(filter, analysis) {
    Object.entries(filter).forEach(([field, condition]) => {
      if (field.startsWith('$')) return; // Skip operators

      // Track field usage frequency
      const currentUsage = analysis.fieldUsage.get(field) || 0;
      analysis.fieldUsage.set(field, currentUsage + 1);

      // Categorize filter types
      const filterType = this.categorizeFilterType(condition);
      const currentFilterTypes = analysis.filterTypes.get(field) || new Set();
      currentFilterTypes.add(filterType);
      analysis.filterTypes.set(field, currentFilterTypes);

      // Track field combinations for compound indexes
      const otherFields = Object.keys(filter).filter(f => f !== field && !f.startsWith('$'));
      if (otherFields.length > 0) {
        const combination = [field, ...otherFields].sort().join(',');
        const currentCombinations = analysis.fieldCombinations.get(combination) || 0;
        analysis.fieldCombinations.set(combination, currentCombinations + 1);
      }
    });
  }

  categorizeFilterType(condition) {
    if (typeof condition === 'object' && condition !== null) {
      const operators = Object.keys(condition);

      if (operators.includes('$gte') || operators.includes('$gt') || 
          operators.includes('$lte') || operators.includes('$lt')) {
        return 'range';
      } else if (operators.includes('$in')) {
        return condition.$in.length <= 10 ? 'selective_in' : 'large_in';
      } else if (operators.includes('$regex')) {
        return 'pattern_match';
      } else if (operators.includes('$exists')) {
        return 'existence';
      } else if (operators.includes('$ne')) {
        return 'negation';
      } else {
        return 'complex';
      }
    } else {
      return 'equality';
    }
  }

  analyzeSortPatterns(sort, analysis) {
    if (Object.keys(sort).length === 0) return;

    const sortKey = Object.entries(sort)
      .map(([field, direction]) => `${field}:${direction}`)
      .join(',');

    const currentSort = analysis.sortPatterns.get(sortKey) || 0;
    analysis.sortPatterns.set(sortKey, currentSort + 1);
  }

  analyzeProjectionPatterns(projection, analysis) {
    if (!projection || Object.keys(projection).length === 0) return;

    const projectedFields = Object.keys(projection).filter(field => projection[field] === 1);
    const projectionKey = projectedFields.sort().join(',');

    if (projectionKey) {
      const currentProjection = analysis.projectionPatterns.get(projectionKey) || 0;
      analysis.projectionPatterns.set(projectionKey, currentProjection + 1);
    }
  }

  async generateCompoundIndexes(analysis) {
    console.log('Generating optimal compound index recommendations...');

    const compoundIndexes = [];

    // Sort field combinations by frequency and potential impact
    const sortedCombinations = Array.from(analysis.fieldCombinations.entries())
      .sort(([, a], [, b]) => b - a)
      .slice(0, 20); // Consider top 20 combinations

    for (const [fieldCombination, frequency] of sortedCombinations) {
      const fields = fieldCombination.split(',');

      // Apply ESR (Equality, Sort, Range) pattern optimization
      const optimizedIndex = this.optimizeIndexWithESRPattern(fields, analysis);

      if (optimizedIndex && this.validateIndexUtility(optimizedIndex, analysis)) {
        compoundIndexes.push({
          type: 'compound',
          name: `idx_${optimizedIndex.fields.map(f => f.field).join('_')}`,
          specification: this.buildIndexSpecification(optimizedIndex.fields),
          options: optimizedIndex.options,
          reasoning: optimizedIndex.reasoning,
          estimatedImpact: this.estimateIndexImpact(optimizedIndex, analysis),
          queryPatterns: this.identifyMatchingQueries(optimizedIndex, analysis),
          priority: this.calculateIndexPriority(optimizedIndex, frequency, analysis)
        });
      }
    }

    // Sort by priority and return top recommendations
    return compoundIndexes
      .sort((a, b) => b.priority - a.priority)
      .slice(0, this.performanceTargets.maxIndexesPerCollection);
  }

  optimizeIndexWithESRPattern(fields, analysis) {
    console.log(`Optimizing index for fields: ${fields.join(', ')} using ESR pattern...`);

    const optimizedFields = [];
    const fieldAnalysis = new Map();

    // Analyze each field's characteristics
    fields.forEach(field => {
      const filterTypes = analysis.filterTypes.get(field) || new Set();
      const usage = analysis.fieldUsage.get(field) || 0;
      const selectivity = analysis.selectivityEstimates.get(field) || 0.5;

      fieldAnalysis.set(field, {
        filterTypes: Array.from(filterTypes),
        usage: usage,
        selectivity: selectivity,
        isEquality: filterTypes.has('equality') || filterTypes.has('selective_in'),
        isRange: filterTypes.has('range'),
        isSort: this.isFieldUsedInSort(field, analysis),
        sortDirection: this.getSortDirection(field, analysis)
      });
    });

    // Step 1: Equality fields first (highest selectivity first)
    const equalityFields = fields
      .filter(field => fieldAnalysis.get(field).isEquality)
      .sort((a, b) => fieldAnalysis.get(b).selectivity - fieldAnalysis.get(a).selectivity);

    equalityFields.forEach(field => {
      const fieldInfo = fieldAnalysis.get(field);
      optimizedFields.push({
        field: field,
        direction: 1,
        type: 'equality',
        selectivity: fieldInfo.selectivity,
        reasoning: `Equality filter with ${(fieldInfo.selectivity * 100).toFixed(1)}% selectivity`
      });
    });

    // Step 2: Sort fields (maintaining sort direction)
    const sortFields = fields
      .filter(field => fieldAnalysis.get(field).isSort && !fieldAnalysis.get(field).isEquality)
      .sort((a, b) => fieldAnalysis.get(b).usage - fieldAnalysis.get(a).usage);

    sortFields.forEach(field => {
      const fieldInfo = fieldAnalysis.get(field);
      optimizedFields.push({
        field: field,
        direction: fieldInfo.sortDirection || 1,
        type: 'sort',
        selectivity: fieldInfo.selectivity,
        reasoning: `Sort field with ${fieldInfo.usage} usage frequency`
      });
    });

    // Step 3: Range fields last (lowest selectivity impact)
    const rangeFields = fields
      .filter(field => fieldAnalysis.get(field).isRange && 
                      !fieldAnalysis.get(field).isEquality && 
                      !fieldAnalysis.get(field).isSort)
      .sort((a, b) => fieldAnalysis.get(b).selectivity - fieldAnalysis.get(a).selectivity);

    rangeFields.forEach(field => {
      const fieldInfo = fieldAnalysis.get(field);
      optimizedFields.push({
        field: field,
        direction: 1,
        type: 'range',
        selectivity: fieldInfo.selectivity,
        reasoning: `Range filter with ${(fieldInfo.selectivity * 100).toFixed(1)}% selectivity`
      });
    });

    // Validate and return optimized index
    if (optimizedFields.length === 0) return null;

    return {
      fields: optimizedFields,
      options: this.generateIndexOptions(optimizedFields, analysis),
      reasoning: `ESR-optimized compound index: ${optimizedFields.length} fields arranged for optimal query performance`,
      estimatedSelectivity: this.calculateCompoundSelectivity(optimizedFields),
      supportedQueryTypes: this.identifySupportedQueryTypes(optimizedFields, analysis)
    };
  }

  async generatePartialIndexes(analysis) {
    console.log('Generating partial index recommendations for selective filtering...');

    const partialIndexes = [];

    // Identify fields with high selectivity potential
    const selectiveFields = Array.from(analysis.selectivityEstimates.entries())
      .filter(([field, selectivity]) => selectivity < this.performanceTargets.minIndexSelectivity)
      .sort(([, a], [, b]) => a - b); // Lower selectivity first (more selective)

    for (const [field, selectivity] of selectiveFields) {
      const filterTypes = analysis.filterTypes.get(field) || new Set();
      const usage = analysis.fieldUsage.get(field) || 0;

      // Generate partial filter conditions
      const partialFilters = this.generatePartialFilterConditions(field, filterTypes, analysis);

      for (const partialFilter of partialFilters) {
        const partialIndex = {
          type: 'partial',
          name: `idx_${field}_${partialFilter.suffix}`,
          specification: { [field]: 1 },
          options: {
            partialFilterExpression: partialFilter.expression,
            background: true
          },
          reasoning: partialFilter.reasoning,
          estimatedReduction: partialFilter.estimatedReduction,
          applicableQueries: partialFilter.applicableQueries,
          priority: this.calculatePartialIndexPriority(field, usage, selectivity, partialFilter)
        };

        if (this.validatePartialIndexUtility(partialIndex, analysis)) {
          partialIndexes.push(partialIndex);
        }
      }
    }

    return partialIndexes
      .sort((a, b) => b.priority - a.priority)
      .slice(0, Math.floor(this.performanceTargets.maxIndexesPerCollection / 3));
  }

  generatePartialFilterConditions(field, filterTypes, analysis) {
    const partialFilters = [];

    // Status/category fields with selective values
    if (filterTypes.has('equality') || filterTypes.has('selective_in')) {
      partialFilters.push({
        expression: { [field]: { $in: ['active', 'premium', 'verified'] } },
        suffix: 'active_premium',
        reasoning: `Partial index for high-value ${field} categories`,
        estimatedReduction: 0.7,
        applicableQueries: [`${field} equality matches for active/premium users`]
      });
    }

    // Date fields with recency focus
    if (filterTypes.has('range') && (field.includes('date') || field.includes('time'))) {
      partialFilters.push({
        expression: { [field]: { $gte: new Date(Date.now() - 90 * 24 * 60 * 60 * 1000) } }, // evaluated once at index creation; the cutoff does not slide forward
        suffix: 'recent_90d',
        reasoning: `Partial index for recent ${field} within 90 days`,
        estimatedReduction: 0.8,
        applicableQueries: [`Recent ${field} range queries`]
      });
    }

    // Numeric fields with value thresholds
    if (filterTypes.has('range') && (field.includes('amount') || field.includes('count') || field.includes('score'))) {
      partialFilters.push({
        expression: { [field]: { $gt: 0 } },
        suffix: 'positive_values',
        reasoning: `Partial index excluding zero/null ${field} values`,
        estimatedReduction: 0.6,
        applicableQueries: [`${field} range queries for positive values`]
      });
    }

    return partialFilters;
  }
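
  // Illustrative helper (an assumption, not part of the optimizer flow): one of the partial
  // index recommendations above translates into a driver call using partialFilterExpression
  async createExamplePartialIndex(collection) {
    // Index only documents with a positive amount, keeping the index small and selective
    return collection.createIndex(
      { total_amount: 1 },
      { partialFilterExpression: { total_amount: { $gt: 0 } }, name: 'idx_total_amount_positive' }
    );
  }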

  async generateCoveringIndexes(analysis) {
    console.log('Generating covering index recommendations for query optimization...');

    const coveringIndexes = [];

    // Analyze projection patterns to identify covering index opportunities
    const projectionAnalysis = Array.from(analysis.projectionPatterns.entries())
      .sort(([, a], [, b]) => b - a)
      .slice(0, 10); // Top 10 projection patterns

    for (const [projectionKey, frequency] of projectionAnalysis) {
      const projectedFields = projectionKey.split(',');

      // Find queries that could benefit from covering indexes
      const candidateQueries = this.identifyConveringIndexCandidates(projectedFields, analysis);

      if (candidateQueries.length > 0) {
        const coveringIndex = this.designCoveringIndex(projectedFields, candidateQueries, analysis);

        if (coveringIndex && this.validateCoveringIndexBenefit(coveringIndex, analysis)) {
          coveringIndexes.push({
            type: 'covering',
            name: `idx_covering_${coveringIndex.keyFields.join('_')}`,
            specification: coveringIndex.specification,
            options: coveringIndex.options,
            reasoning: coveringIndex.reasoning,
            coveredQueries: candidateQueries.length,
            projectedFields: projectedFields,
            estimatedImpact: this.estimateCoveringIndexImpact(coveringIndex, frequency),
            priority: this.calculateCoveringIndexPriority(coveringIndex, frequency, candidateQueries.length)
          });
        }
      }
    }

    return coveringIndexes
      .sort((a, b) => b.priority - a.priority)
      .slice(0, Math.floor(this.performanceTargets.maxIndexesPerCollection / 4));
  }

  designCoveringIndex(projectedFields, candidateQueries, analysis) {
    // Analyze filter and sort patterns from candidate queries
    const filterFields = new Set();
    const sortFields = new Map();

    candidateQueries.forEach(query => {
      Object.keys(query.filter || {}).forEach(field => {
        if (!field.startsWith('$')) {
          filterFields.add(field);
        }
      });

      Object.entries(query.sort || {}).forEach(([field, direction]) => {
        sortFields.set(field, direction);
      });
    });

    // Design optimal key structure
    const keyFields = [];
    const includeFields = [];

    // Add filter fields to key (equality first, then range)
    const equalityFields = Array.from(filterFields).filter(field => {
      const filterTypes = analysis.filterTypes.get(field) || new Set();
      return filterTypes.has('equality') || filterTypes.has('selective_in');
    });

    const rangeFields = Array.from(filterFields).filter(field => {
      const filterTypes = analysis.filterTypes.get(field) || new Set();
      return filterTypes.has('range');
    });

    // Add equality fields to key
    equalityFields.forEach(field => {
      keyFields.push(field);
    });

    // Add sort fields to key
    sortFields.forEach((direction, field) => {
      if (!keyFields.includes(field)) {
        keyFields.push(field);
      }
    });

    // Add range fields to key
    rangeFields.forEach(field => {
      if (!keyFields.includes(field)) {
        keyFields.push(field);
      }
    });

    // Add remaining projected fields as included fields
    projectedFields.forEach(field => {
      if (!keyFields.includes(field)) {
        includeFields.push(field);
      }
    });

    if (keyFields.length === 0) return null;

    // Build index specification
    const specification = {};
    keyFields.forEach(field => {
      const direction = sortFields.get(field) || 1;
      specification[field] = direction;
    });

    return {
      keyFields: keyFields,
      includeFields: includeFields,
      specification: specification,
      options: {
        background: true,
        // Note: MongoDB has no native INCLUDE clause, so covered fields must appear in the
        // index key itself. Keep this list as planning metadata only and strip it before
        // passing the options to createIndex().
        ...(includeFields.length > 0 && { includeFields: includeFields })
      },
      reasoning: `Covering index with ${keyFields.length} key fields and ${includeFields.length} included fields`,
      estimatedCoverage: this.calculateQueryCoverage(keyFields, includeFields, candidateQueries)
    };
  }

  async generateSpecializedIndexes(analysis) {
    console.log('Generating specialized index recommendations...');

    const specializedIndexes = [];

    // Text search indexes for string fields with pattern matching
    const textFields = this.identifyTextSearchFields(analysis);
    textFields.forEach(textField => {
      specializedIndexes.push({
        type: 'text',
        name: `idx_text_${textField.field}`,
        specification: { [textField.field]: 'text' },
        options: {
          background: true,
          default_language: 'english',
          weights: { [textField.field]: textField.weight }
        },
        reasoning: `Text search index for ${textField.field} pattern matching`,
        applicableQueries: textField.queries,
        priority: textField.priority
      });
    });

    // Geospatial indexes for location data
    const geoFields = this.identifyGeospatialFields(analysis);
    geoFields.forEach(geoField => {
      specializedIndexes.push({
        type: 'geospatial',
        name: `idx_geo_${geoField.field}`,
        specification: { [geoField.field]: '2dsphere' },
        options: {
          background: true,
          '2dsphereIndexVersion': 3
        },
        reasoning: `Geospatial index for ${geoField.field} location queries`,
        applicableQueries: geoField.queries,
        priority: geoField.priority
      });
    });

    // TTL indexes for time-based data expiration
    const ttlFields = this.identifyTTLFields(analysis);
    ttlFields.forEach(ttlField => {
      specializedIndexes.push({
        type: 'ttl',
        name: `idx_ttl_${ttlField.field}`,
        specification: { [ttlField.field]: 1 },
        options: {
          background: true,
          expireAfterSeconds: ttlField.expireAfterSeconds
        },
        reasoning: `TTL index for automatic ${ttlField.field} data expiration`,
        expirationPeriod: ttlField.expirationPeriod,
        priority: ttlField.priority
      });
    });

    // Sparse indexes for fields with many null values
    const sparseFields = this.identifySparseFields(analysis);
    sparseFields.forEach(sparseField => {
      specializedIndexes.push({
        type: 'sparse',
        name: `idx_sparse_${sparseField.field}`,
        specification: { [sparseField.field]: 1 },
        options: {
          background: true,
          sparse: true
        },
        reasoning: `Sparse index for ${sparseField.field} excluding null values`,
        nullPercentage: sparseField.nullPercentage,
        priority: sparseField.priority
      });
    });

    return specializedIndexes
      .sort((a, b) => b.priority - a.priority)
      .slice(0, Math.floor(this.performanceTargets.maxIndexesPerCollection / 2));
  }

  async executeIndexingPlan(collection, plan) {
    console.log(`Executing indexing plan for ${collection.collectionName}...`);

    const results = {
      successful: [],
      failed: [],
      skipped: [],
      totalTime: 0
    };

    const startTime = Date.now();

    for (const index of plan.recommendedIndexes) {
      try {
        console.log(`Creating index: ${index.name}`);

        // Check if index already exists
        const existingIndexes = await collection.listIndexes().toArray();
        const indexExists = existingIndexes.some(existing => existing.name === index.name);

        if (indexExists) {
          console.log(`Index ${index.name} already exists, skipping...`);
          results.skipped.push({
            name: index.name,
            reason: 'Index already exists'
          });
          continue;
        }

        // Create the index
        const indexStartTime = Date.now();
        await collection.createIndex(index.specification, {
          name: index.name,
          ...index.options
        });
        const indexCreationTime = Date.now() - indexStartTime;

        results.successful.push({
          name: index.name,
          type: index.type,
          specification: index.specification,
          creationTime: indexCreationTime,
          estimatedImpact: index.estimatedImpact
        });

        console.log(`Index ${index.name} created successfully in ${indexCreationTime}ms`);

      } catch (error) {
        console.error(`Failed to create index ${index.name}:`, error.message);
        results.failed.push({
          name: index.name,
          type: index.type,
          error: error.message,
          specification: index.specification
        });
      }
    }

    results.totalTime = Date.now() - startTime;

    console.log(`Index creation completed in ${results.totalTime}ms`);
    console.log(`Successful: ${results.successful.length}, Failed: ${results.failed.length}, Skipped: ${results.skipped.length}`);

    return results;
  }

  async validateIndexPerformance(collection, plan, queryPatterns) {
    console.log('Validating index performance with test queries...');

    const validation = {
      queries: [],
      summary: {
        totalQueries: queryPatterns.length,
        improvedQueries: 0,
        avgImprovementPct: 0,
        significantImprovements: 0
      }
    };

    for (const pattern of queryPatterns.slice(0, 20)) { // Test top 20 patterns
      try {
        // Execute query with explain to get performance metrics
        const collectionHandle = this.collections[collection.collectionName] || collection;

        let cursor;
        if (pattern.aggregation) {
          cursor = collectionHandle.aggregate(pattern.aggregation);
        } else {
          cursor = collectionHandle.find(pattern.filter || {});
          if (pattern.sort) cursor.sort(pattern.sort);
          if (pattern.limit) cursor.limit(pattern.limit);
          if (pattern.projection) cursor.project(pattern.projection);
        }

        const explainResult = await cursor.explain('executionStats');

        const queryValidation = {
          pattern: pattern.name || 'Unnamed query',
          executionTimeMs: explainResult.executionStats?.executionTimeMillis || 0,
          totalDocsExamined: explainResult.executionStats?.totalDocsExamined || 0,
          totalDocsReturned: explainResult.executionStats?.nReturned || 0,
          indexesUsed: this.extractIndexNames(explainResult),
          efficiency: this.calculateQueryEfficiency(explainResult),
          grade: this.assignPerformanceGrade(explainResult),
          improvement: this.calculateImprovement(pattern, explainResult)
        };

        validation.queries.push(queryValidation);

        if (queryValidation.improvement > 0) {
          validation.summary.improvedQueries++;
          validation.summary.avgImprovementPct += queryValidation.improvement;
        }

        if (queryValidation.improvement > 50) {
          validation.summary.significantImprovements++;
        }

      } catch (error) {
        console.warn(`Query validation failed for pattern: ${pattern.name}`, error.message);
        validation.queries.push({
          pattern: pattern.name || 'Unnamed query',
          error: error.message,
          success: false
        });
      }
    }

    if (validation.summary.improvedQueries > 0) {
      validation.summary.avgImprovementPct /= validation.summary.improvedQueries;
    }

    console.log(`Performance validation completed: ${validation.summary.improvedQueries}/${validation.summary.totalQueries} queries improved`);
    console.log(`Average improvement: ${validation.summary.avgImprovementPct.toFixed(1)}%`);
    console.log(`Significant improvements: ${validation.summary.significantImprovements}`);

    return validation;
  }

  // Helper methods for advanced index analysis and optimization

  buildIndexSpecification(fields) {
    const spec = {};
    fields.forEach(field => {
      spec[field.field] = field.direction;
    });
    return spec;
  }

  generateIndexOptions(fields, analysis) {
    return {
      background: true,
      ...(this.shouldUsePartialFilter(fields, analysis) && {
        partialFilterExpression: this.buildOptimalPartialFilter(fields, analysis)
      })
    };
  }

  isFieldUsedInSort(field, analysis) {
    for (const [sortPattern] of analysis.sortPatterns) {
      if (sortPattern.includes(`${field}:`)) {
        return true;
      }
    }
    return false;
  }

  getSortDirection(field, analysis) {
    for (const [sortPattern] of analysis.sortPatterns) {
      const fieldPattern = sortPattern.split(',').find(pattern => pattern.startsWith(`${field}:`));
      if (fieldPattern) {
        return parseInt(fieldPattern.split(':')[1]) || 1;
      }
    }
    return 1;
  }

  calculateCompoundSelectivity(fields) {
    // Estimate compound selectivity using field independence assumption
    return fields.reduce((selectivity, field) => {
      return selectivity * (field.selectivity || 0.1);
    }, 1);
  }

  validateIndexUtility(index, analysis) {
    // Validate that index provides meaningful benefit
    const estimatedSelectivity = this.calculateCompoundSelectivity(index.fields);
    const supportedQueries = this.identifyMatchingQueries(index, analysis);

    return estimatedSelectivity < 0.5 && supportedQueries.length > 0;
  }

  identifyMatchingQueries(index, analysis) {
    // Simplified query matching logic
    const matchingQueries = [];
    const indexFields = new Set(index.fields.map(f => f.field));

    // Check field combinations that would benefit from this index
    for (const [fieldCombination, frequency] of analysis.fieldCombinations) {
      const queryFields = new Set(fieldCombination.split(','));
      const overlap = [...indexFields].filter(field => queryFields.has(field));

      if (overlap.length >= 2) { // At least 2 fields overlap
        matchingQueries.push({
          fields: fieldCombination,
          frequency: frequency,
          coverage: overlap.length / indexFields.size
        });
      }
    }

    return matchingQueries;
  }

  calculateIndexPriority(index, frequency, analysis) {
    const baseScore = frequency * 10;
    const selectivityBonus = (1 - index.estimatedSelectivity) * 50;
    const fieldCountPenalty = index.fields.length * 5;

    return Math.max(0, baseScore + selectivityBonus - fieldCountPenalty);
  }

  calculatePerformanceImprovement(plan) {
    // Simplified improvement estimation
    const baseImprovement = plan.recommendedIndexes.length * 15; // 15% per index
    const compoundBonus = plan.recommendedIndexes.filter(idx => idx.type === 'compound').length * 25;
    const partialBonus = plan.recommendedIndexes.filter(idx => idx.type === 'partial').length * 35;

    return Math.min(90, baseImprovement + compoundBonus + partialBonus);
  }

  extractIndexNames(explainResult) {
    const indexes = new Set();

    const extractFromStage = (stage) => {
      if (stage.indexName) {
        indexes.add(stage.indexName);
      }
      if (stage.inputStage) {
        extractFromStage(stage.inputStage);
      }
      if (stage.inputStages) {
        stage.inputStages.forEach(extractFromStage);
      }
    };

    if (explainResult.executionStats?.executionStages) {
      extractFromStage(explainResult.executionStats.executionStages);
    }

    return Array.from(indexes);
  }

  calculateQueryEfficiency(explainResult) {
    const stats = explainResult.executionStats;
    if (!stats) return 0;

    const examined = stats.totalDocsExamined || 0;
    const returned = stats.nReturned || 0;

    return examined > 0 ? returned / examined : 1;
  }

  assignPerformanceGrade(explainResult) {
    const efficiency = this.calculateQueryEfficiency(explainResult);
    const executionTime = explainResult.executionStats?.executionTimeMillis || 0;
    const hasIndexScan = this.extractIndexNames(explainResult).length > 0;

    let score = 0;

    // Efficiency scoring
    if (efficiency >= 0.8) score += 40;
    else if (efficiency >= 0.5) score += 30;
    else if (efficiency >= 0.2) score += 20;
    else if (efficiency >= 0.1) score += 10;

    // Execution time scoring
    if (executionTime <= 50) score += 35;
    else if (executionTime <= 100) score += 25;
    else if (executionTime <= 250) score += 15;
    else if (executionTime <= 500) score += 5;

    // Index usage scoring
    if (hasIndexScan) score += 25;

    if (score >= 85) return 'A';
    else if (score >= 70) return 'B';
    else if (score >= 50) return 'C';
    else if (score >= 30) return 'D';
    else return 'F';
  }

  calculateImprovement(pattern, explainResult) {
    // Simplified improvement calculation
    const efficiency = this.calculateQueryEfficiency(explainResult);
    const executionTime = explainResult.executionStats?.executionTimeMillis || 0;
    const hasIndexScan = this.extractIndexNames(explainResult).length > 0;

    let improvementScore = 0;

    if (hasIndexScan) improvementScore += 30;
    if (efficiency > 0.5) improvementScore += 40;
    if (executionTime < 100) improvementScore += 30;

    return Math.min(100, improvementScore);
  }

  // Additional helper methods for specialized index types

  identifyTextSearchFields(analysis) {
    const textFields = [];

    analysis.filterTypes.forEach((types, field) => {
      if (types.has('pattern_match') && 
          (field.includes('name') || field.includes('title') || field.includes('description'))) {
        textFields.push({
          field: field,
          weight: analysis.fieldUsage.get(field) || 1,
          queries: [`Text search on ${field}`],
          priority: (analysis.fieldUsage.get(field) || 0) * 10
        });
      }
    });

    return textFields;
  }

  identifyGeospatialFields(analysis) {
    const geoFields = [];

    analysis.fieldUsage.forEach((usage, field) => {
      if (field.includes('location') || field.includes('coordinates') || 
          field.includes('lat') || field.includes('lng') || field.includes('geo')) {
        geoFields.push({
          field: field,
          queries: [`Geospatial queries on ${field}`],
          priority: usage * 15
        });
      }
    });

    return geoFields;
  }

  identifyTTLFields(analysis) {
    const ttlFields = [];

    analysis.fieldUsage.forEach((usage, field) => {
      // Caution: a TTL index on createdAt/updatedAt deletes documents as they age out,
      // so it is only appropriate for ephemeral collections (sessions, tokens, logs).
      if (field.includes('expires') || field.includes('expire') ||
          field === 'createdAt' || field === 'updatedAt') {
        ttlFields.push({
          field: field,
          expireAfterSeconds: this.getExpireAfterSeconds(field),
          expirationPeriod: this.getExpirationPeriod(field),
          priority: usage * 5
        });
      }
    });

    return ttlFields;
  }

  identifySparseFields(analysis) {
    const sparseFields = [];

    // Fields that are likely to have many null values
    const potentialSparseFields = ['phone', 'middle_name', 'company', 'notes', 'optional_field'];

    analysis.fieldUsage.forEach((usage, field) => {
      if (potentialSparseFields.some(sparse => field.includes(sparse))) {
        sparseFields.push({
          field: field,
          nullPercentage: 0.6, // Estimated
          priority: usage * 8
        });
      }
    });

    return sparseFields;
  }

  getExpireAfterSeconds(field) {
    const expirationMap = {
      'session': 86400,        // 1 day
      'temp': 3600,           // 1 hour  
      'cache': 1800,          // 30 minutes
      'token': 3600,          // 1 hour
      'verification': 86400,   // 1 day
      'expires': 0            // Use field value
    };

    for (const [key, seconds] of Object.entries(expirationMap)) {
      if (field.includes(key)) {
        return seconds;
      }
    }

    return 86400; // Default 1 day
  }

  getExpirationPeriod(field) {
    const expireAfter = this.getExpireAfterSeconds(field);
    if (expireAfter >= 86400) return `${Math.floor(expireAfter / 86400)} days`;
    if (expireAfter >= 3600) return `${Math.floor(expireAfter / 3600)} hours`;
    return `${Math.floor(expireAfter / 60)} minutes`;
  }

  async estimateFieldSelectivity(analysis) {
    // Simplified selectivity estimation
    // In production, this would use actual data sampling

    analysis.fieldUsage.forEach((usage, field) => {
      let estimatedSelectivity = 0.5; // Default

      // Status/enum fields typically have low cardinality
      if (field.includes('status') || field.includes('type') || field.includes('category')) {
        estimatedSelectivity = 0.1;
      }
      // ID fields have high cardinality
      else if (field.includes('id') || field.includes('_id')) {
        estimatedSelectivity = 0.9;
      }
      // Email fields have high cardinality
      else if (field.includes('email')) {
        estimatedSelectivity = 0.8;
      }
      // Date fields vary based on range
      else if (field.includes('date') || field.includes('time')) {
        estimatedSelectivity = 0.3;
      }

      analysis.selectivityEstimates.set(field, estimatedSelectivity);
    });
  }

  identifyOptimalFieldCombinations(analysis) {
    const combinations = [];

    // Sort combinations by frequency and expected performance impact
    const sortedCombinations = Array.from(analysis.fieldCombinations.entries())
      .sort(([, a], [, b]) => b - a);

    sortedCombinations.forEach(([combination, frequency]) => {
      const fields = combination.split(',');
      const totalSelectivity = fields.reduce((product, field) => {
        return product * (analysis.selectivityEstimates.get(field) || 0.5);
      }, 1);

      combinations.push({
        fields: fields,
        frequency: frequency,
        selectivity: totalSelectivity,
        score: frequency * (1 - totalSelectivity) * 100,
        reasoning: `Combination of ${fields.length} fields with ${frequency} usage frequency`
      });
    });

    return combinations
      .sort((a, b) => b.score - a.score)
      .slice(0, 15);
  }

  generateIndexingRecommendations(analysis, optimalCombinations) {
    return {
      topFieldCombinations: optimalCombinations.slice(0, 5),
      highUsageFields: Array.from(analysis.fieldUsage.entries())
        .sort(([, a], [, b]) => b - a)
        .slice(0, 10)
        .map(([field, usage]) => ({ field, usage })),
      selectiveFields: Array.from(analysis.selectivityEstimates.entries())
        .filter(([, selectivity]) => selectivity < 0.2)
        .sort(([, a], [, b]) => a - b)
        .map(([field, selectivity]) => ({ field, selectivity })),
      commonSortPatterns: Array.from(analysis.sortPatterns.entries())
        .sort(([, a], [, b]) => b - a)
        .slice(0, 5)
        .map(([pattern, frequency]) => ({ pattern, frequency }))
    };
  }
}

// Benefits of MongoDB Advanced Indexing Strategies:
// - Comprehensive compound index design using ESR (Equality, Sort, Range) optimization patterns
// - Intelligent partial indexing for selective filtering and reduced storage overhead
// - Sophisticated covering index generation for complete query optimization
// - Specialized index support for text search, geospatial, TTL, and sparse data patterns
// - Automated index performance validation and impact measurement
// - Production-ready index creation with background processing and error handling
// - Advanced query pattern analysis and field combination optimization
// - Integration with MongoDB's native indexing capabilities and query optimizer
// - Comprehensive performance monitoring and index effectiveness tracking
// - SQL-compatible index management through QueryLeaf integration

module.exports = {
  MongoIndexOptimizer
};
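
For orientation, the sketch below shows one way the optimizer class above might be wired into a one-off maintenance script. The connection string, database name, module path, query pattern, and plan object are all illustrative stand-ins; in practice the plan would come from the analysis methods discussed earlier, and the constructor is assumed to initialize internal state such as the collections map.

// Hypothetical usage sketch for MongoIndexOptimizer. The plan object mirrors the
// shape consumed by executeIndexingPlan() above ({ recommendedIndexes: [...] }).
const { MongoClient } = require('mongodb');
const { MongoIndexOptimizer } = require('./mongo-index-optimizer'); // placeholder path

async function optimizeOrdersIndexes() {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();

  try {
    const db = client.db('ecommerce');
    const orders = db.collection('orders');
    const optimizer = new MongoIndexOptimizer(db);

    // Illustrative query pattern and plan; real values would come from analysis.
    const queryPatterns = [{
      name: 'recent_completed_orders_by_user',
      filter: { userId: 'u123', status: 'completed' },
      sort: { orderDate: -1 },
      limit: 50
    }];

    const plan = {
      recommendedIndexes: [{
        name: 'idx_orders_userId_status_orderDate',
        type: 'compound',
        specification: { userId: 1, status: 1, orderDate: -1 },
        options: {}
      }]
    };

    const creation = await optimizer.executeIndexingPlan(orders, plan);
    const validation = await optimizer.validateIndexPerformance(orders, plan, queryPatterns);

    console.log(`Created ${creation.successful.length} indexes`);
    console.log(`Improved queries: ${validation.summary.improvedQueries}/${validation.summary.totalQueries}`);
  } finally {
    await client.close();
  }
}

optimizeOrdersIndexes().catch(console.error);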

Understanding MongoDB Compound Index Architecture

Advanced Index Design Patterns and Performance Optimization

Implement sophisticated compound indexing strategies for production-scale applications:

// Production-ready compound index management and optimization patterns
class ProductionIndexManager extends MongoIndexOptimizer {
  constructor(db) {
    super(db);

    this.productionConfig = {
      maxConcurrentIndexBuilds: 2,
      indexMaintenanceWindows: ['02:00-04:00'],
      performanceMonitoringInterval: 300000, // 5 minutes
      autoOptimizationEnabled: true,
      indexUsageTrackingPeriod: 86400000 // 24 hours
    };

    this.indexMetrics = new Map();
    this.optimizationQueue = [];
  }

  async implementProductionIndexingWorkflow(collections) {
    console.log('Implementing production-grade indexing workflow...');

    const workflow = {
      phase1_analysis: await this.performComprehensiveIndexAnalysis(collections),
      phase2_planning: await this.generateProductionIndexPlan(collections),
      phase3_execution: await this.executeProductionIndexPlan(collections),
      phase4_monitoring: await this.setupIndexPerformanceMonitoring(collections),
      phase5_optimization: await this.implementContinuousOptimization(collections)
    };

    return {
      workflow: workflow,
      summary: this.generateWorkflowSummary(workflow),
      monitoring: await this.setupProductionMonitoring(collections),
      maintenance: await this.scheduleIndexMaintenance(collections)
    };
  }

  async performComprehensiveIndexAnalysis(collections) {
    console.log('Performing comprehensive production index analysis...');

    const analysis = {
      collections: [],
      globalPatterns: new Map(),
      crossCollectionOptimizations: [],
      resourceImpact: {},
      riskAssessment: {}
    };

    for (const collectionName of collections) {
      const collection = this.collections[collectionName];

      // Analyze current index usage
      const indexStats = await this.analyzeCurrentIndexUsage(collection);

      // Sample query patterns from profiler
      const queryPatterns = await this.extractQueryPatternsFromProfiler(collection);

      // Analyze data distribution and selectivity
      const dataDistribution = await this.analyzeDataDistribution(collection);

      // Resource utilization analysis
      const resourceUsage = await this.analyzeIndexResourceUsage(collection);

      analysis.collections.push({
        name: collectionName,
        indexStats: indexStats,
        queryPatterns: queryPatterns,
        dataDistribution: dataDistribution,
        resourceUsage: resourceUsage,
        recommendations: await this.generateCollectionSpecificRecommendations(collection, queryPatterns, dataDistribution)
      });
    }

    // Identify global optimization opportunities
    analysis.crossCollectionOptimizations = await this.identifyCrossCollectionOptimizations(analysis.collections);

    // Assess resource impact and risks
    analysis.resourceImpact = this.assessResourceImpact(analysis.collections);
    analysis.riskAssessment = this.performIndexingRiskAssessment(analysis.collections);

    return analysis;
  }

  async analyzeCurrentIndexUsage(collection) {
    console.log(`Analyzing current index usage for ${collection.collectionName}...`);

    try {
      // Get index statistics
      const indexStats = await collection.aggregate([
        { $indexStats: {} }
      ]).toArray();

      // Get collection statistics
      const collStats = await this.db.command({ collStats: collection.collectionName });

      const analysis = {
        indexes: [],
        totalIndexSize: 0,
        unusedIndexes: [],
        underutilizedIndexes: [],
        highImpactIndexes: [],
        recommendations: []
      };

      indexStats.forEach(indexStat => {
        const indexAnalysis = {
          name: indexStat.name,
          key: indexStat.key,
          accessCount: indexStat.accesses?.ops || 0,
          accessSinceLastRestart: indexStat.accesses?.since || new Date(),
          // $indexStats does not report index size, so read it from collStats.indexSizes
          sizeBytes: collStats.indexSizes?.[indexStat.name] || 0,

          // Calculate utilization metrics
          utilizationScore: this.calculateIndexUtilizationScore(indexStat),
          efficiency: this.calculateIndexEfficiency(indexStat, collStats),

          // Categorize index usage
          category: this.categorizeIndexUsage(indexStat),

          // Performance impact assessment
          impactScore: this.calculateIndexImpactScore(indexStat, collStats)
        };

        analysis.indexes.push(indexAnalysis);
        analysis.totalIndexSize += indexAnalysis.sizeBytes;

        // Categorize indexes based on usage patterns
        if (indexAnalysis.category === 'unused') {
          analysis.unusedIndexes.push(indexAnalysis);
        } else if (indexAnalysis.category === 'underutilized') {
          analysis.underutilizedIndexes.push(indexAnalysis);
        } else if (indexAnalysis.impactScore > 80) {
          analysis.highImpactIndexes.push(indexAnalysis);
        }
      });

      // Generate optimization recommendations
      analysis.recommendations = this.generateIndexOptimizationRecommendations(analysis);

      return analysis;

    } catch (error) {
      console.warn(`Failed to analyze index usage for ${collection.collectionName}:`, error.message);
      return { error: error.message };
    }
  }

  async extractQueryPatternsFromProfiler(collection) {
    console.log(`Extracting query patterns from profiler for ${collection.collectionName}...`);

    try {
      // Query the profiler collection for recent operations
      const profileData = await this.db.collection('system.profile').aggregate([
        {
          $match: {
            ns: `${this.db.databaseName}.${collection.collectionName}`,
            ts: { $gte: new Date(Date.now() - this.productionConfig.indexUsageTrackingPeriod) },
            'command.find': { $exists: true }
          }
        },
        {
          $group: {
            _id: {
              filter: '$command.filter',
              sort: '$command.sort',
              projection: '$command.projection'
            },
            count: { $sum: 1 },
            avgExecutionTime: { $avg: '$millis' },
            totalDocsExamined: { $sum: '$docsExamined' },
            totalDocsReturned: { $sum: '$nreturned' },
            indexesUsed: { $addToSet: '$planSummary' }
          }
        },
        {
          $sort: { count: -1 }
        },
        {
          $limit: 100
        }
      ]).toArray();

      const patterns = profileData.map(pattern => ({
        filter: pattern._id.filter || {},
        sort: pattern._id.sort || {},
        projection: pattern._id.projection || {},
        frequency: pattern.count,
        avgExecutionTime: pattern.avgExecutionTime,
        efficiency: pattern.totalDocsReturned / Math.max(pattern.totalDocsExamined, 1),
        indexesUsed: pattern.indexesUsed,
        priority: this.calculateQueryPatternPriority(pattern)
      }));

      return patterns.sort((a, b) => b.priority - a.priority);

    } catch (error) {
      console.warn(`Failed to extract query patterns for ${collection.collectionName}:`, error.message);
      return [];
    }
  }

  async implementAdvancedIndexMonitoring(collections) {
    console.log('Setting up advanced index performance monitoring...');

    const monitoringConfig = {
      collections: collections,
      metrics: {
        indexUtilization: true,
        queryPerformance: true,
        resourceConsumption: true,
        growthTrends: true
      },
      alerts: {
        unusedIndexes: { threshold: 0.01, period: '7d' },
        slowQueries: { threshold: 1000, period: '1h' },
        highResourceUsage: { threshold: 0.8, period: '15m' }
      },
      reporting: {
        frequency: 'daily',
        recipients: ['dba-team@company.com']
      }
    };

    // Create monitoring aggregation pipelines
    const monitoringPipelines = await this.createMonitoringPipelines(collections);

    // Setup automated alerts
    const alertSystem = await this.setupIndexAlertSystem(monitoringConfig);

    // Initialize performance tracking
    const performanceTracker = await this.initializePerformanceTracking(collections);

    return {
      config: monitoringConfig,
      pipelines: monitoringPipelines,
      alerts: alertSystem,
      tracking: performanceTracker,
      dashboard: await this.createIndexMonitoringDashboard(collections)
    };
  }

  calculateIndexUtilizationScore(indexStat) {
    const accessCount = indexStat.accesses?.ops || 0;
    const timeSinceLastRestart = Date.now() - (indexStat.accesses?.since?.getTime() || Date.now());
    const hoursRunning = timeSinceLastRestart / (1000 * 60 * 60);

    // Calculate accesses per hour
    const accessesPerHour = hoursRunning > 0 ? accessCount / hoursRunning : 0;

    // Score based on usage frequency
    if (accessesPerHour > 100) return 100;
    else if (accessesPerHour > 10) return 80;
    else if (accessesPerHour > 1) return 60;
    else if (accessesPerHour > 0.1) return 40;
    else if (accessesPerHour > 0) return 20;
    else return 0;
  }

  calculateIndexEfficiency(indexStat, collStats) {
    const indexSize = indexStat.size || 0;
    const accessCount = indexStat.accesses?.ops || 0;
    const totalCollectionSize = collStats.size || 1;

    // Efficiency based on size-to-usage ratio
    const sizeRatio = indexSize / totalCollectionSize;
    const usageEfficiency = accessCount > 0 ? Math.min(100, accessCount / sizeRatio) : 0;

    return Math.round(usageEfficiency);
  }

  categorizeIndexUsage(indexStat) {
    const utilizationScore = this.calculateIndexUtilizationScore(indexStat);

    if (utilizationScore === 0) return 'unused';
    else if (utilizationScore < 20) return 'underutilized';
    else if (utilizationScore < 60) return 'moderate';
    else if (utilizationScore < 90) return 'well_used';
    else return 'critical';
  }

  calculateIndexImpactScore(indexStat, collStats) {
    const utilizationScore = this.calculateIndexUtilizationScore(indexStat);
    const efficiency = this.calculateIndexEfficiency(indexStat, collStats);
    const sizeImpact = (indexStat.size || 0) / (collStats.size || 1) * 100;

    // Combined impact score
    return Math.round((utilizationScore * 0.5) + (efficiency * 0.3) + (sizeImpact * 0.2));
  }

  calculateQueryPatternPriority(pattern) {
    const frequencyScore = Math.min(100, pattern.count * 2);
    const performanceScore = pattern.avgExecutionTime > 100 ? 50 : 
                           pattern.avgExecutionTime > 50 ? 30 : 10;
    const efficiencyScore = pattern.efficiency > 0.8 ? 0 : 
                          pattern.efficiency > 0.5 ? 20 : 40;

    return frequencyScore + performanceScore + efficiencyScore;
  }

  generateIndexOptimizationRecommendations(analysis) {
    const recommendations = [];

    // Unused index recommendations
    analysis.unusedIndexes.forEach(index => {
      if (index.name !== '_id_') { // Never recommend removing _id_ index
        recommendations.push({
          type: 'DROP_INDEX',
          priority: 'LOW',
          index: index.name,
          reason: `Index has ${index.accessCount} accesses since last restart`,
          estimatedSavings: `${(index.sizeBytes / 1024 / 1024).toFixed(2)}MB storage`,
          risk: 'Low - unused index can be safely removed'
        });
      }
    });

    // Underutilized index recommendations
    analysis.underutilizedIndexes.forEach(index => {
      recommendations.push({
        type: 'REVIEW_INDEX',
        priority: 'MEDIUM',
        index: index.name,
        reason: `Low utilization score: ${index.utilizationScore}`,
        suggestion: 'Review query patterns to determine if index can be optimized or removed',
        risk: 'Medium - verify index necessity before removal'
      });
    });

    // High impact index recommendations
    analysis.highImpactIndexes.forEach(index => {
      recommendations.push({
        type: 'OPTIMIZE_INDEX',
        priority: 'HIGH',
        index: index.name,
        reason: `High impact index with score: ${index.impactScore}`,
        suggestion: 'Consider optimizing or creating covering index variants',
        risk: 'High - critical for query performance'
      });
    });

    return recommendations.sort((a, b) => {
      const priorityOrder = { 'HIGH': 3, 'MEDIUM': 2, 'LOW': 1 };
      return priorityOrder[b.priority] - priorityOrder[a.priority];
    });
  }
}

SQL-Style Index Management with QueryLeaf

QueryLeaf provides familiar SQL syntax for MongoDB index management and optimization:

-- QueryLeaf advanced indexing with SQL-familiar syntax

-- Create comprehensive compound indexes using ESR pattern optimization
CREATE INDEX idx_users_esr_optimized ON users (
  -- Equality fields first (highest selectivity)
  status,           -- Equality filter: active, premium, trial
  subscription_tier, -- Equality filter: basic, premium, enterprise

  -- Sort fields second (maintain sort order)
  created_at DESC,  -- Sort field for chronological ordering
  last_login_at DESC, -- Sort field for activity-based ordering

  -- Range fields last (lowest selectivity impact)  
  total_spent,      -- Range filter for value-based queries
  account_score     -- Range filter for scoring queries
)
WITH INDEX_OPTIONS (
  background = true,
  name = 'idx_users_comprehensive_esr',

  -- Partial filter for active users only (reduces index size by ~70%)
  partial_filter = {
    status: { $in: ['active', 'premium', 'trial'] },
    subscription_tier: { $ne: null },
    last_login_at: { $gte: DATE('2024-01-01') }
  },

  -- Optimization hints
  optimization_level = 'aggressive',
  estimated_selectivity = 0.15,
  expected_query_patterns = ['user_dashboard', 'admin_user_list', 'billing_reports']
);

-- Advanced compound index with covering capability
CREATE COVERING INDEX idx_orders_comprehensive ON orders (
  -- Key fields (used in WHERE and ORDER BY)
  user_id,          -- Join field for user lookups
  status,           -- Filter field: pending, completed, cancelled
  order_date DESC,  -- Sort field for chronological ordering

  -- Included fields (returned in SELECT without document lookup)  
  INCLUDE (
    total_amount,
    discount_amount,
    payment_method,
    shipping_address,
    product_categories,
    order_notes
  )
)
WITH INDEX_OPTIONS (
  background = true,
  name = 'idx_orders_user_status_covering',

  -- Partial filter for recent orders
  partial_filter = {
    order_date: { $gte: DATE_SUB(CURRENT_DATE, INTERVAL 2 YEAR) },
    status: { $in: ['pending', 'processing', 'completed', 'shipped'] }
  },

  covering_optimization = true,
  estimated_coverage = '85% of order queries',
  storage_overhead = 'moderate'
);

-- Specialized indexes for different query patterns
CREATE TEXT INDEX idx_products_search ON products (
  product_name,
  description,
  tags,
  category
)
WITH TEXT_OPTIONS (
  default_language = 'english',
  language_override = 'language_field',
  weights = {
    product_name: 10,
    description: 5,  
    tags: 8,
    category: 3
  },
  text_index_version = 3
);

-- Geospatial index for location-based queries
CREATE GEOSPATIAL INDEX idx_stores_location ON stores (
  location  -- GeoJSON Point field
)
WITH GEO_OPTIONS (
  index_version = '2dsphere_v3',
  coordinate_system = 'WGS84',
  sparse = true,
  background = true
);

-- TTL index for session management
CREATE TTL INDEX idx_sessions_expiry ON user_sessions (
  created_at
)
WITH TTL_OPTIONS (
  expire_after_seconds = 3600,  -- 1 hour
  background = true,
  sparse = true
);

-- Partial index for selective filtering (high-value customers only)
CREATE PARTIAL INDEX idx_users_premium ON users (
  email,
  last_login_at DESC,
  total_lifetime_value DESC
)
WHERE subscription_tier IN ('premium', 'enterprise') 
  AND total_lifetime_value > 1000
  AND status = 'active'
WITH INDEX_OPTIONS (
  background = true,
  estimated_size_reduction = '80%',
  target_queries = ['premium_customer_analysis', 'high_value_user_reports']
);

-- Multi-key index for array fields
CREATE MULTIKEY INDEX idx_orders_products ON orders (
  product_ids,      -- Array field
  order_date DESC,
  total_amount
)
WITH INDEX_OPTIONS (
  background = true,
  multikey_optimization = true,
  array_field_hints = ['product_ids']
);

-- Comprehensive index analysis and optimization query
WITH index_usage_analysis AS (
  SELECT 
    collection_name,
    index_name,
    index_key,
    index_size_mb,
    access_count,
    access_rate_per_hour,

    -- Index efficiency metrics
    ROUND((access_count::float / GREATEST(index_size_mb, 0.1))::numeric, 2) as efficiency_ratio,

    -- Usage categorization
    CASE 
      WHEN access_rate_per_hour > 100 THEN 'critical'
      WHEN access_rate_per_hour > 10 THEN 'high_usage'
      WHEN access_rate_per_hour > 1 THEN 'moderate_usage'
      WHEN access_rate_per_hour > 0.1 THEN 'low_usage'
      ELSE 'unused'
    END as usage_category,

    -- Performance impact assessment
    CASE
      WHEN access_rate_per_hour > 50 AND efficiency_ratio > 10 THEN 'high_impact'
      WHEN access_rate_per_hour > 10 AND efficiency_ratio > 5 THEN 'medium_impact'  
      WHEN access_count > 0 THEN 'low_impact'
      ELSE 'no_impact'
    END as performance_impact,

    -- Storage overhead analysis
    CASE
      WHEN index_size_mb > 1000 THEN 'very_large'
      WHEN index_size_mb > 100 THEN 'large'
      WHEN index_size_mb > 10 THEN 'medium'
      ELSE 'small'
    END as storage_overhead

  FROM index_statistics
  WHERE collection_name IN ('users', 'orders', 'products', 'sessions')
),

query_pattern_analysis AS (
  SELECT 
    collection_name,
    query_shape,
    query_frequency,
    avg_execution_time_ms,
    avg_docs_examined,
    avg_docs_returned,

    -- Query efficiency metrics
    avg_docs_returned::float / GREATEST(avg_docs_examined, 1) as query_efficiency,

    -- Performance classification
    CASE
      WHEN avg_execution_time_ms > 1000 THEN 'slow'
      WHEN avg_execution_time_ms > 100 THEN 'moderate'  
      ELSE 'fast'
    END as performance_category,

    -- Index usage effectiveness
    CASE
      WHEN index_hit_rate > 0.9 THEN 'excellent_index_usage'
      WHEN index_hit_rate > 0.7 THEN 'good_index_usage'
      WHEN index_hit_rate > 0.5 THEN 'fair_index_usage'
      ELSE 'poor_index_usage'
    END as index_effectiveness

  FROM query_performance_log
  WHERE execution_timestamp >= CURRENT_TIMESTAMP - INTERVAL '7 days'
    AND query_frequency >= 10  -- Filter low-frequency queries
),

index_optimization_recommendations AS (
  SELECT 
    iu.collection_name,
    iu.index_name,
    iu.usage_category,
    iu.performance_impact,
    iu.storage_overhead,
    iu.efficiency_ratio,

    -- Optimization recommendations based on usage patterns
    CASE 
      WHEN iu.usage_category = 'unused' AND iu.index_name != '_id_' THEN 
        'DROP - Index is unused and consuming storage'
      WHEN iu.usage_category = 'low_usage' AND iu.efficiency_ratio < 1 THEN
        'REVIEW - Low usage and poor efficiency, consider dropping'
      WHEN iu.performance_impact = 'high_impact' AND iu.storage_overhead = 'very_large' THEN
        'OPTIMIZE - Consider partial index or covering index alternative'  
      WHEN iu.usage_category = 'critical' AND qp.performance_category = 'slow' THEN
        'ENHANCE - Critical index supporting slow queries, needs optimization'
      WHEN iu.efficiency_ratio > 50 AND iu.performance_impact = 'high_impact' THEN
        'MAINTAIN - Well-performing index, continue monitoring'
      ELSE 'MONITOR - Acceptable performance, regular monitoring recommended'
    END as recommendation,

    -- Priority calculation
    CASE 
      WHEN iu.performance_impact = 'high_impact' AND qp.performance_category = 'slow' THEN 'CRITICAL'
      WHEN iu.usage_category = 'unused' AND iu.storage_overhead = 'very_large' THEN 'HIGH'
      WHEN iu.efficiency_ratio < 1 AND iu.storage_overhead IN ('large', 'very_large') THEN 'MEDIUM'
      ELSE 'LOW'
    END as priority,

    -- Estimated impact
    CASE
      WHEN iu.usage_category = 'unused' THEN 
        CONCAT('Storage savings: ', iu.index_size_mb, 'MB')
      WHEN iu.performance_impact = 'high_impact' THEN
        CONCAT('Query performance: ', ROUND(qp.avg_execution_time_ms * 0.3), 'ms reduction potential')
      ELSE 'Minimal impact expected'
    END as estimated_impact

  FROM index_usage_analysis iu
  LEFT JOIN query_pattern_analysis qp ON iu.collection_name = qp.collection_name
)

SELECT 
  collection_name,
  index_name,
  usage_category,
  performance_impact,
  recommendation,
  priority,
  estimated_impact,

  -- Action items
  CASE priority
    WHEN 'CRITICAL' THEN 'Immediate action required - review within 24 hours'
    WHEN 'HIGH' THEN 'Schedule optimization within 1 week'
    WHEN 'MEDIUM' THEN 'Include in next maintenance window'
    ELSE 'Monitor and review quarterly'
  END as action_timeline,

  -- Technical implementation guidance
  CASE 
    WHEN recommendation LIKE 'DROP%' THEN 
      CONCAT('Execute: DROP INDEX ', collection_name, '.', index_name)
    WHEN recommendation LIKE 'OPTIMIZE%' THEN
      'Analyze query patterns and create optimized compound index'
    WHEN recommendation LIKE 'ENHANCE%' THEN
      'Review index field order and consider covering index'
    ELSE 'Continue current monitoring procedures'
  END as implementation_guidance

FROM index_optimization_recommendations
WHERE priority IN ('CRITICAL', 'HIGH', 'MEDIUM')
ORDER BY 
  CASE priority WHEN 'CRITICAL' THEN 1 WHEN 'HIGH' THEN 2 WHEN 'MEDIUM' THEN 3 ELSE 4 END,
  collection_name,
  index_name;

-- Real-time index performance monitoring
CREATE MATERIALIZED VIEW index_performance_dashboard AS
WITH real_time_metrics AS (
  SELECT 
    collection_name,
    index_name,
    DATE_TRUNC('minute', access_timestamp) as minute_bucket,

    -- Real-time utilization metrics
    COUNT(*) as accesses_per_minute,
    AVG(query_execution_time_ms) as avg_query_time,
    SUM(docs_examined) as total_docs_examined,
    SUM(docs_returned) as total_docs_returned,

    -- Index efficiency in real-time
    SUM(docs_returned)::float / GREATEST(SUM(docs_examined), 1) as real_time_efficiency,

    -- Performance trends
    LAG(COUNT(*)) OVER (
      PARTITION BY collection_name, index_name 
      ORDER BY DATE_TRUNC('minute', access_timestamp)
    ) as prev_minute_accesses,

    LAG(AVG(query_execution_time_ms)) OVER (
      PARTITION BY collection_name, index_name
      ORDER BY DATE_TRUNC('minute', access_timestamp)  
    ) as prev_minute_avg_time

  FROM index_access_log
  WHERE access_timestamp >= CURRENT_TIMESTAMP - INTERVAL '1 hour'
  GROUP BY collection_name, index_name, DATE_TRUNC('minute', access_timestamp)
),

performance_alerts AS (
  SELECT 
    collection_name,
    index_name,
    minute_bucket,
    accesses_per_minute,
    avg_query_time,
    real_time_efficiency,

    -- Performance change indicators
    CASE 
      WHEN prev_minute_accesses IS NOT NULL THEN
        ((accesses_per_minute - prev_minute_accesses)::float / prev_minute_accesses * 100)
      ELSE 0
    END as access_rate_change_pct,

    CASE
      WHEN prev_minute_avg_time IS NOT NULL THEN
        ((avg_query_time - prev_minute_avg_time)::float / prev_minute_avg_time * 100) 
      ELSE 0
    END as latency_change_pct,

    -- Alert conditions
    CASE
      WHEN avg_query_time > 1000 THEN 'HIGH_LATENCY_ALERT'
      WHEN real_time_efficiency < 0.1 THEN 'LOW_EFFICIENCY_ALERT'
      WHEN accesses_per_minute > 1000 THEN 'HIGH_LOAD_ALERT'
      WHEN prev_minute_accesses IS NOT NULL AND 
           accesses_per_minute > prev_minute_accesses * 5 THEN 'LOAD_SPIKE_ALERT'
      ELSE 'NORMAL'
    END as alert_status,

    -- Optimization suggestions
    CASE
      WHEN avg_query_time > 1000 AND real_time_efficiency < 0.2 THEN 
        'Consider index redesign or query optimization'
      WHEN accesses_per_minute > 500 AND real_time_efficiency > 0.8 THEN
        'High-performing index under load - monitor for scaling needs'
      WHEN real_time_efficiency < 0.1 THEN
        'Poor selectivity - review partial index opportunities'
      ELSE 'Performance within acceptable parameters'
    END as optimization_suggestion

  FROM real_time_metrics
  WHERE minute_bucket >= CURRENT_TIMESTAMP - INTERVAL '15 minutes'
)

SELECT 
  collection_name,
  index_name,
  ROUND(AVG(accesses_per_minute)::numeric, 1) as avg_accesses_per_minute,
  ROUND(AVG(avg_query_time)::numeric, 2) as avg_latency_ms,
  ROUND(AVG(real_time_efficiency)::numeric, 3) as avg_efficiency,
  ROUND(AVG(access_rate_change_pct)::numeric, 1) as avg_load_change_pct,
  ROUND(AVG(latency_change_pct)::numeric, 1) as avg_latency_change_pct,

  -- Alert summary
  COUNT(*) FILTER (WHERE alert_status != 'NORMAL') as alert_count,
  STRING_AGG(DISTINCT alert_status, ', ') FILTER (WHERE alert_status != 'NORMAL') as active_alerts,
  MODE() WITHIN GROUP (ORDER BY optimization_suggestion) as primary_recommendation,

  -- Performance status
  CASE 
    WHEN COUNT(*) FILTER (WHERE alert_status LIKE '%HIGH%') > 0 THEN 'ATTENTION_REQUIRED'
    WHEN AVG(real_time_efficiency) > 0.7 AND AVG(avg_query_time) < 100 THEN 'OPTIMAL'
    WHEN AVG(real_time_efficiency) > 0.5 AND AVG(avg_query_time) < 250 THEN 'GOOD'  
    ELSE 'NEEDS_OPTIMIZATION'
  END as overall_status

FROM performance_alerts
GROUP BY collection_name, index_name
ORDER BY 
  CASE overall_status 
    WHEN 'ATTENTION_REQUIRED' THEN 1 
    WHEN 'NEEDS_OPTIMIZATION' THEN 2
    WHEN 'GOOD' THEN 3
    WHEN 'OPTIMAL' THEN 4
  END,
  avg_accesses_per_minute DESC;

-- QueryLeaf provides comprehensive indexing capabilities:
-- 1. SQL-familiar syntax for complex MongoDB index creation and management
-- 2. Advanced compound index design with ESR pattern optimization
-- 3. Partial and covering index support for storage and performance optimization
-- 4. Specialized index types: text, geospatial, TTL, sparse, and multikey indexes
-- 5. Real-time index performance monitoring and alerting
-- 6. Automated optimization recommendations based on usage patterns
-- 7. Production-ready index management with background creation and maintenance
-- 8. Comprehensive index analysis and resource utilization tracking
-- 9. Cross-collection optimization opportunities identification  
-- 10. Integration with MongoDB's native indexing capabilities and query optimizer

Best Practices for Production Index Management

Index Design Strategy

Essential principles for effective MongoDB index design and management:

  1. ESR Pattern Application: Design compound indexes following Equality, Sort, Range field ordering for optimal performance (see the sketch after this list)
  2. Selective Filtering: Use partial indexes for selective data filtering to reduce storage overhead and improve performance
  3. Covering Index Design: Create covering indexes for frequently accessed query patterns to eliminate document retrieval
  4. Index Consolidation: Minimize index count by designing compound indexes that support multiple query patterns
  5. Performance Monitoring: Implement comprehensive index utilization monitoring and automated optimization
  6. Maintenance Planning: Schedule regular index maintenance and optimization during low-traffic periods

Production Optimization Workflow

Optimize MongoDB indexes systematically for production environments:

  1. Usage Analysis: Analyze actual index usage patterns using the database profiler and index statistics (see the sketch after this list)
  2. Query Pattern Recognition: Identify common query patterns and optimize indexes for primary use cases
  3. Performance Validation: Validate index performance improvements with comprehensive testing
  4. Resource Management: Balance query performance with storage overhead and maintenance costs
  5. Continuous Monitoring: Implement ongoing performance monitoring and automated alert systems
  6. Iterative Optimization: Regularly review and refine indexing strategies based on evolving query patterns
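
For step 1, the usage data referenced above can be pulled straight from the server. The sketch below enables slow-operation profiling and reads per-index access counters; the database name, collection name, and thresholds are illustrative.

// Capture operations slower than 100 ms in system.profile and read per-index
// access counters via $indexStats for one collection.
const { MongoClient } = require('mongodb');

async function sampleIndexUsage() {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  try {
    const db = client.db('ecommerce');

    // Profiling level 1 records only operations slower than slowms.
    await db.command({ profile: 1, slowms: 100 });

    const indexStats = await db.collection('orders')
      .aggregate([{ $indexStats: {} }])
      .toArray();

    const recentSlowOps = await db.collection('system.profile')
      .find({ ns: 'ecommerce.orders', millis: { $gt: 100 } })
      .sort({ ts: -1 })
      .limit(20)
      .toArray();

    return { indexStats, recentSlowOps };
  } finally {
    await client.close();
  }
}

sampleIndexUsage().then(result => {
  console.log(`${result.indexStats.length} indexes sampled, ${result.recentSlowOps.length} slow ops found`);
}).catch(console.error);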

Conclusion

MongoDB's advanced indexing capabilities provide comprehensive tools for optimizing database performance through sophisticated compound indexes, partial filtering, covering indexes, and specialized index types. The flexible indexing architecture enables developers to design highly optimized indexes that support complex query patterns while minimizing storage overhead and maintenance costs.

Key MongoDB Advanced Indexing benefits include:

  • Comprehensive Index Types: Support for compound, partial, covering, text, geospatial, TTL, and sparse indexes
  • ESR Pattern Optimization: Systematic compound index design following proven optimization patterns
  • Performance Intelligence: Advanced index utilization analysis and automated optimization recommendations
  • Production-Ready Management: Sophisticated index creation, maintenance, and monitoring capabilities
  • Resource Optimization: Intelligent index design that balances performance with storage efficiency
  • Query Pattern Adaptation: Flexible indexing strategies that adapt to evolving application requirements

Whether you're optimizing existing applications, designing new database schemas, or implementing production indexing strategies, MongoDB's advanced indexing capabilities with QueryLeaf's familiar SQL interface provide the foundation for high-performance database operations.

QueryLeaf Integration: QueryLeaf automatically manages MongoDB indexing strategies while providing SQL-familiar syntax for index creation, analysis, and optimization. Advanced indexing patterns, performance monitoring capabilities, and production-ready index management are seamlessly handled through familiar SQL constructs, making sophisticated database optimization both powerful and accessible to SQL-oriented development teams.

The combination of MongoDB's flexible indexing architecture with SQL-style index management makes it an ideal platform for applications requiring both high-performance queries and familiar database optimization patterns, ensuring your applications achieve optimal performance while remaining maintainable and scalable as they grow.

MongoDB Replica Sets and High Availability: Advanced Disaster Recovery and Fault Tolerance Strategies for Mission-Critical Applications

Mission-critical applications require database infrastructure that can withstand hardware failures, network outages, and data center disasters while maintaining continuous availability and data consistency. Traditional database replication approaches often introduce complexity, performance overhead, and operational challenges that become increasingly problematic as application scale and reliability requirements grow.

MongoDB's replica set architecture provides sophisticated high availability and disaster recovery capabilities that eliminate single points of failure while maintaining strong data consistency and automatic failover functionality. Unlike traditional master-slave replication systems with manual failover processes, MongoDB replica sets offer self-healing infrastructure with intelligent election algorithms, configurable read preferences, and comprehensive disaster recovery features that ensure business continuity even during catastrophic failures.
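
For a sense of how little the application layer has to do, the sketch below connects to a three-member replica set with an explicit read preference and a majority write concern. The host names and replica set name are placeholders for your own topology.

// The driver discovers the current primary from the seed list and transparently
// re-routes writes after a failover; secondaryPreferred reads may be served by secondaries.
const { MongoClient } = require('mongodb');

const uri =
  'mongodb://db1.example.net:27017,db2.example.net:27017,db3.example.net:27017' +
  '/?replicaSet=rs0&readPreference=secondaryPreferred&w=majority&retryWrites=true';

async function main() {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    const orders = client.db('ecommerce').collection('orders');
    await orders.insertOne({ status: 'pending', createdAt: new Date() });        // routed to the primary
    const pending = await orders.find({ status: 'pending' }).limit(5).toArray(); // may hit a secondary
    console.log(`${pending.length} pending orders`);
  } finally {
    await client.close();
  }
}

main().catch(console.error);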

The Traditional Database Replication Challenge

Conventional database replication systems have significant limitations for high-availability requirements:

-- Traditional PostgreSQL streaming replication - manual failover and limited flexibility

-- Primary server configuration (postgresql.conf)
wal_level = replica
max_wal_senders = 3
wal_keep_segments = 64
archive_mode = on
archive_command = 'cp %p /var/lib/postgresql/wal_archive/%f'

-- Standby server configuration (recovery.conf)  
standby_mode = 'on'
primary_conninfo = 'host=primary-server port=5432 user=replicator'
restore_command = 'cp /var/lib/postgresql/wal_archive/%f %p'
trigger_file = '/tmp/postgresql.trigger.5432'

-- Manual failover process (complex and error-prone)
-- 1. Detect primary failure through monitoring
SELECT pg_is_in_recovery(); -- Check if server is in standby mode

-- 2. Promote standby to primary (manual intervention required)
-- Touch trigger file on standby server
-- $ touch /tmp/postgresql.trigger.5432

-- 3. Redirect application traffic (requires external load balancer configuration)
-- Update DNS/load balancer to point to new primary
-- Verify all applications can connect to new primary

-- 4. Reconfigure remaining servers (manual process)
-- Update primary_conninfo on other standby servers
-- Restart PostgreSQL services with new configuration

-- Complex query for checking replication lag
WITH replication_status AS (
  SELECT 
    client_addr,
    client_hostname,
    state,
    sent_lsn,
    write_lsn,
    flush_lsn,
    replay_lsn,
    write_lag,
    flush_lag,
    replay_lag,
    sync_priority,
    sync_state,

    -- Calculate replication delay in bytes
    pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) as replay_delay_bytes,

    -- Check if standby is healthy
    CASE 
      WHEN state = 'streaming' AND pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) < 16777216 THEN 'healthy'
      WHEN state = 'streaming' AND pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) < 134217728 THEN 'lagging'
      WHEN state = 'streaming' THEN 'severely_lagging'
      ELSE 'disconnected'
    END as health_status,

    -- Estimate recovery time if primary fails
    CASE 
      WHEN replay_lag IS NOT NULL THEN 
        EXTRACT(EPOCH FROM replay_lag)::int
      ELSE 
        GREATEST(
          EXTRACT(EPOCH FROM flush_lag)::int,
          pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) / 16777216 * 10
        )
    END as estimated_recovery_seconds

  FROM pg_stat_replication
  WHERE state IS NOT NULL
),

connection_health AS (
  SELECT 
    datname,
    usename,
    client_addr,
    state,
    query,
    state_change,

    -- Connection duration
    EXTRACT(EPOCH FROM (now() - backend_start))::int as connection_age_seconds,

    -- Query duration  
    CASE 
      WHEN state = 'active' THEN EXTRACT(EPOCH FROM (now() - query_start))::int
      ELSE 0
    END as active_query_duration_seconds,

    -- Identify potentially problematic connections
    CASE
      WHEN state = 'idle in transaction' AND (now() - state_change) > interval '5 minutes' THEN 'long_idle_transaction'
      WHEN state = 'active' AND (now() - query_start) > interval '10 minutes' THEN 'long_running_query'
      WHEN backend_type = 'walsender' THEN 'replication_connection'
      ELSE 'normal'
    END as connection_type

  FROM pg_stat_activity
  WHERE backend_type IN ('client backend', 'walsender')
    AND datname IS NOT NULL
)

-- Comprehensive replication monitoring query
SELECT 
  rs.client_addr as standby_server,
  rs.client_hostname as standby_hostname,
  rs.state as replication_state,
  rs.health_status,

  -- Lag information
  COALESCE(EXTRACT(EPOCH FROM rs.replay_lag)::int, 0) as replay_lag_seconds,
  ROUND(rs.replay_delay_bytes / 1048576.0, 2) as replay_delay_mb,
  rs.estimated_recovery_seconds,

  -- Sync configuration
  rs.sync_priority,
  rs.sync_state,

  -- Connection health
  ch.connection_age_seconds,
  ch.active_query_duration_seconds,

  -- Health assessment
  CASE 
    WHEN rs.health_status = 'healthy' AND rs.sync_state = 'sync' THEN 'excellent'
    WHEN rs.health_status = 'healthy' AND rs.sync_state = 'async' THEN 'good'
    WHEN rs.health_status = 'lagging' THEN 'warning'
    WHEN rs.health_status = 'severely_lagging' THEN 'critical'
    ELSE 'unknown'
  END as overall_health,

  -- Failover readiness
  CASE
    WHEN rs.health_status = 'healthy' AND rs.estimated_recovery_seconds < 30 THEN 'ready'
    WHEN rs.health_status IN ('healthy', 'lagging') AND rs.estimated_recovery_seconds < 120 THEN 'acceptable'
    ELSE 'not_ready'
  END as failover_readiness,

  -- Recommendations
  CASE
    WHEN rs.health_status = 'disconnected' THEN 'Check network connectivity and standby server status'
    WHEN rs.health_status = 'severely_lagging' THEN 'Investigate standby performance and network bandwidth'
    WHEN rs.replay_delay_bytes > 134217728 THEN 'Consider increasing wal_keep_segments or using replication slots'
    WHEN rs.sync_state != 'sync' AND rs.sync_priority > 0 THEN 'Review synchronous_standby_names configuration'
    ELSE 'Replication operating normally'
  END as recommendation

FROM replication_status rs
LEFT JOIN connection_health ch ON rs.client_addr = ch.client_addr 
                                AND ch.connection_type = 'replication_connection'
ORDER BY rs.sync_priority DESC, rs.replay_delay_bytes ASC;

-- Problems with traditional PostgreSQL replication:
-- 1. Manual failover process requiring human intervention and expertise
-- 2. Complex configuration management across multiple servers
-- 3. Limited built-in monitoring and health checking capabilities
-- 4. Potential for data loss during failover if not configured properly
-- 5. Application-level connection management complexity
-- 6. No automatic discovery of new primary after failover
-- 7. Split-brain scenarios possible without proper fencing mechanisms
-- 8. Limited geographic distribution capabilities for disaster recovery
-- 9. Difficulty in adding/removing replica servers without downtime
-- 10. Complex backup and point-in-time recovery coordination across replicas

-- Additional monitoring complexity
-- Check for replication slots to prevent WAL accumulation
SELECT 
  slot_name,
  plugin,
  slot_type,
  datoid,
  database,
  temporary,
  active,
  active_pid,
  xmin,
  catalog_xmin,
  restart_lsn,
  confirmed_flush_lsn,

  -- Calculate slot lag
  pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) as slot_lag_bytes,

  -- Check if slot is causing WAL retention
  CASE 
    WHEN active = false THEN 'inactive_slot'
    WHEN pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) > 1073741824 THEN 'excessive_lag'
    ELSE 'healthy'
  END as slot_status

FROM pg_replication_slots
ORDER BY slot_lag_bytes DESC;

-- MySQL replication (even more limited)
-- Master configuration
log-bin=mysql-bin
server-id=1
binlog-format=ROW
sync-binlog=1
innodb-flush-log-at-trx-commit=1

-- Slave configuration  
server-id=2
relay-log=mysql-relay
read-only=1

-- Basic replication status (limited information)
SHOW SLAVE STATUS\G

-- Manual failover process (basic and risky)
STOP SLAVE;
RESET SLAVE ALL;
-- Manually change master configuration

-- MySQL replication limitations:
-- - Even more manual failover process
-- - Limited monitoring and diagnostics
-- - Poor handling of network partitions
-- - Basic conflict resolution
-- - Limited geographic replication support
-- - Minimal built-in health checking
-- - Simple master-slave topology only
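
For comparison, here is a minimal sketch of a MongoDB client riding out a primary failure without operator involvement (hosts, database, and collection names are placeholder assumptions): the driver's server discovery, retryable writes, and retryable reads re-route operations to the newly elected primary automatically.

// Minimal failover-tolerant client sketch (hosts and names below are placeholders)
const { MongoClient } = require('mongodb');

async function writeThroughFailover() {
  const client = new MongoClient(
    'mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=rs0',
    {
      retryWrites: true,               // retry a failed write once after failover
      retryReads: true,                // same behavior for reads
      w: 'majority',                   // acknowledge on a majority of data-bearing members
      serverSelectionTimeoutMS: 10000  // allow up to 10s for a new primary election
    }
  );

  await client.connect();
  const orders = client.db('shop').collection('orders');

  // If the primary steps down between operations, the driver discovers the
  // new primary from the replica set topology and retries the write once.
  await orders.insertOne({ sku: 'A-100', qty: 2, createdAt: new Date() });

  await client.close();
}

writeThroughFailover().catch(console.error);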

MongoDB provides comprehensive high availability through replica sets:

// MongoDB Replica Sets - automatic failover with advanced high availability features
const { MongoClient } = require('mongodb');

// Advanced MongoDB Replica Set Management and High Availability System
class MongoReplicaSetManager {
  constructor(connectionString) {
    this.connectionString = connectionString;
    this.client = null;
    this.db = null;

    // High availability configuration
    this.replicaSetConfig = {
      members: [],
      settings: {
        chainingAllowed: true,
        heartbeatIntervalMillis: 2000,
        heartbeatTimeoutSecs: 10,
        electionTimeoutMillis: 10000,
        catchUpTimeoutMillis: 60000,
        getLastErrorModes: {},
        getLastErrorDefaults: { w: 1, wtimeout: 0 }
      }
    };

    this.healthMetrics = new Map();
    this.failoverHistory = [];
    this.performanceTargets = {
      maxReplicationLagSeconds: 10,
      maxElectionTimeSeconds: 30,
      minHealthyMembers: 2
    };
  }

  async initializeReplicaSet(members, options = {}) {
    console.log('Initializing MongoDB replica set with advanced high availability...');

    const {
      replicaSetName = 'rs0',
      priority = { primary: 1, secondary: 0.5, arbiter: 0 },
      tags = {},
      writeConcern = { w: 'majority', j: true },
      readPreference = 'primaryPreferred'
    } = options;

    try {
      // Connect to the primary candidate
      this.client = new MongoClient(this.connectionString, {
        // (useNewUrlParser/useUnifiedTopology are legacy options no longer needed in current drivers)
        replicaSet: replicaSetName,
        readPreference: readPreference,
        writeConcern: writeConcern,
        maxPoolSize: 10,
        serverSelectionTimeoutMS: 5000,
        socketTimeoutMS: 45000,
        heartbeatFrequencyMS: 10000,
        retryWrites: true,
        retryReads: true
      });

      await this.client.connect();
      this.db = this.client.db('admin');

      // Build replica set configuration
      const replicaSetConfig = {
        _id: replicaSetName,
        version: 1,
        members: members.map((member, index) => ({
          _id: index,
          host: member.host,
          priority: member.priority || priority[member.type] || 1,
          votes: member.votes !== undefined ? member.votes : 1, // allow per-member override; default 1
          arbiterOnly: member.type === 'arbiter',
          buildIndexes: member.type !== 'arbiter',
          hidden: member.hidden || false,
          secondaryDelaySecs: member.secondaryDelaySecs || member.slaveDelay || 0, // renamed from slaveDelay in MongoDB 5.0
          tags: { ...tags[member.type], region: member.region, datacenter: member.datacenter }
        })),
        settings: {
          chainingAllowed: true,
          heartbeatIntervalMillis: 2000,
          heartbeatTimeoutSecs: 10,
          electionTimeoutMillis: 10000,
          catchUpTimeoutMillis: 60000,

          // Advanced write concern configurations
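          // Custom modes count distinct values of a member tag: { datacenter: 2 } requires
          // acknowledgment from members in two different datacenters. (On MongoDB 5.0+ the
          // cluster-wide default write concern is set with setDefaultRWConcern rather than
          // getLastErrorDefaults.)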
          getLastErrorModes: {
            multiDataCenter: { datacenter: 2 },
            multiRegion: { region: 2 }
          },
          getLastErrorDefaults: { 
            w: 'majority', 
            j: true,
            wtimeout: 10000 
          }
        }
      };

      // Initialize replica set
      const initResult = await this.db.runCommand({
        replSetInitiate: replicaSetConfig
      });

      if (initResult.ok === 1) {
        console.log('Replica set initialized successfully');

        // Wait for primary election
        await this.waitForPrimaryElection();

        // Perform initial health check
        const healthStatus = await this.performComprehensiveHealthCheck();

        // Setup monitoring
        await this.setupAdvancedMonitoring();

        console.log('Replica set ready for high availability operations');
        return {
          success: true,
          replicaSetName: replicaSetName,
          members: members,
          healthStatus: healthStatus
        };
      } else {
        throw new Error(`Replica set initialization failed: ${initResult.errmsg}`);
      }

    } catch (error) {
      console.error('Replica set initialization error:', error);
      return {
        success: false,
        error: error.message
      };
    }
  }

  async performComprehensiveHealthCheck() {
    console.log('Performing comprehensive replica set health assessment...');

    const healthReport = {
      timestamp: new Date(),
      replicaSetStatus: null,
      memberHealth: [],
      replicationLag: {},
      electionMetrics: {},
      networkConnectivity: {},
      performanceMetrics: {},
      alerts: [],
      recommendations: []
    };

    try {
      // Get replica set status
      const rsStatus = await this.db.runCommand({ replSetGetStatus: 1 });
      healthReport.replicaSetStatus = {
        name: rsStatus.set,
        primary: rsStatus.members.find(m => m.state === 1)?.name,
        memberCount: rsStatus.members.length,
        healthyMembers: rsStatus.members.filter(m => [1, 2, 7].includes(m.state)).length,
        state: rsStatus.myState
      };

      // Analyze each member
      for (const member of rsStatus.members) {
        const memberHealth = {
          _id: member._id,
          name: member.name,
          state: member.state,
          stateStr: member.stateStr,
          health: member.health,
          uptime: member.uptime,
          lastHeartbeat: member.lastHeartbeat,
          lastHeartbeatRecv: member.lastHeartbeatRecv,
          pingMs: member.pingMs,
          syncSourceHost: member.syncSourceHost || member.syncingTo, // field renamed from syncingTo in newer server versions

          // Calculate replication lag
          replicationLag: member.optimeDate && rsStatus.date ? 
            (rsStatus.date - member.optimeDate) / 1000 : null,

          // Member status assessment
          status: this.assessMemberStatus(member),

          // Performance metrics
          performanceMetrics: {
            heartbeatLatency: member.pingMs,
            connectionHealth: member.health === 1 ? 'healthy' : 'unhealthy',
            stateStability: this.assessStateStability(member)
          }
        };

        healthReport.memberHealth.push(memberHealth);

        // Track replication lag
        if (memberHealth.replicationLag !== null) {
          healthReport.replicationLag[member.name] = memberHealth.replicationLag;
        }
      }

      // Analyze election metrics
      healthReport.electionMetrics = await this.analyzeElectionMetrics(rsStatus);

      // Check network connectivity
      healthReport.networkConnectivity = await this.checkNetworkConnectivity(rsStatus.members);

      // Generate alerts based on thresholds
      healthReport.alerts = this.generateHealthAlerts(healthReport);

      // Generate recommendations
      healthReport.recommendations = this.generateHealthRecommendations(healthReport);

      console.log(`Health check completed: ${healthReport.memberHealth.length} members analyzed`);
      console.log(`Healthy members: ${healthReport.replicaSetStatus.healthyMembers}/${healthReport.replicaSetStatus.memberCount}`);
      console.log(`Alerts generated: ${healthReport.alerts.length}`);

      return healthReport;

    } catch (error) {
      console.error('Health check failed:', error);
      healthReport.error = error.message;
      return healthReport;
    }
  }

  assessMemberStatus(member) {
    const status = {
      overall: 'unknown',
      issues: [],
      strengths: []
    };

    // State-based assessment
    switch (member.state) {
      case 1: // PRIMARY
        status.overall = 'primary';
        status.strengths.push('Acting as primary, accepting writes');
        break;
      case 2: // SECONDARY
        status.overall = 'healthy';
        status.strengths.push('Healthy secondary, replicating data');
        if (member.optimeDate && Date.now() - member.optimeDate > 30000) {
          status.issues.push('Replication lag exceeds 30 seconds');
          status.overall = 'lagging';
        }
        break;
      case 3: // RECOVERING
        status.overall = 'recovering';
        status.issues.push('Member is in recovery state');
        break;
      case 5: // STARTUP2
        status.overall = 'starting';
        status.issues.push('Member is in startup phase');
        break;
      case 6: // UNKNOWN
        status.overall = 'unknown';
        status.issues.push('Member state is unknown');
        break;
      case 7: // ARBITER
        status.overall = 'arbiter';
        status.strengths.push('Functioning arbiter for elections');
        break;
      case 8: // DOWN
        status.overall = 'down';
        status.issues.push('Member is down or unreachable');
        break;
      case 9: // ROLLBACK
        status.overall = 'rollback';
        status.issues.push('Member is performing rollback');
        break;
      case 10: // REMOVED
        status.overall = 'removed';
        status.issues.push('Member has been removed from replica set');
        break;
      default:
        status.overall = 'unknown';
        status.issues.push(`Unexpected state: ${member.state}`);
    }

    // Health-based assessment
    if (member.health !== 1) {
      status.issues.push('Member health check failing');
      if (status.overall === 'healthy') {
        status.overall = 'unhealthy';
      }
    }

    // Network latency assessment
    if (member.pingMs && member.pingMs > 100) {
      status.issues.push(`High network latency: ${member.pingMs}ms`);
    } else if (member.pingMs && member.pingMs < 10) {
      status.strengths.push(`Low network latency: ${member.pingMs}ms`);
    }

    return status;
  }

  async implementAutomaticFailoverTesting() {
    console.log('Implementing automatic failover testing and validation...');

    const failoverTest = {
      testId: require('crypto').randomUUID(),
      timestamp: new Date(),
      phases: [],
      results: {
        success: false,
        totalTimeMs: 0,
        electionTimeMs: 0,
        dataConsistencyVerified: false,
        applicationConnectivityRestored: false
      }
    };

    try {
      // Phase 1: Pre-failover health check
      console.log('Phase 1: Pre-failover health assessment...');
      const preFailoverHealth = await this.performComprehensiveHealthCheck();
      failoverTest.phases.push({
        phase: 'pre_failover_health',
        timestamp: new Date(),
        status: 'completed',
        data: preFailoverHealth
      });

      if (preFailoverHealth.replicaSetStatus.healthyMembers < this.performanceTargets.minHealthyMembers + 1) {
        throw new Error('Insufficient healthy members for safe failover testing');
      }

      // Phase 2: Insert test data for consistency verification
      console.log('Phase 2: Inserting test data for consistency verification...');
      const testCollection = this.client.db('failover_test').collection('consistency_check');
      const testDocuments = Array.from({ length: 100 }, (_, i) => ({
        _id: `failover_test_${failoverTest.testId}_${i}`,
        timestamp: new Date(),
        sequenceNumber: i,
        testData: `Failover test data ${i}`,
        checksum: require('crypto').createHash('md5').update(`test_${i}`).digest('hex')
      }));

      await testCollection.insertMany(testDocuments, { writeConcern: { w: 'majority', j: true } });
      failoverTest.phases.push({
        phase: 'test_data_insertion',
        timestamp: new Date(),
        status: 'completed',
        data: { documentsInserted: testDocuments.length }
      });

      // Phase 3: Simulate primary failure (step down primary)
      console.log('Phase 3: Simulating primary failure...');
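      // replSetStepDown: 60 keeps the stepped-down primary ineligible for re-election for
      // 60 seconds; force: true skips waiting for a caught-up, electable secondary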
      const startTime = Date.now();

      await this.db.runCommand({ replSetStepDown: 60, force: true });

      failoverTest.phases.push({
        phase: 'primary_step_down',
        timestamp: new Date(),
        status: 'completed',
        data: { stepDownInitiated: true }
      });

      // Phase 4: Wait for new primary election
      console.log('Phase 4: Waiting for new primary election...');
      const electionStartTime = Date.now();

      const newPrimary = await this.waitForPrimaryElection(30000); // 30 second timeout
      const electionEndTime = Date.now();

      failoverTest.results.electionTimeMs = electionEndTime - electionStartTime;

      failoverTest.phases.push({
        phase: 'primary_election',
        timestamp: new Date(),
        status: 'completed',
        data: { 
          newPrimary: newPrimary,
          electionTimeMs: failoverTest.results.electionTimeMs
        }
      });

      // Phase 5: Verify data consistency
      console.log('Phase 5: Verifying data consistency...');

      // Reconnect to new primary
      await this.client.close();
      this.client = new MongoClient(this.connectionString, {
        // legacy parser/topology options omitted; not needed in current drivers
        readPreference: 'primary'
      });
      await this.client.connect();

      const verificationCollection = this.client.db('failover_test').collection('consistency_check');
      const retrievedDocs = await verificationCollection.find({
        _id: { $regex: `^failover_test_${failoverTest.testId}_` }
      }).toArray();

      const consistencyCheck = {
        expectedCount: testDocuments.length,
        retrievedCount: retrievedDocs.length,
        dataIntegrityVerified: true,
        checksumMatches: 0
      };

      // Verify checksums
      for (const doc of retrievedDocs) {
        const expectedChecksum = require('crypto').createHash('md5')
          .update(`test_${doc.sequenceNumber}`).digest('hex');
        if (doc.checksum === expectedChecksum) {
          consistencyCheck.checksumMatches++;
        }
      }

      consistencyCheck.dataIntegrityVerified = 
        consistencyCheck.expectedCount === consistencyCheck.retrievedCount &&
        consistencyCheck.checksumMatches === consistencyCheck.expectedCount;

      failoverTest.results.dataConsistencyVerified = consistencyCheck.dataIntegrityVerified;

      failoverTest.phases.push({
        phase: 'data_consistency_verification',
        timestamp: new Date(),
        status: 'completed',
        data: consistencyCheck
      });

      // Phase 6: Test application connectivity
      console.log('Phase 6: Testing application connectivity...');

      try {
        // Simulate application operations
        await verificationCollection.insertOne({
          _id: `post_failover_${failoverTest.testId}`,
          timestamp: new Date(),
          message: 'Post-failover connectivity test'
        }, { writeConcern: { w: 'majority' } });

        const postFailoverDoc = await verificationCollection.findOne({
          _id: `post_failover_${failoverTest.testId}`
        });

        failoverTest.results.applicationConnectivityRestored = postFailoverDoc !== null;

      } catch (error) {
        console.error('Application connectivity test failed:', error);
        failoverTest.results.applicationConnectivityRestored = false;
      }

      failoverTest.phases.push({
        phase: 'application_connectivity_test',
        timestamp: new Date(),
        status: failoverTest.results.applicationConnectivityRestored ? 'completed' : 'failed',
        data: { connectivityRestored: failoverTest.results.applicationConnectivityRestored }
      });

      // Phase 7: Post-failover health check
      console.log('Phase 7: Post-failover health assessment...');
      const postFailoverHealth = await this.performComprehensiveHealthCheck();
      failoverTest.phases.push({
        phase: 'post_failover_health',
        timestamp: new Date(),
        status: 'completed',
        data: postFailoverHealth
      });

      // Calculate total test time
      failoverTest.results.totalTimeMs = Date.now() - startTime;

      // Determine overall success
      failoverTest.results.success = 
        failoverTest.results.electionTimeMs <= (this.performanceTargets.maxElectionTimeSeconds * 1000) &&
        failoverTest.results.dataConsistencyVerified &&
        failoverTest.results.applicationConnectivityRestored &&
        postFailoverHealth.replicaSetStatus.healthyMembers >= this.performanceTargets.minHealthyMembers;

      // Cleanup test data
      await verificationCollection.deleteMany({
        _id: { $regex: `^(failover_test_${failoverTest.testId}_|post_failover_${failoverTest.testId})` }
      });

      console.log(`Failover test completed: ${failoverTest.results.success ? 'SUCCESS' : 'PARTIAL_SUCCESS'}`);
      console.log(`Total failover time: ${failoverTest.results.totalTimeMs}ms`);
      console.log(`Election time: ${failoverTest.results.electionTimeMs}ms`);
      console.log(`Data consistency: ${failoverTest.results.dataConsistencyVerified ? 'VERIFIED' : 'FAILED'}`);
      console.log(`Application connectivity: ${failoverTest.results.applicationConnectivityRestored ? 'RESTORED' : 'FAILED'}`);

      // Record failover test in history
      this.failoverHistory.push(failoverTest);

      return failoverTest;

    } catch (error) {
      console.error('Failover test failed:', error);
      failoverTest.phases.push({
        phase: 'error',
        timestamp: new Date(),
        status: 'failed',
        error: error.message
      });
      failoverTest.results.success = false;
      return failoverTest;
    }
  }

  async setupAdvancedReadPreferences(applications) {
    console.log('Setting up advanced read preferences for optimal performance...');

    const readPreferenceConfigurations = {
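      // NOTE: drivers enforce a minimum maxStalenessSeconds of 90 when the option is set;
      // the smaller values below express the intended freshness target rather than a literal setting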
      // Real-time dashboard - prefer primary for latest data
      realtime_dashboard: {
        readPreference: 'primary',
        maxStalenessSeconds: 0,
        tags: [],
        description: 'Real-time data requires primary reads',
        useCase: 'Live dashboards, real-time analytics'
      },

      // Reporting queries - can use secondaries with some lag tolerance
      reporting_analytics: {
        readPreference: 'secondaryPreferred',
        maxStalenessSeconds: 30,
        tags: [{ region: 'us-east', workload: 'analytics' }],
        description: 'Analytics workload can tolerate slight lag',
        useCase: 'Business intelligence, historical reports'
      },

      // Geographically distributed reads
      geographic_reads: {
        readPreference: 'nearest',
        maxStalenessSeconds: 60,
        tags: [],
        description: 'Prioritize network proximity for user-facing reads',
        useCase: 'User-facing applications, content delivery'
      },

      // Heavy analytical workloads
      heavy_analytics: {
        readPreference: 'secondary',
        maxStalenessSeconds: 120,
        tags: [{ workload: 'analytics', ssd: 'true' }],
        description: 'Dedicated secondary for heavy analytical queries',
        useCase: 'Data mining, complex aggregations, ML training'
      },

      // Backup and archival operations
      backup_operations: {
        readPreference: 'secondary',
        maxStalenessSeconds: 300,
        tags: [{ backup: 'true', priority: 'low' }],
        description: 'Use dedicated backup secondary',
        useCase: 'Backup operations, data archival, compliance exports'
      }
    };

    const clientConfigurations = {};

    for (const [appName, app] of Object.entries(applications)) {
      const config = readPreferenceConfigurations[app.readPattern] || readPreferenceConfigurations.geographic_reads;

      console.log(`Configuring read preferences for ${appName}:`);
      console.log(`  Pattern: ${app.readPattern}`);
      console.log(`  Read Preference: ${config.readPreference}`);
      console.log(`  Max Staleness: ${config.maxStalenessSeconds}s`);

      clientConfigurations[appName] = {
        connectionString: this.buildConnectionString(config),
        readPreference: config.readPreference,
        readPreferenceTags: config.tags,
        maxStalenessSeconds: config.maxStalenessSeconds,

        // Additional client options for optimization
        options: {
          maxPoolSize: app.connectionPoolSize || 10,
          minPoolSize: app.minConnectionPoolSize || 2,
          maxIdleTimeMS: 30000,
          serverSelectionTimeoutMS: 5000,
          socketTimeoutMS: 45000,
          connectTimeoutMS: 10000,

          // Retry configuration
          retryWrites: true,
          retryReads: true,

          // Write concern based on application requirements
          writeConcern: app.writeConcern || { w: 'majority', j: true },

          // Read concern for consistency requirements
          readConcern: { level: app.readConcern || 'majority' }
        },

        // Monitoring configuration
        monitoring: {
          commandMonitoring: true,
          serverMonitoring: true,
          topologyMonitoring: true
        },

        description: config.description,
        useCase: config.useCase,
        optimizationTips: this.generateReadOptimizationTips(config, app)
      };
    }

    // Setup monitoring for read preference effectiveness
    await this.setupReadPreferenceMonitoring(clientConfigurations);

    console.log(`Read preference configurations created for ${Object.keys(clientConfigurations).length} applications`);

    return clientConfigurations;
  }

  async implementDisasterRecoveryProcedures(options = {}) {
    console.log('Implementing comprehensive disaster recovery procedures...');

    const {
      backupSchedule = 'daily',
      retentionPolicy = { daily: 7, weekly: 4, monthly: 6 },
      geographicDistribution = true,
      automaticFailback = false,
      rtoTarget = 300, // Recovery Time Objective in seconds
      rpoTarget = 60   // Recovery Point Objective in seconds
    } = options;

    const disasterRecoveryPlan = {
      backupStrategy: await this.implementBackupStrategy(backupSchedule, retentionPolicy),
      failoverProcedures: await this.implementFailoverProcedures(rtoTarget),
      recoveryValidation: await this.implementRecoveryValidation(),
      monitoringAndAlerting: await this.setupDisasterRecoveryMonitoring(),
      documentationAndRunbooks: await this.generateDisasterRecoveryRunbooks(),
      testingSchedule: await this.createDisasterRecoveryTestSchedule()
    };

    // Geographic distribution setup
    if (geographicDistribution) {
      disasterRecoveryPlan.geographicDistribution = await this.setupGeographicDistribution();
    }

    // Automatic failback configuration
    if (automaticFailback) {
      disasterRecoveryPlan.automaticFailback = await this.configureAutomaticFailback();
    }

    console.log('Disaster recovery procedures implemented successfully');
    return disasterRecoveryPlan;
  }

  async implementBackupStrategy(schedule, retentionPolicy) {
    console.log('Implementing comprehensive backup strategy...');

    const backupStrategy = {
      hotBackups: {
        enabled: true,
        schedule: schedule,
        method: 'mongodump_with_oplog',
        compression: true,
        encryption: true,
        storageLocation: ['local', 's3', 'gcs'],
        retentionPolicy: retentionPolicy
      },

      continuousBackup: {
        enabled: true,
        oplogTailing: true,
        changeStreams: true,
        pointInTimeRecovery: true,
        maxRecoveryWindow: '7 days'
      },

      consistencyChecks: {
        enabled: true,
        frequency: 'daily',
        validationMethods: ['checksum', 'document_count', 'index_integrity']
      },

      crossRegionReplication: {
        enabled: true,
        regions: ['us-east-1', 'us-west-2', 'eu-west-1'],
        replicationLag: '< 60 seconds'
      }
    };

    // Implement backup automation
    const backupJobs = await this.createAutomatedBackupJobs(backupStrategy);

    return {
      ...backupStrategy,
      automationJobs: backupJobs,
      estimatedRPO: this.calculateEstimatedRPO(backupStrategy),
      storageRequirements: this.calculateStorageRequirements(backupStrategy)
    };
  }

  async waitForPrimaryElection(timeoutMs = 30000) {
    console.log('Waiting for primary election...');

    const startTime = Date.now();
    const pollInterval = 1000; // Check every second

    while (Date.now() - startTime < timeoutMs) {
      try {
        const status = await this.db.runCommand({ replSetGetStatus: 1 });
        const primary = status.members.find(member => member.state === 1);

        if (primary) {
          console.log(`Primary elected: ${primary.name}`);
          return primary.name;
        }

        await new Promise(resolve => setTimeout(resolve, pollInterval));
      } catch (error) {
        // Connection might be lost during election, continue polling
        await new Promise(resolve => setTimeout(resolve, pollInterval));
      }
    }

    throw new Error(`Primary election timeout after ${timeoutMs}ms`);
  }

  generateHealthAlerts(healthReport) {
    const alerts = [];

    // Check for unhealthy members
    const unhealthyMembers = healthReport.memberHealth.filter(m => 
      ['unhealthy', 'down', 'unknown'].includes(m.status.overall)
    );

    if (unhealthyMembers.length > 0) {
      alerts.push({
        severity: 'HIGH',
        type: 'UNHEALTHY_MEMBERS',
        message: `${unhealthyMembers.length} replica set members are unhealthy`,
        members: unhealthyMembers.map(m => m.name),
        impact: 'Reduced fault tolerance and potential for data inconsistency'
      });
    }

    // Check replication lag
    const laggedMembers = Object.entries(healthReport.replicationLag)
      .filter(([, lag]) => lag > this.performanceTargets.maxReplicationLagSeconds);

    if (laggedMembers.length > 0) {
      alerts.push({
        severity: 'MEDIUM',
        type: 'REPLICATION_LAG',
        message: `${laggedMembers.length} members have excessive replication lag`,
        details: Object.fromEntries(laggedMembers),
        impact: 'Potential data loss during failover'
      });
    }

    // Check minimum healthy members threshold
    if (healthReport.replicaSetStatus.healthyMembers < this.performanceTargets.minHealthyMembers) {
      alerts.push({
        severity: 'CRITICAL',
        type: 'INSUFFICIENT_HEALTHY_MEMBERS',
        message: `Only ${healthReport.replicaSetStatus.healthyMembers} healthy members (minimum: ${this.performanceTargets.minHealthyMembers})`,
        impact: 'Risk of complete service outage if another member fails'
      });
    }

    return alerts;
  }

  generateHealthRecommendations(healthReport) {
    const recommendations = [];

    // Analyze member distribution
    const membersByState = healthReport.memberHealth.reduce((acc, member) => {
      acc[member.stateStr] = (acc[member.stateStr] || 0) + 1;
      return acc;
    }, {});

    if ((membersByState.SECONDARY || 0) < 2) {
      recommendations.push({
        priority: 'HIGH',
        category: 'REDUNDANCY',
        recommendation: 'Add additional secondary members for better fault tolerance',
        reasoning: 'Minimum of 2 secondary members recommended for high availability',
        implementation: 'Use rs.add() to add new replica set members'
      });
    }

    // Check for arbiter usage
    if (membersByState.ARBITER > 0) {
      recommendations.push({
        priority: 'MEDIUM',
        category: 'ARCHITECTURE',
        recommendation: 'Consider replacing arbiters with data-bearing members',
        reasoning: 'Data-bearing members provide better fault tolerance than arbiters',
        implementation: 'Add data-bearing member and remove arbiter when safe'
      });
    }

    // Check geographic distribution
    const regions = new Set(healthReport.memberHealth
      .map(m => m.tags?.region)
      .filter(r => r)
    );

    if (regions.size < 2) {
      recommendations.push({
        priority: 'MEDIUM',
        category: 'DISASTER_RECOVERY',
        recommendation: 'Implement geographic distribution of replica set members',
        reasoning: 'Multi-region deployment protects against datacenter-level failures',
        implementation: 'Deploy members across multiple availability zones or regions'
      });
    }

    return recommendations;
  }

  buildConnectionString(config) {
    // Build MongoDB connection string with read preference options
    const params = new URLSearchParams();

    params.append('readPreference', config.readPreference);

    if (config.maxStalenessSeconds > 0) {
      params.append('maxStalenessSeconds', config.maxStalenessSeconds.toString());
    }

    if (config.tags && config.tags.length > 0) {
      // The URI option is a repeatable, comma-separated list of key:value pairs,
      // e.g. readPreferenceTags=region:us-east,workload:analytics
      config.tags.forEach(tag => {
        const tagString = Object.entries(tag)
          .map(([key, value]) => `${key}:${value}`)
          .join(',');
        params.append('readPreferenceTags', tagString);
      });
    }

    return `${this.connectionString}?${params.toString()}`;
  }

  generateReadOptimizationTips(config, app) {
    const tips = [];

    if (config.readPreference === 'secondary' || config.readPreference === 'secondaryPreferred') {
      tips.push('Consider using connection pooling to maintain connections to multiple secondaries');
      tips.push('Monitor secondary lag to ensure data freshness meets application requirements');
    }

    if (config.maxStalenessSeconds > 60) {
      tips.push('Verify that application logic can handle potentially stale data');
      tips.push('Implement application-level caching for frequently accessed but slow-changing data');
    }

    if (app.queryTypes && app.queryTypes.includes('aggregation')) {
      tips.push('Heavy aggregation workloads benefit from dedicated secondary members with optimized hardware');
      tips.push('Consider using $merge or $out stages to pre-compute results on secondaries');
    }

    return tips;
  }

  async createAutomatedBackupJobs(backupStrategy) {
    // Implementation would create actual backup automation
    // This is a simplified representation
    return {
      dailyHotBackup: {
        schedule: '0 2 * * *', // 2 AM daily
        retention: backupStrategy.hotBackups.retentionPolicy.daily,
        enabled: true
      },
      continuousOplogBackup: {
        enabled: backupStrategy.continuousBackup.enabled,
        method: 'changeStreams'
      },
      weeklyFullBackup: {
        schedule: '0 1 * * 0', // 1 AM Sunday
        retention: backupStrategy.hotBackups.retentionPolicy.weekly,
        enabled: true
      }
    };
  }

  calculateEstimatedRPO(backupStrategy) {
    if (backupStrategy.continuousBackup.enabled) {
      return '< 1 minute'; // With oplog tailing
    } else {
      return '24 hours'; // With daily backups only
    }
  }

  calculateStorageRequirements(backupStrategy) {
    // Simplified storage calculation
    return {
      daily: 'Database size × compression ratio × daily retention',
      weekly: 'Database size × compression ratio × weekly retention', 
      monthly: 'Database size × compression ratio × monthly retention',
      estimated: 'Contact administrator for detailed storage analysis'
    };
  }

  async close() {
    if (this.client) {
      await this.client.close();
    }
  }
}

// Benefits of MongoDB Replica Sets:
// - Automatic failover with intelligent primary election algorithms
// - Strong consistency with configurable write and read concerns
// - Geographic distribution support for disaster recovery
// - Built-in health monitoring and self-healing capabilities
// - Flexible read preference configuration for performance optimization
// - Comprehensive backup and point-in-time recovery options
// - Zero-downtime member addition and removal
// - Advanced replication monitoring and alerting
// - Split-brain prevention through majority-based decisions
// - SQL-compatible high availability management through QueryLeaf integration

module.exports = {
  MongoReplicaSetManager
};
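
A brief, hypothetical usage sketch of the manager above (host names, the module path, and member layout are assumptions) showing initialization followed by a health check:

// Hypothetical driver script for MongoReplicaSetManager (hosts and paths are placeholders)
const { MongoReplicaSetManager } = require('./mongo-replica-set-manager');

async function run() {
  const manager = new MongoReplicaSetManager(
    'mongodb://rs-node-1:27017,rs-node-2:27017,rs-node-3:27017'
  );

  // One primary candidate, one secondary, one arbiter in a single region
  const initResult = await manager.initializeReplicaSet([
    { host: 'rs-node-1:27017', type: 'primary',   region: 'us-east', datacenter: 'dc1' },
    { host: 'rs-node-2:27017', type: 'secondary', region: 'us-east', datacenter: 'dc2' },
    { host: 'rs-node-3:27017', type: 'arbiter',   region: 'us-east', datacenter: 'dc3' }
  ], { replicaSetName: 'rs0' });

  if (initResult.success) {
    const health = await manager.performComprehensiveHealthCheck();
    console.log('Healthy members:', health.replicaSetStatus.healthyMembers);
    console.log('Alerts:', health.alerts.map(a => a.type));
    console.log('Recommendations:', health.recommendations.map(r => r.recommendation));
  }

  await manager.close();
}

run().catch(console.error);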

Understanding MongoDB Replica Set Architecture

Advanced High Availability Patterns and Strategies

Implement sophisticated replica set configurations for production environments:

// Advanced replica set patterns for enterprise deployments
class EnterpriseReplicaSetManager extends MongoReplicaSetManager {
  constructor(connectionString, enterpriseConfig) {
    super(connectionString);

    this.enterpriseConfig = {
      multiRegionDeployment: true,
      dedicatedAnalyticsNodes: true,
      priorityBasedElections: true,
      customWriteConcerns: true,
      advancedMonitoring: true,
      ...enterpriseConfig
    };

    this.deploymentTopology = new Map();
    this.performanceOptimizations = new Map();
  }

  async deployGeographicallyDistributedReplicaSet(regions) {
    console.log('Deploying geographically distributed replica set...');

    const topology = {
      regions: regions,
      memberDistribution: this.calculateOptimalMemberDistribution(regions),
      networkLatencyMatrix: await this.measureInterRegionLatency(regions),
      failoverStrategy: this.designFailoverStrategy(regions)
    };

    // Configure members with geographic awareness
    const members = [];
    let memberIndex = 0;

    for (const region of regions) {
      const regionConfig = topology.memberDistribution[region.name];

      for (let i = 0; i < regionConfig.dataMembers; i++) {
        members.push({
          _id: memberIndex++,
          host: `${region.name}-data-${i}.${region.domain}:27017`,
          priority: regionConfig.priority,
          votes: 1,
          tags: {
            region: region.name,
            datacenter: region.datacenter,
            nodeType: 'data',
            ssd: 'true',
            workload: i === 0 ? 'primary' : 'secondary'
          }
        });
      }

      // Add analytics-dedicated members
      if (regionConfig.analyticsMembers > 0) {
        for (let i = 0; i < regionConfig.analyticsMembers; i++) {
          members.push({
            _id: memberIndex++,
            host: `${region.name}-analytics-${i}.${region.domain}:27017`,
            priority: 0, // Never become primary
            votes: 1,
            tags: {
              region: region.name,
              datacenter: region.datacenter,
              nodeType: 'analytics',
              workload: 'analytics',
              ssd: 'true'
            },
            hidden: true // Hidden from application discovery
          });
        }
      }

      // Add arbiter if needed for odd number of voting members
      if (regionConfig.needsArbiter) {
        members.push({
          _id: memberIndex++,
          host: `${region.name}-arbiter.${region.domain}:27017`,
          arbiterOnly: true,
          priority: 0,
          votes: 1,
          tags: {
            region: region.name,
            datacenter: region.datacenter,
            nodeType: 'arbiter'
          }
        });
      }
    }

    // Configure advanced settings for geographic distribution
    const replicaSetConfig = {
      _id: 'global-rs',
      version: 1,
      members: members,
      settings: {
        chainingAllowed: true,
        heartbeatIntervalMillis: 2000,
        heartbeatTimeoutSecs: 10,
        electionTimeoutMillis: 10000,
        catchUpTimeoutMillis: 60000,

        // Custom write concerns for multi-region safety
        // (custom modes count distinct values of a member tag, so { region: N }
        //  requires acknowledgment from members in N different regions)
        getLastErrorModes: {
          // Require acknowledgment from at least two distinct regions
          multiRegion: { region: Math.min(regions.length, 2) },
          // Require acknowledgment from a majority of data centers
          multiDataCenter: { datacenter: Math.ceil(regions.length / 2) },
          // For critical operations, require every region
          allRegions: { region: regions.length }
        },

        getLastErrorDefaults: {
          w: 'multiRegion',
          j: true,
          wtimeout: 15000 // Higher timeout for geographic distribution
        }
      }
    };

    // Initialize the distributed replica set
    const initResult = await this.initializeReplicaSet(members, {
      replicaSetName: 'global-rs',
      writeConcern: { w: 'multiRegion', j: true },
      readPreference: 'primaryPreferred'
    });

    if (initResult.success) {
      // Configure regional read preferences
      await this.configureRegionalReadPreferences(regions);

      // Setup cross-region monitoring
      await this.setupCrossRegionMonitoring(regions);

      // Validate network connectivity and latency
      await this.validateCrossRegionConnectivity(regions);
    }

    return {
      topology: topology,
      replicaSetConfig: replicaSetConfig,
      initResult: initResult,
      optimizations: await this.generateGlobalOptimizations(topology)
    };
  }

  async implementZeroDowntimeMaintenance(maintenancePlan) {
    console.log('Implementing zero-downtime maintenance procedures...');

    const maintenance = {
      planId: require('crypto').randomUUID(),
      startTime: new Date(),
      phases: [],
      rollbackPlan: null,
      success: false
    };

    try {
      // Phase 1: Pre-maintenance health check
      const preMaintenanceHealth = await this.performComprehensiveHealthCheck();

      if (preMaintenanceHealth.alerts.some(alert => alert.severity === 'CRITICAL')) {
        throw new Error('Cannot perform maintenance: critical health issues detected');
      }

      maintenance.phases.push({
        phase: 'pre_maintenance_health_check',
        status: 'completed',
        timestamp: new Date(),
        data: { healthyMembers: preMaintenanceHealth.replicaSetStatus.healthyMembers }
      });

      // Phase 2: Create maintenance plan execution order
      const executionOrder = this.createMaintenanceExecutionOrder(maintenancePlan, preMaintenanceHealth);

      maintenance.phases.push({
        phase: 'execution_order_planning',
        status: 'completed',
        timestamp: new Date(),
        data: { executionOrder: executionOrder }
      });

      // Phase 3: Execute maintenance on each member
      for (const step of executionOrder) {
        console.log(`Executing maintenance step: ${step.description}`);

        const stepResult = await this.executeMaintenanceStep(step);

        maintenance.phases.push({
          phase: `maintenance_step_${step.memberId}`,
          status: stepResult.success ? 'completed' : 'failed',
          timestamp: new Date(),
          data: stepResult
        });

        if (!stepResult.success && step.critical) {
          throw new Error(`Critical maintenance step failed: ${step.description}`);
        }

        // Wait for member to rejoin and catch up
        if (stepResult.requiresRejoin) {
          await this.waitForMemberRecovery(step.memberId, 300000); // 5 minute timeout
        }

        // Validate cluster health before proceeding
        const intermediateHealth = await this.performComprehensiveHealthCheck();
        if (intermediateHealth.replicaSetStatus.healthyMembers < this.performanceTargets.minHealthyMembers) {
          throw new Error('Insufficient healthy members to continue maintenance');
        }
      }

      // Phase 4: Post-maintenance validation
      const postMaintenanceHealth = await this.performComprehensiveHealthCheck();
      const validationResult = await this.validateMaintenanceCompletion(maintenancePlan, postMaintenanceHealth);

      maintenance.phases.push({
        phase: 'post_maintenance_validation',
        status: validationResult.success ? 'completed' : 'failed',
        timestamp: new Date(),
        data: validationResult
      });

      maintenance.success = validationResult.success;
      maintenance.endTime = new Date();
      maintenance.totalDurationMs = maintenance.endTime - maintenance.startTime;

      console.log(`Zero-downtime maintenance ${maintenance.success ? 'completed successfully' : 'completed with issues'}`);
      console.log(`Total duration: ${maintenance.totalDurationMs}ms`);

      return maintenance;

    } catch (error) {
      console.error('Maintenance procedure failed:', error);

      maintenance.phases.push({
        phase: 'error',
        status: 'failed',
        timestamp: new Date(),
        error: error.message
      });

      // Attempt rollback if configured
      if (maintenance.rollbackPlan) {
        console.log('Attempting rollback...');
        const rollbackResult = await this.executeRollback(maintenance.rollbackPlan);
        maintenance.rollback = rollbackResult;
      }

      maintenance.success = false;
      maintenance.endTime = new Date();
      return maintenance;
    }
  }

  calculateOptimalMemberDistribution(regions) {
    const totalRegions = regions.length;
    const distribution = {};

    if (totalRegions === 1) {
      // Single region deployment
      distribution[regions[0].name] = {
        dataMembers: 3,
        analyticsMembers: 1,
        priority: 1,
        needsArbiter: false
      };
    } else if (totalRegions === 2) {
      // Two region deployment - need arbiter for odd voting members
      distribution[regions[0].name] = {
        dataMembers: 2,
        analyticsMembers: 1,
        priority: 1,
        needsArbiter: false
      };
      distribution[regions[1].name] = {
        dataMembers: 2,
        analyticsMembers: 1,
        priority: 0.5,
        needsArbiter: true // Add arbiter to prevent split-brain
      };
    } else if (totalRegions >= 3) {
      // Multi-region deployment with primary preference
      const primaryRegion = regions[0];
      distribution[primaryRegion.name] = {
        dataMembers: 2,
        analyticsMembers: 1,
        priority: 1,
        needsArbiter: false
      };

      regions.slice(1).forEach((region, index) => {
        distribution[region.name] = {
          dataMembers: 1,
          analyticsMembers: index === 0 ? 1 : 0, // Analytics in first secondary region
          priority: 0.5 - (index * 0.1), // Decreasing priority
          needsArbiter: false
        };
      });
    }

    return distribution;
  }

  async measureInterRegionLatency(regions) {
    console.log('Measuring inter-region network latency...');

    const latencyMatrix = {};

    for (const sourceRegion of regions) {
      latencyMatrix[sourceRegion.name] = {};

      for (const targetRegion of regions) {
        if (sourceRegion.name === targetRegion.name) {
          latencyMatrix[sourceRegion.name][targetRegion.name] = 0;
          continue;
        }

        try {
          // Simulate latency measurement (in production, use actual network tests)
          const estimatedLatency = this.estimateLatencyBetweenRegions(sourceRegion, targetRegion);
          latencyMatrix[sourceRegion.name][targetRegion.name] = estimatedLatency;

        } catch (error) {
          console.warn(`Failed to measure latency between ${sourceRegion.name} and ${targetRegion.name}:`, error.message);
          latencyMatrix[sourceRegion.name][targetRegion.name] = 999; // High value for unreachable
        }
      }
    }

    return latencyMatrix;
  }

  estimateLatencyBetweenRegions(source, target) {
    // Simplified latency estimation based on geographic distance
    const latencyMap = {
      'us-east-1_us-west-2': 70,
      'us-east-1_eu-west-1': 85,
      'us-west-2_eu-west-1': 140,
      'us-east-1_ap-southeast-1': 180,
      'us-west-2_ap-southeast-1': 120,
      'eu-west-1_ap-southeast-1': 160
    };

    const key = `${source.name}_${target.name}`;
    const reverseKey = `${target.name}_${source.name}`;

    return latencyMap[key] || latencyMap[reverseKey] || 200; // Default high latency
  }

  designFailoverStrategy(regions) {
    return {
      primaryRegionFailure: {
        strategy: 'automatic_election',
        timeoutMs: 10000,
        requiredVotes: Math.ceil((regions.length * 2 + 1) / 2) // Majority
      },

      networkPartition: {
        strategy: 'majority_partition_wins',
        description: 'Partition with majority of voting members continues operation'
      },

      crossRegionReplication: {
        strategy: 'eventual_consistency',
        maxLagSeconds: 60,
        description: 'Accept eventual consistency during network issues'
      }
    };
  }

  async waitForMemberRecovery(memberId, timeoutMs) {
    console.log(`Waiting for member ${memberId} to recover...`);

    const startTime = Date.now();
    const pollInterval = 5000; // Check every 5 seconds

    while (Date.now() - startTime < timeoutMs) {
      try {
        const status = await this.db.runCommand({ replSetGetStatus: 1 });
        const member = status.members.find(m => m._id === memberId);

        if (member && [1, 2].includes(member.state)) { // PRIMARY or SECONDARY
          console.log(`Member ${memberId} recovered successfully`);
          return true;
        }

        await new Promise(resolve => setTimeout(resolve, pollInterval));
      } catch (error) {
        console.warn(`Error checking member ${memberId} status:`, error.message);
        await new Promise(resolve => setTimeout(resolve, pollInterval));
      }
    }

    throw new Error(`Member ${memberId} recovery timeout after ${timeoutMs}ms`);
  }

  createMaintenanceExecutionOrder(maintenancePlan, healthStatus) {
    const executionOrder = [];

    // Always start with secondaries, then primary
    const secondaries = healthStatus.memberHealth
      .filter(m => m.stateStr === 'SECONDARY')
      .sort((a, b) => (b.priority || 0) - (a.priority || 0)); // Highest priority secondary first

    const primary = healthStatus.memberHealth.find(m => m.stateStr === 'PRIMARY');

    // Add secondary maintenance steps
    secondaries.forEach((member, index) => {
      executionOrder.push({
        memberId: member._id,
        memberName: member.name,
        description: `Maintenance on secondary: ${member.name}`,
        critical: false,
        requiresRejoin: maintenancePlan.requiresRestart,
        estimatedDurationMs: maintenancePlan.estimatedDurationMs || 300000,
        order: index
      });
    });

    // Add primary maintenance step (with step-down)
    if (primary) {
      executionOrder.push({
        memberId: primary._id,
        memberName: primary.name,
        description: `Maintenance on primary: ${primary.name} (with step-down)`,
        critical: true,
        requiresRejoin: maintenancePlan.requiresRestart,
        requiresStepDown: true,
        estimatedDurationMs: (maintenancePlan.estimatedDurationMs || 300000) + 30000, // Extra time for election
        order: secondaries.length
      });
    }

    return executionOrder;
  }

  async executeMaintenanceStep(step) {
    console.log(`Executing maintenance step: ${step.description}`);

    try {
      // Step down primary if required
      if (step.requiresStepDown) {
        console.log(`Stepping down primary: ${step.memberName}`);
        await this.db.runCommand({ 
          replSetStepDown: Math.ceil(step.estimatedDurationMs / 1000) + 60, // Add buffer
          force: false 
        });

        // Wait for new primary election
        await this.waitForPrimaryElection(30000);
      }

      // Simulate maintenance operation (replace with actual maintenance logic)
      console.log(`Performing maintenance on ${step.memberName}...`);
      await new Promise(resolve => setTimeout(resolve, 5000)); // Simulate maintenance work

      return {
        success: true,
        memberId: step.memberId,
        memberName: step.memberName,
        requiresRejoin: step.requiresRejoin,
        completionTime: new Date()
      };

    } catch (error) {
      console.error(`Maintenance step failed for ${step.memberName}:`, error);
      return {
        success: false,
        memberId: step.memberId,
        memberName: step.memberName,
        error: error.message,
        requiresRejoin: false
      };
    }
  }

  async validateMaintenanceCompletion(maintenancePlan, postMaintenanceHealth) {
    console.log('Validating maintenance completion...');

    const validation = {
      success: true,
      checks: [],
      issues: []
    };

    // Check that all members are healthy
    const healthyMembers = postMaintenanceHealth.memberHealth
      .filter(m => ['primary', 'healthy'].includes(m.status.overall));

    validation.checks.push({
      check: 'member_health',
      passed: healthyMembers.length >= this.performanceTargets.minHealthyMembers,
      details: `${healthyMembers.length} healthy members (minimum: ${this.performanceTargets.minHealthyMembers})`
    });

    // Check replication lag
    const maxLag = Math.max(0, ...Object.values(postMaintenanceHealth.replicationLag)); // 0 if no lag samples
    validation.checks.push({
      check: 'replication_lag',
      passed: maxLag <= this.performanceTargets.maxReplicationLagSeconds,
      details: `Maximum lag: ${maxLag}s (target: ${this.performanceTargets.maxReplicationLagSeconds}s)`
    });

    // Check for any alerts
    const criticalAlerts = postMaintenanceHealth.alerts
      .filter(alert => alert.severity === 'CRITICAL');

    validation.checks.push({
      check: 'critical_alerts',
      passed: criticalAlerts.length === 0,
      details: `${criticalAlerts.length} critical alerts`
    });

    // Overall success determination
    validation.success = validation.checks.every(check => check.passed);

    if (!validation.success) {
      validation.issues = validation.checks
        .filter(check => !check.passed)
        .map(check => `${check.check}: ${check.details}`);
    }

    return validation;
  }
}
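
A hypothetical invocation of the enterprise manager (region names, domains, and the seed host are illustrative assumptions) that plans a three-region topology and inspects the resulting member distribution and failover strategy:

// Hypothetical multi-region deployment plan (hosts, domains, and regions are placeholders)
const enterpriseManager = new EnterpriseReplicaSetManager(
  'mongodb://seed-1.company.com:27017',
  { multiRegionDeployment: true, dedicatedAnalyticsNodes: true }
);

const regions = [
  { name: 'us-east-1', datacenter: 'dc1', domain: 'company.com' },
  { name: 'us-west-2', datacenter: 'dc2', domain: 'company.com' },
  { name: 'eu-west-1', datacenter: 'dc3', domain: 'company.com' }
];

enterpriseManager.deployGeographicallyDistributedReplicaSet(regions)
  .then(result => {
    console.log('Member distribution:', result.topology.memberDistribution);
    console.log('Failover strategy:', result.topology.failoverStrategy);
  })
  .catch(console.error);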

SQL-Style Replica Set Management with QueryLeaf

QueryLeaf provides familiar SQL syntax for MongoDB replica set management and monitoring:

-- QueryLeaf replica set management with SQL-familiar syntax

-- Create replica set with advanced configuration
CREATE REPLICA SET global_ecommerce_rs WITH (
  members = [
    { host = 'us-east-primary-1.company.com:27017', priority = 1.0, tags = { region = 'us-east', datacenter = 'dc1' } },
    { host = 'us-east-secondary-1.company.com:27017', priority = 0.5, tags = { region = 'us-east', datacenter = 'dc2' } },
    { host = 'us-west-secondary-1.company.com:27017', priority = 0.3, tags = { region = 'us-west', datacenter = 'dc3' } },
    { host = 'eu-west-secondary-1.company.com:27017', priority = 0.3, tags = { region = 'eu-west', datacenter = 'dc4' } },
    { host = 'analytics-secondary-1.company.com:27017', priority = 0, hidden = true, tags = { workload = 'analytics' } }
  ],

  -- Advanced replica set settings
  heartbeat_interval = '2 seconds',
  election_timeout = '10 seconds',
  catchup_timeout = '60 seconds',

  -- Custom write concerns for multi-region safety
  write_concerns = {
    multi_region = { us_east = 1, us_west = 1, eu_west = 1 },
    majority_datacenter = { datacenter = 3 },
    analytics_safe = { workload_analytics = 0, datacenter = 2 }
  },

  default_write_concern = { w = 'multi_region', j = true, wtimeout = '15 seconds' }
);

-- Monitor replica set health with comprehensive metrics
WITH replica_set_health AS (
  SELECT 
    member_name,
    member_state,
    member_state_str,
    health_status,
    uptime_seconds,
    ping_ms,

    -- Replication lag calculation
    EXTRACT(EPOCH FROM (CURRENT_TIMESTAMP - optime_date)) as replication_lag_seconds,

    -- Member performance assessment
    CASE member_state
      WHEN 1 THEN 'PRIMARY'
      WHEN 2 THEN 'SECONDARY'
      WHEN 7 THEN 'ARBITER'
      WHEN 8 THEN 'DOWN'
      WHEN 3 THEN 'RECOVERING'
      ELSE 'UNKNOWN'
    END as role,

    -- Health grade assignment
    CASE 
      WHEN health_status = 1 AND member_state IN (1, 2) AND ping_ms < 50 THEN 'A'
      WHEN health_status = 1 AND member_state IN (1, 2) AND ping_ms < 100 THEN 'B'
      WHEN health_status = 1 AND member_state IN (1, 2, 7) THEN 'C'
      WHEN health_status = 1 AND member_state NOT IN (1, 2, 7) THEN 'D'
      ELSE 'F'
    END as health_grade,

    -- Network performance indicators
    CASE
      WHEN ping_ms IS NULL THEN 'UNREACHABLE'
      WHEN ping_ms < 10 THEN 'EXCELLENT'
      WHEN ping_ms < 50 THEN 'GOOD'
      WHEN ping_ms < 100 THEN 'ACCEPTABLE'
      WHEN ping_ms < 250 THEN 'POOR'
      ELSE 'CRITICAL'
    END as network_performance,

    -- Extract member tags for analysis
    member_tags.region as member_region,
    member_tags.datacenter as member_datacenter,
    member_tags.workload as member_workload,
    sync_source_host

  FROM rs_status()  -- QueryLeaf function to get replica set status
),

replication_analysis AS (
  SELECT 
    member_region,
    member_datacenter,
    role,

    -- Regional distribution analysis
    COUNT(*) as members_in_region,
    COUNT(*) FILTER (WHERE role = 'SECONDARY') as secondaries_in_region,
    COUNT(*) FILTER (WHERE health_grade IN ('A', 'B')) as healthy_members_in_region,

    -- Performance metrics by region
    AVG(replication_lag_seconds) as avg_replication_lag,
    MAX(replication_lag_seconds) as max_replication_lag,
    AVG(ping_ms) as avg_network_latency,
    MAX(ping_ms) as max_network_latency,

    -- Health distribution
    COUNT(*) FILTER (WHERE health_grade = 'A') as grade_a_members,
    COUNT(*) FILTER (WHERE health_grade = 'B') as grade_b_members,
    COUNT(*) FILTER (WHERE health_grade IN ('D', 'F')) as problematic_members,

    -- Fault tolerance assessment
    CASE
      WHEN COUNT(*) FILTER (WHERE role IN ('PRIMARY', 'SECONDARY') AND health_grade IN ('A', 'B')) >= 2 
      THEN 'FAULT_TOLERANT'
      WHEN COUNT(*) FILTER (WHERE role IN ('PRIMARY', 'SECONDARY')) >= 2 
      THEN 'MINIMAL_REDUNDANCY'
      ELSE 'AT_RISK'
    END as fault_tolerance_status

  FROM replica_set_health
  WHERE role != 'ARBITER'  -- Exclude arbiters from data analysis
  GROUP BY member_region, member_datacenter, role
),

failover_readiness_assessment AS (
  SELECT 
    rh.member_name,
    rh.role,
    rh.health_grade,
    rh.replication_lag_seconds,
    rh.member_region,

    -- Failover readiness scoring
    CASE 
      WHEN rh.role = 'PRIMARY' THEN 'N/A - Current Primary'
      WHEN rh.role = 'SECONDARY' AND rh.health_grade IN ('A', 'B') AND rh.replication_lag_seconds < 10 THEN 'READY'
      WHEN rh.role = 'SECONDARY' AND rh.health_grade = 'C' AND rh.replication_lag_seconds < 30 THEN 'ACCEPTABLE'
      WHEN rh.role = 'SECONDARY' AND rh.replication_lag_seconds < 120 THEN 'DELAYED'
      ELSE 'NOT_READY'
    END as failover_readiness,

    -- Estimated failover time
    CASE 
      WHEN rh.role = 'SECONDARY' AND rh.health_grade IN ('A', 'B') AND rh.replication_lag_seconds < 10 
      THEN '< 15 seconds'
      WHEN rh.role = 'SECONDARY' AND rh.replication_lag_seconds < 60 
      THEN '15-45 seconds'  
      WHEN rh.role = 'SECONDARY' AND rh.replication_lag_seconds < 300 
      THEN '1-5 minutes'
      ELSE '> 5 minutes or unknown'
    END as estimated_failover_time,

    -- Regional failover preference
    ROW_NUMBER() OVER (
      PARTITION BY rh.member_region 
      ORDER BY 
        CASE rh.health_grade WHEN 'A' THEN 1 WHEN 'B' THEN 2 WHEN 'C' THEN 3 ELSE 4 END,
        rh.replication_lag_seconds,
        rh.ping_ms
    ) as regional_failover_preference

  FROM replica_set_health rh
  WHERE rh.role IN ('PRIMARY', 'SECONDARY')
)

-- Comprehensive replica set status report
SELECT 
  'REPLICA SET HEALTH SUMMARY' as report_section,

  -- Overall cluster health
  (SELECT COUNT(*) FROM replica_set_health WHERE health_grade IN ('A', 'B')) as healthy_members,
  (SELECT COUNT(*) FROM replica_set_health WHERE role IN ('PRIMARY', 'SECONDARY')) as data_bearing_members,
  (SELECT COUNT(DISTINCT member_region) FROM replica_set_health) as regions_covered,
  (SELECT COUNT(DISTINCT member_datacenter) FROM replica_set_health) as datacenters_covered,

  -- Performance indicators
  (SELECT ROUND(AVG(replication_lag_seconds)::numeric, 2) FROM replica_set_health WHERE role = 'SECONDARY') as avg_replication_lag_sec,
  (SELECT ROUND(MAX(replication_lag_seconds)::numeric, 2) FROM replica_set_health WHERE role = 'SECONDARY') as max_replication_lag_sec,
  (SELECT ROUND(AVG(ping_ms)::numeric, 1) FROM replica_set_health WHERE ping_ms IS NOT NULL) as avg_network_latency_ms,

  -- Fault tolerance assessment (report the most conservative status across regions)
  (SELECT fault_tolerance_status
   FROM replication_analysis
   ORDER BY CASE fault_tolerance_status
     WHEN 'AT_RISK' THEN 1
     WHEN 'MINIMAL_REDUNDANCY' THEN 2
     ELSE 3
   END
   LIMIT 1) as overall_fault_tolerance,

  -- Failover readiness
  (SELECT COUNT(*) FROM failover_readiness_assessment WHERE failover_readiness = 'READY') as failover_ready_secondaries,
  (SELECT member_name FROM failover_readiness_assessment WHERE regional_failover_preference = 1 AND role = 'SECONDARY' ORDER BY replication_lag_seconds LIMIT 1) as preferred_failover_candidate

UNION ALL

-- Regional distribution analysis
SELECT 
  'REGIONAL DISTRIBUTION' as report_section,

  member_region as region,
  members_in_region,
  secondaries_in_region,  
  healthy_members_in_region,
  ROUND(avg_replication_lag::numeric, 2) as avg_lag_sec,
  ROUND(avg_network_latency::numeric, 1) as avg_latency_ms,
  fault_tolerance_status,

  -- Regional health grade
  CASE 
    WHEN problematic_members = 0 AND grade_a_members >= 1 THEN 'EXCELLENT'
    WHEN problematic_members = 0 AND healthy_members_in_region >= 1 THEN 'GOOD'
    WHEN problematic_members <= 1 THEN 'ACCEPTABLE'
    ELSE 'NEEDS_ATTENTION'
  END as regional_health_grade

FROM replication_analysis
WHERE member_region IS NOT NULL

UNION ALL

-- Failover readiness details
SELECT 
  'FAILOVER READINESS' as report_section,

  member_name,
  role,
  health_grade,
  failover_readiness,
  estimated_failover_time,
  member_region,

  CASE 
    WHEN failover_readiness = 'READY' THEN 'Can handle immediate failover'
    WHEN failover_readiness = 'ACCEPTABLE' THEN 'Can handle failover with short delay'
    WHEN failover_readiness = 'DELAYED' THEN 'Requires catch-up time before failover'
    ELSE 'Not suitable for failover'
  END as failover_notes

FROM failover_readiness_assessment
ORDER BY 
  CASE failover_readiness 
    WHEN 'READY' THEN 1 
    WHEN 'ACCEPTABLE' THEN 2 
    WHEN 'DELAYED' THEN 3 
    ELSE 4 
  END,
  replication_lag_seconds;

-- Advanced read preference configuration
CREATE READ PREFERENCE CONFIGURATION application_read_preferences AS (

  -- Real-time dashboard queries - require primary for consistency
  real_time_dashboard = {
    read_preference = 'primary',
    max_staleness = '0 seconds',
    tags = {},
    description = 'Live dashboards requiring immediate consistency'
  },

  -- Business intelligence queries - can use secondaries
  business_intelligence = {
    read_preference = 'secondaryPreferred',
    max_staleness = '90 seconds',  -- MongoDB requires maxStalenessSeconds >= 90
    tags = [{ workload = 'analytics' }, { region = 'us-east' }],
    description = 'BI queries with slight staleness tolerance'
  },

  -- Geographic user queries - prefer regional secondaries
  geographic_user_queries = {
    read_preference = 'nearest',
    max_staleness = '90 seconds',  -- minimum allowed by MongoDB
    tags = [{ region = '${user_region}' }],
    description = 'User-facing queries optimized for geographic proximity'
  },

  -- Reporting and archival - use dedicated analytics secondary
  reporting_archival = {
    read_preference = 'secondary',
    max_staleness = '300 seconds',
    tags = [{ workload = 'analytics' }, { hidden = 'true' }],
    description = 'Heavy reporting queries isolated from primary workload'
  },

  -- Backup operations - use specific backup-designated secondary
  backup_operations = {
    read_preference = 'secondary', 
    max_staleness = '600 seconds',
    tags = [{ backup = 'true' }],
    description = 'Backup and compliance operations'
  }
);

-- Automatic failover testing and validation
CREATE FAILOVER TEST PROCEDURE comprehensive_failover_test AS (

  -- Test configuration
  test_duration = '5 minutes',
  data_consistency_validation = true,
  application_connectivity_testing = true,
  performance_impact_measurement = true,

  -- Test phases
  phases = [
    {
      phase = 'pre_test_health_check',
      description = 'Validate cluster health before testing',
      required_healthy_members = 3,
      max_replication_lag = '30 seconds'
    },

    {
      phase = 'test_data_insertion', 
      description = 'Insert test data for consistency verification',
      test_documents = 1000,
      write_concern = { w = 'majority', j = true }
    },

    {
      phase = 'primary_step_down',
      description = 'Force primary to step down',
      step_down_duration = '300 seconds',
      force_step_down = false
    },

    {
      phase = 'election_monitoring',
      description = 'Monitor primary election process', 
      max_election_time = '30 seconds',
      log_election_details = true
    },

    {
      phase = 'connectivity_validation',
      description = 'Test application connectivity to new primary',
      connection_timeout = '10 seconds',
      retry_attempts = 3
    },

    {
      phase = 'data_consistency_check',
      description = 'Verify data consistency after failover',
      verify_test_data = true,
      checksum_validation = true
    },

    {
      phase = 'performance_assessment',
      description = 'Measure failover impact on performance',
      metrics = ['election_time', 'connectivity_restore_time', 'replication_catch_up_time']
    }
  ],

  -- Success criteria
  success_criteria = {
    max_election_time = '30 seconds',
    data_consistency = 'required',
    zero_data_loss = 'required',
    application_connectivity_restore = '< 60 seconds'
  },

  -- Automated scheduling
  schedule = 'monthly',
  notification_recipients = ['dba-team@company.com', 'ops-team@company.com']
);

-- Disaster recovery configuration and procedures
CREATE DISASTER RECOVERY PLAN enterprise_dr_plan AS (

  -- Backup strategy
  backup_strategy = {
    hot_backups = {
      frequency = 'daily',
      retention = '30 days',
      compression = true,
      encryption = true,
      storage_locations = ['s3://company-mongo-backups', 'gcs://company-mongo-dr']
    },

    continuous_backup = {
      oplog_tailing = true,
      change_streams = true,
      point_in_time_recovery = true,
      max_recovery_window = '7 days'
    },

    cross_region_replication = {
      enabled = true,
      target_regions = ['us-west-2', 'eu-central-1'],
      replication_lag_target = '< 60 seconds'
    }
  },

  -- Recovery procedures
  recovery_procedures = {

    -- Single member failure
    member_failure = {
      detection_time_target = '< 30 seconds',
      automatic_response = true,
      procedures = [
        'Automatic failover via replica set election',
        'Alert operations team',
        'Provision replacement member',
        'Add replacement to replica set',
        'Monitor replication catch-up'
      ]
    },

    -- Regional failure  
    regional_failure = {
      detection_time_target = '< 2 minutes',
      automatic_response = 'partial',
      procedures = [
        'Automatic failover to available regions',
        'Redirect application traffic',
        'Scale remaining regions for increased load',
        'Provision new regional deployment', 
        'Restore full geographic distribution'
      ]
    },

    -- Complete cluster failure
    complete_failure = {
      detection_time_target = '< 5 minutes',
      automatic_response = false,
      procedures = [
        'Activate disaster recovery plan',
        'Restore from most recent backup',
        'Apply oplog entries for point-in-time recovery',
        'Provision new cluster infrastructure',
        'Validate data integrity',
        'Redirect application traffic to recovered cluster'
      ]
    }
  },

  -- RTO/RPO targets
  recovery_targets = {
    member_failure = { rto = '< 1 minute', rpo = '0 seconds' },
    regional_failure = { rto = '< 5 minutes', rpo = '< 30 seconds' },
    complete_failure = { rto = '< 2 hours', rpo = '< 15 minutes' }
  },

  -- Testing and validation
  testing_schedule = {
    failover_tests = 'monthly',
    disaster_recovery_drills = 'quarterly', 
    backup_restoration_tests = 'weekly',
    cross_region_connectivity_tests = 'daily'
  }
);

-- Real-time monitoring and alerting configuration
CREATE MONITORING CONFIGURATION replica_set_monitoring AS (

  -- Health check intervals
  health_check_interval = '10 seconds',
  performance_sampling_interval = '30 seconds',
  trend_analysis_window = '1 hour',

  -- Alert thresholds
  alert_thresholds = {

    -- Replication lag alerts
    replication_lag = {
      warning = '30 seconds',
      critical = '2 minutes',
      escalation = '5 minutes'
    },

    -- Member health alerts  
    member_health = {
      warning = 'any_member_down',
      critical = 'primary_down_or_majority_unavailable',
      escalation = 'split_brain_detected'
    },

    -- Network latency alerts
    network_latency = {
      warning = '100 ms average',
      critical = '500 ms average', 
      escalation = 'member_unreachable'
    },

    -- Election frequency alerts
    election_frequency = {
      warning = '2 elections per hour',
      critical = '5 elections per hour',
      escalation = 'continuous_election_cycling'
    }
  },

  -- Notification configuration
  notifications = {
    email = ['dba-team@company.com', 'ops-team@company.com'],
    slack = '#database-alerts',
    pagerduty = 'mongodb-replica-set-service',
    webhook = 'https://monitoring.company.com/mongodb-alerts'
  },

  -- Automated responses
  automated_responses = {
    member_down = 'log_alert_and_notify',
    high_replication_lag = 'investigate_and_notify',
    primary_election = 'log_details_and_validate_health',
    split_brain_detection = 'immediate_escalation'
  }
);

-- QueryLeaf provides comprehensive replica set management:
-- 1. SQL-familiar syntax for replica set creation and configuration
-- 2. Advanced health monitoring with comprehensive metrics and alerting
-- 3. Automated failover testing and validation procedures
-- 4. Sophisticated read preference management for performance optimization
-- 5. Comprehensive disaster recovery planning and implementation
-- 6. Real-time monitoring with customizable thresholds and notifications
-- 7. Geographic distribution management for multi-region deployments  
-- 8. Zero-downtime maintenance procedures with automatic validation
-- 9. Performance impact assessment and optimization recommendations
-- 10. Integration with MongoDB's native replica set functionality
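
The tiered read preferences defined above in QueryLeaf syntax map onto standard MongoDB driver read preference options. The following is a minimal PyMongo sketch of the same idea; the connection string, database name, and tag values are illustrative, and MongoDB itself enforces a 90-second minimum for maxStalenessSeconds.

from pymongo import MongoClient
from pymongo.read_preferences import Nearest, SecondaryPreferred

client = MongoClient("mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=rs0")

# Business-intelligence reads: prefer analytics-tagged secondaries and tolerate
# bounded staleness; the empty tag set at the end falls back to any member.
bi_db = client.get_database(
    "appdb",
    read_preference=SecondaryPreferred(
        tag_sets=[{"workload": "analytics"}, {}],
        max_staleness=90,  # seconds; MongoDB rejects values below 90
    ),
)

# Geographic user reads: route to the lowest-latency member in the caller's region.
regional_db = client.get_database(
    "appdb",
    read_preference=Nearest(tag_sets=[{"region": "us-east"}, {}]),
)

print(bi_db.read_preference)
print(regional_db.read_preference)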

Best Practices for Replica Set Implementation

High Availability Design Principles

Essential guidelines for robust MongoDB replica set deployments; a configuration sketch follows the list:

  1. Odd Number of Voting Members: Always maintain an odd number of voting members to prevent split-brain scenarios
  2. Geographic Distribution: Deploy members across multiple availability zones or regions for disaster recovery
  3. Resource Planning: Size replica set members appropriately for expected workload and failover scenarios
  4. Network Optimization: Ensure low-latency, high-bandwidth connections between replica set members
  5. Monitoring Integration: Implement comprehensive monitoring with proactive alerting for health and performance
  6. Regular Testing: Conduct regular failover tests and disaster recovery drills to validate procedures
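
As a concrete illustration of principles 1 and 2, the sketch below shows a replica set configuration document with five voting members (an odd number) spread across three regions. Hosts, regions, and priorities are hypothetical; applying such a document goes through rs.reconfig() in the shell or the replSetReconfig admin command from a driver.

# Hypothetical hosts, regions, and priorities for illustration only.
replica_set_config = {
    "_id": "rs0",
    "version": 2,  # a reconfig must use the current config version + 1
    "members": [
        {"_id": 0, "host": "mongo-a.us-east-1a:27017", "priority": 2,
         "tags": {"region": "us-east", "datacenter": "use1-az1"}},
        {"_id": 1, "host": "mongo-b.us-east-1b:27017", "priority": 1,
         "tags": {"region": "us-east", "datacenter": "use1-az2"}},
        {"_id": 2, "host": "mongo-c.us-west-2a:27017", "priority": 1,
         "tags": {"region": "us-west", "datacenter": "usw2-az1"}},
        {"_id": 3, "host": "mongo-d.eu-central-1a:27017", "priority": 1,
         "tags": {"region": "eu-central", "datacenter": "euc1-az1"}},
        # Priority-0 analytics member: it votes and serves tagged reads,
        # but can never be elected primary.
        {"_id": 4, "host": "mongo-e.us-east-1c:27017", "priority": 0, "votes": 1,
         "tags": {"workload": "analytics", "region": "us-east"}},
    ],
    "settings": {"electionTimeoutMillis": 10000},
}

# Five voting members across three regions keeps elections decisive and
# survives the loss of any single region.
assert sum(m.get("votes", 1) for m in replica_set_config["members"]) % 2 == 1
# client.admin.command("replSetReconfig", replica_set_config)  # run against the primary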

Operational Excellence

Optimize replica set operations for production environments; a replication-lag monitoring sketch follows the list:

  1. Automated Deployment: Use infrastructure as code for consistent replica set deployments
  2. Configuration Management: Maintain consistent configuration across all replica set members
  3. Security Implementation: Enable authentication, authorization, and encryption for all replica communications
  4. Backup Strategy: Implement multiple backup strategies including hot backups and point-in-time recovery
  5. Performance Monitoring: Track replication lag, network latency, and resource utilization continuously
  6. Documentation Maintenance: Keep runbooks and procedures updated with current configuration and processes
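
For continuous replication-lag tracking (item 5), a minimal check can be built on the replSetGetStatus admin command. The PyMongo sketch below uses an illustrative connection string and alerting threshold, and compares each secondary's optime against the primary's.

from pymongo import MongoClient

WARN_LAG_SECONDS = 30  # illustrative alerting threshold

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
status = client.admin.command("replSetGetStatus")

primaries = [m for m in status["members"] if m["stateStr"] == "PRIMARY"]
if not primaries:
    raise RuntimeError("No primary found: the replica set may be mid-election")

primary_optime = primaries[0]["optimeDate"]

for member in status["members"]:
    if member["stateStr"] != "SECONDARY":
        continue
    lag_seconds = (primary_optime - member["optimeDate"]).total_seconds()
    level = "WARN" if lag_seconds > WARN_LAG_SECONDS else "ok"
    print(f"{member['name']}: lag={lag_seconds:.1f}s health={member['health']} [{level}]")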

Conclusion

MongoDB's replica set architecture provides comprehensive high availability and disaster recovery capabilities while avoiding much of the manual scripting and operational complexity of traditional database replication. Its election algorithms, automatic failover mechanisms, and flexible configuration options help maintain business continuity through member, datacenter, and regional failures while preserving data consistency and application performance.

Key MongoDB Replica Set benefits include:

  • Automatic Failover: Intelligent primary election with no manual intervention required
  • Strong Consistency: Configurable write and read concerns for application-specific consistency requirements (illustrated after this list)
  • Geographic Distribution: Multi-region deployment support for comprehensive disaster recovery
  • Zero Downtime Operations: Add, remove, and maintain replica set members without service interruption
  • Flexible Read Scaling: Advanced read preference configuration for optimal performance distribution
  • Comprehensive Monitoring: Built-in health monitoring with detailed metrics and alerting capabilities
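
As a brief illustration of the configurable consistency controls listed above, the PyMongo sketch below (database and collection names are illustrative) applies a majority write concern and a majority read concern to a single collection handle.

from pymongo import MongoClient, WriteConcern
from pymongo.read_concern import ReadConcern

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")

# Writes must be journaled and acknowledged by a majority of voting members,
# with a bounded wait so the application can react if the set is degraded.
orders = client.appdb.get_collection(
    "orders",
    write_concern=WriteConcern(w="majority", j=True, wtimeout=5000),
    read_concern=ReadConcern("majority"),
)

orders.insert_one({"order_id": "A-1001", "status": "created"})
print(orders.find_one({"order_id": "A-1001"}))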

Whether you're building resilient e-commerce platforms, financial applications, or global content delivery systems, MongoDB's replica sets with QueryLeaf's familiar SQL interface provide the foundation for mission-critical high availability infrastructure.

QueryLeaf Integration: QueryLeaf automatically manages MongoDB replica set operations while providing SQL-familiar syntax for replica set creation, health monitoring, and disaster recovery procedures. Advanced high availability patterns, automated failover testing, and comprehensive monitoring are seamlessly handled through familiar SQL constructs, making sophisticated database resilience both powerful and accessible to SQL-oriented operations teams.

The combination of MongoDB's robust replica set capabilities with SQL-style operations makes it an ideal platform for applications requiring both high availability and familiar database management patterns, ensuring your applications maintain continuous operation while remaining manageable as they scale globally.