# Storage Integration - S3 & GCS with Retry Logic

Complete storage integration for AWS S3 and Google Cloud Storage with the following features:

## Features

✅ **AWS S3 Support** - Full S3 integration with @aws-sdk/client-s3
✅ **Google Cloud Storage Support** - Full GCS integration with @google-cloud/storage
✅ **Retry Logic** - Exponential backoff retry mechanism for failed uploads/downloads/deletes
✅ **Signed URLs** - Temporary access URLs with automatic expiration handling
✅ **URL Caching** - In-memory cache with TTL to reduce repeated requests
✅ **Logging** - Detailed logging for all operations
✅ **Unified API** - Single interface supporting both providers

## Installation

```bash
cd /workspace
npm install --save @aws-sdk/client-s3 @google-cloud/storage crypto-js
```

## Environment Variables

### AWS S3
```bash
S3_ACCESS_KEY=your_aws_access_key
S3_SECRET_KEY=your_aws_secret_key
S3_REGION=us-east-1
STORAGE_BUCKET=your-bucket-name
```

### Google Cloud Storage
```bash
GCP_PROJECT_ID=your-project-id
GCS_BUCKET=your-bucket-name
GCP_KEY_FILE=/path/to/service-account-key.json  # Optional
```

## Services

### 1. Logger Service (`src/services/Logger.js`)
Utility logger for file and console output:
- ERROR, WARN, INFO, DEBUG levels
- Auto file logging to `/workspace/logs/`
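
The service's internals aren't reproduced in this doc; a minimal sketch of a leveled logger with console and file output, following the conventions above (names and format are illustrative, the real `Logger.js` may differ):

```javascript
// Minimal sketch of a leveled logger with console + file output.
// The line format and makeLogger() helper are illustrative, not the
// actual Logger.js implementation.
import fs from 'fs';
import path from 'path';

const LEVELS = ['ERROR', 'WARN', 'INFO', 'DEBUG'];

function formatLine(level, service, message) {
  return `[${new Date().toISOString()}] [${level}] [${service}] ${message}`;
}

function makeLogger(service, logDir = '/workspace/logs') {
  const logFile = path.join(logDir, `${service}.log`);
  const log = (level, message) => {
    const line = formatLine(level, service, message);
    console.log(line);                         // real-time console output
    fs.mkdirSync(logDir, { recursive: true }); // ensure the log dir exists
    fs.appendFileSync(logFile, line + '\n');   // append to the service log
  };
  // Expose logger.error(), logger.warn(), logger.info(), logger.debug()
  return Object.fromEntries(
    LEVELS.map((l) => [l.toLowerCase(), (msg) => log(l, msg)])
  );
}
```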

### 2. S3 Storage Service (`src/services/S3StorageService.js`)
AWS S3-specific operations:
- `uploadToS3()` - Upload with retry
- `getSignedUploadUrl()` - Generate upload URL
- `getSignedDownloadUrl()` - Generate download URL
- `deleteFromS3()` - Delete file with retry
- `generateFileKey()` - Create unique keys
- `getCacheStats()` - Cache metrics
- `clearExpiredCache()` - Cleanup expired URLs
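
The key format shown later in this doc (`uploads/my-video-1234567890-abc123.mp4`) suggests `generateFileKey()` combines the base name, a timestamp, and a random suffix; a hypothetical sketch, not the actual implementation:

```javascript
// Hypothetical sketch of generateFileKey(), inferred from the
// "uploads/my-video-1234567890-abc123.mp4" key format in this doc.
import path from 'path';
import crypto from 'crypto';

function generateFileKey(filename, prefix = 'uploads') {
  const ext = path.extname(filename);        // e.g. ".mp4"
  const base = path.basename(filename, ext); // e.g. "my-video"
  const timestamp = Date.now();              // uniqueness across time
  const random = crypto.randomBytes(3).toString('hex'); // uniqueness within one ms
  return `${prefix}/${base}-${timestamp}-${random}${ext}`;
}
```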

### 3. GCS Storage Service (`src/services/GCSStorageService.js`)
Google Cloud Storage operations:
- `uploadToGCS()` - Upload with retry
- `getSignedUploadUrl()` - Generate upload URL
- `getSignedDownloadUrl()` - Generate download URL
- `deleteFromGCS()` - Delete file with retry
- `generateFileKey()` - Create unique keys
- `getCacheStats()` - Cache metrics
- `clearExpiredCache()` - Cleanup expired URLs

### 4. Unified Storage Service (`src/services/StorageServiceUnified.js`)
Single API for both providers:
- Auto-detects configured provider (S3 or GCS)
- `uploadFile()` - Upload to configured storage
- `getUploadUrl()` - Get signed upload URL
- `getDownloadUrl()` - Get signed download URL
- `deleteFile()` - Delete from storage
- `getCacheStats()` - Get cache statistics
- `clearExpiredCache()` - Clear expired URLs
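
Provider auto-detection presumably keys off the environment variables documented above; a sketch of that logic, under the assumption that S3 takes precedence when both providers are configured:

```javascript
// Sketch of provider auto-detection from the environment variables
// documented above. The S3-first precedence is an assumption.
function detectProvider(env = process.env) {
  if (env.S3_ACCESS_KEY && env.S3_SECRET_KEY && env.STORAGE_BUCKET) {
    return 's3';
  }
  if (env.GCP_PROJECT_ID && env.GCS_BUCKET) {
    return 'gcs';
  }
  throw new Error('No storage provider configured (set S3_* or GCP_*/GCS_* variables)');
}
```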

## REST API Routes (`src/routes/storage.js`)

All endpoints are mounted under `/api/storage/`.

### POST `/api/storage/upload-url`
Get a signed URL for file upload
```json
{
  "filename": "my-video.mp4",
  "expiresInMinutes": 60,
  "forceRefresh": false
}
```

Response:
```json
{
  "success": true,
  "data": {
    "provider": "s3",
    "url": "https://...",
    "bucket": "...",
    "key": "uploads/my-video-1234567890-abc123.mp4"
  }
}
```

### POST `/api/storage/upload`
Upload a file directly (base64-encoded body)
```json
{
  "filename": "my-video.mp4",
  "data": "base64_encoded_data",
  "contentType": "video/mp4",
  "metadata": {"description": "My video"}
}
```
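
The `data` field carries the raw file bytes base64-encoded; building such a request body in Node might look like this (the `buildUploadBody` helper is illustrative, not part of the service):

```javascript
// Building the JSON body expected by POST /api/storage/upload.
// The helper name is illustrative; the field shape follows the doc above.
function buildUploadBody(filename, buffer, contentType, metadata = {}) {
  return {
    filename,
    data: buffer.toString('base64'), // raw bytes, base64-encoded
    contentType,
    metadata,
  };
}

const body = buildUploadBody(
  'hello.txt',
  Buffer.from('hello world'),
  'text/plain',
  { description: 'example' }
);
// body.data is "aGVsbG8gd29ybGQ="
```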

### POST `/api/storage/download-url`
Get a signed download URL
```json
{
  "fileKey": "uploads/my-video-1234567890-abc123.mp4",
  "expiresInMinutes": 60,
  "forceRefresh": false
}
```

### POST `/api/storage/delete`
Delete a file
```json
{
  "fileKey": "uploads/my-video-1234567890-abc123.mp4"
}
```

### GET `/api/storage/cache-stats`
Get cache statistics
```json
{
  "success": true,
  "data": {
    "provider": "s3",
    "stats": {
      "totalCached": 5,
      "cacheEntries": [...]
    }
  }
}
```

### POST `/api/storage/clear-cache`
Clear expired cache entries
```json
{
  "success": true,
  "data": {
    "provider": "s3",
    "removed": 2,
    "remaining": 3
  }
}
```

### GET `/api/storage/health`
Health check
```json
{
  "success": true,
  "status": "healthy",
  "provider": "s3",
  "timestamp": "2024-05-01T20:30:00.000Z"
}
```

## Retry Logic

All operations use exponential backoff:
- **Max retries**: 3 attempts
- **Initial delay**: 1000ms
- **Backoff multiplier**: 2 — waits of 1s, then 2s between the three attempts

Example flow:
1. First attempt fails
2. Wait 1 second
3. Second attempt fails
4. Wait 2 seconds
5. Third attempt (final)
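
The flow above can be sketched as a small retry helper; the actual service code isn't reproduced here, and the parameter names are illustrative:

```javascript
// Sketch of the exponential-backoff retry described above:
// 3 attempts, 1000ms initial delay, multiplier of 2.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function withRetry(operation, { maxRetries = 3, initialDelayMs = 1000, multiplier = 2 } = {}) {
  let delay = initialDelayMs;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await operation(attempt);
    } catch (err) {
      if (attempt === maxRetries) throw err; // final attempt: propagate the error
      await sleep(delay);
      delay *= multiplier; // 1s, then 2s
    }
  }
}
```

An operation that fails transiently twice then succeeds would resolve on the third attempt, after roughly 3 seconds of accumulated waiting.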

## Signed URL Caching

URLs are cached in-memory with automatic expiration:
- **Cache key**: filename (upload) or `download:filename` (download)
- **Refresh time**: 1 minute before expiration
- **Auto-cleanup**: On `clearExpiredCache()` call
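
The caching behavior above can be sketched as an in-memory map that stops serving a URL once it is within one minute of expiring (the `createUrlCache` factory and its names are illustrative, not the service's actual code):

```javascript
// Sketch of the signed-URL cache described above: a cached URL is
// reused until it is within one minute of expiring, then regenerated.
const REFRESH_WINDOW_MS = 60 * 1000; // refresh 1 minute before expiry

function createUrlCache(now = Date.now) {
  const cache = new Map(); // key -> { url, expiresAt }

  return {
    get(key) {
      const entry = cache.get(key);
      if (!entry) return null;
      if (entry.expiresAt - now() <= REFRESH_WINDOW_MS) {
        cache.delete(key); // too close to expiry: caller must fetch a fresh URL
        return null;
      }
      return entry.url;
    },
    set(key, url, ttlMs) {
      cache.set(key, { url, expiresAt: now() + ttlMs });
    },
    clearExpired() {
      let removed = 0;
      for (const [key, entry] of cache) {
        if (entry.expiresAt <= now()) {
          cache.delete(key);
          removed++;
        }
      }
      return { removed, remaining: cache.size };
    },
  };
}
```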

## Usage Example

```javascript
import StorageServiceUnified from './services/StorageServiceUnified.js';

// Upload a file
const uploadResult = await StorageServiceUnified.uploadFile(
  'my-video.mp4',
  videoBuffer,
  'video/mp4',
  { title: 'My Video' }
);
console.log(uploadResult.key);

// Get upload URL
const uploadUrl = await StorageServiceUnified.getUploadUrl('my-video.mp4');
console.log(uploadUrl.url);

// Get download URL
const downloadUrl = await StorageServiceUnified.getDownloadUrl(uploadResult.key);
console.log(downloadUrl.url);

// Delete file
await StorageServiceUnified.deleteFile(uploadResult.key);

// Check cache
const stats = StorageServiceUnified.getCacheStats();
console.log(stats);
```

## Logging

Logs are written to:
- Console: Real-time output
- File: `/workspace/logs/Logger.log`
- Service logs: `/workspace/logs/S3Storage.log`, `/workspace/logs/GCSStorage.log`, etc.

## Integration with Express

```javascript
import express from 'express';
import storageRoutes from './routes/storage.js';

const app = express();
app.use(express.json());
app.use('/api/storage', storageRoutes);

app.listen(3000, () => {
  console.log('Server running on http://localhost:3000');
});
```

## Files Created

- `/src/services/Logger.js` - Logging utility
- `/src/services/S3StorageService.js` - AWS S3 implementation
- `/src/services/GCSStorageService.js` - Google Cloud Storage implementation
- `/src/services/StorageServiceUnified.js` - Unified API
- `/src/routes/storage.js` - Express routes
- `/docs/STORAGE-INTEGRATION.md` - This file

## Next Steps

1. Configure environment variables (S3 or GCS credentials)
2. Update main Express server to mount storage routes
3. Test endpoints with curl/Postman
4. Integrate with video generation service
