Gira Attachment System - Comprehensive Documentation¶
Table of Contents¶
- 1. User Guide
- 1.1 Overview
- 1.2 Getting Started
- 1.3 Command Reference
- 1.4 Common Workflows
- 1.5 Best Practices
- 2. Cloud Storage Provider Setup
- 2.1 Amazon S3
- 2.2 Cloudflare R2
- 2.3 Backblaze B2
- 2.4 Google Cloud Storage
- 2.5 Azure Blob Storage
- 3. Security and Credentials
- 3.1 Credential Management
- 3.2 Access Control
- 3.3 Encryption
- 3.4 Audit Logging
- 4. Administration Guide
- 4.1 Troubleshooting
- 4.2 Performance Tuning
- 4.3 Migration from Other Systems
- 4.4 Backup and Recovery
- 5. Developer Documentation
- 5.1 Architecture Overview
- 5.2 Provider Plugin System
- 5.3 API Reference
- 5.4 Extension Points
- 6. Cookbook and Recipes
- 6.1 Automation Scripts
- 6.2 CI/CD Integration
- 6.3 Git Hook Examples
- 6.4 Bulk Operations
- 7. CLI Command Reference
1. User Guide¶
1.1 Overview¶
Gira's attachment system provides a powerful, flexible way to associate files with tickets and epics while keeping your Git repository lean. The system supports two primary storage modes:
Cloud Storage (Recommended)¶
- Small YAML pointer files stored in Git (under `.gira/attachments/`)
- Actual file content stored in cloud storage providers
- Supports: Amazon S3, Google Cloud Storage, Azure Blob Storage, Cloudflare R2, Backblaze B2
- Benefits: Fast repository clones, unlimited storage capacity, enterprise-grade security
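To illustrate the split between Git and cloud storage, a pointer file might look like the following. The field names and path here are illustrative assumptions, not Gira's actual schema:

```yaml
# .gira/attachments/PROJ-123/screenshot.png.yaml (hypothetical path and schema)
filename: screenshot.png
size: 48213
sha256: 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
storage_key: attachments/PROJ-123/screenshot.png
uploaded_at: "2024-07-28T14:32:00Z"
note: "Login error on mobile"
```

Only small records like this are versioned in Git; the file bytes live in the configured provider, which is why clones stay fast.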
Git LFS Integration¶
- Files stored directly in the repository using Git Large File Storage
- No external dependencies or cloud accounts required
- Benefits: Simpler setup, unified storage, works offline
Key Features¶
✅ Multiple File Upload: Add dozens of files at once with patterns and filters
✅ Directory Upload: Upload entire directories with include/exclude patterns
✅ Wildcard Support: Use glob patterns for precise file selection
✅ Multiple Storage Providers: Choose the best provider for your needs
✅ Metadata Tracking: Automatic file type detection, checksums, and timestamps
✅ Version Control: Full history of attachment changes in Git
✅ AI Agent Friendly: Commands designed for automation and scripting
✅ Security: Encryption, access control, and audit logging support
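The metadata tracking feature combines MIME-type detection, content hashing, and timestamps. A minimal sketch of that kind of bookkeeping, using only the standard library (the function name and returned fields are illustrative, not Gira's internal API):

```python
import hashlib
import mimetypes
from datetime import datetime, timezone

def detect_metadata(filename: str, content: bytes) -> dict:
    """Guess the MIME type from the filename, hash the content, and timestamp the record."""
    mime, _ = mimetypes.guess_type(filename)
    return {
        "content_type": mime or "application/octet-stream",
        "sha256": hashlib.sha256(content).hexdigest(),
        "size": len(content),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

meta = detect_metadata("screenshot.png", b"fake image bytes")
print(meta["content_type"])  # image/png
```

Because the checksum is stored alongside the pointer, a later download can be verified against the recorded hash.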
1.2 Getting Started¶
Prerequisites¶
- Gira Project: Ensure you're in a Gira-enabled project directory
- Storage Provider: Choose and configure a storage provider
- Credentials: Set up authentication for your chosen provider
Quick Setup¶
Option 1: Git LFS (Simplest)
# Install Git LFS if not already installed
# macOS: brew install git-lfs
# Ubuntu: sudo apt-get install git-lfs
# Windows: choco install git-lfs
# Initialize Git LFS in your repository
git lfs install
# Configure Gira to use Git LFS
gira storage configure --provider git-lfs
# Test the configuration
gira storage test-connection
Option 2: Cloud Storage (Recommended for Teams)
# Interactive setup wizard
gira storage configure
# Follow prompts to select provider and enter credentials
# Example output:
# ? Select storage provider: Amazon S3
# ? Bucket name: my-gira-attachments
# ? Region: us-east-1
# ? Access Key ID: [enter your key]
# ? Secret Access Key: [hidden input]
# Test the configuration
gira storage test-connection
First Attachment¶
Once configured, adding your first attachment is simple:
# Add a screenshot to a ticket
gira attachment add PROJ-123 screenshot.png --note "Login error on mobile"
# Verify it was added
gira attachment list PROJ-123
1.3 Command Reference¶
Core Commands¶
Command | Purpose | Example
---|---|---
`gira attachment add` | Attach files to tickets/epics | `gira attachment add PROJ-123 file.pdf`
`gira attachment list` | List attachments for an entity | `gira attachment list PROJ-123`
`gira attachment download` | Download attachments | `gira attachment download PROJ-123 file.pdf`
`gira attachment remove` | Remove attachments | `gira attachment remove PROJ-123 file.pdf`
`gira attachment cat` | Display text file contents | `gira attachment cat PROJ-123 log.txt`
`gira attachment open` | Open files with system apps | `gira attachment open PROJ-123 design.pdf`
Storage Management¶
Command | Purpose | Example
---|---|---
`gira storage configure` | Set up storage provider | `gira storage configure --provider s3`
`gira storage test-connection` | Test storage connectivity | `gira storage test-connection`
`gira storage show-config` | Display current configuration | `gira storage show-config`
1.4 Common Workflows¶
Bug Reporting Workflow¶
When documenting bugs, you typically need multiple types of evidence:
# 1. Screenshot of the error
gira attachment add BUG-101 error-screenshot.png \
--note "Error dialog showing 'Connection timeout' message"
# 2. Browser console logs
gira attachment add BUG-101 console-logs.json \
--note "Browser console output captured at 14:32 UTC"
# 3. Server logs from the same time period
gira attachment add BUG-101 ./server-logs/ \
--include "*.log" \
--exclude "*debug.log" \
--note "Server logs from 14:30-14:35 UTC showing 500 errors"
# 4. Network trace
gira attachment add BUG-101 network-trace.har \
--note "HAR file showing failed API requests"
# 5. Verify all attachments
gira attachment list BUG-101
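The `--include`/`--exclude` filters above behave like shell glob matching: a file must match at least one include pattern (when any are given) and no exclude pattern. A sketch of that selection logic, assuming standard glob semantics rather than Gira's exact implementation:

```python
from fnmatch import fnmatch

def select_files(filenames, include=None, exclude=None):
    """Keep files matching any include pattern, then drop any matching an exclude pattern."""
    selected = []
    for name in filenames:
        if include and not any(fnmatch(name, pat) for pat in include):
            continue  # include list given, but nothing matched
        if exclude and any(fnmatch(name, pat) for pat in exclude):
            continue  # explicitly excluded
        selected.append(name)
    return selected

files = ["app.log", "error.log", "app-debug.log", "notes.txt"]
print(select_files(files, include=["*.log"], exclude=["*debug.log"]))
# ['app.log', 'error.log']
```

This mirrors the server-logs example above: `*.log` pulls in the logs, while `*debug.log` trims the noisy ones.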
Feature Development Workflow¶
For feature development, organize design assets and specifications:
# 1. Design mockups and assets
gira attachment add FEAT-200 ./design-assets/ \
--include "*.fig" "*.sketch" "*.png" "*.svg" \
--note "UI mockups for new dashboard feature"
# 2. Technical specifications
gira attachment add FEAT-200 api-spec.yaml technical-design.md \
--note "API specification and technical design document"
# 3. Test data and scenarios
gira attachment add FEAT-200 ./test-data/ \
--include "*.json" "*.csv" \
--exclude "*-temp.*" \
--note "Test data for dashboard analytics feature"
# 4. Performance benchmarks
gira attachment add FEAT-200 performance-baseline.txt \
--note "Baseline metrics before optimization"
Code Review Workflow¶
Supporting code reviews with evidence and documentation:
# 1. Before/after performance data
gira attachment add REVIEW-50 ./benchmarks/ \
--include "*.txt" "*.csv" \
--note "Performance comparison: before vs after optimization"
# 2. Test coverage reports
gira attachment add REVIEW-50 coverage-report.html \
--note "Coverage increased from 72% to 89%"
# 3. Security scan results
gira attachment add REVIEW-50 security-scan.json \
--note "Static analysis results - no new vulnerabilities"
# 4. Documentation updates
gira attachment add REVIEW-50 updated-docs.pdf \
--note "Updated API documentation with new endpoints"
1.5 Best Practices¶
File Naming Conventions¶
Use descriptive, structured filenames:
# ✅ Good - includes date, context, and type
gira attachment add PROJ-123 2024-07-28-mobile-login-error-chrome.png
gira attachment add PROJ-123 2024-07-28-api-response-logs.json
gira attachment add PROJ-123 2024-07-28-performance-profile-production.json
# ❌ Avoid - vague and non-descriptive
gira attachment add PROJ-123 screenshot1.png
gira attachment add PROJ-123 logs.txt
gira attachment add PROJ-123 data.json
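A small helper can enforce the date-context-type convention in automation scripts. The helper itself is illustrative; only the naming scheme comes from the examples above:

```python
from datetime import date

def attachment_name(context, kind, extension, when=None):
    """Build a YYYY-MM-DD-context-kind.ext filename following the convention above."""
    when = when or date.today()
    return f"{when.isoformat()}-{context}-{kind}.{extension}"

print(attachment_name("mobile-login", "error-chrome", "png", date(2024, 7, 28)))
# 2024-07-28-mobile-login-error-chrome.png
```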
Organization Strategies¶
Group related files:
# Create organized directory structure
mkdir bug-analysis
cd bug-analysis
mkdir screenshots logs traces profiles
# Add files to appropriate directories
cp error1.png screenshots/
cp server.log logs/
cp network.har traces/
# Upload the organized structure
gira attachment add BUG-123 ./bug-analysis/ \
--note "Complete bug analysis package"
Note and Context Best Practices¶
Always include detailed notes:
# ✅ Excellent - provides full context
gira attachment add PROJ-123 heap-dump-20240728-1432.hprof \
--note "Memory dump captured during OOM error. 8GB heap, 2000 concurrent users, production server web-01"
# ✅ Good - includes relevant details
gira attachment add PROJ-123 api-response.json \
--note "Failed API response from /auth/login endpoint, status 500"
# ⚠️ Minimal but acceptable
gira attachment add PROJ-123 screenshot.png \
--note "Login error dialog"
# ❌ Avoid - no context
gira attachment add PROJ-123 file.pdf
File Size Considerations¶
Optimize for performance:
# For large files, consider compression
gzip large-logfile.log
gira attachment add PROJ-123 large-logfile.log.gz \
--note "Compressed server logs (original 50MB)"
# For images, optimize resolution
# Use tools like imagemagick to reduce size
convert screenshot.png -quality 85 -resize 1920x1080 screenshot-optimized.png
gira attachment add PROJ-123 screenshot-optimized.png \
--note "UI error screenshot (optimized for web)"
# For development files, exclude unnecessary items
gira attachment add PROJ-123 ./project-backup/ \
--exclude "node_modules/" "*.tmp" ".DS_Store" \
--note "Project snapshot excluding build artifacts"
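Whether compression is worth the extra step can be checked in a few lines before attaching. A standard-library sketch (the helper is illustrative, not part of Gira):

```python
import gzip

def compression_summary(content: bytes) -> str:
    """Report how much smaller a payload gets under gzip."""
    compressed = gzip.compress(content)
    ratio = len(compressed) / len(content)
    return f"{len(content)} -> {len(compressed)} bytes ({ratio:.0%} of original)"

# Repetitive log data compresses extremely well; binary images usually do not.
log_data = b"2024-07-28 14:32:01 ERROR Connection timeout\n" * 10_000
print(compression_summary(log_data))
```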
2. Cloud Storage Provider Setup¶
2.1 Amazon S3¶
Amazon S3 is the most widely supported object storage service, offering excellent performance, durability, and global availability.
Prerequisites¶
- AWS Account with S3 access
- S3 Bucket for storing attachments
- IAM User with programmatic access
Step 1: Create S3 Bucket¶
# Using AWS CLI
aws s3 mb s3://my-gira-attachments --region us-east-1
# Or use the AWS Console:
# 1. Go to S3 Console
# 2. Click "Create bucket"
# 3. Name: my-gira-attachments
# 4. Region: Choose closest to your team
# 5. Keep default settings for now
Step 2: Create IAM Policy¶
Create a policy with minimal required permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::my-gira-attachments",
"arn:aws:s3:::my-gira-attachments/*"
]
}
]
}
Step 3: Create IAM User¶
# Create user
aws iam create-user --user-name gira-attachments
# Attach policy (replace with your policy ARN)
aws iam attach-user-policy \
--user-name gira-attachments \
--policy-arn arn:aws:iam::123456789012:policy/GiraAttachmentsPolicy
# Create access key
aws iam create-access-key --user-name gira-attachments
Step 4: Configure Gira¶
# Interactive configuration
gira storage configure --provider s3
# Or set environment variables and configure non-interactively
export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key
export AWS_DEFAULT_REGION=us-east-1
gira storage configure \
--provider s3 \
--bucket my-gira-attachments \
--region us-east-1
Step 5: Test Configuration¶
# Test connection
gira storage test-connection
# Expected output:
# ✅ Connection successful
# ✅ Bucket accessible: my-gira-attachments
# ✅ Write permissions: OK
# ✅ Read permissions: OK
Cost Optimization Tips¶
Storage Classes:
# Configure lifecycle rules to reduce costs
aws s3api put-bucket-lifecycle-configuration \
--bucket my-gira-attachments \
--lifecycle-configuration file://lifecycle.json
Example lifecycle policy (`lifecycle.json`):
{
"Rules": [
{
"ID": "GiraAttachmentLifecycle",
"Status": "Enabled",
"Transitions": [
{
"Days": 30,
"StorageClass": "STANDARD_IA"
},
{
"Days": 90,
"StorageClass": "GLACIER"
}
]
}
]
}
2.2 Cloudflare R2¶
Cloudflare R2 offers S3-compatible storage with zero egress fees, making it cost-effective for teams that frequently download attachments.
Prerequisites¶
- Cloudflare Account with R2 enabled
- R2 Bucket for attachments
- R2 API Token with appropriate permissions
Step 1: Create R2 Bucket¶
- Go to Cloudflare Dashboard → R2 Object Storage
- Click "Create bucket"
- Name: `my-gira-attachments`
- Location: Choose based on your team's location
Step 2: Create API Token¶
- Go to Cloudflare Dashboard → My Profile → API Tokens
- Click "Create Token"
- Use "Custom token" template:
- Token name: Gira Attachments
- Permissions:
- Account - Cloudflare R2:Edit
- Account Resources: Include - Your Account
- Zone Resources: Include - All zones
Step 3: Configure Gira¶
# Configure R2 (S3-compatible)
gira storage configure --provider s3
# When prompted:
# Endpoint URL: https://your-account-id.r2.cloudflarestorage.com
# Access Key ID: [from R2 API token]
# Secret Access Key: [from R2 API token]
# Bucket: my-gira-attachments
Step 4: Environment Variables¶
# Set R2 credentials
export AWS_ACCESS_KEY_ID=your-r2-access-key
export AWS_SECRET_ACCESS_KEY=your-r2-secret-key
export AWS_ENDPOINT_URL=https://your-account-id.r2.cloudflarestorage.com
export AWS_DEFAULT_REGION=auto
# Test configuration
gira storage test-connection
Cost Benefits¶
- Zero egress fees - No charges for downloading files
- Competitive storage pricing - Often lower than S3
- Global edge network - Fast access worldwide
2.3 Backblaze B2¶
Backblaze B2 is among the most cost-effective cloud storage options, making it well suited to teams with large attachment volumes.
Prerequisites¶
- Backblaze Account
- B2 Bucket for attachments
- Application Key with bucket access
Step 1: Create B2 Bucket¶
- Go to Backblaze B2 Console
- Click "Create a Bucket"
- Bucket Name: `my-gira-attachments`
- Files in Bucket are: Private
- Default Encryption: Enabled (recommended)
Step 2: Create Application Key¶
- Go to App Keys section
- Click "Add a New Application Key"
- Key Name: Gira Attachments
- Allow access to Bucket(s): Select your bucket
- Type of Access: Read and Write
Step 3: Configure Gira¶
# Configure B2 (requires custom endpoint)
gira storage configure --provider b2
# When prompted:
# Account ID: [from B2 account]
# Application Key: [from step 2]
# Bucket Name: my-gira-attachments
Step 4: Test and Verify¶
# Test B2 connection
gira storage test-connection
# Upload test file
echo "Test content" > test.txt
gira attachment add TEST-1 test.txt --note "B2 connection test"
# Verify upload
gira attachment list TEST-1
Cost Analysis¶
Storage Pricing (as of 2024):
- First 10 GB: Free
- Additional storage: $0.005/GB/month
- Downloads: Free up to 3x stored data per month

Example monthly costs:
- 100 GB storage: $0.50/month
- 1 TB storage: $5.00/month
- 10 TB storage: $50.00/month
2.4 Google Cloud Storage¶
Google Cloud Storage provides excellent integration with Google Workspace and strong consistency guarantees.
Prerequisites¶
- Google Cloud Project with billing enabled
- Cloud Storage API enabled
- Service Account with Storage permissions
Step 1: Create GCS Bucket¶
# Using gcloud CLI
gsutil mb -p your-project-id -c STANDARD -l us-central1 gs://my-gira-attachments
# Or use Cloud Console:
# 1. Go to Cloud Storage → Browser
# 2. Click "Create bucket"
# 3. Name: my-gira-attachments
# 4. Location: Choose region closest to team
# 5. Storage class: Standard
Step 2: Create Service Account¶
# Create service account
gcloud iam service-accounts create gira-attachments \
--display-name="Gira Attachments Service Account"
# Grant Storage Object Admin role
gcloud projects add-iam-policy-binding your-project-id \
--member="serviceAccount:gira-attachments@your-project-id.iam.gserviceaccount.com" \
--role="roles/storage.objectAdmin"
# Create and download key file
gcloud iam service-accounts keys create gira-key.json \
--iam-account=gira-attachments@your-project-id.iam.gserviceaccount.com
Step 3: Configure Gira¶
# Set service account key
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/gira-key.json
# Configure Gira
gira storage configure --provider gcs
# When prompted:
# Project ID: your-project-id
# Bucket Name: my-gira-attachments
Step 4: Test Configuration¶
# Verify setup
gira storage test-connection
# Test upload
gira attachment add TEST-1 sample-file.txt --note "GCS test"
Security Best Practices¶
Bucket-level IAM:
# Grant specific permissions to bucket
gsutil iam ch serviceAccount:gira-attachments@your-project-id.iam.gserviceaccount.com:objectAdmin gs://my-gira-attachments
Uniform bucket-level access:
# Enable uniform bucket-level access for better security
gsutil ubla set on gs://my-gira-attachments
2.5 Azure Blob Storage¶
Azure Blob Storage integrates well with Microsoft environments and offers excellent enterprise features.
Prerequisites¶
- Azure Subscription
- Storage Account
- Container for attachments
- Access Key or Service Principal
Step 1: Create Storage Account¶
# Using Azure CLI
az storage account create \
--name girastorageaccount \
--resource-group your-resource-group \
--location eastus \
--sku Standard_LRS
# Create container
az storage container create \
--name gira-attachments \
--account-name girastorageaccount
Step 2: Get Access Keys¶
# List account keys
az storage account keys list \
--resource-group your-resource-group \
--account-name girastorageaccount
# Note the key1 value for configuration
Step 3: Configure Gira¶
# Configure Azure Blob Storage
gira storage configure --provider azure
# When prompted:
# Account Name: girastorageaccount
# Account Key: [from step 2]
# Container Name: gira-attachments
Step 4: Alternative - Service Principal¶
For enhanced security, use a service principal:
# Create service principal
az ad sp create-for-rbac --name gira-attachments
# Assign Storage Blob Data Contributor role
az role assignment create \
--assignee your-service-principal-id \
--role "Storage Blob Data Contributor" \
--scope /subscriptions/your-subscription-id/resourceGroups/your-resource-group/providers/Microsoft.Storage/storageAccounts/girastorageaccount
# Configure Gira with service principal
export AZURE_CLIENT_ID=your-client-id
export AZURE_CLIENT_SECRET=your-client-secret
export AZURE_TENANT_ID=your-tenant-id
gira storage configure --provider azure --auth-method service-principal
3. Security and Credentials¶
3.1 Credential Management¶
Proper credential management is crucial for maintaining security while enabling team collaboration.
Environment Variables (Recommended)¶
AWS S3/R2:
# ~/.bashrc or ~/.zshrc
export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key
export AWS_DEFAULT_REGION=us-east-1
# For R2, also set:
export AWS_ENDPOINT_URL=https://your-account-id.r2.cloudflarestorage.com
Google Cloud Storage:
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
export GOOGLE_CLOUD_PROJECT=your-project-id
Azure Blob Storage:
export AZURE_STORAGE_ACCOUNT=your-storage-account
export AZURE_STORAGE_KEY=your-storage-key
# Or for service principal:
export AZURE_CLIENT_ID=your-client-id
export AZURE_CLIENT_SECRET=your-client-secret
export AZURE_TENANT_ID=your-tenant-id
Credential Files¶
AWS Credentials File (`~/.aws/credentials`):
[default]
aws_access_key_id = your-access-key
aws_secret_access_key = your-secret-key
[gira]
aws_access_key_id = gira-specific-key
aws_secret_access_key = gira-specific-secret
Google Cloud Service Account Key:
# Store service account key securely
chmod 600 /path/to/service-account-key.json
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
Team Credential Sharing¶
Using AWS IAM Roles (Recommended):
# Each team member has their own IAM user
# All users are assigned to a group with the same policy
aws iam create-group --group-name gira-users
aws iam attach-group-policy \
--group-name gira-users \
--policy-arn arn:aws:iam::123456789012:policy/GiraAttachmentsPolicy
# Add users to group
aws iam add-user-to-group --group-name gira-users --user-name alice
aws iam add-user-to-group --group-name gira-users --user-name bob
3.2 Access Control¶
Principle of Least Privilege¶
S3 Bucket Policy Example:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "GiraAttachmentAccess",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789012:group/gira-users"
},
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::my-gira-attachments/*"
},
{
"Sid": "GiraListBucket",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789012:group/gira-users"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::my-gira-attachments"
}
]
}
Path-Based Access Control¶
Organize attachments with path prefixes to enable granular access:
# Configure Gira to use project-based prefixes
gira storage configure --path-prefix "project-alpha/"
# Different teams can have different prefixes:
# Team Alpha: project-alpha/
# Team Beta: project-beta/
# Shared: shared/
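Assuming object keys are composed as prefix + ticket + filename (an illustrative layout, not necessarily Gira's exact key scheme), the prefixing logic amounts to:

```python
def object_key(path_prefix: str, ticket_id: str, filename: str) -> str:
    """Compose a storage key under a team prefix, e.g. project-alpha/PROJ-123/file.pdf."""
    prefix = path_prefix.rstrip("/")
    if prefix:
        return f"{prefix}/{ticket_id}/{filename}"
    return f"{ticket_id}/{filename}"

print(object_key("project-alpha/", "PROJ-123", "design.pdf"))
# project-alpha/PROJ-123/design.pdf
```

With keys shaped like this, an IAM policy can grant Team Alpha access only to `arn:aws:s3:::my-gira-attachments/project-alpha/*`.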
Read-Only Access for CI/CD¶
Create read-only credentials for CI/CD systems:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::my-gira-attachments",
"arn:aws:s3:::my-gira-attachments/*"
]
}
]
}
3.3 Encryption¶
Encryption at Rest¶
S3 Server-Side Encryption:
# Enable default encryption on bucket
aws s3api put-bucket-encryption \
--bucket my-gira-attachments \
--server-side-encryption-configuration '{
"Rules": [
{
"ApplyServerSideEncryptionByDefault": {
"SSEAlgorithm": "AES256"
}
}
]
}'
Using AWS KMS:
aws s3api put-bucket-encryption \
--bucket my-gira-attachments \
--server-side-encryption-configuration '{
"Rules": [
{
"ApplyServerSideEncryptionByDefault": {
"SSEAlgorithm": "aws:kms",
"KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/your-key-id"
}
}
]
}'
Encryption in Transit¶
All cloud providers use HTTPS/TLS by default. Ensure your configuration enforces secure connections:
# Gira automatically uses HTTPS endpoints
# Verify with:
gira storage show-config
# Should show HTTPS endpoints like:
# Endpoint: https://s3.amazonaws.com
Client-Side Encryption¶
For additional security, implement client-side encryption before upload:
# Example: Encrypt files before attaching
gpg --cipher-algo AES256 --compress-algo 1 --symmetric sensitive-file.pdf
gira attachment add PROJ-123 sensitive-file.pdf.gpg \
--note "Encrypted sensitive document (GPG)"
3.4 Audit Logging¶
Enable Cloud Provider Logging¶
AWS CloudTrail:
# Create CloudTrail for S3 API calls
aws cloudtrail create-trail \
--name gira-attachments-trail \
--s3-bucket-name gira-audit-logs
# Enable logging for S3 data events
aws cloudtrail put-event-selectors \
--trail-name gira-attachments-trail \
--event-selectors '[
{
"ReadWriteType": "All",
"IncludeManagementEvents": false,
"DataResources": [
{
"Type": "AWS::S3::Object",
"Values": ["arn:aws:s3:::my-gira-attachments/*"]
}
]
}
]'
Google Cloud Audit Logs:
# Enable audit logs for Cloud Storage
gcloud logging sinks create gira-audit-sink \
bigquery.googleapis.com/projects/your-project/datasets/gira_audit \
--log-filter='resource.type="gcs_bucket" resource.labels.bucket_name="my-gira-attachments"'
Local Audit Trail¶
Gira maintains local audit information in Git:
# View attachment history for a ticket
git log --oneline --grep="GCM-123" -- .gira/attachments/
# View all attachment operations
git log --oneline -- .gira/attachments/
# Detailed history with file changes
git log --stat -- .gira/attachments/GCM-123/
Compliance Considerations¶
GDPR Compliance:
- Ensure attachment deletion removes all copies
- Implement data retention policies
- Provide data export capabilities

SOC 2 / ISO 27001:
- Use encrypted storage and transmission
- Implement access controls and audit logging
- Regular security reviews and assessments
4. Administration Guide¶
4.1 Troubleshooting¶
Common Error Messages and Solutions¶
Error: "Storage provider not configured"
# Cause: No storage provider set up
# Solution: Configure storage
gira storage configure
# Check current configuration
gira storage show-config
Error: "Access denied" or "403 Forbidden"
# Cause: Insufficient permissions or expired credentials
# Solution: Verify credentials and permissions
# Test connection
gira storage test-connection
# Check AWS credentials
aws sts get-caller-identity
# Check IAM permissions
aws iam simulate-principal-policy \
--policy-source-arn arn:aws:iam::123456789012:user/gira-user \
--action-names s3:GetObject s3:PutObject \
--resource-arns arn:aws:s3:::my-gira-attachments/test-file
Error: "File not found" during download
# Cause: File doesn't exist or path mismatch
# Solution: Verify file exists and check exact filename
# List all attachments
gira attachment list PROJ-123
# Check for partial matches
gira attachment list PROJ-123 | grep -i "partial-name"
# Try downloading with wildcard
gira attachment download PROJ-123 "*partial*"
Error: "Connection timeout" or "Network error"
# Cause: Network connectivity issues
# Solution: Check network and proxy settings
# Test basic connectivity
curl -I https://s3.amazonaws.com
ping 8.8.8.8
# Check proxy settings
echo $HTTP_PROXY
echo $HTTPS_PROXY
# Configure proxy if needed
export HTTP_PROXY=http://proxy.company.com:8080
export HTTPS_PROXY=http://proxy.company.com:8080
Debug Mode¶
Enable verbose logging for detailed troubleshooting:
# Enable debug mode
export GIRA_DEBUG=1
# Run commands with detailed output
gira attachment add PROJ-123 file.txt --note "Debug test"
# Check logs
cat ~/.gira/logs/gira.log
# Disable debug mode
unset GIRA_DEBUG
Storage Provider Specific Issues¶
AWS S3:
# Check bucket region
aws s3api get-bucket-location --bucket my-gira-attachments
# Verify bucket exists and is accessible
aws s3 ls s3://my-gira-attachments/
# Test upload permissions
echo "test" | aws s3 cp - s3://my-gira-attachments/test.txt
aws s3 rm s3://my-gira-attachments/test.txt
Google Cloud Storage:
# Check service account permissions
gcloud auth list
# Test bucket access
gsutil ls gs://my-gira-attachments/
# Verify project configuration
gcloud config get-value project
Azure Blob Storage:
# Check storage account
az storage account show --name girastorageaccount
# Test container access
az storage blob list --container-name gira-attachments --account-name girastorageaccount
# Verify credentials
az storage account keys list --account-name girastorageaccount
4.2 Performance Tuning¶
Upload Performance¶
Parallel Uploads for Multiple Files:
# Gira automatically uses parallel uploads for multiple files
# Monitor progress with:
gira attachment add PROJ-123 large-dir/ --verbose
# For very large files, consider compression
gzip large-file.log
gira attachment add PROJ-123 large-file.log.gz
Bandwidth Optimization:
# Configure upload timeout and retry settings
export GIRA_UPLOAD_TIMEOUT=300 # 5 minutes
export GIRA_MAX_RETRIES=3
export GIRA_RETRY_DELAY=5 # seconds
# For slow connections, reduce concurrent uploads
export GIRA_MAX_CONCURRENT_UPLOADS=2
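The retry settings above suggest a retry-with-delay loop around each transfer. The environment variable names come from the settings shown; the loop itself is a sketch of the implied behavior, not Gira's actual code:

```python
import os
import time

def with_retries(operation, max_retries=None, delay=None):
    """Call operation(), retrying up to GIRA_MAX_RETRIES times with GIRA_RETRY_DELAY between attempts."""
    if max_retries is None:
        max_retries = int(os.environ.get("GIRA_MAX_RETRIES", "3"))
    if delay is None:
        delay = float(os.environ.get("GIRA_RETRY_DELAY", "5"))
    last_error = None
    for attempt in range(1, max_retries + 1):
        try:
            return operation()
        except OSError as exc:  # network-style failures
            last_error = exc
            if attempt < max_retries:
                time.sleep(delay)
    raise last_error

attempts = []
def flaky_upload():
    attempts.append(1)
    if len(attempts) < 3:
        raise OSError("connection reset")
    return "uploaded"

print(with_retries(flaky_upload, max_retries=3, delay=0))
# uploaded
```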
Download Performance¶
Optimize Download Strategy:
# Download specific files instead of all attachments
gira attachment download PROJ-123 "*.log" --output ./logs/
# Use parallel downloads for multiple files
gira attachment download PROJ-123 file1.txt file2.txt file3.txt
# For large files, consider streaming
gira attachment cat PROJ-123 large-file.txt | head -100
Storage Performance¶
S3 Performance Best Practices:
# Use random prefixes for high-volume uploads
# Gira automatically uses timestamp-based prefixes
# Example path: attachments/2024/07/28/14/32/GCM-123/file.txt
# Enable S3 Transfer Acceleration
aws s3api put-bucket-accelerate-configuration \
--bucket my-gira-attachments \
--accelerate-configuration Status=Enabled
Regional Considerations:
# Configure storage region closest to team
gira storage configure --region us-west-2 # West Coast team
gira storage configure --region eu-west-1 # European team
gira storage configure --region ap-southeast-1 # Asian team
4.3 Migration from Other Systems¶
From Jira Attachments¶
Export Jira Attachments:
# Use Jira REST API to export attachments
# Example script: export-jira-attachments.py
#!/usr/bin/env python3
import requests
from pathlib import Path

def export_jira_attachments(jira_url, auth, issue_key):
    """Export all attachments from a Jira issue."""
    # Get issue details
    response = requests.get(
        f"{jira_url}/rest/api/2/issue/{issue_key}",
        auth=auth
    )
    issue = response.json()
    # Download each attachment
    for attachment in issue['fields']['attachment']:
        filename = attachment['filename']
        download_url = attachment['content']
        # Download file
        file_response = requests.get(download_url, auth=auth)
        Path(f"./jira-exports/{issue_key}").mkdir(parents=True, exist_ok=True)
        with open(f"./jira-exports/{issue_key}/{filename}", 'wb') as f:
            f.write(file_response.content)
        print(f"Downloaded: {filename}")
# Usage
export_jira_attachments(
"https://company.atlassian.net",
("username", "api_token"),
"PROJ-123"
)
Import to Gira:
# After exporting from Jira
for dir in ./jira-exports/*/; do
issue_key=$(basename "$dir")
gira attachment add "$issue_key" "$dir"/* \
--note "Migrated from Jira on $(date)"
done
From GitHub Issues¶
Export GitHub Issue Attachments:
# Use GitHub CLI to export issue data
gh issue list --repo owner/repo --state all --json number,title,body > issues.json
# Extract attachment URLs from issue bodies
# GitHub embeds attachments as markdown images
python3 extract-github-attachments.py issues.json
# Download and import attachments
for issue_dir in ./github-exports/*/; do
issue_num=$(basename "$issue_dir")
gira attachment add "GH-$issue_num" "$issue_dir"/* \
--note "Migrated from GitHub issue #$issue_num"
done
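The `extract-github-attachments.py` step above is not shown; its core job is pulling attachment URLs out of issue bodies, where GitHub embeds them as markdown images or links. A hedged sketch of that extraction (the regex and function names are assumptions, not an official tool):

```python
import re

# Matches markdown images ![alt](url) and links [text](url), capturing the URL
ATTACHMENT_URL = re.compile(r'!?\[[^\]]*\]\((https?://[^)\s]+)\)')

def extract_attachment_urls(issue_body: str) -> list[str]:
    """Return all markdown image/link URLs embedded in an issue body."""
    return ATTACHMENT_URL.findall(issue_body)

body = "Steps to reproduce...\n![crash](https://user-images.githubusercontent.com/1/crash.png)"
print(extract_attachment_urls(body))
# ['https://user-images.githubusercontent.com/1/crash.png']
```

Each extracted URL can then be downloaded into a per-issue directory for the import loop above.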
From Local File Systems¶
Organize Existing Files:
# Create mapping file for existing attachments
cat > file-mapping.csv << EOF
file_path,ticket_id,description
/shared/bugs/bug-123-screenshot.png,BUG-123,Login error screenshot
/shared/designs/feature-200-mockup.fig,FEAT-200,Feature mockup
/docs/api-spec-v2.yaml,EPIC-001,API specification
EOF
# Import using the mapping
while IFS=, read -r file_path ticket_id description; do
if [[ -f "$file_path" ]]; then
gira attachment add "$ticket_id" "$file_path" --note "$description"
echo "Imported: $file_path -> $ticket_id"
fi
done < file-mapping.csv
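The shell loop above breaks if a description contains a comma; Python's csv module handles quoted fields correctly. A sketch of the same import driven from Python (the printed `gira` invocations are illustrative):

```python
import csv
import io

def read_mapping(csv_text: str):
    """Parse the file-mapping CSV, handling quoted fields that contain commas."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [(r["file_path"], r["ticket_id"], r["description"]) for r in rows]

mapping = """file_path,ticket_id,description
/shared/bugs/bug-123-screenshot.png,BUG-123,"Login error, Chrome on Android"
"""
for path, ticket, note in read_mapping(mapping):
    print(f"gira attachment add {ticket} {path} --note {note!r}")
```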
4.4 Backup and Recovery¶
Git-Based Backup¶
Since attachment metadata is stored in Git, your regular Git backups protect the attachment references:
# Ensure all attachment metadata is committed
git add .gira/attachments/
git commit -m "backup: attachment metadata snapshot"
# Push to remote repository
git push origin main
# Create backup branch
git branch backup-attachments-$(date +%Y%m%d)
git push origin backup-attachments-$(date +%Y%m%d)
Cloud Storage Backup¶
S3 Cross-Region Replication:
# Enable versioning (required for replication)
aws s3api put-bucket-versioning \
--bucket my-gira-attachments \
--versioning-configuration Status=Enabled
# Create replication configuration
aws s3api put-bucket-replication \
--bucket my-gira-attachments \
--replication-configuration file://replication.json
Example replication configuration (`replication.json`):
{
"Role": "arn:aws:iam::123456789012:role/replication-role",
"Rules": [
{
"ID": "ReplicateEverything",
"Status": "Enabled",
"Priority": 1,
"DeleteMarkerReplication": {
"Status": "Enabled"
},
"Filter": {
"Prefix": ""
},
"Destination": {
"Bucket": "arn:aws:s3:::my-gira-attachments-backup",
"StorageClass": "STANDARD_IA"
}
}
]
}
Recovery Procedures¶
Restore from Git Backup:
# Restore attachment metadata from backup branch
git checkout backup-attachments-20240728 -- .gira/attachments/
git commit -m "restore: attachment metadata from backup"
# Verify attachments are accessible
gira attachment list PROJ-123
Restore from Cloud Storage Backup:
# For S3, restore from backup bucket
aws s3 sync s3://my-gira-attachments-backup/ s3://my-gira-attachments/
# For versioned objects, restore specific version
aws s3api list-object-versions --bucket my-gira-attachments --prefix attachments/
aws s3api restore-object \
--bucket my-gira-attachments \
--key attachments/PROJ-123/file.txt \
--version-id specific-version-id
Disaster Recovery Plan:
- Assessment Phase (0-1 hour):
  - Identify scope of data loss
  - Check Git repository integrity
  - Verify cloud storage status
- Recovery Phase (1-4 hours):
  - Restore Git metadata from backups
  - Restore cloud storage files if needed
  - Verify attachment accessibility
- Verification Phase (4-6 hours):
  - Test random sample of attachments
  - Verify all critical attachments are accessible
  - Update team on recovery status
# Disaster recovery script
#!/bin/bash
set -e
echo "Starting Gira attachment disaster recovery..."
# Step 1: Backup current state
git branch disaster-recovery-backup-$(date +%Y%m%d-%H%M%S)
# Step 2: Restore metadata from latest backup
LATEST_BACKUP=$(git branch -r | grep backup-attachments | sort | tail -1)
git checkout $LATEST_BACKUP -- .gira/attachments/
# Step 3: Test critical attachments
CRITICAL_TICKETS="PROJ-1 PROJ-2 PROJ-3"
for ticket in $CRITICAL_TICKETS; do
echo "Testing $ticket..."
gira attachment list $ticket > /dev/null || echo "WARNING: $ticket failed"
done
echo "Recovery complete. Please verify manually."
5. Developer Documentation¶
5.1 Architecture Overview¶
The Gira attachment system is built on a layered architecture that separates concerns and enables extensibility:
graph TB
CLI[CLI Commands] --> Core[Core Logic]
Core --> Storage[Storage Abstraction]
Storage --> Providers[Storage Providers]
subgraph "CLI Layer"
Add[gira attachment add]
List[gira attachment list]
Download[gira attachment download]
Remove[gira attachment remove]
Cat[gira attachment cat]
Open[gira attachment open]
end
subgraph "Core Layer"
Validation[Input Validation]
Metadata[Metadata Management]
FileOps[File Operations]
GitOps[Git Integration]
end
subgraph "Storage Layer"
Interface[Storage Interface]
Factory[Provider Factory]
Config[Configuration]
end
subgraph "Provider Layer"
Local[Git LFS]
S3[Amazon S3]
GCS[Google Cloud]
Azure[Azure Blob]
R2[Cloudflare R2]
B2[Backblaze B2]
end
CLI --> Add
CLI --> List
CLI --> Download
CLI --> Remove
CLI --> Cat
CLI --> Open
Core --> Validation
Core --> Metadata
Core --> FileOps
Core --> GitOps
Storage --> Interface
Storage --> Factory
Storage --> Config
Providers --> Local
Providers --> S3
Providers --> GCS
Providers --> Azure
Providers --> R2
Providers --> B2
Key Design Principles¶
- Storage Agnostic: Core logic doesn't depend on specific storage providers
- Metadata First: All operations work through metadata stored in Git
- Incremental Adoption: Teams can start with Git LFS and migrate to cloud storage
- Git Native: Leverages Git for versioning, history, and collaboration
- Performance Oriented: Parallel operations, streaming, and caching
- Security by Default: Encrypted transmission, secure credential handling
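To make the "Metadata First" principle concrete, here is a minimal sketch (not Gira's actual implementation) of the metadata a pointer file records before anything is uploaded; the field names mirror the AttachmentPointer class in section 5.3:

```python
import hashlib
import mimetypes
import tempfile
from pathlib import Path

def build_pointer_metadata(local_path: Path) -> dict:
    """Gather the kind of metadata a pointer file records for one attachment."""
    digest = hashlib.sha256(local_path.read_bytes()).hexdigest()
    content_type, _ = mimetypes.guess_type(local_path.name)
    return {
        "filename": local_path.name,
        "size": local_path.stat().st_size,
        "content_type": content_type or "application/octet-stream",
        "checksum": f"sha256:{digest}",
    }

# Demonstrate with a throwaway file
with tempfile.TemporaryDirectory() as tmp:
    sample = Path(tmp) / "notes.txt"
    sample.write_text("hello")
    meta = build_pointer_metadata(sample)
    print(meta["size"], meta["content_type"])  # 5 text/plain
```

Because only this small dictionary lands in Git, clones stay fast regardless of how large the underlying files grow.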
Data Flow¶
Upload Process:
sequenceDiagram
participant User
participant CLI
participant Core
participant Storage
participant Git
participant Cloud
User->>CLI: gira attachment add PROJ-123 file.pdf
CLI->>Core: validate_and_process(file.pdf)
Core->>Storage: upload_file(file.pdf, metadata)
Storage->>Cloud: PUT /attachments/PROJ-123/file.pdf
Cloud-->>Storage: upload_complete
Storage-->>Core: file_info(url, checksum, size)
Core->>Git: create_pointer_file(.gira/attachments/PROJ-123/file.pdf.yaml)
Git-->>Core: file_committed
Core-->>CLI: success(file_info)
CLI-->>User: ✅ Attached file.pdf to PROJ-123
Download Process:
sequenceDiagram
participant User
participant CLI
participant Core
participant Storage
participant Git
participant Cloud
User->>CLI: gira attachment download PROJ-123 file.pdf
CLI->>Core: find_attachment(PROJ-123, file.pdf)
Core->>Git: read_pointer_file(.gira/attachments/PROJ-123/file.pdf.yaml)
Git-->>Core: attachment_metadata
Core->>Storage: download_file(url, local_path)
Storage->>Cloud: GET /attachments/PROJ-123/file.pdf
Cloud-->>Storage: file_content
Storage-->>Core: local_file_path
Core-->>CLI: success(local_path)
CLI-->>User: ✅ Downloaded to ./file.pdf
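The integrity check implied by the download flow (and exposed as `--verify` on `gira attachment download`) amounts to recomputing the file's digest and comparing it to the pointer metadata. A simplified illustration, assuming an illustrative `sha256:<hex>` checksum format rather than Gira's internal one:

```python
import hashlib
import tempfile
from pathlib import Path

def verify_download(local_path: Path, expected_checksum: str) -> bool:
    """Recompute a downloaded file's digest and compare it to the pointer's value."""
    # Split "sha256:<hex>" into algorithm name and expected digest
    algo, _, expected_hex = expected_checksum.partition(":")
    digest = hashlib.new(algo, local_path.read_bytes()).hexdigest()
    return digest == expected_hex

with tempfile.TemporaryDirectory() as tmp:
    f = Path(tmp) / "file.txt"
    f.write_bytes(b"hello")
    ok = verify_download(
        f, "sha256:2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
    )
    print(ok)  # True
```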
5.2 Provider Plugin System¶
The storage provider system uses a plugin architecture that makes it easy to add new providers:
Base Storage Interface¶
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Dict, Any, List, Optional
class StorageProvider(ABC):
"""Abstract base class for storage providers."""
@abstractmethod
def upload_file(self, local_path: Path, remote_path: str, metadata: Dict[str, Any]) -> Dict[str, Any]:
"""Upload a file to storage.
Args:
local_path: Path to local file
remote_path: Destination path in storage
metadata: File metadata (size, content_type, etc.)
Returns:
Upload result with URL, etag, etc.
"""
pass
@abstractmethod
def download_file(self, remote_path: str, local_path: Path) -> None:
"""Download a file from storage.
Args:
remote_path: Source path in storage
local_path: Destination path on local filesystem
"""
pass
@abstractmethod
def delete_file(self, remote_path: str) -> None:
"""Delete a file from storage.
Args:
remote_path: Path to file in storage
"""
pass
@abstractmethod
def file_exists(self, remote_path: str) -> bool:
"""Check if a file exists in storage.
Args:
remote_path: Path to check
Returns:
True if file exists, False otherwise
"""
pass
@abstractmethod
def list_files(self, prefix: str) -> List[Dict[str, Any]]:
"""List files with given prefix.
Args:
prefix: Path prefix to filter by
Returns:
List of file information dictionaries
"""
pass
@abstractmethod
def test_connection(self) -> bool:
"""Test connection to storage provider.
Returns:
True if connection successful, False otherwise
"""
pass
Example Provider Implementation¶
from pathlib import Path
from typing import Any, Dict

import boto3
from botocore.exceptions import ClientError

from gira.storage.base import StorageProvider
from gira.storage.exceptions import StorageError  # adjust to wherever StorageError is defined
class S3Provider(StorageProvider):
"""Amazon S3 storage provider."""
def __init__(self, config: Dict[str, Any]):
self.bucket = config['bucket']
self.region = config.get('region', 'us-east-1')
self.endpoint_url = config.get('endpoint_url') # For S3-compatible services
self.s3_client = boto3.client(
's3',
region_name=self.region,
endpoint_url=self.endpoint_url
)
def upload_file(self, local_path: Path, remote_path: str, metadata: Dict[str, Any]) -> Dict[str, Any]:
"""Upload file to S3."""
try:
extra_args = {
'ContentType': metadata.get('content_type', 'application/octet-stream'),
'Metadata': {
'gira-checksum': metadata.get('checksum', ''),
'gira-uploaded-by': metadata.get('uploaded_by', ''),
'gira-upload-timestamp': metadata.get('timestamp', '')
}
}
self.s3_client.upload_file(
str(local_path),
self.bucket,
remote_path,
ExtraArgs=extra_args
)
# Get object info
response = self.s3_client.head_object(Bucket=self.bucket, Key=remote_path)
return {
'url': f"s3://{self.bucket}/{remote_path}",
'etag': response['ETag'].strip('"'),
'size': response['ContentLength'],
'last_modified': response['LastModified']
}
except ClientError as e:
raise StorageError(f"S3 upload failed: {e}")
def download_file(self, remote_path: str, local_path: Path) -> None:
"""Download file from S3."""
try:
self.s3_client.download_file(
self.bucket,
remote_path,
str(local_path)
)
except ClientError as e:
raise StorageError(f"S3 download failed: {e}")
# ... implement other methods
Provider Registration¶
from gira.storage.providers import S3Provider, GCSProvider, AzureProvider, B2Provider, GitLFSProvider
STORAGE_PROVIDERS = {
's3': S3Provider,
'gcs': GCSProvider,
'azure': AzureProvider,
'r2': S3Provider, # R2 uses S3-compatible API
'b2': B2Provider,
'git-lfs': GitLFSProvider,
}
def get_storage_provider(provider_name: str, config: Dict[str, Any]) -> StorageProvider:
"""Get storage provider instance."""
if provider_name not in STORAGE_PROVIDERS:
raise ValueError(f"Unknown storage provider: {provider_name}")
provider_class = STORAGE_PROVIDERS[provider_name]
return provider_class(config)
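The factory above is just a dictionary lookup followed by instantiation. The sketch below shows the same pattern with a hypothetical in-memory provider (not a real Gira provider) so the flow is runnable end to end:

```python
import tempfile
from pathlib import Path
from typing import Any, Dict

class InMemoryProvider:
    """Illustrative stand-in for a StorageProvider; stores blobs in a dict."""
    def __init__(self, config: Dict[str, Any]):
        self.bucket = config["bucket"]
        self.blobs: Dict[str, bytes] = {}

    def upload_file(self, local_path: Path, remote_path: str, metadata: Dict[str, Any]) -> Dict[str, Any]:
        self.blobs[remote_path] = local_path.read_bytes()
        return {"url": f"mem://{self.bucket}/{remote_path}", "size": len(self.blobs[remote_path])}

    def file_exists(self, remote_path: str) -> bool:
        return remote_path in self.blobs

PROVIDERS = {"memory": InMemoryProvider}

def get_provider(name: str, config: Dict[str, Any]) -> InMemoryProvider:
    """Resolve a provider name to a configured instance, as the factory does."""
    if name not in PROVIDERS:
        raise ValueError(f"Unknown storage provider: {name}")
    return PROVIDERS[name](config)

provider = get_provider("memory", {"bucket": "demo"})
with tempfile.TemporaryDirectory() as tmp:
    f = Path(tmp) / "a.txt"
    f.write_text("data")
    result = provider.upload_file(f, "attachments/PROJ-1/a.txt", {})
print(provider.file_exists("attachments/PROJ-1/a.txt"))  # True
```

Note how R2 reuses the S3 provider class in the real registry: because R2 speaks the S3 API, only the configuration (endpoint URL) differs, not the code.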
5.3 API Reference¶
Core Classes¶
AttachmentPointer Class:
from dataclasses import asdict, dataclass
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, Optional

import yaml
@dataclass
class AttachmentPointer:
"""Represents an attachment pointer stored in Git."""
filename: str
size: int
content_type: str
checksum: str
upload_timestamp: datetime
storage_provider: str
storage_path: str
storage_metadata: Dict[str, Any]
note: Optional[str] = None
uploaded_by: Optional[str] = None
@classmethod
def from_file(cls, file_path: Path) -> 'AttachmentPointer':
"""Load attachment pointer from YAML file."""
with open(file_path, 'r') as f:
data = yaml.safe_load(f)
return cls(**data)
def to_file(self, file_path: Path) -> None:
"""Save attachment pointer to YAML file."""
with open(file_path, 'w') as f:
yaml.dump(asdict(self), f, default_flow_style=False)
def get_display_size(self) -> str:
    """Get human-readable file size."""
    # Work on a copy so the stored size field is not mutated
    size = float(self.size)
    for unit in ['B', 'KB', 'MB', 'GB']:
        if size < 1024:
            return f"{size:.1f} {unit}"
        size /= 1024
    return f"{size:.1f} TB"
AttachmentManager Class:
class AttachmentManager:
"""Manages attachment operations for a Gira project."""
def __init__(self, project_root: Path):
self.project_root = project_root
self.attachments_dir = project_root / '.gira' / 'attachments'
self.config = load_config(project_root)
self.storage = get_storage_backend(self.config)
def add_attachment(
self,
entity_id: str,
file_path: Path,
note: Optional[str] = None
) -> AttachmentPointer:
"""Add an attachment to an entity."""
# Validate entity exists
self._validate_entity(entity_id)
# Calculate file metadata
file_info = get_file_info(file_path)
# Generate storage path
storage_path = self._generate_storage_path(entity_id, file_path.name)
# Upload to storage
upload_result = self.storage.upload_file(
file_path,
storage_path,
file_info
)
# Create pointer
pointer = AttachmentPointer(
filename=file_path.name,
size=file_info['size'],
content_type=file_info['content_type'],
checksum=file_info['checksum'],
upload_timestamp=datetime.now(timezone.utc),
storage_provider=self.config.storage.provider,
storage_path=storage_path,
storage_metadata=upload_result,
note=note,
uploaded_by=self._get_current_user()
)
# Save pointer file
pointer_path = self._get_pointer_path(entity_id, file_path.name)
pointer.to_file(pointer_path)
# Commit to Git
self._commit_attachment_change(entity_id, f"Add attachment: {file_path.name}")
return pointer
def list_attachments(self, entity_id: str) -> List[AttachmentPointer]:
"""List all attachments for an entity."""
entity_dir = self.attachments_dir / entity_id
if not entity_dir.exists():
return []
pointers = []
for pointer_file in entity_dir.glob('*.yaml'):
pointers.append(AttachmentPointer.from_file(pointer_file))
return sorted(pointers, key=lambda p: p.upload_timestamp, reverse=True)
def download_attachment(
self,
entity_id: str,
filename: str,
output_path: Optional[Path] = None
) -> Path:
"""Download an attachment."""
pointer = self._find_attachment(entity_id, filename)
if not pointer:
raise AttachmentNotFoundError(f"Attachment not found: {entity_id}/{filename}")
if output_path is None:
output_path = Path.cwd() / pointer.filename
self.storage.download_file(pointer.storage_path, output_path)
return output_path
# ... other methods
5.4 Extension Points¶
Custom Storage Providers¶
To add a new storage provider, implement the StorageProvider interface:
class MyCustomProvider(StorageProvider):
"""Custom storage provider implementation."""
def __init__(self, config: Dict[str, Any]):
# Initialize your provider with configuration
self.api_key = config['api_key']
self.base_url = config['base_url']
self.client = MyStorageClient(self.api_key, self.base_url)
def upload_file(self, local_path: Path, remote_path: str, metadata: Dict[str, Any]) -> Dict[str, Any]:
# Implement upload logic
result = self.client.upload(local_path, remote_path, metadata)
return {
'url': result.url,
'id': result.file_id,
'version': result.version
}
# ... implement other required methods
# Register your provider
from gira.storage.providers import register_provider
register_provider('my-provider', MyCustomProvider)
Command Extensions¶
Add new attachment-related commands by extending the CLI:
import typer
from gira.commands.attachment import attachment_app
@attachment_app.command("sync")
def sync_attachments(
entity_id: str = typer.Argument(..., help="Entity ID to sync"),
dry_run: bool = typer.Option(False, "--dry-run", help="Show what would be synced")
):
"""Synchronize attachments with remote storage."""
manager = AttachmentManager(get_project_root())
# Get local attachments
local_attachments = manager.list_attachments(entity_id)
# Check remote storage
for attachment in local_attachments:
exists = manager.storage.file_exists(attachment.storage_path)
if not exists:
if dry_run:
console.print(f"Would re-upload: {attachment.filename}")
else:
console.print(f"Re-uploading: {attachment.filename}")
# Re-upload logic here
Hooks and Plugins¶
Gira supports hooks for extending attachment behavior:
from gira.hooks import register_hook
@register_hook('before_upload')
def scan_for_secrets(file_path: Path, metadata: Dict[str, Any]) -> bool:
"""Scan files for secrets before upload."""
# Use tools like detect-secrets, truffleHog, etc.
if has_secrets(file_path):
console.print(f"⚠️ Secrets detected in {file_path.name}")
return False # Prevent upload
return True # Allow upload
@register_hook('after_upload')
def notify_team(entity_id: str, attachment: AttachmentPointer) -> None:
"""Notify team when attachments are added."""
send_slack_notification(
f"📎 New attachment added to {entity_id}: {attachment.filename}"
)
6. Cookbook and Recipes¶
6.1 Automation Scripts¶
Automated Bug Report Collection¶
Create a script that automatically gathers all relevant debugging information:
#!/bin/bash
# collect-bug-info.sh - Automated bug report collection
set -e
TICKET_ID=$1
BUG_DIR="bug-report-$(date +%Y%m%d-%H%M%S)"
if [[ -z "$TICKET_ID" ]]; then
echo "Usage: $0 <TICKET_ID>"
exit 1
fi
echo "🔍 Collecting bug information for $TICKET_ID..."
# Create temporary directory
mkdir -p "$BUG_DIR"/{logs,screenshots,network,system}
# Collect system information
echo "📋 Gathering system info..."
{
echo "Date: $(date)"
echo "User: $(whoami)"
echo "OS: $(uname -a)"
echo "Git commit: $(git rev-parse HEAD)"
echo "Node version: $(node --version 2>/dev/null || echo 'N/A')"
echo "Python version: $(python --version 2>/dev/null || echo 'N/A')"
} > "$BUG_DIR/system/environment.txt"
# Collect recent logs
echo "📝 Collecting logs..."
if [[ -d "logs" ]]; then
cp logs/*.log "$BUG_DIR/logs/" 2>/dev/null || true
fi
# Browser console logs (if available)
if [[ -f "console.log" ]]; then
cp console.log "$BUG_DIR/network/"
fi
# Recent Git history
echo "📚 Git history..."
git log --oneline -10 > "$BUG_DIR/system/recent-commits.txt"
# Package information
if [[ -f "package.json" ]]; then
cp package.json "$BUG_DIR/system/"
fi
if [[ -f "requirements.txt" ]]; then
cp requirements.txt "$BUG_DIR/system/"
fi
# Upload everything to Gira
echo "⬆️ Uploading to Gira..."
gira attachment add "$TICKET_ID" "$BUG_DIR"/ \
--note "Automated bug report collection from $(hostname) at $(date)"
# Cleanup
rm -rf "$BUG_DIR"
echo "✅ Bug information uploaded to $TICKET_ID"
Performance Testing Automation¶
#!/bin/bash
# performance-test.sh - Automated performance testing with result upload
TICKET_ID=$1
TEST_URL=$2
RESULTS_DIR="perf-results-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$RESULTS_DIR"
echo "🚀 Running performance tests for $TICKET_ID..."
# Lighthouse performance test
if command -v lighthouse >/dev/null; then
echo "📊 Running Lighthouse audit..."
lighthouse "$TEST_URL" \
--output=html,json \
--output-path="$RESULTS_DIR/lighthouse" \
--chrome-flags="--headless"
fi
# Load testing with curl
echo "⚡ Running load test..."
{
echo "Load Test Results - $(date)"
echo "===================="
echo
for i in {1..10}; do
time_total=$(curl -w "%{time_total}" -s -o /dev/null "$TEST_URL")
echo "Request $i: ${time_total}s"
done
} > "$RESULTS_DIR/load-test.txt"
# Memory usage monitoring
echo "💾 Monitoring memory usage..."
if command -v ps >/dev/null; then
ps aux | grep -E "(node|python|java)" > "$RESULTS_DIR/memory-usage.txt"
fi
# Upload results
gira attachment add "$TICKET_ID" "$RESULTS_DIR"/ \
--note "Performance test results from $(date)"
rm -rf "$RESULTS_DIR"
echo "✅ Performance test results uploaded"
6.2 CI/CD Integration¶
GitHub Actions Workflow¶
# .github/workflows/gira-attachments.yml
name: Gira Attachment Upload
on:
pull_request:
branches: [main]
push:
branches: [main]
jobs:
upload-artifacts:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install Gira
run: pip install gira-cli
- name: Configure Gira Storage
env:
AWS_ACCESS_KEY_ID: ${{ secrets.GIRA_AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.GIRA_AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: us-east-1
run: |
gira storage configure \
--provider s3 \
--bucket ${{ secrets.GIRA_S3_BUCKET }} \
--region us-east-1
- name: Run tests with coverage
run: |
pytest --cov=. --cov-report=html --cov-report=json
- name: Build application
run: |
npm run build
tar -czf build-artifacts.tar.gz dist/
- name: Extract ticket ID from PR/commit
id: ticket
run: |
if [[ "${{ github.event_name }}" == "pull_request" ]]; then
TITLE="${{ github.event.pull_request.title }}"
else
TITLE="${{ github.event.head_commit.message }}"
fi
TICKET_ID=$(echo "$TITLE" | grep -oE '[A-Z]+-[0-9]+' | head -1)
if [[ -n "$TICKET_ID" ]]; then
echo "ticket_id=$TICKET_ID" >> $GITHUB_OUTPUT
fi
- name: Upload test results
if: steps.ticket.outputs.ticket_id
run: |
gira attachment add ${{ steps.ticket.outputs.ticket_id }} \
htmlcov/ \
coverage.json \
--note "Test coverage report from CI build #${{ github.run_number }}"
- name: Upload build artifacts
if: steps.ticket.outputs.ticket_id && github.event_name == 'pull_request'
run: |
gira attachment add ${{ steps.ticket.outputs.ticket_id }} \
build-artifacts.tar.gz \
--note "Build artifacts from PR #${{ github.event.pull_request.number }}"
- name: Upload failure logs on error
if: failure() && steps.ticket.outputs.ticket_id
run: |
# Collect failure information
mkdir -p ci-failure-logs
cp pytest.log ci-failure-logs/ 2>/dev/null || true
npm run build:log > ci-failure-logs/build.log 2>&1 || true
gira attachment add ${{ steps.ticket.outputs.ticket_id }} \
ci-failure-logs/ \
--note "CI failure logs from build #${{ github.run_number }}"
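The `grep -oE '[A-Z]+-[0-9]+'` extraction used in the workflow above can be mirrored in Python if your CI glue is script-based; the pattern and sample titles here match the workflow's convention:

```python
import re

# Same pattern as the grep call in the workflow: JIRA-style ticket IDs
TICKET_RE = re.compile(r"[A-Z]+-[0-9]+")

def extract_ticket_id(title: str):
    """Return the first ticket ID found in a PR title or commit message, else None."""
    match = TICKET_RE.search(title)
    return match.group(0) if match else None

print(extract_ticket_id("PROJ-123: fix login redirect"))  # PROJ-123
print(extract_ticket_id("chore: bump dependencies"))      # None
```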
GitLab CI Pipeline¶
# .gitlab-ci.yml
stages:
- test
- build
- deploy
- upload-artifacts
variables:
GIRA_S3_BUCKET: "my-gira-attachments"
AWS_DEFAULT_REGION: "us-east-1"
before_script:
- pip install gira-cli
- gira storage configure --provider s3 --bucket $GIRA_S3_BUCKET --region $AWS_DEFAULT_REGION
test:
stage: test
script:
- pytest --cov=. --cov-report=html --cov-report=json
- coverage xml
artifacts:
reports:
coverage_report:
coverage_format: cobertura
path: coverage.xml
paths:
- htmlcov/
- coverage.json
coverage: '/TOTAL.*\s+(\d+%)$/'
build:
stage: build
script:
- npm run build
- tar -czf build-$CI_COMMIT_SHORT_SHA.tar.gz dist/
artifacts:
paths:
- build-*.tar.gz
upload-to-gira:
stage: upload-artifacts
script:
- |
# Extract ticket ID from commit message or MR title
TICKET_ID=""
if [[ -n "$CI_MERGE_REQUEST_TITLE" ]]; then
TICKET_ID=$(echo "$CI_MERGE_REQUEST_TITLE" | grep -oE '[A-Z]+-[0-9]+' | head -1)
else
TICKET_ID=$(echo "$CI_COMMIT_MESSAGE" | grep -oE '[A-Z]+-[0-9]+' | head -1)
fi
if [[ -n "$TICKET_ID" ]]; then
echo "Uploading artifacts for ticket: $TICKET_ID"
# Upload test coverage
gira attachment add "$TICKET_ID" htmlcov/ coverage.json \
--note "Test coverage from pipeline $CI_PIPELINE_ID"
# Upload build artifacts
gira attachment add "$TICKET_ID" build-*.tar.gz \
--note "Build artifacts from commit $CI_COMMIT_SHORT_SHA"
else
echo "No ticket ID found in commit message or MR title"
fi
dependencies:
- test
- build
only:
- merge_requests
- main
6.3 Git Hook Examples¶
Pre-Commit Hook: Attachment Validation¶
#!/bin/bash
# .git/hooks/pre-commit
# Validates attachments before commit
set -e
echo "🔍 Validating Gira attachments..."
# Check for oversized attachments when using Git LFS
if git config --get-regexp "^lfs\." >/dev/null; then
# Find large files that should use LFS
large_files=$(git diff --cached --name-only | xargs -I {} sh -c 'test -f "{}" && find "{}" -size +10M' 2>/dev/null || true)
if [[ -n "$large_files" ]]; then
echo "⚠️ Large files detected that should use Git LFS:"
echo "$large_files"
echo
echo "Run: git lfs track '*.ext' for appropriate extensions"
exit 1
fi
fi
# Validate attachment metadata files
for yaml_file in $(git diff --cached --name-only | grep '\.gira/attachments/.*\.yaml$'); do
echo "Validating $yaml_file..."
# Check YAML syntax
if ! python -c "import yaml; yaml.safe_load(open('$yaml_file'))" 2>/dev/null; then
echo "❌ Invalid YAML syntax in $yaml_file"
exit 1
fi
# Check required fields
if ! grep -q "filename:" "$yaml_file"; then
echo "❌ Missing filename field in $yaml_file"
exit 1
fi
if ! grep -q "checksum:" "$yaml_file"; then
echo "❌ Missing checksum field in $yaml_file"
exit 1
fi
done
echo "✅ All attachment validations passed"
Post-Commit Hook: Auto-Upload to Storage¶
#!/bin/bash
# .git/hooks/post-commit
# Automatically uploads new attachments after commit
# Only run if storage is configured
if ! gira storage show-config >/dev/null 2>&1; then
exit 0
fi
echo "📎 Checking for new attachments to upload..."
# Find new attachment YAML files in the last commit
new_attachments=$(git diff-tree --no-commit-id --name-only -r HEAD | grep '\.gira/attachments/.*\.yaml$' || true)
if [[ -n "$new_attachments" ]]; then
echo "Found new attachments, uploading to storage..."
for yaml_file in $new_attachments; do
if [[ -f "$yaml_file" ]]; then
# Extract entity ID and filename from path
entity_id=$(echo "$yaml_file" | sed 's|\.gira/attachments/\([^/]*\)/.*|\1|')
filename=$(basename "$yaml_file" .yaml)
echo "Uploading $filename for $entity_id..."
# Check if local file exists and upload if needed
local_file=$(dirname "$yaml_file")/"$filename"
if [[ -f "$local_file" ]]; then
gira attachment upload-missing "$entity_id" "$filename" || true
fi
fi
done
fi
Prepare-Commit-Msg Hook: Auto-Reference Attachments¶
#!/bin/bash
# .git/hooks/prepare-commit-msg
# Automatically references new attachments in commit messages
COMMIT_MSG_FILE=$1
COMMIT_SOURCE=$2
# Only modify message for regular commits
if [[ "$COMMIT_SOURCE" = "message" ]] || [[ -z "$COMMIT_SOURCE" ]]; then
# Find attachment changes in staged files
attachment_changes=$(git diff --cached --name-only | grep '\.gira/attachments/' || true)
if [[ -n "$attachment_changes" ]]; then
# Extract unique entity IDs
entity_ids=$(echo "$attachment_changes" | sed 's|\.gira/attachments/\([^/]*\)/.*|\1|' | sort -u)
# Count attachments per entity
attachment_summary=""
for entity_id in $entity_ids; do
count=$(echo "$attachment_changes" | grep "\.gira/attachments/$entity_id/" | wc -l)
if [[ $count -gt 0 ]]; then
attachment_summary="$attachment_summary\n📎 $entity_id: $count attachment(s)"
fi
done
if [[ -n "$attachment_summary" ]]; then
# Append to commit message
echo "" >> "$COMMIT_MSG_FILE"
echo "Attachments:" >> "$COMMIT_MSG_FILE"
echo -e "$attachment_summary" >> "$COMMIT_MSG_FILE"
fi
fi
fi
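The `sed` expressions the hooks above use to pull entity IDs out of pointer paths can be written as a small Python helper if your tooling is Python-based (the `.gira/attachments/<entity-id>/<filename>.yaml` layout is as documented; the helper itself is illustrative):

```python
from pathlib import PurePosixPath

def parse_attachment_path(path: str):
    """Split '.gira/attachments/<entity-id>/<filename>.yaml' into (entity_id, filename).

    Returns None for paths outside the attachment metadata layout.
    """
    p = PurePosixPath(path)
    parts = p.parts
    if len(parts) >= 4 and parts[:2] == (".gira", "attachments") and p.suffix == ".yaml":
        # Strip the trailing '.yaml' to recover the attached file's name
        return parts[2], p.name[: -len(".yaml")]
    return None

print(parse_attachment_path(".gira/attachments/PROJ-123/file.pdf.yaml"))
# ('PROJ-123', 'file.pdf')
```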
6.4 Bulk Operations¶
Batch File Upload Script¶
#!/bin/bash
# batch-upload.sh - Upload multiple files to different tickets
set -e
MAPPING_FILE=$1
if [[ -z "$MAPPING_FILE" ]] || [[ ! -f "$MAPPING_FILE" ]]; then
echo "Usage: $0 <mapping-file.csv>"
echo
echo "CSV format: file_path,ticket_id,description"
echo "Example:"
echo " /path/to/screenshot.png,BUG-123,Login error screenshot"
echo " /path/to/logs/*.log,BUG-123,Server logs"
echo " /docs/spec.pdf,FEAT-456,Technical specification"
exit 1
fi
echo "🚀 Starting batch upload from $MAPPING_FILE..."
# Read CSV file and process each line
total_lines=$(wc -l < "$MAPPING_FILE")
current_line=0
while IFS=, read -r file_path ticket_id description; do
current_line=$((current_line + 1))
# Skip header line
if [[ $current_line -eq 1 ]] && [[ "$file_path" == "file_path" ]]; then
continue
fi
echo "[$current_line/$total_lines] Processing: $file_path -> $ticket_id"
# Handle glob patterns
if [[ "$file_path" == *"*"* ]]; then
# Expand glob pattern
mapfile -t files < <(compgen -G "$file_path" || true)
if [[ ${#files[@]} -eq 0 ]]; then
echo " ⚠️ No files match pattern: $file_path"
continue
fi
echo " 📁 Found ${#files[@]} files matching pattern"
gira attachment add "$ticket_id" "${files[@]}" --note "$description"
else
# Single file
if [[ -f "$file_path" ]]; then
gira attachment add "$ticket_id" "$file_path" --note "$description"
elif [[ -d "$file_path" ]]; then
gira attachment add "$ticket_id" "$file_path"/ --note "$description"
else
echo " ❌ File not found: $file_path"
continue
fi
fi
echo " ✅ Uploaded successfully"
done < "$MAPPING_FILE"
echo "🎉 Batch upload completed!"
Example mapping file (batch-upload.csv):
file_path,ticket_id,description
/tmp/bug-reports/error-screenshot.png,BUG-101,Error dialog screenshot
/var/log/app/*.log,BUG-101,Application logs from error period
/home/user/Downloads/network-trace.har,BUG-101,Network trace showing failed requests
/docs/feature-spec.pdf,FEAT-200,Feature specification document
/designs/mockups/,FEAT-200,UI mockups and design assets
/test-results/coverage-report.html,FEAT-200,Test coverage report
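If you prefer Python for the batch driver, the mapping file parses cleanly with the standard csv module, which (unlike the IFS-based `read` in the bash script) also handles quoted fields containing commas. The rows below are taken from the example mapping file:

```python
import csv
import io

# Inline copy of two rows from the example batch-upload.csv
mapping_csv = """\
file_path,ticket_id,description
/tmp/bug-reports/error-screenshot.png,BUG-101,Error dialog screenshot
/docs/feature-spec.pdf,FEAT-200,Feature specification document
"""

rows = list(csv.DictReader(io.StringIO(mapping_csv)))
for row in rows:
    print(row["ticket_id"], row["file_path"])
```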
Bulk Download Script¶
#!/bin/bash
# bulk-download.sh - Download all attachments for multiple tickets
set -e
TICKET_LIST=$1
OUTPUT_DIR=${2:-./downloads}
if [[ -z "$TICKET_LIST" ]]; then
echo "Usage: $0 <ticket-list-file> [output-directory]"
echo
echo "Ticket list file should contain one ticket ID per line:"
echo " BUG-101"
echo " FEAT-200"
echo " EPIC-001"
exit 1
fi
echo "📥 Starting bulk download to $OUTPUT_DIR..."
mkdir -p "$OUTPUT_DIR"
while read -r ticket_id; do
# Skip empty lines and comments
if [[ -z "$ticket_id" ]] || [[ "$ticket_id" == \#* ]]; then
continue
fi
echo "Processing $ticket_id..."
# Create ticket-specific directory
ticket_dir="$OUTPUT_DIR/$ticket_id"
mkdir -p "$ticket_dir"
# Check if ticket has attachments
attachment_count=$(gira attachment list "$ticket_id" --quiet --count 2>/dev/null || echo "0")
if [[ "$attachment_count" -eq 0 ]]; then
echo " 📭 No attachments found for $ticket_id"
continue
fi
echo " 📎 Found $attachment_count attachment(s)"
# Download all attachments
if gira attachment download "$ticket_id" --all --output "$ticket_dir" --quiet; then
echo " ✅ Downloaded $attachment_count files"
else
echo " ❌ Download failed for $ticket_id"
fi
done < "$TICKET_LIST"
echo "🎉 Bulk download completed!"
echo "Files saved to: $OUTPUT_DIR"
Archive Old Attachments¶
#!/bin/bash
# archive-old-attachments.sh - Archive attachments older than specified days
set -e
DAYS_OLD=${1:-90}
ARCHIVE_DIR=${2:-./archived-attachments}
DRY_RUN=${3:-false}
echo "🗄️ Archiving attachments older than $DAYS_OLD days..."
if [[ "$DRY_RUN" == "true" ]]; then
echo "🔍 DRY RUN MODE - No files will be moved"
fi
mkdir -p "$ARCHIVE_DIR"
# Find all attachment YAML files
find .gira/attachments -name "*.yaml" -type f | while read -r yaml_file; do
# Extract upload timestamp from YAML
upload_date=$(grep "upload_timestamp:" "$yaml_file" | cut -d' ' -f2- | head -1)
if [[ -n "$upload_date" ]]; then
# Convert to epoch timestamp
upload_epoch=$(date -d "$upload_date" +%s 2>/dev/null || continue)
current_epoch=$(date +%s)
age_days=$(( (current_epoch - upload_epoch) / 86400 ))
if [[ $age_days -gt $DAYS_OLD ]]; then
# Extract entity ID and filename
entity_id=$(echo "$yaml_file" | sed 's|\.gira/attachments/\([^/]*\)/.*|\1|')
filename=$(basename "$yaml_file" .yaml)
echo "📦 Archiving: $entity_id/$filename (${age_days} days old)"
if [[ "$DRY_RUN" != "true" ]]; then
# Create archive directory structure
archive_entity_dir="$ARCHIVE_DIR/$entity_id"
mkdir -p "$archive_entity_dir"
# Move YAML file
mv "$yaml_file" "$archive_entity_dir/"
# Download and archive the actual file
if gira attachment download "$entity_id" "$filename" --output "$archive_entity_dir" --quiet 2>/dev/null; then
echo " ✅ Archived to $archive_entity_dir"
# Remove from storage (optional)
# gira attachment remove "$entity_id" "$filename" --delete-remote --force
else
echo " ⚠️ Could not download file, YAML archived only"
fi
fi
fi
fi
done
if [[ "$DRY_RUN" != "true" ]]; then
echo "🎉 Archiving completed!"
echo "Archived files saved to: $ARCHIVE_DIR"
else
echo "🔍 Dry run completed - use 'false' as third argument to actually archive"
fi
7. CLI Command Reference¶
This section provides complete syntax and examples for every Gira attachment command.
gira attachment add¶
Add one or more files as attachments to a ticket or epic.
Syntax¶
Arguments¶
- <entity-id>: Ticket ID (e.g., PROJ-123) or epic ID (e.g., EPIC-001)
- <file-paths...>: One or more file paths, directory paths, or glob patterns
Options¶
- --note, -n TEXT: Optional description for the attachments
- --include PATTERN: Include only files matching this glob pattern (for directories)
- --exclude PATTERN: Exclude files matching this glob pattern (for directories)
- --recursive, -r: Recursively include subdirectories (default: true)
- --dry-run: Show what files would be uploaded without actually uploading
- --quiet, -q: Suppress progress output
- --force, -f: Overwrite existing attachments with same name
Examples¶
Single file:
Multiple files:
Directory upload:
Directory with filters:
Glob patterns:
Complex filtering:
gira attachment add PROJ-123 ./build-output/ \
--include "*.tar.gz" "*.zip" \
--exclude "*-debug.*" "*-temp.*" \
--note "Release artifacts v1.2.3"
Exit Codes¶
- 0: Success
- 1: General error (file not found, permission denied)
- 2: Storage error (upload failed, quota exceeded)
- 3: Configuration error (storage not configured)
gira attachment list¶
List all attachments for a ticket or epic.
Syntax¶
Arguments¶
- <entity-id>: Ticket ID or epic ID
Options¶
- --format FORMAT: Output format (table, json, csv) [default: table]
- --sort-by FIELD: Sort by field (name, size, date, type) [default: date]
- --reverse, -r: Reverse sort order
- --filter PATTERN: Filter attachments by filename pattern
- --show-urls: Include storage URLs in output
- --quiet, -q: Output only essential information
- --count: Output only the count of attachments
Examples¶
Basic listing:
JSON output:
Filtered results:
Sorted by size:
Count only:
Sample Output¶
📎 Attachments for PROJ-123 (3 files, 15.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Filename Size Type Uploaded Note
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
screenshot.png 2.1 MB image/png 2024-07-28 14:32 Login error
error.log 512 KB text/plain 2024-07-28 14:33 Server logs
trace.json 8.7 MB application 2024-07-28 14:35 Performance
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
gira attachment download¶
Download one or more attachments from a ticket or epic.
Syntax¶
Arguments¶
- <entity-id>: Ticket ID or epic ID
- [filenames...]: Specific filenames to download (supports wildcards)
Options¶
- --all, -a: Download all attachments
- --output, -o PATH: Output directory [default: current directory]
- --overwrite: Overwrite existing files
- --quiet, -q: Suppress progress output
- --verify: Verify file integrity after download using checksums
Examples¶
Single file:
Multiple specific files:
Wildcard patterns:
All attachments:
To specific directory:
With verification:
Exit Codes¶
- 0: Success
- 1: File not found
- 2: Download failed
- 3: Storage error
- 4: Verification failed
gira attachment remove¶
Remove attachment references and optionally delete files from storage.
Syntax¶
Arguments¶
- <entity-id>: Ticket ID or epic ID
- <filenames...>: Filenames to remove (supports wildcards)
Options¶
- --delete-remote: Also delete files from storage (destructive!)
- --force, -f: Skip confirmation prompts
- --dry-run: Show what would be removed without actually removing
- --quiet, -q: Suppress output
Examples¶
Remove reference only:
Remove multiple files:
Remove with wildcards:
Delete from storage too:
Dry run first:
Exit Codes¶
- 0: Success
- 1: File not found
- 2: Removal failed
- 3: Storage deletion failed
gira attachment cat¶
Display the contents of text attachments without downloading.
Syntax¶
Arguments¶
- <entity-id>: Ticket ID or epic ID
- <filename>: Name of text file to display
Options¶
- --lines, -n NUMBER: Display only first N lines
- --tail, -t NUMBER: Display only last N lines
- --encoding ENCODING: Text encoding [default: utf-8]
- --raw: Output raw content without formatting
Examples¶
Display entire file:
First 20 lines:
Last 50 lines:
Pipe to other commands:
Raw output:
gira attachment open¶
Open attachments using system default applications.
Syntax¶
Arguments¶
- <entity-id>: Ticket ID or epic ID
- <filename>: Name of file to open
Options¶
- --app APPLICATION: Specify application to use
- --download-only: Download file but don't open it
- --temp: Download to temporary location
Examples¶
Open with default app:
Open PDF with specific app:
Download to temp and open:
Storage Management Commands¶
gira storage configure¶
Configure storage provider for attachments.
Syntax¶
Options¶
- --provider PROVIDER: Storage provider (s3, gcs, azure, r2, b2, git-lfs)
- --bucket BUCKET: Storage bucket name
- --region REGION: Storage region
- --endpoint-url URL: Custom endpoint URL (for S3-compatible services)
- --interactive: Use interactive configuration wizard
Examples¶
Interactive setup:
Direct S3 setup:
Cloudflare R2 setup:
gira storage configure \
--provider s3 \
--bucket my-attachments \
--endpoint-url https://account-id.r2.cloudflarestorage.com
Git LFS setup:
gira storage test-connection¶
Test connectivity to configured storage provider.
Syntax¶
Options¶
- --verbose, -v: Show detailed test results
Example Output¶
✅ Connection successful
✅ Bucket accessible: my-gira-attachments
✅ Write permissions: OK
✅ Read permissions: OK
✅ Delete permissions: OK
📊 Estimated latency: 45ms
gira storage show-config¶
Display current storage configuration.
Syntax¶
Options¶
- --show-credentials: Include credential information (use cautiously)
- --format FORMAT: Output format (table, json) [default: table]
Example Output¶
Storage Configuration
━━━━━━━━━━━━━━━━━━━━━
Provider: Amazon S3
Bucket: my-gira-attachments
Region: us-east-1
Endpoint: https://s3.amazonaws.com
Encryption: AES-256
Access: ✅ Configured
This documentation covers Gira's attachment system end to end, from basic usage through advanced automation and troubleshooting. Its modular structure lets teams reference specific sections as needed while still serving as a complete resource for all attachment-related operations.