POTE/scripts/calculate_all_returns.py
ilia 0d8d85adc1 Add complete automation, reporting, and CI/CD system
Features Added:
==============

📧 EMAIL REPORTING SYSTEM:
- EmailReporter: Send reports via SMTP (Gmail, SendGrid, custom)
- ReportGenerator: Generate daily/weekly summaries with HTML/text formatting
- Configurable via .env (SMTP_HOST, SMTP_PORT, etc.)
- Scripts: send_daily_report.py, send_weekly_report.py
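The SMTP/.env wiring described above can be sketched roughly as follows. This is a minimal illustration, not the actual EmailReporter implementation: the helper names (`build_report_email`, `smtp_settings`) and the default From address are hypothetical, while the env var names (SMTP_HOST, SMTP_PORT, etc.) come from the commit message.

```python
import os
from email.message import EmailMessage


def build_report_email(subject: str, text_body: str, html_body: str,
                       to_addr: str) -> EmailMessage:
    """Build a multipart report email: plain text with an HTML alternative."""
    msg = EmailMessage()
    msg["Subject"] = subject
    # SMTP_FROM and its fallback are assumptions for this sketch
    msg["From"] = os.environ.get("SMTP_FROM", "reports@example.com")
    msg["To"] = to_addr
    msg.set_content(text_body)                      # text/plain part
    msg.add_alternative(html_body, subtype="html")  # text/html part
    return msg


def smtp_settings() -> dict:
    """Read SMTP connection settings from the environment (.env style)."""
    return {
        "host": os.environ.get("SMTP_HOST", "localhost"),
        "port": int(os.environ.get("SMTP_PORT", "587")),
        "user": os.environ.get("SMTP_USER", ""),
        "password": os.environ.get("SMTP_PASSWORD", ""),
    }
```

A sender would then open `smtplib.SMTP(settings["host"], settings["port"])`, call `starttls()` and `login()`, and pass the message to `send_message()`.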

🤖 AUTOMATED RUNS:
- automated_daily_run.sh: Full daily ETL pipeline + reporting
- automated_weekly_run.sh: Weekly pattern analysis + reports
- setup_cron.sh: Interactive cron job setup (5-minute setup)
- Logs saved to ~/logs/ with automatic cleanup

🔍 HEALTH CHECKS:
- health_check.py: System health monitoring
- Checks: DB connection, data freshness, counts, recent alerts
- JSON output for programmatic use
- Exit codes for monitoring integration
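The JSON-output and exit-code behavior might look something like this sketch. The function names, check names, and the 24-hour freshness threshold are assumptions for illustration, not the real health_check.py internals:

```python
import json
from datetime import datetime, timedelta, timezone


def health_report(db_ok: bool, last_ingest: datetime, trade_count: int,
                  max_staleness_hours: int = 24) -> dict:
    """Summarize system health as a JSON-serializable dict of named checks."""
    age = datetime.now(timezone.utc) - last_ingest
    checks = {
        "db_connection": db_ok,
        "data_fresh": age <= timedelta(hours=max_staleness_hours),
        "has_trades": trade_count > 0,
    }
    return {"healthy": all(checks.values()), "checks": checks}


def exit_code(report: dict) -> int:
    """0 = healthy, 1 = degraded — suitable for cron/monitoring integration."""
    return 0 if report["healthy"] else 1
```

A script built this way would print `json.dumps(report)` for programmatic consumers and end with `sys.exit(exit_code(report))` so monitoring tools can key off the process status.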

🚀 CI/CD PIPELINE:
- .github/workflows/ci.yml: Full CI/CD pipeline
- GitHub Actions / Gitea Actions compatible
- Jobs: lint & test, security scan, dependency scan, Docker build
- PostgreSQL service for integration tests
- 93 tests passing in CI

📚 COMPREHENSIVE DOCUMENTATION:
- AUTOMATION_QUICKSTART.md: 5-minute email setup guide
- docs/12_automation_and_reporting.md: Full automation guide
- Updated README.md with automation links
- Deployment → Production workflow guide

🛠️ IMPROVEMENTS:
- All shell scripts made executable
- Environment variable examples in .env.example
- Report logs saved with timestamps
- 30-day log retention with auto-cleanup
- Health checks can be scheduled via cron

WHAT THIS ENABLES:
==================
After deployment, users can:
1. Set up automated daily/weekly email reports (5 min)
2. Receive HTML+text emails with:
   - New trades, market alerts, suspicious timing
   - Weekly patterns, rankings, repeat offenders
3. Monitor system health automatically
4. Run full CI/CD pipeline on every commit
5. Deploy with confidence (tests + security scans)

USAGE:
======
# One-time setup (on deployed server)
./scripts/setup_cron.sh

# Or manually send reports
python scripts/send_daily_report.py --to user@example.com
python scripts/send_weekly_report.py --to user@example.com

# Check system health
python scripts/health_check.py

See AUTOMATION_QUICKSTART.md for full instructions.

93 tests passing | Full CI/CD | Email reports ready
2025-12-15 15:34:31 -05:00


#!/usr/bin/env python3
"""
Calculate returns for all trades and display summary statistics.
"""
import argparse
import logging
from pote.analytics.metrics import PerformanceMetrics
from pote.db import get_session
logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger(__name__)


def main():
    parser = argparse.ArgumentParser(description="Calculate returns for all trades")
    parser.add_argument(
        "--window",
        type=int,
        default=90,
        help="Return window in days (default: 90)",
    )
    parser.add_argument(
        "--benchmark",
        default="SPY",
        help="Benchmark ticker (default: SPY)",
    )
    parser.add_argument(
        "--top",
        type=int,
        default=10,
        help="Number of top performers to show (default: 10)",
    )
    args = parser.parse_args()

    with next(get_session()) as session:
        metrics = PerformanceMetrics(session)

        # Get system-wide statistics
        logger.info("\n" + "=" * 70)
        logger.info(" POTE System-Wide Performance Analysis")
        logger.info("=" * 70)

        summary = metrics.summary_statistics(
            window_days=args.window,
            benchmark=args.benchmark,
        )

        logger.info("\n📊 OVERALL STATISTICS")
        logger.info("-" * 70)
        logger.info(f"Total Officials: {summary['total_officials']}")
        logger.info(f"Total Securities: {summary['total_securities']}")
        logger.info(f"Total Trades: {summary['total_trades']}")
        logger.info(f"Trades Analyzed: {summary.get('trades_analyzed', 0)}")
        logger.info(f"Window: {summary['window_days']} days")
        logger.info(f"Benchmark: {summary['benchmark']}")

        if summary.get('avg_alpha') is not None:
            logger.info("\n🎯 AGGREGATE PERFORMANCE")
            logger.info("-" * 70)
            logger.info(f"Average Alpha: {float(summary['avg_alpha']):+.2f}%")
            logger.info(f"Median Alpha: {float(summary['median_alpha']):+.2f}%")
            logger.info(f"Max Alpha: {float(summary['max_alpha']):+.2f}%")
            logger.info(f"Min Alpha: {float(summary['min_alpha']):+.2f}%")
            logger.info(f"Beat Market Rate: {summary['beat_market_rate']:.1%}")

        # Top performers
        logger.info(f"\n🏆 TOP {args.top} PERFORMERS (by Alpha)")
        logger.info("-" * 70)
        top_performers = metrics.top_performers(
            window_days=args.window,
            benchmark=args.benchmark,
            limit=args.top,
        )
        for i, perf in enumerate(top_performers, 1):
            name = perf['name'][:25].ljust(25)
            party = perf['party'][:3]
            trades = perf['trades_analyzed']
            alpha = float(perf['avg_alpha'])
            logger.info(f"{i:2d}. {name} ({party}) | {trades:2d} trades | Alpha: {alpha:+6.2f}%")

        # Sector analysis
        logger.info("\n📊 PERFORMANCE BY SECTOR")
        logger.info("-" * 70)
        sectors = metrics.sector_analysis(
            window_days=args.window,
            benchmark=args.benchmark,
        )
        for sector_data in sectors:
            sector = sector_data['sector'][:20].ljust(20)
            count = sector_data['trade_count']
            alpha = float(sector_data['avg_alpha'])
            win_rate = sector_data['win_rate']
            logger.info(f"{sector} | {count:3d} trades | Alpha: {alpha:+6.2f}% | Win: {win_rate:.1%}")

        # Timing analysis
        logger.info("\n⏱️ DISCLOSURE TIMING")
        logger.info("-" * 70)
        timing = metrics.timing_analysis()
        if 'error' not in timing:
            logger.info(f"Average Disclosure Lag: {timing['avg_disclosure_lag_days']:.1f} days")
            logger.info(f"Median Disclosure Lag: {timing['median_disclosure_lag_days']} days")
            logger.info(f"Max Disclosure Lag: {timing['max_disclosure_lag_days']} days")

        logger.info("\n" + "=" * 70 + "\n")


if __name__ == "__main__":
    main()