POTE/.cursor/rules/pote.mdc
ilia 204cd0e75b Initial commit: POTE Phase 1 complete
- PR1: Project scaffold, DB models, price loader
- PR2: Congressional trade ingestion (House Stock Watcher)
- PR3: Security enrichment + deployment infrastructure
- 37 passing tests, 87%+ coverage
- Docker + Proxmox deployment ready
- Complete documentation
- Works 100% offline with fixtures
2025-12-14 20:45:34 -05:00


---
alwaysApply: true
---
You are my coding assistant for a private research project called "Public Officials Trading Explorer (POTE)" (working title).
Goal:
Build a Python-based system that tracks stock trading by government officials (starting with U.S. Congress), stores it in a database, joins it with public market data, and computes research metrics, descriptive signals, and risk/ethics flags. This is for my personal research only. It must NOT provide investment advice or claim access to inside information.
Scope and constraints:
- Use only lawfully available public data and APIs that I configure.
- Treat outputs as descriptive analytics and transparency tooling, not trading recommendations.
- Prefer clear, well-structured, well-tested code with type hints and docstrings.
- Ask me clarifying questions before large or ambiguous changes.
Tech stack:
- Python 3, src/ layout.
- DB: PostgreSQL (or SQLite in dev) via SQLAlchemy (+ Alembic).
- Data/ML: pandas, numpy, scikit-learn.
- HTTP: requests or httpx.
- Market data: yfinance or similar.
- Optional API/UI: FastAPI backend, minimal dashboard (Streamlit or small React app).
- Tests: pytest.
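The Postgres-in-prod / SQLite-in-dev split above can be wired up with a single engine factory; a minimal sketch, assuming an environment variable named POTE_DATABASE_URL (that name is illustrative, not part of this spec):

```python
# Sketch only: selects the database from the environment, defaulting to a
# local SQLite file for development. Override POTE_DATABASE_URL with a
# postgresql:// URL in production. The variable name is an assumption.
import os

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

DATABASE_URL = os.environ.get("POTE_DATABASE_URL", "sqlite:///pote_dev.db")

# echo=False keeps SQL logging off by default; flip it on when debugging ETL.
engine = create_engine(DATABASE_URL, echo=False)
SessionLocal = sessionmaker(bind=engine)
```

Keeping the URL out of code also keeps Alembic migrations portable between the two backends.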
Functional focus:
1. Data model & storage
- Tables/models for officials, securities, trades, prices, and derived metrics.
2. Ingestion / ETL
- API clients for politician-trade data and price data.
- ETL jobs that fetch, normalize, and upsert into the DB with logging/retries.
3. Analytics
- Return and abnormal-return calculations over configurable windows.
- Aggregations by official, sector, and time.
- Simple clustering of officials by behavior.
- Rule-based signals (follow_research, avoid_risk, watch), each exposing the metrics it is based on and its caveats.
4. Interfaces
- Python/CLI helpers for common research queries.
- Optional FastAPI + dashboard for visualization.
5. Evaluation & docs
- Simple backtests with realistic disclosure lags.
- README/docs explaining data sources, limitations, and the “research only, not investment advice” framing.
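The window-return and abnormal-return calculations in item 3 can be sketched as follows, under simple assumptions: prices are pandas Series of daily closes indexed by date, `window` counts trading days, and a market index series stands in for the benchmark (function names are illustrative, not an existing API):

```python
# Sketch of configurable-window return metrics. Assumes `prices` and `market`
# are pandas Series of daily closes with a DatetimeIndex; `start` would
# typically be the disclosure date (not the trade date) to respect the
# realistic disclosure lags mentioned under Evaluation.
import pandas as pd


def window_return(prices: pd.Series, start: pd.Timestamp, window: int) -> float:
    """Simple return over `window` trading days starting at `start`."""
    # Slice from the start date, then take the start point plus `window` closes.
    segment = prices.loc[start:].iloc[: window + 1]
    return segment.iloc[-1] / segment.iloc[0] - 1.0


def abnormal_return(
    prices: pd.Series, market: pd.Series, start: pd.Timestamp, window: int
) -> float:
    """Security return minus benchmark return over the same window."""
    return window_return(prices, start, window) - window_return(market, start, window)
```

A market-adjusted difference like this is the simplest abnormal-return definition; a market-model (beta-adjusted) variant could replace it later without changing the interface.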
Working style:
- Work in small, reviewable steps and propose file/module structure before large changes.
- When adding functionality, also suggest or update tests.
- Favor explicit, understandable code over clever abstractions.