Yupa ingests tech news from 20+ verified sources, matches concepts to your repos, and generates feature specs — ready to approve and deploy as PRs.
New streaming interface allows agents to emit partial tool results during execution, reducing perceived latency for long-running operations.
"You read about a new framework. You think 'might be useful someday.' You never integrate it."
Hacker News, arXiv, Twitter, Substacks — you skim dozens of sources daily. Most of it isn't relevant to your stack.
Going from "interesting framework" to "how do we actually use this?" takes hours of reading docs, comparing APIs, and checking compatibility.
The ticket gets deprioritized. The bookmark collects dust. Your team adopts the tool 6 months later — after your competitor already shipped it.
Yupa's intelligence loop runs daily, automatically.
20+ verified sources scraped daily. Anthropic, DeepMind, Hacker News, arXiv, Y Combinator, engineering Substacks. Structured concept extraction via Claude Haiku.
Your repos are fingerprinted — dependencies, frameworks, architecture. Each concept is scored 0-100 against your actual stack. Language mismatches are pre-filtered.
For high-scoring matches, Claude Sonnet generates a grounded PRD with integration steps, dependencies, risks, and discovery keywords. Every claim cites the source article.
Review the spec in your dashboard. Approve it. Yupa creates a feature branch, generates the code, and opens a PR. You merge when ready.
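The five steps above can be sketched as a single pipeline. Everything below is illustrative: the function names, stub data, and return values are assumptions for the sketch, not Yupa's actual internals — in production the stubs would call Claude Haiku/Sonnet and the GitHub API.

```python
# Hypothetical sketch of Yupa's daily loop. All names and data are
# illustrative; real steps call Claude Haiku/Sonnet and GitHub.

def scrape_sources():
    # Step 1: pull articles from verified sources (stubbed here).
    return [{"title": "AnyIO 5.0 released", "url": "https://example.com/anyio"}]

def extract_concept(article):
    # Step 2: structured concept extraction (Claude Haiku in production).
    return {"concept_name": article["title"], "core_technologies": ["Python"]}

def score_against_repo(concept, repo_fingerprint):
    # Step 3: 0-100 relevance score, behind a cheap language pre-filter.
    if not set(concept["core_technologies"]) & set(repo_fingerprint["languages"]):
        return 0  # language mismatch: skip the model call entirely
    return 82  # stubbed model score

def run_daily(repo_fingerprint, spec_threshold=60):
    specs = []
    for article in scrape_sources():
        concept = extract_concept(article)
        if score_against_repo(concept, repo_fingerprint) >= spec_threshold:
            # Step 4: generate a grounded PRD (Claude Sonnet in production).
            # Step 5 (approval -> branch -> PR) only runs after human review.
            specs.append({"concept": concept, "source": article["url"]})
    return specs

repo = {"languages": ["Python"], "frameworks": ["FastAPI"]}
print(run_daily(repo))  # one spec queued for review
```

Note that nothing past step 4 is automatic: the spec waits in the dashboard until a human approves it.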
Most news aggregators give you titles and summaries. Yupa extracts structured concept signatures — machine-readable representations of what a technology does, what stacks it's compatible with, and what problems it solves.
A raw article about a new Python async library becomes a structured object that Yupa can reason about:
{
  "concept_name": "AnyIO 5.0 — Structured Concurrency",
  "category": "Async Framework",
  "core_technologies": ["Python", "asyncio", "trio"],
  "compatible_architectures": ["FastAPI", "ASGI"],
  "problem_solved": "Task groups with automatic cleanup on failure",
  "relevance_score": 82
}
This isn't keyword matching. Claude Haiku reads the full article and extracts the semantic intent — understanding what a tool does and which codebases would benefit from adopting it.
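Because the signature is plain JSON, downstream matching code can load it into a typed structure. A minimal sketch — the field names come from the example above, but the dataclass itself is illustrative, not Yupa's schema:

```python
import json
from dataclasses import dataclass

@dataclass
class ConceptSignature:
    # Fields mirror the example signature above (illustrative class).
    concept_name: str
    category: str
    core_technologies: list
    compatible_architectures: list
    problem_solved: str
    relevance_score: int

raw = """{
  "concept_name": "AnyIO 5.0 \\u2014 Structured Concurrency",
  "category": "Async Framework",
  "core_technologies": ["Python", "asyncio", "trio"],
  "compatible_architectures": ["FastAPI", "ASGI"],
  "problem_solved": "Task groups with automatic cleanup on failure",
  "relevance_score": 82
}"""

sig = ConceptSignature(**json.loads(raw))
print(sig.relevance_score)  # 82
```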
Curated, not firehosed. Each source is verified for signal quality.
Naive keyword matching would tell you every Python library is relevant to your Python repo. That's useless. Yupa's matching is contextual — it understands what your repo actually does.
A new async streaming library scores 87 against your FastAPI backend (directly improves request handling) but only 12 against your React frontend (wrong language family, wrong architecture). The pre-filter catches the language mismatch before Claude Haiku even runs — saving API costs.
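That pre-filter can be as simple as a set intersection between the concept's technologies (mapped to a language family) and the repo fingerprint's languages. A hypothetical sketch — the mapping table and fingerprint shape are assumptions:

```python
# Hypothetical language pre-filter: reject a concept before spending
# an API call if it shares no language family with the target repo.

LANGUAGE_MAP = {
    # Technology -> language family (illustrative, not exhaustive).
    "asyncio": "python", "trio": "python", "fastapi": "python",
    "asgi": "python", "react": "javascript",
}

def passes_prefilter(concept_technologies, repo_languages):
    concept_langs = {
        LANGUAGE_MAP.get(t.lower(), t.lower()) for t in concept_technologies
    }
    return bool(concept_langs & {lang.lower() for lang in repo_languages})

# The async library goes on to Claude Haiku for the FastAPI backend...
print(passes_prefilter(["Python", "asyncio"], ["Python"]))      # True
# ...but is dropped for the React frontend before any model call.
print(passes_prefilter(["Python", "asyncio"], ["JavaScript"]))  # False
```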
Scores above 60 get specs auto-generated. Scores from 30 to 60 appear in your digest for manual review. Concepts below 30 are silently filtered out. You only see what matters.
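That triage is a three-way threshold. A sketch using the cutoffs stated above — the function name and boundary handling are assumptions:

```python
def route_concept(score, spec_threshold=60, digest_threshold=30):
    # Triage per the stated rules: above 60 -> auto-spec,
    # 30-60 -> digest for manual review, below 30 -> dropped.
    if score > spec_threshold:
        return "auto_spec"
    if score >= digest_threshold:
        return "digest"
    return "filtered"

print(route_concept(87))  # auto_spec
print(route_concept(45))  # digest
print(route_concept(12))  # filtered
```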
No surprise charges. You see what every action costs before it runs.
Per concept scored against your repo via Claude Haiku. Pre-filtered by language first.
Per spec generated via Claude Sonnet with grounded citations. One auto-spec per repo per run.
Run specs locally via Ollama + Nemotron. No API costs. Slightly lower quality, full privacy.
Generic AI summaries invent plausible-sounding details. Yupa specs use the Anthropic Citations API to ground every claim in the original source article. If the source doesn't mention a detail, Sonnet says "unknown" instead of guessing.
Each spec includes: what changed in the ecosystem, a concrete integration plan with specific files and imports, dependencies to add or remove, risks and unknowns, and discovery keywords for further research. It's a structured PRD, not a paragraph of marketing copy.
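Modeled as data, a spec with those fields might look like the sketch below. The field names follow the list above, and defaulting missing details to "unknown" mirrors the grounding rule — but the class itself is illustrative, not Yupa's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class FeatureSpec:
    # Structured PRD fields described above; any detail the source
    # article doesn't support stays "unknown" instead of a guess.
    ecosystem_change: str = "unknown"
    integration_plan: list = field(default_factory=list)   # files, imports
    dependencies_add: list = field(default_factory=list)
    dependencies_remove: list = field(default_factory=list)
    risks: str = "unknown"
    discovery_keywords: list = field(default_factory=list)
    source_citation: str = "unknown"  # every claim links back here

spec = FeatureSpec(
    ecosystem_change="AnyIO 5.0 ships structured task groups",
    integration_plan=["app/main.py: replace bare asyncio.gather"],
    source_citation="https://example.com/anyio-5-announcement",
)
print(spec.risks)  # unknown
```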
Human review at every step. Nothing executes without your approval. Yupa is your autonomous PM, not your autonomous engineer — it researches and recommends, you decide and deploy.
Get notified when Yupa launches. Early users get free access to all source families.