[2026.05 Week 2] Five Trending Repos of the Week
TL;DR
Skill packs dominated trending again. Curated bundles of Claude Code, Codex, or Cursor skills kept flooding the top 100. The format that broke out last month is now a steady drumbeat.
AI gateways landed in a wave. A string of trending repos pitched “route your Claude, OpenAI, or Gemini calls through one endpoint with format conversion or fallback.”
Trading agents are back. Automated financial bots resurfaced across the top 100, echoing last month’s batch almost line for line.
Document-as-graph kept showing up. More repos sold “turn your codebase or knowledge base into a queryable graph for your agent.”
This week’s picks:
zero-native (⭐ 2.2k). A Zig desktop framework that ships web frontends as tiny native binaries with instant rebuilds.
symphony (⭐ 23.1k). OpenAI’s open-source spec for a Codex orchestrator that pulls issues from Linear and runs them autonomously.
PageIndex (⭐ 30.4k). A reasoning-based RAG system that builds a hierarchical tree from PDFs and skips vector embeddings entirely.
hunk (⭐ 3.1k). A review-first terminal diff viewer for AI-authored changesets, built on OpenTUI.
new-api (⭐ 32.2k). An AI gateway that cross-converts requests between OpenAI, Claude, and Gemini formats.
vercel-labs/zero-native
⭐ 2.2k · Zig
Vercel Labs’s first desktop framework is a Zig binary hosting a web frontend, smaller than Electron and faster to rebuild than Tauri. Most desktop frameworks pick one rendering engine. zero-native ships two. The same Zig code can target the platform WebView or bundle Chromium through CEF.
That is hard to do without forking the runtime. Zig has no interface keyword, so zero-native reaches for the idiomatic alternative, a struct of function pointers.
pub const PlatformServices = struct {
context: ?*anyopaque = null,
read_clipboard_fn: ?*const fn (context: ?*anyopaque, buffer: []u8) anyerror![]const u8 = null,
write_clipboard_fn: ?*const fn (context: ?*anyopaque, text: []const u8) anyerror!void = null,
load_webview_fn: ?*const fn (context: ?*anyopaque, source: WebViewSource) anyerror!void = null,
load_window_webview_fn: ?*const fn (context: ?*anyopaque, window_id: WindowId, source: WebViewSource) anyerror!void = null,
complete_bridge_fn: ?*const fn (context: ?*anyopaque, response: []const u8) anyerror!void = null,
complete_window_bridge_fn: ?*const fn (context: ?*anyopaque, window_id: WindowId, response: []const u8) anyerror!void = null,
create_window_fn: ?*const fn (context: ?*anyopaque, options: WindowOptions) anyerror!WindowInfo = null,
focus_window_fn: ?*const fn (context: ?*anyopaque, window_id: WindowId) anyerror!void = null,
close_window_fn: ?*const fn (context: ?*anyopaque, window_id: WindowId) anyerror!void = null,
show_open_dialog_fn: ?*const fn (context: ?*anyopaque, options: OpenDialogOptions, buffer: []u8) anyerror!OpenDialogResult = null,
// ... save dialog, message dialog, tray, security policy, window events ...
configure_security_policy_fn: ?*const fn (context: ?*anyopaque, policy: security.Policy) anyerror!void = null,
};
PlatformServices is the vtable. Every function pointer is nullable, and the helper methods on the struct return error.UnsupportedService when a backend leaves a field empty. Each platform fills in what it can. The macOS appkit_host.m implements load_webview_fn against WKWebView. The CEF host in src/platform/macos/cef_host.mm implements the same field against Chromium. Linux and Windows do the same thing with GTK, WebView2, and CEF in their own subdirectories.
The Runtime never branches on engine type. It calls services.loadWebView(source) and the right backend handles it. Selection happens at startup, where app.zon’s web_engine = "system" or "chromium" flag drives which backend swaps into the vtable (src/tooling/web_engine.zig:60-70). The same idiom applies to security policy, dialogs, clipboard, and tray. One surface across two engines and three operating systems, with no runtime cost beyond the function-pointer call.
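The idiom carries over to any language with first-class functions. Here is a minimal Python sketch of the nullable-vtable pattern; the names mirror the Zig struct, but the code is illustrative, not part of zero-native:

```python
from dataclasses import dataclass
from typing import Callable, Optional

class UnsupportedService(Exception):
    """Mirrors Zig's error.UnsupportedService for an empty vtable slot."""

@dataclass
class PlatformServices:
    # Every slot is optional, like the nullable function pointers in the Zig struct.
    read_clipboard_fn: Optional[Callable[[], str]] = None
    load_webview_fn: Optional[Callable[[str], None]] = None

    def read_clipboard(self) -> str:
        if self.read_clipboard_fn is None:
            raise UnsupportedService("read_clipboard")
        return self.read_clipboard_fn()

    def load_webview(self, source: str) -> None:
        if self.load_webview_fn is None:
            raise UnsupportedService("load_webview")
        self.load_webview_fn(source)

# A backend fills in only what it supports; callers never branch on engine type.
loaded = []
backend = PlatformServices(load_webview_fn=loaded.append)
backend.load_webview("index.html")
print(loaded)  # ['index.html']

try:
    backend.read_clipboard()  # slot left empty by this backend
except UnsupportedService as e:
    print("unsupported:", e)
```

Swapping backends means constructing a different PlatformServices value at startup; nothing downstream changes, which is exactly the property the web_engine flag relies on.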
openai/symphony
⭐ 23.1k · Elixir
OpenAI’s blog post on Symphony pitches it as a spec for orchestrating Codex agents. Each issue runs in its own filesystem workspace, isolated from every other concurrent agent run, and the orchestrator keeps agents from clobbering each other when they wander outside the workspace tree.
The path validation lives in Workspace.validate_workspace_path. Path traversal attacks are not usually what you worry about when the input is a Linear issue identifier, but Symphony assumes nothing.
elixir/lib/symphony_elixir/workspace.ex:358-384
defp validate_workspace_path(workspace, nil) when is_binary(workspace) do
expanded_workspace = Path.expand(workspace)
expanded_root = Path.expand(Config.settings!().workspace.root)
expanded_root_prefix = expanded_root <> "/"
with {:ok, canonical_workspace} <- PathSafety.canonicalize(expanded_workspace),
{:ok, canonical_root} <- PathSafety.canonicalize(expanded_root) do
canonical_root_prefix = canonical_root <> "/"
cond do
canonical_workspace == canonical_root ->
{:error, {:workspace_equals_root, canonical_workspace, canonical_root}}
String.starts_with?(canonical_workspace <> "/", canonical_root_prefix) ->
:ok
String.starts_with?(expanded_workspace <> "/", expanded_root_prefix) ->
{:error, {:workspace_symlink_escape, expanded_workspace, canonical_root}}
true ->
{:error, {:workspace_outside_root, canonical_workspace, canonical_root}}
end
end
end
The check runs twice. First with Path.expand, which resolves .. and tilde syntax but does not follow symlinks. Then with PathSafety.canonicalize, which follows symlinks all the way to the real path. Both versions need to live under workspace.root. If the textual path looks contained but the canonical path escapes, somebody planted a symlink that points outside the root, and Symphony refuses to operate on it with :workspace_symlink_escape.
The issue identifier itself gets sanitized first by safe_identifier in the same module, which replaces every non-alphanumeric character with an underscore (workspace.ex:206-208). That blocks ../ injection at the source. The dual-path check is the second line of defense, in case someone preconfigures a symlink or wins a race against mkdir -p.
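Both defenses are easy to model outside Elixir. In the Python sketch below (illustrative, not Symphony's code), os.path.abspath stands in for Path.expand and os.path.realpath for PathSafety.canonicalize, with plain strings in place of the error tuples:

```python
import os
import re

def validate_workspace_path(workspace: str, root: str) -> str:
    # Layer 1: textual expansion (resolves "..", does not follow symlinks).
    expanded_ws = os.path.abspath(os.path.expanduser(workspace))
    expanded_root = os.path.abspath(os.path.expanduser(root))
    # Layer 2: canonicalization (follows symlinks to the real path).
    canonical_ws = os.path.realpath(expanded_ws)
    canonical_root = os.path.realpath(expanded_root)

    if canonical_ws == canonical_root:
        return "workspace_equals_root"
    if (canonical_ws + os.sep).startswith(canonical_root + os.sep):
        return "ok"
    if (expanded_ws + os.sep).startswith(expanded_root + os.sep):
        # Textually contained but canonically outside: a symlink escaped the root.
        return "workspace_symlink_escape"
    return "workspace_outside_root"

def safe_identifier(issue_id: str) -> str:
    # First line of defense: no character that could form "../" survives.
    return re.sub(r"[^0-9A-Za-z]", "_", issue_id)

print(validate_workspace_path("/tmp/ws/issue-1", "/tmp/ws"))  # ok
print(validate_workspace_path("/tmp/elsewhere", "/tmp/ws"))   # workspace_outside_root
print(safe_identifier("ENG-142/../x"))                        # ENG_142____x
```

The ordering matters: comparing the canonical paths first catches symlink escapes, and the textual comparison afterward only decides which error to report.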
The hooks before_run, after_run, and before_remove all execute with cd: workspace, so an agent that gets confused about its working directory cannot accidentally write into a sibling issue’s workspace. Combined with BEAM’s per-process supervision, every issue’s agent run is its own crash domain.
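The cd: workspace option is the same trick as passing an explicit working directory to a subprocess. A quick Python illustration (not Symphony code):

```python
import os
import subprocess
import sys
import tempfile

workspace = tempfile.mkdtemp()
# Launching the hook with an explicit working directory pins every relative
# path it touches inside the workspace, regardless of the parent's own cwd.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.getcwd())"],
    cwd=workspace, capture_output=True, text=True,
)
print(os.path.realpath(result.stdout.strip()) == os.path.realpath(workspace))  # True
```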
VectifyAI/PageIndex
⭐ 30.4k · Python
The Show HN thread on PageIndex spent its top comments arguing about whether “vectorless RAG” is a real category or just relocates the vibes from embeddings to the LLM. The maintainer’s reply pointed at how the LLM walks the tree without ever loading the whole document into context. The agentic demo in the repo makes the protocol concrete.
examples/agentic_vectorless_rag_demo.py:44-79
AGENT_SYSTEM_PROMPT = """
You are PageIndex, a document QA assistant.
TOOL USE:
- Call get_document() first to confirm status and page/line count.
- Call get_document_structure() to identify relevant page ranges.
- Call get_page_content(pages="5-7") with tight ranges; never fetch the whole document.
- Before each tool call, output one short sentence explaining the reason.
Answer based only on tool output. Be concise.
"""
# ...
@function_tool
def get_document_structure() -> str:
"""Get the document's full tree structure (without text) to find relevant sections."""
return client.get_document_structure(doc_id)
@function_tool
def get_page_content(pages: str) -> str:
"""
Get the text content of specific pages or line numbers.
Use tight ranges: e.g. '5-7' for pages 5 to 7, '3,8' for pages 3 and 8, '12' for page 12.
"""
return client.get_page_content(doc_id, pages)
Three tools and one rule. get_document_structure() returns the tree of titles, summaries, and page ranges with the body text stripped. get_page_content("5-7") fetches specific pages. The “never fetch the whole document” rule is what makes this scale.
When the agent loop runs, the LLM reads the tree summaries, picks the nodes most likely to contain the answer, and calls get_page_content with just those page numbers. The structure-without-text response is small enough that it fits in context even for thousand-page filings.
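A toy version of that walk, with an invented node shape and a trivial word-overlap heuristic standing in for the LLM's choice (purely illustrative; PageIndex's real schema and prompts differ):

```python
# Hypothetical node shape: each node carries a title, a short summary, and the
# page range it covers -- body text is stripped, so the whole tree stays small.
tree = {
    "title": "10-K Filing", "pages": (1, 120), "children": [
        {"title": "Risk Factors", "summary": "litigation, supply chain",
         "pages": (10, 25), "children": []},
        {"title": "Financial Statements", "summary": "income, balance sheet",
         "pages": (60, 95), "children": []},
    ],
}

def pick_child(node, query):
    """Stand-in for the LLM step: pick the child whose summary best matches the query."""
    words = set(query.split())
    return max(node["children"],
               key=lambda c: len(words & set(c.get("summary", "").split())))

def answer_pages(tree, query):
    node = tree
    while node["children"]:
        node = pick_child(node, query)
    # Only this node's page range would be fetched via get_page_content.
    return node["pages"]

print(answer_pages(tree, "what does the balance sheet show"))  # (60, 95)
```

Each descent step reads only summaries, so the context cost per hop is bounded by the tree's fan-out, not the document's length.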
That is the actual scaling story. A vector store trades accuracy for retrieval cost. PageIndex trades retrieval cost for LLM calls, but the tree shape means each call sees a logarithmic slice of the document. The HN debate was whether that beats hybrid retrieval. PageIndex hits 98.7% on FinanceBench against roughly 50% for vector RAG (VectifyAI/Mafin2.5-FinanceBench). Summaries plus tool calls fit documents with real hierarchy, like SEC filings or legal contracts, where vector similarity has nothing to grip onto.
modem-dev/hunk
⭐ 3.1k · TypeScript
Hunk is a terminal diff viewer built for reviewing changesets an AI agent produced. The 0.10.0 release added a daemon mode and an --agent-context flag that takes a JSON sidecar with rationale for the diff. The question is how that sidecar gets pinned to specific lines in specific hunks across a multi-file changeset.
src/ui/lib/agentAnnotations.ts:16-43
/** Check whether two inclusive line ranges overlap. */
function overlap(rangeA: [number, number], rangeB: [number, number]) {
return rangeA[0] <= rangeB[1] && rangeB[0] <= rangeA[1];
}
/** Check whether an annotation belongs to the visible span of a hunk. */
function annotationOverlapsHunk(annotation: AgentAnnotation, hunk: Hunk) {
const hunkRange = hunkLineRange(hunk);
if (annotation.newRange && overlap(annotation.newRange, hunkRange.newRange)) {
return true;
}
if (annotation.oldRange && overlap(annotation.oldRange, hunkRange.oldRange)) {
return true;
}
return false;
}
/** Return the annotations relevant to the currently selected hunk. */
export function getSelectedAnnotations(file: DiffFile | undefined, hunk: Hunk | undefined) {
if (!file?.agent || !hunk) {
return [];
}
return file.agent.annotations.filter((annotation) => annotationOverlapsHunk(annotation, hunk));
}
Each hunk has two line ranges, one for the old side of the diff and one for the new side. Each agent annotation in the sidecar can carry an oldRange, a newRange, or both. The match rule is just whether either range overlaps with the hunk on its own side. A note attached to a deleted line lights up the hunk that deleted it. One on an added line lights up the hunk that added it. Notes spanning both sides, like a refactor explanation, show up in any hunk they touch.
Picture an annotation { oldRange: [10, 12], message: "removed dead branch" } against a hunk that deletes old lines 8 through 14. overlap([10, 12], [8, 14]) returns true, so the note pins to that hunk. Another hunk touching only new lines 50 through 55 has no old-side range to compare against and stays clean.
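A line-for-line Python port of the two helpers makes the worked example checkable (simple dicts and tuples stand in for the TypeScript types):

```python
def overlap(a, b):
    # Two inclusive ranges overlap iff each one starts before the other ends.
    return a[0] <= b[1] and b[0] <= a[1]

def annotation_overlaps_hunk(annotation, hunk):
    if annotation.get("newRange") and overlap(annotation["newRange"], hunk["newRange"]):
        return True
    if annotation.get("oldRange") and overlap(annotation["oldRange"], hunk["oldRange"]):
        return True
    return False

note = {"oldRange": (10, 12), "message": "removed dead branch"}
deleting_hunk = {"oldRange": (8, 14), "newRange": (8, 8)}    # deletes old lines 8-14
adding_hunk = {"oldRange": (50, 49), "newRange": (50, 55)}   # touches only new lines 50-55

print(annotation_overlaps_hunk(note, deleting_hunk))  # True
print(annotation_overlaps_hunk(note, adding_hunk))    # False
```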
The two-sided overlap is what makes the rule survive renames. When findAgentFileContext matches a sidecar entry to a diff file, it tries the current path first and then falls back to previousPath (src/core/agent.ts:124-128). The annotation’s line numbers stay valid against whichever side of the diff they were authored against, so a comment on a deleted line in old/foo.ts still finds the right hunk after the file becomes new/bar.ts.
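A sketch of that fallback; the function name mirrors the one in the article, but the sidecar and diff-file shapes are invented for illustration:

```python
def find_agent_file_context(sidecar, diff_file):
    # Try the file's current path first, then fall back to the pre-rename path.
    for path in (diff_file.get("path"), diff_file.get("previousPath")):
        if path and path in sidecar:
            return sidecar[path]
    return None

sidecar = {
    "old/foo.ts": {"annotations": [{"oldRange": (3, 3), "message": "dead code"}]},
}
renamed = {"path": "new/bar.ts", "previousPath": "old/foo.ts"}
print(find_agent_file_context(sidecar, renamed)["annotations"][0]["message"])
```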
The same overlap check feeds the sidebar markers. getAnnotatedHunkIndices walks every hunk in a file and collects which ones have at least one matching annotation, so the sidebar can show review density at a glance (src/ui/lib/agentAnnotations.ts:46-59). One ten-line helper, three different UI surfaces.
QuantumNous/new-api
⭐ 32.2k · Go
new-api is a fork of one-api that grew into a centralized AI gateway, with cross-provider format conversion as the headline feature. The cleanest example of why that is harder than it looks is the OpenAI-to-Claude path. OpenAI’s Chat Completions API uses standalone messages with role: "tool" for tool results. Claude’s Messages API embeds tool results as content blocks inside a user message. Mapping one to the other means rewriting the conversation shape, not just renaming fields.
relay/channel/claude/relay-claude.go:334-361
if message.Role == "tool" {
if len(claudeMessages) > 0 && claudeMessages[len(claudeMessages)-1].Role == "user" {
lastMessage := claudeMessages[len(claudeMessages)-1]
if content, ok := lastMessage.Content.(string); ok {
lastMessage.Content = []dto.ClaudeMediaMessage{
{
Type: "text",
Text: common.GetPointer[string](content),
},
}
}
lastMessage.Content = append(lastMessage.Content.([]dto.ClaudeMediaMessage), dto.ClaudeMediaMessage{
Type: "tool_result",
ToolUseId: message.ToolCallId,
Content: message.Content,
})
claudeMessages[len(claudeMessages)-1] = lastMessage
continue
} else {
claudeMessage.Role = "user"
claudeMessage.Content = []dto.ClaudeMediaMessage{
{
Type: "tool_result",
ToolUseId: message.ToolCallId,
Content: message.Content,
},
}
}
}
When the OpenAI request has a tool message and the previous Claude message is already a user, the converter appends a tool_result block to that user’s content. The string-content branch upgrades the previous user from a plain string to a list of content blocks first, because Claude rejects mixing string and structured content in the same message. When there is no preceding user message to attach to, the converter synthesizes one from scratch.
The same function does similar surgery elsewhere. System messages, which OpenAI puts in the messages array, get pulled out and accumulated into Claude’s top-level system field (relay-claude.go:287-315). If the first non-system message has the assistant role, the converter inserts a placeholder user message with content "..." because Claude rejects assistant-first conversations (relay-claude.go:315-330).
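Condensed into illustrative Python (simplified message shapes, not the gateway's actual code), the three fixups look like this:

```python
def openai_to_claude(messages):
    system_parts, claude = [], []
    for m in messages:
        if m["role"] == "system":
            # Fixup 1: system messages leave the array for a top-level field.
            system_parts.append(m["content"])
        elif m["role"] == "tool":
            # Fixup 2: tool results become content blocks inside a user message.
            block = {"type": "tool_result",
                     "tool_use_id": m["tool_call_id"],
                     "content": m["content"]}
            if claude and claude[-1]["role"] == "user":
                last = claude[-1]
                if isinstance(last["content"], str):
                    # Claude rejects mixed string/structured content,
                    # so upgrade the plain string to a text block first.
                    last["content"] = [{"type": "text", "text": last["content"]}]
                last["content"].append(block)
            else:
                claude.append({"role": "user", "content": [block]})
        else:
            claude.append({"role": m["role"], "content": m["content"]})
    if claude and claude[0]["role"] == "assistant":
        # Fixup 3: Claude rejects assistant-first conversations.
        claude.insert(0, {"role": "user", "content": "..."})
    return {"system": "\n".join(system_parts), "messages": claude}

out = openai_to_claude([
    {"role": "system", "content": "be terse"},
    {"role": "user", "content": "weather in SF?"},
    {"role": "tool", "tool_call_id": "call_1", "content": "72F and sunny"},
])
print(out["system"])                              # be terse
print(out["messages"][0]["content"][-1]["type"])  # tool_result
```

In a real transcript an assistant message carrying tool_calls sits between the user turn and the tool result, so the converter usually takes the synthesize-a-user branch rather than the merge branch shown here.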
These fixups are why “OpenAI-compatible gateway” is a much bigger lift than it sounds. Each provider has its own constraints on conversation shape, and the gateway has to know all of them before it can pretend the differences do not exist.

