Article 1: last30days-skill — The AI Research Tool That’s Taking GitHub by Storm
An AI skill that scours Reddit, X, YouTube, Hacker News, Bluesky, TikTok, Instagram, and Polymarket prediction markets, then synthesizes everything into a single grounded briefing. That’s the promise of last30days-skill, the open-source research tool that has accumulated over 12,000 GitHub stars in a remarkably short time, nearly 2,900 of them in the last 24 hours alone.
The tool — created by mvanhorn in collaboration with the Claude team — addresses one of the most persistent frustrations in AI-assisted research: the gap between what a language model “knows” and what’s actually trending, contested, or gaining momentum right now. Traditional AI assistants are frozen in time at their training cutoff. Last30days-skill bridges that gap by giving any AI agent the ability to do live, multi-platform research on any topic.
How It Works
At its core, last30days-skill is an open-source skill compatible with AI coding agents such as Claude Code and ChatGPT. Users invoke it with a single command, /last30days, and the skill kicks off parallel research passes across up to nine social and content platforms.
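The fan-out described above can be sketched with a few lines of async Python. The platform list and fetcher below are illustrative stubs, not the skill's actual source names or APIs; they only show how one topic becomes many concurrent research passes.

```python
import asyncio

# Illustrative platform list (the article names eight sources; the
# skill supports "up to nine"). Names here are assumptions.
PLATFORMS = ["reddit", "x", "youtube", "hackernews",
             "bluesky", "tiktok", "instagram", "polymarket"]

async def fetch_platform(platform: str, topic: str) -> dict:
    # A real implementation would hit each platform's search API;
    # this stub returns a placeholder result immediately.
    await asyncio.sleep(0)
    return {"platform": platform, "topic": topic, "results": []}

async def research(topic: str) -> list[dict]:
    # Fan out one research pass per platform and gather all results.
    passes = [fetch_platform(p, topic) for p in PLATFORMS]
    return await asyncio.gather(*passes)

briefing_inputs = asyncio.run(research("agentic coding"))
```

Running the passes concurrently matters here: nine sequential API round-trips would dominate the skill's latency, while a gather keeps total time close to the slowest single platform.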
The results don’t just come back as a raw dump of links and snippets. The skill runs every result through a sophisticated multi-signal quality-ranking pipeline that considers bidirectional text similarity, engagement velocity, source authority, cross-platform convergence, and temporal recency decay. In other words, it doesn’t just find what’s being discussed — it finds what’s being taken seriously.
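A multi-signal ranker of this kind can be sketched as a weighted sum over the named signals. The weights, the 7-day half-life, and the log-scaled convergence term below are all assumptions for illustration; the skill's actual formula is not published.

```python
import math

def score_result(text_similarity: float, engagement_velocity: float,
                 source_authority: float, platform_hits: int,
                 age_days: float) -> float:
    """Toy multi-signal quality score. Weights and decay constants are
    illustrative assumptions, not the skill's real parameters."""
    # Cross-platform convergence: the same item appearing on more
    # platforms earns a log-scaled credibility boost.
    convergence = math.log1p(platform_hits)
    # Temporal recency decay: exponential, with an assumed ~7-day half-life.
    recency = math.exp(-age_days / 7.0)
    # Bidirectional text similarity (query->result and result->query,
    # pre-averaged by the caller) gets the largest weight.
    return (0.35 * text_similarity +
            0.25 * min(engagement_velocity, 1.0) +
            0.20 * source_authority +
            0.10 * convergence +
            0.10 * recency)
```

The useful property to notice is that recency decay only discounts, never zeroes: an old result with strong convergence and authority can still outrank a fresh but isolated one.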
The Polymarket integration is particularly noteworthy. Unlike upvotes or retweets, prediction market positions represent real money betting on real outcomes. A topic that shows up in Polymarket markets with significant volume behind it has a different kind of credibility than one that’s merely trending on social media. The skill’s two-pass query expansion can even discover topics buried inside broader market events, surfacing betting odds on secondary outcomes that keyword search would miss.
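The two-pass expansion idea can be illustrated with a naive sketch: match the topic directly first, then harvest terms from those hits and re-search so related markets that never mention the topic verbatim still surface. The term-harvesting heuristic below is an assumption; the skill's real expansion strategy is not documented here.

```python
def _words(question: str) -> set[str]:
    # Lowercase, split on whitespace, strip trailing punctuation.
    return {w.strip("?.,!") for w in question.lower().split()}

def two_pass_market_search(topic: str, markets: list[dict]) -> list[dict]:
    """Naive two-pass query expansion over prediction-market questions
    (an illustrative sketch, not the skill's actual algorithm)."""
    # Pass 1: direct keyword match against market questions.
    direct = [m for m in markets if topic.lower() in m["question"].lower()]
    # Pass 2: harvest longer terms from pass-1 hits and re-search, so
    # broader markets containing the topic as a sub-theme also surface.
    expansion = {w for m in direct for w in _words(m["question"]) if len(w) > 4}
    expanded = [m for m in markets
                if m not in direct and expansion & _words(m["question"])]
    return direct + expanded

markets = [
    {"question": "Will the GPT-5 model launch before 2027?"},
    {"question": "Which frontier model tops the 2026 leaderboard?"},
    {"question": "Who wins the 2026 World Cup?"},
]
hits = two_pass_market_search("gpt-5", markets)
```

Here the second pass pulls in the leaderboard market via the shared term "model", while the unrelated World Cup market stays out; that is the kind of secondary-outcome discovery pure keyword search would miss.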
Version 2.9.5: Bluesky, Comparative Mode, and Better Config
The latest release, v2.9.5, added several significant features. Bluesky is now supported as a social source via the AT Protocol, giving users access to what’s being discussed on one of the fastest-growing social platforms. A new Comparative Mode lets users ask “X vs Y” and receive three parallel research passes plus a side-by-side comparison table, useful for anyone evaluating competing tools, frameworks, or approaches.
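One plausible reading of the three-pass design is one pass per subject plus one for the head-to-head framing. The parsing sketch below is an assumption; the skill's actual query parser is not shown anywhere in the article.

```python
def comparative_passes(query: str) -> list[str]:
    """Split an 'X vs Y' query into three research passes: one per
    subject and one combined (an assumed decomposition)."""
    left, sep, right = query.partition(" vs ")
    if not sep:
        # Not a comparative query: fall back to a single pass.
        return [query.strip()]
    return [left.strip(), right.strip(),
            f"{left.strip()} vs {right.strip()}"]
```

Splitting the subjects apart keeps each pass's ranking signals clean, while the combined pass catches content that explicitly compares the two.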
The per-project .env configuration is a welcome addition for teams: individual projects can now have their own API keys, preventing key conflicts in multi-project environments. A session-start config validator automatically checks configuration on every new Claude Code session, reducing the frustration of silent failures due to missing credentials.
The Research Quality Problem in AI Agents
The motivation behind last30days-skill touches on a fundamental limitation of current AI systems: static knowledge versus dynamic, evolving discourse. A model trained in 2024 has no native awareness of what happened in the 2026 NCAA Tournament, what the betting odds are on a particular geopolitical outcome, or which subreddit is currently the most authoritative voice on a niche technical topic.
For developers and researchers who need to stay current — particularly in fast-moving fields like AI itself — this knowledge gap is a genuine productivity killer. The traditional workaround involves manually checking multiple platforms, which is time-consuming and makes it easy to miss convergent signals across communities.
Last30days-skill automates that process and adds intelligent synthesis on top. The v2.5 release introduced a blinded evaluation in which the tool scored 4.38 out of 5.0 for research quality, up from 3.73 for the previous version, a meaningful improvement that validates the multi-signal approach.
Build Your Own Research Library
One of the more interesting features introduced in v2.9.1 is automatic saving: every research run now exports a complete briefing as a Markdown file to ~/Documents/Last30Days/, named by topic. Over time, this builds a personal research library that compounds in value — a researcher working in AI for six months would have briefings on dozens of topics, each one a timestamped snapshot of what the internet knew and thought at a specific moment.
The Bigger Picture
What’s notable about last30days-skill is not just what it does, but what its existence signals. The tool was built by a single developer (mvanhorn) in collaboration with the Claude team, suggesting that the broader AI ecosystem is beginning to treat continuous, multi-source research as a fundamental capability rather than a nice-to-have feature.
As AI coding agents become more capable and are increasingly used for complex, multi-step tasks, the ability to do grounded, current, cross-platform research becomes essential. A coding agent that can write code but can’t verify whether a library it wants to use is still maintained, or whether a technique has been superseded, is only half useful.
With 12,000 stars and climbing, it’s clear the developer community agrees. Last30days-skill is available now on GitHub and Clawhub, and can be installed directly into Claude Code or any compatible AI agent environment.