Building an AI Conference Directory That Populates Itself
The Problem: AI Conferences Are Everywhere and Nowhere
If you’ve ever tried to find a comprehensive list of upcoming AI conferences, you know the pain. There’s no single source. AAAI has their page. NeurIPS has theirs. ICML posts deadlines on OpenReview. Half the emerging summits only exist on LinkedIn event pages or buried in Reddit threads.
I wanted a simple, searchable directory of AI conferences — one site where I could see what’s coming up, filter by topic, and get the key details. But I didn’t want to manually curate it. I’ve seen too many “awesome lists” on GitHub that are lovingly maintained for three months and then abandoned.
What I wanted was a system that populates itself.
So I built one. And with Claude Code running through my PAI system, the whole pipeline — from search to database to website — came together over a few focused sessions.
Here’s the full story.
The Architecture: Three Layers, Zero Manual Data Entry
The final system has three layers, each handling a distinct responsibility:
```
SearXNG (search engine)
  → conference_tracker.py (discovery)
  → Airtable (database)
  → fetch-events.mjs (build-time fetch)
  → React + Vite site on Netlify
```
Each layer is independently useful, loosely coupled, and replaceable. Let’s walk through them.
Layer 1: The Tracker — Finding Conferences Automatically
The foundation is a Python script called conference_tracker.py. Its job is simple: search the web for AI conferences and store what it finds.
Search: SearXNG Instead of Google
Rather than hitting the Google API (with its quotas and billing), I use SearXNG — an open-source, self-hosted meta-search engine. It aggregates results from Google, Bing, DuckDuckGo, and others without API keys or rate limits.
The tracker runs a curated list of search queries defined in config.yaml:
```yaml
search_queries:
  - "AI conference 2026"
  - "artificial intelligence conference 2026"
  - "machine learning conference 2026"
  - "NeurIPS 2026"
  - "ICML 2026"
  - "AAAI 2026"
  - "AI summit 2026"
  - "deep learning conference 2026"
  - "computer vision conference 2026 CVPR"
  - "natural language processing conference 2026"
```
Each query returns up to 10 results. The tracker extracts the title, URL, and snippet from each result, deduplicates against what’s already in the database, and stores new finds.
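Under the hood, querying a SearXNG instance is a single GET against its /search endpoint with format=json (the JSON output format has to be enabled in the instance's settings). A minimal sketch of what the search step might look like — the base URL and helper names are illustrative, not lifted from conference_tracker.py:

```python
def parse_results(payload, limit=10):
    """Extract title/url/snippet from a SearXNG JSON response body.
    SearXNG puts the snippet under the "content" key."""
    results = payload.get("results", [])
    return [
        {"title": r.get("title", ""),
         "url": r.get("url", ""),
         "snippet": r.get("content", "")}
        for r in results[:limit]
    ]

def search_searxng(query, base_url="http://localhost:8080", limit=10):
    """Query a self-hosted SearXNG instance. Assumes format=json is
    enabled in the instance's settings; base_url is illustrative."""
    import requests  # the same HTTP client the tracker already uses
    resp = requests.get(f"{base_url}/search",
                        params={"q": query, "format": "json"},
                        timeout=15)
    resp.raise_for_status()
    return parse_results(resp.json(), limit)
```

No API keys, no quotas — just a local HTTP call per query.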
Storage: Airtable as the Source of Truth
Why Airtable? Because it’s a real database with an API, but it also has a spreadsheet-like UI for manual review. When you’re building a pipeline that discovers data automatically, you want a way to eyeball the results and clean up noise — and Airtable is perfect for that.
The tracker writes five fields per record: title, websiteUrl, description, Source Query, and Date Found. That’s it. Just the raw discovery data. The structured details come later.
The deduplication is URL-based — normalized and lowercased. If we’ve already stored neurips.cc/2026, we don’t store it again even if it appears in a different search query.
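The exact normalization isn't shown above, but a minimal version collapses everything that makes the same page look like two different URLs — scheme, www prefix, query strings, trailing slashes. A sketch (function names are mine, not from the tracker):

```python
from urllib.parse import urlsplit

def normalize_url(url):
    """Normalize a URL for dedup: lowercase the host, drop the scheme,
    the www prefix, query strings, fragments, and trailing slashes."""
    parts = urlsplit(url.strip())
    host = parts.netloc.lower().removeprefix("www.")
    path = parts.path.rstrip("/")
    return f"{host}{path}"

def is_duplicate(url, seen):
    """True if a normalized form of `url` is already in `seen` (a set
    of normalized URLs built from existing Airtable records)."""
    return normalize_url(url) in seen
```

With this, neurips.cc/2026, https://www.neurips.cc/2026/, and http://neurips.cc/2026?ref=search all dedupe to the same key.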
```python
from datetime import datetime, timezone

def extract_conference_info(result, source_query):
    return {
        "title": result["title"][:200],
        "websiteUrl": result["url"],
        "description": result["snippet"][:1000],
        "Source Query": source_query,
        "Date Found": datetime.now(timezone.utc).strftime("%Y-%m-%d"),
    }
```
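Storing the finds is then a POST to Airtable's records endpoint, batched because the API accepts at most 10 records per create request. A hedged sketch — the helper names are illustrative:

```python
def batches(items, size=10):
    """Airtable's create endpoint accepts at most 10 records per request."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def create_records(records, pat, base_id, table_id):
    """POST new conference records to Airtable, 10 at a time.
    `records` is a list of field dicts like extract_conference_info returns;
    field names must match the table schema exactly."""
    import requests
    url = f"https://api.airtable.com/v0/{base_id}/{table_id}"
    headers = {"Authorization": f"Bearer {pat}"}
    for batch in batches(records):
        resp = requests.post(url, headers=headers,
                             json={"records": [{"fields": f} for f in batch]})
        resp.raise_for_status()
```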
After one run, we had 87 unique conference records. The real stuff — NeurIPS, ICML, CVPR, AAAI — alongside smaller but interesting events like the Quantum AI and NLP Conference, Deep Learning Indaba, and the Wharton Human-AI Research summit.
Layer 2: The Website — React + Vite on Netlify
The directory itself is a React app built with Vite and deployed on Netlify. It’s a single-page app with search, tag filtering, and individual event pages.
The key architectural decision: data is fetched at build time, not runtime. A prebuild script (fetch-events.mjs) pulls conference data from the database and writes it to a data.ts file that Vite bundles into the site. This means:
- No API keys exposed in the browser
- No CORS issues
- Instant page loads (data is already in the bundle)
- The site works even if Airtable is temporarily down
The prebuild hook in package.json makes this automatic:
```json
{
  "scripts": {
    "fetch-events": "bun scripts/fetch-events.mjs",
    "prebuild": "bun scripts/fetch-events.mjs",
    "build": "vite build"
  }
}
```
Every time Netlify builds the site, it automatically fetches the latest data from Airtable. Fresh data on every deploy.
The Middleman Problem: Cutting Google Sheets
Here’s where the story gets interesting.
The original pipeline had an extra step: Airtable → Google Sheets → website. The fetch-events.mjs script was pulling from a published Google Sheet CSV. Why? Because when I first prototyped the site, I started with a spreadsheet. It was quick and easy.
But once the conference tracker was writing directly to Airtable, Google Sheets became a middleman with no purpose. Data had to be synced from Airtable to Sheets (manually or via Zapier), and that sync was another thing that could break.
The fix was straightforward: teach fetch-events.mjs to talk directly to the Airtable API.
Airtable’s REST API
The Airtable API is clean. A single GET request returns records as JSON:
```js
const url = new URL(`https://api.airtable.com/v0/${baseId}/${tableId}`);
const resp = await fetch(url.toString(), {
  headers: { Authorization: `Bearer ${pat}` },
});
const data = await resp.json();
// data.records = [{ id, fields: { title, date, ... } }]
```
The one gotcha: Airtable paginates at 100 records. You need to follow the offset token:
```js
async function fetchFromAirtable(pat, baseId, tableId) {
  const allRecords = [];
  let offset = null;
  do {
    const url = new URL(`https://api.airtable.com/v0/${baseId}/${tableId}`);
    if (offset) url.searchParams.set('offset', offset);
    const resp = await fetch(url.toString(), {
      headers: { Authorization: `Bearer ${pat}` },
    });
    const data = await resp.json();
    allRecords.push(...data.records);
    offset = data.offset || null;
  } while (offset);
  return allRecords;
}
```
Graceful Fallback
I kept the Google Sheets path as a fallback. The main() function uses a priority chain:
1. Airtable — if AIRTABLE_PAT, AIRTABLE_BASE_ID, and AIRTABLE_TABLE_ID are set
2. Google Sheets — if GOOGLE_SHEET_CSV_URL is set
3. Fallback events — hardcoded sample data so the build never fails
This means you can’t break the site by misconfiguring a data source. The build always succeeds.
Layer 3: The Enrichment — AI-Powered Data Extraction
This is where things got really interesting.
After cutting Google Sheets, I had 87 conference records in Airtable. But they only had three useful fields: title, description, and URL. No dates. No locations. No tags. The site worked, but every event card was sparse — no way to filter by date or location, no tags to browse by topic.
Filling in 87 records by hand? No thanks.
The Idea: Visit Each URL and Ask AI to Extract the Data
The approach: for each conference record, fetch its web page, extract the text content, and use AI inference to pull out structured fields like date, location, organizer, and tags.
I built an enrichment script — enrich_conferences.py — that sits alongside the tracker in the same project.
Step 1: Fetch and Clean the Page
Each conference URL gets fetched with requests, then cleaned with BeautifulSoup. Navigation, footers, scripts, and styling get stripped, leaving just the text content:
```python
import requests
from bs4 import BeautifulSoup

# A browser-like User-Agent avoids some trivial bot blocks (value illustrative)
headers = {"User-Agent": "Mozilla/5.0 (compatible; conference-tracker)"}

def fetch_page_text(url, timeout=15):
    resp = requests.get(url, headers=headers, timeout=timeout)
    soup = BeautifulSoup(resp.text, "html.parser")
    # Strip page chrome that carries no event details
    for tag in soup(["script", "style", "nav", "footer", "header", "aside"]):
        tag.decompose()
    text = soup.get_text(separator="\n", strip=True)
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    return "\n".join(lines)
```
Step 2: AI Extraction via PAI Inference
The cleaned text gets sent to Claude (via PAI’s Inference tool) with a structured extraction prompt. The prompt is specific about what to extract and what format to use:
```
Given text from a conference web page, extract these fields as JSON:
{
  "date": "human-readable date like 'May 5-6, 2026'",
  "endDate": "ISO end date like '2026-05-06'",
  "location": "City, State/Country",
  "venue": "venue name",
  "price": "ticket price or 'Free'",
  "organizer": "organizing body",
  "tags": "comma-separated topic tags (max 4)"
}
```
One critical addition: if the page is a list of conferences (like “Top 10 AI Conferences of 2026”), the AI returns {"is_list_page": true} and the script skips it. This was essential — about 15% of our URLs were aggregator pages, not individual conference pages.
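Model output isn't guaranteed to be bare JSON — it can arrive wrapped in prose or code fences — so the reply needs defensive parsing before the list-page check. A sketch of how that might look (the actual script's parsing may differ):

```python
import json

def parse_extraction(raw):
    """Parse the model's reply into a dict, tolerating fences or
    surrounding prose. Returns None for list/aggregator pages and for
    unparseable output, so the caller can skip the record either way."""
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        return None
    try:
        extracted = json.loads(raw[start:end + 1])
    except json.JSONDecodeError:
        return None
    if extracted.get("is_list_page"):
        return None  # aggregator page — nothing to enrich
    return extracted
```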
Step 3: Write Back to Airtable
Non-empty extracted fields get PATCHed back to Airtable. The script only writes fields that actually exist in the table schema — a lesson learned the hard way when venue and imageUrl threw 422 errors because those columns hadn’t been created yet.
```python
def build_patch_fields(extracted, allowed_fields):
    if extracted.get("is_list_page"):
        return None
    patch = {}
    for key in ["date", "endDate", "location", "venue", "price", "organizer", "tags"]:
        if key not in allowed_fields:
            continue
        val = extracted.get(key, "")
        if isinstance(val, str) and val.strip():
            patch[key] = val.strip()
    return patch if patch else None
```
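Applying the patch is then one PATCH request per record. PATCH (unlike PUT) leaves every field not in the payload untouched, which is exactly what an enrichment pass wants. An illustrative sketch:

```python
def record_url(base_id, table_id, record_id):
    """Endpoint for a single record in the Airtable REST API."""
    return f"https://api.airtable.com/v0/{base_id}/{table_id}/{record_id}"

def patch_record(record_id, patch, pat, base_id, table_id):
    """Write only the non-empty extracted fields back to one record."""
    import requests  # same client the rest of the pipeline uses
    resp = requests.patch(
        record_url(base_id, table_id, record_id),
        headers={"Authorization": f"Bearer {pat}"},
        json={"fields": patch},
    )
    resp.raise_for_status()
    return resp.json()
```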
The Results
Running the enrichment script across all 87 records:
| Outcome | Count |
|---|---|
| Records enriched | 48 |
| List/aggregator pages (correctly skipped) | 12 |
| No extractable fields (social media, OpenReview, etc.) | 11 |
| Errors (timeouts, HTTP 403s) | 16 |
After enrichment:
| Field | Records populated |
|---|---|
| Date | 42 |
| Location | 41 |
| Tags | 47 |
| Organizer | 27 |
| Price | 4 |
From zero structured data to a directory where most events have dates, locations, and topic tags — without opening a single conference website manually.
Some highlights from the extraction:
- NeurIPS 2026: December 6-12, Sydney, Australia — Deep Learning, Research, Algorithms, LLMs
- CVPR 2026: June 3-7, Denver, CO — Computer Vision, Deep Learning, Research
- ICML 2026: July 6-11, Seoul, South Korea — LLMs, Computer Vision, NLP, Robotics
- AI Council 2026: May 12-14, San Francisco, CA — Generative AI, ML Ops, AI Safety
- MIDL 2026: July 8-10, Taipei — Deep Learning, Healthcare AI, Computer Vision
The Pipeline Today
Here’s what the full system looks like now:
```
SearXNG (self-hosted search)
  → conference_tracker.py (Python — discovers conferences)
  → Airtable (source of truth — 87 records)
  → enrich_conferences.py (Python — AI-powered field extraction)
  → Airtable (now with dates, locations, tags)
  → fetch-events.mjs (Node — build-time data fetch)
  → data.ts (bundled into the site)
  → React + Vite app on Netlify
```
The tracker discovers. The enricher structures. The fetcher delivers. The site displays. Each piece runs independently and can be re-run at any time.
The enrichment script is idempotent — it only processes records where the date field is empty, so running it again only touches new or previously-failed records.
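That idempotency check is just an empty-field test on each fetched record; the same selection could also be pushed server-side with Airtable's filterByFormula parameter. A sketch (the field name matches the table described above):

```python
def needs_enrichment(record):
    """True for records the enricher hasn't touched yet: empty or
    missing date. `record` is an Airtable record dict of the shape
    {"id": ..., "fields": {...}}."""
    return not str(record.get("fields", {}).get("date", "") or "").strip()

# Server-side alternative: pass filterByFormula when listing records,
# so Airtable only returns unenriched rows in the first place:
#   params = {"filterByFormula": "{date} = BLANK()"}
```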
What I’d Do Differently (And What’s Next)
The Timeout Problem
Of the 16 errored records, most hit the 25-second inference timeout (the rest were HTTP 403s). The fast tier (Haiku) is quick but occasionally chokes on pages with dense, complex content. A retry mechanism using the standard tier (Sonnet) for failed records would catch most of these.
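I won't guess at PAI's actual inference API, so here's the retry shape with the call abstracted behind a plain callable — infer and its parameters are stand-ins, not a real interface:

```python
def extract_with_retry(page_text, infer, tiers=("fast", "standard"), timeout=25):
    """Try each inference tier in order, falling through to the next on
    timeout or empty output. `infer(text, tier=..., timeout=...)` is a
    placeholder for whatever inference call the pipeline uses."""
    for tier in tiers:
        try:
            result = infer(page_text, tier=tier, timeout=timeout)
            if result:
                return result
        except TimeoutError:
            continue  # slower tier gets the next attempt
    return None
```

A record only pays the slower-tier cost when the fast tier actually fails.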
Missing Table Columns
The venue and imageUrl fields don’t exist in the Airtable table yet. The enrichment script extracts venue names beautifully (The Venetian for Ai4, COEX Convention Center for ICML, Dongguk University for AAAI Summer), but the data gets dropped because the columns aren’t there. A quick table schema update in the Airtable UI fixes this.
Scheduled Runs
Right now, both the tracker and enricher are manual. The natural next step is scheduling — run the tracker daily to discover new conferences, the enricher on new records, and trigger a Netlify deploy afterward. The Netlify build hook is already configured; it just needs a cron job or GitHub Action to call it.
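For the GitHub Actions route, the whole thing is a short cron workflow. Everything here is illustrative — the script names exist in the project, but the file layout, secret names, and install step are assumptions about the repo, not taken from it:

```yaml
# .github/workflows/refresh.yml — illustrative sketch, not the repo's config
name: refresh-directory
on:
  schedule:
    - cron: "0 6 * * *"    # discover + enrich daily at 06:00 UTC
  workflow_dispatch: {}     # allow manual runs from the Actions tab
jobs:
  refresh:
    runs-on: ubuntu-latest
    env:
      AIRTABLE_PAT: ${{ secrets.AIRTABLE_PAT }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt   # assumes a requirements file
      - run: python conference_tracker.py
      - run: python enrich_conferences.py
      - name: Trigger Netlify build hook
        run: curl -s -X POST "${{ secrets.NETLIFY_BUILD_HOOK }}"
```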
Data Quality
Some records are noise — Reddit discussion threads, Amazon Science blog posts, Twitter/X profiles. A quality filter (either rule-based on URL patterns or AI-powered) would clean the dataset before enrichment runs.
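A rule-based first pass is cheap: a handful of URL patterns catches those categories before any page is fetched or any inference tokens are spent. The pattern list is illustrative, seeded from the noise sources just mentioned:

```python
import re

# Domains that surfaced as noise in the discovered records — discussion
# threads, blogs, and profiles rather than conference sites. Illustrative.
NOISE_PATTERNS = [
    r"://(www\.)?reddit\.com/",
    r"://(www\.)?(twitter|x)\.com/",
    r"://(www\.)?linkedin\.com/posts/",
    r"://(www\.)?amazon\.science/",
]

def is_noise(url):
    """Rule-based pre-filter run before enrichment."""
    return any(re.search(p, url.lower()) for p in NOISE_PATTERNS)
```

Anything this misses could go to a second, AI-powered pass — but the regexes alone remove the bulk of the obvious junk.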
Lessons Learned
1. Eliminate Middlemen Early
Google Sheets added zero value once Airtable was in the picture. But it lingered because it was the “original” approach. Every extra hop in a pipeline is a thing that can break, a thing that needs syncing, and a thing that slows you down. Cut it.
2. Build-Time Data Fetching Is Underrated
Pulling data at build time instead of runtime means no API keys in the browser, no loading spinners, and no CORS headaches. For data that changes daily (not per-second), this is the right architecture.
3. AI Extraction Beats Manual Curation
Using AI to extract structured data from unstructured web pages isn’t perfect — we got 48 out of 87 records enriched, not 87 out of 87. But it took 20 minutes of runtime versus what would have been hours of manual work. And the script is re-runnable. Improvement is incremental.
4. Detect Your Data’s Shape Before Writing
The Airtable 422 errors on venue were entirely preventable. The enrichment script now probes the table schema at startup and only writes to fields that exist. Defensive coding at system boundaries saves debugging time.
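Airtable's Meta API (GET /v0/meta/bases/{baseId}/tables, readable with a PAT that has the schema read scope) returns the full table schema, so the probe reduces to collecting field names for one table. A sketch of the response-parsing half:

```python
def allowed_fields_from_schema(schema, table_id):
    """Given the parsed JSON body of Airtable's Meta API tables endpoint,
    return the set of field names for one table (matched by id or name).
    Anything not in this set must not appear in a PATCH payload."""
    for table in schema.get("tables", []):
        if table.get("id") == table_id or table.get("name") == table_id:
            return {f["name"] for f in table.get("fields", [])}
    return set()
```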
5. List Page Detection Is Essential for Web Scraping Pipelines
When you’re scraping URLs from search results, a significant percentage will be aggregator pages (“Top 10 Best AI Conferences”) rather than individual event pages. If you don’t detect and skip these, you’ll corrupt your dataset with merged data from multiple events. The is_list_page flag in the AI extraction prompt was one of the highest-value additions to the whole pipeline.
The Bigger Picture
This project is a miniature version of a pattern I keep coming back to: systems that compound.
The tracker runs once and discovers 87 conferences. The enricher runs once and structures 48 of them. The next time the tracker runs, it discovers only new conferences (deduplication handles the rest). The next time the enricher runs, it only processes records it hasn’t touched yet.
Every run makes the dataset better without redoing previous work. That’s the whole point of building infrastructure instead of doing things manually — you invest upfront so the system improves over time with minimal additional effort.
Working with Claude through PAI made each layer come together faster than I expected. The tracker, the Airtable integration, the Google Sheets elimination, the enrichment script — each was a focused session where the AI handled the implementation details while I focused on architecture decisions.
That’s the augmented part of Augmented Resilience. Not replacing the thinking — amplifying it.