# Planned Future Work
Items completed in each release are moved to the changelog. The items below are designed for but not yet implemented; the codebase is structured to make each of these additions straightforward.
Completed:
- v0.1.0 — Core export: ChatGPT + Claude, incremental sync, Markdown + JSON output
- v0.2.0 — Joplin import automation (`joplin` command, create/update notes, notebook auto-creation)
## Export `--force` Flag (v0.2.x)
Add `--force` to the `export` command to re-export already-cached conversations
without clearing the entire manifest. Useful for re-generating files after
changing the Markdown template or output structure.

Implementation: pass a `force=True` flag to `cache.get_new_or_updated()`, which
returns all conversations regardless of cache state when `force` is true.

Current workaround: `python -m src.main cache --clear`, then re-run `export`.
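A minimal sketch of how the cache method could honor the flag. The class shape and manifest fields here are illustrative assumptions, not the real implementation:

```python
class ConversationCache:
    """Sketch of a manifest-backed cache (field names are assumptions)."""

    def __init__(self, manifest: dict):
        # manifest maps conversation id -> {"updated_at": <epoch seconds>, ...}
        self.manifest = manifest

    def get_new_or_updated(self, conversations: list, force: bool = False) -> list:
        if force:
            # --force: re-export everything regardless of cache state
            return conversations
        return [
            c for c in conversations
            if c["id"] not in self.manifest
            or c["updated_at"] > self.manifest[c["id"]]["updated_at"]
        ]
```

With `force=True` the cache check is skipped entirely, so the manifest (and the Joplin note IDs it tracks) survives intact.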
## Joplin `--force` Flag (v0.2.x)
Similarly, add `--force` to the `joplin` command to re-sync all cached
conversations to Joplin regardless of whether they've been synced before.
Useful after making formatting changes to the Markdown exporter.

Implementation: in `get_joplin_pending()`, return all entries that have a
`file_path` when `force=True`, ignoring `joplin_synced_at`.
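The selection logic could look something like this (manifest entry fields are assumptions based on the description above):

```python
def get_joplin_pending(entries: dict, force: bool = False) -> list:
    """Return conversation ids whose exported file should be (re-)synced.

    `entries` sketches the manifest: id -> {"file_path": str or None,
    "joplin_synced_at": float or None}. Illustrative, not the real signature.
    """
    pending = []
    for conv_id, meta in entries.items():
        if not meta.get("file_path"):
            continue  # never exported to Markdown, so nothing to sync
        if force or meta.get("joplin_synced_at") is None:
            pending.append(conv_id)
    return pending
```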
## Per-Conversation Cache Reset (v0.2.x)
Add `cache --reset --conversation <id>` to force re-export or re-sync of a
single conversation without clearing the entire provider cache.

Current workaround: manually edit `~/.ai-chat-exporter/manifest.json`, delete
the entry, then re-run `export`.
## Official API Fallback (v0.3.0)
If the unofficial internal web API approach breaks, migrate to official export file parsing as a fallback:

- ChatGPT: parse `conversations.json` from Settings → Export Data
- Claude: parse `conversations.json` from Settings → Privacy → Export Data
The `BaseProvider` abstract class is intentionally designed so that a
`FileProvider` subclass can implement the same interface
(`list_conversations`, `get_conversation`, `normalize_conversation`)
without any changes to cache, exporters, or CLI code.

To add this: implement `src/providers/file_chatgpt.py` and
`src/providers/file_claude.py`, then add an `--input-file` flag to the
`export` command to accept a pre-downloaded export ZIP or JSON.
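A sketch of what such a file-backed provider could look like. The abstract base here is a stand-in matching the method names above; the real class and the export file's exact schema may differ:

```python
import json
from abc import ABC, abstractmethod


class BaseProvider(ABC):
    """Stand-in for the real abstract base; method names follow the text above."""

    @abstractmethod
    def list_conversations(self): ...

    @abstractmethod
    def get_conversation(self, conv_id): ...

    @abstractmethod
    def normalize_conversation(self, raw): ...


class FileChatGPTProvider(BaseProvider):
    """Reads a pre-downloaded conversations.json instead of the web API."""

    def __init__(self, path: str):
        with open(path, encoding="utf-8") as f:
            # Assumes the export is a JSON array of conversation objects
            self._data = {c["id"]: c for c in json.load(f)}

    def list_conversations(self):
        return list(self._data.values())

    def get_conversation(self, conv_id):
        return self._data[conv_id]

    def normalize_conversation(self, raw):
        # Real mapping to the normalized schema would go here
        return raw
```

Because the interface is identical, the cache, exporters, and CLI would treat this provider exactly like the web-API one.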
## Rich Content Support (v0.4.0)
Currently only text content is exported. Future versions should handle:
### Claude
- Artifacts (code, documents, HTML) — export as separate files, link from Markdown
- Uploaded images — download and embed or link
- Extended thinking/reasoning blocks — include as collapsible sections
- Tool call results and web search citations — include as footnotes or appendices
### ChatGPT
- DALL-E generated images — download and embed or link
- Code Interpreter outputs — export code and results
- File attachments — download and reference
- Voice transcripts — include as text
Implementation note: the normalized message schema already includes a
`content_type` field placeholder. When this work begins, extend the schema
rather than replacing it. Non-text content already logs a WARNING when
encountered, so users can see what was skipped.
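The extend-don't-replace approach might look like this. The function name and exact field semantics are assumptions; only the `content_type` placeholder and the WARNING behavior come from the text above:

```python
import logging

logger = logging.getLogger("exporter")


def render_message(msg: dict):
    """Render one normalized message to Markdown text (hypothetical helper).

    Today only "text" is handled; rich content would add new content_type
    values ("image", "artifact", ...) and new branches here, keeping the
    existing field rather than replacing it.
    """
    if msg.get("content_type", "text") == "text":
        return msg["content"]
    # Matches the current behavior: warn and skip unsupported content
    logger.warning("Skipping unsupported content_type=%s", msg["content_type"])
    return None
```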
## Scheduled / Watch Mode (v0.5.0)
Add a watch command (or cron integration helper) to run exports automatically
on a schedule:
```
python -m src.main watch --interval 6h   # poll every 6 hours
```
This would run `export` + `joplin` in sequence, then sleep. Alternatively,
provide a `cron` command that prints the correct crontab line for the user's
setup.

Implementation: simple loop with `time.sleep()`, or emit a crontab entry
string that calls the `export` and `joplin` commands in sequence. A `--once`
flag would do a single run and then exit (useful for cron itself).
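The loop itself is small. This sketch assumes the subcommands shown above exist as-is; `parse_interval` is a hypothetical helper for the `6h`-style argument:

```python
import subprocess
import sys
import time


def parse_interval(text: str) -> float:
    """Parse '6h', '30m', '45s' into seconds (hypothetical helper)."""
    units = {"s": 1, "m": 60, "h": 3600}
    return float(text[:-1]) * units[text[-1]]


def watch(interval_seconds: float, once: bool = False) -> None:
    """Run export then joplin, then sleep; --once does a single pass."""
    while True:
        for cmd in ("export", "joplin"):
            # check=False: a transient failure shouldn't kill the watcher
            subprocess.run([sys.executable, "-m", "src.main", cmd], check=False)
        if once:
            break
        time.sleep(interval_seconds)
```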
## Obsidian Vault Output (v0.5.0)
Add an `obsidian` command (or `--target obsidian` flag) to sync exported
conversations into an Obsidian vault directory. The current Markdown format
is already largely compatible; the main differences are:

- Obsidian uses YAML frontmatter properties (same format, already supported)
- Tags should use `#tag` inline or a `tags:` list in frontmatter (already done)
- Wikilinks (`[[Title]]`) instead of Markdown links — optional, Obsidian supports both
Implementation: the existing `MarkdownExporter` output is already valid in
Obsidian. An `ObsidianSyncer` class (mirroring `JoplinClient`) would simply
copy files to the vault directory and maintain a flat or nested folder
structure matching the user's Obsidian setup. No API needed — just file I/O.
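Since it is pure file I/O, the whole syncer could be roughly this. The class name comes from the text above; the `nested` option and method shape are assumptions:

```python
import shutil
from pathlib import Path


class ObsidianSyncer:
    """Copy exported Markdown into an Obsidian vault (sketch only)."""

    def __init__(self, export_dir, vault_dir, nested: bool = True):
        self.export_dir = Path(export_dir)
        self.vault_dir = Path(vault_dir)
        self.nested = nested  # keep provider/project subfolders, or flatten

    def sync(self) -> int:
        """Copy every exported .md file into the vault; return count copied."""
        copied = 0
        for src in self.export_dir.rglob("*.md"):
            rel = src.relative_to(self.export_dir)
            dest = self.vault_dir / (rel if self.nested else rel.name)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)  # preserves mtimes, which Obsidian displays
            copied += 1
        return copied
```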
## Joplin Nested Notebooks (future)
Currently notebooks are flat: `ChatGPT - My Project`. Joplin supports nested
notebooks via `parent_id`. A future option (`JOPLIN_NESTED_NOTEBOOKS=true`)
could create a two-level hierarchy:

```
ChatGPT/
  My Project/
  No Project/
Claude/
  Budget Tracker/
```
Implementation: `get_or_create_notebook` would first find/create the provider
notebook, then find/create the project notebook as a child.
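The two-level find-or-create can be isolated from HTTP entirely, which keeps it testable. This sketch takes the existing folder list (shaped like Joplin's `GET /folders` items, which carry `id`, `title`, and `parent_id`) plus a `create(title, parent_id)` callback that would wrap `POST /folders`; how the real `JoplinClient` wires this up is an assumption:

```python
def get_or_create_nested_notebook(folders, create, provider, project):
    """Return the notebook id for provider/project, creating levels as needed.

    folders: list of {"id", "title", "parent_id"} dicts (top-level folders
    have parent_id ""). create(title, parent_id) makes a folder and returns
    its dict. Newly created folders are appended so repeat calls are cheap.
    """
    def find(title, parent_id):
        return next((f for f in folders
                     if f["title"] == title
                     and f.get("parent_id", "") == parent_id), None)

    parent = find(provider, "")
    if parent is None:
        parent = create(provider, "")
        folders.append(parent)

    title = project or "No Project"
    child = find(title, parent["id"])
    if child is None:
        child = create(title, parent["id"])
        folders.append(child)
    return child["id"]
```

Because both levels are find-before-create, re-running the sync stays idempotent, matching the existing flat-notebook behavior.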
## Token Expiry Notifications (future)
Proactively warn when a token is close to expiry (within 48h for ChatGPT),
rather than only surfacing the warning at startup. Options:

- Add an `expiry` subcommand that prints token status and exits non-zero if
  any token is expired or expiring soon (useful in scripts/cron)
- Send a desktop notification via `notify-send` (Linux) or `osascript` (macOS)
  when a token is within 24h of expiry
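The core check behind an `expiry` subcommand could be a pure function like this; the token mapping and thresholds are assumptions, with the CLI exit code driven by whether either list is non-empty:

```python
import time


def check_expiry(tokens, now=None, soon_seconds=48 * 3600):
    """Classify tokens as expired or expiring soon.

    tokens: mapping of token name -> expiry timestamp (epoch seconds);
    the field layout is an assumption, not the real config format.
    Returns (expired, expiring_soon) lists of names.
    """
    now = time.time() if now is None else now
    expired = [name for name, exp in tokens.items() if exp <= now]
    soon = [name for name, exp in tokens.items()
            if now < exp <= now + soon_seconds]
    return expired, soon
```

A script/cron caller would exit non-zero when either list is non-empty; the desktop-notification path would shell out to `notify-send` or `osascript` with the same lists.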
## Search Command (future)
Add a `search` command to full-text search across all exported Markdown files:

```
python -m src.main search "kubernetes ingress"
python -m src.main search "kubernetes ingress" --provider claude --project devops
```
Implementation: grep/ripgrep over `EXPORT_DIR`, displaying results with
conversation title, date, and a snippet. No index needed — Markdown files are
small enough to grep directly.
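If shelling out to ripgrep is undesirable, a pure-Python grep over the export tree is only a few lines. Function name and snippet sizing are assumptions:

```python
from pathlib import Path


def search_exports(export_dir, query, context=80):
    """Case-insensitive substring search over exported Markdown files.

    Returns (path, snippet) pairs for each file containing the query.
    """
    needle = query.lower()
    hits = []
    for path in Path(export_dir).rglob("*.md"):
        text = path.read_text(encoding="utf-8", errors="replace")
        idx = text.lower().find(needle)
        if idx != -1:
            # A little leading context, then `context` chars of trailing text
            snippet = text[max(0, idx - 20): idx + context].replace("\n", " ")
            hits.append((str(path), snippet))
    return hits
```

The real command would additionally pull the conversation title and date from each file's frontmatter before printing.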