# Planned Future Work

Items completed in each release are moved to the changelog. Items here are designed for but not yet implemented. The codebase is structured to make each of these additions straightforward.

**Completed:**

- v0.1.0 — Core export: ChatGPT + Claude, incremental sync, Markdown + JSON output
- v0.2.0 — Joplin import automation (`joplin` command, create/update notes, notebook auto-creation)

---

## Export `--force` Flag (v0.2.x)

Add `--force` to the `export` command to re-export already-cached conversations without clearing the entire manifest. Useful for regenerating files after changing the Markdown template or output structure.

Implementation: pass a `force=True` flag to `cache.get_new_or_updated()`, which returns all conversations regardless of cache state when `force` is true.

Current workaround: `python -m src.main cache --clear`, then re-run the export.

## Joplin `--force` Flag (v0.2.x)

Similarly, add `--force` to the `joplin` command to re-sync all cached conversations to Joplin regardless of whether they've been synced before. Useful after making formatting changes to the Markdown exporter.

Implementation: in `get_joplin_pending()`, when `force=True`, return all entries that have a `file_path`, ignoring `joplin_synced_at`.

## Per-Conversation Cache Reset (v0.2.x)

Add `cache --reset --conversation <id>` to force re-export or re-sync of a single conversation without clearing the entire provider cache.

Current workaround: manually edit `~/.ai-chat-exporter/manifest.json`, delete the entry, then re-run the export.
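The two `--force` flags share one pattern: bypass the staleness check in the cache layer and return everything. A minimal sketch of the `get_new_or_updated()` change — the manifest layout and field names here are illustrative assumptions, not the actual code:

```python
from dataclasses import dataclass, field


@dataclass
class Manifest:
    # Hypothetical shape: conversation id -> cached metadata dict
    entries: dict = field(default_factory=dict)


class Cache:
    def __init__(self, manifest: Manifest):
        self.manifest = manifest

    def get_new_or_updated(self, conversations, force=False):
        """Return conversations that need exporting.

        With force=True, skip the updated_at comparison entirely and
        return every conversation, leaving the manifest untouched.
        """
        if force:
            return list(conversations)
        pending = []
        for conv in conversations:
            cached = self.manifest.entries.get(conv["id"])
            if cached is None or conv["updated_at"] > cached["updated_at"]:
                pending.append(conv)
        return pending
```

The same shape applies to `get_joplin_pending()`: the `force` branch ignores `joplin_synced_at` instead of `updated_at`.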
---

## Official API Fallback (v0.3.0)

If the unofficial internal web API approach breaks, migrate to official export file parsing as a fallback:

- ChatGPT: parse `conversations.json` from Settings → Export Data
- Claude: parse `conversations.json` from Settings → Privacy → Export Data

The `BaseProvider` abstract class is intentionally designed so that a `FileProvider` subclass can implement the same interface (`list_conversations`, `get_conversation`, `normalize_conversation`) without any changes to the cache, exporters, or CLI code.

To add this: implement `src/providers/file_chatgpt.py` and `src/providers/file_claude.py`, then add an `--input-file` flag to the export command to accept a pre-downloaded export ZIP or JSON file.

---

## Rich Content Support (v0.4.0)

Currently only text content is exported. Future versions should handle:

### Claude

- Artifacts (code, documents, HTML) — export as separate files, link from Markdown
- Uploaded images — download and embed or link
- Extended thinking/reasoning blocks — include as collapsible sections
- Tool call results and web search citations — include as footnotes or appendices

### ChatGPT

- DALL-E generated images — download and embed or link
- Code Interpreter outputs — export code and results
- File attachments — download and reference
- Voice transcripts — include as text

Implementation note: the normalized message schema already includes a `content_type` field placeholder. When this work begins, extend the schema rather than replacing it. Non-text content already logs a WARNING when encountered, so users can see what was skipped.

---

## Scheduled / Watch Mode (v0.5.0)

Add a `watch` command (or a cron integration helper) to run exports automatically on a schedule:

```bash
python -m src.main watch --interval 6h   # poll every 6 hours
```

This would run `export` + `joplin` in sequence, then sleep. Alternatively, provide a `cron` command that prints the correct crontab line for the user's setup.
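One possible shape for the watch loop — the interval-flag syntax (`30m`, `6h`, `1d`) and the injected command runners are assumptions about how the CLI would be wired, not existing code:

```python
import re
import time


def parse_interval(spec: str) -> int:
    """Parse an interval like '30m', '6h', or '1d' into seconds."""
    match = re.fullmatch(r"(\d+)([smhd])", spec)
    if not match:
        raise ValueError(f"bad interval: {spec!r}")
    value, unit = int(match.group(1)), match.group(2)
    return value * {"s": 1, "m": 60, "h": 3600, "d": 86400}[unit]


def watch(run_export, run_joplin, interval="6h", once=False):
    """Run export then joplin, sleep, repeat; once=True exits after one pass."""
    seconds = parse_interval(interval)
    while True:
        run_export()
        run_joplin()
        if once:
            break
        time.sleep(seconds)
```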
Implementation: a simple loop with `time.sleep()`, or emit a crontab entry string that calls the export and joplin commands in sequence. A `--once` flag would do a single run and then exit (useful for cron itself).

---

## Obsidian Vault Output (v0.5.0)

Add an `obsidian` command (or a `--target obsidian` flag) to sync exported conversations into an Obsidian vault directory. The current Markdown format is already largely compatible; the main differences are:

- Obsidian uses YAML frontmatter `properties` (same format, already supported)
- Tags should use `#tag` inline or a `tags:` list in frontmatter (already done)
- Wikilinks (`[[Title]]`) instead of Markdown links — optional, Obsidian supports both

Implementation: the existing `MarkdownExporter` output is already valid in Obsidian. An `ObsidianSyncer` class (mirroring `JoplinClient`) would simply copy files to the vault directory and maintain a flat or nested folder structure matching the user's Obsidian setup. No API needed — just file I/O.

---

## Joplin Nested Notebooks (future)

Currently notebooks are flat: `ChatGPT - My Project`. Joplin supports nested notebooks via `parent_id`. A future option (`JOPLIN_NESTED_NOTEBOOKS=true`) could create a two-level hierarchy:

```
ChatGPT/
  My Project/
  No Project/
Claude/
  Budget Tracker/
```

Implementation: `get_or_create_notebook` would first find/create the provider notebook, then find/create the project notebook as a child.

---

## Token Expiry Notifications (future)

Proactively warn when a token is close to expiry (within 48h for ChatGPT), rather than only surfacing the warning at startup.
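The check itself is a date comparison; a sketch, assuming the token's expiry timestamp is already known (how tokens are stored is not specified here, and the threshold is configurable):

```python
from datetime import datetime, timedelta, timezone


def expiry_status(expires_at: datetime, warn_within=timedelta(hours=48)) -> str:
    """Classify a token as 'expired', 'expiring', or 'ok'."""
    now = datetime.now(timezone.utc)
    if expires_at <= now:
        return "expired"
    if expires_at - now <= warn_within:
        return "expiring"
    return "ok"
```

Anything other than `"ok"` would trigger one of the delivery mechanisms below.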
Options:

- Add an `expiry` subcommand that prints token status and exits non-zero if any token is expired or expiring soon (useful in scripts/cron)
- Send a desktop notification via `notify-send` (Linux) or `osascript` (macOS) when a token is within 24h of expiry

---

## Search Command (future)

Add a `search` command to full-text search across all exported Markdown files:

```bash
python -m src.main search "kubernetes ingress"
python -m src.main search "kubernetes ingress" --provider claude --project devops
```

Implementation: run `grep`/`ripgrep` over `EXPORT_DIR` and display results with the conversation title, date, and a snippet. No index needed — Markdown files are small enough to grep directly.
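A pure-Python sketch of the search (avoiding the external `grep`/`ripgrep` dependency for the filtered case). The `EXPORT_DIR/<provider>/<project>/<conversation>.md` layout assumed by the `--provider`/`--project` filters is an illustration, not a confirmed detail of the exporter:

```python
from pathlib import Path


def search_exports(export_dir, query, provider=None, project=None):
    """Case-insensitive full-text search over exported Markdown files.

    Returns (path, line number, matching line) tuples. Assumes the
    layout EXPORT_DIR/<provider>/<project>/<conversation>.md for the
    optional provider/project filters.
    """
    results = []
    root = Path(export_dir)
    needle = query.lower()
    for path in sorted(root.rglob("*.md")):
        parts = path.relative_to(root).parts
        if provider and (len(parts) < 1 or parts[0].lower() != provider.lower()):
            continue
        if project and (len(parts) < 2 or parts[1].lower() != project.lower()):
            continue
        for lineno, line in enumerate(
            path.read_text(encoding="utf-8").splitlines(), 1
        ):
            if needle in line.lower():
                results.append((str(path), lineno, line.strip()))
    return results
```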