ai-chatexport/FUTURE.md
JesseMarkowitz 62445c7c0c chore: initialize project scaffold
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-27 22:45:46 -05:00


# Planned Future Work
These items are explicitly out of scope for v0.1.0, but each has been designed for:
the codebase is structured to make these additions straightforward.
## Export --force Flag (v0.1.x)
Add `--force` to the `export` command to re-export already-cached conversations
without clearing the entire manifest. Useful for regenerating files
after changing the Markdown template or output structure.
Implementation: pass a `force=True` flag to `cache.get_new_or_updated()`, which
returns all conversations regardless of cache state when force is True.
Current workaround: `python -m src.main cache --clear` then re-run export.
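A minimal sketch of how the flag could thread through the cache (the `ConversationCache` class name and the manifest shape are assumptions, not the actual implementation):

```python
class ConversationCache:
    """Hypothetical cache mapping conversation ID -> last-updated timestamp."""

    def __init__(self):
        self.manifest = {}  # {conversation_id: updated_at}

    def get_new_or_updated(self, conversations, force=False):
        """Return conversations that are new or changed since the cached
        timestamp; with force=True, return everything regardless of state."""
        if force:
            return list(conversations)
        return [
            c for c in conversations
            if self.manifest.get(c["id"]) != c["updated_at"]
        ]
```

The CLI layer would only need to forward the parsed `--force` flag into this call; the manifest itself is left untouched, so a later incremental export still works.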
## Joplin Integration (v0.2.0)
Automate importing exported Markdown files into Joplin as new notes.
Joplin exposes a local REST API (requires Joplin desktop running with Web Clipper enabled).
Approach: after export, iterate exported files and POST each to
`http://localhost:41184/notes` with the appropriate notebook ID.
The output folder structure maps directly to Joplin notebooks:
- `exports/chatgpt/my-project/` → Joplin notebook "ChatGPT - My Project"
- `exports/claude/my-project/` → Joplin notebook "Claude - My Project"
- `exports/chatgpt/no-project/` → Joplin notebook "ChatGPT - No Project"
- `exports/claude/no-project/` → Joplin notebook "Claude - No Project"
Prerequisites:
- Joplin desktop must be running with Web Clipper enabled
- `JOPLIN_API_TOKEN` env var (get from Joplin → Tools → Web Clipper Options)
- The Joplin import script will need to create notebooks if they don't exist,
then POST each note into the correct notebook
Note: The default `OUTPUT_STRUCTURE` of `provider/project/year` is assumed when
implementing the import script. If the user has changed `OUTPUT_STRUCTURE`,
the import script will need updating accordingly.
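A sketch of the two core pieces using only the standard library (the helper names and the title-casing rule are assumptions; the `/notes` endpoint and the `token` query parameter are part of Joplin's Data API):

```python
import json
import urllib.request

JOPLIN_URL = "http://localhost:41184"


def notebook_name(export_path):
    """Map an exports/<provider>/<project>/ path to a Joplin notebook title,
    e.g. exports/chatgpt/my-project -> "ChatGPT - My Project".
    The casing rule here is a guess at the mapping described above."""
    parts = export_path.strip("/").split("/")
    provider, project = parts[1], parts[2]
    pretty = {"chatgpt": "ChatGPT", "claude": "Claude"}.get(
        provider, provider.title()
    )
    return f"{pretty} - {project.replace('-', ' ').title()}"


def post_note(title, body, notebook_id, token):
    """Create a note via the Joplin Data API. Requires the desktop app
    to be running with the Web Clipper service enabled."""
    payload = json.dumps(
        {"title": title, "body": body, "parent_id": notebook_id}
    )
    req = urllib.request.Request(
        f"{JOPLIN_URL}/notes?token={token}",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Notebook creation would follow the same pattern against the `/folders` endpoint: list existing folders, create any missing ones, then pass the resulting folder ID as `notebook_id`.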
## Official API Migration (v0.3.0)
If the unofficial internal web API approach breaks, migrate to official export
file parsing as a fallback:
- ChatGPT: parse `conversations.json` from Settings → Export Data
- Claude: parse `conversations.json` from Settings → Privacy → Export Data
The `BaseProvider` abstract class is intentionally designed so that a
`FileProvider` subclass can implement the same interface
(`list_conversations`, `get_conversation`, `normalize_conversation`)
without any changes to cache, exporters, or CLI code.
To add this: implement `src/providers/file_chatgpt.py` and
`src/providers/file_claude.py`, then add `--input-file` flag to the
export command to accept a pre-downloaded export ZIP or JSON.
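A sketch of the shape such a provider could take, standing in for the real `BaseProvider` subclass (the method names come from the interface above; the field names inside ChatGPT's `conversations.json` and the normalized schema are assumptions):

```python
import json
from pathlib import Path


class FileChatGPTProvider:
    """Hypothetical provider that reads conversations.json from an
    official ChatGPT data export instead of the internal web API."""

    def __init__(self, input_file):
        self._raw = json.loads(Path(input_file).read_text(encoding="utf-8"))

    def list_conversations(self):
        # The export file is assumed to be a list of conversation objects.
        return [{"id": c.get("id"), "title": c.get("title")}
                for c in self._raw]

    def get_conversation(self, conversation_id):
        return next(c for c in self._raw if c.get("id") == conversation_id)

    def normalize_conversation(self, raw):
        # Reduce the provider-specific shape to the shared schema the
        # exporters already consume (these keys are illustrative only).
        return {
            "id": raw.get("id"),
            "title": raw.get("title"),
            "messages": raw.get("mapping", {}),
        }
```

Because the downstream code only sees the normalized shape, swapping this in behind `--input-file` should require no changes outside the provider layer.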
## Rich Content Support (v0.4.0)
Currently only text content is exported. Future versions should handle:
### Claude
- Artifacts (code, documents, HTML) — export as separate files, link from Markdown
- Uploaded images — download and embed or link
- Extended thinking/reasoning blocks — include as collapsible sections
- Tool call results and web search citations — include as footnotes or appendices
### ChatGPT
- DALL-E generated images — download and embed or link
- Code Interpreter outputs — export code and results
- File attachments — download and reference
- Voice transcripts — include as text
Implementation note: the normalized message schema already includes a
`content_type` field placeholder. When this work begins, extend the schema
rather than replacing it. In v0.1.0, log a WARNING whenever non-text content
is encountered so users know what was skipped.
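A minimal sketch of the v0.1.0 warning behavior (the function name, the `TEXT_TYPES` set, and the message fields are assumptions about the normalized schema):

```python
import logging

logger = logging.getLogger("ai-chatexport")

# Assumed value of the existing content_type placeholder for plain text.
TEXT_TYPES = {"text"}


def render_message(message):
    """Render a normalized message to Markdown text, logging a WARNING
    and returning None for content types the exporter cannot handle yet."""
    content_type = message.get("content_type", "text")
    if content_type not in TEXT_TYPES:
        logger.warning(
            "Skipping unsupported content_type %r in message %s",
            content_type, message.get("id"),
        )
        return None
    return message.get("text", "")
```

Keeping the check keyed on `content_type` means v0.4.0 can extend `TEXT_TYPES` (or dispatch per type) without changing the callers.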