feat: v0.2.0 — Joplin import, ChatGPT Projects, --project filter
Core features:
- Add `joplin` command: syncs exported Markdown to Joplin via local REST API
- Notebooks auto-created per provider+project (e.g. "ChatGPT - My Project")
- Idempotent: notes updated (not duplicated) on re-run; note ID tracked in manifest
- Add `--project` filter to `export` and `list` commands (substring or 'none')
- Add ChatGPT Projects support via CHATGPT_PROJECT_IDS env var

Config:
- Add JOPLIN_API_TOKEN, JOPLIN_API_URL, JOPLIN_REQUEST_TIMEOUT
- Version now read from importlib.metadata (single source of truth: pyproject.toml)
- Bump version to 0.2.0

Quality:
- Explicit Timeout handling in JoplinClient with actionable error messages
- Token validation (validate_token) separate from connectivity (ping)
- Remove debug_auth.py, debug_claude.py, and untracked .har file
- Add *.har to .gitignore (may contain auth cookies/session tokens)
- Update README, CHANGELOG, FUTURE.md to reflect v0.2.0

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
FUTURE.md
# Planned Future Work

Items completed in each release are moved to the changelog. Items here are
designed for but not yet implemented. The codebase is structured to make each
of these additions straightforward.

**Completed:**

- v0.1.0 — Core export: ChatGPT + Claude, incremental sync, Markdown + JSON output
- v0.2.0 — Joplin import automation (`joplin` command, create/update notes, notebook auto-creation)

---

## Export `--force` Flag (v0.2.x)

Add `--force` to the `export` command to re-export already-cached conversations
without permanently clearing the entire manifest. Useful for re-generating files
after changing the Markdown template or output structure.

Current workaround: `python -m src.main cache --clear` then re-run export.
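
A minimal sketch of the cache-side filter (the function and manifest field names here are assumptions; the real schema may differ):

```python
def get_pending_conversations(conversations, manifest, force=False):
    """Return the conversations that still need exporting.

    Hypothetical sketch: without force, skip anything whose manifest
    entry matches the fetched updated_at (already cached). With
    force=True, return everything regardless of cache state.
    """
    if force:
        return list(conversations)
    pending = []
    for conv in conversations:
        entry = manifest.get(conv["id"])
        # Re-export if never seen, or if the conversation changed upstream
        if entry is None or entry.get("updated_at") != conv["updated_at"]:
            pending.append(conv)
    return pending
```

The flag would then only touch the export selection step; file writing and manifest updates stay unchanged.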
## Joplin `--force` Flag (v0.2.x)

Similarly, add `--force` to the `joplin` command to re-sync all cached
conversations to Joplin regardless of whether they've been synced before.
Useful after making formatting changes to the Markdown exporter.

Implementation: in `get_joplin_pending()`, return all entries that have a
`file_path` when `force=True`, ignoring `joplin_synced_at`.
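
The selection logic could look like this (the non-force condition is an assumption about how `joplin_synced_at` is used today):

```python
def get_joplin_pending(manifest, force=False):
    """Select manifest entries to push to Joplin.

    Sketch: normally only entries that were never synced (no
    joplin_synced_at) are returned. With force=True, every entry that
    has an exported file_path is returned, ignoring joplin_synced_at.
    """
    pending = []
    for entry in manifest.values():
        if not entry.get("file_path"):
            continue  # never exported, nothing to push
        if force or not entry.get("joplin_synced_at"):
            pending.append(entry)
    return pending
```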

## Per-Conversation Cache Reset (v0.2.x)

Add `cache --reset --conversation <id>` to force re-export or re-sync of a
single conversation without clearing the entire provider cache.

Current workaround: manually edit `~/.ai-chat-exporter/manifest.json` and
delete the entry, then re-run export.
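
The command would essentially automate that manual edit. A sketch (assuming the manifest is a flat JSON object keyed by conversation id):

```python
import json
from pathlib import Path

def reset_conversation(manifest_path, conversation_id):
    """Drop a single conversation from the manifest so the next export
    re-fetches it. Returns True if an entry was actually removed."""
    path = Path(manifest_path)
    manifest = json.loads(path.read_text())
    removed = manifest.pop(conversation_id, None) is not None
    if removed:
        path.write_text(json.dumps(manifest, indent=2))
    return removed
```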

---

## Official API Fallback (v0.3.0)

If the unofficial internal web API approach breaks, migrate to official export
file parsing as a fallback:

- ChatGPT: parse `conversations.json` from Settings → Export Data
The `BaseProvider` abstract class is intentionally designed so that a
`FileProvider` subclass can implement the same interface
(`list_conversations`, `get_conversation`, `normalize_conversation`)
without any changes to cache, exporters, or CLI code.

To add this: implement `src/providers/file_chatgpt.py` and
`src/providers/file_claude.py`, then add `--input-file` flag to the
export command to accept a pre-downloaded export ZIP or JSON.
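
The shape such a provider could take (heavily simplified; the real `conversations.json` schema is a message tree and this only sketches the listing step, with guessed field names):

```python
import json
from pathlib import Path

class FileChatGPTProvider:
    """Reads a pre-downloaded conversations.json instead of the web API.

    Sketch only: implements the same list_conversations() surface as the
    live providers, so downstream cache/export code would not change.
    """

    def __init__(self, input_file):
        self.conversations = json.loads(Path(input_file).read_text())

    def list_conversations(self):
        # Return the same minimal shape the live provider yields
        return [{"id": c.get("id"), "title": c.get("title")}
                for c in self.conversations]
```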

---

## Rich Content Support (v0.4.0)

Currently only text content is exported. Future versions should handle:

### Claude

Implementation note: the normalized message schema already includes a
`content_type` field placeholder. When this work begins, extend the schema
rather than replacing it. Non-text content already logs a WARNING when
encountered so users can see what was skipped.

---

## Scheduled / Watch Mode (v0.5.0)

Add a `watch` command (or cron integration helper) to run exports automatically
on a schedule:

```bash
python -m src.main watch --interval 6h   # poll every 6 hours
```

This would run `export` + `joplin` in sequence, then sleep. Alternatively,
provide a `cron` command that prints the correct crontab line for the user's
setup.

Implementation: simple loop with `time.sleep()`, or emit a crontab entry
string that calls the export and joplin commands in sequence. A `--once`
flag would do a single run then exit (useful for cron itself).
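
The loop could be as small as this (interval parsing and subcommand invocation are sketched, not the project's actual CLI plumbing):

```python
import subprocess
import sys
import time

def parse_interval(spec):
    """Turn '6h', '30m', or '45s' into seconds."""
    units = {"s": 1, "m": 60, "h": 3600}
    return int(spec[:-1]) * units[spec[-1]]

def watch(interval, once=False):
    """Run export then joplin, sleep, repeat; once=True does one pass."""
    while True:
        for cmd in ("export", "joplin"):
            # check=False: a failed export round shouldn't kill the loop
            subprocess.run([sys.executable, "-m", "src.main", cmd], check=False)
        if once:
            return
        time.sleep(parse_interval(interval))
```

With `--once`, the same entry point doubles as the body of a crontab line.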

---

## Obsidian Vault Output (v0.5.0)

Add an `obsidian` command (or `--target obsidian` flag) to sync exported
conversations into an Obsidian vault directory. The current Markdown format
is already largely compatible; the main differences are:

- Obsidian uses YAML frontmatter `properties` (same format, already supported)
- Tags should use `#tag` inline or `tags:` list in frontmatter (already done)
- Wikilinks (`[[Title]]`) instead of Markdown links — optional, Obsidian
  supports both

Implementation: the existing `MarkdownExporter` output is already valid in
Obsidian. An `ObsidianSyncer` class (mirroring `JoplinClient`) would simply
copy files to the vault directory and maintain a flat or nested folder
structure matching the user's Obsidian setup. No API needed — just file I/O.
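
A sketch of that file-I/O-only syncer (class and method names are placeholders, not existing code):

```python
import shutil
from pathlib import Path

class ObsidianSyncer:
    """Copy exported Markdown into an Obsidian vault, mirroring the
    provider/project folder structure. Pure file I/O, no API."""

    def __init__(self, export_dir, vault_dir):
        self.export_dir = Path(export_dir)
        self.vault_dir = Path(vault_dir)

    def sync(self):
        copied = 0
        for src in self.export_dir.rglob("*.md"):
            dest = self.vault_dir / src.relative_to(self.export_dir)
            dest.parent.mkdir(parents=True, exist_ok=True)
            # Only copy new or changed files, so re-runs are cheap
            if not dest.exists() or src.read_bytes() != dest.read_bytes():
                shutil.copy2(src, dest)
                copied += 1
        return copied
```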

---

## Joplin Nested Notebooks (future)

Currently notebooks are flat: `ChatGPT - My Project`. Joplin supports nested
notebooks via `parent_id`. A future option (`JOPLIN_NESTED_NOTEBOOKS=true`)
could create a two-level hierarchy:

```
ChatGPT/
  My Project/
  No Project/
Claude/
  Budget Tracker/
```

Implementation: `get_or_create_notebook` would first find/create the provider
notebook, then find/create the project notebook as a child.
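
The two-level lookup could sit on top of the existing client. Here `find_notebook`/`create_notebook` are assumed thin wrappers over Joplin's `/folders` endpoint (where `parent_id` nests a notebook), not methods the current `JoplinClient` necessarily has:

```python
def get_or_create_nested_notebook(client, provider, project):
    """Find/create the provider notebook, then the project notebook as
    its child; return the child's notebook id."""
    parent = client.find_notebook(provider, parent_id="")
    if parent is None:
        parent = client.create_notebook(provider, parent_id="")
    child = client.find_notebook(project, parent_id=parent["id"])
    if child is None:
        child = client.create_notebook(project, parent_id=parent["id"])
    return child["id"]
```

Both calls are idempotent, so re-running the sync reuses the existing hierarchy.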

---

## Token Expiry Notifications (future)

Proactively warn when a token is close to expiry (within 48h for ChatGPT),
rather than only surfacing the warning at startup. Options:

- Add an `expiry` subcommand that prints token status and exits non-zero if
  any token is expired or expiring soon (useful in scripts/cron)
- Send a desktop notification via `notify-send` (Linux) or `osascript` (macOS)
  when a token is within 24h of expiry
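
The `expiry` subcommand's core could be a pure status check that cron keys off the exit code (the exit-code convention here — 0 ok, 1 expiring, 2 expired — is an assumption):

```python
import time
from typing import Optional

def token_status(expiries, warn_within=48 * 3600, now: Optional[float] = None):
    """Print each token's status; return the worst exit code seen
    (0 = ok, 1 = expiring within warn_within, 2 = expired)."""
    now = time.time() if now is None else now
    worst = 0
    for name, expires_at in expiries.items():
        remaining = expires_at - now
        if remaining <= 0:
            print(f"{name}: EXPIRED")
            worst = max(worst, 2)
        elif remaining <= warn_within:
            print(f"{name}: expiring in {remaining / 3600:.1f}h")
            worst = max(worst, 1)
        else:
            print(f"{name}: ok ({remaining / 86400:.1f}d left)")
    return worst
```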

---

## Search Command (future)

Add a `search` command to full-text search across all exported Markdown files:

```bash
python -m src.main search "kubernetes ingress"
python -m src.main search "kubernetes ingress" --provider claude --project devops
```

Implementation: `grep`/`ripgrep` over `EXPORT_DIR`, display results with
conversation title, date, and a snippet. No index needed — Markdown files are
small enough to grep directly.
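
Shelling out to `ripgrep` would be fastest, but a plain in-process scan is enough at this scale. A sketch (function name and result shape are illustrative):

```python
from pathlib import Path

def search_exports(query, export_dir, provider=""):
    """Case-insensitive scan over exported Markdown files; returns
    (relative path, matching line) pairs."""
    root = Path(export_dir) / provider if provider else Path(export_dir)
    if not root.is_dir():
        return []
    q = query.lower()
    hits = []
    for path in sorted(root.rglob("*.md")):
        for line in path.read_text(errors="replace").splitlines():
            if q in line.lower():
                hits.append((str(path.relative_to(export_dir)), line.strip()))
    return hits
```

Title and date could then be pulled from each hit file's frontmatter for display.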