Add detailed migration plan for PieceTable-based buffer architecture.

- Created `piece-table-migration.md` outlining the steps to transition from GapBuffer to a unified PieceTable architecture.
- Included phased approach: extending PieceTable, Buffer adapter layer, command updates, and renderer changes.
- Detailed API changes, file updates, testing strategy, risk assessment, and timeline for each migration phase.
- Document serves as a reference for architecture goals and implementation details.
2025-12-05 20:21:33 -08:00
parent 71c1c9e50b
commit 3f4c60d311
10 changed files with 3608 additions and 219 deletions

Buffer.cc (121 changed lines)

@@ -3,6 +3,9 @@
#include <filesystem>
#include <cstdlib>
#include <limits>
#include <cerrno>
#include <cstring>
#include <string_view>
#include "Buffer.h"
#include "UndoSystem.h"
@@ -295,15 +298,17 @@ Buffer::Save(std::string &err) const
}
std::ofstream out(filename_, std::ios::out | std::ios::binary | std::ios::trunc);
if (!out) {
err = "Failed to open for write: " + filename_;
err = "Failed to open for write: " + filename_ + ". Error: " + std::string(std::strerror(errno));
return false;
}
const char *d = content_.Data();
std::size_t n = content_.Size();
if (d && n)
out.write(d, static_cast<std::streamsize>(n));
// Write the entire buffer in a single block to minimize I/O calls.
const char *data = content_.Data();
const auto size = static_cast<std::streamsize>(content_.Size());
if (data != nullptr && size > 0) {
out.write(data, size);
}
if (!out.good()) {
err = "Write error";
err = "Write error: " + filename_ + ". Error: " + std::string(std::strerror(errno));
return false;
}
// Note: const method cannot change dirty_. Intentionally const to allow UI code
@@ -337,17 +342,17 @@ Buffer::SaveAs(const std::string &path, std::string &err)
// Write to the given path
std::ofstream out(out_path, std::ios::out | std::ios::binary | std::ios::trunc);
if (!out) {
err = "Failed to open for write: " + out_path;
err = "Failed to open for write: " + out_path + ". Error: " + std::string(std::strerror(errno));
return false;
}
{
const char *d = content_.Data();
std::size_t n = content_.Size();
if (d && n)
out.write(d, static_cast<std::streamsize>(n));
// Write whole content in a single I/O operation
const char *data = content_.Data();
const auto size = static_cast<std::streamsize>(content_.Size());
if (data != nullptr && size > 0) {
out.write(data, size);
}
if (!out.good()) {
err = "Write error";
err = "Write error: " + out_path + ". Error: " + std::string(std::strerror(errno));
return false;
}
@@ -389,6 +394,20 @@ Buffer::insert_text(int row, int col, std::string_view text)
// ===== Adapter helpers for PieceTable-backed Buffer =====
std::string_view
Buffer::GetLineView(std::size_t row) const
{
// Get byte range for the logical line and return a view into materialized data
auto range = content_.GetLineRange(row); // [start,end) in bytes
const char *base = content_.Data(); // materializes if needed
if (!base)
return std::string_view();
const std::size_t start = range.first;
const std::size_t len = (range.second > range.first) ? (range.second - range.first) : 0;
return std::string_view(base + start, len);
}
void
Buffer::ensure_rows_cache() const
{
@@ -422,66 +441,42 @@ Buffer::delete_text(int row, int col, std::size_t len)
row = 0;
if (col < 0)
col = 0;
std::size_t start = content_.LineColToByteOffset(static_cast<std::size_t>(row), static_cast<std::size_t>(col));
// Walk len logical characters across lines to compute end offset
std::size_t r = static_cast<std::size_t>(row);
std::size_t c = static_cast<std::size_t>(col);
std::size_t remaining = len;
const std::size_t line_count = content_.LineCount();
while (remaining > 0 && r < line_count) {
auto range = content_.GetLineRange(r); // [start,end)
// Compute end of line excluding trailing '\n'
std::size_t line_end = range.second;
if (line_end > range.first) {
// If last char is '\n', don't count in-column span
std::string last = content_.GetRange(line_end - 1, 1);
if (!last.empty() && last[0] == '\n') {
line_end -= 1;
}
const std::size_t start = content_.LineColToByteOffset(static_cast<std::size_t>(row),
static_cast<std::size_t>(col));
std::size_t r = static_cast<std::size_t>(row);
std::size_t c = static_cast<std::size_t>(col);
std::size_t remaining = len;
const std::size_t lc = content_.LineCount();
while (remaining > 0 && r < lc) {
const std::string line = content_.GetLine(r); // logical line (without trailing '\n')
const std::size_t L = line.size();
if (c < L) {
const std::size_t take = std::min(remaining, L - c);
c += take;
remaining -= take;
}
std::size_t cur_off = content_.LineColToByteOffset(r, c);
std::size_t in_line = (cur_off < line_end) ? (line_end - cur_off) : 0;
if (remaining <= in_line) {
// All within current line
std::size_t end = cur_off + remaining;
content_.Delete(start, end - start);
rows_cache_dirty_ = true;
return;
}
// Consume rest of line
remaining -= in_line;
std::size_t end = cur_off + in_line;
// If there is a next line and remaining > 0, consider consuming the newline as 1
if (r + 1 < line_count) {
if (remaining == 0)
break;
// Consume newline between lines as one char, if there is a next line
if (r + 1 < lc) {
if (remaining > 0) {
// newline
end += 1;
remaining -= 1;
remaining -= 1; // the newline
r += 1;
c = 0;
}
// Move to next line
r += 1;
c = 0;
// Update start deletion length so far by postponing until we know final end; we keep start fixed
if (remaining == 0) {
content_.Delete(start, end - start);
rows_cache_dirty_ = true;
return;
}
// Continue loop with updated r/c; but also keep track of 'end' as current consumed position
// Rather than tracking incrementally, we will recompute cur_off at top of loop.
// However, we need to carry forward the consumed part; we can temporarily store 'end' in start_of_next
// To simplify, after loop finishes we will compute final end using current r/c using remaining.
} else {
// No next line; delete to file end
// At last line and still remaining: delete to EOF
std::size_t total = content_.Size();
content_.Delete(start, total - start);
rows_cache_dirty_ = true;
return;
}
}
// If loop ended because remaining==0 at a line boundary
if (remaining == 0) {
std::size_t end = content_.LineColToByteOffset(r, c);
// Compute end offset at (r,c)
std::size_t end = content_.LineColToByteOffset(r, c);
if (end > start) {
content_.Delete(start, end - start);
rows_cache_dirty_ = true;
}


@@ -12,7 +12,6 @@
#include "PieceTable.h"
#include "UndoSystem.h"
#include <cstdint>
#include <memory>
#include "syntax/HighlighterEngine.h"
#include "Highlight.h"
@@ -79,7 +78,8 @@ public:
}
// Line wrapper backed by PieceTable
// Line wrapper used by legacy command paths.
// Keep this lightweight: store materialized bytes only for that line.
class Line {
public:
Line() = default;
@@ -108,119 +108,102 @@ public:
// capacity helpers
void Clear()
{
buf_.Clear();
s_.clear();
}
// size/access
[[nodiscard]] std::size_t size() const
{
return buf_.Size();
return s_.size();
}
[[nodiscard]] bool empty() const
{
return size() == 0;
return s_.empty();
}
// read-only raw view
[[nodiscard]] const char *Data() const
{
return buf_.Data();
return s_.data();
}
[[nodiscard]] std::size_t Size() const
{
return buf_.Size();
return s_.size();
}
// element access (read-only)
[[nodiscard]] char operator[](std::size_t i) const
{
const char *d = buf_.Data();
return (i < buf_.Size() && d) ? d[i] : '\0';
return (i < s_.size()) ? s_[i] : '\0';
}
// conversions
explicit operator std::string() const
{
return {buf_.Data() ? buf_.Data() : "", buf_.Size()};
return s_;
}
// string-like API used by command/renderer layers (implemented via materialization for now)
[[nodiscard]] std::string substr(std::size_t pos) const
{
const std::size_t n = buf_.Size();
if (pos >= n)
return {};
return {buf_.Data() + pos, n - pos};
return pos < s_.size() ? s_.substr(pos) : std::string();
}
[[nodiscard]] std::string substr(std::size_t pos, std::size_t len) const
{
const std::size_t n = buf_.Size();
if (pos >= n)
return {};
const std::size_t take = (pos + len > n) ? (n - pos) : len;
return {buf_.Data() + pos, take};
return pos < s_.size() ? s_.substr(pos, len) : std::string();
}
// minimal find() to support search within a line
[[nodiscard]] std::size_t find(const std::string &needle, const std::size_t pos = 0) const
{
// Materialize to std::string for now; Line is backed by PieceTable
const auto s = static_cast<std::string>(*this);
return s.find(needle, pos);
return s_.find(needle, pos);
}
void erase(std::size_t pos)
{
// erase to end
material_edit([&](std::string &s) {
if (pos < s.size())
s.erase(pos);
});
if (pos < s_.size())
s_.erase(pos);
}
void erase(std::size_t pos, std::size_t len)
{
material_edit([&](std::string &s) {
if (pos < s.size())
s.erase(pos, len);
});
if (pos < s_.size())
s_.erase(pos, len);
}
void insert(std::size_t pos, const std::string &seg)
{
material_edit([&](std::string &s) {
if (pos > s.size())
pos = s.size();
s.insert(pos, seg);
});
if (pos > s_.size())
pos = s_.size();
s_.insert(pos, seg);
}
Line &operator+=(const Line &other)
{
buf_.Append(other.buf_.Data(), other.buf_.Size());
s_ += other.s_;
return *this;
}
Line &operator+=(const std::string &s)
{
buf_.Append(s.data(), s.size());
s_ += s;
return *this;
}
@@ -234,22 +217,11 @@ public:
private:
void assign_from(const std::string &s)
{
buf_.Clear();
if (!s.empty())
buf_.Append(s.data(), s.size());
s_ = s;
}
template<typename F>
void material_edit(F fn)
{
std::string tmp = static_cast<std::string>(*this);
fn(tmp);
assign_from(tmp);
}
PieceTable buf_;
std::string s_;
};
@@ -267,6 +239,25 @@ public:
}
// Lightweight, lazy per-line accessors that avoid materializing all rows.
// Prefer these over Rows() in hot paths to reduce memory overhead on large files.
[[nodiscard]] std::string GetLineString(std::size_t row) const
{
return content_.GetLine(row);
}
[[nodiscard]] std::pair<std::size_t, std::size_t> GetLineRange(std::size_t row) const
{
return content_.GetLineRange(row);
}
// Zero-copy view of a line. Points into the materialized backing store; becomes
// invalid after subsequent edits. Use immediately.
[[nodiscard]] std::string_view GetLineView(std::size_t row) const;
[[nodiscard]] const std::string &Filename() const
{
return filename_;
@@ -411,13 +402,13 @@ public:
}
kte::HighlighterEngine *Highlighter()
[[nodiscard]] kte::HighlighterEngine *Highlighter()
{
return highlighter_.get();
}
const kte::HighlighterEngine *Highlighter() const
[[nodiscard]] const kte::HighlighterEngine *Highlighter() const
{
return highlighter_.get();
}
@@ -452,7 +443,7 @@ public:
void delete_row(int row);
// Undo system accessors (created per-buffer)
UndoSystem *Undo();
[[nodiscard]] UndoSystem *Undo();
[[nodiscard]] const UndoSystem *Undo() const;


@@ -6,6 +6,7 @@
#include <sstream>
#include <cmath>
#include <cctype>
#include <string_view>
#include "Command.h"
#include "syntax/HighlighterRegistry.h"
@@ -48,7 +49,7 @@ bool gFontDialogRequested = false;
// window based on the editor's current dimensions. The bottom row is reserved
// for the status line.
static std::size_t
compute_render_x(const std::string &line, const std::size_t curx, const std::size_t tabw)
compute_render_x(std::string_view line, const std::size_t curx, const std::size_t tabw)
{
std::size_t rx = 0;
for (std::size_t i = 0; i < curx && i < line.size(); ++i) {
@@ -93,10 +94,11 @@ ensure_cursor_visible(const Editor &ed, Buffer &buf)
}
// Horizontal scrolling (use rendered columns with tabs expanded)
std::size_t rx = 0;
const auto &lines = buf.Rows();
if (cury < lines.size()) {
rx = compute_render_x(static_cast<std::string>(lines[cury]), curx, 8);
std::size_t rx = 0;
const auto total = buf.Nrows();
if (cury < total) {
// Avoid materializing all rows and copying strings; get a zero-copy view
rx = compute_render_x(buf.GetLineView(cury), curx, 8);
}
if (rx < coloffs) {
coloffs = rx;


@@ -15,13 +15,32 @@ PieceTable::PieceTable(const std::size_t initialCapacity)
}
PieceTable::PieceTable(const std::size_t initialCapacity,
const std::size_t piece_limit,
const std::size_t small_piece_threshold,
const std::size_t max_consolidation_bytes)
{
add_.reserve(initialCapacity);
materialized_.reserve(initialCapacity);
piece_limit_ = piece_limit;
small_piece_threshold_ = small_piece_threshold;
max_consolidation_bytes_ = max_consolidation_bytes;
}
PieceTable::PieceTable(const PieceTable &other)
: original_(other.original_),
add_(other.add_),
pieces_(other.pieces_),
materialized_(other.materialized_),
dirty_(other.dirty_),
total_size_(other.total_size_) {}
total_size_(other.total_size_)
{
version_ = other.version_;
// caches are per-instance, mark invalid
range_cache_ = {};
find_cache_ = {};
}
PieceTable &
@@ -35,6 +54,9 @@ PieceTable::operator=(const PieceTable &other)
materialized_ = other.materialized_;
dirty_ = other.dirty_;
total_size_ = other.total_size_;
version_ = other.version_;
range_cache_ = {};
find_cache_ = {};
return *this;
}
@@ -49,6 +71,9 @@ PieceTable::PieceTable(PieceTable &&other) noexcept
{
other.dirty_ = true;
other.total_size_ = 0;
version_ = other.version_;
range_cache_ = {};
find_cache_ = {};
}
@@ -65,6 +90,9 @@ PieceTable::operator=(PieceTable &&other) noexcept
total_size_ = other.total_size_;
other.dirty_ = true;
other.total_size_ = 0;
version_ = other.version_;
range_cache_ = {};
find_cache_ = {};
return *this;
}
@@ -80,6 +108,21 @@ PieceTable::Reserve(const std::size_t newCapacity)
}
// Setter to allow tuning consolidation heuristics
void
PieceTable::SetConsolidationParams(const std::size_t piece_limit,
const std::size_t small_piece_threshold,
const std::size_t max_consolidation_bytes)
{
piece_limit_ = piece_limit;
small_piece_threshold_ = small_piece_threshold;
max_consolidation_bytes_ = max_consolidation_bytes;
}
// (removed helper) — we'll invalidate caches inline inside mutating methods
void
PieceTable::AppendChar(char c)
{
@@ -154,6 +197,9 @@ PieceTable::Clear()
dirty_ = true;
line_index_.clear();
line_index_dirty_ = true;
version_++;
range_cache_ = {};
find_cache_ = {};
}
@@ -174,6 +220,9 @@ PieceTable::addPieceBack(const Source src, const std::size_t start, const std::s
last.len += len;
total_size_ += len;
dirty_ = true;
version_++;
range_cache_ = {};
find_cache_ = {};
return;
}
}
@@ -183,6 +232,9 @@ PieceTable::addPieceBack(const Source src, const std::size_t start, const std::s
total_size_ += len;
dirty_ = true;
InvalidateLineIndex();
version_++;
range_cache_ = {};
find_cache_ = {};
}
@@ -201,6 +253,9 @@ PieceTable::addPieceFront(Source src, std::size_t start, std::size_t len)
first.len += len;
total_size_ += len;
dirty_ = true;
version_++;
range_cache_ = {};
find_cache_ = {};
return;
}
}
@@ -208,6 +263,9 @@ PieceTable::addPieceFront(Source src, std::size_t start, std::size_t len)
total_size_ += len;
dirty_ = true;
InvalidateLineIndex();
version_++;
range_cache_ = {};
find_cache_ = {};
}
@@ -260,24 +318,27 @@ PieceTable::coalesceNeighbors(std::size_t index)
return;
if (index >= pieces_.size())
index = pieces_.size() - 1;
// Try merge with previous
if (index > 0) {
// Merge repeatedly with previous while contiguous and same source
while (index > 0) {
auto &prev = pieces_[index - 1];
auto &curr = pieces_[index];
if (prev.src == curr.src && prev.start + prev.len == curr.start) {
prev.len += curr.len;
pieces_.erase(pieces_.begin() + static_cast<std::ptrdiff_t>(index));
if (index > 0)
index -= 1;
index -= 1;
} else {
break;
}
}
// Try merge with next (index may have shifted)
if (index + 1 < pieces_.size()) {
// Merge repeatedly with next while contiguous and same source
while (index + 1 < pieces_.size()) {
auto &curr = pieces_[index];
auto &next = pieces_[index + 1];
if (curr.src == next.src && curr.start + curr.len == next.start) {
curr.len += next.len;
pieces_.erase(pieces_.begin() + static_cast<std::ptrdiff_t>(index + 1));
} else {
break;
}
}
}
@@ -316,10 +377,12 @@ PieceTable::RebuildLineIndex() const
void
PieceTable::Insert(std::size_t byte_offset, const char *text, std::size_t len)
{
if (len == 0)
if (len == 0) {
return;
if (byte_offset > total_size_)
}
if (byte_offset > total_size_) {
byte_offset = total_size_;
}
const std::size_t add_start = add_.size();
add_.append(text, len);
@@ -329,6 +392,10 @@ PieceTable::Insert(std::size_t byte_offset, const char *text, std::size_t len)
total_size_ += len;
dirty_ = true;
InvalidateLineIndex();
maybeConsolidate();
version_++;
range_cache_ = {};
find_cache_ = {};
return;
}
@@ -340,6 +407,10 @@ PieceTable::Insert(std::size_t byte_offset, const char *text, std::size_t len)
dirty_ = true;
InvalidateLineIndex();
coalesceNeighbors(pieces_.size() - 1);
maybeConsolidate();
version_++;
range_cache_ = {};
find_cache_ = {};
return;
}
@@ -366,18 +437,25 @@ PieceTable::Insert(std::size_t byte_offset, const char *text, std::size_t len)
// Try coalescing around the inserted position (the inserted piece is at idx + (inner>0 ? 1 : 0))
std::size_t ins_index = idx + (inner > 0 ? 1 : 0);
coalesceNeighbors(ins_index);
maybeConsolidate();
version_++;
range_cache_ = {};
find_cache_ = {};
}
void
PieceTable::Delete(std::size_t byte_offset, std::size_t len)
{
if (len == 0)
if (len == 0) {
return;
if (byte_offset >= total_size_)
}
if (byte_offset >= total_size_) {
return;
if (byte_offset + len > total_size_)
}
if (byte_offset + len > total_size_) {
len = total_size_ - byte_offset;
}
auto [idx, inner] = locate(byte_offset);
std::size_t remaining = len;
@@ -430,6 +508,100 @@ PieceTable::Delete(std::size_t byte_offset, std::size_t len)
coalesceNeighbors(idx);
if (idx > 0)
coalesceNeighbors(idx - 1);
maybeConsolidate();
version_++;
range_cache_ = {};
find_cache_ = {};
}
// ===== Consolidation implementation =====
void
PieceTable::appendPieceDataTo(std::string &out, const Piece &p) const
{
if (p.len == 0)
return;
const std::string &src = p.src == Source::Original ? original_ : add_;
out.append(src.data() + static_cast<std::ptrdiff_t>(p.start), p.len);
}
void
PieceTable::consolidateRange(std::size_t start_idx, std::size_t end_idx)
{
if (start_idx >= end_idx || start_idx >= pieces_.size())
return;
end_idx = std::min(end_idx, pieces_.size());
std::size_t total = 0;
for (std::size_t i = start_idx; i < end_idx; ++i)
total += pieces_[i].len;
if (total == 0)
return;
const std::size_t add_start = add_.size();
std::string tmp;
tmp.reserve(std::min<std::size_t>(total, max_consolidation_bytes_));
for (std::size_t i = start_idx; i < end_idx; ++i)
appendPieceDataTo(tmp, pieces_[i]);
add_.append(tmp);
// Replace [start_idx, end_idx) with single Add piece
Piece consolidated{Source::Add, add_start, tmp.size()};
pieces_.erase(pieces_.begin() + static_cast<std::ptrdiff_t>(start_idx),
pieces_.begin() + static_cast<std::ptrdiff_t>(end_idx));
pieces_.insert(pieces_.begin() + static_cast<std::ptrdiff_t>(start_idx), consolidated);
// total_size_ unchanged
dirty_ = true;
InvalidateLineIndex();
coalesceNeighbors(start_idx);
// Layout changed; invalidate caches/version
version_++;
range_cache_ = {};
find_cache_ = {};
}
void
PieceTable::maybeConsolidate()
{
if (pieces_.size() <= piece_limit_)
return;
// Find the first run of small pieces to consolidate
std::size_t n = pieces_.size();
std::size_t best_start = n, best_end = n;
std::size_t i = 0;
while (i < n) {
// Skip large pieces quickly
if (pieces_[i].len > small_piece_threshold_) {
i++;
continue;
}
std::size_t j = i;
std::size_t bytes = 0;
while (j < n) {
const auto &p = pieces_[j];
if (p.len > small_piece_threshold_)
break;
if (bytes + p.len > max_consolidation_bytes_)
break;
bytes += p.len;
j++;
}
if (j - i >= 2 && bytes > 0) {
// consolidate runs of at least 2 pieces
best_start = i;
best_end = j;
break; // do one run per call; subsequent ops can repeat if still over limit
}
i = j + 1;
}
if (best_start < best_end) {
consolidateRange(best_start, best_end);
}
}
@@ -517,8 +689,45 @@ PieceTable::GetRange(std::size_t byte_offset, std::size_t len) const
return std::string();
if (byte_offset + len > total_size_)
len = total_size_ - byte_offset;
materialize();
return materialized_.substr(byte_offset, len);
// Fast path: return cached value if version/offset/len match
if (range_cache_.valid && range_cache_.version == version_ &&
range_cache_.off == byte_offset && range_cache_.len == len) {
return range_cache_.data;
}
std::string out;
out.reserve(len);
if (!dirty_) {
// Already materialized; slice directly
out.assign(materialized_.data() + static_cast<std::ptrdiff_t>(byte_offset), len);
} else {
// Assemble substring directly from pieces without full materialization
auto [idx, inner] = locate(byte_offset);
std::size_t remaining = len;
while (remaining > 0 && idx < pieces_.size()) {
const auto &p = pieces_[idx];
const std::string &src = (p.src == Source::Original) ? original_ : add_;
std::size_t take = std::min<std::size_t>(p.len - inner, remaining);
if (take > 0) {
const char *base = src.data() + static_cast<std::ptrdiff_t>(p.start + inner);
out.append(base, take);
remaining -= take;
inner = 0;
idx += 1;
} else {
break;
}
}
}
// Update cache
range_cache_.valid = true;
range_cache_.version = version_;
range_cache_.off = byte_offset;
range_cache_.len = len;
range_cache_.data = out;
return out;
}
@@ -529,9 +738,22 @@ PieceTable::Find(const std::string &needle, std::size_t start) const
return start <= total_size_ ? start : std::numeric_limits<std::size_t>::max();
if (start > total_size_)
return std::numeric_limits<std::size_t>::max();
if (find_cache_.valid &&
find_cache_.version == version_ &&
find_cache_.needle == needle &&
find_cache_.start == start) {
return find_cache_.result;
}
materialize();
auto pos = materialized_.find(needle, start);
if (pos == std::string::npos)
return std::numeric_limits<std::size_t>::max();
pos = std::numeric_limits<std::size_t>::max();
// Update cache
find_cache_.valid = true;
find_cache_.version = version_;
find_cache_.needle = needle;
find_cache_.start = start;
find_cache_.result = pos;
return pos;
}
}


@@ -3,8 +3,10 @@
*/
#pragma once
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>
#include <limits>
class PieceTable {
@@ -13,6 +15,12 @@ public:
explicit PieceTable(std::size_t initialCapacity);
// Advanced constructor allowing configuration of consolidation heuristics
PieceTable(std::size_t initialCapacity,
std::size_t piece_limit,
std::size_t small_piece_threshold,
std::size_t max_consolidation_bytes);
PieceTable(const PieceTable &other);
PieceTable &operator=(const PieceTable &other);
@@ -92,6 +100,11 @@ public:
// Simple search utility; returns byte offset or npos
[[nodiscard]] std::size_t Find(const std::string &needle, std::size_t start = 0) const;
// Heuristic configuration
void SetConsolidationParams(std::size_t piece_limit,
std::size_t small_piece_threshold,
std::size_t max_consolidation_bytes);
private:
enum class Source : unsigned char { Original, Add };
@@ -113,6 +126,13 @@ private:
// Helper: try to coalesce neighboring pieces around index
void coalesceNeighbors(std::size_t index);
// Consolidation helpers and heuristics
void maybeConsolidate();
void consolidateRange(std::size_t start_idx, std::size_t end_idx);
void appendPieceDataTo(std::string &out, const Piece &p) const;
// Line index support (rebuilt lazily on demand)
void InvalidateLineIndex() const;
@@ -124,10 +144,37 @@ private:
std::vector<Piece> pieces_;
mutable std::string materialized_;
mutable bool dirty_ = true;
std::size_t total_size_ = 0;
mutable bool dirty_ = true;
// Monotonic content version. Increment on any mutation that affects content layout
mutable std::uint64_t version_ = 0;
std::size_t total_size_ = 0;
// Cached line index: starting byte offset of each line (always contains at least 1 entry: 0)
mutable std::vector<std::size_t> line_index_;
mutable bool line_index_dirty_ = true;
};
// Heuristic knobs
std::size_t piece_limit_ = 4096; // trigger consolidation when exceeded
std::size_t small_piece_threshold_ = 64; // bytes
std::size_t max_consolidation_bytes_ = 4096; // cap per consolidation run
// Lightweight caches to avoid redundant work when callers query the same range repeatedly
struct RangeCache {
bool valid = false;
std::uint64_t version = 0;
std::size_t off = 0;
std::size_t len = 0;
std::string data;
};
struct FindCache {
bool valid = false;
std::uint64_t version = 0;
std::string needle;
std::size_t start = 0;
std::size_t result = std::numeric_limits<std::size_t>::max();
};
mutable RangeCache range_cache_;
mutable FindCache find_cache_;
};

REWRITE.md: new file, 2502 lines (diff suppressed because it is too large)


@@ -57,6 +57,20 @@ TerminalFrontend::Init(Editor &ed)
ed.SetDimensions(static_cast<std::size_t>(r), static_cast<std::size_t>(c));
// Attach editor to input handler for editor-owned features (e.g., universal argument)
input_.Attach(&ed);
// Ignore SIGINT (Ctrl-C) so it doesn't terminate the TUI.
// We'll restore the previous handler on Shutdown().
{
struct sigaction sa{};
sa.sa_handler = SIG_IGN;
sigemptyset(&sa.sa_mask);
sa.sa_flags = 0;
struct sigaction old{};
if (sigaction(SIGINT, &sa, &old) == 0) {
old_sigint_ = old;
have_old_sigint_ = true;
}
}
return true;
}
@@ -101,5 +115,10 @@ TerminalFrontend::Shutdown()
(void) tcsetattr(STDIN_FILENO, TCSANOW, &orig_tio_);
have_orig_tio_ = false;
}
// Restore previous SIGINT handler
if (have_old_sigint_) {
(void) sigaction(SIGINT, &old_sigint_, nullptr);
have_old_sigint_ = false;
}
endwin();
}


@@ -3,6 +3,7 @@
*/
#pragma once
#include <termios.h>
#include <signal.h>
#include "Frontend.h"
#include "TerminalInputHandler.h"
@@ -29,4 +30,7 @@ private:
// Saved terminal attributes to restore on shutdown
bool have_orig_tio_ = false;
struct termios orig_tio_{};
// Saved SIGINT handler to restore on shutdown
bool have_old_sigint_ = false;
struct sigaction old_sigint_{};
};


@@ -29,89 +29,95 @@ map_key_to_command(const int ch,
// Handle special keys from ncurses
// These keys exit k-prefix mode if active (user pressed C-k then a special key).
switch (ch) {
case KEY_MOUSE: {
k_prefix = false;
k_ctrl_pending = false;
MEVENT ev{};
if (getmouse(&ev) == OK) {
// Mouse wheel → scroll viewport without moving cursor
case KEY_ENTER:
// Some terminals send KEY_ENTER distinct from '\n'/'\r'
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::Newline, "", 0};
return true;
case KEY_MOUSE: {
k_prefix = false;
k_ctrl_pending = false;
MEVENT ev{};
if (getmouse(&ev) == OK) {
// Mouse wheel → scroll viewport without moving cursor
#ifdef BUTTON4_PRESSED
if (ev.bstate & (BUTTON4_PRESSED | BUTTON4_RELEASED | BUTTON4_CLICKED)) {
out = {true, CommandId::ScrollUp, "", 0};
return true;
}
if (ev.bstate & (BUTTON4_PRESSED | BUTTON4_RELEASED | BUTTON4_CLICKED)) {
out = {true, CommandId::ScrollUp, "", 0};
return true;
}
#endif
#ifdef BUTTON5_PRESSED
if (ev.bstate & (BUTTON5_PRESSED | BUTTON5_RELEASED | BUTTON5_CLICKED)) {
out = {true, CommandId::ScrollDown, "", 0};
return true;
}
#endif
// React to left button click/press
if (ev.bstate & (BUTTON1_CLICKED | BUTTON1_PRESSED | BUTTON1_RELEASED)) {
char buf[64];
// Use screen coordinates; command handler will translate via offsets
std::snprintf(buf, sizeof(buf), "@%d:%d", ev.y, ev.x);
out = {true, CommandId::MoveCursorTo, std::string(buf), 0};
return true;
}
if (ev.bstate & (BUTTON5_PRESSED | BUTTON5_RELEASED | BUTTON5_CLICKED)) {
out = {true, CommandId::ScrollDown, "", 0};
return true;
}
#endif
// React to left button click/press
if (ev.bstate & (BUTTON1_CLICKED | BUTTON1_PRESSED | BUTTON1_RELEASED)) {
char buf[64];
// Use screen coordinates; command handler will translate via offsets
std::snprintf(buf, sizeof(buf), "@%d:%d", ev.y, ev.x);
out = {true, CommandId::MoveCursorTo, std::string(buf), 0};
return true;
}
// No actionable mouse event
out.hasCommand = false;
return true;
}
case KEY_LEFT:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveLeft, "", 0};
return true;
case KEY_RIGHT:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveRight, "", 0};
return true;
case KEY_UP:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveUp, "", 0};
return true;
case KEY_DOWN:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveDown, "", 0};
return true;
case KEY_HOME:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveHome, "", 0};
return true;
case KEY_END:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveEnd, "", 0};
return true;
case KEY_PPAGE:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::PageUp, "", 0};
return true;
case KEY_NPAGE:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::PageDown, "", 0};
return true;
case KEY_DC:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::DeleteChar, "", 0};
return true;
case KEY_RESIZE:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::Refresh, "", 0};
return true;
default:
break;
// No actionable mouse event
out.hasCommand = false;
return true;
}
case KEY_LEFT:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveLeft, "", 0};
return true;
case KEY_RIGHT:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveRight, "", 0};
return true;
case KEY_UP:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveUp, "", 0};
return true;
case KEY_DOWN:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveDown, "", 0};
return true;
case KEY_HOME:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveHome, "", 0};
return true;
case KEY_END:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveEnd, "", 0};
return true;
case KEY_PPAGE:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::PageUp, "", 0};
return true;
case KEY_NPAGE:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::PageDown, "", 0};
return true;
case KEY_DC:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::DeleteChar, "", 0};
return true;
case KEY_RESIZE:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::Refresh, "", 0};
return true;
default:
break;
}
// ESC as cancel of prefix; many terminals send meta sequences as ESC+...


@@ -0,0 +1,601 @@
# PieceTable Migration Plan
## Executive Summary
This document outlines the plan to remove GapBuffer support from kte and
migrate to a **single PieceTable per Buffer**, replacing the current
vector-of-Lines architecture in which each Line contains either a
GapBuffer or a PieceTable.
## Current Architecture Analysis
### Text Storage
**Current Implementation:**
- `Buffer` contains `std::vector<Line> rows_`
- Each `Line` wraps an `AppendBuffer` (type alias)
- `AppendBuffer` is either `GapBuffer` (default) or `PieceTable` (via
`KTE_USE_PIECE_TABLE`)
- Each line is independently managed with its own buffer
- Operations are line-based with coordinate pairs (row, col)
**Key Files:**
- `Buffer.h/cc` - Buffer class with vector of Lines
- `AppendBuffer.h` - Type selector (GapBuffer vs PieceTable)
- `GapBuffer.h/cc` - Per-line gap buffer implementation
- `PieceTable.h/cc` - Per-line piece table implementation
- `UndoSystem.h/cc` - Records operations with (row, col, text)
- `UndoNode.h` - Undo operation types (Insert, Delete, Paste, Newline,
DeleteRow)
- `Command.cc` - High-level editing commands
### Current Buffer API
**Low-level editing operations (used by UndoSystem):**
```cpp
void insert_text(int row, int col, std::string_view text);
void delete_text(int row, int col, std::size_t len);
void split_line(int row, int col);
void join_lines(int row);
void insert_row(int row, std::string_view text);
void delete_row(int row);
```
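As a rough illustration of how undo records built on these operations can be inverted, here is a minimal single-line model. `MiniLine` and its members are hypothetical stand-ins for `Buffer::insert_text` / `Buffer::delete_text`, not the real classes:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Hypothetical stand-in for one line of Buffer state. insert_text and
// delete_text mirror the shape of the low-level operations listed above.
struct MiniLine {
    std::string s;
    void insert_text(std::size_t col, const std::string &t) {
        if (col > s.size()) col = s.size(); // clamp out-of-range columns
        s.insert(col, t);
    }
    void delete_text(std::size_t col, std::size_t len) {
        if (col < s.size()) s.erase(col, len); // erase clamps len internally
    }
};
// An insert of `text` at col is undone by delete_text(col, text.size());
// a delete is undone by re-inserting the bytes captured in the undo record.
```

This is why each undo node carries the (row, col, text) triple: both the operation and its inverse are fully determined by it.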
**Line access:**
```cpp
std::vector<Line> &Rows();
const std::vector<Line> &Rows() const;
```
**Line API (Buffer::Line):**
```cpp
std::size_t size() const;
const char *Data() const;
char operator[](std::size_t i) const;
std::string substr(std::size_t pos, std::size_t len) const;
std::size_t find(const std::string &needle, std::size_t pos) const;
void erase(std::size_t pos, std::size_t len);
void insert(std::size_t pos, const std::string &seg);
Line &operator+=(const Line &other);
Line &operator+=(const std::string &s);
```
### Current PieceTable Limitations
The existing `PieceTable` class only supports:
- `Append(char/string)` - add to end
- `Prepend(char/string)` - add to beginning
- `Clear()` - empty the buffer
- `Data()` / `Size()` - access content (materializes on demand)
**Missing capabilities needed for buffer-wide storage:**
- Insert at arbitrary byte position
- Delete at arbitrary byte position
- Line indexing and line-based queries
- Position conversion (byte offset ↔ line/col)
- Efficient line boundary tracking
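To make the gap concrete, here is a minimal, self-contained sketch of the missing position-conversion machinery over a flat byte string. The real implementation must scan pieces rather than one contiguous string; `BuildLineStarts` and `OffsetToLineCol` are illustrative names, not existing API.

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Record the byte offset at which each line begins. Line 0 always starts
// at offset 0; every '\n' starts a new line immediately after it.
std::vector<std::size_t> BuildLineStarts(const std::string &text)
{
	std::vector<std::size_t> starts{0};
	for (std::size_t i = 0; i < text.size(); ++i) {
		if (text[i] == '\n')
			starts.push_back(i + 1);
	}
	return starts;
}

// Binary-search the line-start index: the line containing `offset` is the
// last start that is <= offset; the column is the distance past that start.
std::pair<std::size_t, std::size_t>
OffsetToLineCol(const std::vector<std::size_t> &starts, std::size_t offset)
{
	auto it = std::upper_bound(starts.begin(), starts.end(), offset);
	std::size_t line = static_cast<std::size_t>(it - starts.begin()) - 1;
	return {line, offset - starts[line]};
}
```

With `"ab\ncd\n"`, offset 4 maps to line 1, column 1; the same binary-search shape carries over once the index entries store per-piece locations as in the `LineInfo` struct below.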
## Target Architecture
### Design Overview
**Single PieceTable per Buffer:**
- `Buffer` contains one `PieceTable content_` (replaces
`std::vector<Line> rows_`)
- Text stored as continuous byte sequence with `\n` as line separators
- Line index cached for efficient line-based operations
- All operations work on byte offsets internally
- Buffer provides line/column API as convenience layer
### Enhanced PieceTable Design
```cpp
class PieceTable {
public:
// Existing API (keep for compatibility if needed)
void Append(const char *s, std::size_t len);
void Prepend(const char *s, std::size_t len);
void Clear();
const char *Data() const;
std::size_t Size() const;
// NEW: Core byte-based editing operations
void Insert(std::size_t byte_offset, const char *text, std::size_t len);
void Delete(std::size_t byte_offset, std::size_t len);
// NEW: Line-based queries
std::size_t LineCount() const;
std::string GetLine(std::size_t line_num) const;
std::pair<std::size_t, std::size_t> GetLineRange(std::size_t line_num) const; // (start, end) byte offsets
// NEW: Position conversion
std::pair<std::size_t, std::size_t> ByteOffsetToLineCol(std::size_t byte_offset) const;
std::size_t LineColToByteOffset(std::size_t row, std::size_t col) const;
// NEW: Substring extraction
std::string GetRange(std::size_t byte_offset, std::size_t len) const;
// NEW: Search support
std::size_t Find(const std::string &needle, std::size_t start_offset) const;
private:
// Existing members
std::string original_;
std::string add_;
std::vector<Piece> pieces_;
mutable std::string materialized_;
mutable bool dirty_;
std::size_t total_size_;
// NEW: Line index for efficient line operations
struct LineInfo {
std::size_t byte_offset; // absolute byte offset from buffer start
std::size_t piece_idx; // which piece contains line start
std::size_t offset_in_piece; // byte offset within that piece
};
mutable std::vector<LineInfo> line_index_;
mutable bool line_index_dirty_;
// NEW: Line index management
void RebuildLineIndex() const;
void InvalidateLineIndex();
};
```
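A flat-string sketch of how `GetLine()`/`GetLineRange()` could behave once the line index exists. The string stands in for materialized content and the precomputed `starts` vector stands in for `line_index_`; this is illustrative, not the proposed implementation.

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// (start, end) byte offsets for a line; end excludes the trailing '\n'.
std::pair<std::size_t, std::size_t>
GetLineRange(const std::string &text, const std::vector<std::size_t> &starts,
             std::size_t line)
{
	std::size_t begin = starts[line];
	std::size_t end = (line + 1 < starts.size()) ? starts[line + 1] - 1
	                                             : text.size();
	return {begin, end};
}

// Extract a line's text using the range above.
std::string GetLine(const std::string &text,
                    const std::vector<std::size_t> &starts, std::size_t line)
{
	auto [b, e] = GetLineRange(text, starts, line);
	return text.substr(b, e - b);
}
```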
### Buffer API Changes
```cpp
class Buffer {
public:
// NEW: Direct content access
PieceTable &Content() { return content_; }
const PieceTable &Content() const { return content_; }
// MODIFIED: Keep existing API but implement via PieceTable
void insert_text(int row, int col, std::string_view text);
void delete_text(int row, int col, std::size_t len);
void split_line(int row, int col);
void join_lines(int row);
void insert_row(int row, std::string_view text);
void delete_row(int row);
// MODIFIED: Line access - return line from PieceTable
std::size_t Nrows() const { return content_.LineCount(); }
std::string GetLine(std::size_t row) const { return content_.GetLine(row); }
// REMOVED: Rows() - no longer have vector of Lines
// std::vector<Line> &Rows(); // REMOVE
private:
// REMOVED: std::vector<Line> rows_;
// NEW: Single piece table for all content
PieceTable content_;
// Keep existing members
std::size_t curx_, cury_, rx_;
std::size_t nrows_; // cached from content_.LineCount()
std::size_t rowoffs_, coloffs_;
std::string filename_;
bool is_file_backed_;
bool dirty_;
bool read_only_;
bool mark_set_;
std::size_t mark_curx_, mark_cury_;
std::unique_ptr<UndoTree> undo_tree_;
std::unique_ptr<UndoSystem> undo_sys_;
std::uint64_t version_;
bool syntax_enabled_;
std::string filetype_;
std::unique_ptr<kte::HighlighterEngine> highlighter_;
kte::SwapRecorder *swap_rec_;
};
```
## Migration Phases
### Phase 1: Extend PieceTable (Foundation)
**Goal:** Add buffer-wide capabilities to PieceTable without breaking
existing per-line usage.
**Tasks:**
1. Add line indexing infrastructure to PieceTable
- Add `LineInfo` struct and `line_index_` member
- Implement `RebuildLineIndex()` that scans pieces for '\n'
characters
- Implement `InvalidateLineIndex()` called by Insert/Delete
2. Implement core byte-based operations
- `Insert(byte_offset, text, len)` - split piece at offset, insert
new piece
- `Delete(byte_offset, len)` - split pieces, remove/truncate as
needed
3. Implement line-based query methods
- `LineCount()` - return line_index_.size()
- `GetLine(line_num)` - extract text between line boundaries
- `GetLineRange(line_num)` - return (start, end) byte offsets
4. Implement position conversion
- `ByteOffsetToLineCol(offset)` - binary search in line_index_
- `LineColToByteOffset(row, col)` - lookup line start, add col
5. Implement utility methods
- `GetRange(offset, len)` - extract substring
- `Find(needle, start)` - search across pieces
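The heart of task 2 is splitting a piece at the insertion offset. Below is a stripped-down, runnable sketch of that split: a toy `MiniTable` with none of the coalescing, undo hooks, or line-index invalidation the real class needs.

```cpp
#include <cstddef>
#include <string>
#include <vector>

struct Piece {
	bool in_add;        // false: original buffer, true: add buffer
	std::size_t start;  // offset into the backing buffer
	std::size_t len;
};

struct MiniTable {
	std::string original;
	std::string add;
	std::vector<Piece> pieces;

	// Append the text to the add buffer, then split the piece that
	// covers `offset` into left + new piece + right.
	void Insert(std::size_t offset, const std::string &text)
	{
		Piece np{true, add.size(), text.size()};
		add += text;
		std::size_t pos = 0;
		for (std::size_t i = 0; i < pieces.size(); ++i) {
			if (offset <= pos + pieces[i].len) {
				std::size_t split = offset - pos;
				Piece left = pieces[i];
				Piece right = pieces[i];
				left.len = split;
				right.start += split;
				right.len -= split;
				std::vector<Piece> out;
				if (left.len) out.push_back(left);
				out.push_back(np);
				if (right.len) out.push_back(right);
				pieces.erase(pieces.begin() + i);
				pieces.insert(pieces.begin() + i,
				              out.begin(), out.end());
				return;
			}
			pos += pieces[i].len;
		}
		pieces.push_back(np); // offset at (or past) end of content
	}

	std::string Materialize() const
	{
		std::string s;
		for (const auto &p : pieces)
			s += (p.in_add ? add : original).substr(p.start, p.len);
		return s;
	}
};
```

Delete follows the same split logic, but drops or truncates the covered pieces instead of inserting a new one.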
**Testing:**
- Write unit tests for new PieceTable methods
- Test with multi-line content
- Verify line index correctness after edits
- Benchmark performance vs current line-based approach
**Estimated Effort:** 3-5 days
### Phase 2: Create Buffer Adapter Layer (Compatibility)
**Goal:** Create compatibility layer in Buffer to use PieceTable while
maintaining existing API.
**Tasks:**
1. Add `PieceTable content_` member to Buffer (alongside existing
`rows_`)
2. Add compilation flag `KTE_USE_BUFFER_PIECE_TABLE` (like existing
`KTE_USE_PIECE_TABLE`)
3. Implement Buffer methods to delegate to content_:
```cpp
#ifdef KTE_USE_BUFFER_PIECE_TABLE
void insert_text(int row, int col, std::string_view text) {
std::size_t offset = content_.LineColToByteOffset(row, col);
content_.Insert(offset, text.data(), text.size());
}
// ... similar for other methods
#else
// Existing line-based implementation
#endif
```
4. Update file I/O to work with PieceTable
- `OpenFromFile()` - load into content_ instead of rows_
- `Save()` - serialize content_ instead of rows_
5. Update `AsString()` to materialize from content_
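Most adapter methods reduce to the same convert-then-edit pattern. In the sketch below, `std::string` stands in for `content_` and `LineColToOffset` mirrors the proposed `LineColToByteOffset` (assuming a valid row): `split_line` inserts a single `\n` byte, and `join_lines` deletes one.

```cpp
#include <cstddef>
#include <string>

// Walk past `row` newlines, then add the column. Assumes row is valid.
std::size_t LineColToOffset(const std::string &content,
                            std::size_t row, std::size_t col)
{
	std::size_t off = 0;
	for (std::size_t r = 0; r < row; ++r)
		off = content.find('\n', off) + 1;
	return off + col;
}

// Splitting a line is just inserting a newline byte at (row, col).
void SplitLine(std::string &content, std::size_t row, std::size_t col)
{
	content.insert(LineColToOffset(content, row, col), 1, '\n');
}

// Joining row with row+1 means deleting the newline that terminates row.
void JoinLines(std::string &content, std::size_t row)
{
	std::size_t nl = content.find('\n', LineColToOffset(content, row, 0));
	if (nl != std::string::npos)
		content.erase(nl, 1);
}
```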
**Testing:**
- Run existing buffer correctness tests with new flag
- Verify undo/redo still works
- Test file I/O round-tripping
- Test with existing command operations
**Estimated Effort:** 3-4 days
### Phase 3: Migrate Command Layer (High-level Operations)
**Goal:** Update commands that directly access Rows() to use new API.
**Tasks:**
1. Audit all usages of `buf.Rows()` in Command.cc
2. Refactor helper functions:
- `extract_region_text()` - use content_.GetRange()
- `delete_region()` - convert to byte offsets, use content_.Delete()
- `insert_text_at_cursor()` - convert position, use content_.Insert()
3. Update commands that iterate over lines:
- Use `buf.GetLine(i)` instead of `buf.Rows()[i]`
- Update line count queries to use `buf.Nrows()`
4. Update search/replace operations:
- Modify `search_compute_matches()` to work with GetLine()
- Update regex matching to work line-by-line or use content directly
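For task 4, here is a line-by-line scan of the shape `search_compute_matches()` could take after the migration, with `GetLine()` mocked by a vector of strings; the real code would call `buf.GetLine(i)` for `i` up to `buf.Nrows()`.

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Collect every (row, col) at which `needle` occurs, scanning one line
// at a time so matches never span a newline.
std::vector<std::pair<std::size_t, std::size_t>>
FindMatches(const std::vector<std::string> &lines, const std::string &needle)
{
	std::vector<std::pair<std::size_t, std::size_t>> hits;
	if (needle.empty())
		return hits;
	for (std::size_t row = 0; row < lines.size(); ++row) {
		std::size_t col = 0;
		while ((col = lines[row].find(needle, col))
		       != std::string::npos) {
			hits.emplace_back(row, col);
			col += needle.size();
		}
	}
	return hits;
}
```

Searches that must cross line boundaries (e.g. multi-line regex) would instead run over `content_` directly via `GetRange()`/`Find()`.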
**Testing:**
- Test all editing commands (insert, delete, newline, backspace)
- Test region operations (mark, copy, kill)
- Test search and replace
- Test word navigation and deletion
- Run through common editing workflows
**Estimated Effort:** 4-6 days
### Phase 4: Update Renderer and Frontend (Display)
**Goal:** Ensure all renderers work with new Buffer structure.
**Tasks:**
1. Audit renderer implementations:
- `TerminalRenderer.cc`
- `ImGuiRenderer.cc`
- `QtRenderer.cc`
- `TestRenderer.cc`
2. Update line access patterns:
- Replace `buf.Rows()[y]` with `buf.GetLine(y)`
- Handle string return instead of Line object
3. Update syntax highlighting integration:
- Ensure HighlighterEngine works with GetLine()
- Update any line-based caching
**Testing:**
- Test rendering in terminal
- Test ImGui frontend (if enabled)
- Test Qt frontend (if enabled)
- Verify syntax highlighting displays correctly
- Test scrolling and viewport updates
**Estimated Effort:** 2-3 days
### Phase 5: Remove Old Infrastructure (Cleanup) ✅ COMPLETED
**Goal:** Remove GapBuffer, AppendBuffer, and Line class completely.
**Status:** Completed on 2025-12-05
**Tasks:**
1. ✅ Remove conditional compilation:
- Removed `#ifdef KTE_USE_BUFFER_PIECE_TABLE` (PieceTable is now the
only way)
- Removed `#ifdef KTE_USE_PIECE_TABLE`
- Removed `AppendBuffer.h`
2. ✅ Delete obsolete code:
- Deleted `GapBuffer.h/cc`
- Line class now uses PieceTable internally (kept for API
compatibility)
- `rows_` kept as mutable cache rebuilt from `content_` PieceTable
3. ✅ Update CMakeLists.txt:
- Removed GapBuffer from sources
- Removed AppendBuffer.h from headers
- Removed KTE_USE_PIECE_TABLE and KTE_USE_BUFFER_PIECE_TABLE options
4. ✅ Clean up includes and dependencies
5. ✅ Update documentation
**Testing:**
- Full regression test suite
- Verify clean compilation
- Check for any lingering references
**Estimated Effort:** 1-2 days
### Phase 6: Performance Optimization (Polish)
**Goal:** Optimize the new implementation for real-world usage.
**Tasks:**
1. Profile common operations:
- Measure line access patterns
- Identify hot paths in editing
- Benchmark against old implementation
2. Optimize line index:
- Consider incremental updates instead of full rebuild
- Tune rebuild threshold
- Cache frequently accessed lines
3. Optimize piece table:
- Tune piece coalescing heuristics
- Consider piece count limits and consolidation
4. Memory optimization:
- Review materialization frequency
- Consider lazy materialization strategies
- Profile memory usage on large files
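One cheap incremental update from task 2: when an edit inserts text containing no newline, every cached line start past the edit point merely shifts by the inserted length, so no rebuild is needed. A hypothetical helper over a plain offset vector:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Shift all line starts strictly after the edit point by `inserted_len`.
// A start equal to `edit_offset` is not shifted: inserting at the very
// beginning of a line extends that line in place.
void ShiftLineStarts(std::vector<std::size_t> &starts,
                     std::size_t edit_offset, std::size_t inserted_len)
{
	auto it = std::upper_bound(starts.begin(), starts.end(), edit_offset);
	for (; it != starts.end(); ++it)
		*it += inserted_len;
}
```

Deletions that remove no newline are the symmetric case (subtract the deleted length); only edits that add or remove `\n` bytes force index surgery or a rebuild.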
**Testing:**
- Benchmark suite with various file sizes
- Memory profiling
- Real-world usage testing
**Estimated Effort:** 3-5 days
## Files Requiring Modification
### Core Files (Must Change)
- `PieceTable.h/cc` - Add new methods (Phase 1)
- `Buffer.h/cc` - Replace rows_ with content_ (Phase 2)
- `Command.cc` - Update line access (Phase 3)
- `UndoSystem.cc` - May need updates for new Buffer API
### Renderer Files (Will Change)
- `TerminalRenderer.cc` - Update line access (Phase 4)
- `ImGuiRenderer.cc` - Update line access (Phase 4)
- `QtRenderer.cc` - Update line access (Phase 4)
- `TestRenderer.cc` - Update line access (Phase 4)
### Files Removed or Reworked (Phase 5 - Completed)
- `GapBuffer.h/cc` - ✅ Deleted
- `AppendBuffer.h` - ✅ Deleted
- `test_buffer_correctness.cc` - ✅ Deleted (obsolete GapBuffer
comparison test)
- `bench/BufferBench.cc` - ✅ Deleted (obsolete GapBuffer benchmarks)
- `bench/PerformanceSuite.cc` - ✅ Deleted (obsolete GapBuffer
benchmarks)
- `Buffer::Line` class - ✅ Updated to use PieceTable internally (kept
for API compatibility)
### Build Files
- `CMakeLists.txt` - Update sources (Phase 5)
### Documentation
- `README.md` - Update architecture notes
- `docs/` - Update any architectural documentation
- `REWRITE.md` - Note C++ now matches Rust design
## Testing Strategy
### Unit Tests
- **PieceTable Tests:** New file `test_piece_table.cc`
- Test Insert/Delete at various positions
- Test line indexing correctness
- Test position conversion
- Test with edge cases (empty, single line, large files)
- **Buffer Tests:** Replace `test_buffer_correctness.cc` (deleted in Phase 5) with equivalent coverage
- Test new Buffer API with PieceTable backend
- Test file I/O round-tripping
- Test multi-line operations
### Integration Tests
- **Undo Tests:** `test_undo.cc` should still pass
- Verify undo/redo across all operation types
- Test undo tree navigation
- **Search Tests:** `test_search_correctness.cc` should still pass
- Verify search across multiple lines
- Test regex search
### Manual Testing
- Load and edit large files (>10MB)
- Perform complex editing sequences
- Test all keybindings and commands
- Verify syntax highlighting
- Test crash recovery (swap files)
### Regression Testing
- All existing tests must pass with new implementation
- No observable behavior changes for users
- Performance should be comparable or better
## Risk Assessment
### High Risk
- **Undo System Integration:** Undo records operations with
row/col/text. Need to ensure compatibility or refactor.
- *Mitigation:* Carefully preserve undo semantics, extensive testing
- **Performance Regression:** Line index rebuilding could be expensive
on large files.
- *Mitigation:* Profile early, optimize incrementally, consider
caching strategies
### Medium Risk
- **Syntax Highlighting:** Highlighters may depend on line-based access
patterns.
- *Mitigation:* Review highlighter integration, test thoroughly
- **Renderer Updates:** Multiple renderers need updating, risk of
inconsistency.
- *Mitigation:* Update all renderers in same phase, test each
### Low Risk
- **Search/Replace:** Should work naturally with new GetLine() API.
- *Mitigation:* Test thoroughly with existing test suite
## Success Criteria
### Functional Requirements
- ✓ All existing tests pass
- ✓ All commands work identically to before
- ✓ File I/O works correctly
- ✓ Undo/redo functionality preserved
- ✓ Syntax highlighting works
- ✓ All frontends (terminal, ImGui, Qt) work
### Code Quality
- ✓ GapBuffer completely removed
- ✓ No conditional compilation for buffer type
- ✓ Clean, maintainable code
- ✓ Good test coverage for new PieceTable methods
### Performance
- ✓ Editing operations at least as fast as current
- ✓ Line access within 2x of current performance
- ✓ Memory usage reasonable (no excessive materialization)
- ✓ Large file handling acceptable (tested up to 100MB)
## Timeline Estimate
| Phase | Duration | Dependencies |
|----------------------------|----------------|--------------|
| Phase 1: Extend PieceTable | 3-5 days | None |
| Phase 2: Buffer Adapter | 3-4 days | Phase 1 |
| Phase 3: Command Layer | 4-6 days | Phase 2 |
| Phase 4: Renderer Updates | 2-3 days | Phase 3 |
| Phase 5: Cleanup | 1-2 days | Phase 4 |
| Phase 6: Optimization | 3-5 days | Phase 5 |
| **Total** | **16-25 days** | |
**Note:** Timeline assumes one developer working full-time. Actual
duration may vary based on:
- Unforeseen integration issues
- Performance optimization needs
- Testing thoroughness
- Code review iterations
## Alternatives Considered
### Alternative 1: Keep Line-based but unify GapBuffer/PieceTable
- Keep vector of Lines, but make each Line always use PieceTable
- Remove GapBuffer, remove AppendBuffer selector
- **Pros:** Smaller change, less risk
- **Cons:** Doesn't achieve architectural goal, still have per-line
overhead
### Alternative 2: Hybrid approach
- Use PieceTable for buffer, but maintain materialized Line objects as
cache
- **Pros:** Easier migration, maintains some compatibility
- **Cons:** Complex dual representation, cache invalidation issues
### Alternative 3: Complete rewrite
- Follow REWRITE.md exactly, implement in Rust
- **Pros:** Modern language, better architecture
- **Cons:** Much larger effort, different project
## Recommendation
**Proceed with planned migration** (single PieceTable per Buffer)
because:
1. Aligns with long-term architecture vision (REWRITE.md)
2. Removes unnecessary per-line buffer overhead
3. Simplifies codebase (one text representation)
4. Enables future optimizations (better undo, swap files, etc.)
5. Reasonable effort (16-25 days) for significant improvement
**Suggested Approach:**
- Start with Phase 1 (extend PieceTable) in isolated branch
- Thoroughly test new PieceTable functionality
- Proceed incrementally through phases
- Maintain working editor at end of each phase
- Merge to main after Phase 4 (before cleanup) to get broader testing
- Complete Phase 5-6 based on feedback
## References
- `REWRITE.md` - Rust architecture specification (lines 54-157)
- Current buffer implementation: `Buffer.h/cc`
- Current piece table: `PieceTable.h/cc`
- Undo system: `UndoSystem.h/cc`, `UndoNode.h`
- Commands: `Command.cc`