20 Commits

Author SHA1 Message Date
cc0c187481 Improve macOS app build process and bundle handling.
- Updated `make-app-release` script to use `macdeployqt` with proper verbosity and bundle fixup.
- Introduced post-build fixup using CMake's `BundleUtilities` to internalize non-Qt dylibs.
- Enhanced macOS bundle RPATH settings for accurate Framework resolution.
- Added optional `kge_fixup_bundle` CMake target for post-build handling.
- Refined `default.nix` to load Nixpkgs in a default argument.
2025-12-09 18:49:16 -08:00
a8dcfbec58 Fix C-k c handling. 2025-12-08 15:28:45 -08:00
65705e3354 bump version 2025-12-07 15:25:50 -08:00
e1f9a9eb6a Preserve cursor position on buffer reload.
- Remember and restore the cursor's position after reloading a buffer, clamping if necessary.
- Improve user experience by maintaining editing context.
2025-12-07 15:25:40 -08:00
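The reload path itself is not part of this diff; the following is only a minimal sketch of the remember/clamp/restore flow the commit describes. Only OpenFromFile, Filename, Nrows, and GetLineString appear elsewhere in this diff; Cury, Curx, and SetCursor are hypothetical stand-ins for the real cursor accessors.

#include <algorithm>
#include <string>
#include "Buffer.h"

// Hypothetical sketch of preserving the cursor across a reload.
static void ReloadPreservingCursor(Buffer &buf, std::string &err)
{
        const std::size_t saved_row = buf.Cury();    // remember position
        const std::size_t saved_col = buf.Curx();
        if (!buf.OpenFromFile(buf.Filename(), err))
                return;                              // keep old contents on error
        const std::size_t rows = buf.Nrows();
        const std::size_t row = rows ? std::min(saved_row, rows - 1) : std::size_t{0};
        const std::size_t col = std::min(saved_col, buf.GetLineString(row).size());
        buf.SetCursor(row, col);                     // clamp, then restore
}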
c9f34003f2 Add unit testing plan documentation.
- Introduced a comprehensive test plan to guide development and ensure coverage.
- Documented test principles, execution harness, build steps, and test catalog.
- Categorized test cases by functionality (e.g., filesystem I/O, PieceTable semantics, buffer editing, undo system, etc.).
- Outlined regression tests and performance/stress scenarios.
- Provided a phased roadmap for implementing planned test cases.
2025-12-07 12:34:47 -08:00
f450ef825c Replace individual test binaries with unified test runner.
- Removed standalone test executables (`test_undo`, `test_buffer_save`, `test_buffer_open_nonexistent_save`, etc.).
- Introduced `kte_tests` as a unified test runner.
- Migrated existing tests to a new minimal, reusable framework in `tests/Test.h`.
- Updated `CMakeLists.txt` to build a single `kte_tests` executable.
- Simplified dependencies, reducing the need for ncurses/GUI in test builds.
2025-12-07 00:37:16 -08:00
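The contents of `tests/Test.h` are not included in this diff; below is only a sketch of what a comparable minimal, macro-based registration framework might look like, not the project's actual header.

#pragma once
#include <functional>
#include <stdexcept>
#include <string>
#include <utility>
#include <vector>

// Registry of test cases; each TEST(...) adds itself at static-init time.
struct TestCase {
        std::string name;
        std::function<void()> fn;
};

inline std::vector<TestCase> &TestRegistry()
{
        static std::vector<TestCase> tests;
        return tests;
}

struct TestRegistrar {
        TestRegistrar(std::string name, std::function<void()> fn)
        {
                TestRegistry().push_back({std::move(name), std::move(fn)});
        }
};

#define TEST(name)                                              \
        static void name();                                     \
        static TestRegistrar name##_registrar(#name, name);     \
        static void name()

#define ASSERT_TRUE(cond)                                                   \
        do {                                                                \
                if (!(cond))                                                \
                        throw std::runtime_error("assert failed: " #cond);  \
        } while (0)

// A TestRunner.cc would then loop over TestRegistry(), run each fn inside a
// try/catch, and report pass/fail counts.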
f6f0c11be4 Add PieceTable-based buffer tests and improvements for file I/O and editing.
- Introduced comprehensive tests:
  - `test_buffer_open_nonexistent_save.cc`: Save after opening a non-existent file.
  - `test_buffer_save.cc`: Save buffer contents to disk.
  - `test_buffer_save_existing.cc`: Save after opening existing files.
- Implemented `PieceTable::WriteToStream()` to directly stream content without full materialization.
- Updated `Buffer::Save` and `Buffer::SaveAs` to use efficient streaming via `PieceTable`.
- Enhanced editing commands (`Insert`, `Delete`, `Replace`, etc.) to use PieceTable APIs, ensuring proper undo and save functionality.
2025-12-07 00:30:11 -08:00
657c9bbc19 bump version 2025-12-06 11:40:27 -08:00
3493695165 Add support for creating a new empty buffer (C-k i).
- Introduced `BufferNew` command to create and switch to a new unnamed buffer.
- Registered `BufferNew` in the command registry and updated keymap and help text.
- Implemented `cmd_buffer_new()` to handle buffer creation and switching logic.
2025-12-06 11:40:00 -08:00
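Command.cc is suppressed later in this diff, so `cmd_buffer_new()` itself is not visible; this sketch only illustrates the create-and-switch flow, reusing the `AddBuffer`/`SwitchTo`/`SetStatus` calls that do appear in the Editor.cc hunk. The handler signature is an assumption.

#include "Buffer.h"
#include "Editor.h"

// Assumed handler shape; the real body lives in the suppressed Command.cc.
static bool cmd_buffer_new(Editor &ed)
{
        Buffer b;                                   // empty, unnamed, not file-backed
        const std::size_t idx = ed.AddBuffer(std::move(b));
        ed.SwitchTo(idx);                           // make it the current buffer
        ed.SetStatus("new buffer");
        return true;
}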
5f57cf23dc bump version 2025-12-05 21:31:46 -08:00
9312550be4 Fix scrolling issue in TUI. 2025-12-05 21:31:33 -08:00
f734f98891 update mac app release 2025-12-05 20:53:04 -08:00
1191e14ce9 Bump version. 2025-12-05 20:53:04 -08:00
12cc04d7e0 Improve input handling and scrolling behavior for high-resolution trackpads.
- Added precise fractional mouse wheel delta handling with per-step command emission.
- Introduced scroll accumulators (`wheel_accum_y_`, `wheel_accum_x_`) for high-resolution trackpad input.
- Replaced hardcoded ESC delay with configurable `kEscDelayMs` constant in `TerminalFrontend`.
- Enabled mouse position reporting and reduced CPU usage during idle with optimized `timeout()` setting.
2025-12-05 20:53:04 -08:00
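The accumulator logic appears in full in the ImGuiInputHandler.cc hunk further below; condensed here with a concrete trace: three precise deltas of +0.4 emit a single scroll step on the third event, leaving a remainder of 0.2. `queue_scroll()` is a placeholder for the real MappedInput queue push.

#include <cmath>

// Condensed restatement of the fractional-wheel handling for illustration.
struct WheelAccumulator {
        float accum = 0.0f;

        void OnWheel(float dy)                // dy = preciseY, e.g. +0.4
        {
                accum += dy;                  // 0.4 -> 0.8 -> 1.2
                const int steps = static_cast<int>(std::fabs(accum));
                if (steps == 0)
                        return;               // nothing emitted on the first two events
                const bool up = accum > 0.0f;
                for (int i = 0; i < steps; ++i)
                        queue_scroll(up);     // placeholder for the queue push
                // drop the whole steps, keep the fraction: 1.2 - 1.0 = 0.2
                accum -= std::copysign(static_cast<float>(steps), accum);
        }

        void queue_scroll(bool /*up*/) {}     // stub for the sketch
};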
3f4c60d311 Add detailed migration plan for PieceTable-based buffer architecture.
- Created `piece-table-migration.md` outlining the steps to transition from GapBuffer to a unified PieceTable architecture.
- Included phased approach: extending PieceTable, Buffer adapter layer, command updates, and renderer changes.
- Detailed API changes, file updates, testing strategy, risk assessment, and timeline for each migration phase.
- Document serves as a reference for architecture goals and implementation details.
2025-12-05 20:53:04 -08:00
71c1c9e50b Remove GapBuffer and associated legacy implementation.
- Deleted `GapBuffer` class and its API implementations.
- Removed `AppendBuffer` selector and conditional `KTE_USE_PIECE_TABLE` macros.
- Eliminated legacy support in buffer APIs, file I/O, benchmarks, and correctness tests.
- Updated guidelines and comments to reflect PieceTable as the default and only buffer backend.
2025-12-05 20:53:04 -08:00
afb6888c31 Introduce PieceTable-based buffer backend (Phase 1)
- Added `PieceTable` class for efficient text manipulation and implemented core editing APIs (`Insert`, `Delete`, `Find`, etc.).
- Integrated `PieceTable` into `Buffer` class with an adapter for rows caching.
- Enabled seamless switching between legacy row-based and new PieceTable-backed editing via `KTE_USE_BUFFER_PIECE_TABLE`.
- Updated file I/O, line-based queries, and cursor operations to support PieceTable-based storage.
- Added lazy rebuilding of the line index and improved edit-state management for performance.
2025-12-05 20:53:04 -08:00
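A short usage example of the byte-oriented API introduced in this phase; the calls and their semantics are taken from the PieceTable hunks later in this diff (note that a text ending in '\n' reports an extra, empty final line).

#include <cassert>
#include "PieceTable.h"

int main()
{
        PieceTable pt;
        const char text[] = "hello\nworld\n";
        pt.Insert(0, text, sizeof(text) - 1);        // 12 bytes
        assert(pt.LineCount() == 3);                 // "hello", "world", and the
                                                     // empty line after the last '\n'
        assert(pt.GetLine(1) == "world");            // trailing '\n' stripped
        const std::size_t off = pt.LineColToByteOffset(1, 0);  // start of "world" == 6
        pt.Delete(off, 6);                           // remove "world\n"
        assert(pt.GetRange(0, pt.Size()) == "hello\n");
        return 0;
}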
222f73252b nixos: rename kge->kge-qt 2025-12-05 10:37:16 -08:00
51ea473a91 nixos and qt fixup 2025-12-05 09:25:48 -08:00
fd517b5d57 fix nixos build 2025-12-05 08:21:38 -08:00
38 changed files with 5315 additions and 2320 deletions

View File

@@ -1,5 +1,6 @@
<component name="ProjectCodeStyleConfiguration">
<state>
<option name="USE_PER_PROJECT_SETTINGS" value="true" />
<option name="PREFERRED_PROJECT_CODE_STYLE" value="sccl" />
</state>
</component>

View File

@@ -1,28 +1,35 @@
 # Project Guidelines
-kte is Kyle's Text Editor — a simple, fast text editor written in C++17. It
-replaces the earlier C implementation, ke (see the ke manual in `docs/ke.md`). The
-design draws inspiration from Antirez' kilo, with keybindings rooted in the
+kte is Kyle's Text Editor — a simple, fast text editor written in C++17.
+It
+replaces the earlier C implementation, ke (see the ke manual in
+`docs/ke.md`). The
+design draws inspiration from Antirez' kilo, with keybindings rooted in
+the
 WordStar/VDE family and emacs. The spiritual parent is `mg(1)`.
-These guidelines summarize the goals, interfaces, key operations, and current
+These guidelines summarize the goals, interfaces, key operations, and
+current
 development practices for kte.
 ## Goals
 - Keep the core small, fast, and understandable.
-- Provide an ncurses-based terminal-first editing experience, with an additional ImGui GUI.
+- Provide an ncurses-based terminal-first editing experience, with an
+additional ImGui GUI.
 - Preserve familiar keybindings from ke while modernizing the internals.
-- Favor simple data structures (e.g., piece table) and incremental evolution.
+- Favor simple data structures (e.g., piece table) and incremental
+evolution.
 Project entry point: `main.cpp`
 ## Core Components (current codebase)
 - Buffer: editing model and file I/O (`Buffer.h/.cpp`).
-- GapBuffer: editable in-memory text representation (`GapBuffer.h/.cpp`).
-- PieceTable: experimental/alternative representation (`PieceTable.h/.cpp`).
-- InputHandler: interface for handling text input (`InputHandler.h/`), along
+- PieceTable: editable in-memory text representation (
+`PieceTable.h/.cpp`).
+- InputHandler: interface for handling text input (`InputHandler.h/`),
+along
 with `TerminalInputHandler` (ncurses-based) and `GUIInputHandler`.
 - Renderer: interface for rendering text (`Renderer.h`), along with
 `TerminalRenderer` (ncurses-based) and `GUIRenderer`.
@@ -38,11 +45,13 @@ The file `docs/ke.md` contains the canonical reference for keybindings.
 - C++ standard: C++17.
 - Keep dependencies minimal.
-- Prefer small, focused changes that preserve kes UX unless explicitly changing
+- Prefer small, focused changes that preserve kes UX unless explicitly
+changing
 behavior.
 ## References
-- Previous editor manual: `ke.md` (canonical keybinding/spec reference for now).
+- Previous editor manual: `ke.md` (canonical keybinding/spec reference
+for now).
 - Inspiration: kilo, WordStar/VDE, emacs, `mg(1)`.

View File

@@ -1,12 +0,0 @@
/*
* AppendBuffer.h - selector header to choose GapBuffer or PieceTable
*/
#pragma once
#ifdef KTE_USE_PIECE_TABLE
#include "PieceTable.h"
using AppendBuffer = PieceTable;
#else
#include "GapBuffer.h"
using AppendBuffer = GapBuffer;
#endif

415
Buffer.cc
View File

@@ -2,6 +2,10 @@
#include <sstream>
#include <filesystem>
#include <cstdlib>
#include <limits>
#include <cerrno>
#include <cstring>
#include <string_view>
#include "Buffer.h"
#include "UndoSystem.h"
@@ -29,20 +33,22 @@ Buffer::Buffer(const std::string &path)
// Copy constructor/assignment: perform a deep copy of core fields; reinitialize undo for the new buffer.
Buffer::Buffer(const Buffer &other)
{
curx_ = other.curx_;
cury_ = other.cury_;
rx_ = other.rx_;
nrows_ = other.nrows_;
rowoffs_ = other.rowoffs_;
coloffs_ = other.coloffs_;
rows_ = other.rows_;
filename_ = other.filename_;
is_file_backed_ = other.is_file_backed_;
dirty_ = other.dirty_;
read_only_ = other.read_only_;
mark_set_ = other.mark_set_;
mark_curx_ = other.mark_curx_;
mark_cury_ = other.mark_cury_;
curx_ = other.curx_;
cury_ = other.cury_;
rx_ = other.rx_;
nrows_ = other.nrows_;
rowoffs_ = other.rowoffs_;
coloffs_ = other.coloffs_;
rows_ = other.rows_;
content_ = other.content_;
rows_cache_dirty_ = other.rows_cache_dirty_;
filename_ = other.filename_;
is_file_backed_ = other.is_file_backed_;
dirty_ = other.dirty_;
read_only_ = other.read_only_;
mark_set_ = other.mark_set_;
mark_curx_ = other.mark_curx_;
mark_cury_ = other.mark_cury_;
// Copy syntax/highlighting flags
version_ = other.version_;
syntax_enabled_ = other.syntax_enabled_;
@@ -77,23 +83,25 @@ Buffer::operator=(const Buffer &other)
{
if (this == &other)
return *this;
curx_ = other.curx_;
cury_ = other.cury_;
rx_ = other.rx_;
nrows_ = other.nrows_;
rowoffs_ = other.rowoffs_;
coloffs_ = other.coloffs_;
rows_ = other.rows_;
filename_ = other.filename_;
is_file_backed_ = other.is_file_backed_;
dirty_ = other.dirty_;
read_only_ = other.read_only_;
mark_set_ = other.mark_set_;
mark_curx_ = other.mark_curx_;
mark_cury_ = other.mark_cury_;
version_ = other.version_;
syntax_enabled_ = other.syntax_enabled_;
filetype_ = other.filetype_;
curx_ = other.curx_;
cury_ = other.cury_;
rx_ = other.rx_;
nrows_ = other.nrows_;
rowoffs_ = other.rowoffs_;
coloffs_ = other.coloffs_;
rows_ = other.rows_;
content_ = other.content_;
rows_cache_dirty_ = other.rows_cache_dirty_;
filename_ = other.filename_;
is_file_backed_ = other.is_file_backed_;
dirty_ = other.dirty_;
read_only_ = other.read_only_;
mark_set_ = other.mark_set_;
mark_curx_ = other.mark_curx_;
mark_cury_ = other.mark_cury_;
version_ = other.version_;
syntax_enabled_ = other.syntax_enabled_;
filetype_ = other.filetype_;
// Recreate undo system for this instance
undo_tree_ = std::make_unique<UndoTree>();
undo_sys_ = std::make_unique<UndoSystem>(*this, *undo_tree_);
@@ -137,10 +145,12 @@ Buffer::Buffer(Buffer &&other) noexcept
undo_sys_(std::move(other.undo_sys_))
{
// Move syntax/highlighting state
version_ = other.version_;
syntax_enabled_ = other.syntax_enabled_;
filetype_ = std::move(other.filetype_);
highlighter_ = std::move(other.highlighter_);
version_ = other.version_;
syntax_enabled_ = other.syntax_enabled_;
filetype_ = std::move(other.filetype_);
highlighter_ = std::move(other.highlighter_);
content_ = std::move(other.content_);
rows_cache_dirty_ = other.rows_cache_dirty_;
// Update UndoSystem's buffer reference to point to this object
if (undo_sys_) {
undo_sys_->UpdateBufferReference(*this);
@@ -173,11 +183,12 @@ Buffer::operator=(Buffer &&other) noexcept
undo_sys_ = std::move(other.undo_sys_);
// Move syntax/highlighting state
version_ = other.version_;
syntax_enabled_ = other.syntax_enabled_;
filetype_ = std::move(other.filetype_);
highlighter_ = std::move(other.highlighter_);
version_ = other.version_;
syntax_enabled_ = other.syntax_enabled_;
filetype_ = std::move(other.filetype_);
highlighter_ = std::move(other.highlighter_);
content_ = std::move(other.content_);
rows_cache_dirty_ = other.rows_cache_dirty_;
// Update UndoSystem's buffer reference to point to this object
if (undo_sys_) {
undo_sys_->UpdateBufferReference(*this);
@@ -229,6 +240,10 @@ Buffer::OpenFromFile(const std::string &path, std::string &err)
mark_set_ = false;
mark_curx_ = mark_cury_ = 0;
// Empty PieceTable
content_.Clear();
rows_cache_dirty_ = true;
return true;
}
@@ -238,50 +253,23 @@ Buffer::OpenFromFile(const std::string &path, std::string &err)
return false;
}
// Detect if file ends with a newline so we can preserve a final empty line
// in our in-memory representation (mg-style semantics).
bool ends_with_nl = false;
{
in.seekg(0, std::ios::end);
std::streamoff sz = in.tellg();
if (sz > 0) {
in.seekg(-1, std::ios::end);
char last = 0;
in.read(&last, 1);
ends_with_nl = (last == '\n');
} else {
in.clear();
}
// Rewind to start for line-by-line read
in.clear();
// Read entire file into PieceTable as-is
std::string data;
in.seekg(0, std::ios::end);
auto sz = in.tellg();
if (sz > 0) {
data.resize(static_cast<std::size_t>(sz));
in.seekg(0, std::ios::beg);
in.read(data.data(), static_cast<std::streamsize>(data.size()));
}
rows_.clear();
std::string line;
while (std::getline(in, line)) {
// std::getline strips the '\n', keep raw line content only
// Handle potential Windows CRLF: strip trailing '\r'
if (!line.empty() && line.back() == '\r') {
line.pop_back();
}
rows_.emplace_back(line);
}
// If the file ended with a newline and we didn't already get an
// empty final row from getline (e.g., when the last textual line
// had content followed by '\n'), append an empty row to represent
// the cursor position past the last newline.
if (ends_with_nl) {
if (rows_.empty() || !rows_.back().empty()) {
rows_.emplace_back(std::string());
}
}
nrows_ = rows_.size();
filename_ = norm;
is_file_backed_ = true;
dirty_ = false;
content_.Clear();
if (!data.empty())
content_.Append(data.data(), data.size());
rows_cache_dirty_ = true;
nrows_ = 0; // not used under PieceTable
filename_ = norm;
is_file_backed_ = true;
dirty_ = false;
// Reset/initialize undo system for this loaded file
if (!undo_tree_)
@@ -304,31 +292,29 @@ Buffer::OpenFromFile(const std::string &path, std::string &err)
bool
Buffer::Save(std::string &err) const
{
if (!is_file_backed_ || filename_.empty()) {
err = "Buffer is not file-backed; use SaveAs()";
return false;
}
std::ofstream out(filename_, std::ios::out | std::ios::binary | std::ios::trunc);
if (!out) {
err = "Failed to open for write: " + filename_;
return false;
}
for (std::size_t i = 0; i < rows_.size(); ++i) {
const char *d = rows_[i].Data();
std::size_t n = rows_[i].Size();
if (d && n)
out.write(d, static_cast<std::streamsize>(n));
if (i + 1 < rows_.size()) {
out.put('\n');
}
}
if (!out.good()) {
err = "Write error";
return false;
}
// Note: const method cannot change dirty_. Intentionally const to allow UI code
// to decide when to flip dirty flag after successful save.
return true;
if (!is_file_backed_ || filename_.empty()) {
err = "Buffer is not file-backed; use SaveAs()";
return false;
}
std::ofstream out(filename_, std::ios::out | std::ios::binary | std::ios::trunc);
if (!out) {
err = "Failed to open for write: " + filename_ + ". Error: " + std::string(std::strerror(errno));
return false;
}
// Stream the content directly from the piece table to avoid relying on
// full materialization, which may yield an empty pointer when size > 0.
if (content_.Size() > 0) {
content_.WriteToStream(out);
}
// Ensure data hits the OS buffers
out.flush();
if (!out.good()) {
err = "Write error: " + filename_ + ". Error: " + std::string(std::strerror(errno));
return false;
}
// Note: const method cannot change dirty_. Intentionally const to allow UI code
// to decide when to flip dirty flag after successful save.
return true;
}
@@ -357,22 +343,19 @@ Buffer::SaveAs(const std::string &path, std::string &err)
// Write to the given path
std::ofstream out(out_path, std::ios::out | std::ios::binary | std::ios::trunc);
if (!out) {
err = "Failed to open for write: " + out_path;
return false;
}
for (std::size_t i = 0; i < rows_.size(); ++i) {
const char *d = rows_[i].Data();
std::size_t n = rows_[i].Size();
if (d && n)
out.write(d, static_cast<std::streamsize>(n));
if (i + 1 < rows_.size()) {
out.put('\n');
}
}
if (!out.good()) {
err = "Write error";
err = "Failed to open for write: " + out_path + ". Error: " + std::string(std::strerror(errno));
return false;
}
// Stream content without forcing full materialization
if (content_.Size() > 0) {
content_.WriteToStream(out);
}
// Ensure data hits the OS buffers
out.flush();
if (!out.good()) {
err = "Write error: " + out_path + ". Error: " + std::string(std::strerror(errno));
return false;
}
filename_ = out_path;
is_file_backed_ = true;
@@ -389,7 +372,7 @@ Buffer::AsString() const
if (this->Dirty()) {
ss << "*";
}
ss << ">: " << rows_.size() << " lines";
ss << ">: " << content_.LineCount() << " lines";
return ss.str();
}
@@ -400,111 +383,135 @@ Buffer::insert_text(int row, int col, std::string_view text)
{
if (row < 0)
row = 0;
if (static_cast<std::size_t>(row) > rows_.size())
row = static_cast<int>(rows_.size());
if (rows_.empty())
rows_.emplace_back("");
if (static_cast<std::size_t>(row) >= rows_.size())
rows_.emplace_back("");
auto y = static_cast<std::size_t>(row);
auto x = static_cast<std::size_t>(col);
if (x > rows_[y].size())
x = rows_[y].size();
std::string remain(text);
while (true) {
auto pos = remain.find('\n');
if (pos == std::string::npos) {
rows_[y].insert(x, remain);
break;
}
// Insert up to newline
std::string seg = remain.substr(0, pos);
rows_[y].insert(x, seg);
x += seg.size();
// Split line at x
std::string tail = rows_[y].substr(x);
rows_[y].erase(x);
rows_.insert(rows_.begin() + static_cast<std::ptrdiff_t>(y + 1), Line(tail));
y += 1;
x = 0;
remain.erase(0, pos + 1);
if (col < 0)
col = 0;
const std::size_t off = content_.LineColToByteOffset(static_cast<std::size_t>(row),
static_cast<std::size_t>(col));
if (!text.empty()) {
content_.Insert(off, text.data(), text.size());
rows_cache_dirty_ = true;
}
// Do not set dirty here; UndoSystem will manage state/dirty externally
}
// ===== Adapter helpers for PieceTable-backed Buffer =====
std::string_view
Buffer::GetLineView(std::size_t row) const
{
// Get byte range for the logical line and return a view into materialized data
auto range = content_.GetLineRange(row); // [start,end) in bytes
const char *base = content_.Data(); // materializes if needed
if (!base)
return std::string_view();
const std::size_t start = range.first;
const std::size_t len = (range.second > range.first) ? (range.second - range.first) : 0;
return std::string_view(base + start, len);
}
void
Buffer::ensure_rows_cache() const
{
if (!rows_cache_dirty_)
return;
rows_.clear();
const std::size_t lc = content_.LineCount();
rows_.reserve(lc);
for (std::size_t i = 0; i < lc; ++i) {
rows_.emplace_back(content_.GetLine(i));
}
// Keep nrows_ in sync for any legacy code that still reads it
const_cast<Buffer *>(this)->nrows_ = rows_.size();
rows_cache_dirty_ = false;
}
std::size_t
Buffer::content_LineCount_() const
{
return content_.LineCount();
}
void
Buffer::delete_text(int row, int col, std::size_t len)
{
if (rows_.empty() || len == 0)
if (len == 0)
return;
if (row < 0)
row = 0;
if (static_cast<std::size_t>(row) >= rows_.size())
return;
const auto y = static_cast<std::size_t>(row);
const auto x = std::min<std::size_t>(static_cast<std::size_t>(col), rows_[y].size());
if (col < 0)
col = 0;
const std::size_t start = content_.LineColToByteOffset(static_cast<std::size_t>(row),
static_cast<std::size_t>(col));
std::size_t r = static_cast<std::size_t>(row);
std::size_t c = static_cast<std::size_t>(col);
std::size_t remaining = len;
while (remaining > 0 && y < rows_.size()) {
auto &line = rows_[y];
const std::size_t in_line = std::min<std::size_t>(remaining, line.size() - std::min(x, line.size()));
if (x < line.size() && in_line > 0) {
line.erase(x, in_line);
remaining -= in_line;
const std::size_t lc = content_.LineCount();
while (remaining > 0 && r < lc) {
const std::string line = content_.GetLine(r); // logical line (without trailing '\n')
const std::size_t L = line.size();
if (c < L) {
const std::size_t take = std::min(remaining, L - c);
c += take;
remaining -= take;
}
if (remaining == 0)
break;
// If at or beyond end of line and there is a next line, join it (deleting the implied '\n')
if (y + 1 < rows_.size()) {
line += rows_[y + 1];
rows_.erase(rows_.begin() + static_cast<std::ptrdiff_t>(y + 1));
// deleting the newline consumes one virtual character
// Consume newline between lines as one char, if there is a next line
if (r + 1 < lc) {
if (remaining > 0) {
// Treat the newline as one deletion unit if len spans it
// We already joined, so nothing else to do here.
remaining -= 1; // the newline
r += 1;
c = 0;
}
} else {
break;
// At last line and still remaining: delete to EOF
std::size_t total = content_.Size();
content_.Delete(start, total - start);
rows_cache_dirty_ = true;
return;
}
}
// Compute end offset at (r,c)
std::size_t end = content_.LineColToByteOffset(r, c);
if (end > start) {
content_.Delete(start, end - start);
rows_cache_dirty_ = true;
}
}
void
Buffer::split_line(int row, const int col)
{
if (row < 0) {
if (row < 0)
row = 0;
}
if (static_cast<std::size_t>(row) >= rows_.size()) {
rows_.resize(static_cast<std::size_t>(row) + 1);
}
const auto y = static_cast<std::size_t>(row);
const auto x = std::min<std::size_t>(static_cast<std::size_t>(col), rows_[y].size());
const auto tail = rows_[y].substr(x);
rows_[y].erase(x);
rows_.insert(rows_.begin() + static_cast<std::ptrdiff_t>(y + 1), Line(tail));
if (col < 0)
col = 0;
const std::size_t off = content_.LineColToByteOffset(static_cast<std::size_t>(row),
static_cast<std::size_t>(col));
const char nl = '\n';
content_.Insert(off, &nl, 1);
rows_cache_dirty_ = true;
}
void
Buffer::join_lines(int row)
{
if (row < 0) {
if (row < 0)
row = 0;
}
const auto y = static_cast<std::size_t>(row);
if (y + 1 >= rows_.size()) {
std::size_t r = static_cast<std::size_t>(row);
if (r + 1 >= content_.LineCount())
return;
}
rows_[y] += rows_[y + 1];
rows_.erase(rows_.begin() + static_cast<std::ptrdiff_t>(y + 1));
// Delete the newline between line r and r+1
std::size_t end_of_line = content_.LineColToByteOffset(r, std::numeric_limits<std::size_t>::max());
// end_of_line now equals line end (clamped before newline). The newline should be exactly at this position.
content_.Delete(end_of_line, 1);
rows_cache_dirty_ = true;
}
@@ -513,9 +520,12 @@ Buffer::insert_row(int row, const std::string_view text)
{
if (row < 0)
row = 0;
if (static_cast<std::size_t>(row) > rows_.size())
row = static_cast<int>(rows_.size());
rows_.insert(rows_.begin() + row, Line(std::string(text)));
std::size_t off = content_.LineColToByteOffset(static_cast<std::size_t>(row), 0);
if (!text.empty())
content_.Insert(off, text.data(), text.size());
const char nl = '\n';
content_.Insert(off + text.size(), &nl, 1);
rows_cache_dirty_ = true;
}
@@ -524,9 +534,16 @@ Buffer::delete_row(int row)
{
if (row < 0)
row = 0;
if (static_cast<std::size_t>(row) >= rows_.size())
std::size_t r = static_cast<std::size_t>(row);
if (r >= content_.LineCount())
return;
rows_.erase(rows_.begin() + row);
auto range = content_.GetLineRange(r); // [start,end)
// If not last line, ensure we include the separating newline by using end as-is (which points to next line start)
// If last line, end may equal total_size_. We still delete [start,end) which removes the last line content.
std::size_t start = range.first;
std::size_t end = range.second;
content_.Delete(start, end - start);
rows_cache_dirty_ = true;
}
@@ -542,4 +559,4 @@ const UndoSystem *
Buffer::Undo() const
{
return undo_sys_.get();
}
}
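A brief usage note for the zero-copy accessor implemented above: `GetLineView()` returns a view into the piece table's materialized store, so it must be consumed before the next edit. The sketch below assumes a `Buffer` that already holds at least one line.

#include <string>
#include <string_view>
#include "Buffer.h"

// Sketch: copy a line out before mutating the buffer.
static std::string FirstLineCopy(Buffer &buf)
{
        std::string_view v = buf.GetLineView(0);  // valid only until the next edit
        std::string keep(v);                      // copy out anything that must survive
        buf.insert_text(0, 0, "x");               // any edit may invalidate v
        return keep;                              // safe: `keep` owns its bytes
}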

117
Buffer.h
View File

@@ -9,10 +9,9 @@
#include <vector>
#include <string_view>
#include "AppendBuffer.h"
#include "PieceTable.h"
#include "UndoSystem.h"
#include <cstdint>
#include <memory>
#include "syntax/HighlighterEngine.h"
#include "Highlight.h"
@@ -63,7 +62,7 @@ public:
[[nodiscard]] std::size_t Nrows() const
{
return nrows_;
return content_LineCount_();
}
@@ -79,7 +78,8 @@ public:
}
// Line wrapper backed by AppendBuffer (GapBuffer/PieceTable)
// Line wrapper used by legacy command paths.
// Keep this lightweight: store materialized bytes only for that line.
class Line {
public:
Line() = default;
@@ -108,119 +108,102 @@ public:
// capacity helpers
void Clear()
{
buf_.Clear();
s_.clear();
}
// size/access
[[nodiscard]] std::size_t size() const
{
return buf_.Size();
return s_.size();
}
[[nodiscard]] bool empty() const
{
return size() == 0;
return s_.empty();
}
// read-only raw view
[[nodiscard]] const char *Data() const
{
return buf_.Data();
return s_.data();
}
[[nodiscard]] std::size_t Size() const
{
return buf_.Size();
return s_.size();
}
// element access (read-only)
[[nodiscard]] char operator[](std::size_t i) const
{
const char *d = buf_.Data();
return (i < buf_.Size() && d) ? d[i] : '\0';
return (i < s_.size()) ? s_[i] : '\0';
}
// conversions
explicit operator std::string() const
{
return {buf_.Data() ? buf_.Data() : "", buf_.Size()};
return s_;
}
// string-like API used by command/renderer layers (implemented via materialization for now)
[[nodiscard]] std::string substr(std::size_t pos) const
{
const std::size_t n = buf_.Size();
if (pos >= n)
return {};
return {buf_.Data() + pos, n - pos};
return pos < s_.size() ? s_.substr(pos) : std::string();
}
[[nodiscard]] std::string substr(std::size_t pos, std::size_t len) const
{
const std::size_t n = buf_.Size();
if (pos >= n)
return {};
const std::size_t take = (pos + len > n) ? (n - pos) : len;
return {buf_.Data() + pos, take};
return pos < s_.size() ? s_.substr(pos, len) : std::string();
}
// minimal find() to support search within a line
[[nodiscard]] std::size_t find(const std::string &needle, const std::size_t pos = 0) const
{
// Materialize to std::string for now; Line is backed by AppendBuffer
const auto s = static_cast<std::string>(*this);
return s.find(needle, pos);
return s_.find(needle, pos);
}
void erase(std::size_t pos)
{
// erase to end
material_edit([&](std::string &s) {
if (pos < s.size())
s.erase(pos);
});
if (pos < s_.size())
s_.erase(pos);
}
void erase(std::size_t pos, std::size_t len)
{
material_edit([&](std::string &s) {
if (pos < s.size())
s.erase(pos, len);
});
if (pos < s_.size())
s_.erase(pos, len);
}
void insert(std::size_t pos, const std::string &seg)
{
material_edit([&](std::string &s) {
if (pos > s.size())
pos = s.size();
s.insert(pos, seg);
});
if (pos > s_.size())
pos = s_.size();
s_.insert(pos, seg);
}
Line &operator+=(const Line &other)
{
buf_.Append(other.buf_.Data(), other.buf_.Size());
s_ += other.s_;
return *this;
}
Line &operator+=(const std::string &s)
{
buf_.Append(s.data(), s.size());
s_ += s;
return *this;
}
@@ -234,37 +217,47 @@ public:
private:
void assign_from(const std::string &s)
{
buf_.Clear();
if (!s.empty())
buf_.Append(s.data(), s.size());
s_ = s;
}
template<typename F>
void material_edit(F fn)
{
std::string tmp = static_cast<std::string>(*this);
fn(tmp);
assign_from(tmp);
}
AppendBuffer buf_;
std::string s_;
};
[[nodiscard]] const std::vector<Line> &Rows() const
{
ensure_rows_cache();
return rows_;
}
[[nodiscard]] std::vector<Line> &Rows()
{
ensure_rows_cache();
return rows_;
}
// Lightweight, lazy per-line accessors that avoid materializing all rows.
// Prefer these over Rows() in hot paths to reduce memory overhead on large files.
[[nodiscard]] std::string GetLineString(std::size_t row) const
{
return content_.GetLine(row);
}
[[nodiscard]] std::pair<std::size_t, std::size_t> GetLineRange(std::size_t row) const
{
return content_.GetLineRange(row);
}
// Zero-copy view of a line. Points into the materialized backing store; becomes
// invalid after subsequent edits. Use immediately.
[[nodiscard]] std::string_view GetLineView(std::size_t row) const;
[[nodiscard]] const std::string &Filename() const
{
return filename_;
@@ -409,13 +402,13 @@ public:
}
kte::HighlighterEngine *Highlighter()
[[nodiscard]] kte::HighlighterEngine *Highlighter()
{
return highlighter_.get();
}
const kte::HighlighterEngine *Highlighter() const
[[nodiscard]] const kte::HighlighterEngine *Highlighter() const
{
return highlighter_.get();
}
@@ -450,7 +443,7 @@ public:
void delete_row(int row);
// Undo system accessors (created per-buffer)
UndoSystem *Undo();
[[nodiscard]] UndoSystem *Undo();
[[nodiscard]] const UndoSystem *Undo() const;
@@ -460,7 +453,17 @@ private:
std::size_t rx_ = 0; // render x (tabs expanded)
std::size_t nrows_ = 0; // number of rows
std::size_t rowoffs_ = 0, coloffs_ = 0; // viewport offsets
std::vector<Line> rows_; // buffer rows (without trailing newlines)
mutable std::vector<Line> rows_; // materialized cache of rows (without trailing newlines)
// PieceTable is the source of truth.
PieceTable content_{};
mutable bool rows_cache_dirty_ = true; // invalidate on edits / I/O
// Helper to rebuild rows_ from content_
void ensure_rows_cache() const;
// Helper to query content_.LineCount() while keeping header minimal
std::size_t content_LineCount_() const;
std::string filename_;
bool is_file_backed_ = false;
bool dirty_ = false;

View File

@@ -4,14 +4,13 @@ project(kte)
include(GNUInstallDirs)
set(CMAKE_CXX_STANDARD 20)
set(KTE_VERSION "1.4.1")
set(KTE_VERSION "1.5.3")
# Default to terminal-only build to avoid SDL/OpenGL dependency by default.
# Enable with -DBUILD_GUI=ON when SDL2/OpenGL/Freetype are available.
set(BUILD_GUI ON CACHE BOOL "Enable building the graphical version.")
set(KTE_USE_QT OFF CACHE BOOL "Build the QT frontend instead of ImGui.")
set(BUILD_TESTS OFF CACHE BOOL "Enable building test programs.")
option(KTE_USE_PIECE_TABLE "Use PieceTable instead of GapBuffer implementation" ON)
set(KTE_FONT_SIZE "18.0" CACHE STRING "Default font size for GUI")
option(KTE_UNDO_DEBUG "Enable undo instrumentation logs" OFF)
option(KTE_ENABLE_TREESITTER "Enable optional Tree-sitter highlighter adapter" OFF)
@@ -128,7 +127,6 @@ if (BUILD_GUI)
endif ()
set(COMMON_SOURCES
GapBuffer.cc
PieceTable.cc
Buffer.cc
Editor.cc
@@ -213,11 +211,9 @@ set(FONT_HEADERS
)
set(COMMON_HEADERS
GapBuffer.h
PieceTable.h
Buffer.h
Editor.h
AppendBuffer.h
Command.h
HelpText.h
KKeymap.h
@@ -270,9 +266,6 @@ add_executable(kte
${COMMON_HEADERS}
)
if (KTE_USE_PIECE_TABLE)
target_compile_definitions(kte PRIVATE KTE_USE_PIECE_TABLE=1)
endif ()
if (KTE_UNDO_DEBUG)
target_compile_definitions(kte PRIVATE KTE_UNDO_DEBUG=1)
endif ()
@@ -299,29 +292,34 @@ install(TARGETS kte
install(FILES docs/kte.1 DESTINATION ${CMAKE_INSTALL_MANDIR}/man1)
if (BUILD_TESTS)
# test_undo executable for testing undo/redo system
add_executable(test_undo
test_undo.cc
${COMMON_SOURCES}
${COMMON_HEADERS}
# Unified unit test runner
add_executable(kte_tests
tests/TestRunner.cc
tests/Test.h
tests/test_buffer_io.cc
tests/test_piece_table.cc
tests/test_search.cc
# minimal engine sources required by Buffer
PieceTable.cc
Buffer.cc
OptimizedSearch.cc
UndoNode.cc
UndoTree.cc
UndoSystem.cc
${SYNTAX_SOURCES}
)
if (KTE_USE_PIECE_TABLE)
target_compile_definitions(test_undo PRIVATE KTE_USE_PIECE_TABLE=1)
endif ()
# Allow tests to include project headers like "Buffer.h"
target_include_directories(kte_tests PRIVATE ${CMAKE_CURRENT_SOURCE_DIR})
if (KTE_UNDO_DEBUG)
target_compile_definitions(test_undo PRIVATE KTE_UNDO_DEBUG=1)
endif ()
target_link_libraries(test_undo ${CURSES_LIBRARIES})
# Keep tests free of ncurses/GUI deps
if (KTE_ENABLE_TREESITTER)
if (TREESITTER_INCLUDE_DIR)
target_include_directories(test_undo PRIVATE ${TREESITTER_INCLUDE_DIR})
target_include_directories(kte_tests PRIVATE ${TREESITTER_INCLUDE_DIR})
endif ()
if (TREESITTER_LIBRARY)
target_link_libraries(test_undo ${TREESITTER_LIBRARY})
target_link_libraries(kte_tests ${TREESITTER_LIBRARY})
endif ()
endif ()
endif ()
@@ -381,12 +379,18 @@ if (${BUILD_GUI})
${CMAKE_CURRENT_BINARY_DIR}/kge-Info.plist
@ONLY)
# Ensure proper macOS bundle properties and RPATH so our bundled
# frameworks are preferred over system/Homebrew ones.
set_target_properties(kge PROPERTIES
MACOSX_BUNDLE TRUE
MACOSX_BUNDLE_GUI_IDENTIFIER ${KGE_BUNDLE_ID}
MACOSX_BUNDLE_BUNDLE_NAME "kge"
MACOSX_BUNDLE_ICON_FILE ${MACOSX_BUNDLE_ICON_FILE}
MACOSX_BUNDLE_INFO_PLIST "${CMAKE_CURRENT_BINARY_DIR}/kge-Info.plist")
MACOSX_BUNDLE_INFO_PLIST "${CMAKE_CURRENT_BINARY_DIR}/kge-Info.plist"
# Prefer the app's bundled frameworks at runtime
INSTALL_RPATH "@executable_path/../Frameworks"
BUILD_WITH_INSTALL_RPATH TRUE
)
add_dependencies(kge kte)
add_custom_command(TARGET kge POST_BUILD
@@ -410,4 +414,19 @@ if (${BUILD_GUI})
# Install kge man page only when GUI is built
install(FILES docs/kge.1 DESTINATION ${CMAKE_INSTALL_MANDIR}/man1)
install(FILES kge.png DESTINATION ${CMAKE_INSTALL_PREFIX}/share/icons)
# Optional post-build bundle fixup (can also be run from scripts).
# This provides a CMake target to run BundleUtilities' fixup_bundle on the
# built app, useful after macdeployqt to ensure non-Qt dylibs are internalized.
if (APPLE)
include(CMakeParseArguments)
add_custom_target(kge_fixup_bundle ALL
COMMAND ${CMAKE_COMMAND}
-DAPP_BUNDLE=$<TARGET_BUNDLE_DIR:kge>
-P ${CMAKE_CURRENT_LIST_DIR}/cmake/fix_bundle.cmake
BYPRODUCTS $<TARGET_BUNDLE_DIR:kge>/Contents/Frameworks
COMMENT "Running fixup_bundle on kge.app to internalize non-Qt dylibs"
VERBATIM)
add_dependencies(kge_fixup_bundle kge)
endif ()
endif ()

1105
Command.cc

File diff suppressed because it is too large.

View File

@@ -31,6 +31,7 @@ enum class CommandId {
VisualFontPickerToggle,
// Buffers
BufferSwitchStart, // begin buffer switch prompt
BufferNew, // create a new empty, unnamed buffer (C-k i)
BufferClose,
BufferNext,
BufferPrev,

View File

@@ -197,9 +197,11 @@ Editor::OpenFile(const std::string &path, std::string &err)
eng->InvalidateFrom(0);
}
}
return true;
}
}
// Defensive: ensure any active prompt is closed after a successful open
CancelPrompt();
return true;
}
}
Buffer b;
if (!b.OpenFromFile(path, err)) {
@@ -237,8 +239,10 @@ Editor::OpenFile(const std::string &path, std::string &err)
}
// Add as a new buffer and switch to it
std::size_t idx = AddBuffer(std::move(b));
SwitchTo(idx);
return true;
SwitchTo(idx);
// Defensive: ensure any active prompt is closed after a successful open
CancelPrompt();
return true;
}

View File

@@ -1,204 +0,0 @@
#include <algorithm>
#include <cassert>
#include <cstring>
#include "GapBuffer.h"
GapBuffer::GapBuffer() = default;
GapBuffer::GapBuffer(std::size_t initialCapacity)
: buffer_(nullptr), size_(0), capacity_(0)
{
if (initialCapacity > 0) {
Reserve(initialCapacity);
}
}
GapBuffer::GapBuffer(const GapBuffer &other)
: buffer_(nullptr), size_(0), capacity_(0)
{
if (other.capacity_ > 0) {
Reserve(other.capacity_);
if (other.size_ > 0) {
std::memcpy(buffer_, other.buffer_, other.size_);
size_ = other.size_;
}
setTerminator();
}
}
GapBuffer &
GapBuffer::operator=(const GapBuffer &other)
{
if (this == &other)
return *this;
if (other.capacity_ > capacity_) {
Reserve(other.capacity_);
}
if (other.size_ > 0) {
std::memcpy(buffer_, other.buffer_, other.size_);
}
size_ = other.size_;
setTerminator();
return *this;
}
GapBuffer::GapBuffer(GapBuffer &&other) noexcept
: buffer_(other.buffer_), size_(other.size_), capacity_(other.capacity_)
{
other.buffer_ = nullptr;
other.size_ = 0;
other.capacity_ = 0;
}
GapBuffer &
GapBuffer::operator=(GapBuffer &&other) noexcept
{
if (this == &other)
return *this;
delete[] buffer_;
buffer_ = other.buffer_;
size_ = other.size_;
capacity_ = other.capacity_;
other.buffer_ = nullptr;
other.size_ = 0;
other.capacity_ = 0;
return *this;
}
GapBuffer::~GapBuffer()
{
delete[] buffer_;
}
void
GapBuffer::Reserve(const std::size_t newCapacity)
{
if (newCapacity <= capacity_) [[likely]]
return;
// Allocate space for terminator as well
char *nb = new char[newCapacity + 1];
if (size_ > 0 && buffer_) {
std::memcpy(nb, buffer_, size_);
}
delete[] buffer_;
buffer_ = nb;
capacity_ = newCapacity;
setTerminator();
}
void
GapBuffer::AppendChar(const char c)
{
ensureCapacityFor(1);
buffer_[size_++] = c;
setTerminator();
}
void
GapBuffer::Append(const char *s, const std::size_t len)
{
if (!s || len == 0) [[unlikely]]
return;
ensureCapacityFor(len);
std::memcpy(buffer_ + size_, s, len);
size_ += len;
setTerminator();
}
void
GapBuffer::Append(const GapBuffer &other)
{
if (other.size_ == 0)
return;
Append(other.buffer_, other.size_);
}
void
GapBuffer::PrependChar(char c)
{
ensureCapacityFor(1);
// shift right by 1
if (size_ > 0) [[likely]] {
std::memmove(buffer_ + 1, buffer_, size_);
}
buffer_[0] = c;
++size_;
setTerminator();
}
void
GapBuffer::Prepend(const char *s, std::size_t len)
{
if (!s || len == 0) [[unlikely]]
return;
ensureCapacityFor(len);
if (size_ > 0) [[likely]] {
std::memmove(buffer_ + len, buffer_, size_);
}
std::memcpy(buffer_, s, len);
size_ += len;
setTerminator();
}
void
GapBuffer::Prepend(const GapBuffer &other)
{
if (other.size_ == 0)
return;
Prepend(other.buffer_, other.size_);
}
void
GapBuffer::Clear()
{
size_ = 0;
setTerminator();
}
void
GapBuffer::ensureCapacityFor(std::size_t delta)
{
if (capacity_ - size_ >= delta) [[likely]]
return;
auto required = size_ + delta;
Reserve(growCapacity(capacity_, required));
}
std::size_t
GapBuffer::growCapacity(std::size_t current, std::size_t required)
{
// geometric growth, at least required
std::size_t newCap = current ? current : 8;
while (newCap < required)
newCap = newCap + (newCap >> 1); // 1.5x growth
return newCap;
}
void
GapBuffer::setTerminator() const
{
if (!buffer_) {
return;
}
buffer_[size_] = '\0';
}

View File

@@ -1,76 +0,0 @@
/*
* GapBuffer.h - C++ replacement for abuf append/prepend buffer utilities
*/
#pragma once
#include <cstddef>
class GapBuffer {
public:
GapBuffer();
explicit GapBuffer(std::size_t initialCapacity);
GapBuffer(const GapBuffer &other);
GapBuffer &operator=(const GapBuffer &other);
GapBuffer(GapBuffer &&other) noexcept;
GapBuffer &operator=(GapBuffer &&other) noexcept;
~GapBuffer();
void Reserve(std::size_t newCapacity);
void AppendChar(char c);
void Append(const char *s, std::size_t len);
void Append(const GapBuffer &other);
void PrependChar(char c);
void Prepend(const char *s, std::size_t len);
void Prepend(const GapBuffer &other);
// Content management
void Clear();
// Accessors
char *Data()
{
return buffer_;
}
[[nodiscard]] const char *Data() const
{
return buffer_;
}
[[nodiscard]] std::size_t Size() const
{
return size_;
}
[[nodiscard]] std::size_t Capacity() const
{
return capacity_;
}
private:
void ensureCapacityFor(std::size_t delta);
static std::size_t growCapacity(std::size_t current, std::size_t required);
void setTerminator() const;
char *buffer_ = nullptr;
std::size_t size_ = 0; // number of valid bytes (excluding terminator)
std::size_t capacity_ = 0; // capacity of buffer_ excluding space for terminator
};

View File

@@ -31,6 +31,7 @@ HelpText::Text()
" C-k c Close current buffer\n"
" C-k d Kill to end of line\n"
" C-k e Open file (prompt)\n"
" C-k i New empty buffer\n"
" C-k f Flush kill ring\n"
" C-k g Jump to line\n"
" C-k h Show this help\n"

View File

@@ -158,16 +158,17 @@ map_key(const SDL_Keycode key,
ascii_key = static_cast<int>(key);
}
bool ctrl2 = (mod & KMOD_CTRL) != 0;
// If user typed a literal 'C' (or '^') as a control qualifier, keep k-prefix active
if (ascii_key == 'C' || ascii_key == 'c' || ascii_key == '^') {
k_ctrl_pending = true;
// Keep waiting for the next suffix; show status and suppress ensuing TEXTINPUT
if (ed)
ed->SetStatus("C-k C _");
suppress_textinput_once = true;
out.hasCommand = false;
return true;
}
// If user typed a literal 'C' (uppercase) or '^' as a control qualifier, keep k-prefix active
// Do NOT treat lowercase 'c' as a qualifier; 'c' is a valid k-command (BufferClose).
if (ascii_key == 'C' || ascii_key == '^') {
k_ctrl_pending = true;
// Keep waiting for the next suffix; show status and suppress ensuing TEXTINPUT
if (ed)
ed->SetStatus("C-k C _");
suppress_textinput_once = true;
out.hasCommand = false;
return true;
}
// Otherwise, consume the k-prefix now for the actual suffix
k_prefix = false;
if (ascii_key != 0) {
@@ -294,25 +295,34 @@ ImGuiInputHandler::ProcessSDLEvent(const SDL_Event &e)
bool produced = false;
switch (e.type) {
case SDL_MOUSEWHEEL: {
// Let ImGui handle mouse wheel when it wants to capture the mouse
// (e.g., when hovering the editor child window with scrollbars).
// This enables native vertical and horizontal scrolling behavior in GUI.
if (ImGui::GetIO().WantCaptureMouse)
return false;
// Otherwise, fallback to mapping vertical wheel to editor scroll commands.
int dy = e.wheel.y;
// High-resolution trackpads can deliver fractional wheel deltas. Accumulate
// precise values and emit one scroll step per whole unit.
float dy = 0.0f;
#if SDL_VERSION_ATLEAST(2,0,18)
dy = e.wheel.preciseY;
#else
dy = static_cast<float>(e.wheel.y);
#endif
#ifdef SDL_MOUSEWHEEL_FLIPPED
if (e.wheel.direction == SDL_MOUSEWHEEL_FLIPPED)
dy = -dy;
#endif
if (dy != 0) {
int repeat = dy > 0 ? dy : -dy;
CommandId id = dy > 0 ? CommandId::ScrollUp : CommandId::ScrollDown;
std::lock_guard<std::mutex> lk(mu_);
for (int i = 0; i < repeat; ++i) {
q_.push(MappedInput{true, id, std::string(), 0});
if (dy != 0.0f) {
wheel_accum_y_ += dy;
float abs_accum = wheel_accum_y_ >= 0.0f ? wheel_accum_y_ : -wheel_accum_y_;
int steps = static_cast<int>(abs_accum);
if (steps > 0) {
CommandId id = (wheel_accum_y_ > 0.0f) ? CommandId::ScrollUp : CommandId::ScrollDown;
std::lock_guard<std::mutex> lk(mu_);
for (int i = 0; i < steps; ++i) {
q_.push(MappedInput{true, id, std::string(), 0});
}
// remove the whole steps, keep fractional remainder
wheel_accum_y_ += (wheel_accum_y_ > 0.0f)
? -static_cast<float>(steps)
: static_cast<float>(steps);
return true; // consumed
}
return true; // consumed
}
return false;
}
@@ -463,16 +473,16 @@ ImGuiInputHandler::ProcessSDLEvent(const SDL_Event &e)
ascii_key = static_cast<int>(c0);
}
if (ascii_key != 0) {
// Qualifier via TEXTINPUT: 'C' or '^'
if (ascii_key == 'C' || ascii_key == 'c' || ascii_key == '^') {
k_ctrl_pending_ = true;
if (ed_)
ed_->SetStatus("C-k C _");
// Keep k-prefix active; do not emit a command
k_prefix_ = true;
produced = true;
break;
}
// Qualifier via TEXTINPUT: uppercase 'C' or '^' only
if (ascii_key == 'C' || ascii_key == '^') {
k_ctrl_pending_ = true;
if (ed_)
ed_->SetStatus("C-k C _");
// Keep k-prefix active; do not emit a command
k_prefix_ = true;
produced = true;
break;
}
// Map via k-prefix table; do not pass Ctrl for TEXTINPUT case
CommandId id;
bool pass_ctrl = k_ctrl_pending_;

View File

@@ -41,4 +41,9 @@ private:
bool suppress_text_input_once_ = false;
Editor *ed_ = nullptr; // attached editor for editor-owned uarg handling
// Accumulators for high-resolution (trackpad) scrolling. We emit one scroll
// command per whole step and keep the fractional remainder.
float wheel_accum_y_ = 0.0f;
float wheel_accum_x_ = 0.0f; // reserved for future horizontal scrolling
};

View File

@@ -42,6 +42,9 @@ KLookupKCommand(const int ascii_key, const bool ctrl, CommandId &out) -> bool
case 'a':
out = CommandId::MarkAllAndJumpEnd;
return true;
case 'i':
out = CommandId::BufferNew; // C-k i new empty buffer
return true;
case 'k':
out = CommandId::CenterOnCursor; // C-k k center current line
return true;

View File

@@ -1,5 +1,7 @@
#include <algorithm>
#include <utility>
#include <limits>
#include <ostream>
#include "PieceTable.h"
@@ -14,13 +16,32 @@ PieceTable::PieceTable(const std::size_t initialCapacity)
}
PieceTable::PieceTable(const std::size_t initialCapacity,
const std::size_t piece_limit,
const std::size_t small_piece_threshold,
const std::size_t max_consolidation_bytes)
{
add_.reserve(initialCapacity);
materialized_.reserve(initialCapacity);
piece_limit_ = piece_limit;
small_piece_threshold_ = small_piece_threshold;
max_consolidation_bytes_ = max_consolidation_bytes;
}
PieceTable::PieceTable(const PieceTable &other)
: original_(other.original_),
add_(other.add_),
pieces_(other.pieces_),
materialized_(other.materialized_),
dirty_(other.dirty_),
total_size_(other.total_size_) {}
total_size_(other.total_size_)
{
version_ = other.version_;
// caches are per-instance, mark invalid
range_cache_ = {};
find_cache_ = {};
}
PieceTable &
@@ -34,6 +55,9 @@ PieceTable::operator=(const PieceTable &other)
materialized_ = other.materialized_;
dirty_ = other.dirty_;
total_size_ = other.total_size_;
version_ = other.version_;
range_cache_ = {};
find_cache_ = {};
return *this;
}
@@ -48,6 +72,9 @@ PieceTable::PieceTable(PieceTable &&other) noexcept
{
other.dirty_ = true;
other.total_size_ = 0;
version_ = other.version_;
range_cache_ = {};
find_cache_ = {};
}
@@ -64,6 +91,9 @@ PieceTable::operator=(PieceTable &&other) noexcept
total_size_ = other.total_size_;
other.dirty_ = true;
other.total_size_ = 0;
version_ = other.version_;
range_cache_ = {};
find_cache_ = {};
return *this;
}
@@ -79,6 +109,21 @@ PieceTable::Reserve(const std::size_t newCapacity)
}
// Setter to allow tuning consolidation heuristics
void
PieceTable::SetConsolidationParams(const std::size_t piece_limit,
const std::size_t small_piece_threshold,
const std::size_t max_consolidation_bytes)
{
piece_limit_ = piece_limit;
small_piece_threshold_ = small_piece_threshold;
max_consolidation_bytes_ = max_consolidation_bytes;
}
// (removed helper) — we'll invalidate caches inline inside mutating methods
void
PieceTable::AppendChar(char c)
{
@@ -151,6 +196,11 @@ PieceTable::Clear()
materialized_.clear();
total_size_ = 0;
dirty_ = true;
line_index_.clear();
line_index_dirty_ = true;
version_++;
range_cache_ = {};
find_cache_ = {};
}
@@ -171,6 +221,9 @@ PieceTable::addPieceBack(const Source src, const std::size_t start, const std::s
last.len += len;
total_size_ += len;
dirty_ = true;
version_++;
range_cache_ = {};
find_cache_ = {};
return;
}
}
@@ -179,6 +232,10 @@ PieceTable::addPieceBack(const Source src, const std::size_t start, const std::s
pieces_.push_back(Piece{src, start, len});
total_size_ += len;
dirty_ = true;
InvalidateLineIndex();
version_++;
range_cache_ = {};
find_cache_ = {};
}
@@ -197,12 +254,19 @@ PieceTable::addPieceFront(Source src, std::size_t start, std::size_t len)
first.len += len;
total_size_ += len;
dirty_ = true;
version_++;
range_cache_ = {};
find_cache_ = {};
return;
}
}
pieces_.insert(pieces_.begin(), Piece{src, start, len});
total_size_ += len;
dirty_ = true;
InvalidateLineIndex();
version_++;
range_cache_ = {};
find_cache_ = {};
}
@@ -225,3 +289,486 @@ PieceTable::materialize() const
// Ensure there is a null terminator present via std::string invariants
dirty_ = false;
}
// ===== New Phase 1 implementation =====
std::pair<std::size_t, std::size_t>
PieceTable::locate(const std::size_t byte_offset) const
{
if (byte_offset >= total_size_) {
return {pieces_.size(), 0};
}
std::size_t off = byte_offset;
for (std::size_t i = 0; i < pieces_.size(); ++i) {
const auto &p = pieces_[i];
if (off < p.len) {
return {i, off};
}
off -= p.len;
}
// Should not reach here unless inconsistency; return end
return {pieces_.size(), 0};
}
void
PieceTable::coalesceNeighbors(std::size_t index)
{
if (pieces_.empty())
return;
if (index >= pieces_.size())
index = pieces_.size() - 1;
// Merge repeatedly with previous while contiguous and same source
while (index > 0) {
auto &prev = pieces_[index - 1];
auto &curr = pieces_[index];
if (prev.src == curr.src && prev.start + prev.len == curr.start) {
prev.len += curr.len;
pieces_.erase(pieces_.begin() + static_cast<std::ptrdiff_t>(index));
index -= 1;
} else {
break;
}
}
// Merge repeatedly with next while contiguous and same source
while (index + 1 < pieces_.size()) {
auto &curr = pieces_[index];
auto &next = pieces_[index + 1];
if (curr.src == next.src && curr.start + curr.len == next.start) {
curr.len += next.len;
pieces_.erase(pieces_.begin() + static_cast<std::ptrdiff_t>(index + 1));
} else {
break;
}
}
}
void
PieceTable::InvalidateLineIndex() const
{
line_index_dirty_ = true;
}
void
PieceTable::RebuildLineIndex() const
{
if (!line_index_dirty_)
return;
line_index_.clear();
line_index_.push_back(0);
std::size_t pos = 0;
for (const auto &pc: pieces_) {
const std::string &src = pc.src == Source::Original ? original_ : add_;
const char *base = src.data() + static_cast<std::ptrdiff_t>(pc.start);
for (std::size_t j = 0; j < pc.len; ++j) {
if (base[j] == '\n') {
// next line starts after the newline
line_index_.push_back(pos + j + 1);
}
}
pos += pc.len;
}
line_index_dirty_ = false;
}
void
PieceTable::Insert(std::size_t byte_offset, const char *text, std::size_t len)
{
if (len == 0) {
return;
}
if (byte_offset > total_size_) {
byte_offset = total_size_;
}
const std::size_t add_start = add_.size();
add_.append(text, len);
if (pieces_.empty()) {
pieces_.push_back(Piece{Source::Add, add_start, len});
total_size_ += len;
dirty_ = true;
InvalidateLineIndex();
maybeConsolidate();
version_++;
range_cache_ = {};
find_cache_ = {};
return;
}
auto [idx, inner] = locate(byte_offset);
if (idx == pieces_.size()) {
// insert at end
pieces_.push_back(Piece{Source::Add, add_start, len});
total_size_ += len;
dirty_ = true;
InvalidateLineIndex();
coalesceNeighbors(pieces_.size() - 1);
maybeConsolidate();
version_++;
range_cache_ = {};
find_cache_ = {};
return;
}
Piece target = pieces_[idx];
// Build replacement sequence: left, inserted, right
std::vector<Piece> repl;
repl.reserve(3);
if (inner > 0) {
repl.push_back(Piece{target.src, target.start, inner});
}
repl.push_back(Piece{Source::Add, add_start, len});
const std::size_t right_len = target.len - inner;
if (right_len > 0) {
repl.push_back(Piece{target.src, target.start + inner, right_len});
}
// Replace target with repl
pieces_.erase(pieces_.begin() + static_cast<std::ptrdiff_t>(idx));
pieces_.insert(pieces_.begin() + static_cast<std::ptrdiff_t>(idx), repl.begin(), repl.end());
total_size_ += len;
dirty_ = true;
InvalidateLineIndex();
// Try coalescing around the inserted position (the inserted piece is at idx + (inner>0 ? 1 : 0))
std::size_t ins_index = idx + (inner > 0 ? 1 : 0);
coalesceNeighbors(ins_index);
maybeConsolidate();
version_++;
range_cache_ = {};
find_cache_ = {};
}
void
PieceTable::Delete(std::size_t byte_offset, std::size_t len)
{
if (len == 0) {
return;
}
if (byte_offset >= total_size_) {
return;
}
if (byte_offset + len > total_size_) {
len = total_size_ - byte_offset;
}
auto [idx, inner] = locate(byte_offset);
std::size_t remaining = len;
while (remaining > 0 && idx < pieces_.size()) {
Piece &pc = pieces_[idx];
std::size_t available = pc.len - inner; // bytes we can remove from this piece starting at inner
std::size_t take = std::min(available, remaining);
// Compute lengths for left and right remnants
std::size_t left_len = inner;
std::size_t right_len = pc.len - inner - take;
Source src = pc.src;
std::size_t start = pc.start;
// Replace current piece with up to two remnants
if (left_len > 0 && right_len > 0) {
pc.len = left_len; // keep left in place
Piece right{src, start + inner + take, right_len};
pieces_.insert(pieces_.begin() + static_cast<std::ptrdiff_t>(idx + 1), right);
idx += 1; // move to right for next iteration decision
} else if (left_len > 0) {
pc.len = left_len;
// no insertion; idx now points to left; move to next piece
} else if (right_len > 0) {
pc.start = start + inner + take;
pc.len = right_len;
} else {
// entire piece removed
pieces_.erase(pieces_.begin() + static_cast<std::ptrdiff_t>(idx));
// stay at same idx for next piece
inner = 0;
remaining -= take;
continue;
}
// After modifying current idx, next deletion continues at beginning of the next logical region
inner = 0;
remaining -= take;
if (remaining == 0)
break;
// Move to next piece
idx += 1;
}
total_size_ -= len;
dirty_ = true;
InvalidateLineIndex();
if (idx < pieces_.size())
coalesceNeighbors(idx);
if (idx > 0)
coalesceNeighbors(idx - 1);
maybeConsolidate();
version_++;
range_cache_ = {};
find_cache_ = {};
}
// ===== Consolidation implementation =====
void
PieceTable::appendPieceDataTo(std::string &out, const Piece &p) const
{
if (p.len == 0)
return;
const std::string &src = p.src == Source::Original ? original_ : add_;
out.append(src.data() + static_cast<std::ptrdiff_t>(p.start), p.len);
}
void
PieceTable::consolidateRange(std::size_t start_idx, std::size_t end_idx)
{
if (start_idx >= end_idx || start_idx >= pieces_.size())
return;
end_idx = std::min(end_idx, pieces_.size());
std::size_t total = 0;
for (std::size_t i = start_idx; i < end_idx; ++i)
total += pieces_[i].len;
if (total == 0)
return;
const std::size_t add_start = add_.size();
std::string tmp;
tmp.reserve(std::min<std::size_t>(total, max_consolidation_bytes_));
for (std::size_t i = start_idx; i < end_idx; ++i)
appendPieceDataTo(tmp, pieces_[i]);
add_.append(tmp);
// Replace [start_idx, end_idx) with single Add piece
Piece consolidated{Source::Add, add_start, tmp.size()};
pieces_.erase(pieces_.begin() + static_cast<std::ptrdiff_t>(start_idx),
pieces_.begin() + static_cast<std::ptrdiff_t>(end_idx));
pieces_.insert(pieces_.begin() + static_cast<std::ptrdiff_t>(start_idx), consolidated);
// total_size_ unchanged
dirty_ = true;
InvalidateLineIndex();
coalesceNeighbors(start_idx);
// Layout changed; invalidate caches/version
version_++;
range_cache_ = {};
find_cache_ = {};
}
void
PieceTable::maybeConsolidate()
{
if (pieces_.size() <= piece_limit_)
return;
// Find the first run of small pieces to consolidate
std::size_t n = pieces_.size();
std::size_t best_start = n, best_end = n;
std::size_t i = 0;
while (i < n) {
// Skip large pieces quickly
if (pieces_[i].len > small_piece_threshold_) {
i++;
continue;
}
std::size_t j = i;
std::size_t bytes = 0;
while (j < n) {
const auto &p = pieces_[j];
if (p.len > small_piece_threshold_)
break;
if (bytes + p.len > max_consolidation_bytes_)
break;
bytes += p.len;
j++;
}
if (j - i >= 2 && bytes > 0) {
// consolidate runs of at least 2 pieces
best_start = i;
best_end = j;
break; // do one run per call; subsequent ops can repeat if still over limit
}
i = j + 1;
}
if (best_start < best_end) {
consolidateRange(best_start, best_end);
}
}
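An example of how the heuristic above kicks in, using the `SetConsolidationParams` setter added earlier in this file; the concrete numbers are illustrative only.

#include "PieceTable.h"

// Illustrative tuning of the consolidation heuristic.
static void DemoConsolidation()
{
        PieceTable pt;
        pt.SetConsolidationParams(/*piece_limit=*/8,
                                  /*small_piece_threshold=*/16,
                                  /*max_consolidation_bytes=*/4096);
        for (int i = 0; i < 32; ++i) {
                const char c = 'a';
                pt.Insert(0, &c, 1);   // prepends pile up one-byte Add pieces
        }
        // Once the table holds more than piece_limit pieces, maybeConsolidate()
        // folds the first run of adjacent small pieces into a single Add piece,
        // so the piece count stays bounded instead of growing per keystroke.
}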
std::size_t
PieceTable::LineCount() const
{
RebuildLineIndex();
return line_index_.empty() ? 0 : line_index_.size();
}
std::pair<std::size_t, std::size_t>
PieceTable::GetLineRange(std::size_t line_num) const
{
RebuildLineIndex();
if (line_index_.empty())
return {0, 0};
if (line_num >= line_index_.size())
return {0, 0};
std::size_t start = line_index_[line_num];
std::size_t end = (line_num + 1 < line_index_.size()) ? line_index_[line_num + 1] : total_size_;
return {start, end};
}
std::string
PieceTable::GetLine(std::size_t line_num) const
{
auto [start, end] = GetLineRange(line_num);
if (end < start)
return std::string();
// Trim trailing '\n'
if (end > start) {
// To check last char, we can get it via GetRange of len 1 at end-1 without materializing whole
std::string last = GetRange(end - 1, 1);
if (!last.empty() && last[0] == '\n') {
end -= 1;
}
}
return GetRange(start, end - start);
}
std::pair<std::size_t, std::size_t>
PieceTable::ByteOffsetToLineCol(std::size_t byte_offset) const
{
if (byte_offset > total_size_)
byte_offset = total_size_;
RebuildLineIndex();
if (line_index_.empty())
return {0, 0};
auto it = std::upper_bound(line_index_.begin(), line_index_.end(), byte_offset);
std::size_t row = (it == line_index_.begin()) ? 0 : static_cast<std::size_t>((it - line_index_.begin()) - 1);
std::size_t col = byte_offset - line_index_[row];
return {row, col};
}
std::size_t
PieceTable::LineColToByteOffset(std::size_t row, std::size_t col) const
{
RebuildLineIndex();
if (line_index_.empty())
return 0;
if (row >= line_index_.size())
return total_size_;
std::size_t start = line_index_[row];
std::size_t end = (row + 1 < line_index_.size()) ? line_index_[row + 1] : total_size_;
// Clamp col to line length excluding trailing newline
if (end > start) {
std::string last = GetRange(end - 1, 1);
if (!last.empty() && last[0] == '\n') {
end -= 1;
}
}
std::size_t target = start + std::min(col, end - start);
return target;
}
std::string
PieceTable::GetRange(std::size_t byte_offset, std::size_t len) const
{
if (byte_offset >= total_size_ || len == 0)
return std::string();
if (byte_offset + len > total_size_)
len = total_size_ - byte_offset;
// Fast path: return cached value if version/offset/len match
if (range_cache_.valid && range_cache_.version == version_ &&
range_cache_.off == byte_offset && range_cache_.len == len) {
return range_cache_.data;
}
std::string out;
out.reserve(len);
if (!dirty_) {
// Already materialized; slice directly
out.assign(materialized_.data() + static_cast<std::ptrdiff_t>(byte_offset), len);
} else {
// Assemble substring directly from pieces without full materialization
auto [idx, inner] = locate(byte_offset);
std::size_t remaining = len;
while (remaining > 0 && idx < pieces_.size()) {
const auto &p = pieces_[idx];
const std::string &src = (p.src == Source::Original) ? original_ : add_;
std::size_t take = std::min<std::size_t>(p.len - inner, remaining);
if (take > 0) {
const char *base = src.data() + static_cast<std::ptrdiff_t>(p.start + inner);
out.append(base, take);
remaining -= take;
inner = 0;
idx += 1;
} else {
break;
}
}
}
// Update cache
range_cache_.valid = true;
range_cache_.version = version_;
range_cache_.off = byte_offset;
range_cache_.len = len;
range_cache_.data = out;
return out;
}
std::size_t
PieceTable::Find(const std::string &needle, std::size_t start) const
{
if (needle.empty())
return start <= total_size_ ? start : std::numeric_limits<std::size_t>::max();
if (start > total_size_)
return std::numeric_limits<std::size_t>::max();
if (find_cache_.valid &&
find_cache_.version == version_ &&
find_cache_.needle == needle &&
find_cache_.start == start) {
return find_cache_.result;
}
materialize();
auto pos = materialized_.find(needle, start);
if (pos == std::string::npos)
pos = std::numeric_limits<std::size_t>::max();
// Update cache
find_cache_.valid = true;
find_cache_.version = version_;
find_cache_.needle = needle;
find_cache_.start = start;
find_cache_.result = pos;
return pos;
}
void
PieceTable::WriteToStream(std::ostream &out) const
{
// Stream the content piece-by-piece without forcing full materialization
for (const auto &p : pieces_) {
if (p.len == 0)
continue;
const std::string &src = (p.src == Source::Original) ? original_ : add_;
const char *base = src.data() + static_cast<std::ptrdiff_t>(p.start);
out.write(base, static_cast<std::streamsize>(p.len));
}
}

View File

@@ -3,8 +3,11 @@
*/
#pragma once
#include <cstddef>
#include <cstdint>
#include <string>
#include <ostream>
#include <vector>
#include <limits>
class PieceTable {
@@ -13,6 +16,12 @@ public:
explicit PieceTable(std::size_t initialCapacity);
// Advanced constructor allowing configuration of consolidation heuristics
PieceTable(std::size_t initialCapacity,
std::size_t piece_limit,
std::size_t small_piece_threshold,
std::size_t max_consolidation_bytes);
PieceTable(const PieceTable &other);
PieceTable &operator=(const PieceTable &other);
@@ -68,6 +77,38 @@ public:
return materialized_.capacity();
}
// ===== New buffer-wide API (Phase 1) =====
// Byte-based editing operations
void Insert(std::size_t byte_offset, const char *text, std::size_t len);
void Delete(std::size_t byte_offset, std::size_t len);
// Line-based queries
[[nodiscard]] std::size_t LineCount() const; // number of logical lines
[[nodiscard]] std::string GetLine(std::size_t line_num) const;
[[nodiscard]] std::pair<std::size_t, std::size_t> GetLineRange(std::size_t line_num) const; // [start,end)
// Position conversion
[[nodiscard]] std::pair<std::size_t, std::size_t> ByteOffsetToLineCol(std::size_t byte_offset) const;
[[nodiscard]] std::size_t LineColToByteOffset(std::size_t row, std::size_t col) const;
// Substring extraction
[[nodiscard]] std::string GetRange(std::size_t byte_offset, std::size_t len) const;
// Simple search utility; returns byte offset or npos
[[nodiscard]] std::size_t Find(const std::string &needle, std::size_t start = 0) const;
// Stream out content without materializing the entire buffer
void WriteToStream(std::ostream &out) const;
// Heuristic configuration
void SetConsolidationParams(std::size_t piece_limit,
std::size_t small_piece_threshold,
std::size_t max_consolidation_bytes);
private:
enum class Source : unsigned char { Original, Add };
@@ -83,12 +124,61 @@ private:
void materialize() const;
// Helper: locate piece index and inner offset for a global byte offset
[[nodiscard]] std::pair<std::size_t, std::size_t> locate(std::size_t byte_offset) const;
// Helper: try to coalesce neighboring pieces around index
void coalesceNeighbors(std::size_t index);
// Consolidation helpers and heuristics
void maybeConsolidate();
void consolidateRange(std::size_t start_idx, std::size_t end_idx);
void appendPieceDataTo(std::string &out, const Piece &p) const;
// Line index support (rebuilt lazily on demand)
void InvalidateLineIndex() const;
void RebuildLineIndex() const;
// Underlying storages
std::string original_; // unused for builder use-case, but kept for API symmetry
std::string add_;
std::vector<Piece> pieces_;
mutable std::string materialized_;
mutable bool dirty_ = true;
std::size_t total_size_ = 0;
};
mutable bool dirty_ = true;
// Monotonic content version. Increment on any mutation that affects content layout
mutable std::uint64_t version_ = 0;
std::size_t total_size_ = 0;
// Cached line index: starting byte offset of each line (always contains at least 1 entry: 0)
mutable std::vector<std::size_t> line_index_;
mutable bool line_index_dirty_ = true;
// Heuristic knobs
std::size_t piece_limit_ = 4096; // trigger consolidation when exceeded
std::size_t small_piece_threshold_ = 64; // bytes
std::size_t max_consolidation_bytes_ = 4096; // cap per consolidation run
// Lightweight caches to avoid redundant work when callers query the same range repeatedly
struct RangeCache {
bool valid = false;
std::uint64_t version = 0;
std::size_t off = 0;
std::size_t len = 0;
std::string data;
};
struct FindCache {
bool valid = false;
std::uint64_t version = 0;
std::string needle;
std::size_t start = 0;
std::size_t result = std::numeric_limits<std::size_t>::max();
};
mutable RangeCache range_cache_;
mutable FindCache find_cache_;
};

View File

@@ -142,9 +142,11 @@ protected:
p.save();
p.setClipRect(viewport);
// Iterate visible lines
for (std::size_t i = rowoffs, vis_idx = 0; i < last_row; ++i, ++vis_idx) {
const auto &line = static_cast<const std::string &>(lines[i]);
// Iterate visible lines
for (std::size_t i = rowoffs, vis_idx = 0; i < last_row; ++i, ++vis_idx) {
// Materialize the Buffer::Line into a std::string for
// regex/iterator usage and general string ops.
const std::string line = static_cast<std::string>(lines[i]);
const int y = viewport.y() + static_cast<int>(vis_idx) * line_h;
const int baseline = y + fm.ascent();

2502
REWRITE.md Normal file

File diff suppressed because it is too large

View File

@@ -42,13 +42,15 @@ TerminalFrontend::Init(Editor &ed)
meta(stdscr, TRUE);
// Make ESC key sequences resolve quickly so ESC+<key> works as meta
#ifdef set_escdelay
set_escdelay(50);
set_escdelay(TerminalFrontend::kEscDelayMs);
#endif
nodelay(stdscr, TRUE);
// Make getch() block briefly instead of busy-looping; reduces CPU when idle
// Equivalent to nodelay(FALSE) with a small timeout.
timeout(16); // ~16ms (about 60Hz)
curs_set(1);
// Enable mouse support if available
mouseinterval(0);
mousemask(ALL_MOUSE_EVENTS, nullptr);
mousemask(ALL_MOUSE_EVENTS | REPORT_MOUSE_POSITION, nullptr);
int r = 0, c = 0;
getmaxyx(stdscr, r, c);
@@ -57,6 +59,20 @@ TerminalFrontend::Init(Editor &ed)
ed.SetDimensions(static_cast<std::size_t>(r), static_cast<std::size_t>(c));
// Attach editor to input handler for editor-owned features (e.g., universal argument)
input_.Attach(&ed);
// Ignore SIGINT (Ctrl-C) so it doesn't terminate the TUI.
// We'll restore the previous handler on Shutdown().
{
struct sigaction sa{};
sa.sa_handler = SIG_IGN;
sigemptyset(&sa.sa_mask);
sa.sa_flags = 0;
struct sigaction old{};
if (sigaction(SIGINT, &sa, &old) == 0) {
old_sigint_ = old;
have_old_sigint_ = true;
}
}
return true;
}
@@ -80,9 +96,6 @@ TerminalFrontend::Step(Editor &ed, bool &running)
if (mi.hasCommand) {
Execute(ed, mi.id, mi.arg, mi.count);
}
} else {
// Avoid busy loop
usleep(1000);
}
if (ed.QuitRequested()) {
@@ -101,5 +114,10 @@ TerminalFrontend::Shutdown()
(void) tcsetattr(STDIN_FILENO, TCSANOW, &orig_tio_);
have_orig_tio_ = false;
}
// Restore previous SIGINT handler
if (have_old_sigint_) {
(void) sigaction(SIGINT, &old_sigint_, nullptr);
have_old_sigint_ = false;
}
endwin();
}

View File

@@ -3,6 +3,7 @@
*/
#pragma once
#include <termios.h>
#include <signal.h>
#include "Frontend.h"
#include "TerminalInputHandler.h"
@@ -15,6 +16,11 @@ public:
~TerminalFrontend() override = default;
// Configurable ESC key delay (ms) for ncurses' set_escdelay().
// Controls how long ncurses waits to distinguish ESC vs. meta sequences.
// Adjust if your terminal needs a different threshold.
static constexpr int kEscDelayMs = 50;
bool Init(Editor &ed) override;
void Step(Editor &ed, bool &running) override;
@@ -29,4 +35,7 @@ private:
// Saved terminal attributes to restore on shutdown
bool have_orig_tio_ = false;
struct termios orig_tio_{};
// Saved SIGINT handler to restore on shutdown
bool have_old_sigint_ = false;
struct sigaction old_sigint_{};
};

View File

@@ -29,89 +29,95 @@ map_key_to_command(const int ch,
// Handle special keys from ncurses
// These keys exit k-prefix mode if active (user pressed C-k then a special key).
switch (ch) {
case KEY_MOUSE: {
k_prefix = false;
k_ctrl_pending = false;
MEVENT ev{};
if (getmouse(&ev) == OK) {
// Mouse wheel → scroll viewport without moving cursor
case KEY_ENTER:
// Some terminals send KEY_ENTER distinct from '\n'/'\r'
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::Newline, "", 0};
return true;
case KEY_MOUSE: {
k_prefix = false;
k_ctrl_pending = false;
MEVENT ev{};
if (getmouse(&ev) == OK) {
// Mouse wheel → scroll viewport without moving cursor
#ifdef BUTTON4_PRESSED
if (ev.bstate & (BUTTON4_PRESSED | BUTTON4_RELEASED | BUTTON4_CLICKED)) {
out = {true, CommandId::ScrollUp, "", 0};
return true;
}
if (ev.bstate & (BUTTON4_PRESSED | BUTTON4_RELEASED | BUTTON4_CLICKED)) {
out = {true, CommandId::ScrollUp, "", 0};
return true;
}
#endif
#ifdef BUTTON5_PRESSED
if (ev.bstate & (BUTTON5_PRESSED | BUTTON5_RELEASED | BUTTON5_CLICKED)) {
out = {true, CommandId::ScrollDown, "", 0};
return true;
}
#endif
// React to left button click/press
if (ev.bstate & (BUTTON1_CLICKED | BUTTON1_PRESSED | BUTTON1_RELEASED)) {
char buf[64];
// Use screen coordinates; command handler will translate via offsets
std::snprintf(buf, sizeof(buf), "@%d:%d", ev.y, ev.x);
out = {true, CommandId::MoveCursorTo, std::string(buf), 0};
return true;
}
if (ev.bstate & (BUTTON5_PRESSED | BUTTON5_RELEASED | BUTTON5_CLICKED)) {
out = {true, CommandId::ScrollDown, "", 0};
return true;
}
#endif
// React to left button click/press
if (ev.bstate & (BUTTON1_CLICKED | BUTTON1_PRESSED | BUTTON1_RELEASED)) {
char buf[64];
// Use screen coordinates; command handler will translate via offsets
std::snprintf(buf, sizeof(buf), "@%d:%d", ev.y, ev.x);
out = {true, CommandId::MoveCursorTo, std::string(buf), 0};
return true;
}
// No actionable mouse event
out.hasCommand = false;
return true;
}
case KEY_LEFT:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveLeft, "", 0};
return true;
case KEY_RIGHT:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveRight, "", 0};
return true;
case KEY_UP:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveUp, "", 0};
return true;
case KEY_DOWN:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveDown, "", 0};
return true;
case KEY_HOME:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveHome, "", 0};
return true;
case KEY_END:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveEnd, "", 0};
return true;
case KEY_PPAGE:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::PageUp, "", 0};
return true;
case KEY_NPAGE:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::PageDown, "", 0};
return true;
case KEY_DC:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::DeleteChar, "", 0};
return true;
case KEY_RESIZE:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::Refresh, "", 0};
return true;
default:
break;
// No actionable mouse event
out.hasCommand = false;
return true;
}
case KEY_LEFT:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveLeft, "", 0};
return true;
case KEY_RIGHT:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveRight, "", 0};
return true;
case KEY_UP:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveUp, "", 0};
return true;
case KEY_DOWN:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveDown, "", 0};
return true;
case KEY_HOME:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveHome, "", 0};
return true;
case KEY_END:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::MoveEnd, "", 0};
return true;
case KEY_PPAGE:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::PageUp, "", 0};
return true;
case KEY_NPAGE:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::PageDown, "", 0};
return true;
case KEY_DC:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::DeleteChar, "", 0};
return true;
case KEY_RESIZE:
k_prefix = false;
k_ctrl_pending = false;
out = {true, CommandId::Refresh, "", 0};
return true;
default:
break;
}
// ESC as cancel of prefix; many terminals send meta sequences as ESC+...
@@ -172,14 +178,15 @@ map_key_to_command(const int ch,
ctrl = true;
ascii_key = 'a' + (ch - 1);
}
// If user typed literal 'C'/'c' or '^' as a qualifier, keep k-prefix and set pending
if (ascii_key == 'C' || ascii_key == 'c' || ascii_key == '^') {
k_ctrl_pending = true;
if (ed)
ed->SetStatus("C-k C _");
out.hasCommand = false;
return true;
}
// If user typed literal 'C' or '^' as a qualifier, keep k-prefix and set pending
// Note: Do NOT treat lowercase 'c' as a qualifier, since 'c' is a valid C-k command (BufferClose).
if (ascii_key == 'C' || ascii_key == '^') {
k_ctrl_pending = true;
if (ed)
ed->SetStatus("C-k C _");
out.hasCommand = false;
return true;
}
// For actual suffix, consume the k-prefix
k_prefix = false;
// Do NOT lowercase here; KLookupKCommand handles case-sensitive bindings

View File

@@ -1,206 +0,0 @@
/*
* BufferBench.cc - microbenchmarks for GapBuffer and PieceTable
*
* This benchmark exercises the public APIs shared by both structures as used
* in Buffer::Line: Reserve, AppendChar, Append, PrependChar, Prepend, Clear.
*
* Run examples:
* ./kte_bench_buffer # defaults
* ./kte_bench_buffer 200000 8 4096 # N=200k, rounds=8, chunk=4096
*/
#include <chrono>
#include <cstdint>
#include <cstring>
#include <iomanip>
#include <iostream>
#include <random>
#include <string>
#include <vector>
#include <typeinfo>
#include "GapBuffer.h"
#include "PieceTable.h"
using clock_t = std::chrono::steady_clock;
using us = std::chrono::microseconds;
struct Result {
std::string name;
std::string scenario;
double micros = 0.0;
std::size_t bytes = 0;
};
static void
print_header()
{
std::cout << std::left << std::setw(14) << "Structure"
<< std::left << std::setw(18) << "Scenario"
<< std::right << std::setw(12) << "time(us)"
<< std::right << std::setw(14) << "bytes"
<< std::right << std::setw(14) << "MB/s"
<< "\n";
std::cout << std::string(72, '-') << "\n";
}
static void
print_row(const Result &r)
{
double mb = r.bytes / (1024.0 * 1024.0);
double mbps = (r.micros > 0.0) ? (mb / (r.micros / 1'000'000.0)) : 0.0;
std::cout << std::left << std::setw(14) << r.name
<< std::left << std::setw(18) << r.scenario
<< std::right << std::setw(12) << std::fixed << std::setprecision(2) << r.micros
<< std::right << std::setw(14) << r.bytes
<< std::right << std::setw(14) << std::fixed << std::setprecision(2) << mbps
<< "\n";
}
template<typename Buf>
Result
bench_sequential_append(std::size_t N, int rounds)
{
Result r;
r.name = typeid(Buf).name();
r.scenario = "seq_append";
const char c = 'x';
auto start = clock_t::now();
std::size_t bytes = 0;
for (int t = 0; t < rounds; ++t) {
Buf b;
b.Reserve(N);
for (std::size_t i = 0; i < N; ++i) {
b.AppendChar(c);
}
bytes += N;
}
auto end = clock_t::now();
r.micros = std::chrono::duration_cast<us>(end - start).count();
r.bytes = bytes;
return r;
}
template<typename Buf>
Result
bench_sequential_prepend(std::size_t N, int rounds)
{
Result r;
r.name = typeid(Buf).name();
r.scenario = "seq_prepend";
const char c = 'x';
auto start = clock_t::now();
std::size_t bytes = 0;
for (int t = 0; t < rounds; ++t) {
Buf b;
b.Reserve(N);
for (std::size_t i = 0; i < N; ++i) {
b.PrependChar(c);
}
bytes += N;
}
auto end = clock_t::now();
r.micros = std::chrono::duration_cast<us>(end - start).count();
r.bytes = bytes;
return r;
}
template<typename Buf>
Result
bench_chunk_append(std::size_t N, std::size_t chunk, int rounds)
{
Result r;
r.name = typeid(Buf).name();
r.scenario = "chunk_append";
std::string payload(chunk, 'y');
auto start = clock_t::now();
std::size_t bytes = 0;
for (int t = 0; t < rounds; ++t) {
Buf b;
b.Reserve(N);
std::size_t written = 0;
while (written < N) {
std::size_t now = std::min(chunk, N - written);
b.Append(payload.data(), now);
written += now;
}
bytes += N;
}
auto end = clock_t::now();
r.micros = std::chrono::duration_cast<us>(end - start).count();
r.bytes = bytes;
return r;
}
template<typename Buf>
Result
bench_mixed(std::size_t N, std::size_t chunk, int rounds)
{
Result r;
r.name = typeid(Buf).name();
r.scenario = "mixed";
std::string payload(chunk, 'z');
auto start = clock_t::now();
std::size_t bytes = 0;
for (int t = 0; t < rounds; ++t) {
Buf b;
b.Reserve(N);
std::size_t written = 0;
while (written < N) {
// alternate append/prepend with small chunks
std::size_t now = std::min(chunk, N - written);
if ((written / chunk) % 2 == 0) {
b.Append(payload.data(), now);
} else {
b.Prepend(payload.data(), now);
}
written += now;
}
bytes += N;
}
auto end = clock_t::now();
r.micros = std::chrono::duration_cast<us>(end - start).count();
r.bytes = bytes;
return r;
}
int
main(int argc, char **argv)
{
// Parameters
std::size_t N = 100'000; // bytes per round
int rounds = 5; // iterations
std::size_t chunk = 1024; // chunk size for chunked scenarios
if (argc >= 2)
N = static_cast<std::size_t>(std::stoull(argv[1]));
if (argc >= 3)
rounds = std::stoi(argv[2]);
if (argc >= 4)
chunk = static_cast<std::size_t>(std::stoull(argv[3]));
std::cout << "KTE Buffer Microbenchmarks" << "\n";
std::cout << "N=" << N << ", rounds=" << rounds << ", chunk=" << chunk << "\n\n";
print_header();
// Run for GapBuffer
print_row(bench_sequential_append<GapBuffer>(N, rounds));
print_row(bench_sequential_prepend<GapBuffer>(N, rounds));
print_row(bench_chunk_append<GapBuffer>(N, chunk, rounds));
print_row(bench_mixed<GapBuffer>(N, chunk, rounds));
// Run for PieceTable
print_row(bench_sequential_append<PieceTable>(N, rounds));
print_row(bench_sequential_prepend<PieceTable>(N, rounds));
print_row(bench_chunk_append<PieceTable>(N, chunk, rounds));
print_row(bench_mixed<PieceTable>(N, chunk, rounds));
return 0;
}

View File

@@ -1,318 +0,0 @@
/*
* PerformanceSuite.cc - broader performance and verification benchmarks
*/
#include <algorithm>
#include <cassert>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <iomanip>
#include <iostream>
#include <random>
#include <string>
#include <typeinfo>
#include <vector>
#include "GapBuffer.h"
#include "PieceTable.h"
#include "OptimizedSearch.h"
using clock_t = std::chrono::steady_clock;
using us = std::chrono::microseconds;
namespace {
struct Stat {
double micros{0.0};
std::size_t bytes{0};
std::size_t ops{0};
};
static void
print_header(const std::string &title)
{
std::cout << "\n" << title << "\n";
std::cout << std::left << std::setw(18) << "Case"
<< std::left << std::setw(18) << "Type"
<< std::right << std::setw(12) << "time(us)"
<< std::right << std::setw(14) << "bytes"
<< std::right << std::setw(14) << "ops/s"
<< std::right << std::setw(14) << "MB/s"
<< "\n";
std::cout << std::string(90, '-') << "\n";
}
static void
print_row(const std::string &caseName, const std::string &typeName, const Stat &s)
{
double mb = s.bytes / (1024.0 * 1024.0);
double sec = s.micros / 1'000'000.0;
double mbps = sec > 0 ? (mb / sec) : 0.0;
double opss = sec > 0 ? (static_cast<double>(s.ops) / sec) : 0.0;
std::cout << std::left << std::setw(18) << caseName
<< std::left << std::setw(18) << typeName
<< std::right << std::setw(12) << std::fixed << std::setprecision(2) << s.micros
<< std::right << std::setw(14) << s.bytes
<< std::right << std::setw(14) << std::fixed << std::setprecision(2) << opss
<< std::right << std::setw(14) << std::fixed << std::setprecision(2) << mbps
<< "\n";
}
} // namespace
class PerformanceSuite {
public:
void benchmarkBufferOperations(std::size_t N, int rounds, std::size_t chunk)
{
print_header("Buffer Operations");
run_buffer_case<GapBuffer>("append_char", N, rounds, chunk, [&](auto &b, std::size_t count) {
for (std::size_t i = 0; i < count; ++i)
b.AppendChar('a');
});
run_buffer_case<GapBuffer>("prepend_char", N, rounds, chunk, [&](auto &b, std::size_t count) {
for (std::size_t i = 0; i < count; ++i)
b.PrependChar('a');
});
run_buffer_case<GapBuffer>("chunk_mix", N, rounds, chunk, [&](auto &b, std::size_t) {
std::string payload(chunk, 'x');
std::size_t written = 0;
while (written < N) {
std::size_t now = std::min(chunk, N - written);
if (((written / chunk) & 1) == 0)
b.Append(payload.data(), now);
else
b.Prepend(payload.data(), now);
written += now;
}
});
run_buffer_case<PieceTable>("append_char", N, rounds, chunk, [&](auto &b, std::size_t count) {
for (std::size_t i = 0; i < count; ++i)
b.AppendChar('a');
});
run_buffer_case<PieceTable>("prepend_char", N, rounds, chunk, [&](auto &b, std::size_t count) {
for (std::size_t i = 0; i < count; ++i)
b.PrependChar('a');
});
run_buffer_case<PieceTable>("chunk_mix", N, rounds, chunk, [&](auto &b, std::size_t) {
std::string payload(chunk, 'x');
std::size_t written = 0;
while (written < N) {
std::size_t now = std::min(chunk, N - written);
if (((written / chunk) & 1) == 0)
b.Append(payload.data(), now);
else
b.Prepend(payload.data(), now);
written += now;
}
});
}
void benchmarkSearchOperations(std::size_t textLen, std::size_t patLen, int rounds)
{
print_header("Search Operations");
std::mt19937_64 rng(0xC0FFEE);
std::uniform_int_distribution<int> dist('a', 'z');
std::string text(textLen, '\0');
for (auto &ch: text)
ch = static_cast<char>(dist(rng));
std::string pattern(patLen, '\0');
for (auto &ch: pattern)
ch = static_cast<char>(dist(rng));
// Ensure at least one hit
if (textLen >= patLen && patLen > 0) {
std::size_t pos = textLen / 2;
std::memcpy(&text[pos], pattern.data(), patLen);
}
// OptimizedSearch find_all vs std::string reference
OptimizedSearch os;
Stat s{};
auto start = clock_t::now();
std::size_t matches = 0;
std::size_t bytesScanned = 0;
for (int r = 0; r < rounds; ++r) {
auto hits = os.find_all(text, pattern, 0);
matches += hits.size();
bytesScanned += text.size();
// Verify with reference
std::vector<std::size_t> ref;
std::size_t from = 0;
while (true) {
auto p = text.find(pattern, from);
if (p == std::string::npos)
break;
ref.push_back(p);
from = p + (patLen ? patLen : 1);
}
assert(ref == hits);
}
auto end = clock_t::now();
s.micros = std::chrono::duration_cast<us>(end - start).count();
s.bytes = bytesScanned;
s.ops = matches;
print_row("find_all", "OptimizedSearch", s);
}
void benchmarkMemoryAllocation(std::size_t N, int rounds)
{
print_header("Memory Allocation (allocations during editing)");
// Measure number of allocations by simulating editing patterns.
auto run_session = [&](auto &&buffer) {
// alternate small appends and prepends
const std::size_t chunk = 32;
std::string payload(chunk, 'q');
for (int r = 0; r < rounds; ++r) {
buffer.Clear();
for (std::size_t i = 0; i < N; i += chunk)
buffer.Append(payload.data(), std::min(chunk, N - i));
for (std::size_t i = 0; i < N / 2; i += chunk)
buffer.Prepend(payload.data(), std::min(chunk, N / 2 - i));
}
};
// Local allocation counters for this TU via overriding operators
reset_alloc_counters();
GapBuffer gb;
run_session(gb);
auto gap_allocs = current_allocs();
print_row("edit_session", "GapBuffer", Stat{
0.0, static_cast<std::size_t>(gap_allocs.bytes),
static_cast<std::size_t>(gap_allocs.count)
});
reset_alloc_counters();
PieceTable pt;
run_session(pt);
auto pt_allocs = current_allocs();
print_row("edit_session", "PieceTable", Stat{
0.0, static_cast<std::size_t>(pt_allocs.bytes),
static_cast<std::size_t>(pt_allocs.count)
});
}
private:
template<typename Buf, typename Fn>
void run_buffer_case(const std::string &caseName, std::size_t N, int rounds, std::size_t chunk, Fn fn)
{
Stat s{};
auto start = clock_t::now();
std::size_t bytes = 0;
std::size_t ops = 0;
for (int t = 0; t < rounds; ++t) {
Buf b;
b.Reserve(N);
fn(b, N);
// compare to reference string where possible (only for append_char/prepend_char)
bytes += N;
ops += N / (chunk ? chunk : 1);
}
auto end = clock_t::now();
s.micros = std::chrono::duration_cast<us>(end - start).count();
s.bytes = bytes;
s.ops = ops;
print_row(caseName, typeid(Buf).name(), s);
}
// Simple global allocation tracking for this TU
struct AllocStats {
std::uint64_t count{0};
std::uint64_t bytes{0};
};
static AllocStats &alloc_stats()
{
static AllocStats s;
return s;
}
static void reset_alloc_counters()
{
alloc_stats() = {};
}
static AllocStats current_allocs()
{
return alloc_stats();
}
// Friend global new/delete defined below
friend void *operator new(std::size_t sz) noexcept(false);
friend void operator delete(void *p) noexcept;
friend void *operator new[](std::size_t sz) noexcept(false);
friend void operator delete[](void *p) noexcept;
};
// Override new/delete only in this translation unit to track allocations made here
void *
operator new(std::size_t sz) noexcept(false)
{
auto &s = PerformanceSuite::alloc_stats();
s.count++;
s.bytes += sz;
if (void *p = std::malloc(sz))
return p;
throw std::bad_alloc();
}
void
operator delete(void *p) noexcept
{
std::free(p);
}
void *
operator new[](std::size_t sz) noexcept(false)
{
auto &s = PerformanceSuite::alloc_stats();
s.count++;
s.bytes += sz;
if (void *p = std::malloc(sz))
return p;
throw std::bad_alloc();
}
void
operator delete[](void *p) noexcept
{
std::free(p);
}
int
main(int argc, char **argv)
{
std::size_t N = 200'000; // bytes per round for buffer cases
int rounds = 3;
std::size_t chunk = 1024;
if (argc >= 2)
N = static_cast<std::size_t>(std::stoull(argv[1]));
if (argc >= 3)
rounds = std::stoi(argv[2]);
if (argc >= 4)
chunk = static_cast<std::size_t>(std::stoull(argv[3]));
std::cout << "KTE Performance Suite" << "\n";
std::cout << "N=" << N << ", rounds=" << rounds << ", chunk=" << chunk << "\n";
PerformanceSuite suite;
suite.benchmarkBufferOperations(N, rounds, chunk);
suite.benchmarkSearchOperations(1'000'000, 16, rounds);
suite.benchmarkMemoryAllocation(N, rounds);
return 0;
}

View File

@@ -1,13 +1,15 @@
{
lib,
pkgs ? import <nixpkgs> {},
lib ? pkgs.lib,
stdenv,
cmake,
ncurses,
SDL2,
libGL,
xorg,
kdePackages,
qt6Packages ? kdePackages.qt6Packages,
installShellFiles,
graphical ? false,
graphical-qt ? false,
...
@@ -37,12 +39,13 @@ stdenv.mkDerivation {
xorg.libX11
]
++ lib.optionals graphical-qt [
qt5Full
qtcreator ## not sure if this is actually needed
kdePackages.qt6ct
qt6Packages.qtbase
qt6Packages.wrapQtAppsHook
];
cmakeFlags = [
"-DBUILD_GUI=${if graphical or graphical-qt then "ON" else "OFF"}"
"-DBUILD_GUI=${if graphical then "ON" else "OFF"}"
"-DKTE_USE_QT=${if graphical-qt then "ON" else "OFF"}"
"-DCMAKE_BUILD_TYPE=Debug"
];
@@ -52,17 +55,23 @@ stdenv.mkDerivation {
mkdir -p $out/bin
cp kte $out/bin/
installManPage ../docs/kte.1
''
+ lib.optionalString graphical ''
cp kge $out/bin/
installManPage ../docs/kge.1
mkdir -p $out/share/icons
cp ../kge.png $out/share/icons/
''
+ ''
${lib.optionalString graphical ''
mkdir -p $out/bin
${if graphical-qt then ''
cp kge $out/bin/kge-qt
'' else ''
cp kge $out/bin/kge
''}
installManPage ../docs/kge.1
mkdir -p $out/share/icons/hicolor/256x256/apps
cp ../kge.png $out/share/icons/hicolor/256x256/apps/kge.png
''}
runHook postInstall
'';
}

View File

@@ -0,0 +1,601 @@
# PieceTable Migration Plan
## Executive Summary
This document outlines the plan to remove GapBuffer support from kte and
migrate to using a **single PieceTable per Buffer**, rather than the
current vector-of-Lines architecture where each Line contains either a
GapBuffer or PieceTable.
## Current Architecture Analysis
### Text Storage
**Current Implementation:**
- `Buffer` contains `std::vector<Line> rows_`
- Each `Line` wraps an `AppendBuffer` (type alias)
- `AppendBuffer` is either `GapBuffer` (default) or `PieceTable` (via
`KTE_USE_PIECE_TABLE`)
- Each line is independently managed with its own buffer
- Operations are line-based with coordinate pairs (row, col)
**Key Files:**
- `Buffer.h/cc` - Buffer class with vector of Lines
- `AppendBuffer.h` - Type selector (GapBuffer vs PieceTable)
- `GapBuffer.h/cc` - Per-line gap buffer implementation
- `PieceTable.h/cc` - Per-line piece table implementation
- `UndoSystem.h/cc` - Records operations with (row, col, text)
- `UndoNode.h` - Undo operation types (Insert, Delete, Paste, Newline,
DeleteRow)
- `Command.cc` - High-level editing commands
### Current Buffer API
**Low-level editing operations (used by UndoSystem):**
```cpp
void insert_text(int row, int col, std::string_view text);
void delete_text(int row, int col, std::size_t len);
void split_line(int row, int col);
void join_lines(int row);
void insert_row(int row, std::string_view text);
void delete_row(int row);
```
**Line access:**
```cpp
std::vector<Line> &Rows();
const std::vector<Line> &Rows() const;
```
**Line API (Buffer::Line):**
```cpp
std::size_t size() const;
const char *Data() const;
char operator[](std::size_t i) const;
std::string substr(std::size_t pos, std::size_t len) const;
std::size_t find(const std::string &needle, std::size_t pos) const;
void erase(std::size_t pos, std::size_t len);
void insert(std::size_t pos, const std::string &seg);
Line &operator+=(const Line &other);
Line &operator+=(const std::string &s);
```
### Current PieceTable Limitations
The existing `PieceTable` class only supports:
- `Append(char/string)` - add to end
- `Prepend(char/string)` - add to beginning
- `Clear()` - empty the buffer
- `Data()` / `Size()` - access content (materializes on demand)
**Missing capabilities needed for buffer-wide storage:**
- Insert at arbitrary byte position
- Delete at arbitrary byte position
- Line indexing and line-based queries
- Position conversion (byte offset ↔ line/col)
- Efficient line boundary tracking
## Target Architecture
### Design Overview
**Single PieceTable per Buffer:**
- `Buffer` contains one `PieceTable content_` (replaces
`std::vector<Line> rows_`)
- Text stored as continuous byte sequence with `\n` as line separators
- Line index cached for efficient line-based operations
- All operations work on byte offsets internally
- Buffer provides line/column API as convenience layer
### Enhanced PieceTable Design
```cpp
class PieceTable {
public:
// Existing API (keep for compatibility if needed)
void Append(const char *s, std::size_t len);
void Prepend(const char *s, std::size_t len);
void Clear();
const char *Data() const;
std::size_t Size() const;
// NEW: Core byte-based editing operations
void Insert(std::size_t byte_offset, const char *text, std::size_t len);
void Delete(std::size_t byte_offset, std::size_t len);
// NEW: Line-based queries
std::size_t LineCount() const;
std::string GetLine(std::size_t line_num) const;
std::pair<std::size_t, std::size_t> GetLineRange(std::size_t line_num) const; // (start, end) byte offsets
// NEW: Position conversion
std::pair<std::size_t, std::size_t> ByteOffsetToLineCol(std::size_t byte_offset) const;
std::size_t LineColToByteOffset(std::size_t row, std::size_t col) const;
// NEW: Substring extraction
std::string GetRange(std::size_t byte_offset, std::size_t len) const;
// NEW: Search support
std::size_t Find(const std::string &needle, std::size_t start_offset) const;
private:
// Existing members
std::string original_;
std::string add_;
std::vector<Piece> pieces_;
mutable std::string materialized_;
mutable bool dirty_;
std::size_t total_size_;
// NEW: Line index for efficient line operations
struct LineInfo {
std::size_t byte_offset; // absolute byte offset from buffer start
std::size_t piece_idx; // which piece contains line start
std::size_t offset_in_piece; // byte offset within that piece
};
mutable std::vector<LineInfo> line_index_;
mutable bool line_index_dirty_;
// NEW: Line index management
void RebuildLineIndex() const;
void InvalidateLineIndex();
};
```
### Buffer API Changes
```cpp
class Buffer {
public:
// NEW: Direct content access
PieceTable &Content() { return content_; }
const PieceTable &Content() const { return content_; }
// MODIFIED: Keep existing API but implement via PieceTable
void insert_text(int row, int col, std::string_view text);
void delete_text(int row, int col, std::size_t len);
void split_line(int row, int col);
void join_lines(int row);
void insert_row(int row, std::string_view text);
void delete_row(int row);
// MODIFIED: Line access - return line from PieceTable
std::size_t Nrows() const { return content_.LineCount(); }
std::string GetLine(std::size_t row) const { return content_.GetLine(row); }
// REMOVED: Rows() - no longer have vector of Lines
// std::vector<Line> &Rows(); // REMOVE
private:
// REMOVED: std::vector<Line> rows_;
// NEW: Single piece table for all content
PieceTable content_;
// Keep existing members
std::size_t curx_, cury_, rx_;
std::size_t nrows_; // cached from content_.LineCount()
std::size_t rowoffs_, coloffs_;
std::string filename_;
bool is_file_backed_;
bool dirty_;
bool read_only_;
bool mark_set_;
std::size_t mark_curx_, mark_cury_;
std::unique_ptr<UndoTree> undo_tree_;
std::unique_ptr<UndoSystem> undo_sys_;
std::uint64_t version_;
bool syntax_enabled_;
std::string filetype_;
std::unique_ptr<kte::HighlighterEngine> highlighter_;
kte::SwapRecorder *swap_rec_;
};
```
## Migration Phases
### Phase 1: Extend PieceTable (Foundation)
**Goal:** Add buffer-wide capabilities to PieceTable without breaking
existing per-line usage.
**Tasks:**
1. Add line indexing infrastructure to PieceTable
- Add `LineInfo` struct and `line_index_` member
- Implement `RebuildLineIndex()` that scans pieces for '\n'
characters
- Implement `InvalidateLineIndex()` called by Insert/Delete
2. Implement core byte-based operations
- `Insert(byte_offset, text, len)` - split piece at offset, insert
new piece
- `Delete(byte_offset, len)` - split pieces, remove/truncate as
needed
3. Implement line-based query methods
- `LineCount()` - return line_index_.size()
- `GetLine(line_num)` - extract text between line boundaries
- `GetLineRange(line_num)` - return (start, end) byte offsets
4. Implement position conversion
- `ByteOffsetToLineCol(offset)` - binary search in line_index_
- `LineColToByteOffset(row, col)` - lookup line start, add col
5. Implement utility methods
- `GetRange(offset, len)` - extract substring
- `Find(needle, start)` - search across pieces
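To make the intended semantics concrete, the sketch below composes the operations listed in the tasks above. It is illustrative only: the names match the planned API, but edge-case behavior (e.g., trailing-newline handling in `GetLine`) is whatever the Phase 1 implementation settles on.
```cpp
// Illustrative only: exercises the planned Phase 1 API surface end to end.
#include <cassert>
#include <cstddef>
#include <string>
#include "PieceTable.h"

static void
phase1_sketch()
{
    PieceTable pt;
    const std::string seed = "abc\n123\nxyz";
    pt.Insert(0, seed.data(), seed.size());    // build initial content
    assert(pt.LineCount() == 3);
    assert(pt.GetLine(1) == "123");            // GetLine strips the trailing '\n'

    // Convert (row, col) to a byte offset and edit there.
    std::size_t off = pt.LineColToByteOffset(1, 0);
    pt.Insert(off, "->", 2);                   // line 1 becomes "->123"
    assert(pt.GetLine(1) == "->123");

    // Delete a whole line's byte range and watch the line count drop.
    auto [start, end] = pt.GetLineRange(1);
    pt.Delete(start, end - start);             // removes "->123\n"
    assert(pt.LineCount() == 2);

    // Round-trip a position through both conversions.
    auto [row, col] = pt.ByteOffsetToLineCol(pt.LineColToByteOffset(1, 1));
    assert(row == 1 && col == 1);
}
```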
**Testing:**
- Write unit tests for new PieceTable methods
- Test with multi-line content
- Verify line index correctness after edits
- Benchmark performance vs current line-based approach
**Estimated Effort:** 3-5 days
### Phase 2: Create Buffer Adapter Layer (Compatibility)
**Goal:** Create compatibility layer in Buffer to use PieceTable while
maintaining existing API.
**Tasks:**
1. Add `PieceTable content_` member to Buffer (alongside existing
`rows_`)
2. Add compilation flag `KTE_USE_BUFFER_PIECE_TABLE` (like existing
`KTE_USE_PIECE_TABLE`)
3. Implement Buffer methods to delegate to content_:
```cpp
#ifdef KTE_USE_BUFFER_PIECE_TABLE
void insert_text(int row, int col, std::string_view text) {
std::size_t offset = content_.LineColToByteOffset(row, col);
content_.Insert(offset, text.data(), text.size());
}
// ... similar for other methods
#else
// Existing line-based implementation
#endif
```
4. Update file I/O to work with PieceTable
- `OpenFromFile()` - load into content_ instead of rows_
- `Save()` - serialize content_ instead of rows_
5. Update `AsString()` to materialize from content_
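For task 4, one plausible shape for the streaming save path is sketched below. It assumes a `content_`-style `PieceTable` and a bool-returning save helper, as suggested elsewhere in this plan; the real `Buffer::Save`/`SaveAs` signatures and error reporting may differ.
```cpp
// Hedged sketch: stream the whole buffer to disk without materializing it.
// Assumes a content_-style PieceTable and a bool-returning save helper as
// outlined in this plan; real Buffer::Save/SaveAs signatures may differ.
#include <fstream>
#include <string>
#include "PieceTable.h"

static bool
save_as_sketch(const PieceTable &content, const std::string &path, std::string &err)
{
    std::ofstream out(path, std::ios::binary | std::ios::trunc);
    if (!out) {
        err = "unable to open " + path + " for writing";
        return false;
    }
    content.WriteToStream(out);   // piece-by-piece, no full materialization
    out.flush();
    if (!out) {
        err = "write failed for " + path;
        return false;
    }
    return true;
}
```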
**Testing:**
- Run existing buffer correctness tests with new flag
- Verify undo/redo still works
- Test file I/O round-tripping
- Test with existing command operations
**Estimated Effort:** 3-4 days
### Phase 3: Migrate Command Layer (High-level Operations)
**Goal:** Update commands that directly access Rows() to use new API.
**Tasks:**
1. Audit all usages of `buf.Rows()` in Command.cc
2. Refactor helper functions:
- `extract_region_text()` - use content_.GetRange()
- `delete_region()` - convert to byte offsets, use content_.Delete()
- `insert_text_at_cursor()` - convert position, use content_
.Insert()
3. Update commands that iterate over lines:
- Use `buf.GetLine(i)` instead of `buf.Rows()[i]`
- Update line count queries to use `buf.Nrows()`
4. Update search/replace operations:
- Modify `search_compute_matches()` to work with GetLine()
- Update regex matching to work line-by-line or use content directly
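As a non-authoritative illustration of task 2, `extract_region_text()` can be expressed entirely in terms of the position-conversion and range APIs. The helper name comes from this plan; the body below is a sketch and ignores clamping and other edge cases.
```cpp
// Sketch of a region-extraction helper built on the buffer-wide PieceTable API.
// The helper name comes from this plan; the body is illustrative only.
#include <cstddef>
#include <string>
#include <utility>
#include "PieceTable.h"

static std::string
extract_region_text_sketch(const PieceTable &content,
                           std::size_t mark_row, std::size_t mark_col,
                           std::size_t cur_row, std::size_t cur_col)
{
    std::size_t a = content.LineColToByteOffset(mark_row, mark_col);
    std::size_t b = content.LineColToByteOffset(cur_row, cur_col);
    if (a > b)
        std::swap(a, b);                  // endpoints may arrive in either order
    return content.GetRange(a, b - a);    // region as raw bytes over [a, b)
}
```
`delete_region()` would follow the same pattern, ending in `content.Delete(a, b - a)` instead of `GetRange`.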
**Testing:**
- Test all editing commands (insert, delete, newline, backspace)
- Test region operations (mark, copy, kill)
- Test search and replace
- Test word navigation and deletion
- Run through common editing workflows
**Estimated Effort:** 4-6 days
### Phase 4: Update Renderer and Frontend (Display)
**Goal:** Ensure all renderers work with new Buffer structure.
**Tasks:**
1. Audit renderer implementations:
- `TerminalRenderer.cc`
- `ImGuiRenderer.cc`
- `QtRenderer.cc`
- `TestRenderer.cc`
2. Update line access patterns:
- Replace `buf.Rows()[y]` with `buf.GetLine(y)`
- Handle string return instead of Line object
3. Update syntax highlighting integration:
- Ensure HighlighterEngine works with GetLine()
- Update any line-based caching
**Testing:**
- Test rendering in terminal
- Test ImGui frontend (if enabled)
- Test Qt frontend (if enabled)
- Verify syntax highlighting displays correctly
- Test scrolling and viewport updates
**Estimated Effort:** 2-3 days
### Phase 5: Remove Old Infrastructure (Cleanup) ✅ COMPLETED
**Goal:** Remove GapBuffer, AppendBuffer, and Line class completely.
**Status:** Completed on 2025-12-05
**Tasks:**
1. ✅ Remove conditional compilation:
- Removed `#ifdef KTE_USE_BUFFER_PIECE_TABLE` (PieceTable is now the
only way)
- Removed `#ifdef KTE_USE_PIECE_TABLE`
- Removed `AppendBuffer.h`
2. ✅ Delete obsolete code:
- Deleted `GapBuffer.h/cc`
- Line class now uses PieceTable internally (kept for API
compatibility)
- `rows_` kept as mutable cache rebuilt from `content_` PieceTable
3. ✅ Update CMakeLists.txt:
- Removed GapBuffer from sources
- Removed AppendBuffer.h from headers
- Removed KTE_USE_PIECE_TABLE and KTE_USE_BUFFER_PIECE_TABLE options
4. ✅ Clean up includes and dependencies
5. ✅ Update documentation
**Testing:**
- Full regression test suite
- Verify clean compilation
- Check for any lingering references
**Estimated Effort:** 1-2 days
### Phase 6: Performance Optimization (Polish)
**Goal:** Optimize the new implementation for real-world usage.
**Tasks:**
1. Profile common operations:
- Measure line access patterns
- Identify hot paths in editing
- Benchmark against old implementation
2. Optimize line index:
- Consider incremental updates instead of full rebuild
- Tune rebuild threshold
- Cache frequently accessed lines
3. Optimize piece table:
- Tune piece coalescing heuristics
- Consider piece count limits and consolidation
4. Memory optimization:
- Review materialization frequency
- Consider lazy materialization strategies
- Profile memory usage on large files
**Testing:**
- Benchmark suite with various file sizes
- Memory profiling
- Real-world usage testing
**Estimated Effort:** 3-5 days
## Files Requiring Modification
### Core Files (Must Change)
- `PieceTable.h/cc` - Add new methods (Phase 1)
- `Buffer.h/cc` - Replace rows_ with content_ (Phase 2)
- `Command.cc` - Update line access (Phase 3)
- `UndoSystem.cc` - May need updates for new Buffer API
### Renderer Files (Will Change)
- `TerminalRenderer.cc` - Update line access (Phase 4)
- `ImGuiRenderer.cc` - Update line access (Phase 4)
- `QtRenderer.cc` - Update line access (Phase 4)
- `TestRenderer.cc` - Update line access (Phase 4)
### Files Removed (Phase 5 - Completed)
- `GapBuffer.h/cc` - ✅ Deleted
- `AppendBuffer.h` - ✅ Deleted
- `test_buffer_correctness.cc` - ✅ Deleted (obsolete GapBuffer
comparison test)
- `bench/BufferBench.cc` - ✅ Deleted (obsolete GapBuffer benchmarks)
- `bench/PerformanceSuite.cc` - ✅ Deleted (obsolete GapBuffer
benchmarks)
- `Buffer::Line` class - ✅ Updated to use PieceTable internally (kept
for API compatibility)
### Build Files
- `CMakeLists.txt` - Update sources (Phase 5)
### Documentation
- `README.md` - Update architecture notes
- `docs/` - Update any architectural documentation
- `REWRITE.md` - Note C++ now matches Rust design
## Testing Strategy
### Unit Tests
- **PieceTable Tests:** New file `test_piece_table.cc`
- Test Insert/Delete at various positions
- Test line indexing correctness
- Test position conversion
- Test with edge cases (empty, single line, large files)
- **Buffer Tests:** Extend `test_buffer_correctness.cc`
- Test new Buffer API with PieceTable backend
- Test file I/O round-tripping
- Test multi-line operations
### Integration Tests
- **Undo Tests:** `test_undo.cc` should still pass
- Verify undo/redo across all operation types
- Test undo tree navigation
- **Search Tests:** `test_search_correctness.cc` should still pass
- Verify search across multiple lines
- Test regex search
### Manual Testing
- Load and edit large files (>10MB)
- Perform complex editing sequences
- Test all keybindings and commands
- Verify syntax highlighting
- Test crash recovery (swap files)
### Regression Testing
- All existing tests must pass with new implementation
- No observable behavior changes for users
- Performance should be comparable or better
## Risk Assessment
### High Risk
- **Undo System Integration:** Undo records operations with
row/col/text. Need to ensure compatibility or refactor.
- *Mitigation:* Carefully preserve undo semantics, extensive testing
- **Performance Regression:** Line index rebuilding could be expensive
on large files.
- *Mitigation:* Profile early, optimize incrementally, consider
caching strategies
### Medium Risk
- **Syntax Highlighting:** Highlighters may depend on line-based access
patterns.
- *Mitigation:* Review highlighter integration, test thoroughly
- **Renderer Updates:** Multiple renderers need updating, risk of
inconsistency.
- *Mitigation:* Update all renderers in same phase, test each
### Low Risk
- **Search/Replace:** Should work naturally with new GetLine() API.
- *Mitigation:* Test thoroughly with existing test suite
## Success Criteria
### Functional Requirements
- ✓ All existing tests pass
- ✓ All commands work identically to before
- ✓ File I/O works correctly
- ✓ Undo/redo functionality preserved
- ✓ Syntax highlighting works
- ✓ All frontends (terminal, ImGui, Qt) work
### Code Quality
- ✓ GapBuffer completely removed
- ✓ No conditional compilation for buffer type
- ✓ Clean, maintainable code
- ✓ Good test coverage for new PieceTable methods
### Performance
- ✓ Editing operations at least as fast as current
- ✓ Line access within 2x of current performance
- ✓ Memory usage reasonable (no excessive materialization)
- ✓ Large file handling acceptable (tested up to 100MB)
## Timeline Estimate
| Phase | Duration | Dependencies |
|----------------------------|----------------|--------------|
| Phase 1: Extend PieceTable | 3-5 days | None |
| Phase 2: Buffer Adapter | 3-4 days | Phase 1 |
| Phase 3: Command Layer | 4-6 days | Phase 2 |
| Phase 4: Renderer Updates | 2-3 days | Phase 3 |
| Phase 5: Cleanup | 1-2 days | Phase 4 |
| Phase 6: Optimization | 3-5 days | Phase 5 |
| **Total** | **16-25 days** | |
**Note:** Timeline assumes one developer working full-time. Actual
duration may vary based on:
- Unforeseen integration issues
- Performance optimization needs
- Testing thoroughness
- Code review iterations
## Alternatives Considered
### Alternative 1: Keep Line-based but unify GapBuffer/PieceTable
- Keep vector of Lines, but make each Line always use PieceTable
- Remove GapBuffer, remove AppendBuffer selector
- **Pros:** Smaller change, less risk
- **Cons:** Doesn't achieve architectural goal, still have per-line
overhead
### Alternative 2: Hybrid approach
- Use PieceTable for buffer, but maintain materialized Line objects as
cache
- **Pros:** Easier migration, maintains some compatibility
- **Cons:** Complex dual representation, cache invalidation issues
### Alternative 3: Complete rewrite
- Follow REWRITE.md exactly, implement in Rust
- **Pros:** Modern language, better architecture
- **Cons:** Much larger effort, different project
## Recommendation
**Proceed with planned migration** (single PieceTable per Buffer)
because:
1. Aligns with long-term architecture vision (REWRITE.md)
2. Removes unnecessary per-line buffer overhead
3. Simplifies codebase (one text representation)
4. Enables future optimizations (better undo, swap files, etc.)
5. Reasonable effort (16-25 days) for significant improvement
**Suggested Approach:**
- Start with Phase 1 (extend PieceTable) in isolated branch
- Thoroughly test new PieceTable functionality
- Proceed incrementally through phases
- Maintain working editor at end of each phase
- Merge to main after Phase 4 (before cleanup) to get testing
- Complete Phase 5-6 based on feedback
## References
- `REWRITE.md` - Rust architecture specification (lines 54-157)
- Current buffer implementation: `Buffer.h/cc`
- Current piece table: `PieceTable.h/cc`
- Undo system: `UndoSystem.h/cc`, `UndoNode.h`
- Commands: `Command.cc`

163
docs/plans/test-plan.md Normal file
View File

@@ -0,0 +1,163 @@
### Unit testing plan (headless, no interactive frontend)
#### Principles
- Headless-only: exercise core components directly (`PieceTable`, `Buffer`, `UndoSystem`, `OptimizedSearch`, and minimal `Editor` flows) without starting `kte` or `kge`.
- Deterministic and fast: avoid timers, GUI, environment-specific behavior; prefer in-memory operations and temporary files.
- Regression-focused: encode prior failures (save/newline mismatch, legacy `rows_` writes) as explicit tests to prevent recurrences.
#### Harness and execution
- Single binary: use target `kte_tests` (already present) to compile and run all tests under `tests/` with the minimal in-tree framework (`tests/Test.h`, `tests/TestRunner.cc`).
- No GUI/ncurses deps: link only engine sources (PieceTable/Buffer/Undo/Search/Undo* and syntax minimal set), not frontends.
- How to build/run:
- Debug profile:
```
cmake -S /Users/kyle/src/kte -B /Users/kyle/src/kte/cmake-build-debug -DBUILD_TESTS=ON && \
cmake --build /Users/kyle/src/kte/cmake-build-debug --target kte_tests && \
/Users/kyle/src/kte/cmake-build-debug/kte_tests
```
- Release profile:
```
cmake -S /Users/kyle/src/kte -B /Users/kyle/src/kte/cmake-build-release -DBUILD_TESTS=ON && \
cmake --build /Users/kyle/src/kte/cmake-build-release --target kte_tests && \
/Users/kyle/src/kte/cmake-build-release/kte_tests
```
---
### Test catalog (summary table)
The table below catalogs all unit tests defined in this plan. It is headless-only and maps directly to the suites A-H described later. “Implemented” reflects current coverage in `kte_tests`.
| Suite | ID | Name | Description (1 line) | Headless | Implemented |
|:-----:|:---:|:------------------------------------------|:-------------------------------------------------------------------------------------|:--------:|:-----------:|
| A | 1 | SaveAs then Save (append) | New buffer → write two lines → `SaveAs` → append → `Save`; verify exact bytes. | Yes | ✓ |
| A | 2 | Open existing then Save | Open seeded file, append, `Save`; verify overwrite bytes. | Yes | ✓ |
| A | 3 | Open non-existent then SaveAs | Start from non-existent path, insert `hello, world\n`, `SaveAs`; verify bytes. | Yes | ✓ |
| A | 4 | Trailing newline preservation | Verify saving preserves presence/absence of final `\n`. | Yes | Planned |
| A | 5 | Empty buffer saves | Empty → `SaveAs` → 0 bytes; then insert `\n` → `Save` → 1 byte. | Yes | Planned |
| A | 6 | Large file streaming | 14 MiB with periodic newlines; size and content integrity. | Yes | Planned |
| A | 7 | Tilde expansion | `SaveAs` with `~/...`; re-open to confirm path/content. | Yes | Planned |
| A | 8 | Error propagation | Save to unwritable path → expect failure and error message. | Yes | Planned |
| B | 1 | Insert/Delete LineCount | Basic inserts/deletes and line counting sanity. | Yes | ✓ |
| B | 2 | Line/Col conversions | `LineColToByteOffset` and reverse around boundaries. | Yes | ✓ |
| B | 3 | Delete spanning newlines | Delete ranges that cross line breaks; verify bytes/lines. | Yes | Planned |
| B | 4 | Split/Join equivalence | `split_line` followed by `join_lines` yields original bytes. | Yes | Planned |
| B | 5 | Stream vs Data equivalence | `WriteToStream` matches `GetRange`/`Data()` after edits. | Yes | Planned |
| B | 6 | UTF8 bytes stability | Multibyte sequences behave correctly (byte-based ops). | Yes | Planned |
| C | 1 | insert_text/delete_text | Edits at start/middle/end; `Rows()` mirrors PieceTable. | Yes | Planned |
| C | 2 | split_line/join_lines | Effects and snapshots across multiple positions. | Yes | Planned |
| C | 3 | insert_row/delete_row | Replace paragraph by row ops; verify bytes/linecount. | Yes | Planned |
| C | 4 | Cache invalidation | After each mutation, `Rows()` matches `LineCount()`. | Yes | Planned |
| D | 1 | Grouped insert undo | Contiguous typing undone/redone as a group. | Yes | Planned |
| D | 2 | Delete/Newline undo/redo | Backspace/Delete and Newline transitions across undo/redo. | Yes | Planned |
| D | 3 | Mark saved & dirty | Dirty/save markers interact correctly with undo/redo. | Yes | Planned |
| E | 1 | Search parity basic | `OptimizedSearch::find_all` vs `std::string` reference. | Yes | ✓ |
| E | 2 | Large text search | ~1 MiB random text/patterns parity. | Yes | Planned |
| F | 1 | Editor open & reload | Open via `Editor`, modify, reload, verify on-disk bytes. | Yes | Planned |
| F | 2 | Read-only toggle | Toggle and verify enforcement/behavior of saves. | Yes | Planned |
| F | 3 | Prompt lifecycle | Start/Accept/Cancel prompt doesn't corrupt state. | Yes | Planned |
| G | 1 | Saved only newline regression | Insert text + newline; `Save` includes both bytes. | Yes | Planned |
| G | 2 | Backspace crash regression | PieceTable-backed delete/join path remains stable. | Yes | Planned |
| G | 3 | Overwrite-confirm path | Saving over existing path succeeds and is correct. | Yes | Planned |
| H | 1 | Many small edits | 10k small edits; final bytes correct within time bounds. | Yes | Planned |
| H | 2 | Consolidation equivalence | After many edits, stream vs data produce identical bytes. | Yes | Planned |
Legend: Implemented = ✓, Planned = to be added per Coverage roadmap.
### Test suites and cases
#### A) Filesystem I/O via Buffer
1) SaveAs then Save (append)
- New buffer → `insert_text` two lines (explicit `\n`) → `SaveAs(tmp)` → insert a third line → `Save()`.
- Assert file bytes equal exact expected string.
2) Open existing then Save
- Seed a file on disk; `OpenFromFile(path)` → append line → `Save()`.
- Assert file bytes updated exactly.
3) Open non-existent then SaveAs
- `OpenFromFile(nonexistent)` → assert `IsFileBacked()==false` → insert `"hello, world\n"` → `SaveAs(path)`.
- Read back exact bytes.
4) Trailing newline preservation
- Case (a) last line without `\n`; (b) last line with `\n` → save and verify bytes unchanged.
5) Empty buffer saves
- `SaveAs(tmp)` on empty buffer → 0-byte file. Then insert `"\n"` and `Save()` → 1-byte file.
6) Large file streaming
- Insert ~14 MiB of data with periodic newlines. `SaveAs` then `Save`; verify size matches `content_.Size()` and bytes integrity.
7) Path normalization and tilde expansion
- `SaveAs("~/.../file.txt")` → verify path expands to `$HOME` and file content round-trips with `OpenFromFile`.
8) Error propagation (guarded)
- Attempt save into a non-writable path; expect `Save/SaveAs` returns false with non-empty error. Mark as skipped in environments lacking such path.
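A minimal sketch of case A.3 follows, written with plain asserts rather than the in-tree framework. `Buffer` method names follow the migration plan; the default constructor and the bool return of `SaveAs` (implied by case 8) are assumptions.
```cpp
// Hedged sketch of A.3: open a non-existent path, insert, SaveAs, verify bytes.
// Buffer method names follow the migration plan; signatures are assumed.
#include <cassert>
#include <cstdio>
#include <fstream>
#include <sstream>
#include <string>
#include "Buffer.h"

static std::string
read_file(const std::string &path)
{
    std::ifstream in(path, std::ios::binary);
    std::ostringstream ss;
    ss << in.rdbuf();
    return ss.str();
}

static void
test_open_nonexistent_then_saveas()
{
    const std::string path = "kte_test_a3.txt";   // per-test temp file
    std::remove(path.c_str());                    // ensure it does not exist

    Buffer buf;
    buf.OpenFromFile(path);                       // non-existent path
    assert(!buf.IsFileBacked());

    const std::string text = "hello, world\n";
    buf.insert_text(0, 0, text);                  // row 0, col 0
    assert(buf.SaveAs(path));

    assert(read_file(path) == text);
    std::remove(path.c_str());                    // clean up after assertions
}
```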
#### B) PieceTable semantics
1) Line counting and deletion across lines
- Insert `"abc\n123\nxyz"` → 3 lines; delete middle line range → 2 lines; validate `GetLine` contents.
2) Position conversions
- Validate `LineColToByteOffset` and `ByteOffsetToLineCol` at start/end of lines and EOF, especially around `\n`.
3) Delete spanning newlines
- Remove a range that crosses line boundaries; verify resulting bytes, `LineCount` and line contents.
4) Split/join equivalence
- Split at various columns; then join adjacent lines; verify bytes equal original.
5) WriteToStream vs materialized `Data()`
- After multiple inserts/deletes (without forcing `Data()`), stream to `std::ostringstream`; compare with `GetRange(0, Size())`, then call `Data()` and re-compare.
6) UTF-8 bytes stability
- Insert multibyte sequences (e.g., `"héllo"`, `"中文"`, emoji) as raw bytes; ensure line counting and conversions behave (byte-based API; no crashes/corruption).
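Sketch of case B.3, using only the PieceTable API already merged (illustrative; the real test may choose different fixtures):
```cpp
// Sketch of B.3: delete a range that crosses a line boundary, then verify
// the resulting bytes and line structure.
#include <cassert>
#include <cstddef>
#include <string>
#include "PieceTable.h"

static void
test_delete_spanning_newline()
{
    PieceTable pt;
    const std::string seed = "abc\n123\nxyz";
    pt.Insert(0, seed.data(), seed.size());

    // Delete "c\n12": the range starts inside line 0 and ends inside line 1.
    std::size_t from = pt.LineColToByteOffset(0, 2);
    std::size_t to   = pt.LineColToByteOffset(1, 2);
    pt.Delete(from, to - from);

    assert(pt.GetRange(0, pt.Size()) == "ab3\nxyz");
    assert(pt.LineCount() == 2);
    assert(pt.GetLine(0) == "ab3");
    assert(pt.GetLine(1) == "xyz");
}
```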
#### C) Buffer editing helpers and rows cache correctness
1) `insert_text`/`delete_text`
- Apply at start/middle/end of lines; immediately call `Rows()` and validate contents/lengths mirror PieceTable.
2) `split_line` and `join_lines`
- Verify content effects and `Rows()` snapshots for multiple positions and consecutive operations.
3) `insert_row`/`delete_row`
- Replace a paragraph by deleting N rows then inserting N rows; verify bytes and `LineCount`.
4) Cache invalidation
- After each mutation, fetch `Rows()`; assert `Nrows() == content.LineCount()` and no stale data remains.
#### D) UndoSystem semantics
1) Grouped contiguous insert undo
- Emulate typing at a single location via repeated `insert_text`; one `undo()` should remove the whole run; `redo()` restores it.
2) Delete/newline undo/redo
- Simulate backspace/delete (`delete_text` and `join_lines`) and newline (`split_line`); verify content transitions across `undo()`/`redo()`.
3) Mark saved and dirty flag
- After successful save, call `UndoSystem::mark_saved()` (via existing pathways) and ensure dirty state pairing behaves as intended (at least: `SetDirty(false)` plus save does not break undo/redo).
#### E) Search algorithms
1) Parity with `std::string::find`
- Use `OptimizedSearch::find_all` across edge cases (empty needle/text, overlaps like `"aaaaa"` vs `"aa"`, Unicode byte sequences). Compare to reference implementation.
2) Large text
- Random ASCII text ~1 MiB; random patterns; results match reference.
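Sketch of the E.1 parity check. The shape of `find_all` (text, pattern, start offset → vector of byte offsets, non-overlapping matches) follows its use in the removed benchmark suite:
```cpp
// Sketch of E.1: OptimizedSearch::find_all must agree with a std::string::find
// reference scan. The reference mirrors the convention used by the removed
// benchmark: non-overlapping matches, advancing by the pattern length.
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>
#include "OptimizedSearch.h"

static void
test_find_all_parity(const std::string &text, const std::string &pattern)
{
    if (pattern.empty())
        return;   // empty-needle semantics are exercised separately

    OptimizedSearch os;
    const auto hits = os.find_all(text, pattern, 0);

    std::vector<std::size_t> ref;
    std::size_t from = 0;
    for (;;) {
        std::size_t p = text.find(pattern, from);
        if (p == std::string::npos)
            break;
        ref.push_back(p);
        from = p + pattern.size();
    }
    assert(ref == hits);
}
```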
#### F) Editor non-interactive flows (no frontend)
1) Open and reload
- Through `Editor`, open file; modify the underlying `Buffer` directly; invoke reload (`Buffer::OpenFromFile` or `cmd_reload_buffer` if you bring `Command.cc` into the test target). Verify bytes match the on-disk file after reload.
2) Read-only toggle
- Toggle `Buffer::ToggleReadOnly()`; confirm flag value changes and that subsequent saves still execute when not read-only (or, if enforcement exists, that mutations are appropriately restricted).
3) Prompt lifecycle (headless)
- Exercise `StartPrompt` → `AcceptPrompt` → `CancelPrompt`; ensure state resets and does not corrupt buffer/editor state.
#### G) Regression tests for reported bugs
1) “Saved only newline”
- Build buffer content via `insert_text` followed by `split_line` for newline; `Save` then validate bytes include both the text and newline.
2) Backspace crash path
- Mimic backspace behavior using PieceTable-backed helpers (`delete_text`/`join_lines`); ensure no dependency on legacy `rows_` mutation and no memory issues.
3) Overwrite-confirm path behavior
- Start with non-file-backed buffer named to collide with an existing file; perform `SaveAs(existing_path)` and assert success and correctness on disk (unit test bypasses interactive confirm, validating underlying write path).
#### H) Performance/stress sanity
1) Many small edits
- 10k single-char inserts and interleaved deletes; assert final bytes; keep within conservative runtime bounds.
2) Consolidation heuristics
- After many edits, call both `WriteToStream` and `Data()` and verify identical bytes.
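Sketch of H.2, using only APIs already present (`Append`, `Delete`, `WriteToStream`, `GetRange`, `Data`, `Size`); the edit mix is arbitrary and only intended to fragment the piece list:
```cpp
// Sketch of H.2: after many small edits, the streamed output must match the
// materialized view byte for byte.
#include <cassert>
#include <sstream>
#include <string>
#include "PieceTable.h"

static void
test_stream_matches_materialized()
{
    PieceTable pt;
    for (int i = 0; i < 10000; ++i) {
        const std::string piece = (i % 100 == 99) ? "\n" : "x";
        pt.Append(piece.data(), piece.size());
        if (i % 257 == 0 && pt.Size() > 1)
            pt.Delete(pt.Size() / 2, 1);   // interleave small deletes
    }

    std::ostringstream streamed;
    pt.WriteToStream(streamed);

    assert(streamed.str() == pt.GetRange(0, pt.Size()));
    assert(streamed.str() == std::string(pt.Data(), pt.Size()));
}
```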
---
### Coverage roadmap
- Phase 1 (already implemented and passing):
- Buffer I/O basics (A.1-A.3), PieceTable basics (B.1-B.2), Search parity (E.1).
- Phase 2 (add next):
- Buffer I/O edge cases (A.4-A.7), deeper PieceTable ops (B.3-B.6), Buffer helpers and cache (C.1-C.4), Undo semantics (D.1-D.2), Regression set (G.1-G.3).
- Phase 3:
- Editor flows (F.1-F.3), performance/stress (H.1-H.2), and optional integration of `Command.cc` into the test target to exercise non-interactive command execution paths directly.
### Notes
- Use per-test temp files under the repo root or a unique temp directory; ensure cleanup after assertions.
- For HOME-dependent tests (tilde expansion), set `HOME` in the test process if not present or skip with a clear message.
- On macOS Debug, a benign allocator warning may appear; rely on process exit code for pass/fail.

View File

@@ -13,9 +13,9 @@
packages = eachSystem (system: rec {
default = kte;
full = kge;
kte = (pkgsFor system).callPackage ./default.nix { graphical = false; };
kge = (pkgsFor system).callPackage ./default.nix { graphical = true; };
qt = (pkgsFor system).callPackage ./default.nix { graphical-qt = true; }
kte = (pkgsFor system).callPackage ./default.nix { graphical = false; graphical-qt = false; };
kge = (pkgsFor system).callPackage ./default.nix { graphical = true; graphical-qt = false; };
qt = (pkgsFor system).callPackage ./default.nix { graphical = true; graphical-qt = true; };
});
};
}

View File

@@ -17,11 +17,21 @@ InstallDefaultFonts()
));
FontRegistry::Instance().Register(std::make_unique<Font>(
"brassmono",
BrassMono::DefaultFontBoldCompressedData,
BrassMono::DefaultFontBoldCompressedSize
BrassMono::DefaultFontRegularCompressedData,
BrassMono::DefaultFontRegularCompressedSize
));
FontRegistry::Instance().Register(std::make_unique<Font>(
"brassmonocode",
"brassmono-bold",
BrassMono::DefaultFontBoldCompressedData,
BrassMono::DefaultFontBoldCompressedSize
));
FontRegistry::Instance().Register(std::make_unique<Font>(
"brassmonocode",
BrassMonoCode::DefaultFontRegularCompressedData,
BrassMonoCode::DefaultFontRegularCompressedSize
));
FontRegistry::Instance().Register(std::make_unique<Font>(
"brassmonocode-bold",
BrassMonoCode::DefaultFontBoldCompressedData,
BrassMonoCode::DefaultFontBoldCompressedSize
));

View File

@@ -16,13 +16,18 @@ open .
cd ..
mkdir -p cmake-build-release-qt
cmake -S . -B cmake-build-release -DBUILD_GUI=ON -DCMAKE_BUILD_TYPE=Release -DENABLE_ASAN=OFF
cmake -S . -B cmake-build-release-qt -DBUILD_GUI=ON -DKTE_USE_QT=ON -DCMAKE_BUILD_TYPE=Release -DENABLE_ASAN=OFF
cd cmake-build-release-qt
make clean
rm -fr kge.app* kge-qt.app*
make
mv kge.app kge-qt.app
mv -f kge.app kge-qt.app
# Use the same Qt's macdeployqt as used for building; ensure it overwrites in-bundle paths
macdeployqt kge-qt.app -always-overwrite -verbose=3
# Run CMake BundleUtilities fixup to internalize non-Qt dylibs and rewrite install names
cmake -DAPP_BUNDLE="$(pwd)/kge-qt.app" -P "${PWD%/*}/cmake/fix_bundle.cmake"
zip -r kge-qt.app.zip kge-qt.app
sha256sum kge-qt.app.zip
open .

View File

@@ -1,102 +0,0 @@
// Simple buffer correctness tests comparing GapBuffer and PieceTable to std::string
#include <cassert>
#include <cstddef>
#include <cstring>
#include <random>
#include <string>
#include <vector>
#include "GapBuffer.h"
#include "PieceTable.h"
template<typename Buf>
static void
check_equals(const Buf &b, const std::string &ref)
{
assert(b.Size() == ref.size());
if (b.Size() == 0)
return;
const char *p = b.Data();
assert(p != nullptr);
assert(std::memcmp(p, ref.data(), ref.size()) == 0);
}
template<typename Buf>
static void
run_basic_cases()
{
// empty
{
Buf b;
std::string ref;
check_equals(b, ref);
}
// append chars
{
Buf b;
std::string ref;
for (int i = 0; i < 1000; ++i) {
b.AppendChar('a');
ref.push_back('a');
}
check_equals(b, ref);
}
// prepend chars
{
Buf b;
std::string ref;
for (int i = 0; i < 1000; ++i) {
b.PrependChar('b');
ref.insert(ref.begin(), 'b');
}
check_equals(b, ref);
}
// append/prepend strings
{
Buf b;
std::string ref;
const char *hello = "hello";
b.Append(hello, 5);
ref.append("hello");
b.Prepend(hello, 5);
ref.insert(0, "hello");
check_equals(b, ref);
}
// larger random blocks
{
std::mt19937 rng(42);
std::uniform_int_distribution<int> len_dist(0, 128);
std::uniform_int_distribution<int> coin(0, 1);
Buf b;
std::string ref;
for (int step = 0; step < 2000; ++step) {
int L = len_dist(rng);
std::string payload(L, '\0');
for (int i = 0; i < L; ++i)
payload[i] = static_cast<char>('a' + (i % 26));
if (coin(rng)) {
b.Append(payload.data(), payload.size());
ref.append(payload);
} else {
b.Prepend(payload.data(), payload.size());
ref.insert(0, payload);
}
}
check_equals(b, ref);
}
}
int
main()
{
run_basic_cases<GapBuffer>();
run_basic_cases<PieceTable>();
return 0;
}

View File

@@ -1,74 +0,0 @@
// Verify OptimizedSearch against std::string reference across patterns and sizes
#include <cassert>
#include <cstddef>
#include <random>
#include <string>
#include <vector>
#include "OptimizedSearch.h"
static std::vector<std::size_t>
ref_find_all(const std::string &text, const std::string &pat)
{
std::vector<std::size_t> res;
if (pat.empty())
return res;
std::size_t from = 0;
while (true) {
auto p = text.find(pat, from);
if (p == std::string::npos)
break;
res.push_back(p);
from = p + pat.size(); // non-overlapping
}
return res;
}
static void
run_case(std::size_t textLen, std::size_t patLen, unsigned seed)
{
std::mt19937 rng(seed);
std::uniform_int_distribution<int> dist('a', 'z');
std::string text(textLen, '\0');
for (auto &ch: text)
ch = static_cast<char>(dist(rng));
std::string pat(patLen, '\0');
for (auto &ch: pat)
ch = static_cast<char>(dist(rng));
// Guarantee at least one match when possible
if (textLen >= patLen && patLen > 0) {
std::size_t pos = textLen / 3;
if (pos + patLen <= text.size())
std::copy(pat.begin(), pat.end(), text.begin() + static_cast<long>(pos));
}
OptimizedSearch os;
auto got = os.find_all(text, pat, 0);
auto ref = ref_find_all(text, pat);
assert(got == ref);
}
int
main()
{
// Edge cases
run_case(0, 0, 1);
run_case(0, 1, 2);
run_case(1, 0, 3);
run_case(1, 1, 4);
// Various sizes
for (std::size_t t = 128; t <= 4096; t *= 2) {
for (std::size_t p = 1; p <= 64; p *= 2) {
run_case(t, p, static_cast<unsigned>(t + p));
}
}
// Larger random
run_case(100000, 16, 12345);
run_case(250000, 32, 67890);
return 0;
}

View File

@@ -1,338 +0,0 @@
#include <cassert>
#include <fstream>
#include <iostream>
#include "Buffer.h"
#include "Command.h"
#include "Editor.h"
#include "TestFrontend.h"
int
main()
{
// Install default commands
InstallDefaultCommands();
Editor editor;
TestFrontend frontend;
// Initialize frontend
if (!frontend.Init(editor)) {
std::cerr << "Failed to initialize frontend\n";
return 1;
}
// Create a temporary test file
std::string err;
const char *tmpfile = "/tmp/kte_test_undo.txt";
{
std::ofstream f(tmpfile);
if (!f) {
std::cerr << "Failed to create temp file\n";
return 1;
}
f << "\n"; // Write one newline so file isn't empty
f.close();
}
if (!editor.OpenFile(tmpfile, err)) {
std::cerr << "Failed to open test file: " << err << "\n";
return 1;
}
Buffer *buf = editor.CurrentBuffer();
assert(buf != nullptr);
// Initialize cursor to (0,0) explicitly
buf->SetCursor(0, 0);
std::cout << "test_undo: Testing undo/redo system\n";
std::cout << "====================================\n\n";
bool running = true;
// Test 1: Insert text and verify buffer contains expected text
std::cout << "Test 1: Insert text 'Hello'\n";
frontend.Input().QueueText("Hello");
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(buf->Rows().size() >= 1);
std::string line_after_insert = std::string(buf->Rows()[0]);
assert(line_after_insert == "Hello");
std::cout << " Buffer content: '" << line_after_insert << "'\n";
std::cout << " ✓ Text insertion verified\n\n";
// Test 2: Undo insertion - text should be removed
std::cout << "Test 2: Undo insertion\n";
frontend.Input().QueueCommand(CommandId::Undo);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(buf->Rows().size() >= 1);
std::string line_after_undo = std::string(buf->Rows()[0]);
assert(line_after_undo == "");
std::cout << " Buffer content: '" << line_after_undo << "'\n";
std::cout << " ✓ Undo successful - text removed\n\n";
// Test 3: Redo insertion - text should be restored
std::cout << "Test 3: Redo insertion\n";
frontend.Input().QueueCommand(CommandId::Redo);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(buf->Rows().size() >= 1);
std::string line_after_redo = std::string(buf->Rows()[0]);
assert(line_after_redo == "Hello");
std::cout << " Buffer content: '" << line_after_redo << "'\n";
std::cout << " ✓ Redo successful - text restored\n\n";
// Test 4: Branching behavior redo is discarded after new edits
std::cout << "Test 4: Branching behavior (redo discarded after new edits)\n";
// Reset to empty by undoing the last redo and the original insert, then reinsert 'abc'
// Ensure buffer is empty before starting this scenario
frontend.Input().QueueCommand(CommandId::Undo); // undo Hello
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == "");
// Type a contiguous word 'abc' (single batch)
frontend.Input().QueueText("abc");
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == "abc");
// Undo once should remove the whole batch and leave empty
frontend.Input().QueueCommand(CommandId::Undo);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == "");
// Now type new text 'X' this should create a new branch and discard old redo chain
frontend.Input().QueueText("X");
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == "X");
// Attempt Redo should be a no-op (redo branch was discarded by new edit)
frontend.Input().QueueCommand(CommandId::Redo);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == "X");
// Undo and Redo along the new branch should still work
frontend.Input().QueueCommand(CommandId::Undo);
frontend.Input().QueueCommand(CommandId::Redo);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == "X");
std::cout << " ✓ Redo discarded after new edit; new branch undo/redo works\n\n";
// Clear buffer state for next tests: undo to empty if needed
frontend.Input().QueueCommand(CommandId::Undo);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == "");
// Test 5: UTF-8 insertion and undo/redo round-trip
std::cout << "Test 5: UTF-8 insertion 'é漢' and undo/redo\n";
const std::string utf8_text = "é漢"; // multi-byte UTF-8 (2 bytes + 3 bytes)
frontend.Input().QueueText(utf8_text);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == utf8_text);
// Undo should remove the entire contiguous insertion batch
frontend.Input().QueueCommand(CommandId::Undo);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == "");
// Redo restores it
frontend.Input().QueueCommand(CommandId::Redo);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == utf8_text);
std::cout << " ✓ UTF-8 insert round-trips with undo/redo\n\n";
// Clear for next test
frontend.Input().QueueCommand(CommandId::Undo);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == "");
// Test 6: Multi-line operations (newline split and join via backspace at BOL)
std::cout << "Test 6: Newline split and join via backspace at BOL\n";
// Insert "ab" then newline then "cd" → expect two lines
frontend.Input().QueueText("ab");
frontend.Input().QueueCommand(CommandId::Newline);
frontend.Input().QueueText("cd");
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(buf->Rows().size() >= 2);
assert(std::string(buf->Rows()[0]) == "ab");
assert(std::string(buf->Rows()[1]) == "cd");
std::cout << " ✓ Split into two lines\n";
// Undo once should remove "cd" insertion leaving two lines ["ab", ""] or join depending on commit
frontend.Input().QueueCommand(CommandId::Undo);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
// Current design batches typing on the second line; after undo, the second line should exist but be empty
assert(buf->Rows().size() >= 2);
assert(std::string(buf->Rows()[0]) == "ab");
assert(std::string(buf->Rows()[1]) == "");
// Undo the newline should rejoin to a single line "ab"
frontend.Input().QueueCommand(CommandId::Undo);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(buf->Rows().size() >= 1);
assert(std::string(buf->Rows()[0]) == "ab");
// Redo twice to get back to ["ab","cd"]
frontend.Input().QueueCommand(CommandId::Redo);
frontend.Input().QueueCommand(CommandId::Redo);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == "ab");
assert(std::string(buf->Rows()[1]) == "cd");
std::cout << " ✓ Newline undo/redo round-trip\n";
// Now join via Backspace at beginning of second line
frontend.Input().QueueCommand(CommandId::MoveDown); // ensure we're on the second line
frontend.Input().QueueCommand(CommandId::MoveHome); // go to BOL on second line
frontend.Input().QueueCommand(CommandId::Backspace); // join with previous line
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(buf->Rows().size() >= 1);
assert(std::string(buf->Rows()[0]) == "abcd");
std::cout << " ✓ Backspace at BOL joins lines\n";
// Undo/Redo the join
frontend.Input().QueueCommand(CommandId::Undo);
frontend.Input().QueueCommand(CommandId::Redo);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(buf->Rows().size() >= 1);
assert(std::string(buf->Rows()[0]) == "abcd");
std::cout << " ✓ Join undo/redo round-trip\n\n";
// Test 7: Typing batching a contiguous word undone in one step
std::cout << "Test 7: Typing batching (single undo removes whole word)\n";
// Clear current line first
frontend.Input().QueueCommand(CommandId::MoveHome);
frontend.Input().QueueCommand(CommandId::KillToEOL);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]).empty());
// Type a word and verify one undo clears it
frontend.Input().QueueText("hello");
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == "hello");
frontend.Input().QueueCommand(CommandId::Undo);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]).empty());
frontend.Input().QueueCommand(CommandId::Redo);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == "hello");
std::cout << " ✓ Contiguous typing batched into single undo step\n\n";
// Test 8: Forward delete batching at a fixed anchor column
std::cout << "Test 8: Forward delete batching at fixed anchor (DeleteChar)\n";
// Prepare line content
frontend.Input().QueueCommand(CommandId::MoveHome);
frontend.Input().QueueCommand(CommandId::KillToEOL);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
frontend.Input().QueueText("abcdef");
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
// Ensure cursor at anchor column 0
frontend.Input().QueueCommand(CommandId::MoveHome);
// Delete three chars at cursor; should batch into one Delete node
frontend.Input().QueueCommand(CommandId::DeleteChar, "", 3);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == "def");
// Single undo should restore the entire deleted run
frontend.Input().QueueCommand(CommandId::Undo);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == "abcdef");
// Redo should remove the same run again
frontend.Input().QueueCommand(CommandId::Redo);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == "def");
std::cout << " ✓ Forward delete batched and undo/redo round-trips\n\n";
// Test 9: Backspace batching with prepend rule (cursor moves left)
std::cout << "Test 9: Backspace batching with prepend rule\n";
// Restore to full string then backspace a run
frontend.Input().QueueCommand(CommandId::Undo); // bring back to "abcdef"
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == "abcdef");
// Move to end and backspace three characters; should batch into one Delete node
frontend.Input().QueueCommand(CommandId::MoveEnd);
frontend.Input().QueueCommand(CommandId::Backspace, "", 3);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == "abc");
// Single undo restores the deleted run
frontend.Input().QueueCommand(CommandId::Undo);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == "abcdef");
// Redo removes it again
frontend.Input().QueueCommand(CommandId::Redo);
while (!frontend.Input().IsEmpty() && running) {
frontend.Step(editor, running);
}
assert(std::string(buf->Rows()[0]) == "abc");
std::cout << " ✓ Backspace run batched and undo/redo round-trips\n\n";
frontend.Shutdown();
std::cout << "====================================\n";
std::cout << "All tests passed!\n";
return 0;
}

tests/Test.h Normal file
View File

@@ -0,0 +1,63 @@
// Minimal header-only unit test framework for kte
#pragma once
#include <functional>
#include <iostream>
#include <string>
#include <vector>
#include <chrono>
#include <sstream>
namespace ktet {
struct TestCase {
std::string name;
std::function<void()> fn;
};
inline std::vector<TestCase>& registry() {
static std::vector<TestCase> r;
return r;
}
struct Registrar {
Registrar(const char* name, std::function<void()> fn) {
registry().push_back(TestCase{std::string(name), std::move(fn)});
}
};
// Assertions
struct AssertionFailure {
std::string msg;
};
inline void expect(bool cond, const char* expr, const char* file, int line) {
if (!cond) {
std::cerr << file << ":" << line << ": EXPECT failed: " << expr << "\n";
}
}
inline void assert_true(bool cond, const char* expr, const char* file, int line) {
if (!cond) {
throw AssertionFailure{std::string(file) + ":" + std::to_string(line) + ": ASSERT failed: " + expr};
}
}
template<typename A, typename B>
inline void assert_eq_impl(const A& a, const B& b, const char* ea, const char* eb, const char* file, int line) {
if (!(a == b)) {
std::ostringstream oss;
oss << file << ":" << line << ": ASSERT_EQ failed: " << ea << " == " << eb;
throw AssertionFailure{oss.str()};
}
}
} // namespace ktet
#define TEST(name) \
static void name(); \
static ::ktet::Registrar _reg_##name(#name, &name); \
static void name()
#define EXPECT_TRUE(x) ::ktet::expect((x), #x, __FILE__, __LINE__)
#define ASSERT_TRUE(x) ::ktet::assert_true((x), #x, __FILE__, __LINE__)
#define ASSERT_EQ(a,b) ::ktet::assert_eq_impl((a),(b), #a, #b, __FILE__, __LINE__)

tests/TestRunner.cc Normal file
View File

@@ -0,0 +1,33 @@
#include "Test.h"
#include <iostream>
#include <chrono>
int main() {
using namespace std::chrono;
auto &reg = ktet::registry();
std::cout << "kte unit tests: " << reg.size() << " test(s)\n";
int failed = 0;
auto t0 = steady_clock::now();
for (const auto &tc : reg) {
auto ts = steady_clock::now();
try {
tc.fn();
auto te = steady_clock::now();
auto ms = duration_cast<milliseconds>(te - ts).count();
std::cout << "[ OK ] " << tc.name << " (" << ms << " ms)\n";
} catch (const ktet::AssertionFailure &e) {
++failed;
std::cerr << "[FAIL] " << tc.name << " -> " << e.msg << "\n";
} catch (const std::exception &e) {
++failed;
std::cerr << "[EXCP] " << tc.name << " -> " << e.what() << "\n";
} catch (...) {
++failed;
std::cerr << "[EXCP] " << tc.name << " -> unknown exception\n";
}
}
auto t1 = steady_clock::now();
auto total_ms = duration_cast<milliseconds>(t1 - t0).count();
std::cout << "Done in " << total_ms << " ms. Failures: " << failed << "\n";
return failed == 0 ? 0 : 1;
}

tests/test_buffer_io.cc Normal file
View File

@@ -0,0 +1,79 @@
#include "Test.h"
#include <fstream>
#include <cstdio>
#include <string>
#include "Buffer.h"
static std::string read_all(const std::string &path) {
std::ifstream in(path, std::ios::binary);
return std::string((std::istreambuf_iterator<char>(in)), std::istreambuf_iterator<char>());
}
TEST(Buffer_SaveAs_and_Save_new_file) {
const std::string path = "./.kte_ut_buffer_io_1.tmp";
std::remove(path.c_str());
Buffer b;
// insert two lines
b.insert_text(0, 0, std::string("Hello, world!\n"));
b.insert_text(1, 0, std::string("Second line\n"));
std::string err;
ASSERT_TRUE(b.SaveAs(path, err));
ASSERT_EQ(err.empty(), true);
// append another line then Save()
b.insert_text(2, 0, std::string("Third\n"));
b.SetDirty(true);
ASSERT_TRUE(b.Save(err));
ASSERT_EQ(err.empty(), true);
std::string got = read_all(path);
ASSERT_EQ(got, std::string("Hello, world!\nSecond line\nThird\n"));
std::remove(path.c_str());
}
TEST(Buffer_Save_after_Open_existing) {
const std::string path = "./.kte_ut_buffer_io_2.tmp";
std::remove(path.c_str());
{
std::ofstream out(path, std::ios::binary);
out << "abc\n123\n";
}
Buffer b;
std::string err;
ASSERT_TRUE(b.OpenFromFile(path, err));
ASSERT_EQ(err.empty(), true);
b.insert_text(2, 0, std::string("tail\n"));
b.SetDirty(true);
ASSERT_TRUE(b.Save(err));
ASSERT_EQ(err.empty(), true);
std::string got = read_all(path);
ASSERT_EQ(got, std::string("abc\n123\ntail\n"));
std::remove(path.c_str());
}
TEST(Buffer_Open_nonexistent_then_SaveAs) {
const std::string path = "./.kte_ut_buffer_io_3.tmp";
std::remove(path.c_str());
Buffer b;
std::string err;
ASSERT_TRUE(b.OpenFromFile(path, err));
ASSERT_EQ(err.empty(), true);
ASSERT_EQ(b.IsFileBacked(), false);
b.insert_text(0, 0, std::string("hello, world"));
b.insert_text(0, 12, std::string("\n"));
b.SetDirty(true);
ASSERT_TRUE(b.SaveAs(path, err));
ASSERT_EQ(err.empty(), true);
std::string got = read_all(path);
ASSERT_EQ(got, std::string("hello, world\n"));
std::remove(path.c_str());
}

tests/test_piece_table.cc Normal file
View File

@@ -0,0 +1,49 @@
#include "Test.h"
#include "PieceTable.h"
#include <string>
TEST(PieceTable_Insert_Delete_LineCount) {
PieceTable pt;
// start empty
ASSERT_EQ(pt.Size(), (std::size_t)0);
ASSERT_EQ(pt.LineCount(), (std::size_t)1); // empty buffer has 1 logical line
// Insert some text with newlines
const char *t = "abc\n123\nxyz"; // last line without trailing NL
pt.Insert(0, t, 11);
ASSERT_EQ(pt.Size(), (std::size_t)11);
ASSERT_EQ(pt.LineCount(), (std::size_t)3);
// Check get line
ASSERT_EQ(pt.GetLine(0), std::string("abc"));
ASSERT_EQ(pt.GetLine(1), std::string("123"));
ASSERT_EQ(pt.GetLine(2), std::string("xyz"));
// Delete middle line entirely including its trailing NL
auto r = pt.GetLineRange(1); // [start,end) points to start of line 1 to start of line 2
pt.Delete(r.first, r.second - r.first);
ASSERT_EQ(pt.LineCount(), (std::size_t)2);
ASSERT_EQ(pt.GetLine(0), std::string("abc"));
ASSERT_EQ(pt.GetLine(1), std::string("xyz"));
}
TEST(PieceTable_LineCol_Conversions) {
PieceTable pt;
std::string s = "hello\nworld\n"; // two lines with trailing NL
pt.Insert(0, s.data(), s.size());
// Byte offsets of starts
auto off0 = pt.LineColToByteOffset(0, 0);
auto off1 = pt.LineColToByteOffset(1, 0);
auto off2 = pt.LineColToByteOffset(2, 0); // EOF
ASSERT_EQ(off0, (std::size_t)0);
ASSERT_EQ(off1, (std::size_t)6); // "hello\n"
ASSERT_EQ(off2, pt.Size());
auto lc0 = pt.ByteOffsetToLineCol(0);
auto lc1 = pt.ByteOffsetToLineCol(6);
ASSERT_EQ(lc0.first, (std::size_t)0);
ASSERT_EQ(lc0.second, (std::size_t)0);
ASSERT_EQ(lc1.first, (std::size_t)1);
ASSERT_EQ(lc1.second, (std::size_t)0);
}

tests/test_search.cc Normal file
View File

@@ -0,0 +1,36 @@
#include "Test.h"
#include "OptimizedSearch.h"
#include <string>
#include <vector>
static std::vector<std::size_t> ref_find_all(const std::string &text, const std::string &pat) {
std::vector<std::size_t> res;
if (pat.empty()) return res;
std::size_t from = 0;
while (true) {
auto p = text.find(pat, from);
if (p == std::string::npos) break;
res.push_back(p);
from = p + pat.size();
}
return res;
}
TEST(OptimizedSearch_basic_cases) {
OptimizedSearch os;
struct Case { std::string text; std::string pat; } cases[] = {
{"", ""},
{"", "a"},
{"a", ""},
{"a", "a"},
{"aaaaa", "aa"},
{"hello world", "world"},
{"abcabcabc", "abc"},
{"the quick brown fox", "fox"},
};
for (auto &c : cases) {
auto got = os.find_all(c.text, c.pat, 0);
auto ref = ref_find_all(c.text, c.pat);
ASSERT_EQ(got, ref);
}
}