
Aliexe-code/SisyphusDB

 
 


SisyphusDB

A strongly consistent, distributed key-value store written in Go, built from scratch with Raft consensus and an LSM-tree storage engine

[Grafana dashboard screenshot]

Features · Quick Start · Architecture · Benchmarks · Chaos Testing · EKS Deployment Proof


Features

  • Raft Consensus — Leader election, log replication, and automatic failover with <550ms recovery
  • LSM-Tree Storage — LevelDB-style tiered compaction with Bloom filters for 95% fewer disk lookups
  • 10,000+ Write RPS — Achieved through custom Raft WAL with drain-loop batching, buffer pooling, and async persistence
  • Kubernetes Native — StatefulSet deployment with persistent volumes and Prometheus/Grafana monitoring
  • CLI Client — Full-featured command-line interface with configuration management, metrics, and testing tools
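The Bloom-filter read path can be illustrated with a minimal sketch (the type, parameters, and hashing scheme here are illustrative, not the repo's actual API): before reading an SSTable from disk, the store asks the filter; a negative answer means the key is definitely absent, so the disk lookup is skipped entirely.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// Bloom is a minimal Bloom filter: m bits, k probes per key,
// with the k probe positions derived from two base hashes.
type Bloom struct {
	bits []uint64
	m, k uint64
}

func NewBloom(m, k uint64) *Bloom {
	return &Bloom{bits: make([]uint64, (m+63)/64), m: m, k: k}
}

// hashes derives two base hash values from one FNV-1a pass.
func (b *Bloom) hashes(key string) (uint64, uint64) {
	h := fnv.New64a()
	h.Write([]byte(key))
	v := h.Sum64()
	return v, v>>33 | v<<31
}

// Add sets the k bits for key.
func (b *Bloom) Add(key string) {
	h1, h2 := b.hashes(key)
	for i := uint64(0); i < b.k; i++ {
		bit := (h1 + i*h2) % b.m
		b.bits[bit/64] |= 1 << (bit % 64)
	}
}

// MayContain reports false only when the key was never added,
// which is what lets the store skip the SSTable read on disk.
func (b *Bloom) MayContain(key string) bool {
	h1, h2 := b.hashes(key)
	for i := uint64(0); i < b.k; i++ {
		bit := (h1 + i*h2) % b.m
		if b.bits[bit/64]&(1<<(bit%64)) == 0 {
			return false // definitely absent: no disk lookup needed
		}
	}
	return true // possibly present: fall through to the SSTable
}

func main() {
	f := NewBloom(1024, 4)
	f.Add("hello")
	fmt.Println(f.MayContain("hello")) // true: added keys are always found
}
```

A negative answer is exact, a positive one is probabilistic; tuning m and k per SSTable is what trades memory for the quoted reduction in disk lookups.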

Quick Start

CLI Client

# Build CLI
make build-cli

# Basic operations
sicli put hello world
sicli get hello
sicli delete hello

# Configure server
sicli config set --server-url http://localhost:8081
sicli metrics

For detailed installation instructions, see cmd/cli/INSTALLATION.md.

Local (Docker Compose)

docker-compose up
# Access: http://localhost:8001/put?key=hello&val=world
# Grafana: http://localhost:3000 (admin/admin)

Kubernetes

kubectl apply -f deploy/k8s/

See INSTALL.md for detailed setup and EKS-INSTALL.md for AWS deployment.


Architecture

[Architecture diagram]

The system consists of three layers:

Layer      Components
Consensus  Raft leader election, log replication via gRPC, term-based conflict resolution
Storage    Write-ahead log, MemTable with arena allocator, SSTable compaction
API        HTTP interface with automatic leader forwarding
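Term-based conflict resolution in the consensus layer can be sketched as follows (a simplified illustration with hypothetical types, not SisyphusDB's actual implementation): when a follower receives entries whose terms conflict with its local log at the same index, it truncates the divergent suffix before appending the leader's entries.

```go
package main

import "fmt"

// Entry is a simplified Raft log entry: just a term and a command.
type Entry struct {
	Term uint64
	Cmd  string
}

// appendEntries applies Raft's conflict rule: if an existing entry has
// the same index but a different term than an incoming one, delete it
// and everything after it, then append the leader's entries.
func appendEntries(log []Entry, prevIndex int, entries []Entry) []Entry {
	for i, e := range entries {
		idx := prevIndex + 1 + i
		if idx < len(log) {
			if log[idx].Term != e.Term {
				log = log[:idx] // conflict: truncate the divergent suffix
			} else {
				continue // entry already present; nothing to do
			}
		}
		log = append(log, e)
	}
	return log
}

func main() {
	local := []Entry{{1, "a"}, {1, "b"}, {2, "stale"}}
	fromLeader := []Entry{{1, "b"}, {3, "c"}}
	for _, e := range appendEntries(local, 0, fromLeader) {
		fmt.Printf("term=%d cmd=%s\n", e.Term, e.Cmd)
	}
	// The stale term-2 entry is discarded; the leader's term-3 entry wins.
}
```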

See docs/ARCHITECHTURE.md for detailed documentation.


Benchmarks

Write Performance

Metric           Value
Peak Throughput  9,847 RPS
Mean Latency     26.74ms
P99 Latency      68.13ms
Success Rate     100%

Write Path Optimization

Custom appendable Raft WAL (d2d0e61) achieved a 3.3× throughput improvement through drain-loop batching, sync.Pool buffer reuse, and async fsync.

Metric       Before   After
Peak RPS     2,960    9,847
P99 Latency  90.32ms  68.13ms
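The drain-loop batching idea can be sketched like this (illustrative names, not the actual WAL API): a writer goroutine blocks for one entry, then opportunistically drains everything else already queued, so a single write and a single fsync cover many client requests.

```go
package main

import "fmt"

// batchDrain blocks for at least one pending entry, then drains whatever
// else is already queued (up to max), so the caller can persist the whole
// batch with one write and one fsync instead of one per request.
func batchDrain(entries chan []byte, max int) [][]byte {
	batch := [][]byte{<-entries} // wait for the first entry
	for len(batch) < max {
		select {
		case e := <-entries:
			batch = append(batch, e)
		default:
			return batch // queue drained: flush what we have
		}
	}
	return batch // batch full: flush
}

func main() {
	entries := make(chan []byte, 8)
	for i := 0; i < 5; i++ {
		entries <- []byte(fmt.Sprintf("entry-%d", i))
	}
	batch := batchDrain(entries, 64)
	fmt.Println(len(batch)) // all 5 queued entries land in one batch
}
```

Under load the queue is rarely empty, so batches grow and per-request fsync cost amortizes away; combining this with sync.Pool buffer reuse and asynchronous persistence is what the numbers above attribute the 3.3× gain to.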

Memory Optimization

Custom arena allocator reduced write latency by 71% (82ns → 23ns) by eliminating GC pressure.

Implementation   Latency   Allocations/Op
Standard Map     82.21 ns  64 B
Arena Allocator  23.17 ns  0 B
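A bump-pointer arena of the kind described above might look like this (a simplified sketch; the real allocator presumably also handles block chaining, alignment, and reset):

```go
package main

import "fmt"

// Arena hands out byte slices carved from one large pre-allocated
// block, so MemTable inserts avoid per-key heap allocations and the
// GC pressure they cause.
type Arena struct {
	buf []byte
	off int
}

func NewArena(size int) *Arena {
	return &Arena{buf: make([]byte, size)}
}

// Alloc bumps the offset and returns an n-byte slice from the arena,
// or nil when the arena is exhausted. The full-slice expression caps
// capacity so appends can never bleed into a neighbor's region.
func (a *Arena) Alloc(n int) []byte {
	if a.off+n > len(a.buf) {
		return nil
	}
	s := a.buf[a.off : a.off+n : a.off+n]
	a.off += n
	return s
}

func main() {
	a := NewArena(1 << 10)
	key := a.Alloc(5)
	copy(key, "hello")
	fmt.Println(string(key), a.off)
}
```

Allocation is a bounds check plus an offset bump, and the whole arena is freed as one object, which is where the latency and GC wins come from.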

Benchmarks: docs/benchmarks/ | Arena profiling: docs/benchmarks/arena/


Chaos Testing

Chaos tests validate Raft's correctness guarantees under failure conditions.

Test 1: Leader Failover

Kills the leader mid-write and verifies no acknowledged data is lost.

[Step 1] Start 3-node cluster
[Step 2] Write 50 keys
[Step 3] SIGKILL the leader
[Step 4] Wait for new leader election (<550ms)
[Step 5] Write 50 more keys
[Step 6] Verify all 100 keys readable

Result: Zero data loss. All acknowledged writes survive leader failures.

Test 2: Split-Brain Prevention

Sends writes during election and verifies only one node accepts them.
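The majority rule behind this guarantee fits in a few lines (illustrative, not the repo's code): each node votes at most once per term, and only a candidate holding a strict majority of votes becomes leader, so two leaders cannot coexist in the same term.

```go
package main

import "fmt"

// wonElection applies Raft's majority rule: a candidate needs votes
// from a strict majority of the cluster. Since each node grants at
// most one vote per term, two disjoint majorities are impossible.
func wonElection(votes, clusterSize int) bool {
	return votes > clusterSize/2
}

func main() {
	// In a 3-node cluster partitioned 2/1, only the 2-node side can
	// reach a majority; the minority side cannot elect a leader and
	// must reject writes.
	fmt.Println(wonElection(2, 3), wonElection(1, 3)) // true false
}
```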

# Run chaos tests
cd tests/chaos
go test -v -timeout 120s

See tests/chaos/README.md for details.


Observability

Built-in Prometheus metrics and Grafana dashboards:

  • Write throughput (RPS per node)
  • P99 latency
  • Raft leader state
  • Replication lag

# Access dashboards
kubectl port-forward svc/grafana 3000:3000
# Open http://localhost:3000 (admin/admin)

Project Structure

├── cmd/
│   ├── server/       # HTTP server entry point
│   └── cli/          # CLI client (sicli)
├── raft/             # Consensus implementation
├── kv/               # Storage engine (LSM-tree)
├── pkg/
│   ├── arena/        # Zero-allocation memory pool
│   ├── wal/          # Write-ahead log
│   └── bloom/        # Bloom filter
├── sstable/          # Sorted string tables
├── tests/chaos/      # Chaos tests
├── deploy/
│   ├── k8s/          # Kubernetes manifests
│   ├── prometheus/   # Monitoring config
│   └── grafana/      # Dashboards
└── docs/             # Architecture & benchmarks

License

MIT
