Architecture Overview
This document describes Lowkey's system architecture, component design, and data flow.
System Components
+---------------------------------------------------------------------+
|                         Desktop Application                         |
| +-----------+  +-----------+  +-----------+  +------------+         |
| |   React   |  |   Tauri   |  |   State   |  |    IPC     |         |
| |    UI     |  |  Window   |  |  Context  |  |  Bindings  |         |
| +-----+-----+  +-----+-----+  +-----+-----+  +-----+------+         |
|       +--------------+--------------+--------------+                |
+-------------------------------------+-------------------------------+
                                      | Tauri Commands
                                      v
+---------------------------------------------------------------------+
|                         Engine Core (Rust)                          |
| +----------------+  +----------------+  +------------------------+  |
| |  EngineHandle  |  |   ChunkStore   |  |     Streaming HTTP     |  |
| |  (API Layer)   |  |   (Storage)    |  |         Server         |  |
| +-------+--------+  +-------+--------+  +-----------+------------+  |
|         |                   |                       |               |
|         +-------------------+-----------------------+               |
|                             |                                       |
| +---------------------------v------------------------------------+  |
| |                        DHT / Networking                        |  |
| | +----------+  +----------+  +----------+  +---------------+    |  |
| | | Kademlia |  | Gossipsub|  |  Relay   |  | Request-Resp  |    |  |
| | |   DHT    |  |  PubSub  |  | Circuits |  |  Chunk Xfer   |    |  |
| | +----------+  +----------+  +----------+  +---------------+    |  |
| +----------------------------------------------------------------+  |
+---------------------------------------------------------------------+
Engine Core
EngineHandle API
The main interface for interacting with the P2P engine:
pub struct EngineHandle {
    dht: DhtClient,
    store: ChunkStore,
    rt: Arc<Runtime>,
}

impl EngineHandle {
    pub fn share_file(&self, path: &str, title: &str) -> Result<String>;
    pub fn search(&self, query: &str) -> Result<Vec<SearchResult>>;
    pub fn download_to_path(&self, file_id: &str, dest: &str) -> Result<()>;
    pub fn get_stream_url(&self, file_id: &str) -> Result<String>;
}
DHT Client
Manages all network operations:
impl DhtClient {
    // Content operations
    pub async fn put(&self, key: &str, value: &[u8]) -> Result<()>;
    pub async fn get(&self, key: &str) -> Result<Option<Vec<u8>>>;

    // Chunk transfer (v0.2.7+)
    pub async fn request_chunk(&self, peer: PeerId, chunk_id: String) -> Result<Vec<u8>>;
    pub async fn announce_provider(&self, chunk_id: &str) -> Result<()>;
    pub async fn find_providers(&self, chunk_id: &str) -> Result<Vec<PeerId>>;

    // PubSub
    pub async fn publish(&self, topic: &str, data: &[u8]) -> Result<()>;
}
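The DHT records use simple namespaced string keys (`manifest:{file_id}`, `provider:{chunk_id}`, `kw:{token}`, as seen in the flows below). A minimal sketch of key construction — the helper names and the lowercasing of keyword tokens are illustrative assumptions, not the engine's actual code:

```rust
/// Hypothetical helpers for the namespaced DHT keys used by the engine.
fn manifest_key(file_id: &str) -> String {
    format!("manifest:{file_id}")
}

fn provider_key(chunk_id: &str) -> String {
    format!("provider:{chunk_id}")
}

fn keyword_key(token: &str) -> String {
    // Lowercasing here is an assumption; the real tokenizer may differ.
    format!("kw:{}", token.to_lowercase())
}

fn main() {
    assert_eq!(manifest_key("abc123"), "manifest:abc123");
    assert_eq!(provider_key("chunk-0"), "provider:chunk-0");
    assert_eq!(keyword_key("Lowkey"), "kw:lowkey");
}
```

Keeping all record types in one flat keyspace lets every lookup go through the same `put`/`get` pair above.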
Behaviour Stack
The libp2p swarm combines multiple protocols:
#[derive(NetworkBehaviour)]
struct Behaviour {
    kademlia: kad::Behaviour<MemoryStore>,                    // DHT for discovery
    gossipsub: gossipsub::Behaviour,                          // PubSub messaging
    chunk_transfer: request_response::Behaviour<ChunkCodec>,  // P2P chunks
    relay: relay::client::Behaviour,                          // NAT traversal
    rendezvous: rendezvous::client::Behaviour,                // Peer discovery
    identify: identify::Behaviour,                            // Peer identification
    ping: ping::Behaviour,                                    // Connectivity check
    dcutr: dcutr::Behaviour,                                  // Direct connection upgrade
    autonat: Toggle<autonat::Behaviour>,                      // NAT detection
}
Data Flow
File Sharing Flow
1. User selects file
   |
2. EngineHandle.share_file()
   |
   +-- 3. Split into 256 KiB chunks
   |
   +-- 4. Hash each chunk with BLAKE3
   |
   +-- 5. Store chunks locally
   |
   +-- 6. Create manifest JSON
   |       { file_id, title, mime, size, chunks: [chunk_ids] }
   |
   +-- 7. DHT put(manifest:{file_id}, manifest_json)
   |
   +-- 8. Announce provider for each chunk
   |       DHT put(provider:{chunk_id}, {peer_id, timestamp})
   |
   +-- 9. Index keywords for search
   |       DHT put(kw:{token}, file_id)
   |
   +-- 10. Gossipsub publish(lowkey-files, share_announcement)
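Step 3 above is plain fixed-size chunking. A std-only sketch (in the real flow each chunk would then be hashed with BLAKE3 via the `blake3` crate; hashing is omitted here, and `split_into_chunks` is a hypothetical helper):

```rust
const CHUNK_SIZE: usize = 256 * 1024; // 256 KiB, per step 3 above

/// Split a file's bytes into fixed-size chunks; the last chunk may be
/// shorter. Illustrative only — the engine's internals may differ.
fn split_into_chunks(data: &[u8]) -> Vec<&[u8]> {
    data.chunks(CHUNK_SIZE).collect()
}

fn main() {
    // A 600 000-byte file needs ceil(600000 / 262144) = 3 chunks.
    let data = vec![0u8; 600_000];
    let chunks = split_into_chunks(&data);
    assert_eq!(chunks.len(), 3);
    // The final chunk holds the remainder.
    assert_eq!(chunks[2].len(), 600_000 - 2 * CHUNK_SIZE);
}
```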
Chunk Fetch Flow (v0.2.7)
1. Streaming server receives HTTP request for byte range
   |
2. Calculate needed chunks for range
   |
3. For each chunk:
   |
   +-- 4. Check local cache
   |       +-- If found -> return cached data
   |
   +-- 5. Find providers via DHT
   |       DHT get(provider:{chunk_id})
   |
   +-- 6. Try direct request to each provider (up to 3)
   |       Request-Response: {chunk_id} -> {found, data}
   |       +-- If success -> cache and return
   |
   +-- 7. Fallback: Gossipsub broadcast
           publish(lowkey-chunk-req, {from, chunk_id})
           subscribe(lowkey-chunk-resp) -> filter by "to" field
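Step 2 is pure arithmetic over the fixed 256 KiB chunk size. A sketch, assuming a half-open byte range as in HTTP range handling (`needed_chunks` is a hypothetical helper, not the server's actual code):

```rust
const CHUNK_SIZE: u64 = 256 * 1024; // 262 144 bytes

/// Map a byte range (start inclusive, end exclusive) to the inclusive
/// range of chunk indices the streaming server must fetch.
fn needed_chunks(start: u64, end: u64) -> (u64, u64) {
    let first = start / CHUNK_SIZE;
    let last = (end - 1) / CHUNK_SIZE; // index of the chunk holding the last byte
    (first, last)
}

fn main() {
    // Bytes 300 000..600 000 span chunks 1 and 2.
    assert_eq!(needed_chunks(300_000, 600_000), (1, 2));
    // A range inside the first chunk touches only chunk 0.
    assert_eq!(needed_chunks(0, 1024), (0, 0));
}
```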
Threading Model
+-------------------------------------------------------------+
|                         Main Thread                         |
|     Tauri event loop, React rendering, UI interactions      |
+------------------------------+------------------------------+
                               | spawn_blocking / invoke
                               v
+-------------------------------------------------------------+
|                        Tokio Runtime                        |
| +--------------+  +--------------+  +--------------------+  |
| |   DHT Task   |  |  Streaming   |  |   Chunk Fetches    |  |
| | (Swarm loop) |  | HTTP Server  |  |  (parallel async)  |  |
| +--------------+  +--------------+  +--------------------+  |
+-------------------------------------------------------------+
The Tokio runtime is shared across all async operations. The DHT swarm runs in a dedicated task, the streaming server handles concurrent requests, and chunk fetches are parallelized (up to 8 concurrent).
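The 8-fetch cap is a bounded-concurrency pattern; in async Tokio code it would naturally be a `tokio::sync::Semaphore`. To keep the sketch std-only, the same idea is shown below with a token channel acting as a counting semaphore (`fetch_all` and the placeholder fetch are illustrative, not the engine's code):

```rust
use std::sync::mpsc;
use std::thread;

/// Run one placeholder "fetch" per chunk id, never more than
/// `max_parallel` at a time; returns how many fetches completed.
fn fetch_all(chunk_ids: Vec<u32>, max_parallel: usize) -> usize {
    let (tx, rx) = mpsc::channel::<()>();
    // Seed the channel with one token per allowed concurrent fetch.
    for _ in 0..max_parallel {
        tx.send(()).unwrap();
    }
    let mut handles = Vec::new();
    for id in chunk_ids {
        rx.recv().unwrap(); // wait for a free slot
        let tx = tx.clone();
        handles.push(thread::spawn(move || {
            let _ = id; // placeholder: fetch chunk `id` here
            tx.send(()).unwrap(); // release the slot
        }));
    }
    handles.into_iter().map(|h| h.join().unwrap()).count()
}

fn main() {
    assert_eq!(fetch_all((0..20).collect(), 8), 20);
}
```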
Error Handling
| Scenario | Strategy |
|---|---|
| DHT operations | 3 attempts with 200ms backoff |
| Chunk requests | 3 providers attempted before fallback |
| Network timeouts | 30 seconds for chunk transfer |
| Provider not found | Gossipsub fallback |
| Relay unavailable | Direct connection attempt |
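The retry strategies in the table all follow the same shape: attempt, back off, give up after N tries. A generic sketch of the "3 attempts with 200ms backoff" case (the `retry` helper is hypothetical; the engine's actual error handling may be structured differently):

```rust
use std::thread;
use std::time::Duration;

/// Retry a fallible operation up to `attempts` times, sleeping for
/// `backoff` between failures. Returns the last error if all fail.
fn retry<T, E>(
    attempts: u32,
    backoff: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut last_err = None;
    for i in 0..attempts {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => {
                last_err = Some(e);
                if i + 1 < attempts {
                    thread::sleep(backoff); // e.g. 200ms for DHT operations
                }
            }
        }
    }
    Err(last_err.expect("attempts > 0"))
}

fn main() {
    // Fails twice, then succeeds on the third (final) attempt.
    let mut calls = 0;
    let result: Result<u32, &str> = retry(3, Duration::from_millis(1), || {
        calls += 1;
        if calls < 3 { Err("dht timeout") } else { Ok(42) }
    });
    assert_eq!(result, Ok(42));
    assert_eq!(calls, 3);
}
```

The same helper shape covers chunk requests by treating "try the next provider" as the retried operation.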