Engineering Nexus: How I Built Secure E2EE Network Sync Into a Linux Clipboard Manager
2026-04-16 05:02:17 · infosecwriteups.com

freerave

Zero servers. Zero cloud. Zero plaintext on the wire — just PyQt6, cryptography, and a local network.


Building a clipboard manager is a weekend project. Building one that syncs securely across devices on a local network — without a central server, without trusting the network, and without ever touching the cloud — that’s a different problem entirely.

DotGhostBoard v1.5.0 (Nexus) is my answer to that problem. It’s a privacy-first clipboard manager for Linux, built under the DotSuite umbrella. No telemetry. No Electron. No cloud. Pure PyQt6 + SQLite.

This post is a full architectural breakdown of how the sync layer works — from device discovery to encrypted payload delivery.

The Architecture at a Glance

The system has three independent security layers, each doing one job:

┌──────────────────────────────────────────────────────────────┐
│                     LOCAL NETWORK (LAN)                      │
│                                                              │
│  ┌──────────────┐     mDNS Discovery     ┌──────────────┐    │
│  │   Device A   │ ◄────────────────────► │   Device B   │    │
│  │    (Arch)    │                        │    (Kali)    │    │
│  │              │    X25519 Handshake    │              │    │
│  │  ghostboard  │ ───── PIN + ECDH ────► │  ghostboard  │    │
│  │              │                        │              │    │
│  │  HTTPServer  │ ◄─── AES-256-GCM ───── │  HTTPServer  │    │
│  │    :PORT     │    /api/sync  E2EE     │    :PORT     │    │
│  └──────┬───────┘                        └──────┬───────┘    │
│         │                                       │            │
│     ghost.db                                ghost.db         │
│  (trusted_peers)                        (trusted_peers)      │
└──────────────────────────────────────────────────────────────┘

Three layers. Each independently secure. Let’s go through them.

Layer 1 — Zero-Config Device Discovery with mDNS

The first UX problem: how do devices find each other without asking the user to type an IP address?

Answer: mDNS via zeroconf. Every device broadcasts itself on the LAN under a custom service type _dotghost._tcp.local.. Other instances listen and populate the UI automatically — no configuration, no manual IP entry.

Because the app runs on PyQt6, the discovery engine lives in its own QThread. Blocking network I/O never touches the main thread:

# core/network_discovery.py
import socket

from zeroconf import ServiceBrowser, Zeroconf, ServiceInfo, IPVersion
from PyQt6.QtCore import pyqtSignal, QThread

_SERVICE_TYPE = "_dotghost._tcp.local."


def get_local_ip() -> str:
    """Best-effort LAN IP: connect a UDP socket toward a public address
    (no packets are actually sent) and read the chosen source address."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]


class DotGhostDiscovery(QThread):
    peer_found = pyqtSignal(str, str, str, int)  # node_id, name, ip, port
    peer_lost = pyqtSignal(str)                  # node_id

    def __init__(self, node_id: str, device_name: str, port: int):
        super().__init__()
        self.node_id = node_id
        self.device_name = device_name
        self.port = port
        self.zeroconf = None

    def run(self):
        self.zeroconf = Zeroconf(ip_version=IPVersion.V4Only)
        properties = {
            b'node_id': self.node_id.encode('utf-8'),
            b'device_name': self.device_name.encode('utf-8'),
            b'version': b'1',
        }
        instance_name = f"{self.node_id}.{_SERVICE_TYPE}"
        self.info = ServiceInfo(
            type_=_SERVICE_TYPE,
            name=instance_name,
            addresses=[socket.inet_aton(get_local_ip())],
            port=self.port,
            properties=properties,
            server=f"{self.node_id}.local."
        )
        self.zeroconf.register_service(self.info)
        self.browser = ServiceBrowser(self.zeroconf, _SERVICE_TYPE, self)
        self.exec()  # Qt event loop keeps the thread alive

    def add_service(self, zc: Zeroconf, type_: str, name: str):
        info = zc.get_service_info(type_, name)
        if not info:
            return
        props = info.properties
        node_id = props.get(b'node_id', b'').decode()
        dev_name = props.get(b'device_name', b'Unknown').decode()
        if node_id == self.node_id:  # skip our own broadcast
            return
        ip = socket.inet_ntoa(info.addresses[0])
        self.peer_found.emit(node_id, dev_name, ip, info.port)

    def update_service(self, zc: Zeroconf, type_: str, name: str):
        pass  # required by the zeroconf listener interface; nothing to do

    def remove_service(self, zc: Zeroconf, type_: str, name: str):
        node_id = name.replace(f".{_SERVICE_TYPE}", "")
        self.peer_lost.emit(node_id)

    def stop(self):
        if self.zeroconf:
            self.zeroconf.unregister_service(self.info)
            self.zeroconf.close()
        self.quit()

Why QThread and not threading.Thread? peer_found and peer_lost are pyqtSignals. They cross the thread boundary safely into the main UI thread via Qt's queued connection mechanism. A raw Python thread here would cause a race condition against the UI.

Layer 2 — Secure Device Pairing (X25519 + PBKDF2 + AES-GCM)

Finding a peer is one thing. Trusting it is another.

A local network isn’t inherently safe — public Wi-Fi, ARP spoofing, a compromised router. The pairing protocol defends against all of it with a three-phase handshake:

Device A                                      Device B
   │                                             │
   │ 1. Generate ephemeral X25519 key            │
   │ 2. Derive wrap key from PIN+salt            │
   │ 3. Encrypt pubkey → send ──────────────────►│
   │                                             │ 4. Decrypt pubkey with PIN+salt
   │                                             │ 5. Generate ephemeral X25519 key
   │                                             │ 6. Derive shared secret (ECDH)
   │◄──────────────── send encrypted ────────────│ 7. Encrypt own pubkey → send
   │ 8. Derive shared secret (ECDH)              │
   │ 9. Discard ephemeral keys                   │ 9. Discard ephemeral keys
   │                                             │
   │          Shared Secret stored in DB         │

The PIN is a 6-digit out-of-band value shown on both screens — a human-verified channel that breaks any MITM attempt. Even if an attacker intercepts the traffic, they can’t decrypt the public keys without the PIN.

# core/pairing.py
import os
import base64

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import x25519
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

_KDF_ITERATIONS = 100_000  # OWASP minimum for PBKDF2-SHA256


def derive_handshake_key(pin: str, salt: bytes) -> bytes:
    """
    Derive a 256-bit wrapping key from a 6-digit PIN + dynamic salt.
    The salt is generated fresh per pairing session - its job is to
    prevent precomputed PIN dictionaries, not to be secret.
    """
    kdf = PBKDF2HMAC(
        algorithm=hashes.SHA256(),
        length=32,
        salt=salt,
        iterations=_KDF_ITERATIONS,
    )
    return kdf.derive(pin.encode("utf-8"))


def generate_pairing_keys() -> tuple[x25519.X25519PrivateKey, bytes]:
    """Ephemeral X25519 key pair - lives only for the duration of the handshake."""
    private_key = x25519.X25519PrivateKey.generate()
    public_key_bytes = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw
    )
    return private_key, public_key_bytes


def encrypt_pairing_payload(public_key_bytes: bytes, handshake_key: bytes) -> str:
    """
    Encrypt the public key using the PIN-derived wrapping key.
    Layout: [ 12-byte nonce | ciphertext + 16-byte GCM tag ]
    """
    aesgcm = AESGCM(handshake_key)
    nonce = os.urandom(12)
    ciphertext = aesgcm.encrypt(nonce, public_key_bytes, None)
    return base64.b64encode(nonce + ciphertext).decode("utf-8")


def decrypt_pairing_payload(payload: str, handshake_key: bytes) -> bytes:
    """Reverse of encrypt_pairing_payload. Raises InvalidTag on a wrong PIN."""
    raw = base64.b64decode(payload)
    nonce, ciphertext = raw[:12], raw[12:]
    aesgcm = AESGCM(handshake_key)
    return aesgcm.decrypt(nonce, ciphertext, None)


def derive_shared_secret(
    private_key: x25519.X25519PrivateKey,
    peer_public_key_bytes: bytes
) -> bytes:
    """
    Complete the ECDH exchange. Both sides arrive at the same 32-byte value
    without it ever being transmitted.
    """
    peer_public_key = x25519.X25519PublicKey.from_public_bytes(peer_public_key_bytes)
    return private_key.exchange(peer_public_key)

Why X25519 over RSA or P-256? X25519 is faster, has a smaller key size (32 bytes), is immune to invalid-curve attacks by design, and is the default in TLS 1.3. It’s the right choice for a constrained local protocol.

Once the handshake completes, the shared secret is stored in ghost.db and the ephemeral private keys are immediately garbage-collected.
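To make the three-phase flow concrete, here is a minimal round-trip sketch built on the same primitives. The PIN, salt, and helper names here are illustrative, not values or APIs from the real app:

```python
# Hypothetical end-to-end run of the handshake: both sides wrap their
# ephemeral X25519 public key under a PBKDF2 key derived from the on-screen
# PIN, then complete ECDH. Only A -> B is shown; B -> A mirrors it.
import os
import base64
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import x25519
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_key_from_pin(pin: str, salt: bytes) -> bytes:
    return PBKDF2HMAC(hashes.SHA256(), 32, salt, 100_000).derive(pin.encode())

def raw_pub(priv: x25519.X25519PrivateKey) -> bytes:
    return priv.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

pin, salt = "483902", os.urandom(16)   # PIN shown on both screens (example value)
k = wrap_key_from_pin(pin, salt)

a_priv, b_priv = (x25519.X25519PrivateKey.generate() for _ in range(2))

# A -> B: A's public key, AES-GCM-wrapped under the PIN-derived key
nonce = os.urandom(12)
a_wire = base64.b64encode(nonce + AESGCM(k).encrypt(nonce, raw_pub(a_priv), None))

# B unwraps A's key (a wrong PIN would raise InvalidTag here)
raw = base64.b64decode(a_wire)
a_pub = AESGCM(k).decrypt(raw[:12], raw[12:], None)

# Both sides derive the same 32-byte shared secret via ECDH
secret_b = b_priv.exchange(x25519.X25519PublicKey.from_public_bytes(a_pub))
secret_a = a_priv.exchange(b_priv.public_key())
assert secret_a == secret_b and len(secret_a) == 32
```

The final assertion is the whole point of the protocol: the shared value never crosses the wire, and nothing in the exchange is useful without the out-of-band PIN.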


-- Storage schema for trusted peers
CREATE TABLE trusted_peers (
    id            INTEGER PRIMARY KEY AUTOINCREMENT,
    node_id       TEXT UNIQUE NOT NULL,
    device_name   TEXT NOT NULL,
    shared_secret BLOB NOT NULL,  -- raw 32 bytes from ECDH
    paired_at     TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

Layer 3 — The Local REST API, Rate Limiting & Thread-Safe UI

With discovery and pairing solved, the actual sync transport is a minimal HTTPServer running in a background thread — bound to 0.0.0.0 but protected by two hard gates.

Gate 1 — Peer Identity: every /api/sync request must carry a node_id that maps to a stored trusted peer. Unknown nodes get a 403 immediately.

Gate 2 — E2EE Payload: even if someone spoofs a node_id, they can't forge a valid AES-GCM ciphertext without the shared secret. Wrong key = InvalidTag exception = instant drop.

# core/api_server.py
import json
import time
import urllib.parse
from collections import defaultdict
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Lock

# storage and decrypt_from_peer come from sibling core modules (not shown here)


class _RateLimiter:
    """Sliding-window rate limiter - 3 pairing attempts per 60 s per IP."""
    def __init__(self, max_attempts: int = 3, window: int = 60):
        self._attempts = defaultdict(list)
        self._lock = Lock()
        self.max = max_attempts
        self.window = window

    def is_allowed(self, ip: str) -> bool:
        now = time.time()
        with self._lock:
            self._attempts[ip] = [
                t for t in self._attempts[ip] if now - t < self.window
            ]
            if len(self._attempts[ip]) >= self.max:
                return False
            self._attempts[ip].append(now)
            return True


_rate_limiter = _RateLimiter()


class GhostAPIHandler(BaseHTTPRequestHandler):
    def log_message(self, format, *args):
        pass  # silence default HTTP logs

    def _send_response(self, code: int, body: dict):
        payload = json.dumps(body).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def do_POST(self):
        parsed = urllib.parse.urlparse(self.path)
        client_ip = self.client_address[0]

        if parsed.path == '/api/pair':
            if not _rate_limiter.is_allowed(client_ip):
                self._send_response(429, {
                    "status": "error",
                    "message": "Too many pairing attempts. Try again later."
                })
                return
            # ... PIN verification and key exchange logic
            self._send_response(200, {"status": "paired"})
            return

        if parsed.path == '/api/sync':
            body = self.rfile.read(int(self.headers.get('Content-Length', 0)))
            data = json.loads(body)
            peer_node_id = data.get("node_id")
            peer = storage.get_trusted_peer(peer_node_id)
            if not peer:
                self._send_response(403, {"status": "error", "message": "Untrusted peer"})
                return
            try:
                plaintext = decrypt_from_peer(
                    data.get("payload"),
                    peer["shared_secret"]
                )
            except Exception:
                self._send_response(403, {"status": "error", "message": "Decryption failed"})
                return
            item_id = storage.add_item("text", plaintext)
            # Cross-thread UI update via Qt signal - safe from any thread
            self.server.qthread_parent.sync_received.emit(item_id, plaintext)
            self._send_response(201, {"status": "synced"})
            return

        # Anything else: respond rather than leaving the client hanging
        self._send_response(404, {"status": "error", "message": "Unknown endpoint"})

Why HTTPServer over WebSockets or raw TCP? HTTP gives request/response semantics for free, works through most firewalls, and is trivially testable with curl. The overhead is negligible for clipboard payloads. When v3.x arrives, the transport will be upgraded to WebRTC for true NAT-piercing P2P.
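That testability claim is easy to demonstrate with the standard library alone. Here's a stub endpoint with the same trust gate as the real handler, exercised in-process — the hard-coded trust set and stub names are mine, not the app's:

```python
# Self-contained demo of HTTP's test-friendliness: a stub /api/sync that
# rejects unknown node_ids, spun up on an ephemeral port and hit with urllib.
import json
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

TRUSTED = {"abc123"}  # stands in for the trusted_peers table

class StubHandler(BaseHTTPRequestHandler):
    def log_message(self, *args):
        pass  # keep test output quiet

    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        node_id = json.loads(body).get("node_id")
        code = 201 if node_id in TRUSTED else 403
        payload = json.dumps(
            {"status": "synced" if code == 201 else "error"}).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

server = ThreadingHTTPServer(("127.0.0.1", 0), StubHandler)  # port 0 = ephemeral
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/api/sync"

def post(node_id: str) -> int:
    req = urllib.request.Request(
        url, data=json.dumps({"node_id": node_id}).encode(),
        headers={"Content-Type": "application/json"})
    try:
        return urllib.request.urlopen(req).status
    except urllib.error.HTTPError as e:
        return e.code  # urlopen raises on 4xx; the status is what we want

ok, denied = post("abc123"), post("mallory")
server.shutdown()
```

A WebSocket equivalent of this five-line smoke test would need a client library, a handshake, and framing before you could assert anything.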

The Full Sync Flow — End to End

Here’s exactly what happens when you copy something on Device A and it appears on Device B:

Device A (sender)                          Device B (receiver)
─────────────────                          ───────────────────
1. User copies text
2. ClipboardMonitor detects change
3. Encrypt with shared_secret
   [ AES-256-GCM | random 12-byte nonce ]
4. POST /api/sync ──────────────────────►  5. GhostAPIHandler.do_POST()
   {                                       6. Lookup peer by node_id
     "node_id": "abc123",                  7. Decrypt with shared_secret
     "payload": "<base64 ciphertext>"      8. storage.add_item()
   }                                       9. sync_received.emit()
                                          10. UI updates in main thread
  ◄──────────────── 201 { "status": "synced" }
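The handler earlier calls decrypt_from_peer without listing it. A minimal pair of helpers consistent with the wire format described in step 3 — base64 of a random 12-byte nonce prefixed to the AES-256-GCM ciphertext, keyed by the stored shared secret — might look like this (a sketch, not the app's exact code):

```python
# Hypothetical sync-payload helpers matching the described wire format:
# base64( 12-byte nonce | AES-256-GCM ciphertext+tag ). Names are illustrative.
import os
import base64
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_peer(plaintext: str, shared_secret: bytes) -> str:
    nonce = os.urandom(12)  # fresh random nonce per message
    ct = AESGCM(shared_secret).encrypt(nonce, plaintext.encode("utf-8"), None)
    return base64.b64encode(nonce + ct).decode("ascii")

def decrypt_from_peer(payload: str, shared_secret: bytes) -> str:
    raw = base64.b64decode(payload)
    # Raises InvalidTag if the key is wrong or the payload was tampered with
    pt = AESGCM(shared_secret).decrypt(raw[:12], raw[12:], None)
    return pt.decode("utf-8")

key = os.urandom(32)  # stands in for the peer's stored shared_secret
wire = encrypt_for_peer("hello from Device A", key)
assert decrypt_from_peer(wire, key) == "hello from Device A"
```

Because GCM authenticates as well as encrypts, the receiver's except-and-403 path covers both a wrong key and a modified ciphertext with one check.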

Zero plaintext on the wire. Zero server in the middle. Zero cloud.


Securing the Build: GPG-Signed Releases

A secure app with an unsigned binary is still a supply chain risk. Every release artifact — both the .AppImage and the .deb — is GPG-signed in CI.

# .github/workflows/build-all.yml
- name: Sign AppImage (GPG)
  run: |
    echo "${{ secrets.GPG_PRIVATE_KEY }}" | gpg --import --batch --yes
    gpg --batch --yes --pinentry-mode loopback \
        --passphrase "${{ secrets.GPG_PASSPHRASE }}" \
        --detach-sign --armor \
        DotGhostBoard-*.AppImage

- name: Sign DEB Package (GPG)
  run: |
    dpkg-sig --sign builder \
        -k "${{ secrets.GPG_KEY_ID }}" \
        --gpg-options "--passphrase ${{ secrets.GPG_PASSPHRASE }} --pinentry-mode loopback --batch --yes" \
        dotghostboard_*.deb

- name: Generate SHA256 checksums
  run: |
    cd out && sha256sum * > SHA256SUMS.txt

Users can verify any release locally:

# Verify AppImage
gpg --verify DotGhostBoard-1.5.1-x86_64.AppImage.asc \
    DotGhostBoard-1.5.1-x86_64.AppImage

# Verify DEB
dpkg-sig --verify dotghostboard_1.5.1_amd64.deb

# Verify checksum
sha256sum -c SHA256SUMS.txt

Lessons Learned

mDNS is fragile on some Linux setups. If avahi-daemon is running and competing for port 5353, zeroconf will fail silently. Detect the conflict early and surface it in the UI — don't leave the user with an empty peers list and no explanation.

PyInstaller and cryptography need explicit hidden imports. The package uses dynamic backend loading. Without --hidden-import cryptography.hazmat.primitives.asymmetric.x25519 and the aead module, the AppImage crashes at runtime with a bare ImportError that's nearly impossible to diagnose without already knowing where to look.

dpkg-sig hangs in CI without --pinentry-mode loopback. It silently waits for a terminal that doesn't exist. Always pass full GPG options explicitly in non-interactive environments.

Rate limiting shared state needs a lock. The sliding window dict is accessed from multiple HTTP handler threads simultaneously. Without a threading.Lock, you get a race condition under concurrent pairing attempts that's near-impossible to reproduce locally.

What’s Next — v2.0.0 Cerberus

The next release is Cerberus — a Zero-Knowledge Password Vault. The AES-256 infrastructure from v1.4.0 (Eclipse) already lays the foundation. What’s coming on top:

  • A fully isolated vault.db — separate file, separate connection, locked when not in use
  • Pattern-based secret detection using regex: JWT, AWS keys, GitHub tokens, high-entropy hex strings — shape detection, not keyword matching
  • Auto-clear: wipes the clipboard 30 seconds after a Vault paste
  • Paranoia Mode: a toggle that suspends all DB writes temporarily

The core design decision in Cerberus: detection happens at the shape of a string, not its meaning. A 1500-word article that mentions “password” doesn’t trigger anything. A 40-character base64 string with high Shannon entropy does.
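A sketch of that shape-based detection — a couple of well-known token patterns plus a Shannon-entropy gate. The thresholds and pattern list here are illustrative, not Cerberus's actual rules:

```python
# Illustrative shape-based secret detection: regex patterns for known token
# formats plus a Shannon-entropy gate for opaque high-entropy strings.
# The thresholds (32 chars, 3.5 bits/char) are examples only.
import math
import re
from collections import Counter

_TOKEN_SHAPES = [
    re.compile(r"AKIA[0-9A-Z]{16}"),           # AWS access key ID
    re.compile(r"eyJ[\w-]+\.[\w-]+\.[\w-]+"),  # JWT: three base64url segments
]

def shannon_entropy(s: str) -> float:
    """Bits per character of the string's empirical character distribution."""
    if not s:
        return 0.0
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_secret(s: str) -> bool:
    # Gate 1: the string contains a known token shape
    if any(p.search(s) for p in _TOKEN_SHAPES):
        return True
    # Gate 2: opaque blob - long, no spaces, high per-character entropy
    return len(s) >= 32 and " " not in s and shannon_entropy(s) >= 3.5
```

For intuition: a hex string using all sixteen symbols evenly measures exactly 4.0 bits/char and trips the gate, while an English sentence fails on both the space check and the entropy threshold — the word "password" appearing in prose means nothing to this detector.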

Download & Source

If you find a security issue, please reach out directly before opening a public issue.

Known Issue

The .deb package in v1.5.1 has a minor theme inconsistency on first launch, and the update path migration between v1.4.x and v1.5.x requires a manual step that isn't surfaced in the UI yet.

Both will be resolved in v1.5.2 — dropping tomorrow.

I’d rather ship honest software and fix fast than pretend it’s perfect.


DotSuite — built for the shadows 👻


Source: https://infosecwriteups.com/engineering-nexus-how-i-built-secure-e2ee-network-sync-into-a-linux-clipboard-manager-70f9d01fddda?source=rss----7b722bfd1b8d---4