Vera VB is a longitudinal voice biometric identity platform — the verified voice identity layer that sits below all voice AI platforms. It enables creators to build, own, and protect their personal biometric training dataset with timestamped, tamper-evident registration that they control regardless of what any platform does.
Built in response to a specific observation: the AI voice industry is replicating the music industry's IP extraction model at compressed timescales, and no platform-independent voice identity infrastructure exists for creators. Vera VB is designed to be that infrastructure — the biometric identity layer that complements, rather than competes with, emerging provenance standards.
Every major AI voice platform requires creators to upload audio data. The terms governing that upload — perpetual licenses, irrevocable rights, rights to train on and commercialize the voice — are written to maximize the platform's flexibility and minimize the creator's future options. This is not a new pattern. It is the same pattern the music industry used for decades, now executing at the speed of software deployments rather than contract negotiations.
The problem is structural. There is no platform-independent infrastructure for voice creators to establish verified ownership of their own biometric data before platforms capture it. Existing provenance standards like C2PA are designed to prove file authorship — they do not cover biometric voice identity, clone detection, or longitudinal aging analysis. Vera VB fills that gap: not as a protection tool or a monitoring service, but as an identity layer that creators own independently of any platform relationship.
The most important architectural decision was framing. Vera VB is not a "voice protection tool" — a category that will be marginalized as platform compliance features improve. It is a biometric identity layer — a category that is additive to every provenance standard that emerges, because no provenance standard covers biometric voice identity.
C2PA will commoditize file-level timestamps. Blockchain provenance will handle content authenticity. None of these systems address the biometric identity question: is this voice sample from the same speaker as a reference set registered two years ago, and can that claim be verified independently? That question requires speaker embeddings, longitudinal aging analysis, and clone detection — capabilities that Vera VB builds as its core, not as features.
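The "same speaker?" question above reduces, at its simplest, to comparing a new sample's speaker embedding against the registered reference set. The sketch below is an illustrative assumption, not Vera VB's actual pipeline: real systems use learned embeddings (x-vectors or similar), and the `0.75` threshold and function names are hypothetical.

```typescript
// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Verify a sample against the whole reference set by averaging similarity,
// so one noisy registration sample does not dominate the decision.
// The threshold value is an illustrative assumption.
function sameSpeaker(
  sample: number[],
  references: number[][],
  threshold = 0.75
): boolean {
  const mean =
    references.map((ref) => cosine(sample, ref)).reduce((s, x) => s + x, 0) /
    references.length;
  return mean >= threshold;
}
```

Longitudinal aging analysis extends this idea: instead of one fixed reference set, the comparison runs against references registered at multiple points in time, tracking how the embedding drifts as the voice ages.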
The architecture is designed around a single constraint that drives every other decision: Vera VB must never be a voice data honeypot. If the platform held audio, it would be the single most valuable — and most targeted — voice data repository on the internet. By holding only derived features, a breach yields nothing exploitable. This constraint is a structural advantage, not a limitation.
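To make the "derived features only" constraint concrete, here is a sketch of what a stored registration record might look like. The field names are assumptions for illustration, not Vera VB's actual schema; the point is structural: raw audio bytes never appear in the record, so a breach yields nothing replayable.

```typescript
// Illustrative shape of a persisted registration record: derived features
// only. There is deliberately no field holding audio bytes.
interface VoiceRegistrationRecord {
  creatorId: string;
  registeredAt: string; // ISO-8601 timestamp for the tamper-evident chain
  audioSha256: string;  // hash of the encrypted audio held in the creator's own storage
  embedding: number[];  // speaker embedding — a derived feature, not audio
}

// Example record with placeholder values. An embedding cannot be inverted
// back into playable audio, which is what makes the constraint structural.
const example: VoiceRegistrationRecord = {
  creatorId: "creator-123",
  registeredAt: new Date(0).toISOString(),
  audioSha256:
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
  embedding: [0.12, -0.33, 0.91],
};
```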
The five-layer verification stack is the core product. Each layer is independently verifiable by a third party: courts, platforms, compliance officers, or the creator themselves.
The processing pipeline is designed for ephemeral execution — decrypted audio is never written to disk. Client-side Web Crypto API encryption precedes upload; Cloud Functions process audio ephemerally in memory; only derived features are written to Firestore and BigQuery. User audio is stored only in the user's own Google Drive, under their own Google account credentials.