The epidemic in numbers: live data
Recent industry data shows how fast synthetic identity and AI-driven fraud have scaled:
- US lenders faced over $3.3 billion in exposure to suspected synthetic identities tied to new accounts in H1 2025 (up 3% from end of 2023). Estimated US economic losses from synthetic identity fraud may reach $30-35 billion annually.
- 67% of financial institutions and fintechs reported that fraud rates rose in 2025; 22% lost over $5M to fraud.
- 8.3% of digital account creation attempts in H1 2025 were suspected fraudulent - the highest-risk stage in the customer lifecycle.
- Synthetic identity fraud accounts for only 4% of fraud cases but drives 7% of financial losses - underscoring its high cost and long-lived nature.
- 89% of respondents rank synthetic identity creation as the most concerning fraud tactic evolving with AI; 64% cite AI/deepfake as a top fraud threat.
- Deepfake attacks have accelerated: one every five minutes in 2024; fraud attempts spiked 3,000% in 2023. Generative AI fraud in the US alone is projected to reach $40 billion by 2027.
- Human detection of high-quality deepfake video is only 24.5% effective - so relying on human review is not a defence.
Financial institutions that rely on collecting and storing static PII (IDs, selfies, biometric templates) are building a target. AffixIO removes that target: verify eligibility from external, live data; return only yes or no.
Synthetic identity: the Frankenstein problem
Synthetic identities are built by combining real and fake information - e.g. a real SSN or national ID number with a fabricated name, address, and history. They are hard to spot because part of the data is real; fraudsters "age" the accounts to build credit or trust, then bust out. Generative AI has made creating and scaling these identities trivial: fraud-as-a-service and commoditised tools mean that what once required significant expertise and investment can now be done at volume. Stolen biometric data is predicted to be the next vector - so even "verified" faces may be synthetic.
The AffixIO fit
AffixIO does not rely on the document or the face the user presents. The system consults one or more external databases in real time and applies your rules. The question is not "Does this selfie match this ID?" but "Does this request meet the eligibility criteria according to the authoritative source?" The answer is YES or NO. No synthetic identity document or stitched-together profile is ever stored; the system never has to "believe" the applicant - it believes the live response from the external source.
Deepfakes and the biometric bypass
Attackers use deepfakes (AI-generated or face-swapped images and video) and inject them into onboarding flows - e.g. via virtual camera software - so liveness and selfie checks see a synthetic face. Sophisticated synthetic media is now widely accessible; even systems with low false-acceptance rates become exploitable when attacks are automated at scale. The defence is not "better deepfake detection" on the document you receive; it is not receiving the document or face at all. Verify against external, cryptographically attested data instead.
The AffixIO fit
When AffixIO runs a check, it does not process a selfie or an ID image. It queries external database(s), verifies the data cryptographically, and returns a binary result. So there is no fake face for your system to accept or reject - the deepfake is irrelevant. The eligibility decision is based on live, authoritative data. Neutralize the deepfake threat without ever seeing the fake face.
The AffixIO play: no raw data, no liability
We tackle the epidemic by removing the raw data from the equation. Our system verifies transaction or eligibility validity without exposing individual transaction details. When a check is needed, the system consults one or more external databases in real time. Instead of hoarding easily faked documents, we use a rules-based decision engine to produce a yes or no result based on the external data received. We verify the truth of the data cryptographically. Result: no PII repository to breach, no face to deepfake against you, no synthetic-ID document to store.
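The rules-based decision engine described above can be sketched in a few lines. The rule names and data fields below are illustrative assumptions for the sketch, not AffixIO's actual circuit logic: each rule inspects the live response from an external source, and the engine folds the results into a single yes or no.

```python
from dataclasses import dataclass
from typing import Callable

# A rule inspects the live response from an external source and
# returns True (pass) or False (fail). Rule names are illustrative.
Rule = Callable[[dict], bool]

@dataclass
class DecisionEngine:
    rules: list[Rule]

    def decide(self, external_data: dict) -> bool:
        # The engine never stores the data; it only reduces the
        # rule results to a single binary outcome.
        return all(rule(external_data) for rule in self.rules)

# Hypothetical rules applied to a live registry response.
engine = DecisionEngine(rules=[
    lambda d: d.get("registry_match") is True,    # identity exists in the registry
    lambda d: d.get("age", 0) >= 18,              # meets an age threshold
    lambda d: not d.get("sanctions_hit", False),  # no sanctions flag
])

print(engine.decide({"registry_match": True, "age": 34, "sanctions_hit": False}))  # True
print(engine.decide({"registry_match": True, "age": 16}))                          # False
```

The point of the sketch: nothing the applicant submits reaches the engine, only data fetched live from the external source.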
This is the same stateless proof flow we use for Identify via API, zero-knowledge proofs, and sector use cases in financial services and identity. No PII stored; no document or selfie retained. Just a cryptographically verified yes or no.
Verify with the API
AffixIO's behaviour is documented and verifiable. The Binary Eligibility Verification API at api.affix-io.com exposes:
- POST /v1/verify - Send a pseudonymised or hashed identifier and a circuit_id (from GET /v1/circuits). The response includes eligible (boolean) and data_retained: null. The OpenAPI spec states: "Always null - no PII or identifier data retained by AffixIO."
- GET /v1/circuits - List verification circuits; each circuit resolves against relevant data sources (API, Database, Government Registry, etc.) and returns a binary eligibility result.
- Health - GET https://api.affix-io.com/health (no auth) returns service status so you can confirm the API is operational.
Full specification: openapi.json (OpenAPI 3.1). Production base URL: https://api.affix-io.com/v1. No documents or selfies are submitted or stored; only the binary outcome and an optional signed token are returned.
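A minimal client sketch for the POST /v1/verify call above. The Bearer authentication header and the SHA-256 hashing of the identifier are assumptions for illustration; consult openapi.json for the exact authentication scheme and accepted identifier formats.

```python
import hashlib
import json
import urllib.request

BASE_URL = "https://api.affix-io.com/v1"

def build_verify_request(raw_identifier: str, circuit_id: str, api_key: str):
    # Hash the identifier client-side so raw PII never leaves your system.
    # The Authorization header shape is an assumption; check openapi.json
    # for the authentication scheme your deployment uses.
    hashed = hashlib.sha256(raw_identifier.encode("utf-8")).hexdigest()
    body = json.dumps({"identifier": hashed, "circuit_id": circuit_id}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/verify",
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

req = build_verify_request("user@example.com", "kyc", "YOUR_API_KEY")
# response = urllib.request.urlopen(req)  # body: {"eligible": ..., "data_retained": null}
print(req.full_url)  # https://api.affix-io.com/v1/verify
```

Note that the raw identifier is hashed before the request is even constructed: the API only ever sees the pseudonymised value.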
Global impact: relevance by region
The synthetic identity and deepfake epidemic is a global issue. Regulators and institutions in every major market are tightening expectations and looking for verification that does not depend on storing static PII.
United States
$3.3B+ synthetic identity exposure in H1 2025; $30-35B estimated annual loss. CFPB and OCC have highlighted synthetic identity as a systemic risk. Rules-based, real-time checks against authoritative sources (e.g. SSN validation, credit headers) without storing full PII align with regulatory direction and reduce breach and deepfake exposure.
United Kingdom
UK finance and FCA are focused on identity fraud and authorised push payment (APP) fraud; synthetic identities and deepfakes increase risk. Verification that relies on external, real-time data and returns only an eligibility outcome supports stronger customer due diligence without creating a central store of documents or biometrics that could be targeted or misused.
European Union
eIDAS 2.0, DORA, and GDPR push toward minimal data collection and strong assurance. Storing copies of IDs and selfies increases liability and attack surface. A stateless, rules-based check that queries EU or national registries and returns only yes/no supports compliance and avoids holding the very data that deepfakes and synthetic IDs are designed to fake.
Asia-Pacific
APAC faces rising AI-driven fraud and varied regulatory approaches (e.g. India's DPDP, Australia's privacy reforms). Real-time verification against local authoritative sources with only a binary result returned reduces cross-border data transfer and limits exposure to synthetic identity and deepfake attacks that target document-based flows.
Summary. The synthetic identity and deepfake epidemic is overwhelming traditional defences. Collecting and storing static PII is a liability. AffixIO removes raw data from the equation: verify transaction or eligibility validity without exposing individual details. The system consults external database(s) in real time, applies a rules-based decision engine, and returns yes or no. Data is verified cryptographically - neutralizing the deepfake threat without ever seeing the fake face. For API access and identity/transaction verification circuits, contact hello@affix-io.com or use our contact page.
Circuits for this trend
Use these circuit IDs with the AffixIO API. List all circuits: GET https://api.affix-io.com/v1/circuits (see openapi.json). Run a check: POST /v1/verify with identifier and circuit_id.
- kyc (KYC Verification)
- simple-yesno (Simple Yes/No Circuit)
- composite (Composite Circuit)
- token-validation (Token Validation)
How AffixIO fits in
AffixIO provides the verification layer that does not depend on documents or selfies. As documented in the OpenAPI spec, you send an identifier (pseudonymised or hashed) and a circuit_id to api.affix-io.com; the connected circuit resolves against the relevant data source and returns a binary eligible result with data_retained: null. No PII repository, no face to deepfake, no synthetic-ID document to store. If you are responding to the synthetic identity and deepfake epidemic and need verification that does not hoard easily faked data, contact hello@affix-io.com or use our contact page for API access.
Frequently asked questions
What is synthetic identity fraud?
Synthetic identity fraud is when fraudsters combine real and fake information (e.g. a real SSN with a fabricated name and address) to create a "Frankenstein" identity that passes traditional checks. It accounts for a small share of fraud cases but drives a disproportionate share of losses - only 4% of cases but 7% of financial losses in the US. US lenders faced over $3.3 billion in exposure to suspected synthetic identities in H1 2025, with estimated annual economic losses of $30-35 billion. Generative AI has made creating and scaling synthetic identities much easier.
How do deepfakes bypass biometric and selfie verification?
Attackers use AI-generated or face-swapped images and video (deepfakes) and inject them into onboarding flows - e.g. via virtual camera software - so liveness and selfie checks see a synthetic face instead of a real one. Deepfake attempts have spiked (e.g. one every five minutes in 2024; 3,000% growth in fraud attempts in 2023). Human detection of high-quality deepfakes is only about 24.5% effective. Relying on "seeing" the face or storing biometrics makes you a target; verifying eligibility from external, authoritative data without ever handling the face neutralizes the deepfake vector.
Why is storing static PII a liability for financial institutions?
Static PII (documents, selfies, biometric templates) can be forged, stolen, or leaked. Once you store it, you own the breach risk and the regulatory burden. Synthetic identities and deepfakes are designed to look real at onboarding - so hoarding easily faked documents does not defend you; it creates a high-value target. AffixIO's approach: do not collect or store the raw data. Consult one or more external databases in real time, apply a rules-based decision engine, and return only a yes or no. No PII repository to attack; no face to deepfake against your system.
How does AffixIO verify without exposing transaction details?
Via the AffixIO API, you send a pseudonymised or hashed identifier and a circuit_id; the circuit resolves against the relevant data source (see openapi.json). The response returns eligible (true/false) and data_retained: null - the spec states that no PII or identifier data is retained. Your application never receives or stores raw PII or document images; only the binary outcome (and an optional signed token). That removes the synthetic identity and deepfake attack surface: there is no document or selfie for the system to accept or reject, only the result of a real-time check against the data source.
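Handling the response is deliberately trivial, because the only decision input is the eligible boolean. A sketch, with illustrative field values rather than live API output:

```python
import json

def handle_verify_response(raw_body: str) -> bool:
    # The only field your application acts on is the binary outcome.
    resp = json.loads(raw_body)
    if resp.get("data_retained") is not None:
        # Per the OpenAPI spec this field is always null;
        # anything else would be unexpected.
        raise ValueError("unexpected retained data in response")
    return bool(resp["eligible"])

# Illustrative response bodies (example values, not live data):
print(handle_verify_response('{"eligible": true, "data_retained": null}'))   # True
print(handle_verify_response('{"eligible": false, "data_retained": null}'))  # False
```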
What is rules-based verification vs document-based verification?
Document-based verification relies on the user submitting an ID, selfie, or other artifact; the system then tries to detect fakes (deepfakes, forgeries). Rules-based verification asks: does this request meet the eligibility criteria? The answer is obtained by querying external sources (e.g. registries, credit bureaus, government databases) in real time and applying business rules. The output is yes or no - no document or face is ever stored or processed. That neutralizes deepfakes and synthetic identity abuse because the system never "sees" the fake face or the Frankenstein document; it only sees the result of a live check against authoritative data.
How does cryptographic verification neutralize the deepfake threat?
Cryptographic verification ensures that the data used for the decision comes from a trusted source and has not been tampered with. The eligibility engine does not rely on user-supplied images or documents; it relies on signed, verifiable responses from external systems. So even if a fraudster has a perfect deepfake, your system never receives it - it receives only the outcome of a real-time query to an authoritative database. The deepfake is irrelevant; the threat is neutralized without ever seeing the fake face.
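One common way to check that a signed result came from a trusted source and has not been tampered with is a MAC comparison over the response payload. The HMAC-SHA256 scheme and shared key below are illustrative assumptions for the sketch; the actual format and algorithm of AffixIO's signed tokens are defined in openapi.json.

```python
import hashlib
import hmac

def verify_signed_result(payload: bytes, signature_hex: str, shared_key: bytes) -> bool:
    # Recompute the MAC over the payload and compare in constant time.
    # HMAC-SHA256 with a shared key is one illustrative scheme; public-key
    # signatures are another common choice for signed API responses.
    expected = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

key = b"demo-shared-key"
payload = b'{"eligible": true}'
sig = hmac.new(key, payload, hashlib.sha256).hexdigest()

print(verify_signed_result(payload, sig, key))                 # True
print(verify_signed_result(b'{"eligible": false}', sig, key))  # False (tampered payload)
```

Because the decision input is a verifiable response from the source, a perfect deepfake submitted by the applicant has nothing to attack: it never enters the verification path.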
Explore API access for identity and transaction verification.
Contact our team