FTC Position on Facial Recognition and Biometric Data
The Federal Trade Commission has established a sustained enforcement and policy posture around facial recognition technology and biometric data, treating both under its authority to address unfair or deceptive practices in commerce. This page covers the FTC's definitional framework for biometric data, the mechanisms through which the agency evaluates biometric practices, the commercial scenarios that draw the most scrutiny, and the threshold decisions that determine whether agency action is likely. Understanding this posture is essential for any organization deploying biometric systems in consumer-facing environments.
Definition and scope
The FTC treats biometric data as a category of sensitive personal information that includes physiological and behavioral characteristics capable of identifying a specific individual. This category encompasses facial geometry, fingerprints, voiceprints, iris patterns, gait data, and related derived measurements. The Commission's May 2023 policy statement on biometric information explicitly applied Section 5 of the FTC Act to the collection, use, and monetization of biometric data, framing unfair or deceptive practices in this domain as actionable violations even though no dedicated federal biometric statute exists.
Facial recognition occupies a distinct position within this broader category. Unlike a fingerprint scanner that requires active contact, facial recognition systems can operate passively and at scale — identifying or verifying individuals without their knowledge or deliberate participation. The FTC distinguishes between two functional modes:
- Identification — matching an unknown face against a database to determine who a person is
- Verification — confirming that a person matches a claimed identity (e.g., unlocking a device)
Verification is generally regarded as lower risk because the user typically initiates it and knows it is occurring. Identification at scale — particularly in retail, law enforcement support, or public-space surveillance contexts — draws significantly heavier regulatory concern from the Commission.
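The two modes can be sketched in code. The snippet below is a minimal, hypothetical illustration: it compares toy embedding vectors with cosine similarity, and the threshold, function names, and gallery structure are all invented for this sketch — no real face-matching pipeline is this simple.

```python
# Hypothetical sketch of the two functional modes the FTC distinguishes.
# Embeddings, names, and the threshold are illustrative, not a real system.
from math import sqrt

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

THRESHOLD = 0.9  # illustrative match threshold

def verify(probe, enrolled_template):
    """1:1 verification: does the probe match a claimed identity?"""
    return cosine_similarity(probe, enrolled_template) >= THRESHOLD

def identify(probe, gallery):
    """1:N identification: who, if anyone, does the probe match?"""
    best_id, best_score = None, 0.0
    for person_id, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= THRESHOLD else None
```

The structural difference is visible in the signatures: verification needs only the single enrolled template for the claimed identity, while identification searches an entire gallery, which is why it can operate on people who never enrolled or consented.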
The scope of FTC jurisdiction covers commercial entities subject to the FTC Act. This excludes common carriers, most non-profits, and banks, savings institutions, and federal credit unions overseen by the federal banking regulators, though those entities may face overlapping obligations from other federal agencies.
How it works
The FTC's biometric enforcement mechanism flows primarily through Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices in or affecting commerce (15 U.S.C. § 45). The Commission applies this standard to biometric data practices through two parallel analytical tracks:
Deception track: A company that collects facial images or biometric identifiers while making false or misleading representations about how that data is used, retained, or shared engages in deception. This includes affirmative misstatements in privacy policies, disclosures that omit material uses, or representations that data is not collected when it is.
Unfairness track: Even absent any representation, a practice that causes substantial injury to consumers, that consumers cannot reasonably avoid, and that is not outweighed by countervailing benefits, qualifies as unfair under Section 5(n). The FTC's 2023 policy statement identified six categories of biometric-related conduct that can satisfy the unfairness standard:
- Failing to disclose biometric data collection to consumers
- Deceiving consumers about how biometric data is used
- Using biometric data in ways that contradict stated policies
- Failing to maintain reasonable data security for biometric information
- Enabling illegal discrimination through biometric systems
- Violating consumer privacy through secondary uses of biometric data
The FTC's data security enforcement framework runs parallel to this analysis — companies that collect biometric data are expected to implement administrative, technical, and physical safeguards commensurate with the sensitivity of that data.
Common scenarios
Three commercial contexts generate the majority of FTC attention in the biometric space:
Retail surveillance: Retailers that deploy facial recognition to identify shoplifters or returning customers face heightened scrutiny when consumers are not meaningfully informed. The FTC has signaled that passive, non-consensual identification in physical retail environments raises unfairness concerns, particularly when the resulting data is retained or shared with third parties.
Employment and hiring: Biometric time-and-attendance systems and facial analysis tools used in automated video interviews collect biometric data in employment contexts. The FTC's concern in this space intersects with potential discriminatory outcomes: evaluations conducted by the National Institute of Standards and Technology (the NIST FRVT program) have documented that facial recognition error rates vary significantly across demographic groups, with some systems showing higher false-positive rates for darker-skinned women than for lighter-skinned men.
Identity verification in financial services and platforms: Online platforms and financial services providers increasingly use facial liveness checks and biometric verification to confirm account ownership or prevent fraud. The FTC views the data collected in these flows as sensitive regardless of the verification purpose — retention beyond the verification transaction and secondary use for advertising or analytics would likely be treated as unfair or deceptive.
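A verification flow that honors this expectation discards the template as soon as the transaction completes. The class and method names below are hypothetical; the point is the lifecycle, in which the biometric template exists only for the duration of a single check and is never retained for secondary uses.

```python
# Sketch of verification-scoped retention: the template is held only
# for the duration of one transaction. All names here are hypothetical.

class LivenessVerifier:
    def __init__(self, match_fn):
        self.match_fn = match_fn  # e.g., a 1:1 face-matching function
        self._pending = {}        # session_id -> enrolled template

    def begin(self, session_id, enrolled_template):
        """Start a verification session, holding the template temporarily."""
        self._pending[session_id] = enrolled_template

    def finish(self, session_id, probe):
        """Run the check, then drop the template regardless of outcome."""
        template = self._pending.pop(session_id)  # deleted after this call
        return self.match_fn(probe, template)
```

The deliberate design choice is that `finish` removes the template from storage whether or not the match succeeds; keeping it around "in case it is useful later" is exactly the secondary-use pattern the FTC flags.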
The FTC's broader privacy framework situates these scenarios within the agency's general expectations around data minimization, purpose limitation, and meaningful consumer consent.
Decision boundaries
The FTC does not enforce against all biometric deployments — the agency's enforcement decisions reflect a structured assessment of risk factors. The distinctions that most reliably separate low-risk from high-risk practices include:
Notice vs. no notice: Companies that provide clear, specific, and prominent disclosure of biometric collection before that collection occurs occupy materially safer ground than those relying on buried privacy policy language or no disclosure at all.
Consent architecture: Affirmative opt-in consent for biometric collection is treated differently from opt-out or implied-consent frameworks. The Commission's guidance consistently treats opt-in consent as the appropriate default for sensitive data categories.
Data retention limits: Retaining biometric data beyond the functional purpose for which it was collected — such as keeping facial geometry templates indefinitely after a one-time verification — is a named risk factor in the 2023 policy statement.
Third-party sharing: Disclosing biometric data to data brokers, advertisers, or analytics providers absent specific consent is treated as a high-risk secondary use that compounds underlying collection issues.
Algorithmic accuracy disparities: Deploying a facial recognition system with documented demographic accuracy disparities in a context where errors cause material harm — such as wrongful shoplifter identification — can support an unfairness finding even when the underlying deployment is otherwise disclosed.
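As an illustration only, the five decision boundaries above could be treated as an internal screening checklist. The weights and tier cutoffs below are invented for this sketch and do not reflect any published FTC scoring; they simply show how an organization might triage deployments for legal review.

```python
# Hypothetical risk screen built from the five decision boundaries above.
# Weights and cutoffs are illustrative, not an FTC methodology.
RISK_FACTORS = {
    "no_prior_notice": 3,             # collection without clear, prominent disclosure
    "no_opt_in_consent": 3,           # opt-out or implied consent instead of opt-in
    "indefinite_retention": 2,        # templates kept past the functional purpose
    "third_party_sharing": 2,         # disclosure to brokers/advertisers without consent
    "known_accuracy_disparities": 2,  # demographic error gaps in harm-prone contexts
}

def risk_score(flags):
    """Sum the weights of the flagged risk factors."""
    return sum(RISK_FACTORS[f] for f in flags)

def risk_tier(flags):
    """Bucket a deployment for internal review based on its score."""
    score = risk_score(flags)
    if score >= 6:
        return "high"
    if score >= 3:
        return "elevated"
    return "low"
```

A deployment flagged for both missing notice and missing opt-in consent would land in the "high" tier under these invented weights, matching the intuition in the text that notice and consent architecture are the most reliable separators of low-risk from high-risk practices.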
The FTC's notable cases and settlements record provides concrete examples of how these decision boundaries have been applied in enforcement actions. For a broader orientation to the agency's statutory authority across consumer protection domains, the FTC Authority homepage offers structured access to the full scope of Commission jurisdiction.