What is a “Spoof Bounty Program”?
Spoof bounty programs are the future of biometric security testing because no lab can possibly create or purchase all of the spoof artifacts that can be crowd-sourced from even a small spoof bounty program. Most labs test for presentation attack detection (PAD) using only five or six spoof artifacts. Test sets this small have almost no significance in the real world given that about 1-2% of sessions during account onboarding (initial new account signups) are spoofs.
For example, if you had one million users, your biometric authenticator would see 10,000-20,000 spoof artifacts. Contrast that with the five or six used in today's best laboratory testing, and you can understand why it's much tougher to be secure in the real world.
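The exposure math above can be sketched in a few lines (the one-million-user figure and the 1-2% spoof rate come straight from the text; the function name is illustrative):

```python
def expected_spoofs(sessions: int, spoof_rate: float) -> int:
    """Expected number of spoof attempts at a given spoof rate."""
    return round(sessions * spoof_rate)

sessions = 1_000_000
low = expected_spoofs(sessions, 0.01)   # 1% spoof rate
high = expected_spoofs(sessions, 0.02)  # 2% spoof rate
print(f"Expected spoof attempts: {low:,} to {high:,}")
# -> Expected spoof attempts: 10,000 to 20,000
```

Even at the low end, that is three orders of magnitude more artifacts than a typical lab test set.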
To avoid becoming another public test case, insist that your biometric vendor maintain a persistent spoof bounty program, ensuring they are aware of and robust to emerging threats like deepfakes. As of today, the only biometric authentication vendor with an active, real-world spoof bounty is FaceTec. Its $75,000 Spoof Bounty Program has already rebuffed over 15,000 real-world spoof attacks, and its goal remains to uncover unknown vulnerabilities in the liveness AI and security scheme so they can be patched and the anti-spoofing capabilities elevated even further.
For more information on liveness detection, please visit www.Liveness.com.
Later in 2016, Schuckers's follow-up paper, "Presentations and Attacks, and Spoofs, Oh My", continued to influence presentation attack detection research and testing.
Are Facial Recognition and Face Authentication the Same?
No, they are not. It is critical to a basic understanding of these biometric technologies to start using the correct terminology, to prevent further confusion about how they differ and where each is best used.
Facial recognition is for surveillance. It's the 1-to-N matching of images captured with cameras the user doesn't control, like those used in a casino or an airport. And it only provides "possible" matches for the surveilled person from face photos stored in an existing database.
Face authentication (1:1 matching+liveness), in contrast, takes user-initiated data collected from a user-controlled device and confirms the legitimate user's identity for their own direct benefit, like, for example, secure account access.
Detractors argue, "You can reset your password if stolen, but you can't reset your face." While this is true, it is a failure of imagination and understanding to stop there. We must ask, "What would make centralized biometric authentication safe?"
Anti-Spoofing for Onboarding, KYC and Enrollment
Requiring every new user to prove their liveness before they are even asked to present an ID document during digital onboarding is itself a huge deterrent to fraudsters who never want their real face on camera.
Due to "hill-climbing" attacks (see Glossary, below), biometric systems should never reveal which part of the system did or didn't catch a spoof. And while ISO 30107-3 gets a lot right, it unfortunately encourages testing both Liveness and Matching at the same time. The scientific method requires that as few variables as possible be tested at once, so Liveness testing should be done with a solely Boolean (true/false) response. Tests should not allow systems to have multiple decision layers that could let an artifact pass Liveness but fail Matching because it didn't "look" enough like the enrolled subject.
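A minimal sketch of the Boolean-only response the text recommends. All names here (`make_liveness_checker`, the 0.90 threshold, the stand-in scoring lambda) are assumptions for illustration, not any vendor's actual API:

```python
def make_liveness_checker(score_fn, threshold=0.90):
    """Wrap an internal liveness-scoring model so callers only ever
    see a Boolean decision.

    Exposing the raw score, or which sub-check failed, would give a
    hill-climbing attacker exactly the feedback they need; the score
    and threshold therefore stay server-side.
    """
    def check(session_data) -> bool:
        return score_fn(session_data) >= threshold
    return check

# Demo with a stand-in scoring model (a real one is a trained liveness AI):
check = make_liveness_checker(lambda data: 0.97 if data == "genuine" else 0.12)
print(check("genuine"))  # True
print(check("photo"))    # False
```

The design point is simply that the public interface returns one bit, so a failed spoof attempt teaches the attacker nothing about how close it came.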
Gartner: "Presentation attack detection (PAD, a.k.a. 'liveness testing') is a key selection criterion." ISO/IEC 30107, "Information Technology — Biometric Presentation Attack Detection", was published in 2017.
Ghiani, L., Yambay, D.A., Mura, V., Marcialis, G.L., Roli, F. and Schuckers, S.A., 2017. Review of the Fingerprint Liveness Detection (LivDet) competition series: 2009 to 2015. Image and Vision Computing, 58, pp.110-128.
Schuckers, S., 2016. Presentations and attacks, and spoofs, oh my. Image and Vision Computing, 55, pp.26-30.
Schuckers, S.A., 2002. Spoofing and anti-spoofing measures. Information Security Technical Report, 7(4), pp.56-62.
1:1 (1-to-1) – Comparing the biometric data from a Subject User to the biometric data stored for the expected User. If the match score does not exceed the threshold set by the chosen FAR (False Accept Rate), the result is a failed match.
Artifact (Artefact) – An inanimate object that seeks to reproduce human biometric traits.
Authentication – Concurrent Liveness Detection and 1:1 biometric matching of the User.
Bad Actor – A criminal; a person with intentions to commit fraud by deceiving others.
Biometric – The measurement and comparison of data representing the unique physical traits of an individual for the purposes of identifying that individual based on those unique traits.
Certification – The testing of a system to verify its ability to meet or exceed a specified performance standard. Testing labs like iBeta issue certifications.
Complicit User Fraud – When a User pretends to have fraud perpetrated against them, but has been involved in a scheme to defraud by stealing an asset and trying to get it replaced by an institution.
Cooperative User – When a testing organization is guided by ISO 30107-3, the human Subjects used in the tests must provide any and all biometric data that is requested. This helps to assess the complicit User fraud and phishing risk, but only applies if the test includes matching (not recommended).
Centralized Biometrics – Biometric data is collected on any supported device, encrypted and sent to a server for enrollment and later authentication for that device or any other supported device. When the User’s original biometric data is stored on a secure 3rd-party server, that data can continue to be used as the source of trust, and their identity can be established and verified at any time. Any supported device can be used to collect and send biometric data to the server for comparison, enabling Users to access their accounts from all of their devices, new devices, etc., just like with passwords. Liveness Detection is the most critical component of a centralized biometric system, and because certified Liveness did not exist until recently, centralized biometrics have not yet been widely deployed.
Credential Sharing – When two or more individuals do not keep their credentials secret and can access each other's accounts. This can be done to subvert licensing fees or to trick an employer into paying for time not worked (also called "buddy punching").
Credential Stuffing – A cyberattack where stolen account credentials, usually comprising lists of usernames and/or email addresses and the corresponding passwords, are used to gain unauthorized user account access.
Decentralized Biometric – When biometric data is captured and stored on a single device and the data never leaves that device. Fingerprint readers in smartphones and Apple's Face ID are examples of decentralized biometrics. They only unlock one specific device, they require re-enrollment on any new device, and they do not prove the identity of the User whatsoever. Decentralized biometric systems can be easily defeated if a bad actor knows the device's override PIN, allowing them to overwrite the User's biometric data with their own.
Deepfake – A deepfake (a portmanteau of “deep learning” and “fake”) is an AI-based technology that can produce or alter digital video content so that it presents something that did not in fact occur.
End User – An individual human who is using an application.
Enrollment – When biometric data is collected for the first time, encrypted and sent to the server. Note: Liveness must be verified and a 1:N check should be performed against all the other enrollments to check for duplicates.
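The enrollment flow in that note (verify Liveness first, then run a 1:N duplicate sweep before storing) can be sketched as follows. Every name here is a hypothetical stand-in; `is_live` and `matches` represent the liveness AI and the 1:N matcher, not a real API:

```python
def enroll(user_id, template, gallery, is_live, matches):
    """Sketch of the enrollment flow described above: verify Liveness
    first, then run a 1:N duplicate check against every existing
    enrollment before storing the new record."""
    if not is_live(template):
        return "rejected: liveness check failed"
    for other_id, other_template in gallery.items():  # 1:N sweep
        if matches(template, other_template):
            return f"rejected: duplicate of {other_id}"
    gallery[user_id] = template  # store the new enrollment
    return "enrolled"

# Demo with trivial stand-ins (simple equality as the "matcher"):
gallery = {"alice": "template-A"}
is_live = lambda t: t != "spoof"
matches = lambda a, b: a == b
print(enroll("bob", "template-B", gallery, is_live, matches))
# -> enrolled
print(enroll("eve", "template-A", gallery, is_live, matches))
# -> rejected: duplicate of alice
```

Ordering matters: running Liveness before the 1:N check keeps spoofed enrollments from ever entering the gallery.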
Face Authentication – 1:1 Face Matching + Liveness takes User-initiated data collected from a device they do control and confirms that User's identity for their own direct benefit, like, for example, secure account access.
Face Matching – Newly captured images/biometric data of a person are compared to the enrolled (previously saved) biometric data of the expected User, determining if they are the same.
Facial Recognition – 2D Face Matching used for surveillance; it's the 1-to-N matching of images captured with cameras the User doesn't control, like those in a casino or an airport. And it only provides "possible" matches for the surveilled person from face photos stored in an existing database.
FIDO – The acronym for Fast IDentity Online, FIDO is an independent standards body that provides guidance to organizations that choose to use Decentralized Biometric Systems (https://fidoalliance.org).
FRR/FNMR – False Rejection Rate / False Non-Match Rate; the probability that a system will reject the correct User when that User's biometric data is presented to the sensor. If the FRR is high, Users will be frustrated with the system because they are prevented from accessing their own accounts.
Hill-Climbing Attack – When an attacker uses information returned by the biometric authenticator (match level or liveness score) to learn how to modify their attacks to increase the probability of spoofing the system.
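To illustrate why leaking a score is dangerous, here is a toy simulation (not any real system; the 8-dimension "template", step size, and similarity function are all invented for the demo). The attacker never sees the hidden template, but a returned similarity score acts as a compass for random mutations:

```python
import random

def hill_climb(query_score, dims=8, steps=2000, step=0.05):
    """Toy hill-climbing attack: keep a candidate, mutate it slightly,
    and keep the mutation whenever the leaked score improves."""
    candidate = [random.random() for _ in range(dims)]
    best = query_score(candidate)
    for _ in range(steps):
        trial = [min(1.0, max(0.0, x + random.uniform(-step, step)))
                 for x in candidate]
        score = query_score(trial)
        if score > best:          # the leaked score guides the attacker
            candidate, best = trial, score
    return best

random.seed(42)
target = [random.random() for _ in range(8)]   # hidden enrolled "template"
leaky = lambda c: 1 - sum(abs(a - b) for a, b in zip(target, c)) / 8
reached = hill_climb(leaky)
print(f"similarity reached: {reached:.2f}")    # climbs toward 1.0
```

If `leaky` returned only True/False below a secret threshold, almost every trial would return the same answer and the attacker would have no gradient to climb, which is exactly why the Boolean-only response discussed earlier matters.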
iBeta – A NIST/NVLAP-accredited testing lab in Denver, Colorado; the only lab currently certifying biometric systems for anti-spoofing/Liveness Detection to the ISO 30107-3 standard (ibeta.com).
Identity & Access Management (IdAM/IAM) – A framework of policies and technologies to ensure only authorized users have appropriate access to restricted technology resources, services, physical locations and accounts. Also called identity management (IdM).
Impostor – A living person with traits similar enough to a Subject User that the system determines the biometric data is from the same person.
ISO 30107-3 – The International Organization for Standardization’s testing guidance for evaluation of Anti-Spoofing technology (www.iso.org/standard/67381.html).
Knowledge-Based Authentication (KBA) - Authentication method that seeks to prove the identity of someone accessing a digital service. KBA requires knowing a user's private information to prove that the person requesting access is the owner of the digital identity. Static KBA is based on a pre-agreed set of shared secrets. Dynamic KBA is based on questions generated from additional personal information.
Liveness Detection – The ability of a biometric system to determine whether User biometric data has been collected from a live human or from an inanimate, non-living Artifact.
NIST (National Institute of Standards and Technology) – The U.S. government agency that advances measurement science, standards, and technology to promote innovation and industrial competitiveness (nist.gov).
Phishing – When a User is tricked into giving a Bad Actor their passwords, PII, credentials, or biometric data. Example: A User receives a phone call from a fake customer service agent who requests the User's password for a specific website.
PII – Personally Identifiable Information is information that can be used on its own or with other information to identify, contact, or locate a single person, or to identify an individual in context (en.wikipedia.org/wiki/Personally_identifiable_information).
Presentation Attack Detection (PAD) – A framework for detecting presentation attack events. Related to Liveness Detection and Anti-Spoofing.
Root Identity Provider – An organization that stores biometric data appended to corresponding personal information of individuals, and allows other organizations to verify the identities of Subject Users by providing biometric data to the Root Identity Provider for comparison.
Spoof – When a non-living object that exhibits some biometric traits is presented to a camera or biometric sensor. Photos, masks, or dolls are examples of Artifacts used in spoofs.
Subject User – The individual that is presenting their biometric data to the biometric sensor at that moment.
Synthetic Identity – When a Bad Actor combines biometric data, a name, a social security number, an address, etc., to fabricate a record for a person who doesn't actually exist, for the purpose of opening and using accounts in that name.