
LinkedIn Verification Funnels User Biometrics to Government Agencies and AI Labs


LinkedIn users attempting identity verification may be unknowingly handing sensitive personal data to Persona Identities Inc., a company that distributes information to government agencies, credit bureaus, utilities, and mobile providers.

The issue came to light last week when a Zurich-based privacy researcher known as “Rogi,” who covers surveillance capitalism on his blog The Local Stack, published his findings.

Rogi made the discovery after completing LinkedIn’s identity verification process, earning the platform’s coveted blue checkmark in just three minutes.

“Then I did what apparently nobody does,” he said. “I went and read the privacy policy and terms of service,” only to discover that the 34 pages of disclosures weren’t LinkedIn’s at all.

It turns out that users seeking verification are redirected to Persona Identities Inc., a San Francisco-based tech company that provides customized identity verification platforms for businesses and organizations, helping them combat fraud, stay compliant, and manage user onboarding.

In the article, Rogi highlighted the extensive range of personal data Persona can access.

According to him, the company collects users’ full names, passport photos, selfies, facial biometric data, NFC chip information from passports, nationality, sex, birthday, age, email, phone number, physical address, IP address, geolocation, device type, MAC address, browser and OS version, and preferred language.

Persona reportedly shares this data with 17 subprocessors and a network of partners, including AI companies such as Anthropic, OpenAI, and Groqcloud.

Rogi also claims that the platform uses uploaded identity documents, including passports, to train AI systems.

“They use uploaded images of identity documents — that’s my passport — to train their AI. They’re teaching their system to recognize what passports look like in different countries,” he said. “They also use your selfie to ‘identify improvements in the Service.’ The legal basis? Not consent. Legitimate interest. Meaning they decided on their own that it’s fine.”

After Rogi’s The Local Stack post about LinkedIn and Persona went viral, Persona co-founder and CEO Rick Song addressed the report in a LinkedIn comment.

“No personal data processed is used for AI/model training,” Song said. “Data is exclusively used to confirm your identity.”

He also noted that all biometric data is deleted immediately after processing, and all other personal data is deleted within 30 days.

Song denied that AI companies such as OpenAI and Anthropic are subprocessors used to verify a user’s identity, even though those companies appear on a Persona webpage listing the company’s subprocessors.

“The referenced subprocessor list is the superset of subprocessors used across all customers, which is unfortunately misleading,” Song said. “Our customers select which products are used, which determines which subprocessors are used. We are adding a clarification to this list to make this clearer in the future.”

Persona’s presence on leading platforms, including Roblox and Discord, which use it for age verification, has placed the company under the microscope. Another recent report from a security researcher claims Persona performs “269 individual verification checks” on Discord users.

The controversy has sparked broader discussions about the trade-offs between online authenticity and privacy in professional networking. Many LinkedIn users now question whether a blue checkmark is worth surrendering such intimate details—biometrics, travel documents, and behavioral metadata—to a third-party firm with ties to extensive data ecosystems. Privacy advocates argue that “legitimate interest” as a basis for processing under frameworks like GDPR often serves as a loophole for companies to prioritize business needs over user rights, especially when data flows to subprocessors or partners whose roles remain opaque.

In response to mounting criticism, Persona has promised greater transparency, including clearer subprocessor disclosures and enhanced user-facing explanations. LinkedIn, meanwhile, has reiterated that it only receives confirmation of verification success, not the raw data itself, and emphasized that the partnership helps combat fake profiles that undermine professional trust. Yet skeptics remain unconvinced, pointing to Persona’s involvement with other high-profile clients like OpenAI, Roblox, and Discord—where similar age and identity checks trigger hundreds of backend validations, including watchlist screenings and risk scoring that could feed into larger surveillance networks.

As the story continues to circulate on forums like Hacker News and privacy-focused communities, calls for regulatory scrutiny are growing. Experts suggest users weigh the benefits carefully: a verified badge might boost credibility in job searches or networking, but it comes at the cost of feeding one of the most sensitive datasets imaginable into the machinery of modern identity infrastructure. In an era where data is currency, the real question may be whether convenience justifies the hidden toll on personal sovereignty.