Hiring Fraud Is Getting So Bad That Companies Now Screen Employees Multiple Times After They’re Hired

This May Help Someone Land A Job, Please Share!

The background check used to be the last thing that happened before your first day. It was a formality. A final confirmation that you were who you said you were. That era is over.

A new report released this week by First Advantage, one of the world’s largest background screening providers, paints a picture of a hiring landscape under genuine strain. Their 2026 Global Background Screening Trends Report, drawn from more than 5,000 CHROs, HR leaders, and job seekers across nine industries and five global regions, identifies something that would have sounded extreme five years ago: employers are now building verification systems that don’t stop at the offer letter.

The reason is candidate fraud, and it’s happening at a scale most people haven’t fully registered yet.

☑️ Key Takeaways

  • 89% of HR hiring managers plan to add more background screening and identity verification tools within two years, signaling a fundamental shift in how employers think about trust
  • Gartner predicts 1 in 4 candidate profiles worldwide will be fake by 2028, driven by AI tools that make fabrication fast, cheap, and convincing
  • 62% of hiring professionals believe job seekers are now better at faking identities with AI than recruiters are at detecting it, creating a widening trust gap in remote hiring
  • 41% of organizations have already hired and onboarded a fraudulent candidate, meaning this is no longer a theoretical risk but a documented, widespread reality

The Numbers Are Hard to Ignore

Start with the headline statistic from Gartner: by 2028, 1 in 4 candidate profiles worldwide could be fake. That prediction, documented by HR Dive, comes from a survey of 3,000 job candidates in which 6% admitted to participating in interview fraud outright, either by posing as someone else or having someone else impersonate them.

Six percent admitting to it probably understates the actual rate considerably.

The employer-side data is more alarming still:

  • 59% of hiring managers say they’ve personally suspected a candidate of using AI to misrepresent themselves during the hiring process, according to Checkr’s survey of 3,000 American managers
  • 41% of IT, cybersecurity, risk, and fraud leaders say their company has hired and onboarded a fraudulent candidate, per a GetReal Security analysis
  • 62% of hiring professionals believe job seekers are now better at faking identities with AI than HR teams are at detecting them, with only 13% disagreeing
  • 23.2% of applicants were flagged as fraud risks at cybersecurity firm Huntress in a three-month window after the company deployed a fraud detection tool in late 2025

These aren’t edge cases anymore. They’re the new baseline.

How AI Changed the Game

The scale of this problem traces back to one thing: AI made deception cheap, fast, and accessible.

Fabricating a convincing resume used to require effort. Creating a deepfake video required technical skill. Generating fake work samples took time. None of that is true anymore. Generative AI tools can produce polished resumes, plausible work histories, and even real-time interview coaching in seconds, without any specialized knowledge.

We’ve written about how AI is revolutionizing the job search process from a legitimate standpoint. But the same capabilities that help honest candidates polish their applications are enabling a wave of bad actors to do something fundamentally different.

The forms of fraud showing up in hiring pipelines right now include:

  • Deepfake video interviews, where AI-generated video or real-time face-swapping replaces the actual candidate
  • Proxy interviews, where a third party attends and answers on behalf of the real applicant
  • AI-generated fake work samples and portfolios, with 28% of candidates admitting to this in the 2025 Greenhouse Workforce and Hiring Report
  • Synthetic resumes and fabricated work histories that look indistinguishable from legitimate ones
  • Bot-driven mass applications, where automated tools submit thousands of AI-generated applications at once
  • Real-time AI coaching delivered via hidden earpieces or screen overlays during live interviews

One cybersecurity firm thwarted an AI deepfake attempt by simply asking the candidate to wave their hand in front of their face. The bot couldn’t do it. That’s where the arms race currently stands.

Interview Guys Take: The troubling dynamic here is that the tools honest job seekers use to be more competitive (AI resume writing, AI interview prep, AI portfolio assistance) are the same tools fraudsters are weaponizing. Employers are responding by treating everyone with more scrutiny. The innocent majority is bearing the cost of the bad actors’ behavior, and that’s a legitimate grievance for legitimate candidates to carry into the job market.

The New Reality: Screening That Doesn’t Stop at Hiring

The First Advantage report’s headline finding is that 89% of HR hiring managers plan to adopt additional background screening and identity verification solutions within the next two years. That’s not a modest adjustment. That’s a near-universal shift in philosophy.

The direction of that shift is toward what the industry calls “employee lifecycle screening,” meaning continuous or repeated verification that extends well past the initial hire.

This is a structural change. Historically, background checks were pre-hire events. You applied, they checked, you started. The new model treats screening as an ongoing function, with checks potentially triggered by role changes, access expansions, contract renewals, or scheduled intervals.

More than 60% of global employers in the First Advantage survey also reported growth in candidates with multi-country or multi-location work histories, adding complexity to verification requirements that a single pre-hire check was never designed to handle.

The implications for hiring timelines are real. Employers are simultaneously trying to close roles faster (candidate drop-off from slow processes remains a documented problem) while adding more verification steps. The First Advantage report describes this tension as “risk and speed are now dual mandates, not tradeoffs.” Those two goals are in direct conflict, and companies are spending heavily on automation to try to reconcile them.

Understanding how today’s job applications actually work helps explain why this friction is landing so hard on candidates right now. The system was already under stress before fraud became a dominant concern.

The Remote Hiring Problem

Remote work created the conditions for this crisis. When interviews moved to Zoom, candidates became pixels on a screen. Physical presence, the friction that once made impersonation difficult, disappeared.

The pandemic-era normalization of fully remote hiring gave bad actors a low-risk environment to operate in. CBS News research found that 50% of businesses had encountered AI-driven deepfake fraud in some form. By mid-2025, even companies like Google and McKinsey had reportedly reintroduced mandatory in-person interviews specifically to counter the surge.

This is part of why remote job red flags have multiplied. What employers were once watching for in candidates, they’re now also watching for in the infrastructure of the hire itself.

The fraud isn’t evenly distributed. Data from Huntress and others points to remote roles, high-demand technical positions, and high-salary roles as the primary targets. The “return on investment” of fraud is highest where salaries and access are greatest. Engineering, quantitative finance, and consulting are disproportionately affected. But the 2025 research also uncovered fake candidates applying in schools, healthcare organizations, and government contractors, where the motivation isn’t salary but access.

Interview Guys Take: The geographic sprawl of remote hiring created an accountability vacuum. When a candidate is remote from day one and every interaction is digital, the verification checkpoints that used to exist naturally (showing your ID at the front desk, shaking someone’s hand, meeting your teammates in person) just don’t happen. Employers built remote pipelines without building remote trust infrastructure. They’re now building it retroactively, and job seekers are experiencing the friction of that rebuild.

The Trust Collapse and Its Consequences

Only 19% of hiring managers are extremely confident their current hiring process would catch a fraudulent applicant, per Checkr’s 2025 survey. That’s not a confidence crisis in one industry. It’s a systemic credibility problem.

What happens when hiring managers don’t trust their own processes?

They add steps. They extend timelines. They ask for more documentation. They move toward in-person requirements where none existed before. They second-guess candidates who perform unusually well.

62% of candidates said they are more likely to apply to a position if the organization requires in-person interviews, according to Gartner’s 2Q25 research. That’s a counterintuitive finding until you understand what it means: legitimate candidates actually welcome the friction that signals a real process, because it screens out the fakes competing against them.

This connects to a broader shift in how employers are evaluating applications. As we covered in our analysis of how multi-agent AI screeners rank resumes, the screening layer is becoming more sophisticated, not less, precisely because the candidate layer has become more sophisticated too.

The fraud problem is also accelerating the move to skills-based hiring. If credentials can be faked, employers increasingly want to see demonstrated capability through assessments, live problem-solving, and portfolio work that’s hard to fabricate in a pressured, real-time environment.

The Employer Response Is Already Reshaping Hiring

More than 63% of companies had already updated their hiring protocols within the past year specifically to address AI fraud and identity deception risks, per Checkr’s 2025 survey. The changes range from adding biometric checks to requiring IP and location verification, from implementing forensic AI tools that detect deepfakes to shifting toward behavioral interview questions that AI struggles to answer convincingly.

The tactical responses showing up most often include:

  • Biometric identity verification at the application stage
  • Live video with specific real-time prompts designed to be difficult for AI to fake (asking candidates to hold up a random object, or to write a word on paper and hold it to the camera)
  • More in-person interview stages, particularly for final rounds
  • Post-hire anomaly monitoring, including device location checks and access pattern alerts
  • Structured skills assessments with real-time observation rather than take-home formats
  • Continuous background monitoring throughout employment, not just at hire

As First Advantage President Joelle Smith noted in the report release: “AI is a major catalyst for both innovation and emerging vulnerabilities. Organizations need smarter, simpler, and more secure screening and identity verification processes across the entire employee lifecycle.”

Interview Guys Take: The companies that will navigate this best are those that build verification into their process early and make it transparent to candidates. Communicating clearly about what identity steps candidates will encounter, and why, signals professionalism rather than paranoia. The worst outcome would be a hiring environment where verification is secretive and arbitrary, which simply disadvantages honest candidates while sophisticated fraudsters continue to adapt.

What This Means for the Job Market Broadly

The rise of candidate fraud and the employer response to it are producing a hiring environment with more friction, longer timelines, and higher verification requirements across the board. That’s the structural shift, and it’s not going away.

The North Korean job scammer problem we covered last year was an early indicator of how bad this could get when state-level actors entered the space. What the 2026 data shows is that the problem has generalized well beyond state actors into a broader ecosystem of fraud enabled by accessible AI tools.

The honest job seekers caught in this environment face a legitimately harder process. More steps. More verification. More documentation. Longer waits. That’s the cost of operating in a hiring market where trust has been systematically undermined.

The employers adding all these steps face a parallel challenge: how do you build a fast, quality candidate experience while simultaneously verifying that everyone in your pipeline is who they say they are?

There are no clean answers yet. But the direction is clear. The background check you used to get before your first day is becoming a recurring feature of your employment, not a one-time gateway to it. The hiring process is, in effect, never really over anymore.


BY THE INTERVIEW GUYS (JEFF GILLIS & MIKE SIMPSON)


Mike Simpson: The authoritative voice on job interviews and careers, providing practical advice to job seekers around the world for over 12 years.

Jeff Gillis: The technical expert behind The Interview Guys, developing innovative tools and conducting deep research on hiring trends and the job market as a whole.

