Operatives tied to North Korea have been applying for remote IT roles, in some cases using stolen or faked identities and hijacked social profiles. Amazon's chief security officer, Stephen Schmidt, says the company has barred more than 1,800 suspected North Korean applicants from joining since April 2024, and that the volume is rising.
Schmidt laid out the pattern in a LinkedIn post this month: the goal, he wrote, is simple. Get hired, get paid, and channel wages back to fund Pyongyang's programs. To do that, operatives lean on a mix of identity theft, dormant or hijacked LinkedIn accounts, and what investigators call "laptop farms" — machines physically based in the U.S. but remotely operated from overseas.
Why employers should care
Remote hiring gave companies enormous flexibility, but it also widened their attack surface. According to Schmidt, Amazon detected a 27% quarter-over-quarter increase in applications it linked to the Democratic People's Republic of Korea (DPRK) this year. The company uses a blend of AI-driven application screening and manual verification to spot suspicious candidates, then flags cases for further review.
The Department of Justice has been tracking similar schemes. In June it said investigators had uncovered 29 illegal laptop farms across the U.S. and indicted intermediaries who helped North Korean IT workers secure remote jobs using forged or stolen American identities. In one high-profile case, a woman in Arizona was sentenced to more than eight years in prison for running a laptop farm that placed North Korean operatives into over 300 U.S. companies; prosecutors say the scheme generated more than $17 million in illicit revenue for her and Pyongyang.
What the red flags look like
Schmidt offered practical indicators employers can watch for: oddly formatted phone numbers, mismatched or inconsistent education and employment histories, and unusual account activity on sites like LinkedIn. Fraudsters have grown more sophisticated; some impersonate real engineers or pay people for access to legitimate accounts, so small inconsistencies are often the only tell.
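To make those indicators concrete, here is a minimal sketch of how a hiring team might encode a few of them as automated pre-screen checks. It is illustrative only, not Amazon's system; the record fields, phone-format rule, and thresholds are assumptions, and in practice flags like these would feed human review rather than automatic rejection.

```python
import re
from dataclasses import dataclass

# Hypothetical applicant record; the field names are illustrative, not any vendor's schema.
@dataclass
class Applicant:
    phone: str
    education_end_year: int
    employment_start_year: int
    profile_created_year: int
    profile_first_activity_year: int

# E.164-style number: "+" followed by 8-15 digits; anything else is treated as oddly formatted.
E164 = re.compile(r"^\+[1-9]\d{7,14}$")

def red_flags(a: Applicant) -> list[str]:
    """Return human-readable red flags for a manual reviewer."""
    flags = []
    if not E164.match(a.phone):
        flags.append("phone number is oddly formatted")
    # Employment that begins years before the claimed education ended is inconsistent.
    if a.employment_start_year < a.education_end_year - 1:
        flags.append("employment history predates education timeline")
    # A long-dormant account that suddenly becomes active can indicate a hijacked profile.
    if a.profile_first_activity_year - a.profile_created_year >= 5:
        flags.append("social profile dormant for years before recent activity")
    return flags

if __name__ == "__main__":
    candidate = Applicant(
        phone="001-555-0912",            # not E.164, flagged
        education_end_year=2022,
        employment_start_year=2018,      # predates the degree, flagged
        profile_created_year=2013,
        profile_first_activity_year=2024,
    )
    for flag in red_flags(candidate):
        print("REVIEW:", flag)
```

Keeping checks this simple and explainable has a point: a reviewer can see exactly why a candidate was flagged before deciding what to do about it.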
Industry-level responses and intelligence cooperation
This isn't just a single-company problem. The U.S., Japan and South Korea held a forum in Tokyo in August to improve cooperation against the threat of DPRK-affiliated IT workers. Prosecutors and national-security officials describe the schemes as efforts to steal intellectual property and funds, evade sanctions, and underwrite weapons programs.
At scale, employers and platforms are turning to automation to triage risk. Amazon's use of AI to screen candidates is one example, and it underscores a broader trend in which machine learning helps flag anomalous patterns in hiring. That trend sits alongside wider debates about how AI tools are deployed in enterprises and the trade-offs they bring, from faster detection to potential false positives, questions also explored in coverage of advanced search and AI integration across productivity tools like Gemini Deep Research and its Gmail/Drive integration.
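As a rough illustration of what "flagging anomalous patterns" can mean in practice, the sketch below trains scikit-learn's IsolationForest on a handful of made-up application features and routes the lowest-scoring applications to manual review. The features, numbers, and thresholds are assumptions for the example, not a description of any company's screening pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative per-application features (all made up):
# [account_age_days, profile_edits_last_30d, timezone_mismatch_hours, resume_similarity_score]
rng = np.random.default_rng(42)
normal_apps = rng.normal(loc=[1200, 3, 1, 0.2], scale=[400, 2, 1, 0.1], size=(500, 4))
suspicious_apps = np.array([
    [30, 40, 13, 0.95],   # brand-new profile, heavy editing, large timezone gap, near-copied resume
    [15, 55, 12, 0.90],
])
X = np.vstack([normal_apps, suspicious_apps])

# Unsupervised anomaly detector: isolates outliers with random splits, no labels required.
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
model.fit(X)

# Lower scores mean "more anomalous"; flagged rows would go to a human reviewer, not auto-reject.
scores = model.decision_function(X)
flagged = np.argsort(scores)[:5]
print("Applications routed to manual review (row indices):", flagged.tolist())
```

In a real deployment such a score would be one signal among several, weighed against exactly the false-positive costs the debate above is about.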
Why laptop farms matter for corporate security
A remote worker whose machine is actually run from overseas can introduce a silent backchannel into a company's network. Beyond the direct risk of IP theft or fraud, those arrangements can create compliance and legal headaches for firms that unwittingly employ sanctioned nationals. The DOJ’s laptop-farm investigations and related prosecutions have made that clear; they also remind hiring managers to treat verification and access controls as security controls.
Operational advice (what companies are doing)
- Layered verification: Amazon pairs automated signals with human review. Automated systems identify statistical outliers; people check identity documents and histories. This reduces noise while catching subtle fraud (a minimal triage sketch follows this list).
- Device and network hygiene: Limiting persistent access from unmanaged devices and enforcing multifactor authentication help reduce the value of any remote “host” machine.
- Report and share: Companies are being urged to report suspicious hiring activity to law enforcement. Cross-industry sharing of indicators of compromise accelerates detection.
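A minimal sketch of the layered triage described in the first item might look like the following. The signal names, thresholds, and decision categories are hypothetical; the design choice it illustrates is that automation narrows the field while people make the final call.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    PROCEED = "proceed"
    HUMAN_REVIEW = "human_review"
    REJECT_AND_REPORT = "reject_and_report"

@dataclass
class Signals:
    anomaly_score: float      # from an automated model; lower = more anomalous (assumption)
    rule_flags: list[str]     # output of rule-based checks like the earlier sketch
    identity_verified: bool   # result of document / identity verification

# Hypothetical thresholds; real values would be tuned against a false-positive budget.
ANOMALY_REVIEW_THRESHOLD = -0.05
MAX_FLAGS_BEFORE_REVIEW = 1

def triage(s: Signals) -> Decision:
    """Layered triage: automated signals narrow the field, humans make the call."""
    if not s.identity_verified and s.rule_flags:
        # Strong combined signal: escalate so it can also be reported to law enforcement.
        return Decision.REJECT_AND_REPORT
    if s.anomaly_score < ANOMALY_REVIEW_THRESHOLD or len(s.rule_flags) > MAX_FLAGS_BEFORE_REVIEW:
        return Decision.HUMAN_REVIEW
    return Decision.PROCEED

if __name__ == "__main__":
    print(triage(Signals(anomaly_score=-0.2,
                         rule_flags=["phone number is oddly formatted"],
                         identity_verified=True)))   # -> Decision.HUMAN_REVIEW
    print(triage(Signals(anomaly_score=0.1,
                         rule_flags=[],
                         identity_verified=True)))   # -> Decision.PROCEED
```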
Cyber hygiene matters beyond hiring, too. Gaps in software and infrastructure give bad actors openings much as weak hiring checks do; agencies like CISA catalogue and warn about actively exploited flaws that affect enterprise posture and resilience, which is why keeping up with patching guidance and advisories remains essential (CISA recently added new flaws to its Known Exploited Vulnerabilities, or KEV, catalog).
The human angle
Behind the technical descriptions and legal filings are people — both perpetrators and victims. Some Americans were prosecuted for knowingly running laptop farms; others may have been duped into helping. And for companies, the reputational and operational cost of inadvertently hiring bad actors can be significant.
Amazon's announcement is a reminder that hiring is no longer just an HR function; it is a front-line part of corporate security. As remote work persists, the checks that once felt bureaucratic (identity verification, device attestation, continuous monitoring) are proving to be essential defenses.
If anything about this episode is unexpected, it is how quickly tried-and-true criminal techniques have adapted to a remote-first world. The arms race between fraudsters and defenders is now part of recruiting itself — and employers who ignore that will likely learn the hard way.