Artificial intelligence is being used to create highly realistic but deceptive job postings, complicating the job search process for many individuals.
Microsoft’s Australia and New Zealand chief security officer Mark Anderson said that, beyond fake job ads, AI also enables sophisticated scams such as deepfake video interviews, which can mislead job seekers and increase the risk of fraud.
Anderson explained that scammers create fake profiles using stolen credentials, fabricate job postings with automatically generated descriptions, and launch AI-powered email campaigns—all with the goal of tricking job seekers into revealing their personal information.
“For Kiwi job seekers, the risks are no less significant than anywhere else in the world.”
Anderson encouraged people never to share personal or financial details with employers who have not been verified.
He also said it is important to assess an employer’s legitimacy by checking their official website or using reputable platforms like LinkedIn or Glassdoor.
“Watch for red flags, such as upfront payment requests or communication via free email domains, which are often signs of fraud.”
Anderson also advised job seekers to be wary if a remote video interview feels unnatural, noting that unusual facial expressions or noticeable delays in speech could be signs that AI is being used to mislead them.
eCommerce Scams Using AI
Scammers can now use AI to set up fraudulent e-commerce websites in just minutes. One victim of this trend is Kathmandu, a well-known New Zealand clothing and outdoor brand, which was recently impersonated by a fake shop on Facebook.
Anderson advised Kiwis to resist impulse purchases by verifying deals carefully and double-checking domain names and customer reviews before clicking on social media advertisements. He also recommended using secure payment methods that offer fraud protection rather than opting for direct bank transfers or cryptocurrency payments.
Combating Scams
To address scams, Microsoft said it has rolled out new in-product safety controls under a “Fraud-resistant by Design” policy. Introduced in January, the policy requires Microsoft product teams to conduct fraud prevention assessments and integrate fraud controls during the product design phase.
Microsoft reported that in the year leading up to April, it prevented approximately US$4 billion (NZ$6.74 billion) in global fraud attempts. The company also blocked an average of 1.6 million bot-driven sign-up attempts every hour during this period.
“The battle against scams will continue, and we remain committed to empowering consumers to protect themselves and their data,” Anderson added.