Compromised chatbot credentials are being bought and sold by criminals who frequent darknet marketplaces for stolen data, security researchers warn. The alert comes as global use of ChatGPT and rival artificial intelligence (AI) offerings continues to surge, even as some employers worry that the chatbots could exfiltrate sensitive information and regulators voice privacy concerns.
In a new report, cybersecurity firm Group-IB said chatbot credentials now appear for sale on underground criminal markets. The firm observed the largest volume of compromised credentials coming from systems in India, followed by Pakistan, Brazil, Vietnam, Egypt, the United States and France.
Credentials for chatbots such as OpenAI’s ChatGPT, which is backed by Microsoft, and Google’s Bard aren’t being targeted outright, the firm said. Rather, the credentials are being stolen en masse by desktop information-stealing malware such as Raccoon, Vidar and RedLine.
Info stealers target anything of potential value stored digitally on the infected system, including cryptocurrency wallet data, bank and payment card account access details, passwords for email and messaging services, and credentials saved in a browser.
The malware routes all information from an infected system – known in cybercrime circles as a “bot” – to the attacker, or in some cases a malware-as-a-service offering used by the attacker. In the latter case, service operators typically keep the most valuable information, including cryptocurrency wallet addresses and access details, to themselves. Everything else may end up being batched into “logs” and sold via darknet and Telegram channels.
For organizations that use chatbots and want to safeguard their credentials, the solution is to use long, strong and unique passwords and to enable two-factor authentication (2FA), so criminals can’t easily use stolen chatbot credentials.
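To illustrate why 2FA blunts the value of stolen passwords: the one-time codes produced by authenticator apps are time-based and expire within seconds, so a password lifted by an info stealer is not enough on its own. Below is a minimal sketch of the standard TOTP algorithm (RFC 6238, built on HOTP from RFC 4226) using only Python's standard library; the secret shown in the usage note is an illustrative placeholder, not a real key.

```python
import base64
import hashlib
import hmac
import struct
import time


def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    # HMAC-SHA1 over the 8-byte big-endian counter.
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks the offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over the current time step."""
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // interval, digits)
```

Because the code changes every 30 seconds (e.g. `totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ")`), an attacker holding only a batched "log" of stolen passwords still lacks the TOTP secret or device needed to log in.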
The security alert over the theft of chatbot credentials comes as workplace use of the tools continues to grow, although not necessarily with corporate oversight or strong controls in place to govern that use.
A survey of nearly 12,000 U.S. employees, conducted in February 2023 by the professional social network app Fishbowl, found that 43% of professionals said they’d used a chatbot such as ChatGPT for a work-related task. “Nearly 70% of those professionals are doing so without their boss’ knowledge,” it reported.
Adoption of chatbots in the workplace is set to increase as the technology gets baked into more productivity tools. While AI chatbots allow users to get human-sounding answers to questions they pose, current adopters say results remain mixed.
“What I found it does really, really well is give an answer with a lot of confidence – so much confidence that I tend to believe it, but almost half of the time it’s completely wrong,” David Johnson, a data scientist at Europol, the EU’s law enforcement agency, said at an EU conference on AI earlier this month.
Imperfect results to date haven’t dented the chatbot hype. Microsoft’s market capitalization recently rose to an all-time high of over $2.5 trillion, driven by market optimism for all things AI, including Microsoft’s addition of ChatGPT to its Bing search engine and Azure cloud computing platform – and the advertising and cloud service revenue that might result.
“We reaffirm our bullish-outlier viewpoint on generative AI and continue to see it driving a resurgence of confidence in key software franchises,” JPMorgan analysts said in a research note in June 2023, Reuters reported.
If you or your staff are caught up in the AI chatbot frenzy, let D2 Cybersecurity help educate you on how to safeguard your credentials with our SAFE program.
Click here if you would like to see a demo of our cybersecurity programs!