3 research outputs found
From Chatbots to PhishBots? -- Preventing Phishing scams created using ChatGPT, Google Bard and Claude
The advanced capabilities of Large Language Models (LLMs) have made them
invaluable across various applications, from conversational agents and content
creation to data analysis, research, and innovation. However, their
effectiveness and accessibility also render them susceptible to abuse for
generating malicious content, including phishing attacks. This study explores
the potential of using four popular commercially available LLMs - ChatGPT (GPT
3.5 Turbo), GPT 4, Claude and Bard to generate functional phishing attacks
using a series of malicious prompts. We discover that these LLMs can generate
both phishing emails and websites that can convincingly imitate well-known
brands, and also deploy a range of evasive tactics for the latter to elude
detection mechanisms employed by anti-phishing systems. Notably, these attacks
can be generated using unmodified, or "vanilla," versions of these LLMs,
without requiring any prior adversarial exploits such as jailbreaking. As a
countermeasure, we build a BERT based automated detection tool that can be used
for the early detection of malicious prompts to prevent LLMs from generating
phishing content attaining an accuracy of 97\% for phishing website prompts,
and 94\% for phishing email prompts
Plasma Cell Mastitis mimicking as carcinoma of the breast: A case report and review of the literature
Abstract: Plasma Cell Mastitis (PCM) is an uncommon chronic inflammatory disease of the breast i