ChatGPT Spearphishing

Modern-day SDRs (sales development reps) perform acts of phishing for a living. Today’s business culture, especially in technology sales, accepts this as how business gets done. They do lead generation to identify their target company, run cadence messaging to engage an individual at that company, and finally deliver the ‘payload’, often in the form of a calendar invite, a PDF spec sheet, or possibly a link to a product download.

An acquaintance of mine on LinkedIn recently inquired if anyone knew of a SaaS offering that was leveraging a Large Language Model (LLM) based AI to do lead generation, handle the cadence messaging, and set up delivery of the ‘payload.’ In the comments were several recommendations for such services with varying levels of maturity.

It bears repeating: this is phishing at its phinest, and it is perfectly legal!

ChatGPT is a conversational AI that leverages vast amounts of training on linguistic data to carry on a realistic discussion about a topic. The technology, still in its nascent form, is already quite useful. I have co-written blogs with it, students are co-writing term papers (ahem), and even my dad used it to help write some flowery poetry about a certain politician he doesn’t agree with. The usefulness is undeniable. Today it is quite expensive to run, but all such technologies become dramatically cheaper with time, whether through efficiency gains, reduced sophistication, or novel breakthroughs.

The security implications are profound and easy to imagine; it seems a touch of paranoia now accompanies any discussion about conversational or generative AI.


A Phony Phish with ChatGPT + LinkedIn

Say an employee at a corporation gets a LinkedIn message from a recruiter, one with an opportunity that matches their experience and is personalized with references to their specific background. There is even some flattery mixed in (just ask ChatGPT, and it will come up with some clever and sincere-sounding words). The employee responds, interested to hear more, and the recruiter asks for an email address to set up a call. The recruiter then sends a link to a scheduling platform to the employee’s personal email address, attaching whatever payload they want to deliver.

This is not an entirely speculative scenario; it can be fully automated with a conversational bot and some basic coding skills. ChatGPT may have guardrails, and the API may be gated, but do not expect this technology to remain solely in the hands of scrupulous entities.

The key to social engineering is context and gaining the confidence of the target. Pretending to be someone trusted, the bad actor convinces the target to perform some act that sometimes appears innocuous and at other times is coercive. The scale at which these attacks will be launched using AI will be incredible to witness. Imagine LLMs trained on breached enterprise data, giving them even more context and credibility.

We already live in a world where social engineering and phishing are cited as the top security concern among CISOs, and soon the malicious actors will have dramatically more firepower than they do today.


Phishing Defense with Security Service Edge

Today, Security Service Edge (SSE) is the primary day-to-day defense against phishing; even the best corporate phishing awareness training is typically delivered only annually. There are several ways SSE is implemented: 

  • Blocking malicious domains: 
    • If an employee gets a phishing email that uses the domain getcalendly.com, Internet Threat Protection (ITP) should block any device trying to reach that domain.
    • ITP can be configured to block known phishing domains that have been flagged globally. It can also block newly registered domains, which are often created to impersonate trending or well-known ones.
  • Making sure the security posture of the device is sound:
    • The Banyan app can ensure that certain endpoint security applications are running. 
    • Easy ingestion of Endpoint Detection & Response (EDR) signals by an SSE solution prevents attackers from introducing new devices to access protected services.
    • An example EDR signal is one that confirms the device is registered with the EDR solution, which means it is running the EDR agent and successfully reporting its state.
    • A simple SSE policy can deny access to any services if a device is not running the EDR agent or if the device trust level itself is not in “good standing” with the EDR (e.g., out of compliance or otherwise compromised).
  • Leveraging certificates and Multi-Factor Authentication (MFA) to validate identities: 
    • If the existing device certificate already contains user claims, a compromised admin device cannot simply be handed over to a newly created Okta identity that an attacker has generated.
    • The login certificate on a compromised admin device expires within a certain time period (e.g., 24 hours) and would need to be refreshed via a new logon.
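The domain-blocking heuristics above can be sketched in a few lines. This is a minimal illustration, not any vendor's ITP implementation: the blocklist entry and the 30-day age threshold are hypothetical, and `registered_at` stands in for a WHOIS/registration-data lookup that a real resolver would perform out of band.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

BLOCKLIST = {"getcalendly.com"}       # known phishing domains (illustrative)
MIN_DOMAIN_AGE = timedelta(days=30)   # treat younger domains as suspicious

def should_block(domain: str, registered_at: Optional[datetime],
                 now: datetime) -> bool:
    """Return True if ITP-style filtering would block this domain."""
    if domain.lower() in BLOCKLIST:
        return True                   # globally flagged phishing domain
    if registered_at is None:
        return True                   # no registration data: fail closed
    # Newly registered domains are frequently spun up for phishing lures.
    return now - registered_at < MIN_DOMAIN_AGE
```

Failing closed on missing registration data is a policy choice; a real deployment might instead quarantine the request or prompt the user.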


Limit Exposure to ChatGPT Spearphishing

While email filtering can help minimize phishing email exposure, attacks arriving through other applications (like LinkedIn or Twitter) must also be defended against. These sophisticated attacks are becoming harder to detect for even the sharpest cybersecurity practitioner, leading to greater dependence on solutions that work. With a solid SSE solution that focuses on least-privilege access models and integrates with endpoint security products, enterprises can limit exposure to attacks and restrict damage when breaches occur or are attempted. Just as attackers are making use of AI-based tools, security vendors are evolving their offerings to take advantage of next-generation AI and build intelligence into their threat detection and response models.

Colin Rand
Colin Rand is an engineer and contributor.