How to beat credential stuffing attacks

Career planning, AI security reasources & OpenAI research

In partnership with

đź‘‹ Good morning, Cyber Pros!

This week’s issue brings you:

  • How to beat credential stuffing attacks

  • A helpful career planning technique

  • How OpenAI’s latest research enhances the security of LLMs

  • AI security resource roundup

Let’s dive in!

Read time: ~6 mins

Learn how to make AI work for you.

AI breakthroughs happen every day. But where do you learn to actually apply the tech to your work? Join The Rundown — the world’s largest AI newsletter read by over 600,000 early adopters staying ahead of the curve.

  1. The Rundown’s expert research team spends all day learning what’s new in AI

  2. They send you daily emails on impactful AI tools and how to apply it

  3. You learn how to become 2x more productive by leveraging AI

LEARNING
HOW TO BEAT CREDENTIAL STUFFING ATTACKS

Source: Radware

Definition: Credential stuffing is a malicious tactic where automated tools repetitively inject compromised username/password combinations into various online platforms, aiming to hijack legitimate user accounts alongside those initially compromised.

How Credential Stuffing Works:

1/ Data Acquisition: Threat actors source username/password pairs from various breaches, including previous data breaches, man-in-the-middle attacks, the dark web, or phishing incidents.

2/ Automation Setup: Deploying bots, hackers systematically test these credentials across multiple platforms, exploiting the tendency of users to reuse login details across unrelated services.

3/ Attack Execution: After filtering successful logins, hackers store the confirmed valid credentials for further exploitation.

4/ Objectives:

  • Organisations: Target administrative accounts for lateral movement, conduct additional attacks (e.g., malware, ransomware), or pilfer patents/trade secrets.

  • Email and Social Media Services: Access both personal and business accounts to initiate phishing and social engineering schemes targeting the victims' trusted contacts.

  • Credential Trading: Validate credentials for resale at elevated prices to other malicious actors.

The Perils of Credential Stuffing:

  • Despite a low success rate (typically ranging from 0.1% to 4%, or 4 in 100 attempts), credential stuffing remains popular among threat actors due to its affordability and minimal technical requirements.

  • The abundance of credential databases in cybercrime circles exacerbates the threat, with over 24 billion username/password pairs currently in circulation.

Examples of credential stuffing attacks:

Recommendations for preventing credential stuffing attacks

The OWASP Credential Stuffing Cheat Sheet suggests several prevention mechanisms, including the following:

  1. Monitor user activity

  2. Use bot-detection mechanisms

  3. Implement MFA

  4. Enforce the use of unique credentials

  5. Leverage password-free authentication

  6. Integrate obfuscation techniques

  7. Scan for and alert on anomalous activity

  8. Develop an incident response plan

CAREER
TRY THIS CAREER PLANNING TECHNIQUE

Summary: Before you embark on the job hunting journey, it helps to know where you’re going and how you’re going to get there. Before we start applying for any jobs, it’s helpful to have a clear sense of direction. You can use this tactic for your career planning, or life in general.

Details:

Get yourself a big piece of paper (you can do this on your phone or laptop too) and draw the above diagram.

  • Write down where you are today: Perform a small self-assessment. Be honest about your strengths and weaknesses.

  • Write down where you want to be 5-10 years from now: Be descriptive and specific. We need to be clear about our dreams so that we can optimise our planning to get there.

  • List habits / actions to get there: On the top half, note down all of the things required of the person who is where you want to be. What are their skills, experiences, strengths, behaviours? The more specific the better.

  • List what would hold you back: On the bottom half, note down what behaviours you need to improve on, what weaknesses you need to turn to strengths, what toxic people are holding you back, what habits aren’t serving you (8 hour screen times, Netflix binges etc.).

  • Schedule a daily or weekly reminder on your phone: Regularly refresh your mind of your intentions. Feel free to adjust wherever needed, this isn’t a concrete commitment – it’s a dynamic planner.

I hope this method serves you.

AI & SECURITY
ENHANCE SECURITY WITH INSTRUCTION HIERARCHY

Summary: OpenAI’s latest research introduces the "Instruction Hierarchy," a novel approach to enhance the security and robustness of Large Language Models against malicious attacks.

Open AI Research

Key Takeaways:

1/ Vulnerability to Attacks

  • LLMs, powering virtual assistants and AI apps, face susceptibility to attacks like prompt injections and jailbreaks.

  • These attacks manipulate AI behaviour by injecting malicious instructions, overriding system commands.

2/ The Instruction Hierarchy Proposal

  • OpenAI suggests an instruction hierarchy prioritising instructions based on their source.

  • Developer messages have the highest priority, followed by user messages, then third-party content.

3/ Enhanced Model Robustness

  • Implementing this hierarchy enhances LLMs' defense mechanisms against various attacks.

  • It improves resistance to common exploits, even those not explicitly modelled during training.

Challenges in Implementation and Balance:

1/ Complexity in Implementation

  • Establishing a clear hierarchy and training models accordingly is complex.

  • It requires generating accurate training data reflecting attack scenarios and benign use cases.

2/ Balancing Security and Flexibility

  • Ensuring LLMs remain secure while maintaining flexibility poses a challenge.

  • Over-refusal risks where models excessively reject or ignore commands need to be addressed.

3/ Continuous Evolution of Threats

  • Ongoing research and adaptation of the instruction hierarchy are crucial.

  • Automated red-teaming and continuous monitoring are recommended to stay ahead of vulnerabilities.

AI & SECURITY
AI SECURITY RESOURCE ROUNDUP

1/ Top 10 Libraries for Automatically Red Teaming Your GenAI Application [Tools]

  • Advances in AI security must consider the broader context of conversations and the potential for incremental manipulation, rather than just immediate interactions.

  • Microsoft has developed a multifaceted approach to defend against malicious AI inputs and attacks, including the Spotlighting strategy and prompt engineering.

  • Collaboration and open sharing of strategies and vulnerabilities is crucial to strengthening the overall security framework for AI.

  • This paper presents a comprehensive overview of the LLM supply chain, highlighting its three core elements: 1) the model infrastructure, encompassing datasets and toolchain for training, optimisation, and deployment; 2) the model lifecycle, covering training, testing, releasing, and ongoing maintenance; and 3) the downstream application ecosystem, enabling the integration of pre-trained models into a wide range of intelligent applications.

  • This repository centralises and summarises practical and proposed defenses against prompt injection.

  • LLM agents have been used for both good and bad purposes and researchers are interested in their potential to exploit cybersecurity vulnerabilities. A study found that GPT-4 is capable of exploiting 87% of one-day vulnerabilities with critical severity, but only with the CVE description. This raises concerns about the widespread use of powerful LLM agents.

FEEDBACK
DID YOU ENJOY THIS ONE?

If you’ve got a question or feedback, you can reply to this directly!

I want to create a newsletter that you can’t wait to open every week.

Your feedback will help me do that.

REFERRALS
SHARE CYBER PRO CLUB!

If you found this newsletter valuable, share this link with others: https://www.cyberproclub.com/subscribe

Thanks for reading.

Cal J Hudson