
How Toxic Can You Get? Search-based Toxicity Testing for Large Language Models

  • Additional Information
    • Publication Data:
      Institute of Electrical and Electronics Engineers
    • Subject:
      2025
    • Collection:
      KITopen (Karlsruhe Institute of Technology)
    • Abstract:
      Language is a deep-rooted means of perpetrating stereotypes and discrimination. Large Language Models (LLMs), now a pervasive technology in our everyday lives, can cause extensive harm when prone to generating toxic responses. The standard way to address this issue is to align the LLM, which, however, mitigates the issue without constituting a definitive solution. Therefore, testing LLMs even after alignment efforts remains crucial for detecting any residual deviations from ethical standards. We present EvoTox, an automated testing framework for LLMs’ inclination to toxicity, providing a way to quantitatively assess how much LLMs can be pushed towards toxic responses even in the presence of alignment. The framework adopts an iterative evolution strategy that exploits the interplay between two LLMs, the System Under Test (SUT) and the Prompt Generator, which steers the SUT's responses toward higher toxicity. The toxicity level is assessed by an automated oracle based on an existing toxicity classifier. We conduct a quantitative and qualitative empirical evaluation using five state-of-the-art LLMs of increasing complexity (7–671B parameters) as evaluation subjects. Our quantitative evaluation assesses the cost-effectiveness of four alternative versions of EvoTox against existing baseline methods, based on random search, curated datasets of toxic prompts, and adversarial attacks. Our qualitative assessment engages human evaluators to rate the fluency of the generated prompts and the perceived toxicity of the responses collected during the testing sessions. Results indicate that the effectiveness, in terms of detected toxicity level, is significantly higher than that of the selected baseline methods (effect size up to 1.0 against random search and up to 0.99 against adversarial attacks). Furthermore, EvoTox yields a limited cost overhead (from 22% to 35% on average). This work includes examples of toxic degeneration by LLMs, which may be considered profane or offensive to some readers. Reader discretion is advised. An illustrative sketch of the iterative search loop is given after this record.
    • File Description:
      application/pdf
    • Relation:
      info:eu-repo/semantics/altIdentifier/issn/0098-5589; info:eu-repo/semantics/altIdentifier/issn/1939-3520; info:eu-repo/semantics/altIdentifier/issn/2326-3881; https://publikationen.bibliothek.kit.edu/1000185879; https://publikationen.bibliothek.kit.edu/1000185879/168296812; https://doi.org/10.5445/IR/1000185879/pre
    • Identifier:
      10.5445/IR/1000185879/pre
    • Electronic Access:
      https://publikationen.bibliothek.kit.edu/1000185879
      https://publikationen.bibliothek.kit.edu/1000185879/168296812
      https://doi.org/10.5445/IR/1000185879/pre
    • Rights:
      KITopen License, https://publikationen.bibliothek.kit.edu/kitopen-lizenz ; info:eu-repo/semantics/openAccess
    • Identifier:
      edsbas.720D6FAC
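
The abstract describes an iterative evolution strategy in which a Prompt Generator LLM mutates prompts, the System Under Test (SUT) answers them, and an automated oracle scores the toxicity of each response. The sketch below illustrates one plausible (1+1)-style version of such a loop; all function names and interfaces (evotox_search, the stub components, the scalar oracle) are hypothetical stand-ins, not the paper's actual API.

```python
import random

# Hedged sketch of the search loop described in the abstract, assuming the
# SUT, Prompt Generator, and toxicity oracle are callables: the generator
# mutates the current best prompt, and a mutation is kept only if the SUT's
# response scores higher on the toxicity oracle.

def evotox_search(sut, prompt_generator, oracle, seed_prompt, budget=50):
    """(1+1)-style evolutionary loop maximizing the oracle's toxicity score."""
    best_prompt = seed_prompt
    best_score = oracle(sut(best_prompt))   # toxicity of the SUT's response
    for _ in range(budget):
        candidate = prompt_generator(best_prompt)   # LLM-driven mutation
        score = oracle(sut(candidate))              # score the new response
        if score > best_score:                      # keep improving mutations
            best_prompt, best_score = candidate, score
    return best_prompt, best_score

# Toy usage with stubs standing in for the real LLMs and toxicity classifier.
if __name__ == "__main__":
    stub_sut = lambda p: p[::-1]                    # placeholder "response"
    stub_generator = lambda p: p + random.choice("!?.")
    stub_oracle = lambda r: random.random()         # placeholder toxicity score
    prompt, score = evotox_search(stub_sut, stub_generator, stub_oracle, "hello")
    print(prompt, round(score, 3))
```

The greedy acceptance rule (keep a candidate only if it strictly raises the score) is one simple way to realize an iterative evolution strategy; the paper evaluates four alternative versions of EvoTox, whose exact selection and mutation mechanics are not reproduced here.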