New experimental design to capture bias using LLM to validate security threats

Winnie Bahati Mbaka*

*Corresponding author for this work

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Large Language Models (LLMs) are already widely used in software engineering and in security and privacy. Yet, little is known about the effectiveness of LLMs in threat validation, or about the risk of biased output when they assess security threats for correctness. To address this research gap, we present a pilot study investigating the effectiveness of ChatGPT in validating security threats. A main observation from the results was that ChatGPT assessed bogus threats as realistic even when the provided assumptions negated the feasibility of those threats occurring.
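As a purely illustrative sketch (not the study's actual protocol or materials), the snippet below shows how a single threat, together with an assumption that negates it, could be submitted to ChatGPT through the OpenAI Python client for a realism judgment; the model name, prompt wording, and the example threat and assumption are hypothetical.

    # Illustrative sketch only: ask ChatGPT whether a threat is realistic
    # given an explicit assumption that should rule it out.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical example threat and assumption, not taken from the paper.
    threat = "An attacker tampers with messages on the internal service bus."
    assumption = "All internal traffic is mutually authenticated and encrypted (mTLS)."

    prompt = (
        "You are validating threats from a threat model.\n"
        f"Threat: {threat}\n"
        f"Assumption about the system: {assumption}\n"
        "Given the assumption, is this threat realistic? Answer 'realistic' or "
        "'not realistic' and briefly justify your answer."
    )

    response = client.chat.completions.create(
        model="gpt-4",  # model choice is an assumption of this sketch
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

In such a setup, a response labelling the threat "realistic" despite the mTLS assumption would be an instance of the bias the study reports.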

Original language: English
Title of host publication: EASE '24
Subtitle of host publication: Proceedings of the 28th International Conference on Evaluation and Assessment in Software Engineering
Publisher: Association for Computing Machinery
Pages: 458-459
Number of pages: 2
ISBN (Electronic): 9798400717017
DOIs
Publication status: Published - 2024
Event: 28th International Conference on Evaluation and Assessment in Software Engineering, EASE 2024 - Salerno, Italy
Duration: 18 Jun 2024 - 21 Jun 2024

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 28th International Conference on Evaluation and Assessment in Software Engineering, EASE 2024
Country/Territory: Italy
City: Salerno
Period: 18/06/24 - 21/06/24

Bibliographical note

Publisher Copyright: © 2024 Owner/Author.

Keywords

  • ChatGPT
  • Large Language Models
  • Security Threat Validation
