Abstract
The use of Large Language Models (LLMs) is already well understood in software engineering as well as in security and privacy. Yet, little is known about the effectiveness of LLMs in threat validation, or about the possibility of biased output when assessing security threats for correctness. To address this research gap, we present a pilot study investigating the effectiveness of ChatGPT in the validation of security threats. One main observation from the results was that ChatGPT assessed bogus threats as realistic, regardless of provided assumptions that negated the feasibility of those threats occurring.
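To make the validation setup concrete, the sketch below shows one way such a threat-validation query could be posed to ChatGPT through the OpenAI chat API. This is a minimal illustration under stated assumptions, not the study's actual protocol: the model name, prompt wording, the stated assumptions, and the example "bogus" threat are all illustrative choices introduced here.

```python
# Minimal sketch: asking an LLM whether a security threat is realistic
# given explicit assumptions. Model, prompts, and the example threat are
# hypothetical, not taken from the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ASSUMPTIONS = (
    "The service is reachable only over an internal network, and all "
    "traffic is mutually authenticated and encrypted with TLS."
)

# A deliberately bogus threat: the assumptions above negate its feasibility.
THREAT = (
    "An anonymous Internet attacker intercepts plaintext traffic to the "
    "service and steals session tokens."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative choice of model
    messages=[
        {
            "role": "system",
            "content": (
                "You are a security analyst. Given the stated assumptions, "
                "decide whether the threat is realistic and justify briefly."
            ),
        },
        {
            "role": "user",
            "content": (
                f"Assumptions: {ASSUMPTIONS}\n"
                f"Threat: {THREAT}\n"
                "Is this threat realistic under these assumptions?"
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

Under the paper's observation, a query of this kind may still be judged realistic by the model even though the stated assumptions rule the threat out.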
| Original language | English |
|---|---|
| Title of host publication | EASE '24 |
| Subtitle of host publication | Proceedings of the 28th International Conference on Evaluation and Assessment in Software Engineering |
| Publisher | Association for Computing Machinery |
| Pages | 458-459 |
| Number of pages | 2 |
| ISBN (Electronic) | 9798400717017 |
| Publication status | Published - 2024 |
| Event | 28th International Conference on Evaluation and Assessment in Software Engineering, EASE 2024, Salerno, Italy, 18 Jun 2024 → 21 Jun 2024 |
Publication series
| Name | ACM International Conference Proceeding Series |
|---|---|
Conference
| Conference | 28th International Conference on Evaluation and Assessment in Software Engineering, EASE 2024 |
|---|---|
| Country/Territory | Italy |
| City | Salerno |
| Period | 18/06/24 → 21/06/24 |
Bibliographical note
Publisher Copyright: © 2024 Owner/Author.
Keywords
- ChatGPT
- Large Language Models
- Security Threat Validation