Abstract
Following the recent release of AI assistants, such as OpenAI’s ChatGPT and GitHub Copilot, the software industry quickly adopted these tools for software development tasks, e.g., generating code or consulting AI for advice. While recent research has demonstrated that AI-generated code can contain security issues, how software professionals balance AI assistant usage and security remains unclear. This paper investigates how software professionals use AI assistants in secure software development, what security implications and considerations arise, and what impact they foresee on security in software development. We conducted 27 semi-structured interviews with software professionals, including software engineers, team leads, and security testers. We also reviewed 190 relevant Reddit posts and comments to gain insights into the current discourse surrounding AI assistants for software development. Our analysis of the interviews and Reddit posts finds that, despite many security and quality concerns, participants widely use AI assistants for security-critical tasks, e.g., code generation, threat modeling, and vulnerability detection. Participants’ overall mistrust leads them to check AI suggestions in ways similar to how they check human-written code. However, they expect improvements and, therefore, heavier use of AI for security tasks in the future. We conclude with recommendations for software professionals to critically check AI suggestions, for AI creators to improve suggestion security and capabilities for ethical security tasks, and for academic researchers to consider general-purpose AI in software development.
| Original language | English |
|---|---|
| Title of host publication | CCS '24: Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security |
| Publisher | Association for Computing Machinery, Inc |
| Pages | 2726-2740 |
| Number of pages | 15 |
| ISBN (Electronic) | 9798400706363 |
| DOIs | |
| Publication status | Published - 2024 |
| Event | 31st ACM SIGSAC Conference on Computer and Communications Security, CCS 2024 - Salt Lake City, United States Duration: 14 Oct 2024 → 18 Oct 2024 |
Conference
| Conference | 31st ACM SIGSAC Conference on Computer and Communications Security, CCS 2024 |
|---|---|
| Country/Territory | United States |
| City | Salt Lake City |
| Period | 14/10/24 → 18/10/24 |
Bibliographical note
Publisher Copyright: © 2024 Copyright held by the owner/author(s).
Funding
We thank our anonymous reviewers and shepherd for their valuable feedback and for helping us to improve this paper. We also acknowledge Dagstuhl Seminar 23181, in which most authors participated and where this project started. This research was funded by the VolkswagenStiftung Niedersächsisches Vorab – ZN3695. It was also partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC 2092 CASA – 390781972, and supported by U.S. National Science Foundation (NSF) Awards #2247141 and #2312321. The research was further supported by the European Union Horizon Europe programme – Cybersecurity Sec4AI4Sec Award #101120393 – and by NWO, the Dutch Research Council, under the Kennis- en Innovatieconvenant (KIC) HEWSTI Award #KICH1.VE01.20.004. This work was also supported by EPSRC grants REPHRAIN: National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online (EP/V011189/1) and Equitable Privacy (EP/W025361/1).
| Funders | Funder number |
|---|---|
| VolkswagenStiftung Niedersächsisches Vorab | ZN3695 |
| Deutsche Forschungsgemeinschaft (DFG) | EXC 2092 CASA – 390781972 |
| National Science Foundation (NSF) | 2247141, 2312321 |
| European Union Horizon Europe programme (Sec4AI4Sec) | 101120393 |
| Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO) – Kennis- en Innovatieconvenant (KIC) | KICH1.VE01.20.004 |
| Engineering and Physical Sciences Research Council (EPSRC) | EP/V011189/1, EP/W025361/1 |
Keywords
- AI Assistants
- Generative AI
- Interviews
- Large Language Models
- LLM
- Software Development
- Software Security