Using AI Assistants in Software Development: A Qualitative Study on Security Practices and Concerns

Jan H. Klemmer, Stefan Albert Horstmann, Nikhil Patnaik, Cordelia Ludden, Cordell Burton, Carson Powers, Fabio Massacci, Akond Rahman, Daniel Votipka, Heather Richter Lipford, Awais Rashid, Alena Naiakshina, Sascha Fahl

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Academic › peer-review


Abstract

Following the recent release of AI assistants, such as OpenAI’s ChatGPT and GitHub Copilot, the software industry quickly adopted these tools for software development tasks, e.g., generating code or consulting AI for advice. While recent research has demonstrated that AI-generated code can contain security issues, how software professionals balance AI assistant usage and security remains unclear. This paper investigates how software professionals use AI assistants in secure software development, what security implications and considerations arise, and what impact they foresee on security in software development. We conducted 27 semi-structured interviews with software professionals, including software engineers, team leads, and security testers. We also reviewed 190 relevant Reddit posts and comments to gain insights into the current discourse surrounding AI assistants for software development. Our analysis of the interviews and Reddit posts finds that, despite many security and quality concerns, participants widely use AI assistants for security-critical tasks, e.g., code generation, threat modeling, and vulnerability detection. Participants’ overall mistrust leads them to check AI suggestions much as they would review human-written code. However, they expect improvements and, therefore, heavier use of AI for security tasks in the future. We conclude with recommendations for software professionals to critically check AI suggestions, for AI creators to improve suggestion security and capabilities for ethical security tasks, and for academic researchers to consider general-purpose AI in software development.

Original language: English
Title of host publication: CCS '24: Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security
Publisher: Association for Computing Machinery, Inc
Pages: 2726-2740
Number of pages: 15
ISBN (Electronic): 9798400706363
DOIs
Publication status: Published - 2024
Event: 31st ACM SIGSAC Conference on Computer and Communications Security, CCS 2024 - Salt Lake City, United States
Duration: 14 Oct 2024 - 18 Oct 2024

Conference

Conference: 31st ACM SIGSAC Conference on Computer and Communications Security, CCS 2024
Country/Territory: United States
City: Salt Lake City
Period: 14/10/24 - 18/10/24

Bibliographical note

Publisher Copyright:
© 2024 Copyright held by the owner/author(s).

Funding

We thank our anonymous reviewers and shepherd for their valuable feedback and for helping us improve this paper. We also acknowledge Dagstuhl Seminar 23181, in which most authors participated and where this project started. This research was funded by VolkswagenStiftung Niedersächsisches Vorab – ZN3695. This research was also partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC 2092 CaSa – 390781972 and supported by the U.S. National Science Foundation (NSF) Award # 2247141 and Award # 2312321. The research was also partly supported by the European Union Horizon Europe program - Cybersecurity Sec4AI4Sec Award # 101120393 and by NWO, the Dutch Research Council - Kennis- en Innovatieconvenant (KIC) HEWSTI Award # KICH1.VE01.20.004. This work was also supported by EPSRC Grants REPHRAIN: National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online (EPSRC Grant EP/V011189/1) and Equitable Privacy (EPSRC Grant EP/W025361/1).

Funders and funder numbers:
Nederlandse Organisatie voor Wetenschappelijk Onderzoek - Kennis- en Innovatieconvenant (KIC): KICH1.VE01.20.004
Deutsche Forschungsgemeinschaft: EXC 2092 CaSa - 390781972
VolkswagenStiftung Niedersächsisches Vorab: ZN3695
European Union Horizon Europe program: 101120393
Equitable Privacy: EP/W025361/1
National Science Foundation: 2312321, 2247141
Engineering and Physical Sciences Research Council: EP/V011189/1

Keywords

- AI Assistants
- Generative AI
- Interviews
- Large Language Models
- LLM
- Software Development
- Software Security
