Interviews

Software development

by Mark Rowe

Is Generative AI a new era in application development? asks John Walsh, Senior Product Marketing Manager, Developer Security at the cyber firm CyberArk.

In the early days of computing, software development was repetitive, laborious and primitive. Programmers used low-level languages like assembly, or even raw binary, to input hardware-specific instructions and manage memory manually. There was a steep learning curve, and it took a great deal of custom code just to write relatively basic programs. Advances in technology, code libraries and collaborative software communities like Stack Overflow have since made coding more efficient, enabling development teams to produce sophisticated applications more quickly.

The saying among developers used to be “Google, copy and paste”: developers leaned heavily on their peers in communities like Stack Overflow for code snippets that they could reuse in their own projects.

Generative AI (Artificial Intelligence) tools like ChatGPT are the next evolution in software development, as they can generate vast numbers of code snippets in various programming languages, beyond the limits of what a human community can produce. However, as these tools become more prevalent, the potential software supply chain security risks grow with them.

The problem is that AI tools can and will recommend code snippets that contain security vulnerabilities. This is why human developers must still review, and remain accountable for, any code that AI produces. Developers therefore need to be knowledgeable about security practices to ensure the code produced by AI tools is secure and does not introduce vulnerabilities unintentionally. Rather than replacing developers, tools like ChatGPT require them to acquire new skills related to identity security, since human accountability for the code remains paramount regardless of its machine-generated origin.
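To make that risk concrete, consider a small illustrative sketch (the table, queries and attack payload here are hypothetical, not taken from any real AI output): a database query built by string concatenation, of the kind an assistant might plausibly suggest, alongside the parameterised version a security-aware reviewer would insist on.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # The pattern an AI tool might plausibly generate: concatenating user
    # input into the query string leaves it open to SQL injection.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn, username):
    # The reviewed version: a parameterised query lets the database driver
    # treat the input as data, never as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

# Minimal demonstration against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # injection returns every row: 2
print(len(find_user_safe(conn, payload)))    # parameterised query returns: 0
```

Both functions compile and run without complaint, which is exactly why this class of flaw survives a casual glance at generated code: only a reviewer who knows what to look for catches it.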

The trajectory of app development

One of the aspects I find most enjoyable about software development is its constant evolution. As a developer, you are always seeking ways to enhance efficiency and avoid duplicating code, following the principle of “don’t repeat yourself.” Throughout history, humans have sought means to automate repetitive tasks. From a developer’s perspective, eliminating repetitive coding allows us to construct superior and more intricate applications.

AI bots are not the first technology to assist us in this endeavour. Instead, they represent the next phase in the advancement of application development, building upon previous achievements.

ChatGPT: is it really the end of Google, copy, and paste development?

Prior to AI-powered tools, developers would search on platforms like Google and Stack Overflow for code solutions, comparing multiple answers to find the most suitable one. With ChatGPT, developers specify the programming language and required functionality, receiving what the AI tool deems the best answer. This saves time by reducing the amount of code developers need to write. By automating repetitive tasks, ChatGPT enables developers to focus on higher-level concepts, resulting in advanced applications and faster development cycles.

However, there are caveats to using AI tools. They provide a single answer, without the validation from multiple sources that you would see in a collective software development community, so developers need to verify any AI solution themselves. In addition, because the tool is still at a beta stage, code served by ChatGPT should be evaluated and cross-checked before being used in any application.

There are plenty of examples of breaches that started with someone copying over code and not checking it thoroughly. Think back to the Heartbleed exploit, a security bug in the widely used OpenSSL library that exposed hundreds of thousands of websites, servers and other devices which used the code.

Because the library was so widely used, the thought was, of course, someone had checked it for vulnerabilities. But instead, the vulnerability persisted for years, quietly used by attackers to exploit vulnerable systems.

This is the darker side of ChatGPT: attackers also have access to the tool. While OpenAI has built some safeguards to prevent it from answering questions on problematic subjects like code injection, the CyberArk Labs team has already uncovered ways in which the tool could be used for malicious ends, such as creating polymorphic malware or producing malicious code more rapidly. Even with safeguards in place, developers must exercise caution.

Best practices when using code from AI tools

With these potential security risks in mind, there are some important best practices to follow when using code generated by AI tools like ChatGPT. First, check any solution ChatGPT generates against another source, such as a trusted community or colleagues. Then make sure the code follows best practices for granting access to databases and other critical resources: the principle of least privilege, secrets management, and auditing and authenticating access to sensitive resources.
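Two of those practices, keeping secrets out of source code and connecting with a narrowly scoped role, can be sketched in a few lines. This is a minimal illustration, not any particular product's API; the environment variable name and the read-only role name are assumptions for the example.

```python
import os

def get_db_credentials():
    # Secrets management: read the credential from the environment (in
    # production, a vault or secrets manager) rather than hardcoding it,
    # as AI-generated samples often do.
    password = os.environ.get("REPORTS_DB_PASSWORD")  # hypothetical variable name
    if password is None:
        raise RuntimeError(
            "REPORTS_DB_PASSWORD is not set; refusing to fall back to a default"
        )
    # Least privilege: connect as a role limited to read-only reporting
    # queries, never as an administrative account (role name is illustrative).
    return {"user": "reports_readonly", "password": password}

# Stand-in for a value injected by a real secret store at deploy time.
os.environ["REPORTS_DB_PASSWORD"] = "example-only"
creds = get_db_credentials()
print(creds["user"])
```

The design point is that the generated code should fail loudly when the secret is absent, rather than silently using an embedded default that later ships to production.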

Double-check the code for any potential vulnerabilities, and be mindful of what you are putting into ChatGPT as well. It is unclear how securely the information you enter is handled, so be careful with highly sensitive inputs, and ensure you are not accidentally exposing any personally identifiable information that could run afoul of compliance regulations.
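One lightweight precaution is to redact obvious identifiers before pasting logs or data into a prompt. The sketch below, an assumption for illustration rather than a complete scrubber, strips anything shaped like an email address; a real pipeline would need far broader coverage (names, phone numbers, keys, tokens).

```python
import re

# Rough pattern for email-shaped strings; deliberately simple for illustration.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact(text):
    # Replace anything that looks like an email address before the text
    # leaves your machine for a third-party AI service.
    return EMAIL.sub("[REDACTED]", text)

print(redact("error for user jane.doe@example.com at 10:42"))
# prints: error for user [REDACTED] at 10:42
```

Even a crude filter like this catches the most common accidental disclosure, and it runs locally, so nothing sensitive is sent out in the first place.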

Ultimately, a machine cannot be held accountable, so developers are now responsible for the machine-generated code without the traditional peer review and community safeguards. They should familiarise themselves with identity security best practices and collaborate with security teams to appropriately validate the code. The accountability lies with the human users of these tools, as they bear the consequences of any issues or breaches. With careful evaluation and adherence to cybersecurity practices, ChatGPT and similar AI-powered tools can elevate software development to new heights.


© 2024 Professional Security Magazine. All rights reserved.
