Reliance on Artificial Intelligence Produces Less Secure Code – Stanford

Lately, many artificial intelligence services have been developed to improve our efficiency and productivity on a daily basis, and this includes the software development industry.


Recently, a paper by researchers at Stanford University found that source code written with the help of artificial intelligence services such as GitHub Copilot contained a number of security vulnerabilities when tested.


The Stanford study recruited 47 participants, a mix of undergraduate and graduate students as well as professionals from the programming industry.


The test used only the C programming language, and according to the Stanford report just 67 percent of the group that worked with AI assistance produced a satisfactory solution.


Among the errors and vulnerabilities found were integer overflows and SQL injection. Participants who did not use GitHub Copilot appeared less prone to these particular issues, though their code still showed weaknesses in other areas. The sketch below illustrates the first of these bug classes.
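
To make the integer-overflow class of bug concrete, here is a minimal C sketch. This is our own illustration, not code from the Stanford paper, and the function names are hypothetical; it contrasts an overflow-prone multiplication of the kind an AI assistant might suggest with a checked variant.

#include <stdio.h>
#include <limits.h>

/* Overflow-prone: signed overflow is undefined behavior in C and in
 * practice often wraps to a nonsense value for large inputs. */
int total_size_unchecked(int items, int size_per_item) {
    return items * size_per_item;
}

/* Checked variant: refuse inputs whose product cannot be represented. */
int total_size_checked(int items, int size_per_item, int *out) {
    if (items < 0 || size_per_item <= 0 || items > INT_MAX / size_per_item)
        return -1;  /* would exceed INT_MAX: reject */
    *out = items * size_per_item;
    return 0;
}

int main(void) {
    int total;
    /* 100000 * 100000 exceeds INT_MAX, so the unchecked version would
     * misbehave here; the checked version rejects the input instead. */
    if (total_size_checked(100000, 100000, &total) != 0)
        printf("checked: rejected oversized input\n");
    return 0;
}

Tooling can also support the kind of review the researchers recommend; for example, building with Clang or GCC's -fsanitize=signed-integer-overflow sanitizer reports the unchecked case at runtime.
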


The Stanford researchers also say there is nothing wrong with using Copilot, but programmers should review the generated code to make sure these vulnerabilities do not cause problems in the software they develop in the future.
