How Snyk ensures safe adoption of AI

SnykSec - Mar 28 - Dev Community

The AI revolution is reshaping industries, processes, and the very fabric of software development. As we navigate through this transformative era, it's crucial to understand not only the evolution and application of AI in software development but also the innovative ways in which Snyk, the industry-leading developer security platform, is harnessing AI to enhance security.

The evolution of AI in software development

The journey of AI dates back to the 1950s, marked by ambitious goals and significant milestones. Initially, AI research focused on symbolic AI (a.k.a. "Good Old-Fashioned AI"). This school of AI uses human-readable symbols to represent problems and logic to solve them. This early AI was rule-based and laid the foundation for rule-based systems and expert systems, offering a glimpse into the potential of machines mimicking human reasoning.

Transitioning from the symbolic era in the late 20th century, AI underwent a significant transformation, embracing the principles of machine learning. This pivotal shift enabled AI systems to evolve by learning directly from data. Currently, we are navigating the exciting era of deep learning and neural networks, where AI systems have the ability to process and analyze vast amounts of data by mimicking the intricate neural patterns of the human brain. 

The further advancement of deep learning led to the advent of generative AI (genAI), which marked a crucial shift towards models that could learn from data, generate new content, and even write code. Tools like GPT (generative pre-trained transformer) have demonstrated remarkable capabilities in understanding, predicting, and generating human-like content — text, images, and videos — opening new avenues for AI applications in software development. However, due to the way the technology works, all genAI tools may occasionally generate incorrect or even completely fabricated but believable outputs, the latter known as "hallucinations".

AI in modern software development: Applications and examples

Today, developers use AI in various facets of software development, from automating mundane tasks to enhancing creativity and efficiency. The most common uses include: 

  • Generating code
  • Pair programming
  • Refactoring code
  • Providing templates
  • Adding comments
  • Summarizing code
  • Writing “readme” files
  • Troubleshooting bugs/issues
  • Code reviews

Of these tasks, the top three use cases (generating code, pair programming, and refactoring code) pose the highest risk to enterprise and code security. They add, modify, and delete source code, which can lead to the unintentional introduction of software vulnerabilities with serious consequences for organizations.

Research from McKinsey shows that developers can complete coding tasks twice as fast with genAI coding assistants. On the other hand, the 2023 Snyk AI Coding Security report and other studies, like this one from Stanford University, have shown that AI-powered assistants also make code less secure.

The double-edged sword: Benefits and risks

Whilst genAI coding assistants are helping developers boost their productivity, they are also generating and adding exponential amounts of code to applications. This leads to multiple risks associated with AI-generated code:

  • Quality and reliability: AI-generated code’s quality and reliability may vary due to training on incomplete or biased datasets. This can lead to AI hallucinations, errors, and suggestions of non-existent packages that enable supply-chain attacks (a minimal registry check illustrating this risk is sketched after this list).
  • Blind trust: The AI-generated code can give a false sense of assurance and security to developers. However, unquestioning reliance on AI-generated code is not much different from blindly copy-pasting code from Stack Overflow. AI-generated code should be treated the same as any other new code being introduced, and must be tested for security issues before it is committed.
  • Security vulnerabilities: Due to flaws in AI models or biased training data, AI-generated code can introduce security vulnerabilities. For example, we have shown that Copilot may amplify existing problems in insecure codebases and introduce new security vulnerabilities.
  • License infringement and IP violations: AI-generated code takes a “black box” approach that may introduce code from different sources and license types into your applications, which can put your organization at risk of license infringement and IP violations. Alternatively, if a GPT is actively trained on the queries it receives, IP could accidentally be handed over as training material.
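
As a concrete illustration of the hallucinated-package risk above, a simple registry lookup can catch a non-existent dependency before anything is installed. The sketch below is a minimal TypeScript example assuming Node 18+ (for the built-in `fetch`) and unscoped npm package names; it is not a Snyk feature, and the second package name is invented.

```typescript
// Minimal sketch: check that an AI-suggested dependency actually exists on the
// public npm registry before installing it. Assumes Node 18+ (built-in fetch)
// and unscoped package names; this illustrates the risk and is not a Snyk feature.
async function packageExists(name: string): Promise<boolean> {
  const res = await fetch(`https://registry.npmjs.org/${name}`);
  return res.ok; // the registry returns 404 for packages that do not exist
}

async function main(): Promise<void> {
  // "left-pad" is a real package; "totally-real-auth-helper" stands in for a
  // hallucinated suggestion (the name is invented for this example).
  for (const name of ["left-pad", "totally-real-auth-helper"]) {
    const exists = await packageExists(name);
    console.log(`${name}: ${exists ? "exists on npm" : "NOT FOUND - possible hallucination"}`);
  }
}

main();
```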

How Snyk empowers developers to safely adopt AI

AI's risks and benefits have led to the emergence of a new shift left in the software development life cycle (SDLC), and a corresponding need to empower developers to use AI coding assistants securely without slowing them down. This can be done by pairing the use of AI coding tools with an industry-leading security platform like Snyk that can provide: 

  • Security at the speed of AI development by scanning source code directly from the IDE with real-time analysis.
  • Accurate results, with reduced false positives and false negatives made possible with a proprietary, hybrid AI approach that incorporates thorough multi-file, interfile, and dataflow analysis, and combines this with extensive human expert fine-tuning throughout.
  • Tight integration with developers' Git workflows, development tools, and CI/CD to provide complete visibility of issues early and throughout the SDLC (a minimal pre-commit sketch follows below).
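
To picture what that Git integration can look like in practice, here is a minimal pre-commit sketch that gates a commit on a local Snyk Code scan. It assumes the Snyk CLI is installed and authenticated, and relies only on the CLI's behaviour of exiting non-zero when issues are found; the hook wiring and file name are illustrative rather than a prescribed Snyk setup.

```typescript
// pre-commit.ts - a minimal sketch of gating commits on a local Snyk Code scan.
// Assumes the Snyk CLI is installed and authenticated (`snyk auth`); the hook
// wiring and file name are illustrative, not a prescribed Snyk setup.
import { spawnSync } from "node:child_process";

// `snyk code test` scans local source code with Snyk Code (SAST) and exits
// with a non-zero status when security issues are found.
const result = spawnSync("snyk", ["code", "test"], { stdio: "inherit" });

if (result.error) {
  console.error("Could not run the Snyk CLI:", result.error.message);
  process.exit(2); // fail closed: block the commit if the scan cannot run
}

if (result.status !== 0) {
  console.error("Snyk Code found issues; fix them before committing.");
  process.exit(result.status ?? 1);
}

console.log("Snyk Code scan passed.");
```

Wired in as a Git pre-commit hook (for example via Husky, or a plain `.git/hooks/pre-commit` script running `npx ts-node pre-commit.ts`), this keeps AI-generated changes from reaching the repository unscanned.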

DeepCode AI: Snyk’s hybrid AI for static analysis

DeepCode AI is our proprietary AI technology that powers Snyk and combines the two prominent schools of AI: rule-based symbolic AI and neural/ML-based genAI. When Snyk scans source code, it parses it to create event graphs. These event graphs provide an abstracted view of the dataflow (sources, sanitizers, and sinks) and are passed through Snyk’s symbolic, rule-based AI engine to detect code security issues. When a code security issue is detected, Snyk reports the vulnerability, along with its dataflow from source to sink, back to the user. Using ML, DeepCode AI continuously learns from open source repositories (with very specific licenses that allow commercial use) to detect and create new rules, which are added to the symbolic AI engine after careful vetting and curation by security analysts. 
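
To make the source/sanitizer/sink idea concrete, here is a deliberately simplified taint-tracking sketch. It is not Snyk's engine or its event-graph format; the node kinds, graph shape, and example flow are all invented for illustration.

```typescript
// A toy dataflow check: flag any path from a taint source to a sink that does
// not pass through a sanitizer. The node kinds, graph shape, and traversal are
// illustrative only and are not DeepCode AI's event-graph representation.
type NodeKind = "source" | "sanitizer" | "sink" | "plain";

interface FlowNode {
  id: string;
  kind: NodeKind;
  next: string[]; // ids of downstream nodes in the dataflow graph
}

function findUnsanitizedFlows(nodes: Map<string, FlowNode>): string[][] {
  const findings: string[][] = [];

  // Depth-first walk from every source, tracking the current path and whether
  // a sanitizer has been seen along it.
  const walk = (id: string, path: string[], sanitized: boolean): void => {
    if (path.includes(id)) return; // guard against cycles
    const node = nodes.get(id);
    if (!node) return;

    const nowSanitized = sanitized || node.kind === "sanitizer";
    const nextPath = [...path, id];

    if (node.kind === "sink") {
      if (!nowSanitized) findings.push(nextPath); // tainted data reaches a sink
      return;
    }
    for (const nextId of node.next) walk(nextId, nextPath, nowSanitized);
  };

  for (const node of nodes.values()) {
    if (node.kind === "source") walk(node.id, [], false);
  }
  return findings;
}

// Example: request parameter -> string concatenation -> SQL query, no sanitizer.
const graph = new Map<string, FlowNode>([
  ["req.query.name", { id: "req.query.name", kind: "source", next: ["concat"] }],
  ["concat", { id: "concat", kind: "plain", next: ["db.query"] }],
  ["db.query", { id: "db.query", kind: "sink", next: [] }],
]);

console.log(findUnsanitizedFlows(graph)); // [["req.query.name", "concat", "db.query"]]
```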

When a vulnerability is detected, Snyk’s new DeepCode AI Fix feature generates fix candidates for the security issue. Before DeepCode AI recommends a fix to the user, it validates the fixed version of the code by sending it back to the symbolic AI engine for another security scan. This ensures that the suggested code fixes the issue and does not add any new security problems. 
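
In schematic terms, that generate-then-verify loop looks something like the sketch below. The scanner and fix generator are passed in as plain functions, and the stubs in the usage example are hypothetical; only the control flow is the point.

```typescript
// A schematic generate-then-verify loop for AI-suggested fixes. The scanner and
// fix generator are passed in as functions; the stubs in the usage example are
// hypothetical and do not reflect DeepCode AI's real models or rules.
interface Issue {
  ruleId: string;
  message: string;
}

type Scanner = (code: string) => Issue[]; // stands in for the symbolic, rule-based engine
type FixGenerator = (code: string, issue: Issue) => string[]; // stands in for the ML fix model

function suggestValidatedFix(
  code: string,
  issue: Issue,
  scan: Scanner,
  generateFixes: FixGenerator,
): string | undefined {
  const baselineCount = scan(code).length;

  for (const candidate of generateFixes(code, issue)) {
    const remaining = scan(candidate);
    const stillPresent = remaining.some((i) => i.ruleId === issue.ruleId);
    // Accept a candidate only if the original issue is gone and no new issues appeared.
    if (!stillPresent && remaining.length < baselineCount) {
      return candidate;
    }
  }
  return undefined; // nothing passed the re-scan; leave the fix to the developer
}

// Toy usage: a "scanner" that flags string-concatenated SQL and a "generator"
// that proposes a parameterised query.
const sqlIssue: Issue = { ruleId: "sql-injection", message: "Unsanitized input in query" };
const scan: Scanner = (code) => (code.includes('" + userInput') ? [sqlIssue] : []);
const generate: FixGenerator = () =>
  ['db.query("SELECT * FROM users WHERE id = ?", [userInput])'];

const original = 'db.query("SELECT * FROM users WHERE id = " + userInput)';
console.log(suggestValidatedFix(original, sqlIssue, scan, generate));
```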

Snyk’s hybrid AI approach adds robustness, accuracy, and speed, empowering developers to perform real-time scanning in their IDEs, significantly improving their productivity and reducing context-switching. 

Snyk features using AI today

While AI has only recently taken the spotlight, Snyk has been harnessing its power for years. In fact, AI is a big part of the reason we've been able to provide industry-leading speed and accuracy to developers and enterprises. Multiple pieces of the Snyk platform leverage AI: 

  1. Snyk Vulnerability Database (VulnDB): Snyk monitors security databases, social networks, and community channels for candidates of interest to our security research team. Using different AI techniques, our teams uncover existing and new exploits for open source libraries and frameworks. This vulnerability database is used by both the Snyk Open Source and Snyk Container products.
  2. Snyk Code uses AI for fast, real-time analysis of code security issues. The SAST analysis performed by Snyk uses the symbolic AI approach, which makes it more accurate (Snyk achieved a 72% OWASP benchmark accuracy, versus the 53% accuracy of another reputable brand), and provides data-flow analysis to developers.
  3. Addition of new Snyk Code rules: Snyk uses machine learning algorithms to build and maintain its SAST rulesets by continuously learning from hundreds of specifically selected open source projects. Our security experts then vet the rules to ensure Snyk Code is up-to-date, dynamic, and reliable.
  4. DeepCode AI Fix with Snyk Code: Snyk uses DeepCode AI Fix to provide fixes to developers directly in their IDEs, using our hybrid AI approach to ensure that developers can not only find issues fast but also fix them effectively and early. Research shows that DeepCode AI Fix outperforms various baseline models with its efficient program analysis.
  5. Snyk Code custom rules: Snyk enables teams to create custom rules for their SAST scans using suggestive AI, so that teams can extend Snyk Code rules for their unique business needs.
  6. Reachable vulnerability analysis: Snyk’s DeepCode AI can quickly traverse Snyk’s Vulnerability Database to conduct relevant function analysis and create a call graph to gauge whether a vulnerable function associated with an open source package is being called from your application, putting it at higher risk of exploitation. This feature is currently in closed beta, with support for popular languages like Java, JavaScript, and TypeScript. A minimal reachability sketch follows below.
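
To illustrate the reachability idea (not Snyk's actual implementation), the sketch below builds a toy call graph and checks whether a vulnerable function can be reached from the application's entry points. All function and package names are invented for the example.

```typescript
// Toy call-graph reachability: is a known-vulnerable library function reachable
// from the application's entry points? All names here are invented for illustration.
type CallGraph = Map<string, string[]>; // caller -> callees

function isReachable(graph: CallGraph, entryPoints: string[], target: string): boolean {
  const visited = new Set<string>();
  const queue = [...entryPoints];

  // Breadth-first traversal of the call graph from the entry points.
  while (queue.length > 0) {
    const fn = queue.shift()!;
    if (fn === target) return true;
    if (visited.has(fn)) continue;
    visited.add(fn);
    queue.push(...(graph.get(fn) ?? []));
  }
  return false;
}

// Hypothetical app calling into a library with one known-vulnerable function.
const callGraph: CallGraph = new Map([
  ["app.main", ["app.renderPage", "lib.parseConfig"]],
  ["app.renderPage", ["lib.sanitizeHtml"]],
  ["lib.parseConfig", ["lib.deepMerge"]], // suppose lib.deepMerge is the vulnerable function
]);

console.log(isReachable(callGraph, ["app.main"], "lib.deepMerge")); // true -> prioritise this issue
console.log(isReachable(callGraph, ["app.main"], "lib.unusedHelper")); // false -> lower priority
```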

Safely adopt AI with Snyk

AI-powered tools and solutions can definitely boost developer productivity and improve time-to-market for enterprises. However, as we have seen, there need to be suitable guardrails in place to prudently manage the overall risk profile of applications and to protect enterprises in this dynamic landscape. With ground-breaking technology and leadership in security expertise, Snyk can be your security partner, helping you innovate with AI, confidently and safely.
