23 February 2023

How to use Dracon to satisfy an organisation's security use cases

This is the blogpost form of the AppSec Dublin 2023 talk introducing Dracon.
This is the second blogpost in the series; you can read the previous one here.

Introduction

Modern software applications are deployed via containers to multiple cloud environments at unprecedented speed. In such a demanding environment, automation is a must at every step of the pipeline. At the same time, security has evolved significantly beyond simply running tools.

To cover at least the most important aspects of application security, you need to:

  • Scan code and infrastructure as code with some form of SAST.
  • Scan deployed applications with a DAST tool.
  • Assure your supply chain with a solid SCA (Software Composition Analysis) solution.
  • And of course, maintain a high-quality SBOM, both for each individual application and as an aggregate.

Despite all the advancements in this area, application security is still driven by tickets, metrics on tickets, and human action on those tickets. On top of that, we still need to run a tool and send all the results to developers for triaging and fixing. This is not only inefficient but also expensive, as it consumes the most costly and valuable resources in an organisation. Every day, your senior developers and security architects go through hundreds of such tickets to eliminate duplicates and false positives and to identify the bugs that are critical to your business operations.

In order to be productive, organisations need to find a way to reconcile development speed and automation with the requirements of a multitude of different stakeholders.

An automated solution is required to satisfy:

  • The security and reporting needs of multiple development teams with different toolsets,
  • The CISO’s need for posture observability and management, and
  • The DevOps need for speed and automation.

Usually, these groups compromise: they settle for sub-standard UX, or for systems that are not really suited to their use cases and have been jerry-rigged to do something they weren’t designed for.

In this blogpost we demonstrate and provide guidelines on how to satisfy the requirements of every organisational function using a new, open source solution released at AppSec Dublin 2023.

Context

Let’s assume an organisation that writes Go and Python in a monorepo and empowers its developers with significant autonomy, so that each team is free to select and manage its own tools.

They also use infrastructure as code via Terraform, and there are regulatory requirements to meet and metrics to track. In this organisation (let’s be honest, like in most!) the security team is small yet involved in every step of the development process, running a comprehensive security champions programme.

With that out of the way, let’s set up our baseline.

Baseline

To meet high security requirements, there must, at a minimum, be baseline scanning support for all the technologies in use. The standard industry tools that can provide this are:

  • Bandit – SAST for Python
  • Safety (pip-safety) – Composition Analysis for Python
  • Gosec – SAST for Go
  • Nancy – Composition Analysis for Go
  • Trivy – Composition Analysis for Containers, SBOM generation and more
  • Tfsec – Infrastructure as Code SAST

The Dracon configuration that runs the tools above looks like the following:

kustomization.yaml

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

nameSuffix: -github-com-python-project
namespace: default

resources:
 - https://github.com/ocurity/dracon//components/base/

components:
 - https://github.com/ocurity/dracon//components/sources/git/
 - https://github.com/ocurity/dracon//components/producers/aggregator/
 - https://github.com/ocurity/dracon//components/producers/python-bandit/
 - https://github.com/ocurity/dracon//components/producers/python-pip-safety/
 - https://github.com/ocurity/dracon//components/producers/golang-gosec/
 - https://github.com/ocurity/dracon//components/producers/golang-nancy/
 - https://github.com/ocurity/dracon//components/producers/terraform-tfsec/
 - https://github.com/ocurity/dracon//components/producers/docker-trivy/
 - https://github.com/ocurity/dracon//components/enrichers/aggregator/

And pipelinerun.yaml

---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
 generateName: dracon-github-com-python-project-
 namespace: default
spec:
 serviceAccountName: dracon
 pipelineRef:
   name: dracon-github-com-python-project
 params:
 - name: repository_url
   value: <a git url>
 - name: consumer-elasticsearch-url
   value: http://quickstart-es-http:9200
 - name: producer-docker-trivy-target
   value: "ubuntu:latest"
 - name: producer-docker-trivy-format
   value: sarif
 - name: producer-docker-trivy-command
   value: image
 workspaces:
 - name: source-code-ws
   subPath: source-code
   volumeClaimTemplate:
     spec:
       accessModes:
         - ReadWriteOnce
       resources:
         requests:
           storage: 1Gi

This pipeline produces thousands of results per scan without any way to organise them easily, requiring a lot of time-consuming manual work. Any developer using it would have to triage the results on every run, inevitably missing important findings amongst all the noise.

Satisfying the needs of Developers

A common pain point developers share is having to look at multiple places to find or correlate information. They enjoy tool integration and prefer receiving scanning results and tickets that are relevant within the context they are working on. They have strict deadlines so it is key to avoid developers wasting time with irrelevant information. Pushing them to find the needle in a completely out-of-context haystack, full of duplicates and false positives, will soon lead to noise fatigue.

With these multiple but common requirements in mind, let’s try to make developers happier and more productive by modifying the baseline. First, let’s introduce an enricher: a component that annotates tool results with arbitrary annotations. Dracon supports a deduplication enricher, which detects duplicate findings and silences them.
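Conceptually, deduplication boils down to hashing the fields that identify a finding and annotating any finding whose hash has been seen before. The sketch below is illustrative only, assuming a particular set of identity fields; it is not Dracon’s actual implementation:

```python
import hashlib

def finding_key(finding: dict) -> str:
    # Hash the fields that identify a finding; which fields count as
    # "identity" is an assumption here, not Dracon's actual choice.
    identity = (finding["tool_name"], finding["target"], finding["title"])
    return hashlib.sha256("|".join(identity).encode()).hexdigest()

def deduplicate(findings: list[dict]) -> list[dict]:
    seen = set()
    out = []
    for f in findings:
        key = finding_key(f)
        # Annotate duplicates instead of dropping them, so downstream
        # consumers can still tell that they occurred.
        out.append({**f, "duplicate": key in seen})
        seen.add(key)
    return out
```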

To enable it, in our kustomization.yaml we add the following:

 - https://github.com/ocurity/dracon//components/enrichers/deduplication/

Second, in order to bring the now-deduplicated results into the developers’ own workflow tooling, a Jira consumer can be introduced. For this, one more line needs to be added to our existing kustomization.yaml:

 - https://github.com/ocurity/dracon//components/consumers/jira/

To configure the Jira consumer, the following parameters are used to modify the pipelinerun.yaml:

- name: consumer-jira-url
  value: "<url of your jira instance>"
- name: consumer-jira-api-token
  value: "jira api token for the bot that dracon can use"
- name: consumer-jira-user
  value: "<bot account email address>"
- name: consumer-jira-config
  value: |
    {"defaultValues":{"project":"TEST","issueType":"Task","customFields":null},"addToDescription":["scan_start_time","tool_name","target","type","confidence_text","annotations"],"mappings":null}

(Documentation on the configuration of the Jira consumer can be found here.)

In short, the above configuration will open each unique finding as a Task in a project called TEST and dump the whole finding into the Task’s description field.
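To make addToDescription concrete, here is a rough Python sketch of how a consumer could assemble the Task description from a finding. The field names follow the config above, but the code itself is illustrative, not Dracon’s implementation:

```python
# Fields to copy into the Task description, mirroring "addToDescription"
ADD_TO_DESCRIPTION = ["scan_start_time", "tool_name", "target", "type",
                      "confidence_text", "annotations"]

def jira_description(finding: dict) -> str:
    # Dump each configured field of the finding into the description,
    # one "field: value" line per entry.
    lines = [f"{field}: {finding.get(field, 'n/a')}" for field in ADD_TO_DESCRIPTION]
    return "\n".join(lines)
```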

This way a developer gets only fresh, new information that is relevant to them, above the threshold the team cares about (no informational issues), factored directly into their work-estimation solution.

Satisfying the needs of security engineers

An application security team needs to have a solid overview of who runs which pipeline and guarantee that specific, security-related steps are not skipped. Their priority is to take action on results and keep the number of false positives as low as possible. Reporting is also important for them. For example, the application security team should know when a pipeline finished executing, how many issues have been identified per team or per project, and more.

The existing pipeline can be modified to add a second consumer, which will push results to an Elasticsearch instance.

kustomization.yaml

 - https://github.com/ocurity/dracon//components/consumers/elasticsearch/

From there it is possible to create a Kibana dashboard that shows results only for the relevant indexes and pipelines. Even though detailed security metrics are unique to each organisation and team, we have identified some common patterns among organisations. More insights on these will be provided in a future blogpost in this series.
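As a sketch of what such a dashboard could aggregate, an Elasticsearch query counting findings per tool might look like the following (the field name is an assumption about how findings are indexed):

```json
{
  "size": 0,
  "aggs": {
    "findings_per_tool": {
      "terms": { "field": "tool_name.keyword" }
    }
  }
}
```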

In order to notify the security team when pipelines have finished executing, we can send a private message on Slack. To do so, add a Slack consumer to kustomization.yaml:

 - https://github.com/ocurity/dracon//components/consumers/slack/

The Slack consumer needs to know the webhook URL to which it will push results. We can provide it with the following pipelinerun.yaml parameter:

 - name: consumer-slack-webhook
   value: "<webhook url>"
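Under the hood, an incoming-webhook consumer boils down to POSTing a small JSON payload to that URL. A minimal sketch of building such a payload (the message wording is an assumption, not Dracon’s actual format):

```python
import json

def slack_payload(pipeline: str, findings: int) -> str:
    # Slack incoming webhooks accept a JSON body with a "text" field;
    # the message wording here is illustrative only.
    return json.dumps(
        {"text": f"Dracon pipeline '{pipeline}' finished with {findings} findings"}
    )
```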

Satisfying the requirements of the CISO

CISOs and Security Directors must have a clear understanding of the maturity level of the security programme. To calculate this, they need metrics, such as the number of currently open and unpatched vulnerabilities with a clear indication of their severity. They also set policy requirements, especially on projects governed by strict regulations and compliance frameworks. Finally, they receive security alerts when critical vulnerabilities have been detected and coordinate response and remediation.

The baseline pipeline can be modified by adding a policy enricher. Then, a relevant OPA policy can be written that matches on specific tool output.

Dracon provides an open source policy enricher which allows annotating every result with a pass/fail annotation against a policy written in Rego.

Simple use case: a policy that flags Critical vulnerabilities, by allowing only findings of Low, Medium, or High severity, looks like the following:


package example.gosec

default allow := false

allow = true {
    print(input)
    check_severity
}

check_severity {
    input.severity == "SEVERITY_LOW"
}

check_severity {
    input.severity == "SEVERITY_HIGH"
}

check_severity {
    input.severity == "SEVERITY_MEDIUM"
}

To add this policy to the pipeline, first the existing kustomization.yaml file must be modified to add a policy enricher:

 - https://github.com/ocurity/dracon//components/enrichers/policy/

Then, the policy, base64-encoded, is supplied to the enricher by adding the following parameter to pipelinerun.yaml:

- name: enricher-policy-base64-policy
  value: "cGFja2FnZSBleGFtcGxlLmdvc2VjCgpkZWZhdWx0IGFsbG93IDo9IGZhbHNlCgphbGxvdyA9dHJ1ZSB7CiAgICBwcmludChpbnB1dCkKICAgIGNoZWNrX3NldmVyaXR5Cn0KCmNoZWNrX3NldmVyaXR5IHsKICAgIGlucHV0LnNldmVyaXR5ID09ICJTRVZFUklUWV9MT1ciCn0KCmNoZWNrX3NldmVyaXR5IHsKICAgIGlucHV0LnNldmVyaXR5ID09ICJTRVZFUklUWV9ISUdIIgp9CmNoZWNrX3NldmVyaXR5IHsKICAgIGlucHV0LnNldmVyaXR5ID09ICJTRVZFUklUWV9NRURJVU0iCn0="
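The base64 value can be regenerated from the policy text with a couple of lines of Python; a shortened version of the policy is used below for brevity, so the output will differ from the full value above:

```python
import base64

# A shortened version of the Rego policy; encode the full policy the same way.
policy = """package example.gosec

default allow := false

allow = true {
    check_severity
}

check_severity {
    input.severity == "SEVERITY_LOW"
}
"""

# Encode the policy so it can be passed as enricher-policy-base64-policy
encoded = base64.b64encode(policy.encode()).decode()

# Decoding recovers the original policy text
assert base64.b64decode(encoded).decode() == policy
```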

Another Kibana dashboard can be added to visualise the results, filtered according to the above policy.

What about the auditors?

In a nutshell, auditors need to be able to ensure that scanning has run, policies are adhered to, and relevant controls are applied.

To achieve this, it’s possible to expand the policy file with more checks (for example, ensuring that no Critical findings have been identified, or that specific rules of specific tools have not triggered), create another dashboard, and raise an alert when any of the policies fail.
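For instance, a hypothetical extra check for the example.gosec package above could flag findings produced by a specific gosec rule. Both the rule chosen and the assumption that the rule ID appears in the finding’s title are illustrative:

```rego
# Hypothetical check: flag any finding raised by gosec rule G404
# (insecure random number source). How the rule ID surfaces in the
# finding is an assumption for illustration.
banned_rule_triggered {
    contains(input.title, "G404")
}
```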

Putting it all together

Using one tool and one pipeline execution, it was possible to satisfy every level of security requirements within the example organisation. Every stakeholder in the security programme gets relevant, actionable results in the exact format they need. This way, the sample organisation can boost productivity, enhance deployment speed and automation, and, at the end of the day, make better use of its resources. The best part: since this is all generated by a single pipeline execution, no further automation is needed to reconcile data.

The final kustomization.yaml file is here.
And the final pipelinerun.yaml file is here.

That’s all folks! This journey started with a single baseline pipeline and ended with a toolchain that brings value to all the stakeholders in an application security programme, with metrics, alerts, integrations, and lots of dashboards. This is still a single pipeline, executed once, which can be triggered by pushes to a branch or by cronjobs. You can find the full pipeline in our community-pipelines repository.

Our Dracon enterprise offering greatly simplifies this process by allowing for no-code pipeline management and additional visualisations. It provides seamless integration with GitHub, and of course connectors for proprietary platforms such as Veracode, Snyk, etc. We offer Dracon both as a stand-alone and as a SaaS solution, so you don’t have to maintain your own clusters while still benefiting from our extensive Kubernetes expertise.

To get more information about Dracon and how you can use it in your organisation get in touch with us here.

Let’s talk!
