Building a Serverless, SQL-Compatible Cloud Security Findings Tool with Vue, Postgres, Lambda & API Gateway

Building a front and back end to add greater visibility and insight into your multi-account cloud environment

The Problem

AWS has many different tools that audit security findings within your environment (SecurityHub, GuardDuty, Inspector, Macie, etc.), and even more open-source tools, such as Prowler, do the same thing. However, SecurityHub's interface does not scale well: querying data is a pain, with clunky dropdowns and limited search features, and it quickly becomes evident that you need another solution.

The solution

A simple backend database that has these security findings streamed directly to it, where they can be queried with SQL. This makes it easy for anyone with a little SQL background to query their findings.

The benefit of this approach over the few AWS-offered solutions (including the CID) is cost: with incremental development and a custom approach, you essentially pay strictly for compute and storage, both of which are extremely budget-friendly service types within AWS.

The technology

  • S3 Static Website to host the front end

  • API Gateway as the REST endpoint intermediary

  • Lambda as the intermediary between API Gateway and the database

  • Postgres RDS DB as the backend

An example

The following query would show you, in table form, a list of accounts and their corresponding public S3 buckets.

select account_name, finding, resource from findings where finding = 'S3 Buckets Public';
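Under the hood, the same query would be issued by the Lambda layer through a DB driver. A minimal sketch, assuming psycopg2-style %s placeholders and a cursor supplied by the caller (the function name here is hypothetical):

```python
# Hypothetical helper: run the example query with the finding name bound as a
# parameter (%s), so the value is escaped by the driver rather than pasted
# into the SQL string.
FINDINGS_QUERY = (
    "select account_name, finding, resource from findings where finding = %s"
)

def query_findings(cursor, finding_name):
    cursor.execute(FINDINGS_QUERY, (finding_name,))
    return cursor.fetchall()

# Usage with a psycopg2 cursor:
#   rows = query_findings(conn.cursor(), "S3 Buckets Public")
```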

Data Sources

The source of the data is largely irrelevant, although examples will be shown here. Findings from AWS tools like SecurityHub, GuardDuty, and Macie will be streamed directly via an EventBridge -> Lambda connection. Audit tools such as Steampipe and Prowler can be configured via scripts to send output through API Gateway; here we will use Steampipe only.

High-Level Architecture

  • This architecture runs in a centralized security account. It can either run in your delegated administrator account or an account that has access to the delegated admin account via a cross-account role.

  • The topmost EC2 instance represents the compute that performs active scanning and auditing of accounts. Inspector does the same thing, while GuardDuty and SecurityHub findings are automatically streamed to the central account.

  • Upon receiving their corresponding findings, each source invokes a separate Lambda function, either via EventBridge or API Gateway.

  • The Lambda functions use the psycopg2 driver to connect to the DB with credentials stored in Secrets Manager, then insert the finding, alongside various metadata, into the Findings table.

  • The VueJS front-end app, running as a static website on S3, makes REST calls to API Gateway to execute DB calls, using Lambda as the intermediary for query sanitization.
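The Secrets Manager credential step mentioned above can be sketched as follows; the secret name and its JSON field names are assumptions, not values from this project:

```python
import json

def db_connect_kwargs(secrets_client, secret_id="findings-db-creds"):
    """Fetch DB credentials from Secrets Manager and shape them for psycopg2.connect().

    secret_id and the field names below are hypothetical; use whatever your
    secret actually stores.
    """
    secret = json.loads(
        secrets_client.get_secret_value(SecretId=secret_id)["SecretString"]
    )
    return {
        "host": secret["host"],
        "dbname": secret["dbname"],
        "user": secret["username"],
        "password": secret["password"],
    }

# Inside the Lambda handler (psycopg2 shipped as a layer or in the package):
#   import boto3, psycopg2
#   conn = psycopg2.connect(**db_connect_kwargs(boto3.client("secretsmanager")))
```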

The Front End
The front end is written in VueJS with some vanilla JS functions thrown in. It is a Single Page Application (SPA), so the code is relatively simple, comprising a few components: the account dropdowns, the search bar, and the output tables. A similar UI could be written in the language of your choice. The full code and deployment instructions can be viewed here:

The setup script builds the Vue project and uploads it to an S3 bucket, where static website hosting must already be configured.

The Back End

As per the above diagram, the back end consists of two distinct APIs (each could be split into multiple per CRUD operation) that facilitate 1) sending data to your DB and 2) retrieving data from your DB. Let's talk about the origin of the data.

AWS security services support streaming events directly to Lambda, making it easy to collect findings in the DB through the use of EventBridge + Lambda. The flow would look like:

SecurityHub/GuardDuty/Macie generates finding -> triggers EventBridge rule -> triggers Lambda function -> sends to DB.

For example, a SecurityHub EventBridge rule looks like:

{
  "source": ["aws.securityhub"],
  "detail-type": [
    "Security Hub Findings - Imported"
  ],
  "detail": {
    "findings": {
      <attribute filter values>
    }
  }
}

Set the rule's target to trigger a Lambda function with code to send the data to your corresponding backend.
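That target Lambda only needs to map the Security Hub event payload (ASFF) onto the fields the DB stores. A minimal sketch, with the DB-insert call itself elided:

```python
# EventBridge delivers Security Hub findings under event["detail"]["findings"]
# as a list of ASFF objects; pull out the fields the findings table stores.
def extract_findings(event):
    rows = []
    for f in event["detail"]["findings"]:
        rows.append({
            "Finding": f["Title"],
            "Resource": f["Resources"][0]["Id"],
            "AccountId": f["AwsAccountId"],
        })
    return rows

def lambda_handler(event, context):
    for row in extract_findings(event):
        pass  # insert each row into the findings table here
    return {"statusCode": 200}
```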

For the custom audit data, this can come from whichever tool you use, whether open-source tools such as Prowler or Steampipe, or an in-house proprietary tool that uses the AWS SDK. The language doesn't matter; all that matters is that findings are generated and sent through API Gateway to a Lambda function, which passes them on to the DB. The flow looks like:

Python script runs Steampipe -> completion triggers a helper script -> helper parses the output JSON file for relevant fields -> POSTs to API Gateway -> API Gateway passes the payload through to a Lambda function -> the function sends it to the DB.

For example, some Python pseudo-code:

import json

import boto3
import requests

# init vars
accountIds = ["123412341234", "etc"]
apiUrl = ""

# get all public buckets per account
for account in accountIds:
    # assume_role_session() is a stand-in for your cross-account role logic
    session = assume_role_session(account)
    s3 = session.client("s3")
    publicBuckets = []
    # find all public buckets
    response = s3.list_buckets()
    for bucket in response["Buckets"]:
        try:
            status = s3.get_bucket_policy_status(Bucket=bucket["Name"])
            if status["PolicyStatus"]["IsPublic"]:
                publicBuckets.append(bucket["Name"])
        except Exception:
            pass  # no bucket policy at all, so not public via policy

    # POST each finding through API Gateway
    for pbucket in publicBuckets:
        finding = {
            "Finding": "S3 Public Bucket",
            "Resource": pbucket,
            "AccountId": account,
        }
        requests.post(apiUrl, data=json.dumps(finding))

The above example code sends your findings to your API, which passes them through to your Lambda function, whose event handler takes the finding as input and updates your backend DB.
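On the Lambda side, that event handler might be sketched as follows; the column names and the cursor argument are assumptions (with API Gateway's proxy integration, the POSTed JSON arrives in event["body"]):

```python
import json

# Parameterized insert: the POSTed values are bound as %s parameters, never
# concatenated into the SQL string.
INSERT_SQL = (
    "insert into findings (account_id, finding, resource) values (%s, %s, %s)"
)

def handle_finding(event, cursor):
    finding = json.loads(event["body"])
    cursor.execute(
        INSERT_SQL,
        (finding["AccountId"], finding["Finding"], finding["Resource"]),
    )
    return {"statusCode": 200, "body": json.dumps({"inserted": 1})}
```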

The Complete Picture

Now the entire application takes form: you have the S3 static front end and the API Gateway & Lambda middleware, which connect to the backend DB. It is incredibly important to do SQL sanitization at the Lambda layer, as well as throttling at the API Gateway level, shown here:

The DB originally used was a Postgres RDS db.m7g instance with 64GB of storage, with recurring helper Lambda functions running periodically that would purge data by offloading rows older than a certain date to S3.
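One way to sketch such a purge function; the table, column, and key names are assumptions, and the real retention query may differ:

```python
import csv
import io
from datetime import datetime, timedelta, timezone

def purge_old_findings(cursor, s3_client, bucket, retention_days=90):
    """Offload findings older than the retention window to S3, then delete them."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=retention_days)
    cursor.execute(
        "select account_name, finding, resource from findings where created_at < %s",
        (cutoff,),
    )
    rows = cursor.fetchall()
    if rows:
        # archive the old rows as CSV before removing them from the table
        buf = io.StringIO()
        csv.writer(buf).writerows(rows)
        s3_client.put_object(
            Bucket=bucket,
            Key=f"archive/findings-before-{cutoff.date()}.csv",
            Body=buf.getvalue(),
        )
        cursor.execute("delete from findings where created_at < %s", (cutoff,))
    return len(rows)
```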

The front-end authentication was a simple API password, not stored client side, validated at the Lambda level against a value encrypted in Secrets Manager.
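That check can be sketched as follows, assuming the password lives in a Secrets Manager secret (the secret name here is hypothetical); hmac.compare_digest avoids leaking information through comparison timing:

```python
import hmac

def password_ok(supplied, secrets_client, secret_id="frontend-api-password"):
    """Validate the API password sent by the front end against Secrets Manager."""
    stored = secrets_client.get_secret_value(SecretId=secret_id)["SecretString"]
    # constant-time comparison, rather than `==`, to avoid timing leaks
    return hmac.compare_digest(supplied, stored)

# Usage inside the query Lambda, before running any SQL:
#   if not password_ok(body["password"], boto3.client("secretsmanager")):
#       return {"statusCode": 401}
```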