Complex Event Filtering with AWS EventBridge Pipes, Rules and No Custom Code.

James Matson · May 9 · Dev Community

Introducing AWS Lambda functions into your solution can be a guilty pleasure. Lambda functions are many things, but perhaps nothing more so than the Swiss Army knife of cloud development.

There’s almost nothing they can’t do, owing mostly to the fact that a Lambda is just an uber-connected vessel for your awesome code. One of the most common use cases for a Lambda function is as the glue between one or more AWS services. If you’ve worked in the serverless space, I’m telling you an all-too-familiar tale: you’ve set up a DynamoDB table to record some event data, and now you want to take that event data and send it to an API after performing some logic or manipulation on it. You might end up with a design that looks similar to the below:

[Diagram: a DynamoDB stream feeding a Lambda function that calls a downstream REST API]

A glorious serverless Lambda function creates the glue between the exported stream events and the downstream REST API. Magic! Or, maybe you’re the provider of the API, and you want to receive events from the outside world, perform all kinds of logic and business rules on the data, then submit different messages to different downstream services as a result. You might end up with:

[Diagram: an API receiving external events, with Lambda functions applying business logic and fanning out to downstream services]

Again, Lambda functions serve to provide the glue and business logic between upstream and downstream services. Using whatever language you’re most comfortable in, you can fill the Lambda with whatever code you need, giving you a virtually limitless variety of logic.

It’s awesome! I love Lambda.

Really, I do! If I saw Lambda walking on the opposite side of a busy street on a rainy day, I’d zig-zag through traffic to get to the other side, pick Lambda up right off the ground and hug it to pieces as the raindrops fell around us and the city seemed to freeze in place.

Then I’d place Lambda down, kiss its nose, and proceed to walk hand-in-hand with it through the rainy streets, talking about nothing and everything at the same time.

Wow. That got really romantic. I’m not crying, you’re crying!

[Photo: the author hugging AWS Lambda]

This is me, hugging AWS Lambda. If this doesn’t melt your heart, you’re dead. You’re literally dead.

But there are downsides that come with filling your environment with Lambda functions.

For one thing, Lambda functions are filled with code and libraries, so irrespective of whether you’re using C#, Python, Node.js or Go, you have to face the reality of security scans, package management, code maintenance and the overall technical debt that comes with maintaining a code base (even if that code base is distributed among lots of little Lambda functions).

Even if everything is well maintained, as runtimes age out of support from AWS, you’ll be faced with the task of updating functions to the next supported runtime which — depending on how big your environment is — can range from a minor annoyance to a major headache.

AWS, however, has made some great moves across its landscape to lessen today’s builders’ reliance on Lambda as the ‘glue’ between services. There’s no replacing Lambda for its flexibility and power to encapsulate business logic for your applications, but using Lambda functions simply as a way to stitch AWS native service A to AWS native service B is a pain and — let’s be honest — after the eleventy billionth time you’ve done it, not much fun either.

Now, as a software developer from way back in the olden times of Visual Basic and ASP Classic, I know that a dev’s first instinct is to solve the problem with code. I get it. Code is beautiful, and in the right hands it can solve all the problems. But the reality is that thinking in a ‘cloud native’ manner often means thinking about code less as a solution to all problems and more as a solution to the problems only code can solve.

Cloud services — AWS included — have come a long way toward letting you get a lot of work done by leveraging the built-in functionality of components like S3, EventBridge, CloudWatch, Step Functions and more. This means that when you do turn to application code, it’s for the right reasons, because it’s the right tool.

Less is More

To demonstrate how taking the ‘less Lambda/code is more’ approach can work, I’m going to take you through a real world use case, showing you one reasonable way of approaching the solution using AWS Lambda, and then an alternative that uses zero (that’s right, zero — what sorcery is this?) Lambda functions to achieve the same ends.

Alright, so what’s our real world use case? (Bearing in mind this is a very real world use case. As in, it happened — to me — in the real world, a place I'm rumoured to visit from time to time).

I’ve been tasked with building a service, as part of a larger platform, that will help provide stock on hand information to a retail website. In retail, accurate stock isn’t a nice-to-have, it’s essential. If you don’t show customers correct stock, you’re either losing sales by not reporting stock that is there, or frustrating customers by showing products as in stock that actually aren’t.

In order to provide useful information quickly to the website, we have a third-party search index product that holds our products as well as information about whether each product is in or out of stock. The stock on hand figures are going to be held in a DynamoDB table, and my job is to take the ever-changing stock on hand figures from that database, work out whether the change results in the product:

- moving from in stock to out of stock, or
- moving from out of stock to in stock,

and send the resulting information to the third-party search index via an API call. To keep things simple, we don’t actually need to send the stock figure itself (e.g. 5 units in stock) to the search index, we just need to send the product ‘sku’ (Stock Keeping Unit) and whether it’s in or out of stock.

Lambdas, Lambdas Everywhere

Let’s have a look at how we might approach this with our code-and-Lambda-first approach:

[Diagram: DynamoDB stream → Filtering Service Lambda → SQS queue → IndexService Lambda → third-party search index API]

Not too shabby. Putting aside the complexities of error handling, retries and what have you, we have a pretty simple, robust solution. Our stock table has DynamoDB Streams enabled and configured to send NEW_AND_OLD_IMAGES. This means that when a value changes, the table will export some JSON data that tells us what the old value was as well as what the new value is, which is what lets us determine whether the product is moving into or out of stock.

We then have a Lambda function set up to be triggered by the DynamoDB stream event. This is called the ‘Filtering Service’. Its job is to examine the data and determine whether it’s something we should be sending to our search index. Remember, we don’t care about movements of units up or down unless they result in the product moving into or out of stock. Here’s a good visual reference below:

[Diagram: which stock changes count as in-stock/out-of-stock transitions]

If our filtering service says ‘yup, this looks good — send it on’, it’s going to send that data to an SQS queue. Why not directly to the next Lambda? Well, directly invoking Lambda from Lambda is a bit frowned upon, and it doesn’t hurt to put a little decoupling between Lambda A and Lambda B.
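For a sense of scale, here’s roughly what that Filtering Service might boil down to (a minimal sketch, assuming the queue URL arrives via a QUEUE_URL environment variable):

```python
import json
import os

import boto3

sqs = boto3.client("sqs")

def handler(event, context):
    """Forward only stock movements that cross the zero boundary."""
    for record in event["Records"]:
        if record["eventName"] != "MODIFY":
            continue  # inserts and deletes aren't in/out-of-stock transitions
        old_soh = int(record["dynamodb"]["OldImage"]["soh"]["N"])
        new_soh = int(record["dynamodb"]["NewImage"]["soh"]["N"])
        if (old_soh == 0) != (new_soh == 0):  # crossed the zero boundary
            sqs.send_message(
                QueueUrl=os.environ["QUEUE_URL"],
                MessageBody=json.dumps({
                    "sku": record["dynamodb"]["Keys"]["sku"]["S"],
                    "in_stock": new_soh > 0,
                }),
            )
```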

The queue will have a trigger set up to invoke the IndexService Lambda. Its job will be to obtain the required credentials to call the third-party search index API, and send along a payload to that API that looks a bit like this:



```json
{
    "sku": "111837",
    "in_stock": false
}
```
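And the IndexService isn’t much bigger. A minimal sketch, assuming the endpoint and key come from hypothetical SEARCH_INDEX_URL and SEARCH_INDEX_KEY environment variables:

```python
import os
import urllib.request

def handler(event, context):
    """Relay each queued message to the third-party search index API."""
    for record in event["Records"]:  # SQS batch
        request = urllib.request.Request(
            os.environ["SEARCH_INDEX_URL"],
            data=record["body"].encode("utf-8"),  # already the JSON payload above
            headers={
                "Content-Type": "application/json",
                "x-api-key": os.environ["SEARCH_INDEX_KEY"],
            },
            method="POST",
        )
        urllib.request.urlopen(request)  # retries/backoff omitted in this sketch
```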

Nice! Easy and done. Except you’ve just introduced another 2 Lambda functions to your landscape. Functions that come with all the baggage that we talked about earlier.

So, is there another way? A — and here’s a new term I’ve coined just for you — Lambdaless way?

Of course.

This is The Way


So how are we going to tackle this problem without any custom code or Lambda functions? Well, let’s look at the design visually first, then we can walk through it (including — as always — an actual repository you can play around with).

[Diagram: DynamoDB stream → EventBridge Pipe → custom event bus → in-stock/out-of-stock rules → API destination, with a DLQ for failures]

Whoah. Hang on a second.

This looks more complicated than the other diagram, which, if I put my architect hat on, seems counterintuitive — right? You should seek to simplify, not complicate, an architecture?

Well yes, that’s absolutely right. But try to remember that in our other diagram there are 2 Lambda functions. Inside those functions is a whole bunch of code. That code includes branching logic and different commands/methods, none of which is actually shown on the diagram.

We need to replicate that logic somewhere, so we’re using native AWS services and components to do it. Hence, the diagram may look a little busier, but in reality it’s pretty elegant.

Because we don’t need any custom code packages, all of our architecture in the image will be delivered by way of a SAM (Serverless Application Model) template, a great infrastructure-as-code solution for AWS projects. You can see the full template in the repository here:

Repo
But we’ll be breaking it down piece by piece below.

First, let’s have a quick look at our DynamoDb table setup:

```yaml
Resources:
  DynamoDBStockOnHandTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: StockOnHand
      AttributeDefinitions:
        - AttributeName: sku
          AttributeType: S
      KeySchema:
        - AttributeName: sku
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 5
        WriteCapacityUnits: 5
      StreamSpecification:
        StreamViewType: NEW_AND_OLD_IMAGES
```

We’ve defined a simple table, with ‘sku’ as the hash (partition) key. We’ve set up a small amount of provisioned throughput (how much read and write ‘load’ our database can handle) and finally — most importantly — we’ve enabled Streams with the NEW_AND_OLD_IMAGES view type, which we discussed in our Lambda solution.

The idea is that when an upstream system inserts a new record with a SKU and a stock on hand figure, the data will be streamed out of the database to trigger downstream events.
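For example, an upstream system needs nothing more exotic than a put_item call to set a figure (a boto3 sketch matching the table definition above):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Writing (or overwriting) a stock figure is all it takes to emit a stream record
dynamodb.put_item(
    TableName="StockOnHand",
    Item={
        "sku": {"S": "111837"},
        "soh": {"N": "50"},
    },
)
```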

Our DynamoDB table and DynamoDB stream remain the same as in our Lambda-based solution, but after that it’s AWS EventBridge to the rescue, to pretty much take care of everything else we could possibly need.

EventBridge has become — in my humble opinion — the darling of event-driven serverless architecture in AWS. In our team, we reach for it consistently whenever we need an event-driven solution at scale, with decoupling and fine control built in from the start.

So we’re sending our DynamoDB stream to an EventBridge Pipe.

EventBridge Pipes are a great way to take data from a range of AWS sources, then filter, enrich and transform it on its way to a downstream target, all without any custom code.

[Diagram: EventBridge Pipes — source, filter, enrich, target]

In our case though, we’re just using the Pipe itself as a way to get our DynamoDB stream events from DynamoDB into EventBridge, because at the time of writing at least, there’s no way to directly target an EventBridge bus with a DynamoDB stream. Some AWS services, like Lambda or API Gateway, let you integrate directly with an EventBridge bus, but DynamoDB Streams isn’t one of them.

Using a Pipe, however, gives us the ability to get where we need to go. Let’s have a look at the components we’ve set up to allow our ‘stream to EventBridge’ connection:



```yaml
  StockOnHandEventBus:
    Type: AWS::Events::EventBus
    Properties:
      Name: StockEventBus

  Pipe:
    Type: AWS::Pipes::Pipe
    Properties:
      Name: ddb-to-eventbridge
      Description: "Pipe to connect DDB stream to EventBridge event bus"
      RoleArn: !GetAtt PipeRole.Arn
      Source: !GetAtt DynamoDBStockOnHandTable.StreamArn
      SourceParameters:
        DynamoDBStreamParameters:
          StartingPosition: LATEST
          BatchSize: 10
          DeadLetterConfig:
            Arn: !GetAtt PipeDLQueue.Arn
      Target: !GetAtt StockOnHandEventBus.Arn
      TargetParameters:
        EventBridgeEventBusParameters:
          DetailType: "StockEvent"
          Source: "soh.event"

  PipeRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - pipes.amazonaws.com
            Action:
              - sts:AssumeRole
      Policies:
        - PolicyName: SourcePolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - "dynamodb:DescribeStream"
                  - "dynamodb:GetRecords"
                  - "dynamodb:GetShardIterator"
                  - "dynamodb:ListStreams"
                  - "sqs:SendMessage"
                Resource: 
                  - !GetAtt DynamoDBStockOnHandTable.StreamArn
                  - !GetAtt PipeDLQueue.Arn
        - PolicyName: TargetPolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - 'events:PutEvents'
                Resource: !GetAtt StockOnHandEventBus.Arn

  PipeDLQueue: 
    Type: AWS::SQS::Queue   
    Properties: 
      QueueName: DLQ-StockEvents


  PipeDLQPolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      Queues:
        - !Ref PipeDLQueue
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: "events.amazonaws.com"
            Action: "sqs:SendMessage"
            Resource: !GetAtt PipeDLQueue.Arn
            Condition:
              ArnEquals:
                "aws:SourceArn": !Sub "arn:aws:events:${AWS::Region}:${AWS::AccountId}:rule/StockEventBus/*"


```

There’s quite a bit going on here. First, we’ve created our custom EventBridge bus. This is just a way to separate our particular set of events from the other events that might come into the default EventBridge bus, so we don’t need to tell them apart. It’s our own private channel for our stock on hand service.

Next we’re defining our Pipe. The source for the Pipe’s events is our DynamoDB stream and the target is our EventBridge bus. We’re sending our event from the Pipe to EventBridge with the following parameters:

- DetailType: “StockEvent”
- Source: “soh.event”

The detail type and source are critical to allow EventBridge to properly filter and route the message where it needs to go next.

You can see we’re also referencing an IAM (Identity and Access Management) role. The role specifies that the AWS service ‘pipes.amazonaws.com’ can assume it, and its policies allow the role to read the DynamoDB stream and target the EventBridge bus, as well as send any failures (messages that for some reason don’t make it to EventBridge) to our SQS DLQ (Dead Letter Queue).

So now we’re getting our DynamoDB events streamed out of the database and into EventBridge via our Pipe. What does the event look like? Let’s take a look:



```json
{
    "version": "0",
    "id": "REDACTED-ID",
    "detail-type": "StockEvent",
    "source": "soh.event",
    "account": "REDACTED-ACCOUNT",
    "time": "2024-05-02T07:04:50Z",
    "region": "ap-southeast-2",
    "resources": [],
    "detail": {
        "eventID": "REDACTED-EVENT-ID",
        "eventName": "MODIFY",
        "eventVersion": "1.1",
        "eventSource": "aws:dynamodb",
        "awsRegion": "ap-southeast-2",
        "dynamodb": {
            "ApproximateCreationDateTime": 1714633489,
            "Keys": {
                "sku": {
                    "S": "111837"
                }
            },
            "NewImage": {
                "sku": {
                    "S": "111837"
                },
                "soh": {
                    "N": "4"
                }
            },
            "OldImage": {
                "sku": {
                    "S": "111837"
                },
                "soh": {
                    "N": "11"
                }
            },
            "SequenceNumber": "REDACTED-SEQUENCE-NUMBER",
            "SizeBytes": 37,
            "StreamViewType": "NEW_AND_OLD_IMAGES"
        },
        "eventSourceARN": "REDACTED-ARN"
    }
}


```

As you can see, we’ve got a standard DynamoDB stream event here, but pay particular attention to two areas that will be important further on. Firstly, the detail type and source: those will be the same for every message, and will help with routing/filtering by rules.

Then we have our old and new SOH figures in the OldImage and NewImage sections respectively. If you look at this specific example, you can see that, based on our requirements, this message shouldn’t get sent to our third-party search index, because it’s not a move from out of stock to in stock or vice versa.

So with our event in EventBridge, what’s next? That’s where EventBridge rules come in. A rule is a set of conditions tied to an event bus that tells EventBridge what data an event must contain to trigger the rule, along with one or more targets to send the data to when it matches.
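A handy consequence of routing everything through a bus: you can exercise the rules without touching DynamoDB at all, by putting a hand-crafted, stream-shaped event straight onto the bus. A boto3 sketch (be aware a matching event really will invoke the rule’s target):

```python
import json

import boto3

events = boto3.client("events")

# A hand-crafted event shaped like the Pipe's output; matching rules will fire
events.put_events(
    Entries=[{
        "EventBusName": "StockEventBus",
        "Source": "soh.event",
        "DetailType": "StockEvent",
        "Detail": json.dumps({
            "eventSource": "aws:dynamodb",
            "eventName": "MODIFY",
            "dynamodb": {
                "Keys": {"sku": {"S": "111837"}},
                "NewImage": {"sku": {"S": "111837"}, "soh": {"N": "11"}},
                "OldImage": {"sku": {"S": "111837"}, "soh": {"N": "0"}},
            },
        }),
    }]
)
```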

Unsurprisingly, we have two rules set up in our SAM template: an ‘in stock’ rule and an ‘out of stock’ rule. Let’s take a look at our in stock rule carefully, because a lot of the magic that lets us replace Lambda code is contained in these rules.



```yaml
  InStockRule:
    Type: AWS::Events::Rule
    Properties:
      Name: InStockRule
      EventBusName: !Ref StockOnHandEventBus
      EventPattern:
        source:
          - "soh.event"
        "detail-type":
          - "StockEvent"
        detail:
          eventSource:
            - "aws:dynamodb"
          eventName:
            - "MODIFY"
          dynamodb:
            NewImage:
              soh:
                N:
                  - "anything-but": "0"
            OldImage:
              soh:
                N:
                  - "0"
      State: ENABLED
      Targets:
        - Arn: !GetAtt EventApiDestination.Arn
          RoleArn: !GetAtt EventBridgeTargetRole.Arn
          Id: "StockOnHandApi"
          DeadLetterConfig:
            Arn: !GetAtt PipeDLQueue.Arn
          InputTransformer:
            InputPathsMap:
              sku: "$.detail.dynamodb.NewImage.sku.S"
            InputTemplate: |
              {
                "sku": <sku>,
                "in_stock": true
              }


```

Our event rule uses an EventPattern to determine when to trigger. If you look at our pattern, you can see it closely matches the structure of our DynamoDB stream event. The rule looks for the detail type and source of our event, and then interrogates the detail of the event. But here things get a little interesting. Rather than just looking for constant values, we have some logic in our rule:



```yaml
NewImage:
  soh:
    N:
      - "anything-but": "0"
OldImage:
  soh:
    N:
      - "0"


```

Using ‘content filtering’ in the rule, we’re able to express that we only want to trigger the rule when the event has a soh value in the NewImage section (the new figure) that is anything but 0, and an OldImage (the past figure) soh value that is exactly 0.

That’s how we ensure that the rule is triggered when something is ‘in stock’.
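If you want to sanity-check a pattern like this before deploying anything, EventBridge will evaluate a pattern against a sample event for you via TestEventPattern. A boto3 sketch (the envelope fields are dummy values the API requires):

```python
import json

import boto3

events = boto3.client("events")

pattern = {
    "source": ["soh.event"],
    "detail-type": ["StockEvent"],
    "detail": {
        "eventName": ["MODIFY"],
        "dynamodb": {
            "NewImage": {"soh": {"N": [{"anything-but": "0"}]}},
            "OldImage": {"soh": {"N": ["0"]}},
        },
    },
}

event = {  # a trimmed-down version of the stream event shown earlier
    "id": "1", "account": "123456789012", "time": "2024-05-02T07:04:50Z",
    "region": "ap-southeast-2", "resources": [],
    "source": "soh.event", "detail-type": "StockEvent",
    "detail": {
        "eventName": "MODIFY",
        "dynamodb": {
            "NewImage": {"soh": {"N": "11"}},
            "OldImage": {"soh": {"N": "0"}},
        },
    },
}

result = events.test_event_pattern(
    EventPattern=json.dumps(pattern), Event=json.dumps(event)
)
print(result["Result"])  # True; change OldImage's soh to "5" and it's False
```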

We then define an EventApiDestination as the rule’s target. That’s basically telling the rule that our target will be an API that exists outside of AWS (more on that later), along with the same DLQ (Dead Letter Queue) we mentioned before, for any failures/rejections from that API.

Great! But we have a problem. If you remember, our third-party API expects the data in this format:



```json
{
    "sku": "111837",
    "in_stock": false
}
```

But the DynamoDB stream data looks nothing like this. If we were using custom code, the transformation would be trivial, but what do we do without that option? Transformation to the rescue! EventBridge rules allow you to manipulate and reshape data before sending it to the target.



```yaml
InputTransformer:
  InputPathsMap:
    sku: "$.detail.dynamodb.NewImage.sku.S"
  InputTemplate: |
    {
      "sku": <sku>,
      "in_stock": true
    }


```

This final part of the rule definition is essentially saying: pick out the value at the JSON path detail.dynamodb.NewImage.sku.S, assign it to the variable ‘sku’, then create a new JSON object that uses that variable and provides an in_stock value of true.
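EventBridge does this substitution for us at runtime, but the pick-and-template step is equivalent to this bit of purely illustrative Python:

```python
import json

def transform(event: dict) -> str:
    # InputPathsMap: pick the sku out of the stream event...
    sku = event["detail"]["dynamodb"]["NewImage"]["sku"]["S"]
    # ...InputTemplate: ...and splice it into the shape the API expects
    return json.dumps({"sku": sku, "in_stock": True})
```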

Huzzah! We’re getting so close now.

We won’t go through the out of stock rule in detail because it’s essentially the same rule with the same target, only our content filter is the exact inverse, and our transformation creates a JSON object with in_stock: false.

So let’s recap. Our database has streamed our event out, our Pipe has moved the event to our EventBridge bus, and our rules have ensured that a) we only get the events we want and b) those events are shaped as we require.

Now we just need to send the event to the third-party API, and that’s where our rule’s ‘target’ comes in. A target can be a variety of AWS services (Lambda, SQS, Step Functions etc.) but it can also be a standard API destination, even one that sits outside of AWS. To define that in our template, we use the following:



```yaml
  EventApiConnection:
    Type: AWS::Events::Connection
    Properties:
      Name: StockOnHandApiConnection
      AuthorizationType: API_KEY
      AuthParameters:
        ApiKeyAuthParameters:
          ApiKeyName: "x-api-key"
          ApiKeyValue: "xxx"
      Description: "Connection to API Gateway"

  EventApiDestination:
    Type: AWS::Events::ApiDestination
    Properties:
      Name: StockOnHandApiDestination
      InvocationRateLimitPerSecond: 10
      HttpMethod: POST
      ConnectionArn: !GetAtt EventApiConnection.Arn
      InvocationEndpoint: !Ref ApiDestination


```

We define a connection, which holds the authentication mechanism for our API (though in our case we’re just passing some dummy values, as we’re using a special ‘mock’ API Gateway our team uses for integration tests), and then the API destination itself, which is where we describe the request as a POST to a specific URL and set a rate limit to ensure we don’t flood the API.
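As an aside, if you’d rather experiment with these two resources outside the template, the same pair can be created ad hoc with boto3 (a sketch; the endpoint is a placeholder):

```python
import boto3

events = boto3.client("events")

# The connection holds the (dummy) API key credentials
connection = events.create_connection(
    Name="StockOnHandApiConnection",
    AuthorizationType="API_KEY",
    AuthParameters={
        "ApiKeyAuthParameters": {"ApiKeyName": "x-api-key", "ApiKeyValue": "xxx"}
    },
)

# The destination ties the connection to a URL, method and rate limit
events.create_api_destination(
    Name="StockOnHandApiDestination",
    ConnectionArn=connection["ConnectionArn"],
    InvocationEndpoint="https://example.com/stock",  # placeholder endpoint
    HttpMethod="POST",
    InvocationRateLimitPerSecond=10,
)
```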

And that’s it! We are done. A complete solution without any custom code. So how about we deploy it and see if it works?

Our Solution in Action

Because we’ve opted to use SAM to express our IaC (Infrastructure-as-Code) in a template, we get access to all the wonders and magic of the SAM CLI. This includes ‘sam sync’. This command not only lets us deploy our template to AWS, but when combined with the ‘--watch’ parameter, any changes we make to our template locally are automatically synced to the cloud without us even needing to think about deploying.

Awesome, let’s give it a shot.

```
PS C:\Users\JMatson\source\repos\SohWithEventBridge\SohWithEventBridge> sam sync --watch --stack-name SohWithEventBridge --template serverless.yaml --parameter-overrides ApiDestination=https://get-your-own.com
```

You’ll notice I’m passing in the URL of the ‘third party API’ when deploying the template. This is because the endpoint parameter in the template has no useful default value, so if you decide to grab the repo and have a go yourself, you’ll need to supply an API that can accept the POST request.

By passing in a parameter override, we’re populating the below parameter:



```yaml
Parameters:
  ApiDestination:
    Type: String
    Default: "<Your API here>"


```

After a few minutes, our entire solution should be deployed to AWS.

[Screenshot: the deployed stack]

A quick spot check of our resources:

DynamoDB table with stream enabled? Check

[Screenshot: DynamoDB table with streams enabled]

Our Pipe set up with the right source and target? Check

[Screenshot: the Pipe’s source and target]

Our Event bus and rules?

[Screenshot: the custom event bus and its rules]

Let’s check one of the rules to make sure it has the right filtering, target and transformation set up:

[Screenshots: the rule’s event pattern, target and input transformer]

Looking good, so it’s time to test this thing out.

Because I’m a nice person and I’m all about developer happiness these days, I’ve included a nifty little Python script in the repository that we’re going to use to do some tests.

You can grab it from:

test script

It’s not elegant, but it’ll do. Its job will be to simulate activity by inserting a few items into our stock on hand table with stock figures, then updating them — possibly more than once — to validate different scenarios (in and out of stock, as well as neither).

The updates are:



```python
# Insert items
put_ddb_item('188273', 0)
put_ddb_item('723663', 20)
put_ddb_item('111837', 50)

# Update items
update_ddb_item('188273', 5)
update_ddb_item('723663', 15)
update_ddb_item('111837', 10)

# Additional update
update_ddb_item('111837', 0)
update_ddb_item('111837', 11)
update_ddb_item('111837', 4)


```

Let’s run it, and check our results:



```
(.venv) PS C:\Users\JMatson\source\repos\SohWithEventBridge\SohWithEventBridge\TestScripts> py test_script.py
INFO:botocore.credentials:Found credentials in shared credentials file: ~/.aws/credentials
INFO:root:Put item 188273 into DynamoDB table
INFO:root:Consumed capacity: {'TableName': 'StockOnHand', 'CapacityUnits': 1.0}
INFO:root:Put item 723663 into DynamoDB table
INFO:root:Consumed capacity: {'TableName': 'StockOnHand', 'CapacityUnits': 1.0}
INFO:root:Put item 111837 into DynamoDB table
INFO:root:Consumed capacity: {'TableName': 'StockOnHand', 'CapacityUnits': 1.0}
INFO:root:Update item 188273 in DynamoDB table
INFO:root:Consumed capacity: {'TableName': 'StockOnHand', 'CapacityUnits': 1.0}
INFO:root:Update item 723663 in DynamoDB table
INFO:root:Consumed capacity: {'TableName': 'StockOnHand', 'CapacityUnits': 1.0}
INFO:root:Update item 111837 in DynamoDB table
INFO:root:Consumed capacity: {'TableName': 'StockOnHand', 'CapacityUnits': 1.0}
INFO:root:Update item 111837 in DynamoDB table
INFO:root:Consumed capacity: {'TableName': 'StockOnHand', 'CapacityUnits': 1.0}
INFO:root:Update item 111837 in DynamoDB table
INFO:root:Consumed capacity: {'TableName': 'StockOnHand', 'CapacityUnits': 1.0}
INFO:root:Update item 111837 in DynamoDB table
INFO:root:Consumed capacity: {'TableName': 'StockOnHand', 'CapacityUnits': 1.0}
(.venv) PS C:\Users\JMatson\source\repos\SohWithEventBridge\SohWithEventBridge\TestScripts> 


```

Okay, so the script has run, inserted some products and updated them. Now, if all is working and we focus on product SKU 111837, we should expect the following:



```python
# Insert items
put_ddb_item('111837', 50)    # won't count: it's an INSERT and we filter for MODIFY only

# Update items
update_ddb_item('111837', 10) # 50 to 10 shouldn't filter through

# Additional updates
update_ddb_item('111837', 0)  # 10 to 0: we should see this
update_ddb_item('111837', 11) # 0 to 11: we should see this
update_ddb_item('111837', 4)  # 11 to 4 shouldn't filter through
```

So we should get 2 of the 5 events through.

Now, you can check the monitoring at various points along the event-driven journey, looking at invocations of our Pipe or rules, but we’re going straight to our final target — our API — to check its logs using a simple CloudWatch Logs Insights query.

With all the transformation and filtering we’ve done previously, we should only see a nice, neat JSON payload that tells us something is in stock, or out of stock:



**CloudWatch Logs Insights**

```
region: ap-southeast-2
log-group-names: API-Gateway-Execution-Logs_t23ilm51j6/dev
start-time: -300s
end-time: 0s
query-string:
fields @timestamp, @message, @logStream, @log
| sort @timestamp desc
| filter @message like '111837'
```



| @timestamp | @message | @logStream | @log |
| --- | --- | --- | --- |
| 2024-05-02 11:57:27.388 | { "sku": "111837", "in_stock": true } | 26267e5fba9c96a4989c9b712553f791 | 712510509017:API-Gateway-Execution-Logs_t23ilm51j6/dev |
| 2024-05-02 11:57:25.908 | { "sku": "111837", "in_stock": false } | 527de3f583aa2547d2819f2328657427 | 712510509017:API-Gateway-Execution-Logs_t23ilm51j6/dev |



Huzzah! Success. SKU 111837 posted both an in stock and an out of stock event to our third-party API, concluding the journey of our Lambdaless and codeless event-driven solution.

Not too shabby, eh? Well, if you’re not impressed, that’s okay; I’m impressed enough for the both of us. While I’m a software engineer at heart and I’ll always enjoy the act of writing code, there’s no denying the power and flexibility that come from combining native services through configuration alone to deliver on real-world use cases.

What are your thoughts about leaning into native services and cutting back on the use of code itself to solve problems? Let me know in the comments and — as always — if you enjoyed the article feel free to leave a like, claps or whatever you feel is appropriate.

As a reminder — you can grab the complete repo for this guide below:

https://github.com/kknd4eva/SohWithEventBridge
