Inbound EDI using Terraform


The goal of this tutorial is to deploy a simple serverless system that ingests an EDI file and outputs transformed JSON. It sounds simple, but at Stedi we like to say that reality has a surprising amount of detail. Luckily, we have Stedi to do the heavy lifting of translating and transforming the EDI into JSON in a shape of our choosing. For a deeper overview of Stedi, visit Introduction to Stedi.

What we will build

In this tutorial, we provide a reference implementation that works on the AWS Cloud. While this guide includes AWS-specific components, you can apply the same concepts and reuse parts of the code for other (cloud) implementations. We are working on additional developer guides for other cloud platforms.

At a high level, we need to deploy components on AWS which do the following:

  1. Receive an EDI file as input data. In our case, an EDI 850 purchase order from our trading partner lands in an S3 bucket.
  2. Translate the 850 EDI to JSON EDI (JEDI). This makes the data easier for both humans and machines to inspect and process.
  3. Map that JEDI to a specific JSON output schema, which allows us to model the output however we want.
  4. Write that JSON back to S3. From here, the file can be retrieved by another process.

Breakdown of Stedi services we use

Stedi has two services that we will explore for this workload: EDI Core, which translates EDI files to JSON documents (JEDI), and Mappings, which remaps the keys and values of JSON documents.
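As a sketch of how these two services are called over HTTP, the snippet below builds the request for each. The endpoint URLs, API version dates, payload field names, and the `Key` authorization scheme are assumptions based on Stedi's public API documentation at the time of writing; verify them against the current docs before relying on them.

```javascript
// Sketch of the two Stedi requests the Lambda will make.
// Endpoint paths and payload shapes are assumptions -- check Stedi's docs.

// Build the EDI Core /translate request: raw X12 in, JEDI out.
function buildTranslateRequest(ediString) {
  return {
    url: "https://edi-core.stedi.com/2021-06-05/translate", // assumed endpoint
    body: {
      input: ediString,
      input_format: "edi",
      output_format: "jedi@2.0",
    },
  };
}

// Build the Mappings request: JEDI in, our target JSON shape out.
function buildMapRequest(mappingId, jediDocument) {
  return {
    url: `https://mappings.stedi.com/2021-06-01/mappings/${mappingId}/map`, // assumed endpoint
    body: jediDocument,
  };
}

// Both calls carry the same auth header.
function stediHeaders(apiKey) {
  return { Authorization: `Key ${apiKey}` };
}

// Chain the two calls: translate first, then map the translated output.
async function transform(ediString, mappingId, apiKey) {
  const axios = require("axios"); // required lazily so the builders above stay dependency-free
  const t = buildTranslateRequest(ediString);
  const translated = await axios.post(t.url, t.body, { headers: stediHeaders(apiKey) });
  const m = buildMapRequest(mappingId, translated.data.output);
  const mapped = await axios.post(m.url, m.body, { headers: stediHeaders(apiKey) });
  return mapped.data;
}
```

Splitting request construction from request execution keeps the payload logic trivially testable without network access.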

To execute these requests, we will use a serverless Lambda function that receives the EDI and makes those two calls. When an EDI file appears in the S3 bucket, it triggers the Lambda, which fetches the file, transforms it, and places the result under another S3 prefix.


The workflow step by step

Below is a step-by-step overview of how the workflow executes once a file is received on S3.

We have several services to create in AWS; these are listed below.


This is a quick breakdown - all of these components are defined as infrastructure as code in the project's GitHub repository.
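To give a feel for what that infrastructure-as-code looks like, here is a rough Terraform sketch of the S3-to-Lambda trigger wiring. The resource names (`aws_s3_bucket.edi`, `aws_lambda_function.transform`) are hypothetical; the repository's own .tf files are the source of truth.

```terraform
# Hypothetical sketch of the S3 -> Lambda trigger; see the repository
# for the real definitions.
resource "aws_s3_bucket_notification" "edi_inbound" {
  bucket = aws_s3_bucket.edi.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.transform.arn
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "inbound/"
    filter_suffix       = ".edi"
  }
}

# S3 must be explicitly allowed to invoke the function.
resource "aws_lambda_permission" "allow_s3" {
  statement_id  = "AllowExecutionFromS3"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.transform.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.edi.arn
}
```

The prefix and suffix filters ensure the Lambda only fires for EDI files under `inbound/`, so writing the JSON output to `orders/` cannot retrigger the function.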

Deploying the application


For this tutorial you will need:

Setup infrastructure using Terraform

We are using Terraform to deploy the solution on AWS. Terraform is an Infrastructure-as-Code (IaC) tool that allows you to manage infrastructure with configuration files rather than a GUI.

While some IaC tools are specific to a single cloud provider, Terraform is cloud-provider agnostic and, with minor changes, can be modified to deploy to Azure, Google Cloud, Alibaba Cloud, Heroku, and Oracle Cloud.

We are starting with a skeleton repository that has the basics configured. Download or clone it from:

There is additional information in the README that provides a more detailed overview of the source code.

On your CLI, you can run git clone

Then change directory using cd tutorials/stedi-inbound-transformation-aws-terraform

The project structure is as follows:

├── scripts
│   └──             // Shell script for creating the Lambda .zip
├── index.js        // Main Lambda code that does all the magic
├──                 // Core Terraform infrastructure
├──                 // String variables for Terraform
├── 850.edi         // Our X12 850 EDI file
└── package.json    // Declares the axios dependency (installed via npm install)
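For reference, a minimal package.json matching the tree above might look like the following; the package name and axios version are illustrative assumptions, not the repository's actual values.

```json
{
  "name": "stedi-inbound-transformation",
  "version": "1.0.0",
  "dependencies": {
    "axios": "^1.6.0"
  }
}
```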

Once you’ve cloned the repo, run bash /scripts/ to execute the necessary provisioning steps. This runs terraform init, which downloads the Terraform providers (modules and plugins required for deployment).

Now to deploy the stack, make sure your AWS profile and IAM access keys are correctly configured for the AWS CLI. Next, run terraform apply. You will be prompted to enter:

  1. Your Stedi API key (you can get this by following the
  2. Your Stedi Mapping ID. For this tutorial, we will use the Mapping ID from the

You can get the API key from the Stedi terminal under Account > API Keys.

Create a Stedi Mapping

We need to create a Stedi Mapping and get its Mapping ID.

  1. Navigate to > Mappings
  2. Click ‘Create Mapping’ to begin the wizard.
  3. For the Source Schema: Copy-and-paste the output JSON from the /translate call shown above.
  4. For the Target Schema: Copy-and-paste the following:

  {
    "po_number": "xxxxx",
    "sender_id": "xxxxx",
    "interchange_receiver_id_08": "xxxxx"
  }

  5. For “Add Keys to Target Map”: Select all keys.
  6. For the “po_number” mapping: output.interchanges[0].groups[0].transaction_sets[0].heading.beginning_segment_for_purchase_order_BEG.purchase_order_number_03
  7. For the “sender_id” mapping: output.interchanges[0].interchange_control_header_ISA.interchange_sender_id_06
  8. For the “interchange_receiver_id_08” mapping: output.interchanges[0].interchange_control_header_ISA.interchange_receiver_id_08
  9. Save the mapping, and get the Mapping ID from
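To make the three mapping expressions concrete, here is a minimal JavaScript sketch that applies the same selections to a stripped-down JEDI fragment. The fragment's values are fabricated for demonstration; a real translated 850 from EDI Core contains far more structure.

```javascript
// A fabricated, minimal JEDI fragment -- real EDI Core output is much larger.
const jedi = {
  interchanges: [
    {
      interchange_control_header_ISA: {
        interchange_sender_id_06: "SENDERID",
        interchange_receiver_id_08: "RECEIVERID",
      },
      groups: [
        {
          transaction_sets: [
            {
              heading: {
                beginning_segment_for_purchase_order_BEG: {
                  purchase_order_number_03: "PO-12345",
                },
              },
            },
          ],
        },
      ],
    },
  ],
};

// The same selections the mapping performs, written as plain property
// access ("output" is the translated JEDI document).
const output = jedi;
const mapped = {
  po_number:
    output.interchanges[0].groups[0].transaction_sets[0].heading
      .beginning_segment_for_purchase_order_BEG.purchase_order_number_03,
  sender_id:
    output.interchanges[0].interchange_control_header_ISA
      .interchange_sender_id_06,
  interchange_receiver_id_08:
    output.interchanges[0].interchange_control_header_ISA
      .interchange_receiver_id_08,
};
```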


Once you’ve entered those values into the CLI prompt, hit enter and you will see the following screen:


Type “yes” and hit enter. This will deploy to your AWS account. If you encounter any failures, they are likely due to a misconfiguration of your AWS profile in the shell.

On success, you should see:


Note: the name of our bucket in this instance is panda-smoothly-open-teal (a randomly generated name from Terraform); the one you see will be different.

Let’s navigate to the AWS Console and create a folder called inbound/ inside the S3 bucket. Navigate inside the inbound/ folder and upload the 850.edi file that is included in the git repository.


Now go back to the root of the bucket and click Refresh. You should see a new folder pop up called orders/.


Inside that folder, we’ll find a JSON file that is the output of the workflow we coded in the Lambda.


Opening that file, we can confirm it looks like the JSON payload we were expecting.


Congratulations, you completed the workshop assignment successfully!

Final thoughts

To monitor whether the solution is running correctly, you can look at the CloudWatch metrics of the Lambda function. There you can see how many successful invocations happened and inspect the duration of each request.

This concludes our workshop, but please feel free to extend and modify this stack further. You can try to process your own EDI files with the solution and explore different mappings that can be made.

Cleaning up the resources on AWS

If you’d like to tear down and clean up your AWS account, run terraform destroy and it will remove everything, including the data in the S3 bucket.

We hope you enjoyed this tutorial on implementing a workflow on AWS that calls Stedi to do data transformations, and we hope it makes the journey toward implementation clearer.

As always, Stedi is evolving, so if you have any thoughts or feedback we would love to hear from you!