Engineering

06 July, 2020

Securing application webhooks in Elixir

Nuno Bernardes

Software Engineer

Not long ago, I had a task that involved securing a webhook from an external API, making it possible to verify if the request was coming from the allowed application (authenticity) and if the received payload matched the one sent from the application, by verifying if the hashes matched (integrity).

Using SHA256 HMAC payload verification, the flow from the validation was as follows:

  1. Receive an incoming request from the external API.

  2. Extract the text payload as an array of bytes. The entire body of the POST request is used.

  3. Compute a SHA256 HMAC digest for the array of bytes. If your external API implementation has multiple HMAC keys, compute one digest for each of the HMAC keys.

  4. Base64-encode each of the digests.

  5. Compare the Base64 digest(s) you computed using your application’s key(s) to the values of the signature headers. At least one of the computed digests must exactly match its corresponding header value; only one match is required. If none match, the notification may be compromised and should not be trusted.

Knowing this, I started coding the first iteration to check whether the request was valid…

def is_request_valid?(conn) do
  # Signature received from the request (Step 1)
  incoming_signature = conn |> get_req_header("signature") |> Enum.at(0)

  # Body payload of the request (Step 2)
  {:ok, payload, _conn} = Plug.Conn.read_body(conn)

  # Stored secret, generated from the external app
  stored_secret = Application.get_env(:example_app, :webhook_secret)

  # Hash generation, using the Erlang :crypto library (Steps 3 and 4)
  generated_hash =
    :crypto.hmac(:sha256, stored_secret, payload)
    |> Base.encode64()

  # Returns true if the request is valid, false otherwise (Step 5)
  generated_hash == incoming_signature
end
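As a quick sanity check, the digest steps alone can be exercised in IEx with nothing but the Erlang :crypto module. A small sketch, with a made-up secret and payload (note that on OTP 22.1 and later, `:crypto.mac/4` supersedes the older `:crypto.hmac/3` used above):

```elixir
secret = "my_webhook_secret"     # made-up secret for illustration
payload = ~s({"event":"ping"})   # raw JSON body as received

# Steps 3 and 4: SHA256 HMAC digest of the raw bytes, Base64-encoded.
# :crypto.mac/4 is the OTP 22.1+ replacement for :crypto.hmac/3.
signature =
  :crypto.mac(:hmac, :sha256, secret, payload)
  |> Base.encode64()

# Step 5: recomputing over the exact same bytes reproduces the signature...
^signature = :crypto.mac(:hmac, :sha256, secret, payload) |> Base.encode64()

# ...while any change to the payload yields a different digest.
false = signature == (:crypto.mac(:hmac, :sha256, secret, ~s({"event":"pong"})) |> Base.encode64())
```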

When testing this approach, the result was invariably false ☹️. When debugging the code, the expected JSON (already decoded into a map) was returned with no errors whatsoever. Everything looked well coded. What was happening? Time to investigate!

Problem-solving

As software developers, we are constantly faced with new problems and new challenges to solve, and this was no exception. Working at Coletiv, with my peers always at my back (even in times of social isolation), I brought the problem to them! Without taking the challenge away from me, they suggested I analyze the Plug dependency more carefully.

I started looking for clues 🕵🏽 in the Plug hexdocs to see if I could find “the murder weapon”. As it turns out, the guilty party was the payload! Plug provides a specification for web application components and adapters for web servers. When we receive the request, our Plug parses its content with Poison (our JSON parser of choice).

The in-memory representation of the parsed data in Elixir differs from the raw received data. Decoding a byte array and encoding it again will not yield the original byte array, even though it encodes the same objects, because the parsed representation may order the keys differently from the byte array. As a result, the hash computed over the re-encoded payload could not match the signature in the request header. So, is there a way to access a raw, unparsed version of this endpoint’s payload? Removing the parsing function from the Plug would surely break all of our other endpoints which rely on it…
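The mismatch is easy to reproduce with only the Erlang :crypto module. A small sketch with made-up payloads (using `:crypto.mac/4`, the OTP 22.1+ form of the HMAC call):

```elixir
secret = "my_webhook_secret"

# The raw bytes the external API signed...
raw = ~s({"b":2,"a":1})

# ...and an equivalent JSON document a decode/encode round-trip might
# produce: same data, different key order, therefore different bytes.
reencoded = ~s({"a":1,"b":2})

hmac = fn body -> :crypto.mac(:hmac, :sha256, secret, body) |> Base.encode64() end

# The two digests do not match, so verifying against the re-encoded
# body fails even though both strings decode to the same map.
false = hmac.(raw) == hmac.(reencoded)
```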

Cache, I choose you!

Digging into the problem, I found out that the Plug.Parsers plug supports a custom body reader, making it possible to cache the raw body in the connection and perform verifications on it later. Following the example in the hexdocs…

defmodule CacheBodyReader do
  def read_body(conn, opts) do
    {:ok, body, conn} = Plug.Conn.read_body(conn, opts)
    conn = update_in(conn.assigns[:raw_body], &[body | &1 || []])
    {:ok, body, conn}
  end
end

plug(
  Plug.Parsers,
  parsers: [:urlencoded, :multipart, :json],
  pass: ["*/*"],
  body_reader: {CacheBodyReader, :read_body, []},
  json_decoder: Poison
)

This way, our requests can still access the parsed body as usual, but additionally have access to the raw body when needed. After this change, we had to alter one line in our request validation.

# Body payload of the request (Step 2)
[payload] = Map.get(conn.assigns, :raw_body)

Finally, when testing the webhook, everything was working as intended and the requests made by the external API were being accepted by our application!

Elixir

Software Development

Security Token

Webhooks

SHA-256

Plug
