Real-time image analysis with AWS Rekognition and Elixir
When attending AWS Summit Madrid 2019 (#AWSSummit), we had the chance to watch the keynotes and speak with multiple companies that are exploring a wide range of Amazon solutions. Most of them justified their choice of Amazon Web Services (AWS) with three main reasons:
Offering — a lot of services, most of which are easy to integrate with their systems
Low-cost — when compared to supporting a team to build your own solutions
Partnership — their engineers are always ready to help your company decide what's best and to provide technical help
As you may know, AWS is continuously expanding its offering to all the different companies it serves. It currently offers more than 150 products, of which 19 are in Machine Learning, making it the area with the most solutions available.
Rekognition is a ready-to-use Machine Learning product capable of real-time video and image analysis, such as text detection, face detection and label extraction.
Play around with the Object and Scene Detection Rekognition Demo.
This article intends to showcase the potential of AWS Rekognition, displaying its value with some examples and sample code that use an existing package we developed to easily integrate all of its functionality into your Elixir project.
We are integrating some of the Rekognition functionalities into our projects and, since we could not find any Elixir package available, we decided to contribute to the community by building one that wraps the service.
Building the entire AWS authentication and request handling from scratch would greatly increase the effort of implementing the package, and we didn't want to reinvent the wheel: ex_aws already had everything we needed, we were only missing the specific Rekognition actions.
We find it really important to contribute to the open source community, especially in a language like Elixir, where we found people to be open and willing to share.
In the end, we got our Rekognition package that fully supports all features available at the moment.
Any contributions are welcomed and greatly appreciated!! 😃
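If you want to try the examples below in your own project, the setup is a handful of dependencies plus the usual ex_aws credentials configuration. The version requirements and the region below are assumptions for illustration — check hex.pm for the current releases:

```elixir
# mix.exs — version requirements are illustrative, check hex.pm for the latest
defp deps do
  [
    {:ex_aws, "~> 2.1"},
    {:ex_aws_rekognition, "~> 0.6"},
    # ex_aws needs an HTTP client and a JSON codec
    {:hackney, "~> 1.15"},
    {:jason, "~> 1.1"}
  ]
end

# config/config.exs — read the AWS credentials from the environment
config :ex_aws,
  access_key_id: [{:system, "AWS_ACCESS_KEY_ID"}, :instance_role],
  secret_access_key: [{:system, "AWS_SECRET_ACCESS_KEY"}, :instance_role],
  region: "eu-west-1"
```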
Facial Landmarks (picture from Unsplash)
Using the available DetectFaces action, you are able to search a given image for faces and their corresponding characteristics and attributes.
{:ok, image_binary} = File.read("test/assets/face_source.jpeg")
{:ok, response} =
  ExAws.Rekognition.detect_faces(image_binary, ["ALL"])
  |> ExAws.request()
The response is given in real time, and from it we can retrieve concrete characteristics like the facial landmarks (eyes, mouth, nose, etc.) as well as some deductions like the gender, age range and even the emotions the face may be expressing.
Rekognition deduced the person in the picture to be a female between 20 and 38 years old, expressing happiness. Take a look at the response below to see all the attributes you receive.
%{
"FaceDetails" => [
%{
"AgeRange" => %{"High" => 38, "Low" => 20},
"Beard" => %{"Confidence" => 98.57318115234375, "Value" => false},
"BoundingBox" => %{...},
"Confidence" => 100.0,
"Emotions" => [
%{"Confidence" => 0.3396322727203369, "Type" => "SURPRISED"},
%{"Confidence" => 0.18125057220458984, "Type" => "ANGRY"},
%{"Confidence" => 96.90631866455078, "Type" => "HAPPY"},
%{"Confidence" => 0.6400362253189087, "Type" => "DISGUSTED"},
%{"Confidence" => 0.4920317828655243, "Type" => "SAD"},
%{"Confidence" => 1.1983953714370728, "Type" => "CALM"},
%{"Confidence" => 0.24234053492546082, "Type" => "CONFUSED"}
],
"Eyeglasses" => %{"Confidence" => 99.99881744384766, "Value" => false},
"EyesOpen" => %{"Confidence" => 58.073116302490234, "Value" => true},
"Gender" => %{"Confidence" => 98.77777862548828, "Value" => "Female"},
"Landmarks" => [
%{
"Type" => "eyeLeft",
"X" => 0.31307676434516907,
"Y" => 0.4234926700592041
},
%{
"Type" => "eyeRight",
"X" => 0.404507040977478,
"Y" => 0.4026407301425934
},
...
],
"MouthOpen" => %{"Confidence" => 98.65016174316406, "Value" => true},
"Mustache" => %{"Confidence" => 99.93177795410156, "Value" => false},
"Pose" => %{...},
"Quality" => %{...},
"Smile" => %{"Confidence" => 97.82798767089844, "Value" => true},
"Sunglasses" => %{"Confidence" => 99.99996185302734, "Value" => false}
}
]
}
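Once you have such a response in hand, digging out the dominant emotion for each face is plain Enum work. A minimal sketch, using a trimmed-down version of the response above as sample data:

```elixir
# A trimmed version of the detect_faces response shown above
response = %{
  "FaceDetails" => [
    %{
      "Emotions" => [
        %{"Confidence" => 0.34, "Type" => "SURPRISED"},
        %{"Confidence" => 96.91, "Type" => "HAPPY"},
        %{"Confidence" => 0.49, "Type" => "SAD"}
      ]
    }
  ]
}

# For each detected face, keep only the highest-confidence emotion
top_emotions =
  response
  |> Map.get("FaceDetails", [])
  |> Enum.map(fn face ->
    face
    |> Map.get("Emotions", [])
    |> Enum.max_by(& &1["Confidence"])
    |> Map.get("Type")
  end)

# top_emotions == ["HAPPY"]
```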
There are many use cases for an OCR. One can take advantage of this service to retrieve the information on cards without the need to type it, and it can also be used to identify car license plates with decent accuracy.
Detect Text (Coletiv business card)
Using the available DetectText action, you are able to retrieve all the locations and pieces of text found within the given image, all this in real time.
{:ok, image_binary} = File.read("test/assets/image.jpeg")
{:ok, response} =
  ExAws.Rekognition.detect_text(image_binary)
  |> ExAws.request()
In the response, you get the list of detections that occurred. Each detection contains a confidence value, a string with the detected text and also the geometry values that help you locate the text within the image.
%{
"TextDetections" => [
%{
"Confidence" => 81.74166870117188,
"DetectedText" => "CO",
"Geometry" => %{...}
},
%{
"Confidence" => 96.2022705078125,
"DetectedText" => "Andre Silva",
"Geometry" => %{...}
},
%{
"Confidence" => 99.13896179199219,
"DetectedText" => "Software Engineer",
"Geometry" => %{...}
},
%{
"Confidence" => 99.6148452758789,
"DetectedText" => "andre@coletiv.com",
"Geometry" => %{...}
},
%{
"Confidence" => 91.88440704345703,
"DetectedText" => "www.coletiv.com",
"Geometry" => %{...}
},
%{
"Confidence" => 81.74166870117188,
"DetectedText" => "CO",
"Geometry" => %{...}
},
...
]
}
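With the full response in hand, a common next step is to drop the low-confidence detections and keep just the text. A small sketch over a trimmed-down version of the response above:

```elixir
# A trimmed version of the detect_text response shown above
response = %{
  "TextDetections" => [
    %{"Confidence" => 81.74, "DetectedText" => "CO"},
    %{"Confidence" => 96.2, "DetectedText" => "Andre Silva"},
    %{"Confidence" => 99.61, "DetectedText" => "andre@coletiv.com"}
  ]
}

# Keep only the detections above a given confidence threshold
confident_text =
  response["TextDetections"]
  |> Enum.filter(&(&1["Confidence"] >= 90))
  |> Enum.map(& &1["DetectedText"])

# confident_text == ["Andre Silva", "andre@coletiv.com"]
```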
This functionality is better suited for pictures with less text and doesn't cope well with full document analysis. For that use case, AWS is currently working on a different product, Textract, intended to digitise paper documents and make the information inside them easy to access, index and search.
Imagine you have a system or an application that requires you to identify users by their face, and you happen to have a picture of the person. Rekognition is able to do this with almost no setup necessary on your end.
Rekognition contains a set of actions that allow you to create collections of faces, and these collections handle the data storage for you, meaning you don't need to keep any of it on your backend.
The documentation clearly states that no face image is saved: only the feature vectors for each face are stored, and that is enough for the algorithm to recognise faces in other images.
To build such a feature we need to take advantage of multiple actions, in the following order:
CreateCollection — create a collection to be able to store faces;
IndexFaces — index or add faces to an existent collection;
SearchFacesByImage — detect faces (from a collection) inside an image.
collection_id = "ex_aws_rekognition_test_collection"
{:ok, _} =
ExAws.Rekognition.create_collection(collection_id)
|> ExAws.request()
{:ok, image_binary} = File.read("test/assets/face_target.jpeg")
{:ok, _} =
ExAws.Rekognition.index_faces(collection_id, image_binary)
|> ExAws.request()
{:ok, image_binary} = File.read("test/assets/face_source.jpeg")
{:ok, result} =
ExAws.Rekognition.search_faces_by_image(collection_id, image_binary)
|> ExAws.request()
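According to the AWS documentation, SearchFacesByImage returns the matching faces ordered by similarity, highest first, so picking the best candidate is straightforward. A sketch over a trimmed response following the shape documented by AWS (the face IDs below are made up for illustration):

```elixir
# A trimmed SearchFacesByImage response, following the shape documented by AWS
# (the FaceId values are made up for illustration)
result = %{
  "SearchedFaceConfidence" => 99.99,
  "FaceMatches" => [
    %{"Similarity" => 98.3, "Face" => %{"FaceId" => "aaaa-1111"}},
    %{"Similarity" => 91.7, "Face" => %{"FaceId" => "bbbb-2222"}}
  ]
}

# Matches come back sorted by similarity, so the best one is the head of the list
best_match =
  case result["FaceMatches"] do
    [best | _rest] -> best["Face"]["FaceId"]
    [] -> nil
  end

# best_match == "aaaa-1111"
```

When you are done experimenting, the package also wraps the matching DeleteCollection action, so a `delete_collection(collection_id) |> ExAws.request()` call cleans up after yourself.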
There are many different use cases for this product, and the best way to see how other companies are taking advantage of Rekognition and the remaining AWS products is to attend the Amazon Summits being held all around the 🌍.
Follow the AWS Global Summit Program (#AWSSummit) as one may be held close to you: it's free, and they offer a lot of 🍹🥗🥣🥤 to make it easier to keep up with all the knowledge being shared.
As a final note, if you intend to integrate a specific AWS product but it is still not supported by the ex_aws organisation, make sure you contact us and we can create it together! 😃