Using Google Cloud’s AutoML in AWS Lambda with API Gateway

  • Malavika
  • October 31, 2019

Cloud AutoML is a suite of machine learning products that enables developers with limited machine learning expertise to train high-quality models specific to their business needs by leveraging Google’s state-of-the-art transfer learning and Neural Architecture Search technology.

In this tutorial, we’ll go through the step-by-step details of how to use the AutoML client library from a simple Lambda function using the Serverless Framework. We will first create a Lambda function that acts as a wrapper for AutoML, then use API Gateway to trigger it. The function makes a simple request to a machine learning model hosted on AutoML (currently in beta) and returns the classification result as a response.

Before continuing further, it is assumed that the following prerequisites are met:

  1. An AWS account with permission to access the AWS Lambda console
  2. The Serverless Framework installed (see its Getting Started guide)
  3. A Python 3.6+ runtime
  4. A sample model hosted in Google’s AutoML

You can check out AutoML from GCP’s left panel > Natural Language > AutoML text classification. To start using your custom model, you’ll have to first create a service account. If you don’t need a custom model solution, the Cloud Natural Language API provides content classification, entity and sentiment analysis, and more.
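The Google Cloud client libraries discover the service-account key through the GOOGLE_APPLICATION_CREDENTIALS environment variable. A minimal sketch of pointing it at a downloaded key file (the file name matches the one used later in this tutorial and is otherwise arbitrary):

```python
import os

# The Google Cloud client libraries read this environment variable to
# locate the service-account key file. In Lambda, we will set it via
# serverless.yml instead of in code.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "google_automl_keyfile.json"
print(os.environ["GOOGLE_APPLICATION_CREDENTIALS"])  # google_automl_keyfile.json
```

Locally, you can also export the variable in your shell before running any client code.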

AWS Lambda with Serverless Framework

Let us start by defining our serverless.yml file:

service: rapidapi-example

custom:
  myEnvironment:
    automllayer:
      dev: 'arn:aws:lambda:us-east-2:7XXXXXXXXXX5:layer:autoML:2'
    awsrole:
      dev: 'arn:aws:iam::7XXXXXXXXXX5:role/LambdaServiceRole'
    region:
      dev: us-east-2
    project_id:
      dev: YOUR_PROJECT_ID        # placeholder
    compute_region:
      dev: YOUR_COMPUTE_REGION    # placeholder
    model_id:
      dev: YOUR_MODEL_ID          # placeholder

provider:
  name: aws
  runtime: python3.6
  stage: ${opt:stage, 'dev'}
  role: '${self:custom.myEnvironment.awsrole.${self:provider.stage}}'
  region: '${self:custom.myEnvironment.region.${self:provider.stage}}'
  environment:
    awssecret: '${file(config.json):awssecret}'
    awskey: '${file(config.json):awskey}'
    GOOGLE_APPLICATION_CREDENTIALS: 'google_automl_keyfile.json'
    project_id: '${self:custom.myEnvironment.project_id.${self:provider.stage}}'
    compute_region: '${self:custom.myEnvironment.compute_region.${self:provider.stage}}'
    model_id: '${self:custom.myEnvironment.model_id.${self:provider.stage}}'

functions:
  ActionableAPI:
    handler: ActionableAPI.handleRequest
    name: ${self:provider.stage}-ActionableAPI
    memorySize: 1500
    layers:
      - '${self:custom.myEnvironment.automllayer.${self:provider.stage}}'
    events:
      - http:
          path: /classify
          method: POST

plugins:
  - serverless-python-requirements

Here we define our function ActionableAPI under functions with an HTTP POST event trigger and the custom route /classify . This will set up an API Gateway endpoint that sends the request to your function as an event when called. When you configure your function to be HTTP-triggered, the effective timeout is capped at API Gateway’s 30-second limit. If you expect your requests to take more than 30 seconds to process, which is already a significant amount of time, either break your functionality across multiple Lambda functions or look for other event mechanisms through which the request can be triggered.
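To make the proxy-integration event concrete, here is a minimal sketch of a handler parsing such an event (the field names follow the Lambda proxy integration format; echo_handler and the sentence key are this sketch's own conventions):

```python
import json

def echo_handler(event, context):
    # API Gateway's Lambda proxy integration delivers the raw request
    # body as a JSON string under event["body"].
    body = json.loads(event["body"] or "{}")
    sentence = body.get("sentence")
    if sentence is None:
        return {"statusCode": 400, "body": json.dumps({"message": "Sentence not found"})}
    return {"statusCode": 200, "body": json.dumps({"received": sentence})}

# Simulated API Gateway event, as Lambda would receive it:
event = {
    "httpMethod": "POST",
    "path": "/classify",
    "body": json.dumps({"sentence": "Battery life could be better."}),
}
print(echo_handler(event, None)["statusCode"])  # 200
```

Invoking the function locally with a hand-built event like this is an easy way to test the request parsing before deploying.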

The runtime is defined as python3.6 under provider and the default stage is dev . The environment variables are referenced according to the stage; you can define other stages like prod and update myEnvironment accordingly. The important variables for our use case here are:

  1. GOOGLE_APPLICATION_CREDENTIALS: path to the key file you’ll get while setting up your service account
  2. project_id: ID of the project under which the model is deployed
  3. compute_region: region in which your model is hosted
  4. model_id: ID of your custom model
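AutoML identifies a model by a single resource path built from these three values. A quick sketch of the format that the client's model_path() helper produces (the IDs below are placeholders):

```python
def model_full_path(project_id, compute_region, model_id):
    # Same resource-path format the AutoML client's model_path()
    # helper returns for a given project, region, and model.
    return f"projects/{project_id}/locations/{compute_region}/models/{model_id}"

print(model_full_path("my-project", "us-central1", "TCN123"))
# projects/my-project/locations/us-central1/models/TCN123
```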

Since I’ll be using AutoML, I’ve added a layer for its Python library and configured it in `ActionableAPI`. Layers are typically used for libraries that the AWS runtime does not support out of the box; they are easily reused across functions and help standardize your code as well. I won’t go into the details of setting up a layer since it is a little out of scope for this tutorial. For more details on serverless.yml configuration and setting up a Lambda layer, you can refer to this and this.

For a complete understanding of how API Gateway works in context of Lambdas, you can refer to this extensive guide.

Next, let’s define the function handleRequest in ActionableAPI.py, as configured in the YAML file:

import json
import os

from google.cloud import automl_v1beta1 as automl


def handleRequest(event, context):
    print("Event received: ", json.dumps(event))
    reqBody = json.loads(event["body"])
    sentence = reqBody["sentence"] if "sentence" in reqBody else None

    if sentence:
        # Create clients for the AutoML and prediction services.
        project_id = os.environ["project_id"]
        compute_region = os.environ["compute_region"]
        model_id = os.environ["model_id"]
        automl_client = automl.AutoMlClient()
        prediction_client = automl.PredictionServiceClient()

        # Get the full path of the model.
        model_full_id = automl_client.model_path(project_id, compute_region, model_id)

        payload = {"text_snippet": {"content": sentence, "mime_type": "text/plain"}}
        params = {}
        try:
            response = prediction_client.predict(model_full_id, payload, params)
        except Exception as err:
            print("Error in AutoML response :", err)
            return {"statusCode": 502, "body": json.dumps({"message": "AutoML request failed"})}

        print("response", response)
        responseObj = {}
        for result in response.payload:
            if result.classification.score > 0.5:
                responseObj["actionable"] = result.display_name == "TRUE"
                responseObj["confidence"] = result.classification.score
        return {"statusCode": 200, "body": json.dumps(responseObj)}

    rejectResp = {"message": "Sentence not found"}
    return {"statusCode": 400, "body": json.dumps(rejectResp)}

Let’s start with a little explanation of what this function does. At the top, I’ve imported the automl client library, which is what we configured as the layer for our function. In handleRequest , we check whether sentence is present in the request body. If it is, we initialize the parameters required by our model from the environment variables defined in the YAML file above and set up our AutoML client. The call goes to our model hosted in Google Cloud’s AutoML, which returns TRUE if there is a piece of text in the sentence (typically a review) suggesting an improvement or recommendation to increase the overall quality of the product. If the classification score is above a threshold of 0.5, we mark the sentence as actionable and return the result along with the confidence score.
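The thresholding step can be exercised without calling AutoML at all. Here is a sketch with a stubbed prediction payload (the TRUE/FALSE labels mirror the ones this tutorial's model uses; to_response_obj is my own helper name):

```python
from types import SimpleNamespace

def to_response_obj(payload, threshold=0.5):
    # Keep only classifications above the confidence threshold and
    # translate the winning label into an "actionable" boolean.
    responseObj = {}
    for result in payload:
        if result.classification.score > threshold:
            responseObj["actionable"] = result.display_name == "TRUE"
            responseObj["confidence"] = result.classification.score
    return responseObj

# Stub of response.payload, mimicking the AutoML result objects:
stub = [
    SimpleNamespace(display_name="TRUE", classification=SimpleNamespace(score=0.92)),
    SimpleNamespace(display_name="FALSE", classification=SimpleNamespace(score=0.08)),
]
print(to_response_obj(stub))  # {'actionable': True, 'confidence': 0.92}
```

Stubbing the payload like this also makes the branch easy to unit-test without network access or credentials.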

The interesting point to notice here is the way the result is returned from the function. This ensures compatibility with API Gateway, which expects the Lambda to return the result in a specific shape. It typically looks like this:

{
  "statusCode": 200,
  "headers": {"Content-Type": "application/json"},
  "body": "response body"
}
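A small helper keeps this shape consistent across all return paths (make_response is my own name, not part of any SDK):

```python
import json

def make_response(status_code, body, content_type="application/json"):
    # API Gateway's proxy integration requires an integer statusCode
    # and a string body; headers are optional but good practice.
    return {
        "statusCode": status_code,
        "headers": {"Content-Type": content_type},
        "body": json.dumps(body),
    }

resp = make_response(200, {"actionable": True, "confidence": 0.9})
```

Note that body must be a string: returning a raw dict there makes API Gateway respond with a 502 "malformed Lambda proxy response" error.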

At this point, your code structure should look something like this:

|--- serverless.yml
|--- ActionableAPI.py
|--- config.json // if you have externalized your AWS keys
|--- node_modules/
|--- package.json
|--- package-lock.json
|--- google_automl_keyfile.json

Now, to deploy the lambda function with stage dev as defined, the following command can be used:

serverless deploy -v --stage dev

To add small libraries, you don’t have to create a layer. Simply list those packages in requirements.txt and use the serverless-python-requirements plugin, which takes care of downloading and installing them during deployment.
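For example, if the function also needed the requests library, a requirements.txt at the project root would be enough (the version pin is illustrative):

```
# requirements.txt -- packaged automatically by serverless-python-requirements
requests==2.22.0
```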

In the deployment logs, you can see the API endpoint for your lambda:

endpoints:
  POST -


If, by any chance, you missed this, you can head over to the Lambda Management Console in AWS, search for the function name, which in our case will be dev-ActionableAPI, and click on API Gateway:

You can then scroll to the bottom of the page and check out the API endpoint details. After you’ve tried out the API, the logs should appear in CloudWatch by default under the log group /aws/lambda/dev-ActionableAPI .

Now that we’re all set up, let’s see how we can publish this API in a marketplace.

  • Tags:
  • AI
  • Analytics
  • AWS Lambda
  • Rapid API
  • Tech