Cloud AutoML is a suite of machine learning products that enables developers with limited machine learning expertise to train high-quality models specific to their business needs, leveraging Google’s state-of-the-art transfer learning and Neural Architecture Search technology.
In this tutorial, we’ll walk step by step through calling the AutoML API from a simple Lambda function deployed with the Serverless Framework. We will first create a Lambda function that acts as a wrapper for AutoML, then use API Gateway to trigger it. The function will make a request to a machine learning model hosted on AutoML (currently in beta) and return the classification result as a response.
Before continuing, it is assumed that the following prerequisites are met:
- An AWS account with permission to access the AWS Lambda console
- The Serverless Framework installed (see Getting Started)
- A Python 3.6+ runtime
- A sample model hosted in Google’s AutoML
You can find AutoML in the GCP console’s left panel > Natural Language > AutoML text classification. To start using your custom model, you’ll first have to create a service account. If you don’t need a custom model, the Cloud Natural Language API provides content classification, entity and sentiment analysis, and more.
AWS Lambda with Serverless Framework
Let us start by defining our `serverless.yml` file:
```yaml
service: rapidapi-example
...
...
custom:
  ...
  ...
  myEnvironment:
    automllayer:
      dev: 'arn:aws:lambda:us-east-2:7XXXXXXXXXX5:layer:autoML:2'
    awsrole:
      dev: 'arn:aws:iam::7XXXXXXXXXX5:role/LambdaServiceRole'
    deployment:
      dev: dev
  ...
  ...
  ...

provider:
  stage: ${opt:stage, 'dev'}
  name: aws
  runtime: python3.6
  role: '${self:custom.myEnvironment.awsrole.${self:provider.stage}}'
  region: '${self:custom.myEnvironment.region.${self:provider.stage}}'
  environment:
    awssecret: '${file(config.json):awssecret}'
    awskey: '${file(config.json):awskey}'
    GOOGLE_APPLICATION_CREDENTIALS: 'google_automl_keyfile.json'
    project_id: '${self:custom.myEnvironment.project_id.${self:provider.stage}}'
    compute_region: '${self:custom.myEnvironment.compute_region.${self:provider.stage}}'
    model_id: '${self:custom.myEnvironment.model_id.${self:provider.stage}}'

functions:
  ActionableAPI:
    handler: ActionableAPI.handleRequest
    name: ${self:provider.stage}-ActionableAPI
    memorySize: 1500
    layers:
      - '${self:custom.myEnvironment.automllayer.${self:provider.stage}}'
    events:
      - http:
          path: /classify
          method: POST

plugins:
  - serverless-python-requirements
```
Here we define our function `ActionableAPI` under `functions` with an HTTP POST event trigger and a custom route `/classify`. This sets up an API Gateway endpoint that sends each incoming request to your function as an event. When your function is triggered over HTTP, the timeout is capped at roughly 30 seconds. If you expect requests to take longer than that, which is already a significant amount of time, either split the work across multiple Lambda functions or look for other event mechanisms through which the request can be triggered.
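To make the event shape concrete, here is a minimal sketch of the API Gateway proxy event the handler receives. Only the fields this tutorial’s handler actually reads are shown, and the sentence is illustrative:

```python
import json

# Minimal slice of an API Gateway (Lambda proxy) event.
# The HTTP body arrives as a JSON *string* under "body",
# which is why the handler must call json.loads on it.
event = {
    "httpMethod": "POST",
    "path": "/classify",
    "headers": {"Content-Type": "application/json"},
    "body": json.dumps({"sentence": "The checkout flow needs fewer steps"}),
}

req_body = json.loads(event["body"])
print(req_body["sentence"])
```

The double encoding (a JSON string inside a JSON document) is a common source of bugs: reading `event["body"]["sentence"]` directly would fail, because `body` is a string, not a dict.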
The runtime is defined as `python3.6` under `provider`, and the default stage is `dev`. Environment variables are resolved per stage; you can define other stages such as `prod` and update `myEnvironment` accordingly. The important variables for our use case here are:
- `GOOGLE_APPLICATION_CREDENTIALS`: path to the key file you’ll get while setting up your service account
- `project_id`: ID of the project under which the model is deployed
- `compute_region`: region in which your model is hosted
- `model_id`: ID of your custom model
Since I’ll be using AutoML, I’ve added a layer for its Python library and configured it in `ActionableAPI`. Layers are typically used for libraries that the AWS runtime does not support out of the box. A layer is easily reusable across functions and helps standardize your code as well. I won’t go into the details of setting up a layer, since it is a little out of scope for this tutorial. For more details on `serverless.yml` configuration and setting up a Lambda layer, you can refer to this and this.
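For reference, a Python Lambda layer zip must place its packages under a top-level `python/` directory so they land on the function’s import path. A sketch of the layout for the AutoML layer used above (package contents abbreviated):

```
automl-layer.zip
└── python/
    ├── google/
    │   └── cloud/
    │       └── automl_v1beta1/ ...
    └── ...other dependencies...
```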
For a complete understanding of how API Gateway works in the context of Lambda functions, you can refer to this extensive guide.
Next, let’s define the function `handleRequest` in the file `ActionableAPI.py`, as configured in the YAML file:
```python
import json
import os

from google.cloud import automl_v1beta1 as automl


def handleRequest(event, context):
    print("Event received: ", json.dumps(event))
    reqBody = json.loads(event["body"])
    sentence = reqBody.get("sentence")
    if sentence:
        # Create clients for the prediction service.
        project_id = os.environ["project_id"]
        compute_region = os.environ["compute_region"]
        model_id = os.environ["model_id"]
        automl_client = automl.AutoMlClient()
        prediction_client = automl.PredictionServiceClient()

        # Get the full path of the model.
        model_full_id = automl_client.model_path(project_id, compute_region, model_id)

        payload = {"text_snippet": {"content": sentence, "mime_type": "text/plain"}}
        params = {}
        try:
            response = prediction_client.predict(model_full_id, payload, params)
        except Exception as err:
            raise Exception("Error in AutoML response: {}".format(err))

        print("response", response)
        responseObj = {}
        for result in response.payload:
            if result.classification.score > 0.5:
                responseObj["actionable"] = result.display_name == "TRUE"
                responseObj["confidence"] = result.classification.score
        print(json.dumps(responseObj))
        return {"statusCode": 200, "body": json.dumps(responseObj)}
    else:
        rejectResp = {"message": "Sentence not found"}
        return {"statusCode": 400, "body": json.dumps(rejectResp)}
```
Let’s start with a little explanation of what this function does. At the top, I import the `automl` client library, which is what we configured as the layer for our function. In `handleRequest`, we check whether a sentence is present in the request body. If it is, we initialize the parameters required by our model from the environment variables defined in the YAML file above and set up our AutoML clients. The handler calls our model hosted in Google Cloud’s AutoML, which returns `TRUE` if there is a piece of text in the sentence (typically a review) suggesting an improvement or recommendation to increase the overall quality of the product. If the classification score comes out above a threshold of 0.5, we mark the sentence as actionable and return the result along with the confidence score.
The interesting point to notice here is the way the result is returned from the function. This ensures compatibility with API Gateway, which expects the Lambda to return its result in a specific shape. It typically looks like this:
```json
{
  "statusCode": 200,
  "headers": {"Content-Type": "application/json"},
  "body": "response body"
}
```
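If you want to check this contract without deploying anything, a small local harness can stand in for the handler. This sketch mirrors only the request/response shape; the stubbed classification values are made up, not real AutoML output:

```python
import json

def fake_handle(event, context=None):
    # Mirrors handleRequest's contract without calling AutoML.
    body = json.loads(event["body"])
    sentence = body.get("sentence")
    if not sentence:
        return {"statusCode": 400, "body": json.dumps({"message": "Sentence not found"})}
    # Stubbed result in place of the real AutoML prediction.
    return {"statusCode": 200, "body": json.dumps({"actionable": True, "confidence": 0.91})}

resp = fake_handle({"body": json.dumps({"sentence": "Please add a dark mode"})})
print(resp["statusCode"])                      # 200
print(json.loads(resp["body"])["actionable"])  # True
```

Note that `body` in the return value must itself be a JSON string, not a dict; API Gateway will reject a response whose `body` is not a string.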
At this point, your code structure should look something like this:
```
(root)
 |--- serverless.yml
 |--- ActionableAPI.py
 |--- config.json        // if you have externalized your AWS keys
 |--- node_modules/
 |--- package.json
 |--- package-lock.json
 |--- automl_key.json
```
Now, to deploy the Lambda function with the stage `dev` as defined, the following command can be used:

```shell
serverless deploy -v --stage dev
```
To add small libraries, you don’t have to create a layer. Simply list those packages in `requirements.txt` and use `serverless-python-requirements`, which takes care of downloading and installing them during deployment.
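For example, a `requirements.txt` next to `serverless.yml` might look like this (the package and version are illustrative):

```
requests==2.22.0
```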
In the deployment logs, you can see the API endpoint for your Lambda:

```
.....
.....
api keys:
  ......
endpoints:
  POST - https://abc123XY.execute-api.us-east-2.amazonaws.com/dev/classify
functions:
  .....
.....
```

If, by any chance, you missed this, you can head over to the Lambda Management Console in AWS, search for the function name (in our case `dev-ActionableAPI`), and click on API Gateway. You can then scroll to the bottom of the page and check out the API endpoint details. After you’ve tried out the API, the logs should appear in CloudWatch, by default under the log group `/aws/lambda/dev-ActionableAPI`.
Now that we’re all set up, let’s see how we can publish this API in a marketplace.