AWS Lex is a powerful tool for building interactive chatbots. It makes it easy to create bots that can answer customer questions, respond to comments, and handle other conversational tasks, and those bots can be tailored to a particular industry, product, service, or customer base. You can work with Lex through the AWS console or programmatically through its API, and you can pair it with other AWS services, such as Lambda, to act on what users say.
What Is AWS Lex?
AWS Lex is made up of many different machine-learning services, most of which are also available as standalone AWS services.
The first step is speech recognition—converting spoken word into text that a machine can more easily understand. AWS’s Transcribe service does this quite well, though it’s better suited for non-realtime applications, such as subtitling video or transcribing audio call logs. This step isn’t necessary if you’re making a text-based chatbot, but it’s crucial for bots like Alexa and Siri.
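If you want to try that transcription step on its own, Transcribe is exposed through the standard AWS SDKs. As a rough sketch using boto3 (the job name and S3 URI below are placeholders), starting a transcription job looks something like this:

```python
import boto3

transcribe = boto3.client("transcribe")

# Start an asynchronous transcription job for an audio file stored in S3.
# The job name and bucket path are placeholders for this example.
transcribe.start_transcription_job(
    TranscriptionJobName="call-log-example",
    Media={"MediaFileUri": "s3://my-example-bucket/call-log.mp3"},
    MediaFormat="mp3",
    LanguageCode="en-US",
)

# The job runs in the background; poll it to see when the transcript is ready.
job = transcribe.get_transcription_job(TranscriptionJobName="call-log-example")
print(job["TranscriptionJob"]["TranscriptionJobStatus"])
```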
Machines don’t automatically understand human language though, so extracting the useful bits out of a given sentence is key to making the chatbot respond fluently to commands. AWS Comprehend does this with high accuracy and is able to pick out and identify keywords in input text.
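You can see this part in isolation with the Comprehend API. A minimal sketch with boto3 (the input sentence is just an example):

```python
import boto3

comprehend = boto3.client("comprehend")
text = "I'd like to book a dentist appointment for next Tuesday in Seattle."

# Pull out key phrases and entities (dates, locations, and so on) from the sentence.
phrases = comprehend.detect_key_phrases(Text=text, LanguageCode="en")
entities = comprehend.detect_entities(Text=text, LanguageCode="en")

print([p["Text"] for p in phrases["KeyPhrases"]])
print([(e["Type"], e["Text"]) for e in entities["Entities"]])
```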
Combined with custom logic for dictating the flow of a conversation, Lex is able to respond to user commands and to send tasks off to Lambda for further processing. During a conversation, AWS Lex can also query users for additional information; for example, if a user is trying to book an appointment, Lex can ask the user for a date and time suitable for them.
Lex’s text output can also be converted to speech using AWS Polly, making for a seamless chatbot experience.
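Polly is likewise available as its own API, so the final text-to-speech step can be tested on its own. A minimal sketch with boto3 (the reply text and voice choice are just examples):

```python
import boto3

polly = boto3.client("polly")

# Convert the bot's text reply into an MP3 audio stream.
response = polly.synthesize_speech(
    Text="Your appointment is booked for Tuesday at 10 AM.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

with open("reply.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```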
Given how metered all of the component services are, Lex itself is priced surprisingly simply: you're charged $0.004 per voice request ($4 per thousand) and $0.001 per text request ($1 per thousand).
Unlike most AWS services, Lex is currently only available within three regions:
us-east-1 (N. Virginia)
us-west-2 (Oregon)
eu-west-1 (Ireland)
Given how latency-dependent a chatbot usually is, it's surprising to see only a few regions supported, but Lex also only supports English, so the limited region choices make sense.
How Does Lex Work?
To get started, head over to the Lex console. A few sample bots are already set up for you to try out, but we'll go ahead and create a new custom bot so you can see how they're built.
It all starts with Intents. You can think of Intents as certain actions your bot is capable of, such as scheduling appointments, ordering items, etc. Each intent needs a few trigger phrases, called utterances, which start the conversation. Try to keep these fairly brief; for example, “book an appointment” works better than “I would like to book an appointment.”
Your bot can have multiple intents and multiple utterances associated with each intent. You should try to capture all of the different ways a user could state their intent.
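Everything you do in the console can also be done through the Lex Model Building API. As a rough sketch, assuming a hypothetical appointment-booking bot (the intent name and utterances here are placeholders), defining an intent with boto3 might look something like this:

```python
import boto3

lex_models = boto3.client("lex-models")

# Create (or update) an intent with a handful of sample utterances.
lex_models.put_intent(
    name="BookAppointment",
    sampleUtterances=[
        "Book an appointment",
        "Schedule an appointment",
        "I need to make an appointment",
    ],
    # For now, just return the collected values to the client instead of calling Lambda.
    fulfillmentActivity={"type": "ReturnIntent"},
)
```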
Once the bot starts an intent, it queries the user for additional data. Technically, you don’t need any additional data, and you can have your bot finish the conversation and perform its action right away.
The additional data comes in the form of Slots. You can think of these like arguments for a command—the bot must query the user for each argument before sending off its final action. The arguments are type sensitive, so if Lex asks a user how many items they would like to order, it won’t accept “green” as an answer.
AWS already has a lot of prebuilt types, most of which are identified by AWS Comprehend. If you’re asking a user for a date, use AMAZON.DATE, and if you’re asking for an address, use AMAZON.StreetAddress.
Each slot comes with its own prompt, which is shown or read to the user. For example, if you’re asking the user for the date of their appointment, you may write something along the lines of “Which day would you like to book your appointment for?”
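In the API, each slot is just an entry in the intent's slot list, with a type, a constraint, and its prompt. A sketch of what a Date slot like the one above might look like (the field values are illustrative):

```python
# One slot definition as it would appear in put_intent's "slots" list.
date_slot = {
    "name": "Date",
    "slotType": "AMAZON.DATE",     # built-in type, so Lex handles date parsing
    "slotConstraint": "Required",  # the intent can't finish without it
    "priority": 1,
    "valueElicitationPrompt": {
        "messages": [{
            "contentType": "PlainText",
            "content": "Which day would you like to book your appointment for?",
        }],
        "maxAttempts": 2,
    },
}
```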
You can also build your own slot types. For example, if you offer a few different kinds of appointments, you can add them in your own type. Lex expands your slot values to include similar responses that you may get from users in the real world. You can also limit your custom slot type to only exact words and synonyms, if you want it to be more strict. A good rule of thumb, though, is to include the possible values in the slot prompt so that the user knows their options. Otherwise, some people may get stuck.
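Custom slot types can be created through the API as well. A sketch assuming a hypothetical AppointmentType slot type, where the values and synonyms are just examples:

```python
import boto3

lex_models = boto3.client("lex-models")

# Define a custom slot type with a few allowed values and their synonyms.
lex_models.put_slot_type(
    name="AppointmentType",
    enumerationValues=[
        {"value": "cleaning", "synonyms": ["teeth cleaning", "clean"]},
        {"value": "checkup", "synonyms": ["check-up", "exam"]},
        {"value": "whitening", "synonyms": ["teeth whitening"]},
    ],
    # TOP_RESOLUTION only accepts the values and synonyms listed above;
    # ORIGINAL_VALUE treats them as training hints and accepts similar answers.
    valueSelectionStrategy="TOP_RESOLUTION",
)
```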
Additionally, you can integrate slots directly into the utterances. If a user says “I would like to book an appointment tomorrow,” you can cut out the extra step and consider that slot fulfilled. You can do this by surrounding the slot name with curly braces in the utterance definition:
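For example, the sample utterances for the appointment intent might include both forms (the slot name here matches the earlier examples):

```
Book an appointment
Book an appointment {Date}
I would like to book an appointment {Date}
```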
This is all the configuration your bot needs, but most users like to see a confirmation prompt before the action is taken, both for peace of mind and to ensure the bot hasn’t screwed something up. Lex supports this, under the “Confirmation prompt” settings. You can include slot variables in the prompt, which Lex fills with what the user said (to the best of its knowledge, at least).
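In the API, the confirmation prompt and its matching rejection message are part of the intent definition, and slot values are referenced with the same curly-brace syntax. A sketch, with the wording purely illustrative:

```python
# Passed to put_intent alongside the slots and sample utterances.
confirmation_prompt = {
    "messages": [{
        "contentType": "PlainText",
        "content": "Should I book your {AppointmentType} appointment for {Date}?",
    }],
    "maxAttempts": 2,
}

# Shown if the user answers "no" to the confirmation.
rejection_statement = {
    "messages": [{
        "contentType": "PlainText",
        "content": "Okay, I won't book anything.",
    }],
}
```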
From here, you can hit Build to test your bot in the integrated testing panel. It should respond to your utterance and ask you for each of the slots you’ve given it. It should respond well to changes in command structure, but if it doesn’t, you may want to add more utterances or expand your slot definitions.
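The test panel uses the same runtime API you'd call from your own application, so you can reproduce it from code once the bot is built. A minimal sketch with boto3 (the bot name and user ID are placeholders):

```python
import boto3

lex_runtime = boto3.client("lex-runtime")

# Send a text message to the bot and inspect its reply and the slots filled so far.
response = lex_runtime.post_text(
    botName="AppointmentBot",  # placeholder bot name
    botAlias="$LATEST",        # test against the latest build
    userId="test-user-1",      # any stable ID for this conversation
    inputText="I would like to book an appointment",
)

print(response["message"])      # e.g. the prompt for the next slot
print(response["dialogState"])  # ElicitSlot, ConfirmIntent, Fulfilled, ...
print(response["slots"])        # slot values gathered so far
```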
By default, Lex runs in debug mode and simply returns the slot values once it’s done. You can change this to call a Lambda function, passing the slot values as parameters to the function. It’s up to you to decide what to do from here. You can also manually take more control over Lex using a Lambda validation hook; this enables you to run a Lambda function whenever a user responds, to either validate and accept the input or prompt the user again.
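A fulfillment or validation function receives the intent name, the slot values, and an invocationSource field telling you which hook fired. A sketch of a handler in the classic Lex event format, assuming the hypothetical appointment slots from earlier:

```python
def lambda_handler(event, context):
    slots = event["currentIntent"]["slots"]

    # DialogCodeHook fires on each user response; FulfillmentCodeHook fires at the end.
    if event["invocationSource"] == "DialogCodeHook":
        # Hand control back to Lex and let it keep prompting for any missing slots.
        return {"dialogAction": {"type": "Delegate", "slots": slots}}

    # Fulfillment: this is where you would actually book the appointment.
    message = f"Your {slots.get('AppointmentType')} appointment is booked for {slots.get('Date')}."
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
        }
    }
```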
Once your bot is done, you can give a response message letting the user know how the Lambda function handled their input, or simply thanking them.