Building a Facebook Messenger Bot in 42 days: How We Did It and Why You Can Too!

Last month Skyscanner launched the first travel search engine Bot for Facebook Messenger. The Bot allows users to search for travel via Messenger's chat service, and can interpret natural conversation to kick off a flight search or even serve up destination inspiration if the user doesn't know where they want to go. Facebook even featured us as their first 'expert' case study on their Messenger Blog!

As the bot was built using our own Skyscanner for Business APIs, other businesses, such as Meekan, have also been able to build and launch their own Messenger bots using those same APIs, and we hope many other companies will be able to achieve the same.

The bot was built and launched in a super speedy time-frame of 42 days after Facebook announced the availability of the new service at their F8 Developer Conference.

To give a little more perspective on how we did it, and how your team could too, we've broken the process down into the stages of the build below.

 

 

Thinking in bots

Historically, Skyscanner did not have a dedicated bots team. We had previously built a few proofs-of-concept and internal bots, and had even integrated the Skyscanner flights API into Amazon’s cloud-based voice service, Alexa.

As we knew that Facebook would be announcing Bots for Messenger at F8 2016, there was a prime opportunity to pull together a virtual squad from across Skyscanner, bringing together people with the right skills to build a bot quickly.

Formed around the kernel of our Traveller Insights Squads in Sofia, who had built a number of Slack bots before, we created a team of ten people from five different tribes and four countries around the world, all given the freedom to drop what they were doing and concentrate on building a Facebook bot.

F8: Kicking the virtual squad into gear

With a team and a rough plan in place we were ready for Facebook’s F8 conference. As the announcement was made that the Messenger Bot APIs were available, we had an open line between those of us who were at F8 and the team sitting excitedly in the offices.

In the days before F8 we’d set up a rough architecture, based on our previous experiences. To move fast, it was imperative to avoid re-inventing the wheel, so we used our own APIs as much as possible, building a simple but powerful conversation service around two concepts:

  • We’d build this as a platform for conversations rather than a one-off integration
  • The first version should be a proper Earliest Lovable Product (see here and here)

User-flow wise, we had a sketch for the simplest idea we could think of – allowing people to find flight prices if they already knew where and when they wanted to fly. This meant collecting information until we had enough context on what the user wanted to do, allowing us to go off and use our travel APIs to find an answer.

On the service side, the Sofia team had been preparing the ground, stubbing out a set of microservices on Amazon’s AWS that would be able to deal with this flow: a stub of a service in which to implement Messenger’s APIs once we knew what they would look like, a general Conversation Service to collect the user’s context and provide questions and answers, and a set of clients that could speak to our own Skyscanner APIs.
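To make the split concrete, here is a minimal sketch of those three pieces in Python; the names and shapes below are illustrative assumptions for this post, not our actual service interfaces.

```python
# Illustrative stubs only: names and fields are assumptions for this sketch,
# not the real Skyscanner services.
from dataclasses import dataclass
from typing import List, Optional, Protocol


@dataclass
class UserContext:
    """Everything we know so far about the traveller's request."""
    user_id: str
    origin: Optional[str] = None
    destination: Optional[str] = None
    outbound_date: Optional[str] = None
    inbound_date: Optional[str] = None


class MessengerGateway(Protocol):
    """The service that will implement Messenger's send/receive APIs."""
    def send_text(self, user_id: str, text: str) -> None: ...


class ConversationService(Protocol):
    """Collects the user's context and decides the next question or answer."""
    def next_message(self, context: UserContext, user_text: str) -> str: ...


class FlightSearchClient(Protocol):
    """A thin client over Skyscanner's own travel APIs."""
    def live_prices(self, context: UserContext) -> List[dict]: ...
```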

The initial sketching of the architecture

 

To a working prototype in days

The preparations we’d made for the services, and the conversations we had at F8 with the Messenger team, allowed us to make a flying start on building the first prototype of the bot. The whole team dropped all other projects and focused on this one goal – building a useful, well-functioning Facebook Messenger bot as soon as we could.

As the documentation for the Messenger API was now available, we could set up a first version of a bot, linked to a test page. This was built within a few hours of the announcement, filling in some of the unknowns around interfaces and possibilities, but also generating a pile of other questions – how could we best leverage the tools in the API to build the user flow we had envisaged?
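For a sense of what that first version involved, here is a simplified sketch of a webhook for a bot linked to a test page, assuming Flask; the Messenger Platform confirms the endpoint with a challenge on GET and delivers message events on POST. The verify token and handler below are placeholders, not our production code.

```python
# A minimal Messenger webhook sketch (Flask); VERIFY_TOKEN and handle_message
# are placeholders for our own configuration and conversation logic.
from flask import Flask, request

app = Flask(__name__)
VERIFY_TOKEN = "my-test-page-token"  # set when subscribing the webhook to the page


@app.route("/webhook", methods=["GET"])
def verify():
    # Messenger calls this once to confirm we own the endpoint.
    if request.args.get("hub.verify_token") == VERIFY_TOKEN:
        return request.args.get("hub.challenge", "")
    return "verification failed", 403


@app.route("/webhook", methods=["POST"])
def receive():
    # Each POST can batch several messaging events for the page.
    payload = request.get_json()
    for entry in payload.get("entry", []):
        for event in entry.get("messaging", []):
            if "message" in event:
                handle_message(event["sender"]["id"],
                               event["message"].get("text", ""))
    return "ok", 200


def handle_message(user_id: str, text: str) -> None:
    # Hand the text off to the conversation service (stubbed out here).
    print(f"{user_id} said: {text}")


if __name__ == "__main__":
    app.run(port=5000)
```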

 

The bot’s first word

One of the more interesting issues in building a bot (apart from parsing language, more on which below) is keeping ‘state’. How do you know where you are in the conversation with the user, and how does the bot know what to ask or say next?

A sort of state machine is needed for that, taking in context gleaned from the user and indicating what state in the conversation to move to next. There are quite a few external options available for this (Facebook’s own Wit.AI, for example).

Happily, we already had something like this available internally – we use our Message Decision Engine in the app and on the web to determine the next best message to show our users for onboarding and other purposes. Because we had built it as a generic decision engine/state machine, it gave us an ideal platform to use for our bot. We can represent the various state transitions as rules, and therefore only have to keep track of user context – what state they were in (for example, we know the destination but not where they want to leave from), and what information we now have from Natural Language Processing (for example, they told us where they would like to fly from). Working through the rules gives us the user’s place in the flow, and hence tells us what to ask for next.
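As a toy illustration of that idea (the real Message Decision Engine is a generic rules engine, not a hard-coded chain like this), working out the next step boils down to checking what context is still missing:

```python
# A deliberately simplified stand-in for the rules: given whatever context
# we've collected so far, decide the next state in the conversation.
def next_step(ctx: dict) -> str:
    if not ctx.get("destination"):
        return "ASK_DESTINATION"
    if not ctx.get("origin"):
        return "ASK_ORIGIN"
    if not ctx.get("outbound_date"):
        return "ASK_OUTBOUND_DATE"
    if not ctx.get("inbound_date") and not ctx.get("one_way"):
        return "ASK_RETURN_OR_ONE_WAY"
    return "RUN_FLIGHT_SEARCH"


PROMPTS = {
    "ASK_DESTINATION": "Where would you like to fly to?",
    "ASK_ORIGIN": "Where are you flying from?",
    "ASK_OUTBOUND_DATE": "When do you want to fly out?",
    "ASK_RETURN_OR_ONE_WAY": "When do you want to fly back, or is it one-way?",
}

# e.g. the NLP layer has already filled in the destination:
print(next_step({"destination": "Barcelona"}))  # -> ASK_ORIGIN
```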

Once enough context is collected on the user’s question, we could hand off to the part of the service that would ask the actual question. This can be done asynchronously – we do not hang around waiting but call back the Messenger API with our responses. This allows us to kick off a live search for prices, which, depending on caching and popularity of the search, could take a few seconds. The asynchronous nature of these communications is reflected in the final architecture, and allowed us to handily decouple the Messenger API implementation and the underlying systems.
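A rough sketch of that hand-off, assuming the Messenger Send API and with placeholder functions standing in for our real pricing and formatting services, might look like this:

```python
# Asynchronous reply sketch: acknowledge the webhook immediately, run the
# (possibly slow) live price search in the background, then call the
# Messenger Send API. PAGE_ACCESS_TOKEN, search_live_prices and
# format_prices are placeholders, not our real services.
import threading
import requests

SEND_API = "https://graph.facebook.com/v2.6/me/messages"  # API version current at the time
PAGE_ACCESS_TOKEN = "..."


def send_text(user_id: str, text: str) -> None:
    requests.post(
        SEND_API,
        params={"access_token": PAGE_ACCESS_TOKEN},
        json={"recipient": {"id": user_id}, "message": {"text": text}},
    )


def search_live_prices(ctx: dict) -> list:
    # Placeholder for the call into Skyscanner's live pricing APIs.
    return []


def format_prices(prices: list) -> str:
    # Placeholder formatting of the cheapest options into a chat message.
    return "Here's what I found..." if prices else "I couldn't find any flights for those dates."


def answer_async(ctx: dict) -> None:
    def worker():
        prices = search_live_prices(ctx)  # may take a few seconds
        send_text(ctx["user_id"], format_prices(prices))

    # Fire and forget: the webhook handler has already returned by now.
    threading.Thread(target=worker, daemon=True).start()
```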

Conversational flows and NLP: leveraging Wit.AI

Collecting that context is an interesting exercise. The simplest case we could think of (and hence started with) was a question/answer flow – where are you going? Where are you leaving from? When do you want to fly?

We could run the answers through our own autosuggest endpoints or through a string-to-date parser and get a result that gives us the information we need. It’s not a very nice user experience, though. Volunteering information the bot hasn’t asked for yet – a fairly natural thing to do – will lead to confusion. I really just want to be able to tell the bot “I’d like to fly to Barcelona from London next Friday please”. The only things it should ask me are to confirm what I just gave it and when I want to fly back (or whether it’s a one-way trip).

When we saw test users interact with the bot it was clear that we needed a better way to deal with this way of speaking to a bot. It just wouldn’t do to type a whole sentence at a bot and just get ‘so you want to fly to Barcelona’ back, followed by a request for the information that you just gave!

To solve that problem, we needed to be able to parse natural language. There are quite a few companies out there that now offer this as a service – Facebook’s Wit.AI, Microsoft’s LUIS and many others. We chose Wit.AI as the service to use for the Messenger bot, partially because of the Facebook connection, but mostly because it was easy to slot into our architecture and we could train the matchers well to recognize travel intent.

NLP is an interesting topic and worthy of discussion all by itself – for our fairly fixed domain and relatively simple initial use-case, the tokens to be recognized (and the training needed to do that) were reasonably straightforward. The big challenge in this area will come when we want to allow people to make more complex choices, like filtering flights on class – “what’s the price for that one if I fly business class?” might parse pretty well for us humans, but will need quite a bit of training and a good contextual memory to figure out what ‘that one’ means.
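To give a flavor of how the hand-off to Wit.AI fits in, here is a simplified sketch against its /message endpoint; the entity names that come back are whatever was defined when training the app, so the merging step below is an assumption rather than a fixed Wit.AI contract.

```python
# A rough NLP hand-off sketch using Wit.AI's /message endpoint; WIT_TOKEN is
# a placeholder for the trained app's server access token.
import requests

WIT_TOKEN = "..."


def extract_travel_entities(text: str) -> dict:
    resp = requests.get(
        "https://api.wit.ai/message",
        params={"q": text},
        headers={"Authorization": f"Bearer {WIT_TOKEN}"},
    )
    resp.raise_for_status()
    entities = resp.json().get("entities", {})
    # "I'd like to fly to Barcelona from London next Friday" should come back
    # with destination, origin and date entities we can merge into the
    # user's context before running the rules again.
    return {name: values[0].get("value")
            for name, values in entities.items() if values}
```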

 

A complex query

Iterating and testing – the experience needs to be right

We worked closely with the partner engineers at Facebook to make sure that all that technical, flow and NLP work described above was presented to users in as natural a way as possible.

There are quite a few questions to answer before you have a bot that can naturally chat with people. What tone of voice do you use? Too friendly, and people will just chat at the bot rather than focus on asking questions about travel. Too business-like, and the experience suffers too; as mentioned previously, playing twenty questions makes people unhappy.

To see what would work, we built on Facebook’s experiences with building bots and also ran our own tests with volunteers within Skyscanner. A good chunk of the company played with the bot just before release, either in controlled sessions or simply by coming back with comments. That instant feedback allowed us to react almost in real time to issues that people found and improve the bot rapidly.

Once we were happy with the experience (exactly 42 days after F8), we formally submitted the bot and – thanks to our close work with the Facebook team and the effort we’d put into getting the experience right – it was accepted and live in production on Messenger within a matter of hours. Our awesome PR and Growth teams had lined up a nice bit of publicity around it, and we saw a good number of people come in to use the bot on launch.

 

The final result

 

Where we go from here is an interesting subject. There are many ways in which we could improve and build on the bot, and as with all these things, picking the right things to focus time and energy on is crucial.

Since our architecture is based on loosely coupled microservices, we could design the system from the ground up to be generic rather than tied directly to the Messenger integration. In fact, we only need to know two things to go from a Messenger-specific implementation to a more generic one:

  • The user ID
  • The originating integration

Combined with the asynchronous messaging implementation mentioned above, this allows us to extend and re-use our systems for other bot implementations where we think they can help answer people’s travel questions.
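As a simplified illustration of that point, the envelope the rest of the system sees only needs those two fields plus the message itself; the delivery functions here are placeholders for real gateways.

```python
# A channel-agnostic message envelope: downstream services only ever see the
# user ID and the originating integration, so replies can be routed back the
# same way. The delivery functions are placeholders, not real gateways.
from dataclasses import dataclass


@dataclass
class InboundMessage:
    user_id: str      # the user's ID on the originating platform
    integration: str  # e.g. "messenger", plus whatever we add later
    text: str


DELIVERY = {
    # The Messenger entry would wrap the Send API call sketched earlier.
    "messenger": lambda user_id, text: print(f"[messenger -> {user_id}] {text}"),
}


def route_reply(msg: InboundMessage, reply_text: str) -> None:
    # Dispatch purely on the originating integration.
    DELIVERY[msg.integration](msg.user_id, reply_text)
```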

Another interesting question we’re diving into is around success. As bots are a new area for us, what does success actually look like? To answer that, we’re exploring telemetry throughout the flows, looking at cumulative behavior, but also at individual people’s experiences. Where do people get frustrated and give up? This touches on other important areas as well – how do you A/B test effectively in this new bots world?

There is a long list of questions that we’d like to be able to answer – and that we can answer, given the travel knowledge we have available within Skyscanner. “I’d like to fly somewhere hot this summer” should give you a nice set of choices that fit with you, your local area (when is the normal summer holiday period?) and previous searches you have made with us. The challenge will be to capture those questions in user flows like we have talked about above for the simpler case. It’s all about giving simple answers to complex questions!

Want to build your own innovative bot and think you can produce something truly unique using our travel APIs too? Request access to get started today!