Public services
Many people are now familiar with generative AI chatbots like ChatGPT. These chatbots are increasingly powerful and, with careful prompting, can fulfil an enormous range of tasks. However, for some use cases prompting alone will not produce the desired responses: typically the writing style is inappropriate, or the content is factually inaccurate.
These issues make it a challenge to provide chatbots for public services, where it's particularly important that any information provided is accurate, trustworthy and referenced. However, it's often possible to overcome these issues by customising a chatbot for a specific use case, using relevant data on the subject of interest. Depending on the subject, and on the structure and size of the data, customisation is usually achieved in one of two ways:
- RAG (Retrieval-Augmented Generation) - at each interaction with the chatbot, relevant snippets of information are retrieved from a database before being inserted into the LLM prompt along with the user's query.
- Finetuning - a foundation model is trained on a curated dataset of questions and answers until it is able to answer questions in a similar fashion (a rough data-preparation sketch follows below).
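To illustrate the second approach, here is a minimal sketch of how a curated Q&A dataset might be prepared and submitted for finetuning, assuming OpenAI's chat fine-tuning API and a hypothetical qa_pairs.json file of question/answer examples. Other providers use different formats.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical curated dataset: a list of {"question": ..., "answer": ...} dicts.
with open("qa_pairs.json") as f:
    qa_pairs = json.load(f)

# Convert each Q&A pair into the chat-format JSONL that OpenAI fine-tuning expects.
with open("training.jsonl", "w") as f:
    for pair in qa_pairs:
        example = {
            "messages": [
                {"role": "system", "content": "Answer concisely, in plain English."},
                {"role": "user", "content": pair["question"]},
                {"role": "assistant", "content": pair["answer"]},
            ]
        }
        f.write(json.dumps(example) + "\n")

# Upload the dataset and start a fine-tuning job on a foundation model.
training_file = client.files.create(file=open("training.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-3.5-turbo")
print(job.id)
```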
Below are two examples of custom chatbots I've built which provide advice on health and housing.
BabyBuddy
This is a prototype built in half a day as part of Nesta's Generative AI Exploration Day, with Anna & Clare from Best Beginnings (the charity behind the Baby Buddy app). The premise was to build a custom chatbot which answers questions using referenceable content from Baby Buddy, while mimicking their concise, easy-to-read writing style.
The chatbot has access to a subset of Baby Buddy's content on labour & childbirth. When a user asks a question, we first perform a semantic search using vector representations of the text known as embeddings. This fetches the 5 most similar Q&A examples from the Baby Buddy data. We then insert these examples into a prompt to GPT-3.5, along with the user's original question and some additional instructions, to produce the chatbot's response.
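The full implementation is in the repo linked below; as a rough sketch of the flow described above, assuming the Baby Buddy Q&A examples have already been embedded and stored alongside their vectors (the file name, embedding model and prompt wording here are illustrative, not taken from the project):

```python
import json
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    """Turn a piece of text into an embedding vector."""
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(resp.data[0].embedding)

# Hypothetical pre-computed store: each item has "question", "answer" and "embedding".
with open("baby_buddy_labour_qa.json") as f:
    corpus = json.load(f)
corpus_vectors = np.array([item["embedding"] for item in corpus])

def answer(user_question: str) -> str:
    # 1. Semantic search: cosine similarity between the question and each stored Q&A.
    q = embed(user_question)
    sims = corpus_vectors @ q / (np.linalg.norm(corpus_vectors, axis=1) * np.linalg.norm(q))
    top_5 = [corpus[i] for i in np.argsort(sims)[-5:][::-1]]

    # 2. Insert the retrieved examples into the prompt, along with instructions.
    context = "\n\n".join(f"Q: {ex['question']}\nA: {ex['answer']}" for ex in top_5)
    system = (
        "You answer questions about labour and childbirth using only the examples "
        "below, in a concise and easy-to-read style.\n\n" + context
    )

    # 3. Ask GPT-3.5 for the final response.
    chat = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user_question}],
    )
    return chat.choices[0].message.content
```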
For a more in-depth technical explanation, and code, see https://github.com/dan-kwiat/baby-buddy-chat
GovChat
- Do people want to engage with an AI chatbot in a gov.uk context e.g. when seeking advice on rental disputes?
- Can the chatbot reliably give appropriate answers?
- Do people trust the answers?
- Is it faster to use a chatbot than to scan the text on the advice page?
These are some of the questions being explored by an upcoming trial that BIT is running on the Predictiv platform with 5,000 participants.
Intended as a rough mock-up of what the study participants might see, this prototype is a clone of the official gov.uk page on private rent increases, with a chatbot added to the bottom of the page. The chatbot is aware of the official guidance from the webpage, so it can answer questions about rent increases in a way that's relevant to the UK rental market.
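In this kind of prototype the "awareness" can be as simple as placing the guidance text in the chatbot's system prompt. A rough sketch of that idea, assuming the page content has already been saved to a local file (the file name, model and prompt wording are illustrative, not taken from the project):

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical local copy of the gov.uk guidance text on private rent increases.
with open("private_rent_increases.txt") as f:
    guidance = f.read()

def govchat(user_question: str) -> str:
    chat = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a chatbot embedded on a gov.uk advice page. Answer questions "
                    "about rent increases in the UK using only the official guidance below. "
                    "If the guidance doesn't cover the question, say so.\n\n" + guidance
                ),
            },
            {"role": "user", "content": user_question},
        ],
    )
    return chat.choices[0].message.content

print(govchat("My landlord wants to raise my rent by 20% - can they do that?"))
```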
The team is now developing the prototype further, building several versions of the chatbot with different tones and capabilities.
For a more in-depth technical explanation, and code, see https://github.com/dan-kwiat/gov-chat