Research

Staying up-to-date with the latest news and emerging research is a challenge for any researcher. It requires dedicating time to scanning multiple sources to identify articles and papers of interest, and still more time to read and understand that content thoroughly.

Large language models (LLMs) are increasingly competent at forming nuanced understandings of long pieces of text, even on specialist subjects, and they can do it in seconds. This can be very useful for filtering, categorising, clustering and summarising content.


The Cutting Edge

This is a prototype of a tool developed with Hannah B to investigate whether LLMs can help BIT's teams stay up-to-date with the latest news and research.

The tool pulls in previews of academic papers from 20+ journals and categorises them according to 20+ policy areas, spanning BIT's four clusters. It could be extended to provide summaries of individual papers, or reader digests for a group of papers, e.g. "Summarise all research in social care from the last month."
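To give a flavour of how the categorisation step could work, here is a minimal sketch in Python, assuming the OpenAI chat API. The model name and the list of policy areas are illustrative placeholders, not the ones the prototype actually uses; the real prompt, model and categories are in the repository linked below.

    # Sketch of LLM-based categorisation of a paper preview (illustrative only).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Placeholder policy areas; the prototype uses 20+ areas spanning BIT's four clusters.
    POLICY_AREAS = ["health", "education", "social care", "energy & sustainability"]

    def categorise(title: str, abstract: str) -> str:
        """Ask the model to pick the single best-fitting policy area for a paper preview."""
        prompt = (
            "Assign the following paper to exactly one of these policy areas: "
            + ", ".join(POLICY_AREAS)
            + f"\n\nTitle: {title}\nAbstract: {abstract}\n\n"
            + "Respond with the policy area name only."
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # deterministic labelling
        )
        return response.choices[0].message.content.strip()

The same pattern extends naturally to the summarisation idea above: swap the prompt for a summarisation instruction and pass in a batch of previews.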

This is a rough prototype, so there is much that can be improved, including the quality of the categorisation. Please help us define the project's direction by completing the very short feedback form, linked from the webpage.

For a more in-depth technical explanation, and the code, see https://github.com/dan-kwiat/bit-cutting-edge


Qualitative research

Many teams across the group have a lot of free-text data in storage, e.g. from past surveys. Nobody has had time to read or analyse it properly, but LLMs may be able to provide some quick insights, e.g. by identifying common themes in the text. I wrote this script, which clusters survey responses by theme, as a demonstration for Chiara and Jordan ahead of an online experiment they're planning for COP28.
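As an illustration of the approach, here is a minimal sketch assuming OpenAI embeddings and k-means clustering from scikit-learn. The actual script may use a different embedding model or clustering method, and the function name and parameters here are hypothetical.

    # Sketch of theme clustering for free-text survey responses (illustrative only).
    import numpy as np
    from openai import OpenAI
    from sklearn.cluster import KMeans

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def cluster_responses(responses: list[str], n_themes: int = 5) -> dict[int, list[str]]:
        """Group free-text survey responses into n_themes clusters."""
        # Embed each response as a dense vector
        result = client.embeddings.create(model="text-embedding-3-small", input=responses)
        vectors = np.array([item.embedding for item in result.data])

        # Cluster the vectors; each cluster approximates a recurring theme
        labels = KMeans(n_clusters=n_themes, n_init=10, random_state=0).fit_predict(vectors)

        # Group the original responses by cluster label
        themes: dict[int, list[str]] = {}
        for label, text in zip(labels, responses):
            themes.setdefault(int(label), []).append(text)
        return themes

Each cluster of responses can then be passed back to an LLM to generate a short name or summary for the theme.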

For a more in-depth technical explanation, and the code, see https://github.com/dan-kwiat/bit-qual-research