Ever since the start of the year, I’ve been wanting to dive into the world of large language models, or LLMs. I’ve always learned best with a project-based approach, by actually building something. With this project, I hope to learn how to create an application on top of an LLM and discover what limitations I might encounter. While brainstorming, I came up with two ideas that I believe will require knowledge of how to prompt an LLM and how to give it wider context about the user.
The first idea I had was a personal tutor that wouldn’t just give you answers but would prompt you in the direction of the correct answer, assessing your knowledge along the way. I like this idea because it ties in well with a project I worked on back in my university days for my dissertation, which involved developing an adaptive learning system. The second idea was to make use of Open Banking data APIs. A user could connect their personal transaction history and then ask the LLM for insights. Questions could include: ‘How much do I spend each month on recurring payments?’ or ‘Which month do I spend the most money?’
With the limited knowledge I currently have about LLMs and providing context, I’m unsure whether it’s even possible to feed such large amounts of user-specific data for querying. I’m aware that there are context length limits, so simply adding the transactional history to the prompt would most likely not be feasible. For this reason, I’ve chosen the second option. However, I would like to revisit the learning assistant idea later, as I find the challenge of teaching an LLM to distinguish between suggestive guidance and straightforward answers to be a very interesting problem to solve.
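To get a feel for why stuffing the raw history into the prompt probably won’t work, here’s a rough back-of-envelope sketch. It assumes roughly 4 characters per token (a common rule of thumb, not an exact figure), an illustrative 4,096-token context window, and a hypothetical JSON shape for a transaction record; none of these numbers come from a real Open Banking API.

```python
# Back-of-envelope check: can a year of transactions fit in one prompt?
# ASSUMPTIONS: ~4 chars per token (rule of thumb) and a 4,096-token
# context window; both are illustrative, not tied to a specific model.

CHARS_PER_TOKEN = 4
CONTEXT_WINDOW_TOKENS = 4096

def estimated_tokens(text: str) -> int:
    """Very rough token estimate using the chars-per-token heuristic."""
    return len(text) // CHARS_PER_TOKEN

# A single transaction, serialised as JSON, might look something like
# this (a hypothetical shape, not a real Open Banking response):
sample_transaction = (
    '{"date": "2023-01-14", "merchant": "Coffee Shop", '
    '"amount": -3.50, "category": "eating_out"}'
)

# Suppose a user makes ~1,500 transactions a year:
yearly_history = sample_transaction * 1500
tokens_needed = estimated_tokens(yearly_history)

print(f"Estimated tokens for a year of history: {tokens_needed:,}")
print(f"Fits in a {CONTEXT_WINDOW_TOKENS:,}-token window? "
      f"{tokens_needed <= CONTEXT_WINDOW_TOKENS}")
```

Even with these generous assumptions, a year of history lands well past the illustrative window, which is what pushes me towards approaches other than naive prompt-stuffing.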
Just to be clear, this project is not a 3-step guide to building an LLM application; it’s a series of real-time posts about the progress and development of the project.
Let’s give the project a codename for easy reference. How about ‘Senna’?
Ciaran Ashton