In this blog post, Max Visscher gives us an update on his work introducing Large Language Models into Impacter.
After two months of development and smooth progress, we are happy to announce that our first proof of concept is working. This version can resolve most of Impacter's feedback, as well as some types of feedback written by users.
The main goal of this project is to create an environment where researchers can use LLMs to their full potential without sharing data with companies like OpenAI or Microsoft. We have accomplished this by hosting our own LLM on private servers. This was made possible by Meta, who released their LLM, Llama 2, last July. This LLM is open source, allowing us to host it on our own hardware and keep all data in-house.
We generate suggestions with the LLM by extracting the text around the selected characters for each comment and using that text as context for the LLM. Because Impacter assigns a set category to each type of feedback, we can write a specific prompt for each category, which lets us produce consistently better suggestions per category. Feedback written by users has proven more challenging, because users do not share a single writing style or a common template. We are currently experimenting with different strategies to extract the relevant parts of such feedback so that the inputs become as uniform as possible.
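The context-extraction and per-category prompting described above could be sketched roughly as follows. Everything here is an illustrative assumption, not Impacter's actual implementation: the window size, the category names, the template wording, and the function name are all hypothetical.

```python
# Hypothetical sketch: extract text around a comment's selected characters
# and fill a category-specific prompt template. Names and values are
# illustrative assumptions, not Impacter's real code.

WINDOW = 200  # assumed number of surrounding characters to include per side

# One prompt template per feedback category (categories are invented here).
PROMPT_TEMPLATES = {
    "clarity": (
        "Rewrite the highlighted passage so it is clearer.\n"
        "Surrounding text:\n{context}\n"
        "Highlighted passage:\n{selection}\n"
    ),
    "structure": (
        "Suggest a better structure for the highlighted passage.\n"
        "Surrounding text:\n{context}\n"
        "Highlighted passage:\n{selection}\n"
    ),
}

def build_prompt(document: str, start: int, end: int, category: str) -> str:
    """Slice the selection and a window of surrounding text out of the
    document, then fill in the template for the feedback category."""
    selection = document[start:end]
    context = document[max(0, start - WINDOW):min(len(document), end + WINDOW)]
    return PROMPT_TEMPLATES[category].format(context=context, selection=selection)
```

The resulting string would then be sent to the locally hosted model; because each category has its own template, the model always receives instructions tailored to that type of feedback.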
This development has largely depended on empirical research, so we will continue finding and developing new solutions and strategies. Once we are confident that the application meets the quality standards required for writing grant proposals, we will release a beta version in which you can give feedback on the suggestions. That feedback will be crucial for the final stages of development and will guide us toward the best parameter values and prompts for generating suggestions.