On Friday, November 10th, EARMA hosted the first AI Day in Brussels. The location was perfect for us: close enough to Utrecht to go back and forth in one day, with the upside that you can bump into familiar faces along the way. That indeed happened when we arrived in Brussels: one of the participants in our very own test with Large Language Models in Impacter had taken the same train from Utrecht.
The day itself had a full program with plenty of room to network as well. There are a few very important lessons to take away from the AI Day:
1) AI (and specifically Large Language Models) is here to stay
Sounds straightforward, but that isn’t really the case. For years, most research officers seemed very hesitant to adopt any kind of tooling that used AI, and even now adoption happens in small steps. The best illustration of this feeling was the large round of applause the AI professor on stage received when she told the audience during the panel conversation that AI is not good enough to write the grant for you. According to her, the best use for it is to let ChatGPT write one of the partner paragraphs, send it to the partner, and tell them to either write their own piece or have the ChatGPT text included. In her experience, these texts are usually so bad that people promptly reply and start writing their own.
2) There are still some hesitations
And this is not necessarily a bad thing. Everyone is aware that most researchers are using ChatGPT or other tools to write (parts of) their proposals. But how do you properly guide this process as a research manager? Should you forbid these tools, write guidelines to make the best use of them, or use them yourself? And what sort of biases are you then creating in your text? There were some very interesting comments about AI-generated text for, for instance, the gender dimension of a proposal. If you have no clue how to address this, but ChatGPT wrote something that sounds good, are you then really taking care of gender biases in your research? Probably not! And this is true for many parts of a proposal, which makes it problematic: helpful tools are welcome, but as a researcher you should always take ownership of your research proposal.
3) Taking ownership
The insight from the European Commission was basically that you can use any tool you want for writing your proposal, as long as you take ownership of it. So if you let an AI write your proposal, the Commission doesn’t really care, but they do expect you to be able to deliver on your promises.
4) Issues with IP and copyright
Most people in the room shared some sort of concern regarding IP and copyright. When using ChatGPT, everything you enter is stored on US-based servers and can be used freely by OpenAI and Microsoft. Panel members and speakers shared their struggles in deciding what (not) to include when using these online tools for proposal writing. You can of course buy private access from OpenAI to use ChatGPT, but that is really costly and can only be purchased through a tender procedure.
And this final point is exactly something we are working on with Impacter! Hosting an open-source Large Language Model on our own Impacter server allows us to make it part of the same legal agreement we have been signing with our customers for years. We do not feed the model with proposal texts, so we can, in line with the GDPR and upon your request, delete everything related to you and your proposal. This is important to be aware of, because it is impossible with ChatGPT or other online models, where everything that serves as input is used to train the model and therefore becomes an irrevocable part of it. We are happy to talk if you are interested in learning more! There is still a lot to discover in AI, but one thing is certain: it’s here to stay!
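For those curious what keeping the model on our own server looks like in practice, here is a minimal sketch of the general pattern: the application sends text to a model endpoint that lives on the same infrastructure, so proposal text never reaches an external provider. The URL, payload format, and function name below are illustrative assumptions for the sake of the example, not Impacter’s actual implementation.

```python
import requests

# Hypothetical endpoint of a self-hosted open-source LLM on our own server.
# Because the request never leaves this infrastructure, proposal text stays
# covered by the existing customer agreement and can be deleted on request.
LOCAL_LLM_URL = "https://impacter-llm.internal/v1/completions"  # placeholder URL


def ask_local_model(prompt: str, max_tokens: int = 300) -> str:
    """Send a prompt to the locally hosted model and return its text response."""
    response = requests.post(
        LOCAL_LLM_URL,
        json={"prompt": prompt, "max_tokens": max_tokens},
        timeout=60,
    )
    response.raise_for_status()
    # Assumes an OpenAI-style completion payload; adjust to the server's actual schema.
    return response.json()["choices"][0]["text"]


if __name__ == "__main__":
    print(ask_local_model("Summarise the impact section of this proposal: ..."))
```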