Frequently Asked Questions

How does Impacter work?

Impacter uses state-of-the-art Natural Language Processing to compare your text to our existing corpus of successful and unsuccessful proposals. This helps to spot problems or shortcomings in the text. Part of Impacter's feedback is given at the proposal level. The rest is provided inside the document, where Impacter poses questions about specific parts of the text to help researchers be more concise, clear and specific.
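
For the curious: the sketch below is purely illustrative and is not Impacter's actual code. It shows the general idea behind this kind of comparison, matching a draft sentence against reference passages using sentence embeddings and cosine similarity (the library, model, and example texts are all assumptions chosen for the demo).

```python
# Illustrative only: a generic semantic-similarity comparison, not
# Impacter's proprietary pipeline. Model and texts are placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# A tiny stand-in "corpus" of reference passages.
corpus = [
    "We will disseminate results through workshops with industry partners.",
    "The project delivers a reusable open-source toolkit for practitioners.",
]
draft = "Results will be shared with relevant stakeholders."

# Embed the draft and the corpus, then rank passages by similarity.
corpus_emb = model.encode(corpus, convert_to_tensor=True)
draft_emb = model.encode(draft, convert_to_tensor=True)
scores = util.cos_sim(draft_emb, corpus_emb)[0]

for passage, score in sorted(zip(corpus, scores), key=lambda p: -float(p[1])):
    print(f"{float(score):.2f}  {passage}")
```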

When do I use Impacter?

You can use Impacter at two stages during the development of a research grant proposal. First, when you have an initial idea or abstract, use it to find related projects that were funded in the past. Second, when you have a full draft of the proposal, check it for common pitfalls.

How do I use Impacter?

You use Impacter by selecting the call that you are writing for and then uploading a .docx draft of your proposal. Impacter automatically scans and evaluates the proposal, and feedback is provided to you within two minutes.

Does Impacter improve my chances of funding?

Over the past years, we have found that Impacter users have a success rate 3 to 4 percentage points higher than non-users. While this correlation does not prove causation, we believe that Impacter's checks contribute significantly to a better plan for societal impact and help prevent common pitfalls in knowledge utilization.

Does Impacter guarantee the success of proposals when everything is green?

Unfortunately, no. The algorithms Impacter uses can be gamed. If all indicators are green, it means that everything Impacter measures appears to be present in your proposal. A check in Impacter is a good first step before consulting colleagues for feedback, for example your research support office or a peer. Consulting Impacter first ensures that some of the ambiguity is already addressed before your colleagues contribute.

Does Impacter lead to bland, uniform or one-size-fits-all proposals?

It seems intuitive that using software would lead to proposals that all look like each other. However, the algorithms in Impacter do the opposite: because they are trained on historical data, they detect common and overused phrases and concepts. The feedback then aims to help you make those generic statements specific to your own research proposal.

How are the priorities determined?

Impacter works with Natural Language Processing, AI and keyword recognition. It uses a variety of indicators, which are compared to baselines. When you are close to the baseline for successful proposals, the indicators turn from orange through yellow to green. These baselines are often call-specific: for smaller grants, the expected outputs differ from those of large multi-partner programs (a hypothetical sketch of this idea follows below). You can read more about some specific analyses in our blogs.
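
As an illustration of the idea (not Impacter's actual implementation), mapping an indicator score to a traffic-light color against a call-specific baseline could look like the sketch below. The thresholds and numbers are invented for the example.

```python
# Hypothetical sketch: map an indicator score to a traffic-light color
# relative to a call-specific baseline. Thresholds are invented for
# illustration and do not reflect Impacter's real logic.
def indicator_color(score: float, baseline: float) -> str:
    """Return orange/yellow/green based on distance from the baseline."""
    ratio = score / baseline if baseline else 0.0
    if ratio >= 0.9:   # at or near the successful-proposal baseline
        return "green"
    if ratio >= 0.6:   # approaching the baseline
        return "yellow"
    return "orange"    # still far from the baseline

# The same score can rate differently under different call baselines:
print(indicator_color(score=4.8, baseline=5))   # green  (smaller grant)
print(indicator_color(score=4.8, baseline=10))  # orange (large multi-partner program)
```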

Does Impacter use generative AI?

In the newest version of Impacter, you can choose to use generative AI. Our Large Language Model gives you suggestions on how to resolve some of the comments in Impacter. We take special care to guarantee confidentiality. If you want to read more about this, please check the recent blog post on our guarantees surrounding the usage of the LLM. As with any suggestion from a Large Language Model, be very critical of its output, as it can be creative with facts, too!

In line with the European Commission's (EC) recent guidelines on the responsible use of generative AI in research, researchers must be transparent when they use generative AI substantially. According to the EC, using generative AI as a basic author support tool is not substantial use. So if you use Impacter to get feedback on your research proposal, this is not considered substantial use, and you do not need to report it.

What happens to my proposal when I upload it?

Impacter respects and understands the confidential nature of the ideas present in your research proposal. This means that your proposal will under no circumstance leave our servers, and that strict security measures are in place to safeguard the confidentiality of your proposal.

We do use the proposals we have to improve the analyses in Impacter. Comparisons of successful and unsuccessful proposals tell us a lot about the characteristics of winning grant proposals. A nice example of an analysis that we were able to improve in this way is our readability analysis.

More information about our privacy policy can be found via this link.

Do I have to pay?

Impacter is paid for by your institution, so it is free of charge for individual researchers, for as many proposals as you like. If your institution is not yet a customer of Impacter, send us a quick e-mail!