06-12-2024

The use of artificial intelligence (AI) and AI-powered tools has grown steadily in recent years. Many platforms are currently testing AI-powered tools for summarising contributions to online consultations or debates, moderating content, providing translations, or making recommendations.

In January 2024, in view of the new EU AI Act, the European Commission also decided to start promoting the internal development and use of “legal, safe and trustworthy artificial intelligence systems”.

In this context, the European Center for Not-for-Profit Law (ECNL) organised a webinar to provide information on relevant policy developments and to explore trends and examples of the use of AI and AI-powered tools to improve public participation in policy-making.

What regulations apply to public participation platforms in terms of AI?

The European Union currently has two sets of regulations governing the use of AI: the AI Act and the Digital Services Act (DSA).

The AI Act aims to foster the responsible development of AI in the EU by addressing potential risks to citizens’ health, safety and fundamental rights. It defines four categories of risk: 

  • Minimal risk: most AI systems, such as spam filters and AI-enabled video games, are subject to no obligations under the AI Act.
  • Specific transparency risk: systems like chatbots must clearly inform users that they are interacting with a machine, and AI-generated content should be properly labelled.
  • High risk: AI-based medical software or AI systems used for recruitment are considered high-risk and must comply with strict requirements, including risk mitigation systems, high-quality data sets, clear information for users, human supervision, etc.
  • Unacceptable risk: among other things, AI systems that enable ‘social scoring’ by governments or companies are seen as a clear threat to people’s fundamental rights and are banned. 

The AI Act, in its current version, does not apply to public participation platforms as most of them are classified as ‘minimal risk’. However, platforms can implement these regulations on a voluntary basis to ensure greater transparency for their users.

On the other hand, the Digital Services Act (DSA) regulates online intermediaries and platforms such as marketplaces, social networks, content-sharing platforms, app stores, and online travel and accommodation platforms. Its main objective is to prevent illegal online activities and the spread of disinformation in order to guarantee the safety of users and protect their fundamental rights.

For public participation platforms, the DSA comes into play in two main situations:

  • Content moderation: platforms that use AI to moderate the content that users post must be transparent about the content they deem inappropriate, clearly explaining the reasons and the moderation process in their terms and conditions of use.
  • Recommendation systems: in its terms and conditions, the platform must explain why and how information is recommended and give users the opportunity to change their preferences in order to ensure greater transparency and control.

It should be noted that, in this case, the DSA applies to very large online platforms, and no participation platform has yet been designated as such. The DSA therefore applies only in a limited way and in specific scenarios, although this does not rule out changes in the future.

How is AI used on participation platforms? 

As part of the event, ECNL invited three organisations to share their experiences of using AI and AI-powered tools on their participation platforms. Discover the key takeaways below: 

The Citizens’ Foundation’s mission is to connect governments and citizens by creating open engagement platforms and advising on how best to plan and execute citizen engagement projects. To this end, it has been using AI and AI-powered tools in its various projects for many years.

Their most recent project is called ‘Policy Synth’, which uses their top-rated citizen engagement solutions, over 30 types of GPT-4 agents and advanced genetic algorithms to bring together policy makers, citizens and AI in a collective discourse. This collaborative interaction aims to speed up and improve decision-making processes, paving the way for more innovative and effective policy solutions.

Decidim is a digital platform for citizen participation; it currently uses AI mainly for content moderation and translation. AI-powered tools are configured to detect patterns such as spam and offensive language and automatically flag content for verification by human moderators. In this specific case, AI is limited because it requires human supervision. As for the translation service, although very beneficial in a multilingual environment, it is sometimes limited because it does not take into account certain specific features of the language or cultural differences.
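The flag-for-human-review workflow described above can be sketched in a few lines of Python. This is a minimal illustration only: the pattern lists and function names are hypothetical, and a real platform such as Decidim would rely on trained classifiers rather than simple keyword matching. The key design point is that the tool never deletes content itself; it only marks it for a human moderator and records the reasons.

```python
import re

# Hypothetical patterns for illustration; a production system would use
# trained models, not a hand-written keyword list.
SPAM_PATTERNS = [r"buy now", r"click here", r"https?://\S+\.xyz"]
OFFENSIVE_PATTERNS = [r"\bidiot\b", r"\bstupid\b"]

def flag_for_review(comment: str) -> dict:
    """Flag a comment for human moderation; never remove it automatically."""
    lowered = comment.lower()
    reasons = []
    if any(re.search(p, lowered) for p in SPAM_PATTERNS):
        reasons.append("possible spam")
    if any(re.search(p, lowered) for p in OFFENSIVE_PATTERNS):
        reasons.append("possibly offensive language")
    return {
        "text": comment,
        "needs_human_review": bool(reasons),  # a moderator makes the final call
        "reasons": reasons,
    }
```

Keeping the decision with a human moderator, as in this sketch, is exactly the kind of supervision the AI Act’s transparency logic encourages.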

Go Vocal is a digital community engagement platform designed to improve decision-making. It currently uses AI and AI-powered tools for four functions: content moderation, translation, online and offline surveys, and data processing. The data processing function, called ‘Sensemaking’, allows users to quickly search, classify, organise and summarise the data they need. To ensure accuracy, every summary produced by Sensemaking references the file in which the underlying data is stored.
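The traceability principle behind Sensemaking, where every summary points back to its source file, can be illustrated with a short Python sketch. All records, field names and function names below are hypothetical; this is not Go Vocal’s actual implementation, only a demonstration of the idea that summaries should stay verifiable against the original data.

```python
from collections import defaultdict

# Hypothetical consultation responses; a real tool would ingest survey data.
responses = [
    {"id": 1, "file": "survey_2024.csv", "topic": "transport", "text": "More bike lanes"},
    {"id": 2, "file": "survey_2024.csv", "topic": "housing", "text": "Affordable rents"},
    {"id": 3, "file": "workshop_notes.txt", "topic": "transport", "text": "Better bus routes"},
]

def summarise_by_topic(records):
    """Group responses by topic, keeping a reference to each source file
    so every summary entry can be traced back to the original data."""
    grouped = defaultdict(list)
    for record in records:
        grouped[record["topic"]].append(record)
    return {
        topic: {
            "count": len(items),
            "sources": sorted({item["file"] for item in items}),
        }
        for topic, items in grouped.items()
    }
```

Because each summary entry carries its source files, a user can always check a claim like “two responses mentioned transport” against the underlying records.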

Recommendations

The use of AI and AI-powered tools offers many advantages for participation platforms, such as analysing and synthesising responses, overcoming language barriers or extracting patterns from data. However, regardless of the applicable regulations, in most cases AI must be supervised by a human to ensure the accuracy of the information it provides and generates.

Based on the above, the examples presented and the discussions held during the event organised by ECNL, the following recommendations were put forward:

  • Implement a GDPR-compliant data protection policy.
  • Conduct a risk and fundamental rights impact assessment when developing AI.
  • Ensure that marginalised groups are not disproportionately affected when using AI-assisted content moderation.
  • Organise digital literacy training so that users can engage with AI-assisted tools safely.
  • Involve CSOs in the development and deployment of AI tools.
  • Create a common repository of typical fundamental rights risks and impacts to facilitate consultation, mitigate risks and avoid harm.