Ethics in Action #1: How deliberAIde's Tools Preserve Human Agency & Protect Data Privacy

Having introduced and outlined each of our ethical principles in the first instalment of this series, it's now time to delve deeper into how we operationalise Principle 1 (Preserving Human Agency) and Principle 2 (Protecting Data Privacy).
Principle 1: Preserving Human Agency
Preserving Human Agency: we are committed to providing our users with the ability to maintain meaningful oversight and control, from start to finish, over which AI tools are employed within their specific contexts and how.

While it's easy to get caught up in AI hype narratives that suggest automating everything away from humans is a desirable goal, we firmly believe that AI tools should empower, rather than replace, human decision-making. Thus, we make sure to explicitly design our tools to maximise user agency.
We recognise that effective solutions to real-world democratic challenges require human experts to retain full control and flexibility over both the AI tools they use and the outputs those tools generate. As a result, we go beyond the typical oversight-focused "human-in-the-loop" approach (where humans merely observe AI functions) and embrace a stronger "human-in-control" philosophy in the design of our AI tools, requiring them to obtain active human approval before taking any action.
What does this mean in practice? It means we embed extensive customisation options into every tool we create, giving our users the power to activate, deactivate, or modify specific tools to fit their unique needs. Users therefore always retain the agency to determine how different AI tools are deployed in their specific contexts and what they get out of them. For instance, in the first version of our platform, users will be able to:
- Edit AI-generated summaries so they truly reflect stakeholder perspectives;
- Link AI-generated insights directly back to participant quotes for easy verification;
- Create custom tags and filters that organise AI outputs according to their specific needs.
These features will ensure that our users remain firmly in the driver's seat, maintaining meaningful oversight and control over their stakeholder engagement processes from start to finish.
Looking ahead
Our goal for the future is to provide our users with even more flexibility and control via an end-to-end platform that's loaded with every democratic engagement tool they could need – whether they're running mass digital surveys or facilitating face-to-face deliberative sessions. By making our platform modular, we hope to create an interoperable toolbox that can be combined and integrated with other complementary applications, tools and platforms, enabling democracy practitioners to mix and match different tools to suit different types of engagements.
Principle 2: Protecting Data Privacy
Protecting Data Privacy: we are committed to protecting user privacy across all of our operations and to adhering to the various privacy standards of different stakeholder communities.

In today's world, data is both incredibly valuable and increasingly vulnerable. At deliberAIde, we believe people should have complete control over their personal information – how it's collected, who accesses it, who it's shared with, where it's stored, and how it's used. We define data privacy as the fundamental right of data subjects (i.e. our users and the stakeholders they engage) to exercise exactly this control.
Privacy is a universal right. For those building data-reliant AI tools for democratic deliberation contexts, then, privacy protection isn't just a 'nice to have' – it's an essential requirement for making people feel comfortable enough to share their true perspectives without the fear of having their words weaponised against them. And since our tools gather and process audio and text data from deliberative engagements, often including sensitive, personal and/or proprietary information, we take our responsibility to protect privacy and confidentiality seriously. That's why we've implemented robust privacy measures which ensure that all our data collection and processing operations strictly adhere to EU and UK GDPR standards.
In line with EU GDPR Articles 4(11) and 7, before gathering, using or storing personal data, we require both users and recorded participants to provide explicit consent that is (i) freely given (without pressure), (ii) specific to particular data processing operations, (iii) informed through clear, comprehensible information, and (iv) demonstrated through affirmative action. We make a point of clearly explaining the purposes of our data collection, usage and storage processes to our users before they ever touch our tools, and we regularly remind them to do the same with their stakeholders before recording any discussions.
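To make the four validity criteria concrete, here's a minimal, purely illustrative sketch (not our actual codebase) of how consent could be modelled as an explicit record that gates processing; all names here, such as ConsentRecord, are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str              # the specific processing operation consented to
    freely_given: bool        # no pressure or conditionality
    informed: bool            # a clear explanation was shown and acknowledged
    affirmative_action: bool  # e.g. an explicit opt-in click, never a pre-ticked box

    def is_valid_for(self, purpose: str) -> bool:
        """Consent only authorises the specific purpose it was given for."""
        return (
            self.purpose == purpose
            and self.freely_given
            and self.informed
            and self.affirmative_action
        )

consent = ConsentRecord("participant-17", "transcription", True, True, True)
consent.is_valid_for("transcription")    # valid for the stated purpose
consent.is_valid_for("model-training")   # invalid: a new purpose needs new consent
```

The point of the explicit `purpose` check is specificity: consent given for one processing operation never silently carries over to another.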

Moreover, we operate on a minimal data retention approach: we only collect, store and use data that is genuinely essential to the delivery of our core services, and only for our explicitly stated purposes. To help us with this, we've integrated robust anonymisation processes into our platform that automatically identify and remove personally identifiable information (PII) from both stored data and tool outputs. Our transcription tool, for example, automatically strips names and other personal identifiers from deliberation transcripts as it generates them, so that users only ever store anonymised transcripts.
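For readers curious what this kind of redaction looks like, here's a deliberately simplified sketch, assuming a known list of participant names and basic patterns for emails and phone numbers (our production pipeline is more sophisticated; the `anonymise` function and its patterns are illustrative only):

```python
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"(?:\+|\b)\d[\d\s-]{7,}\d\b")

def anonymise(text: str, participant_names: list[str]) -> str:
    """Replace known names and common identifier patterns with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    for name in participant_names:
        # \b keeps us from mangling words that merely contain the name
        text = re.sub(rf"\b{re.escape(name)}\b", "[NAME]", text)
    return text

line = "Maria said we can reach her at maria@example.org or +44 20 7946 0958."
anonymise(line, ["Maria"])
# "[NAME] said we can reach her at [EMAIL] or [PHONE]."
```

Real transcripts need more than regexes (nicknames, misspellings, indirect identifiers), which is exactly why we're pursuing a dedicated anonymisation tool, as discussed below.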
Through such privacy-by-design efforts, our aim is to create a platform that users can trust to keep their sensitive deliberation data private and secure while still enabling meaningful analysis.
Looking ahead
While we're proud of our current practices, we recognise that we still have a long way to go in ensuring maximal privacy protection for our data subjects. Thus, we're working on implementing even stronger privacy safeguards in the future, including:
- Establishing dynamic, continual consent mechanisms: We want to move beyond one-off consent mechanisms towards a more dynamic system where data subjects can easily opt in or out of data usage or storage at any point, withdraw their previous consent, and request specific sections of past transcripts to be redacted or deleted (e.g. if a stakeholder no longer supports a position they once advocated for, or simply feels uncomfortable about their stored data).
- Enabling local hosting: At present, we primarily rely on Microsoft's cloud services to sustain our operations, which we recognise don't always guarantee that AI models are hosted and data is stored on servers with strict privacy standards. Once we have more capacity and resources, we will be working to shift our reliance away from US-based cloud services to more secure European data and AI infrastructure. We also plan to eventually build a tech stack that enables our users to deploy our tools locally, so that those who need extra data privacy and security can run our tools and store their data on privacy-compliant servers or even self-host if needed.
- Building a better anonymisation tool: We're actively seeking grants to develop a custom open source tool that would improve the contextual consistency, reliability and accuracy of anonymisation in deliberation transcripts across different deployment scenarios.
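To illustrate the dynamic consent idea in the first point above, here's a hypothetical sketch (the `TranscriptStore` name and design are our own illustration, not a real component): each stored segment stays linked to its speaker, so a later withdrawal can redact exactly the passages that person no longer consents to keeping.

```python
from dataclasses import dataclass, field

@dataclass
class TranscriptStore:
    segments: list = field(default_factory=list)  # (speaker_id, text) pairs

    def add(self, speaker_id: str, text: str) -> None:
        self.segments.append((speaker_id, text))

    def withdraw(self, speaker_id: str) -> None:
        """Honour a withdrawal request by redacting that speaker's segments."""
        self.segments = [
            (s, "[REDACTED AT SPEAKER'S REQUEST]" if s == speaker_id else t)
            for (s, t) in self.segments
        ]

store = TranscriptStore()
store.add("p-3", "I support option A.")
store.add("p-7", "I disagree.")
store.withdraw("p-3")  # p-3's words are redacted; p-7's remain intact
```

The design choice that matters is keeping the speaker link in storage: without it, honouring a withdrawal after the fact would be impossible.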
Closing remarks
We hope you've enjoyed this first glimpse into how we're translating high-level ethical commitments into action at deliberAIde. As we gear up to launch our initial suite of tools at the end of May, these operational principles will ensure that our platform is maximally privacy-protecting and agency-preserving from day one.
If you're enjoying this series so far, stay tuned for our next post, where we'll shift our focus to Principle 3 (Championing Deliberative Stakeholder Engagement) and Principle 4 (Maintaining a Diverse Team).
Until then, we'd love to hear your thoughts on these principles and how you're approaching similar challenges in your own work!