Recap: Brave Conversations Stuttgart 2025 - Exploring Trust, Values, and Agency in the Age of AI

July 21, 2025

On July 4 and 5, 2025, Brave Conversations created a dynamic learning space where participants explored the societal and ethical dimensions of AI through creative methods and critical reflection.

Being Human in the Age of Smart Machines


Exploring Trust, Values, and Agency in the Age of AI

In collaboration with our partners from Intersticia, Change Labs Stuttgart, and IRIS, we had the pleasure of hosting a two-day learning event that brought together students, researchers, and educators at the intersection of technology and society. Brave Conversations is a format designed to explore the human aspects of digital transformation, opening up space for critical thinking, ethical reflection, and value-driven debate.

Day 1: Trust, Technology, and Human Values

We began with a student workshop led by Ghada Ibrahim on the topic: “Exploring Trust in Technology – Humans & AI: Do We Choose to Trust AI – or Just Surrender to It?”

Students were asked to reflect on key questions:

  • Is tech amplifying existing fractures?
  • Are tech giants exploiting gaps in social resilience and connectedness?
  • How could technology be designed to reinforce positive human values rather than exploit weaknesses?

Using the Human-Centered Design cycle, students worked in teams to analyze small-scale case studies, applying creative problem solving to explore the risks and opportunities AI poses. The method helped participants structure their thinking while staying open to unexpected insights. What emerged was a nuanced sense of AI's double-edged nature: its potential to empower society, and its capacity to reinforce systemic inequalities. The session left students with a deeper awareness of their own assumptions about technology and set the stage for even more probing questions the next day.

Day 2: Brave Conversations Stuttgart 2025 – Learning Through Play, Conflict, and Reflection

Saturday’s Brave Conversations event shifted the focus to collective exploration. Led by our hosts Anni Rowland-Campbell, Hannah Stewart, and Ghada Ibrahim, we began by grounding ourselves in the history of technological development: technologies aren’t abstract forces; they are invented, funded, and deployed by people within political and economic systems.

Recognizing this brought us to the key question of the day:
If we are the ones building, training, and actively using technology, what does that say about us? And how can we design tech that serves society, not just markets?

The day unfolded as an embodied learning experience that incorporated movement and emotion. Brave Conversations is about learning through doing, feeling, and experiencing, rather than only thinking or reading. This approach carried into the afternoon, as participants worked through a fictional criminal case to examine values, responsibility, and digital agency. Using Moore’s Strategic Triangle as a framework, we asked:

  • What can we do? (Operational Capacity)
  • What may we do? (Legitimacy & Governance)
  • What should we do? (Public Value)


The case surfaced complex ethical questions: Who owns our data after death? Can a large language model (LLM) speak on behalf of someone who is gone? Are we morally responsible for acting on its insights? Moreover, how do we identify, and challenge, the biases that shape its responses?

Simulating the Future? AI in Action

To bring theory into lived experience, the afternoon ended with a collaborative simulation using an LLM. Working in groups, participants entered prompts into the LLM and received tasks designed to move them toward a given goal. Roles shifted throughout the session – from active agents to observers to policymakers – creating a dynamic setting for testing human-AI interaction.

What followed was as insightful as it was chaotic. At first, the LLM’s responses seemed helpful and directive. However, as groups encountered conflicting goals and communication breakdowns, the AI began looping, offering repetitive solutions and defaulting to generic strategies. Surprisingly, instead of stepping outside the logic of the tool, many participants kept turning back to the LLM for answers – even when it was clearly stuck.

This prompted critical reflection: Why did we trust the system so much, even when it wasn’t delivering? What does this say about our willingness to surrender agency? And how do we train ourselves – and future professionals – to stay self-aware and critical in tech-mediated environments? For a more in-depth reflection on the notion of trust in AI, read Ghada's blog entry on the Intersticia website.

Looking Back, Looking Forward

Across both days, Brave Conversations created a space for open inquiry, discomfort, humor, and insight. It invited participants to think not just about technology but also with and through it. It challenged them to articulate their values, engage in debate, and reflect on how digital systems shape, and are shaped by, human behavior.

In a world where tech is increasingly interwoven with every career path, event formats like this help prepare students to become thoughtful, ethical actors in the systems they will one day lead. We are grateful to all participants, facilitators, and partners for their courage to have brave conversations, and we look forward to continuing them.
