Scope
SCOPE explores a new way forward – using AI to make surveys more adaptive and more inclusive.
At the heart of the project is a pilot test of an AI-powered conversational agent – a digital tool that leads users through a mental health literacy survey not as a form, but as a conversation. The AI responds to queries, handles confusion, gently redirects digressions, and ensures respectful, accessible dialogue. It adapts to the user, instead of expecting the user to adapt to it.
But SCOPE is not just about improving one survey.
This pilot is a proof of concept for a larger idea: that AI can be used to create smarter, more meaningful ways of gathering public insight – especially in contexts where traditional surveys fall flat. We are testing whether a conversational agent can:
- Improve completion rates and engagement
- Handle real-world ambiguity in user responses
- Respect boundaries while still collecting useful data
- Monitor for inclusion and underrepresented voices in real time
- Log drop-offs, digressions, and friction points for future refinement
We are starting with young adults and mental health. But the potential use cases go far beyond – to city councils, schools, community organisations, and anyone who needs to engage people in meaningful, inclusive ways.
SCOPE asks the question – What if surveys were not something you filled out… but something you had a conversation with?
This pilot is our first step toward answering that!

Adaptive Surveys
SCOPE uses an AI-powered conversational agent that adapts in real time to each participant’s responses. It can clarify questions, handle digressions gently, and guide the user without breaking the flow – making the experience feel more like a dialogue than a form.
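For readers who want a feel for what this turn-by-turn handling looks like, here is a minimal sketch in Python. The question wording, intent labels, and the classify_turn helper are illustrative assumptions, not SCOPE’s actual implementation – in the pilot itself, the understanding and phrasing come from the conversational AI rather than keyword rules.

```python
from dataclasses import dataclass, field

# Illustrative questions only – not the pilot's actual survey items.
QUESTIONS = [
    "How confident do you feel recognising signs of stress in yourself?",
    "Where would you look for support if you were struggling?",
]

@dataclass
class SurveySession:
    index: int = 0                                # current question
    answers: list = field(default_factory=list)   # (question, response) pairs

def classify_turn(user_text: str) -> str:
    """Crude intent detection; a real agent would use a language model."""
    text = user_text.lower()
    if "skip" in text:
        return "skip"
    if "?" in text or "what do you mean" in text:
        return "clarification"
    return "answer"

def next_turn(session: SurveySession, user_text: str) -> str:
    """Respond to one participant message and advance the conversation."""
    question = QUESTIONS[session.index]
    intent = classify_turn(user_text)
    if intent == "clarification":
        # Rephrase rather than repeating the question verbatim.
        return f"No problem – put simply: {question}"
    if intent == "answer":
        session.answers.append((question, user_text))
    # Both answers and skips move the conversation on, without friction.
    session.index += 1
    if session.index < len(QUESTIONS):
        return "Thanks for sharing that. " + QUESTIONS[session.index]
    return "That was the last one – thank you for your time."
```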

Increased Engagement
By turning survey forms into a dynamic conversation, SCOPE helps reduce survey fatigue and increase completion rates. Participants stay more focused and involved, even when discussing complex or sensitive topics.

Inclusive Data Collection
SCOPE monitors for underrepresented voices, tracks skipped questions and digression points, and allows users to opt out – supporting respectful, inclusive participation across diverse groups.
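As a rough illustration of what that monitoring could record, the sketch below logs skips, digression points, and opt-outs per session and aggregates them anonymously. The field names are assumptions for illustration, not the project’s actual data model; the point is that friction is tracked at the question level, so questions get redesigned rather than participants profiled.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class ParticipationLog:
    """Anonymous per-session signals – no identifying details are kept."""
    skipped: list = field(default_factory=list)      # question ids skipped
    digressions: list = field(default_factory=list)  # question ids where the user went off-topic
    opted_out: bool = False

def inclusion_summary(logs: list) -> dict:
    """Aggregate friction points across sessions for future refinement."""
    n = max(len(logs), 1)
    return {
        "opt_out_rate": sum(log.opted_out for log in logs) / n,
        "most_skipped": Counter(q for log in logs for q in log.skipped).most_common(3),
        "digression_points": Counter(q for log in logs for q in log.digressions).most_common(3),
    }
```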

Ethical AI Practices
SCOPE is designed with strict guardrails – no diagnosis, no advice, and no storage of identifiable personal data. It prioritises care, clarity, and autonomy – showing how AI can support human-centred research without overstepping.
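To make the guardrail idea concrete, here is a minimal sketch of a pre-response check: out-of-scope requests get a gentle redirect, and obvious identifiers are stripped before anything is stored. The trigger phrases, wording, and patterns are illustrative assumptions, not SCOPE’s actual safety rules.

```python
import re

# Illustrative trigger phrases only – not the pilot's actual rules.
OUT_OF_SCOPE = {
    "diagnosis": ["do i have", "am i depressed", "diagnose me"],
    "advice": ["what should i do", "should i take", "which medication"],
}

REDIRECT = (
    "I'm not able to offer a diagnosis or advice, but your answers here really do help. "
    "If you would like support, a GP or a helpline is the right place to start."
)

def apply_guardrails(user_text: str):
    """Return a gentle redirect if the request is out of scope, otherwise None."""
    text = user_text.lower()
    for phrases in OUT_OF_SCOPE.values():
        if any(p in text for p in phrases):
            return REDIRECT
    return None

def strip_identifiers(user_text: str) -> str:
    """Remove obvious identifiers (emails, long digit strings) before anything is stored."""
    text = re.sub(r"\S+@\S+", "[email removed]", user_text)
    return re.sub(r"\+?\d[\d\s-]{7,}\d", "[number removed]", text)
```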
What’s Included: Our Core Features
Welcome screen
A warm, clear introduction that sets the tone and purpose of the conversation from the first click.
One-click management
Easily control the survey experience with simple actions such as skip, repeat, or ask again – no need to type.
Custom greetings
A personalised, friendly message that adapts to context and makes every participant feel welcome.



Integrated Questions
The AI agent seamlessly embeds each survey question into a natural conversational flow, so it never feels like a form being filled out but rather like a guided dialogue. There are no page breaks or rigid transitions – just smooth, adaptive pacing.
Helps answer questions
When a participant hesitates or asks for help with an answer, the agent can offer gentle hints or simple examples – just enough support to help them reflect and respond. It does not lead or judge; it simply clarifies.


Helps understand questions
If a user seems confused or asks for clarification, the agent offers plain-language explanations or rephrasing. This helps reduce cognitive load and ensures questions are accessible to participants with different levels of mental health literacy.
Encourages when a participant loses motivation
If the participant slows down or appears disengaged, the agent offers gentle encouragement. This might include short affirmations, reminders of the value of their input, or a suggestion to take a break and return to the survey when the participant feels ready.


Informs about the status
When asked, the agent lets users know how far they have come and how much is left, without rushing them. It helps manage expectations and reduce fatigue by keeping users aware of their progress in a supportive tone.
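Here is a tiny sketch of what an on-request progress update might look like; the wording and the “nearly there” threshold are illustrative assumptions, chosen to keep the tone supportive rather than hurried.

```python
def progress_message(answered: int, total: int) -> str:
    """Answer 'how much is left?' in a supportive, unhurried tone."""
    remaining = total - answered
    if remaining <= 0:
        return "That was the last one – thank you for staying with it."
    if answered / total >= 0.75:
        return f"You're nearly there: just {remaining} more, whenever you're ready."
    return f"You've covered {answered} of {total} so far – take whatever time you need."

print(progress_message(9, 12))  # -> "You're nearly there: just 3 more, whenever you're ready."
```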
The Team


Dr Swati Virmani
is the Deputy Head – Academic at De Montfort University (DMU) London Campus. She is a passionate educator and researcher in Human-Centred AI, Governance, and Digital Transformation, committed to pedagogical innovation and industry-aligned teaching. She is particularly interested in how Generative AI is reshaping assessment, learning design, and institutional responsibility within UK Higher Education. An Associate of the UK’s Economics Network, she was conferred the title of DMU University Teacher Fellow in 2023.
Swati has led projects on AI-integrated curriculum development, culturally sensitive AI systems, and Responsible AI strategy, with a focus on promoting equity, unbiased design, transparency, and community engagement. She has advised policy-makers and industry bodies on the ethical use of AI in public and professional settings. As a member of the UK’s Chartered Institute of Public Relations’ AI in PR panel, she has contributed to three major reports assessing the industry’s readiness for an AI-driven future. She speaks nationally and internationally, including at the UK AI Summit, Global Alliance Public Relations (Africa & Europe), and the Institute for Public Relations (New York), on the future of AI in education, communication and engagement, and social good.


Dr Jawad Ashraf
is a Senior Lecturer in Computing at De Montfort University, with a PhD in Computer Science from the University of Leicester. His research intersects machine learning, natural language processing, and intelligent systems, with a growing focus on generative AI for addressing multidisciplinary community challenges. Previously, he served as Assistant Professor at Kohat University of Science and Technology, where he led academic-industry linkages, startup incubation, and the commercialisation of applied research. Dr Ashraf has supervised numerous PhD and MS projects, and continues to work at the intersection of education, technology, and social impact.