Open Category
Entry ID
530
Participant Type
Team
Expected Stream
Stream 1: Identifying an educational problem and proposing a solution.

Section A: Project Information

Project Title:
MindGuard - An AI-Powered Critical Thinking and National Education Mentor to Combat Fake News, Disinformation, and Conspiracy Beliefs Prevalent among Adolescents
Project Description (maximum 300 words):

MindGuard is an AI-powered mentoring system designed to combat the rise of fake news, disinformation, and conspiracy theories among adolescents. By leveraging state-of-the-art large language models (LLMs) with deep reasoning capabilities, MindGuard provides personalized, interactive, and scalable critical thinking education.

Our Key Innovations:
1. AI-Driven Personal Mentorship: MindGuard tailors its approach to each adolescent, identifying misconceptions and engaging in deep reasoning to provide fact-based refutations.
2. Deep Reasoning and Persuasion: Unlike traditional fact-checking tools, MindGuard does not merely present facts; it reasons through misinformation and strategically engages in dialogue to help adolescents develop critical thinking skills.
3. Real-Time Adaptability: The system utilizes advanced web search functionalities to provide the most up-to-date and contextually relevant evidence during interactions.

Design and Technical Principles:
MindGuard operates in three key stages (a minimal sketch of this loop follows the list):
1. Identify: The AI mentor chats with adolescents to detect misconceptions about fake news, disinformation, and conspiracy theories.
2. Reason: The system analyzes the misinformation, gathers counter-evidence, and formulates logical arguments.
3. Convince: MindGuard engages in a structured dialogue to help adolescents critically reassess their beliefs using persuasive, evidence-backed reasoning.
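
For illustration, the three stages could be wired together as in the minimal Python sketch below. The chat placeholder stands in for whichever chat-completion client is finally used (for example, an endpoint serving DeepSeek-R1); the function names and prompt wording are ours, not a finished implementation.

```python
# Minimal sketch of the Identify -> Reason -> Convince loop.
# `chat` is a placeholder for any chat-completion client; replace it with a
# real call (e.g. to an endpoint serving DeepSeek-R1) before use.

from typing import Dict, List

def chat(messages: List[Dict[str, str]]) -> str:
    raise NotImplementedError("plug in a real chat-completion client here")

def identify(student_message: str) -> str:
    # Stage 1: surface the specific misconception the student expresses, if any.
    return chat([
        {"role": "system", "content": "Summarise any misconception in the message in one sentence."},
        {"role": "user", "content": student_message},
    ])

def reason(misconception: str, evidence: List[str]) -> str:
    # Stage 2: build a step-by-step, evidence-backed rebuttal from retrieved sources.
    return chat([
        {"role": "system", "content": "Write a logical rebuttal that cites the evidence provided."},
        {"role": "user", "content": f"Misconception: {misconception}\nEvidence:\n" + "\n".join(evidence)},
    ])

def convince(student_message: str, rebuttal: str) -> str:
    # Stage 3: reply as a supportive mentor, inviting the student to reassess the belief.
    return chat([
        {"role": "system", "content": "You are a respectful critical-thinking mentor for adolescents."},
        {"role": "user", "content": f"Student said: {student_message}\nRebuttal notes: {rebuttal}"},
    ])
```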

Potential Impact:
1. Empowering Youth with Critical Thinking Skills: MindGuard fosters resilience against misinformation by developing adolescents’ ability to assess information critically.
2. Scalability and Accessibility: Designed for broad implementation, the system can be integrated into educational curricula or used independently as a digital AI mentor.
3. Localization for Hong Kong and Beyond: MindGuard is customized for the local context, aligning with national education priorities and policies.

MindGuard represents a transformative approach to AI in education, addressing the urgent challenge of misinformation while equipping the next generation with essential cognitive tools for the digital age.

File Upload

Section B: Participant Information

Personal Information (Team Member)
Title | First Name | Last Name | Organisation/Institution | Faculty/Department/Unit | Email | Phone Number | Contact Person / Team Leader
Dr. | Chi Chiu | So | The Hong Kong Polytechnic University | School of Professional Education and Executive Development | kelvin.so@cpce-polyu.edu.hk | 67633133 |
Dr. | Anthony Wai Keung | Loh | The Hong Kong Polytechnic University | Hong Kong Community College | anthony.wk.loh@cpce-polyu.edu.hk | 92588572 |
Dr. | Siu Pang | Yung | The University of Hong Kong | Department of Mathematics | spyung@hku.hk | 69963427 |
Mr. | Cheuk Ho | Lee | The Hong Kong Polytechnic University | Department of Computing | 24035839d@connect.polyu.hk | 65738510 |

Section C: Project Details

Project Details
Please answer the questions from the perspectives below regarding your project.
1.Problem Identification and Relevance in Education (Maximum 300 words)

The rapid expansion of social media and generative AI has drastically changed how adolescents consume information. While these technologies offer unprecedented access to knowledge, they have also amplified the spread of fake news, disinformation, and conspiracy theories. Adolescents, still in the process of cognitive development, are particularly vulnerable to persuasive but misleading narratives, leading to real-world consequences. Events such as the 2019 Hong Kong social unrest, the COVID-19 misinformation wave, and the January 6 U.S. Capitol attack illustrate the dangers of unchecked misinformation influencing young minds.

Recognizing this growing crisis, we drew inspiration from a recent study published in the prestigious journal Science, “Durably reducing conspiracy beliefs through dialogues with AI”, which demonstrates AI's potential to reduce conspiracy beliefs through interactive dialogues. Traditional critical thinking education is resource-intensive and often lacks the personalized engagement necessary to challenge deeply held misconceptions. Our hypothesis is that an AI-powered mentor, capable of deep reasoning and interactive persuasion, can bridge this gap by providing adolescents with personalized, real-time guidance to critically evaluate misinformation.

MindGuard is designed to address this challenge by leveraging state-of-the-art large language models (LLMs) with deep reasoning capabilities. Unlike static fact-checking tools, MindGuard engages adolescents in structured dialogue, identifying misconceptions, reasoning through misinformation, and providing persuasive, evidence-backed refutations. By simulating one-on-one mentorship, the system fosters critical thinking skills essential for navigating the digital landscape.

We believe MindGuard will succeed because it combines the proven effectiveness of dialogue-based belief correction with the scalability of AI. By integrating real-time adaptability, localization for Hong Kong, and alignment with national education policies, MindGuard offers a transformative, scalable, and sustainable approach to combating misinformation in education.

2a. Feasibility and Functionality (for Streams 1&2 only) (Maximum 300 words)

MindGuard leverages large language models (LLMs) with deep reasoning capabilities, specifically DeepSeek-R1, to provide personalized, AI-powered critical thinking mentorship for adolescents. It integrates real-time web search functionalities to ensure responses are backed by the most current and contextually relevant information.

Technologies and Implementation (an illustrative sketch of the dialogue step follows this list):
1. LLM-Powered Dialogue System: MindGuard uses DeepSeek-R1, optimized for deep reasoning and persuasive engagement, to challenge misinformation through interactive conversations.
2. Web Search Integration: Real-time fact retrieval enhances AI-generated responses, ensuring accurate and up-to-date information.
3. Adaptive Learning System: Reinforcement Learning from Human Feedback (RLHF) enables the AI to refine its reasoning strategies over time using real dialogue records.
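
As a rough illustration of how items 1 and 2 fit together, the retrieval-grounded dialogue step might look like the sketch below. The endpoint URL, the model name deepseek-reasoner, and the web_search helper are assumptions standing in for whichever providers are ultimately adopted.

```python
# Sketch of a retrieval-grounded reply, assuming an OpenAI-compatible endpoint
# for DeepSeek-R1; the base_url, model name, and web_search helper are
# assumptions for illustration, not the deployed configuration.

from openai import OpenAI

client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_KEY")  # assumed endpoint

def web_search(query: str, k: int = 3) -> list[str]:
    # Placeholder: return the top-k snippets from the chosen search provider.
    raise NotImplementedError

def grounded_reply(claim: str) -> str:
    # Retrieve current evidence, then ask the model to respond as a mentor
    # using only that evidence.
    snippets = web_search(claim)
    messages = [
        {"role": "system", "content": "You are MindGuard, a critical-thinking mentor. "
                                      "Use only the evidence provided and cite it."},
        {"role": "user", "content": f"Claim: {claim}\n\nEvidence:\n" + "\n".join(snippets)},
    ]
    response = client.chat.completions.create(model="deepseek-reasoner", messages=messages)
    return response.choices[0].message.content
```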

Required Resources:
1. Computational Infrastructure: Cloud-based GPU servers for scalable AI deployment.
2. Data and Domain Expertise: Collaboration with educators, psychologists, and policymakers to refine conversational strategies.
3. Pilot Testing and User Feedback: School partnerships to assess engagement and learning outcomes.

Market Validation:
We will conduct:
1. Surveys & Focus Groups: Understanding student, parent, and teacher needs.
2. School-Based Pilots: Testing MindGuard in real-world educational settings.
3. Policy Alignment: Ensuring relevance to Hong Kong’s education strategies on critical thinking and national education.

Core Functionalities & User Experience:
1. AI-Powered Personal Mentorship: Engaging, interactive dialogues tailored to user misconceptions.
2. Personalization and Adaptive Learning: Tailoring content to each learner so that critical thinking education remains engaging and relevant.

Performance Metrics:
1. Misinformation Resilience Scores: Measuring students’ ability to identify misinformation (an illustrative scoring sketch follows this list).
2. User Engagement & Retention Rates: Evaluating long-term adoption and interaction quality.
3. Educator & Parent Feedback: Assessing the perceived impact on adolescent reasoning skills.
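
One way to operationalise the first metric is a pre/post headline-judgement quiz scored with a normalised (Hake-style) gain, as sketched below; the quiz format and scoring are assumptions for illustration only.

```python
# Illustrative scoring for the misinformation-resilience metric: compare a
# student's pre- and post-intervention quiz accuracy and report a normalised
# gain. The quiz design and weighting here are assumptions.

def resilience_score(answers: list[bool]) -> float:
    # Fraction of quiz items (True = headline judged correctly) answered correctly.
    return sum(answers) / len(answers)

def normalised_gain(pre: float, post: float) -> float:
    # Hake-style normalised gain: improvement relative to the maximum possible improvement.
    return (post - pre) / (1 - pre) if pre < 1 else 0.0

# Example: 6/10 correct before using MindGuard, 8/10 after -> gain of 0.5
pre = resilience_score([True] * 6 + [False] * 4)
post = resilience_score([True] * 8 + [False] * 2)
print(f"pre={pre:.2f}  post={post:.2f}  gain={normalised_gain(pre, post):.2f}")
```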

By combining advanced AI, real-time adaptability, and interactive engagement, MindGuard ensures effective, scalable, and sustainable critical thinking education.

2b. Technical Implementation and Performance (for Stream 3&4 only) (Maximum 300 words)

N/A

3. Innovation and Creativity (Maximum 300 words)

MindGuard represents a groundbreaking approach to combating misinformation by combining deep reasoning AI mentorship with interactive, personalized critical thinking education. Unlike traditional fact-checking tools or static educational materials, MindGuard does not merely provide correct information. It engages adolescents in dynamic, persuasive dialogues to challenge their misconceptions and encourage independent critical thinking.

Our innovation lies in three key areas:
1. AI-Powered Personal Mentorship: MindGuard functions as a real-time critical thinking mentor, capable of identifying, reasoning, and persuading adolescents regarding misinformation. This one-on-one AI dialogue approach personalizes learning, making it more effective than conventional classroom instruction.
2. Deep Reasoning and Adaptive Persuasion: Unlike fact-checking tools that simply present correct information, MindGuard understands adolescent misconceptions, analyzes misinformation, and formulates logical counterarguments. By leveraging DeepSeek-R1 and web search integration, the system ensures that counter-evidence is always relevant, up-to-date, and contextually accurate.
3. Localization and Engagement Strategies: While previous research has demonstrated AI’s potential in reducing conspiracy beliefs, our project is the first localized adaptation for Hong Kong and the Chinese-speaking world. Additionally, we introduce personalization elements and an interactive, engaging user experience to make critical thinking education appealing to adolescents.

By combining these elements, MindGuard addresses three core user challenges: adolescent vulnerability to misinformation, the lack of personalized critical thinking education, and the need for scalable solutions. Its AI-driven approach allows mass adoption without compromising personalization, making it both innovative and highly scalable for national and global applications.

MindGuard is not just a tool. It is a revolutionary AI mentor designed to empower the next generation with the critical thinking skills necessary to navigate today’s complex information landscape.

4. Scalability and Sustainability (Maximum 300 words)

MindGuard is designed to be a scalable and sustainable AI-powered mentor, ensuring long-term impact in critical thinking education while addressing potential technical and engagement challenges.

Scalability Strategies:
1. Cloud-Based AI Deployment: MindGuard will leverage cloud computing and distributed processing, ensuring that it can handle increasing user demand efficiently. The system is designed for broad adoption in schools and independent use by adolescents, making it highly scalable beyond Hong Kong.
2. Efficient and Safe AI Model Selection: By utilizing DeepSeek-R1, a highly optimized LLM with deep reasoning capabilities and Chinese language proficiency, we ensure safety, effectiveness and efficiency. Its reasoning capabilities allow adaptive, real-time engagement with users.
3. Localization for Different Contexts: While initially designed for Hong Kong, MindGuard’s modular framework allows for easy adaptation to different cultural and national contexts, aligning with broader national education and STEM learning initiatives.

Sustainability Strategies:
1. Alignment with Educational and National Policies: MindGuard directly supports Hong Kong’s national education priorities, including the 14th Five-Year Plan’s focus on AI-powered education, national security awareness, and critical thinking development.
2. AI as a Daily Learning Companion: Like Siri or other virtual assistants, MindGuard can be seamlessly integrated into adolescents’ daily lives, ensuring long-term engagement beyond the classroom.
3. Continuous Learning and Adaptation: By integrating web search functionalities, MindGuard keeps pace with evolving misinformation trends, ensuring sustained effectiveness.
4. Scalable Infrastructure with Sustainable AI: Our approach emphasizes energy-efficient AI computing, leveraging cloud-based GPU optimization to minimize resource consumption while maintaining performance.

MindGuard is not just a one-time solution. It is a long-term, evolving AI mentor that will scale, adapt, and sustain engagement to combat misinformation and enhance critical thinking in the digital age.

5. Social Impact and Responsibility (Maximum 300 words)

MindGuard is a socially responsible AI-powered mentor designed to empower adolescents with critical thinking skills to combat fake news, disinformation, and conspiracy theories. By addressing these growing societal challenges, MindGuard contributes to a more informed, rational, and resilient generation.

Addressing Key Social Issues:
1. Protecting Adolescents from Misinformation: Adolescents are highly vulnerable to persuasive but misleading narratives, especially in politically and socially sensitive environments. MindGuard helps them develop logical reasoning skills to critically evaluate online content instead of passively accepting misinformation.
2. Promoting National and Civic Awareness: MindGuard aligns with Hong Kong’s national education priorities, helping students understand the importance of social stability, national security, and responsible digital citizenship.
3. Bridging Educational Inequality: Unlike traditional critical thinking education, which relies on teacher availability and classroom resources, MindGuard offers free, scalable AI mentorship, ensuring equal access to quality education for all students, regardless of socioeconomic background.

Metrics for Measuring Social Impact:
To evaluate MindGuard’s effectiveness, we will track:
1. Misinformation Resilience Scores: Assessing students’ ability to identify and reject misinformation before and after using MindGuard.
2. User Engagement and Retention Rates: Measuring adoption rates in schools and individual usage.
3. Teacher and Parent Feedback: Gathering qualitative insights on how MindGuard supports adolescents' critical thinking growth.

Ensuring Responsiveness to Evolving Needs:
MindGuard is designed to continuously adapt by integrating web search functionalities and user feedback-driven improvements, ensuring it stays relevant in the ever-changing landscape of misinformation.

By enhancing digital literacy, promoting equity in education, and fostering a critical-thinking culture, MindGuard contributes to a more resilient and socially responsible society.

Do you have additional materials to upload?
No
PIC
Personal Information Collection Statement (PICS):
1. The personal data collected in this form will be used for activity-organizing, record keeping and reporting only. The collected personal data will be purged within 6 years after the event.
2. Please note that it is obligatory to provide the personal data required.
3. Your personal data collected will be kept by the LTTC and will not be transferred to outside parties.
4. You have the right to request access to and correction of information held by us about you. If you wish to access or correct your personal data, please contact our staff at lttc@eduhk.hk.
5. The University’s Privacy Policy Statement can be accessed at https://www.eduhk.hk/en/privacy-policy.
Agreement
  • I have read and agree to the competition rules and privacy policy.