Participant Information |
---|---
Participant Type | Team
Team Name: | AIchamps
Personal Information (Team Member) |
Project Information |
Project Title: | BloomSphere AI
Expected Stream | 3
Project Description (maximum 300 words): | This project introduces an AI-powered teacher-student ecosystem that transforms assessment creation, personalization, and evaluation in education. The core innovation is a Hybrid AI approach that integrates Large Language Models (LLMs) for question generation with Smaller Language Models (SLMs) such as BERT for intelligent question classification. The system aligns all assessments with Bloom's Taxonomy, ensuring cognitive diversity across its six levels: Remember, Understand, Apply, Analyze, Evaluate, and Create. Traditional assessment methods are manual, inconsistent, and lack adaptability. Our solution addresses these limitations by automating the end-to-end process, from generating high-quality, taxonomy-based questions to grading and performance analytics. A rule-based system manages test scheduling, scoring logic, and personalized reporting. Students benefit from adaptive learning paths based on their uploaded study materials, enabling continuous self-evaluation and targeted improvement. The platform dynamically adjusts question difficulty and focus areas to match individual performance and learning gaps. Technically, the system combines natural language understanding, text generation, classification algorithms, and rule-based logic into a unified, scalable architecture. This hybrid design provides both flexibility and reliability, balancing accuracy against performance. The potential impact is substantial: educators save significant time and gain cognitive balance in their assessments. By aligning AI capabilities with pedagogical best practices, this project paves the way for data-driven, equitable, and future-ready education.
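The LLM-plus-SLM split described above can be made concrete with a minimal sketch. The real system would use a fine-tuned BERT classifier; here a keyword heuristic stands in for it so the classification interface is visible. The verb lists and the function name are illustrative assumptions, not the project's actual code.

```python
# Illustrative stand-in for the SLM classification step. The platform proposes a
# fine-tuned BERT model; this keyword heuristic only demonstrates the interface.
# Verb lists and function name are assumptions for illustration.

BLOOM_VERBS = {
    "Remember":   {"define", "list", "recall", "name", "state"},
    "Understand": {"explain", "summarize", "describe", "classify"},
    "Apply":      {"use", "solve", "demonstrate", "calculate"},
    "Analyze":    {"compare", "contrast", "differentiate", "examine"},
    "Evaluate":   {"justify", "critique", "assess", "argue"},
    "Create":     {"design", "compose", "construct", "propose"},
}

def classify_bloom_level(question: str) -> str:
    """Return the first Bloom level whose signal verb appears in the question."""
    words = {w.strip("?,.").lower() for w in question.split()}
    for level, verbs in BLOOM_VERBS.items():
        if words & verbs:
            return level
    return "Understand"  # fallback when no signal verb is found
```

A BERT-based replacement would keep the same input/output contract (question string in, one of the six levels out), which is what lets the generation and classification stages evolve independently.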
File Upload | AIchamps.pdf
Project Details | Please answer the questions from the perspectives below regarding your project.
1. Problem Identification and Relevance in Education (Maximum 300 words) | The idea for this project emerged from firsthand observation of the challenges educators and students face in modern classrooms. Teachers invest significant time manually creating and grading assessments, and often struggle to ensure that questions align with cognitive learning objectives such as those defined in Bloom's Taxonomy. Meanwhile, students frequently receive generic, non-personalized assessments that do not address their unique learning needs or gaps in understanding. We believe this approach will succeed because AI has matured to the point where it can reliably generate and categorize questions across learning levels. The project aligns with the evolving landscape of education, where scalable, data-driven tools are essential for delivering quality, inclusive, and future-ready learning.
2a. Feasibility and Functionality (for Streams 1 & 2 only) (Maximum 300 words) | Our solution leverages a Hybrid AI architecture combining Large Language Models (LLMs) such as GPT for automated question generation and Small Language Models (SLMs) such as BERT for categorizing questions according to Bloom's Taxonomy. These technologies will be integrated into a web-based platform with a user-friendly dashboard for educators and students. Rule-based systems will manage test scheduling, grading logic, and report generation, ensuring seamless automation. To build this platform, we require access to LLM and SLM APIs (e.g., OpenAI, Gemini). We plan to validate market demand through pilot programs with schools and coaching centers. Core functionalities of the platform include AI-powered question generation aligned with Bloom's levels. To ensure a positive user experience, we have designed an intuitive UI/UX with a minimal learning curve. As a performance metric, we will measure the accuracy with which AI-generated questions align with Bloom's levels. Our approach balances technical innovation with educational relevance, ensuring both feasibility and high-impact functionality.
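The rule-based grading and reporting logic mentioned above can be sketched as follows. The threshold, field names, and pass band are assumptions for illustration, not the platform's actual rules.

```python
# Minimal sketch of the proposed rule-based grading and report generation.
# The 50% pass threshold and report fields are illustrative assumptions.

def grade_test(answers: dict[str, str], key: dict[str, str]) -> dict:
    """Score a submission against an answer key and build a simple report."""
    correct = [q for q, a in answers.items() if key.get(q) == a]
    score = round(100 * len(correct) / len(key), 1)
    band = "Pass" if score >= 50 else "Needs review"  # rule-based threshold
    return {"score": score, "correct": correct, "band": band}

report = grade_test({"q1": "B", "q2": "D", "q3": "A"},
                    {"q1": "B", "q2": "C", "q3": "A"})
```

Because the rules live outside the AI components, scoring stays deterministic and auditable even when question generation is model-driven.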
2b. Technical Implementation and Performance (for Streams 3 & 4 only) (Maximum 300 words) | Our system architecture is designed around a modular, scalable AI-driven ecosystem that integrates state-of-the-art technologies for seamless automation and performance. The functional architecture consists of four core components: a frontend interface, backend processing, AI orchestration, and data management. As a performance metric, we will measure the accuracy of Bloom's-level classification. This implementation plan ensures a robust, flexible, and data-driven solution that tightly integrates advanced AI technologies with real-world educational needs.
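The stated performance metric, accuracy of Bloom's-level classification, could be computed along these lines on a labelled evaluation set. The evaluation data shown is made up for illustration.

```python
# Sketch of the stated performance metric: fraction of questions whose
# predicted Bloom level matches a human-assigned label. Data is illustrative.

def classification_accuracy(predicted: list[str], expected: list[str]) -> float:
    """Fraction of predictions that match the labelled Bloom level."""
    assert len(predicted) == len(expected)
    hits = sum(p == e for p, e in zip(predicted, expected))
    return hits / len(expected)

acc = classification_accuracy(
    ["Remember", "Apply", "Create", "Analyze"],
    ["Remember", "Apply", "Evaluate", "Analyze"],
)
```

Tracking this per level, rather than only overall, would also reveal whether the classifier confuses adjacent levels such as Evaluate and Create.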
3. Innovation and Creativity (Maximum 300 words) | Our project introduces a novel AI-powered ecosystem that transforms how assessments are created, delivered, and evaluated in education. While traditional platforms focus on digitizing static tests, our solution takes a fundamentally innovative approach by combining generative AI (Gemini) with orchestrated reasoning (LangChain + LangGraph) and a taxonomy-aligned classification engine, all integrated into a seamless user experience. The creative use of Bloom’s Taxonomy as a backbone for question generation and categorization sets our platform apart. Unlike conventional systems that generate generic questions, our solution ensures that each item targets a specific cognitive level—fostering deeper learning and critical thinking. This structured, yet dynamic, framework allows educators to create balanced and meaningful assessments effortlessly. Our platform's ability to personalize learning experiences using student-uploaded content and adaptive feedback is another unique innovation. By analyzing individual learning gaps and automatically generating relevant practice questions, the system not only saves educators time but also empowers students to take ownership of their progress. The use of LangGraph for agentic workflows adds another layer of creativity. It enables multi-step reasoning and adaptive dialogue with students, allowing for deeper interaction and real-time scaffolding, far beyond static quizzes. From a user experience perspective, we’ve reimagined assessment as a collaborative and intelligent process rather than a one-time test. Features like automated grading, real-time analytics, and performance-based content adaptation create a feedback-rich loop that continuously supports learner growth. 
By tightly integrating advanced AI with proven educational frameworks, this project exemplifies innovation in both technology and pedagogy, solving real-world classroom problems with scalable, intelligent, and personalized solutions.
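The multi-step adaptive flow described above can be sketched as a single state-transition step; in the actual platform this role would be played by a LangGraph node, and the difficulty rule here is purely an illustrative assumption.

```python
# Plain-Python stand-in for one node of the proposed LangGraph agentic loop:
# adjust question difficulty from the learner's last result. The 1..5 scale
# and the +/-1 rule are assumptions for illustration.

def next_state(state: dict) -> dict:
    """One adaptive step: harder after a correct answer, easier after a miss."""
    delta = 1 if state["last_correct"] else -1
    difficulty = min(5, max(1, state["difficulty"] + delta))
    return {**state, "difficulty": difficulty, "step": state["step"] + 1}

s = {"difficulty": 3, "step": 0, "last_correct": True}
s = next_state(s)                              # correct answer: raise difficulty
s = next_state({**s, "last_correct": False})   # miss: ease back down
```

In LangGraph terms, each such transition would be a graph node, with the state dict threaded through the graph between question generation, grading, and feedback.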
4. Scalability and Sustainability (Maximum 300 words) | Our platform is architected for scalability from the ground up, leveraging cloud-native technologies and modular AI orchestration to handle growing user demand without performance degradation. We use serverless and containerized deployment strategies (via AWS or GCP) to dynamically scale compute resources based on load. Gemini's API-based architecture and LangChain/LangGraph orchestration allow for parallel processing of assessments and real-time personalization at scale. MongoDB's flexible schema design ensures rapid, scalable data storage and retrieval across large datasets. To address potential bottlenecks, we will implement intelligent request throttling and load balancing for AI inference calls. For sustainability, we focus on cloud efficiency, using resource-optimized LLMs and inference endpoints to reduce energy consumption. To maintain long-term user engagement, we provide gamified learning experiences and badges based on progress.
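The request throttling for AI inference calls mentioned above could, for example, follow a token-bucket scheme. The capacity and refill rate below are assumptions, not measured requirements.

```python
# Token-bucket sketch of the proposed throttling for AI inference calls.
# Capacity and refill rate are illustrative assumptions.
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Refill from elapsed time, then spend one token if available."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=2, refill_per_sec=0.0)  # no refill: two calls max
results = [bucket.allow() for _ in range(3)]
```

Calls rejected by the bucket could be queued or routed to a cheaper fallback model, which is how throttling and load balancing would work together.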
5. Social Impact and Responsibility (Maximum 300 words) | Our solution directly addresses key social issues in education, namely inequity, lack of personalization, and limited access to quality assessment tools, by making intelligent learning support universally available. Many students, particularly in under-resourced communities, lack personalized academic support. Similarly, educators are burdened by manual tasks that hinder their ability to focus on meaningful instruction. By automating assessment creation and aligning it with Bloom's Taxonomy, we ensure that students at all cognitive levels receive balanced, skill-targeted learning opportunities. Our AI-powered system supports self-paced learning, allowing students to progress based on their individual needs and strengths. This is especially beneficial for students with learning differences or those from diverse language and socio-economic backgrounds. Our platform also empowers teachers in underserved schools by removing the technical and time barriers involved in designing high-quality assessments. With accessible tools and analytics, educators can provide timely interventions and track growth, enhancing educational outcomes across demographics. We align with broader goals of equity, inclusion, and lifelong learning by supporting multilingual capabilities for diverse student populations. Our social impact metrics include the increase in student performance and confidence (via pre-/post-assessment data). To remain responsive to evolving community needs, we will partner with educators, NGOs, and policymakers for ongoing feedback.
Do you have additional materials to upload? | Yes
Supplementary materials upload (Optional) |
PIC |
Personal Information Collection Statement (PICS): |
Agreement |