
Revel Reports Are Here!

July 4, 2025

By: Samuel Soo

Turning Revel’s usage data into actionable insights with automated teacher reports


Teachers need both product and process feedback to support student learning

At the heart of every teacher's mission is helping students learn and grow. To achieve this effectively, educators need a complete picture of student learning - not just where students end up, but how they get there. Holistic feedback requires both summative insights (the final product of learning) and formative insights (the learning process itself).

Traditionally, teachers in Singapore Junior Colleges have relied on two primary methods: seasonal examinations and regular homework assignments. However, both approaches now face significant challenges that prevent teachers from getting the complete feedback picture they need to support student learning effectively.

Tests show learning outcomes but miss the learning journey

Seasonal examinations, typically conducted a few times yearly as class tests and End-of-Year Examinations (EOYs), provide the most reliable assessment of student ability. These proctored, exam-condition assessments offer genuine insights into what students can accomplish independently. After each examination period, teachers collaborate to produce Examiners' Reports that identify common student weaknesses and strengths across different topics.

While valuable for measuring learning outcomes, this process focuses primarily on summative feedback - the final product of student learning. What's often missing is insight into the learning process itself. Teachers receive feedback on where students ended up, but not on the questions, confusion points, and struggles students encountered while getting there. This leaves teachers blind to the learning journey, making it difficult to understand why certain misconceptions persist or where students need additional scaffolding.

Furthermore, human biases can creep into the analysis. Anecdotal fallacies and saliency bias may lead teachers to overemphasize memorable student errors while missing broader patterns in the learning process.

Homework has been compromised by AI

Homework assignments traditionally offered teachers a window into the learning process. With weekly or daily assignments, teachers could monitor understanding in real-time, see where students got stuck, and provide targeted support before major examinations. This formative feedback approach allowed for timely intervention and gave teachers insight into how students were thinking through problems.

However, the widespread availability of ChatGPT and similar AI tools has fundamentally disrupted this process feedback method. These sophisticated language models can solve high school homework questions with remarkable speed and accuracy, making it increasingly difficult to distinguish between genuine student work and AI-generated responses. Despite media claims about AI detection software, the truth is that no reliable tools currently exist to consistently identify AI-generated academic content.

This creates a significant gap in the feedback loop. Students are increasingly relying on AI assistance for their studies, but teachers unknowingly end up tracking the capabilities of these AI systems rather than their students' actual learning progress. The traditional method for understanding the learning process has become unreliable, leaving teachers with only summative feedback from exams - half the picture they need to support student learning effectively.

Introducing Revel Reports for teachers

View a sample report <a href="/blog_assets/reports/report_example.pdf">here</a>!

Revel Reports have two key sections: Usage Statistics and Frequently Asked Questions (FAQs). These reports are generated using data within a specified time frame (e.g., monthly).

Usage Statistics

Example Usage Statistics

The purpose of this section is to convey some information about Revel’s health, which includes how popular it is among students and how well it is serving them. For this, we provide the following numbers:

  • Total Searches: Number of user queries sent to the Revel system.
  • Relevant Searches: Number of user queries sent to the Revel system that returned at least one search result. This excludes the “one-click queries” we provide so people can easily try Revel out, since those don't meaningfully represent genuine usage.
  • Active Users: Number of users that have sent at least one query.
  • Average searches per user: Number of relevant searches divided by number of active users.
  • Average searches per week: Number of relevant searches divided by number of weeks in the specified timeframe.
  • Latency in notes/videos endpoint: Average time in milliseconds for the Revel system to respond to a query.
  • Conversation rate: Percentage of relevant queries in which users activate the “chat” feature.
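Concretely, these metrics are simple aggregations over a query log. The sketch below is a minimal illustration in Python; the record fields (`user_id`, `num_results`, `one_click`, `latency_ms`, `opened_chat`) are assumed names for illustration, not Revel's actual schema:

```python
from dataclasses import dataclass

# Hypothetical query-log record; field names are illustrative only.
@dataclass
class QueryLog:
    user_id: str
    num_results: int   # search results returned for this query
    one_click: bool    # was this a pre-made "one-click" demo query?
    latency_ms: float  # response time of the notes/videos endpoint
    opened_chat: bool  # did the user activate the "chat" feature?

def usage_stats(logs: list[QueryLog], weeks: float) -> dict:
    """Aggregate the report's usage metrics over a reporting window."""
    total = len(logs)
    # Relevant = returned at least one result, excluding one-click demos.
    relevant = [q for q in logs if q.num_results > 0 and not q.one_click]
    users = {q.user_id for q in logs}
    return {
        "total_searches": total,
        "relevant_searches": len(relevant),
        "active_users": len(users),
        "avg_searches_per_user": len(relevant) / len(users) if users else 0.0,
        "avg_searches_per_week": len(relevant) / weeks if weeks else 0.0,
        "avg_latency_ms": sum(q.latency_ms for q in logs) / total if total else 0.0,
        "conversation_rate": (
            sum(q.opened_chat for q in relevant) / len(relevant) if relevant else 0.0
        ),
    }
```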

Frequently Asked Questions

Example of FAQs for the “Organic Chemistry” topic.

This section lists FAQs from students, referencing the database of logged queries for the specified time period. We won't go deep into the details of how we group and count similar queries, because the technical explanation deserves its own blog post. We have roughly 20k total query entries in our database right now, so making sense of noisy data at that scale required us to engineer a Revel-specific algorithm that could balance both precision and accuracy. 1
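To give a flavour of the general idea (and emphatically not Revel's actual algorithm), here is a naive greedy grouping based on token overlap. Real query data would additionally need normalisation, semantic similarity, and noise handling; everything below is an illustrative sketch:

```python
def tokens(query: str) -> frozenset:
    """Lowercase word set; a crude stand-in for real normalisation."""
    return frozenset(query.lower().split())

def jaccard(a: frozenset, b: frozenset) -> float:
    """Jaccard similarity between two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def group_queries(queries: list[str], threshold: float = 0.5) -> list[list[str]]:
    """Greedy single-pass clustering: attach each query to the first
    existing group whose representative is similar enough, else start
    a new group. Largest (most frequently asked) groups come first."""
    groups: list[list[str]] = []
    for q in queries:
        for g in groups:
            if jaccard(tokens(q), tokens(g[0])) >= threshold:
                g.append(q)
                break
        else:
            groups.append([q])
    return sorted(groups, key=len, reverse=True)
```

A group's size then serves as its "frequently asked" count, and the group's representative query becomes the FAQ entry shown to teachers.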

The FAQs clearly indicate to teachers which specific concepts students are unfamiliar with (e.g. “does higher bond energy mean less stabil[ity]?” above) and, on a macro scale, which topics need work as a whole. Teachers can use this information to check for understanding during revision sessions and improve their teaching resources based on real student questions. If students are actively using Revel in their learning, educators can generate a report at any time to view their students' weaknesses and assist them immediately. Over time, they can track how student queries change to gauge the effectiveness of their teaching methodologies.

The key here is that this should inform teachers in their pedagogy, helping them understand not just what students got wrong on tests, but what they were confused about during the learning process.

Note: We do not provide individual student-specific information. For example, FAQs are generated over the entire corpus of queries from the student body. Revel has always been students-first, and we strictly do not compromise on the privacy of our users. For Revel Reports to be effective, students must first trust Revel as a reliable study tool to use it frequently over other tools like ChatGPT.

How This Impacts Our End Goal

Revel's vision has always been to help students study smarter, not harder. But when we ground ourselves in reality, we see that most educational inefficiencies worldwide don't originate from how students revise during solo study sessions. Instead, they're rooted in classroom dynamics: the disconnect between what teachers think students understand and what students actually struggle with, the timing mismatch between when students need help and when they receive feedback, and the lack of visibility into the learning process itself. By helping teachers teach more effectively, we can directly impact how students learn. Rather than disrupting the classroom environment, Revel aims to support both educators and learners in building more effective educational experiences together.

Thus, our hope is that Revel Reports bring us one step closer to this goal.

Study Smarter, Not Harder

Footnotes

  1. No, it isn’t as easy as plopping the data straight into an LLM. We wish it was.