How do I know? Because I’ve done it.
Blackboard published a post late in 2025 which concluded (emphasis is mine):
AI Agents don’t show up as separate apps or users—they look identical to normal student activity in the LMS. Because they operate at the web interaction and automation layers—filling in forms, clicking buttons, making API calls, etc.—they fall outside what the LMS itself can control. Many are completely invisible to the platform.
It is true that current/standard LMS log data that Blackboard collects cannot differentiate between users. This is as true for Student A and Student B as it is for Student A and Agent A. In fact, this is why we built Cursive (ensuring the right student is the learner is an essential foundation for assessment validity).
But this is untrue when log data are expanded.
Agents are not people. Your students are not robots. A student’s humanness is as relevant to differentiating users as an Agent’s cool efficiency. This ‘detection’ tool for agents in quizzes is not infallible or immutable. (But it did stand up to prompts to “act like a human, use the mouse, don’t click dead center of any buttons.”)
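To make the “dead center” point concrete, here is a minimal sketch of what that kind of signal could look like. This is an illustration only, not the plugin’s code (the plugin is a Moodle plugin; the function names, tolerance value, and bounding-box format below are assumptions): automated browsers often dispatch clicks at the exact center of an element’s bounding box, while human clicks scatter around it.

```python
# Hypothetical illustration of a "dead center click" signal.
# Box format assumed here: (left, top, width, height) in pixels.

def click_center_offset(click_x, click_y, box):
    """Distance of a click from the element's center, normalized by
    the element's half-diagonal (0.0 means dead center)."""
    left, top, width, height = box
    cx, cy = left + width / 2, top + height / 2
    dx, dy = click_x - cx, click_y - cy
    half_diag = ((width / 2) ** 2 + (height / 2) ** 2) ** 0.5
    return ((dx ** 2 + dy ** 2) ** 0.5) / half_diag

def looks_automated(offsets, tolerance=0.02):
    """Flag a set of clicks that are all (near) dead center."""
    return len(offsets) > 0 and all(o <= tolerance for o in offsets)
```

A single centered click proves nothing; it is the pattern across many clicks that separates scripted input from a human hand.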
What about Einstein and OpenClaw? Yes, those too.
How does this work (and what does it work for)?
I started working on this Agent Detection plugin several weeks ago to better understand the limitations of the Learning Management System, to test the truth of Blackboard’s article, and to explore other work out there.
- Moodle’s open source nature and well documented plugin ecosystem provided a great test bed.
- Claude provided the heavy lifting of coding the plugin.
- All testing and verification was done by a human (that includes a human review of the plugin before it can be released to the community).
To get started, it was important first to understand the ways we interact with learning activities on an LMS. As you navigate any page, you do so in uniquely human ways based on your computer, peripherals, and behavior (agents are not equally equipped). Capturing data from multiple AI browsers (Opera’s Neon, Perplexity Comet, ChatGPT Atlas) was a CPU- (and patience-) taxing endeavor which, for all the suggestions of efficiency, took a loonnnnnggg time.
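One cheap example of the kind of behavioral signal this capture work surfaces is timing. This is a hedged sketch, not the plugin’s implementation: humans produce irregular gaps between input events, while scripted automation tends toward near-constant intervals, so the coefficient of variation (standard deviation over mean) of inter-event gaps is one simple discriminator.

```python
# Hypothetical sketch: measure how machine-regular a stream of input
# events is. Low values suggest scripted, metronomic input.
from statistics import mean, stdev

def timing_regularity(timestamps_ms):
    """Coefficient of variation of the gaps between event timestamps.
    Returns None when there are too few events to judge."""
    gaps = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    if len(gaps) < 2:
        return None
    m = mean(gaps)
    return stdev(gaps) / m if m > 0 else None
```

A script clicking every 100 ms scores 0.0; a human’s uneven rhythm scores well above it. On its own this is weak evidence, which is why a detector would combine several such signals.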
Once we’d verified some of the Agents’ ‘behaviors’, it was straightforward to create ways to pick them out of all user behavior on a site. In the end, we settled on several metrics which can be configured together or in part to help identify if/when an agent was taking a quiz (or completing an assignment or discussion) and to build that confidence across a user’s session.
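The “build that confidence across a session” idea can be sketched as a running, weighted aggregate. Everything below is illustrative rather than the plugin’s own design: the metric names, weights, and threshold are assumptions, and the real plugin may combine its signals quite differently.

```python
# Hypothetical sketch: each configured metric emits a per-event score
# in [0, 1] (1 = agent-like); a weighted running average accumulates
# session-level confidence that an agent is at the keyboard.

class SessionConfidence:
    def __init__(self, weights):
        self.weights = weights      # e.g. {"click_offset": 2.0, "timing": 1.0}
        self.total = 0.0
        self.weight_sum = 0.0

    def observe(self, metric, score):
        """Fold one metric's score for one event into the session."""
        w = self.weights.get(metric, 1.0)
        self.total += w * score
        self.weight_sum += w

    @property
    def confidence(self):
        return self.total / self.weight_sum if self.weight_sum else 0.0

    def flagged(self, threshold=0.8):
        return self.confidence >= threshold
```

The advantage of a session-level aggregate over per-event rules is that no single centered click or regular interval triggers a flag; the evidence has to persist across the attempt.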
What does this Plugin do/not do?
This plugin does:
- listen for new events and log them
- not require any different behavior from the student
- work completely locally; no data is transferred to a third party
- integrate some flagging to evaluators and the administrator
- explain why a user was flagged (see below)
- identify Comet AI 100% of the time (so far, across dozens of tests over the last few weeks)
This plugin does not:
- capture biometric data
- account for tablet/touch screen use
- block the user from progressing or log a user out (not yet, though this is possible and is in testing)
A few screenshots are provided below.



If you’re interested in learning more, contact us.
(first published at This Isn’t Fine Substack)
