NHCS says it's piloting an AI alert system to monitor student safety
New Hanover County Schools will pilot an alert system using artificial intelligence to detect threats to student safety on school devices and Google accounts.
The Lightspeed Systems Alert service scans activity on NHCS devices and Google accounts for warning signs. When activity is flagged, a human review board at Lightspeed evaluates the actual risk and then contacts an administrator at the school where the flag occurred.
Flags can include self-harm, cyberbullying, suicide, and school violence. The school district says the system does not store any student data; instead, it monitors things like emails and Google Docs for key terms.
Schools choose which administrator gets notified. Once they're alerted, the process involves “traditional methods of investigation” into the student’s safety.
Brian Lantz, director of network security at NHCS, said the system is basically delivering a message.
“We’re saying: ‘Hey, this is happening, whether that’s suicidal thoughts or violent tendencies, whatever it might be, here’s the information,’” he said. “And the technical side of that is kind of null at that point.”
The contract does allow Lightspeed’s review board to contact government officials about certain situations at its “sole discretion.”
"If a Lightspeed Safety Specialist believes there is imminent threat of harm to a person, the Safety Specialist will call contacts on an escalation list designated by the district. If none of the contacts on the list can be reached, the Safety Specialist will contact a local authority such as law enforcement. As an example, if the Lightspeed Safety Specialist had reason to believe a school shooting was imminent and a district leader could not be reached, the specialist would contact local law enforcement to intervene and protect students," according to Cheryl Black, Lightspeed's vice-president of marketing.
Lantz said that even if students open their NHCS Google accounts on a personal device, the only data the system has access to is what’s stored in the cloud on that Google account.
Students aren’t allowed to open incognito windows, which Lantz said could affect the alert system. Staff will still know which account opened the incognito window, and students will still be restricted from accessing social media sites — a filter that's already in place.
Once launched, the pilot will run through the end of next school year. At that point, staff said, they will reevaluate whether to continue it.
The pilot's cost of more than $200,000 is covered by federal COVID-19 relief funding. Staff said many other districts have implemented similar systems.
Some parents raised concerns that the system would be used more for stricter disciplinary measures than for student safety. Staff said discipline is not the function of the program; student safety is the point.
As to how the AI will differentiate between a real threat and, for example, a search for a research paper, a Lightspeed spokesperson said over email that alerts include context, such as site history, for the review board.
The review board can also determine if an alert is not a threat.
The spokesperson also said the AI scans the following: "recent web searches, browser history, images, email, Microsoft and Google productivity suites including Google and Word docs and messaging, specific sites like YouTube, and other online activities."