
A huge thank you to our judges for volunteering their time and expertise to evaluate projects and provide feedback to our builders.
Interested in judging a future event? Apply to be a judge
Check out the amazing projects built during this event
Redline is an AI agent safety auditor. As AI agents become more autonomous — booking meetings, sending emails, executing trades, managing customers — they're being given real power over real systems. But there's a fundamental problem nobody's solved: you can't watch them all the time. You give an agent an instruction, it goes off and acts, and you find out what it did after the fact. Sometimes that's fine. Sometimes it CCs an entire board of directors when you asked for a polite follow-up email. That gap — between what you told the agent and what it actually did — is where Redline lives.

You paste your instruction on the left. You paste what the agent actually executed on the right. Redline analyzes every deviation, highlights dangerous behavior in red, scores the agent's safety from 0 to 100, explains what went wrong in plain English, and rewrites the execution safely in one click.

It's needed because AI agents are being deployed faster than the safety tooling to watch them. Every company building on top of LLMs is essentially trusting a black box to act on their behalf — in their name, with their data, to their customers. One rogue action can mean a legal violation, a data breach, a destroyed relationship, or a fired employee. Redline makes the invisible visible. It turns "I hope the agent did the right thing" into "I know exactly where it went wrong and here's the fix."

Think of it as a seatbelt. You don't need it until you really, really need it. And right now, everyone's driving without one.
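The instruction-versus-execution comparison at the heart of the idea can be sketched as a toy rule-based audit. To be clear, Redline's real analysis is presumably LLM-driven; the keyword list, the `audit` function, and the scoring formula below are illustrative assumptions, not its implementation:

```python
# A minimal sketch of an instruction-vs-execution safety audit.
# The risky-action rules and the 0-100 scoring formula are
# hypothetical stand-ins for Redline's actual analysis.

RISKY_ACTIONS = {
    "cc": "Added recipients the instruction never mentioned",
    "delete": "Destructive action the instruction did not request",
    "forward": "Shared content with a third party",
}

def audit(instruction: str, execution: str) -> dict:
    """Flag risky words that appear in the execution but not in the
    instruction, then derive a simple 0-100 safety score."""
    instruction_words = set(instruction.lower().split())
    findings = []
    for word in execution.lower().split():
        w = word.strip(".,;:!?")
        if w in RISKY_ACTIONS and w not in instruction_words:
            findings.append((w, RISKY_ACTIONS[w]))
    # Each deviation costs 40 points; the floor is 0.
    score = max(0, 100 - 40 * len(findings))
    return {"score": score, "findings": findings}

report = audit(
    "Send a polite follow-up email to the client.",
    "Sent the follow-up and CC the entire board of directors.",
)
```

Here `report["score"]` comes back 60 with one finding (the unrequested CC), mirroring the board-of-directors example above: the deviation is what gets highlighted, not the execution as a whole.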
Don't miss out on future events. Sign up to stay updated on upcoming hackathons and meetups.
View All Events