Profile

Steven Sporen

Researching how AI systems behave in practice, with an emphasis on prompt injection, tool misuse, sensitive data exposure, unsafe autonomy, evaluation, and the governance and compliance questions that arise when models are connected to real systems. I am also a founder building AI operational software for compliance and financial tracking.

About

AI Researcher, Responsible AI and Compliance

My work sits at the intersection of AI research, security, responsible AI, and modern AI application design. The emphasis is on system-level behavior rather than model hype: prompts, retrieved content, tools, identities, permissions, memory, downstream actions, and the policy and compliance expectations around them. I am also building AI operational software for compliance and financial tracking, which keeps the work grounded in operational workflows, reporting needs, and how AI systems have to perform in real business environments. The goal is to make AI systems easier to understand through architecture, attack paths, evaluation patterns, governance considerations, and current reference material.

Current Focus

Focused on AI research across model behavior, AI security, responsible AI controls, governance, and compliance for LLM and agent systems, while building AI operational software for compliance and financial tracking.

Remote / United States · AI research · Responsible AI · Open to relevant roles

Focus Areas
  • AI research across copilots, assistants, evaluations, and agentic workflows
  • Founder building AI operational software for compliance and financial tracking
  • Prompt injection and jailbreak analysis across direct, indirect, and multi-step attack paths
  • AI security research covering tool misuse, memory, trust boundaries, and downstream actions
  • Responsible AI review covering safety, misuse pathways, and control effectiveness
  • Compliance-aware assessment of AI systems, including governance, auditability, and policy alignment
  • Prompt engineering review for instruction hierarchy, guardrails, decomposition, and isolation
  • Agent security analysis covering tools, memory, permissions, identity, and action boundaries
  • Curated research and commentary for AI practitioners, product teams, and security leaders

Working Style
  • I favor concrete system behavior and operational risk over abstract model capability debates
  • I connect security findings to governance, control design, and compliance expectations
  • I link to primary sources and add short notes on why they matter
  • I treat prompts, tools, memory, identity, and action boundaries as one attack surface

Contact

Reach out if the work here is relevant

If you are hiring, building in this space, or would like to discuss AI research, responsible AI, security, or compliance, send a message.