Agent Engineer
London, UK / Bay Area, CA
We encourage people of all backgrounds and experience levels to apply
As our Agent Engineer, you'll spearhead the development of our core agent architecture and capabilities. You'll work with a multidisciplinary team to design and implement autonomous systems that can reason, plan, and act effectively.
As an Agent Engineer, you'll:
Collaborate with the founders and team, bringing technical expertise to new ideas and solutions
Design and implement modular agent architectures with strong emphasis on reliability and verifiability
Develop orchestration and reasoning systems for autonomous decision-making
Create mechanisms for tool use, memory, and learning within agent systems, including the Model Context Protocol (MCP); a tool-use sketch follows this list
Build evaluation frameworks to measure agent performance and reliability; a minimal harness sketch also follows this list
Integrate with modern LLM APIs, develop prompt engineering techniques, and optimise and measure performance
Work with our customers to understand their challenges and needs
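To give a flavour of the tool-use work, here is a minimal sketch of a single tool-use round trip against the Anthropic Messages API in TypeScript. The get_weather tool, the model name, and the user question are illustrative assumptions, not part of our stack.

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function main() {
  // Ask the model a question while advertising a single (hypothetical) tool.
  const response = await client.messages.create({
    model: "claude-sonnet-4-5", // illustrative model name
    max_tokens: 1024,
    tools: [
      {
        name: "get_weather", // hypothetical tool, for illustration only
        description: "Return the current weather for a given city.",
        input_schema: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    ],
    messages: [{ role: "user", content: "What's the weather in London?" }],
  });

  // When the model decides to call a tool, the response contains a tool_use
  // block that an agent loop would execute and feed back as a tool result.
  for (const block of response.content) {
    if (block.type === "tool_use") {
      console.log(`Model requested ${block.name} with input`, block.input);
    }
  }
}

main();
```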
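And as a sketch of what we mean by evaluation frameworks: a deliberately tiny harness that runs an agent over fixed cases and reports a pass rate. The EvalCase shape and pass/fail scoring are assumptions for illustration; real agent evals are richer (trajectories, tool traces, cost).

```typescript
// A deliberately tiny eval harness; every name here is illustrative.
type EvalCase = {
  input: string;
  check: (output: string) => boolean; // did the agent's answer pass?
};

async function runEvals(
  agent: (input: string) => Promise<string>,
  cases: EvalCase[],
): Promise<number> {
  let passed = 0;
  for (const c of cases) {
    const output = await agent(c.input);
    if (c.check(output)) passed++;
  }
  return passed / cases.length; // fraction of cases the agent got right
}
```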
What we're looking for
Strong technical experience with TypeScript
Experience with LLM APIs (OpenAI, Anthropic, etc.)
Familiarity with agent architectures, limitations, and trade-offs
Familiarity with autonomous decision-making, reasoning, and guardrails
Comfort with rapid prototyping and iterative development
Excellent problem-solving and debugging skills
May be beneficial:
Background in reinforcement learning or autonomous systems
Understanding of evaluation methodologies for agent systems
Experience with multi-agent systems
If you're passionate about pushing the boundaries of what autonomous AI agents can do and want to help shape this emerging technology, we'd love to hear from you.
Interested in this role or know someone who may be?
Send a message to founders@stuntdouble.io