AI Builder

You build AI products for clients. You help client teams become AI-native. The balance shifts week to week. This is what the role looks like.

01

What This Role Is

AI Builders sit at the intersection of Tomoro's two practices: AI Engineering and Human Productivity. Some weeks you're shipping a prototype. Other weeks you're embedded with a client team, finding the workflow that AI should own. You need to be good at both.

You are not a software engineer. You are not a management consultant. You are both and neither. You use AI agents as your primary tool for building, and you use consulting instincts to work out what's worth building in the first place.

02

Responsibilities

Building AI products and prototypes

  • Take a client problem from brief to working prototype, fast — days, not months
  • Use AI-assisted development tools (Claude Code, Cursor, Codex) as your default way of building
  • Work across modalities: LLMs, image generation, voice, video, and whatever comes next
  • Make product and design decisions — what should this be, how should it feel, is it good enough to put in front of a client
  • Ship deployed prototypes, not slide decks
  • Extract and apply brand identity from public sources and client materials

Helping client teams become AI-native

  • Embed with client teams to observe how work actually happens — not how they describe it
  • Find high-value workflow opportunities: where AI removes work, improves work, or makes new work possible
  • Run discovery conversations that get to the root of how someone spends their time
  • Design and ship reusable workflow assets: prompt patterns, templates, Custom GPTs, quality checks
  • Coach teams through behaviour change — from "I should use AI more" to "I default to AI"
  • Know when something should stay as off-the-shelf tooling and when it needs a custom build

Working as a consultant

  • Run client discovery sessions and shape ambiguous problems into clear next steps
  • Present to senior stakeholders — marketing directors, heads of people, C-suite
  • Build trust quickly with people who may be sceptical, overwhelmed, or both
  • Write clearly — briefs, recommendations, async updates
  • Work autonomously: find what's needed, go get the information, make it happen

03

Competencies

1

AI-native working

You default to AI. Not as a novelty, but as how you work. You use agents to research, build, write, analyse, and ship. You know when the agent is helping and when it's hallucinating. You iterate fast, verify intelligently, and treat outputs as drafts to shape — not answers to accept.

What good looks like
  • You take an idea to a deployed prototype in hours, not days
  • You use agent skills, custom instructions, and tool integrations fluently
  • You have strong opinions about which models and tools work best for which tasks
  • You're constantly trying new things and have a feel for what's state of the art

2

Taste and product thinking

You care about the quality of what you make. Not just whether it works, but whether it's good — whether someone using it would feel something. You make intentional design choices. You know when to stop.

What good looks like
  • Your prototypes feel crafted, not thrown together (even if they were thrown together)
  • You can explain why you made the choices you made
  • You understand brand — how a company sounds, looks, and feels — and apply that to what you build
  • You have a point of view on what makes an experience beautiful

3

Consulting and problem shaping

You can walk into a room where nobody has a clear brief and leave with a plan. You ask sharp questions. You listen for what people aren't saying. You find the highest-value problem, not just the most obvious one.

What good looks like
  • You run discovery conversations that uncover real workflow problems, not surface-level complaints
  • You map opportunities to concrete outcomes: time saved, quality improved, new work unlocked
  • You present recommendations specific enough to act on
  • Client teams want to work with you again

4

Gen AI landscape knowledge

You know what's out there. Not just LLMs — image, video, 3D, voice, music, world models. You've tried them. You have opinions. You can connect that knowledge to real applications: not "I know about Sora" but "here's what you could build with it for this client."

What good looks like
  • You can name the best model for a given task and explain why
  • You follow the space closely and notice when something meaningfully shifts
  • You've built things with multiple modalities, not just text
  • You explain technical concepts to non-technical clients without dumbing them down

5

Autonomy and resourcefulness

The answer is rarely handed to you. You dig, hustle, and work it out on your own. When you hit a wall, you find another way around it. You don't wait for permission or perfect information.

What good looks like
  • You research a brand, a methodology, or a technical approach from scratch with minimal guidance
  • You ship without needing constant check-ins
  • When something doesn't work, you try something else — quickly
  • You're comfortable with ambiguity and make good calls when the brief is thin

04

How We Assess

Two stages. Each covers different ground. Together they test the full range.

Stage 1

Take-home assignment (~4 hrs)

Choose one of two briefs. Brief A tests building (image generation, brand craft, deployed prototype). Brief B tests consulting (conversational design, methodology extraction, Custom GPT). Both test taste, AI-native working, and autonomy.

Stage 2

Interview (conversation)

If you chose Brief A, we talk consulting — discovery, problem shaping, client thinking. If you chose Brief B, we talk gen AI — models, modalities, what's possible, how you hack things together.

Everyone gets assessed on both building and consulting. The order is yours.