About the Role
HeartStamp is building a generative AI platform for personalized digital expression by combining cutting-edge models, LoRA customization, and a marketplace to turn creative intent into beautifully rendered digital content and high-fidelity, print-ready media. We are launching in the USA, Canada & UK in Q1 2026, with plans for rapid worldwide expansion to follow.
We’re hiring a Technical Lead – Platform & AI Infrastructure to architect and build the backend systems that power our content generation, personalization, and monetization engines. You’ll own the development of our scalable cloud infrastructure, modular APIs, credit-based microtransaction system, and the automated pipeline for generating print-ready outputs—forming the technical backbone of the company. You’ll also guide frontend architecture and conduct full-stack code reviews to ensure engineering consistency and performance across the platform.
This is a hands-on, foundational role with high autonomy and influence. You’ll collaborate directly with the CPO and founder to shape our early product, make pragmatic technology choices, and lay the groundwork for a high-trust engineering culture. You’ll also work closely with the AI/ML & Workflow Engineer to ensure our generative stack is fully integrated, secure, and production-ready.
We’re looking for someone with an entrepreneurial mindset who’s excited to move quickly, solve hard problems, and help grow a team around them.
What You’ll Do
Backend & Platform Architecture
- Design and implement the modular, service-oriented backend for user accounts, token-based credit balances, asset lifecycle management, and generative task orchestration
- Build and maintain a robust API layer to serve web, mobile, and internal generative services
- Architect the Digital Asset Management (DAM) system, linking prompt metadata, LoRA IDs, SKUs, image outputs, and print-ready PDFs
- Define the core logic for credit throttling, transaction logging, and auditability (illustrated in the sketch after this list)
- Implement and oversee the automated pipeline for generating and managing print-ready outputs using HTML-to-PDF engines, including handoff to fulfillment
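For illustration only, here is a minimal sketch of the kind of credit-ledger logic described above. The names (`CreditLedger`, `LedgerEntry`) are hypothetical, not part of an existing HeartStamp codebase, and a production version would sit on a real database and a payment processor rather than in-memory state.

```python
# Illustrative only: an in-memory credit ledger showing the shape of
# debit/throttle/audit logic this role would own. All names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from uuid import uuid4


@dataclass
class LedgerEntry:
    entry_id: str
    user_id: str
    delta: int          # positive = credit purchase, negative = generation spend
    reason: str         # e.g. "purchase", "image_generation", "refund"
    created_at: datetime


@dataclass
class CreditLedger:
    balances: dict[str, int] = field(default_factory=dict)
    entries: list[LedgerEntry] = field(default_factory=list)  # append-only audit trail

    def record(self, user_id: str, delta: int, reason: str) -> LedgerEntry:
        balance = self.balances.get(user_id, 0)
        if delta < 0 and balance + delta < 0:
            # throttle: never allow a balance to go negative
            raise ValueError("insufficient credits")
        entry = LedgerEntry(
            entry_id=str(uuid4()),
            user_id=user_id,
            delta=delta,
            reason=reason,
            created_at=datetime.now(timezone.utc),
        )
        self.balances[user_id] = balance + delta
        self.entries.append(entry)
        return entry


if __name__ == "__main__":
    ledger = CreditLedger()
    ledger.record("user-123", 100, "purchase")          # top-up via payment processor
    ledger.record("user-123", -5, "image_generation")   # spend on a generation task
    print(ledger.balances["user-123"])                  # -> 95
```

The append-only entry list is what makes transaction history auditable; the balance map is just a cached view of it.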
AI Workflow Integration & Infrastructure
- Collaborate with the AI/ML & Workflow Engineer to integrate AI inference pipelines (e.g., ComfyUI workflows, LoRA chaining, ControlNet/IPAdapter support) into scalable backend services
- Architect a flexible AI service abstraction layer for routing requests to the appropriate model (e.g., SD3.5, SDXL Turbo), managing inference queues, and tracking model usage (see the sketch after this list)
- Define the runtime environment for GPU-accelerated inference, container orchestration, and resource optimization (e.g., Docker, EKS, or similar)
- Ensure all AI services are deployed securely and reliably within the broader platform architecture
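Purely as a sketch, a toy version of the model-routing abstraction could look like the following. The model keys, endpoints, and `ModelRouter` class are assumptions for illustration, not an existing API; a real implementation would dispatch to GPU workers (e.g., ComfyUI backends) rather than local queues.

```python
# Illustrative only: a toy routing layer that maps a requested model key to a
# backend endpoint and enqueues the job. Endpoints and model names are placeholders.
import queue
from dataclasses import dataclass

MODEL_REGISTRY = {
    # hypothetical mapping of model identifiers to inference endpoints
    "sd3.5": {"endpoint": "http://inference-sd35:8188", "max_batch": 4},
    "sdxl-turbo": {"endpoint": "http://inference-sdxl-turbo:8188", "max_batch": 8},
}


@dataclass
class GenerationJob:
    job_id: str
    model_key: str
    prompt: str
    lora_ids: list[str]


class ModelRouter:
    """Routes generation requests to per-model queues and tracks usage counts."""

    def __init__(self) -> None:
        self.queues = {key: queue.Queue() for key in MODEL_REGISTRY}
        self.usage = {key: 0 for key in MODEL_REGISTRY}

    def submit(self, job: GenerationJob) -> str:
        if job.model_key not in MODEL_REGISTRY:
            raise ValueError(f"unknown model: {job.model_key}")
        self.queues[job.model_key].put(job)
        self.usage[job.model_key] += 1
        return MODEL_REGISTRY[job.model_key]["endpoint"]


if __name__ == "__main__":
    router = ModelRouter()
    endpoint = router.submit(
        GenerationJob("job-1", "sdxl-turbo", "a birthday card, watercolor", ["lora-floral-01"])
    )
    print(endpoint, router.usage)
```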
Infrastructure & DevOps
- Lead the initial cloud setup using AWS or GCP (final decision TBD), including containerization and orchestration for backend and AI services
- Establish CI/CD pipelines to support rapid iteration, secure deployments, and high uptime
- Set up secrets management, logging, monitoring, and cost tracking for the entire system
- Ensure infrastructure is cost-efficient, modular, and ready for scale
Security & Legal Readiness
- Implement role-based access controls, secure credential storage, audit logs, and token expiration policies (see the sketch after this list)
- Ensure architectural alignment with IP and content safety guidelines, including moderation, user input attribution, and DMCA compliance
- Coordinate with legal counsel and AI providers on licensing enforcement, attribution requirements, and acceptable use policies
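As a rough, hypothetical sketch of the role-based access and token-expiration policies mentioned above (in production this would sit behind a dedicated auth provider and signed tokens, not an in-process check):

```python
# Illustrative only: a minimal role check and token-expiry guard.
# Role names and permissions are placeholders.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

ROLE_PERMISSIONS = {
    "admin": {"manage_users", "view_audit_log", "generate", "refund_credits"},
    "creator": {"generate", "publish_asset"},
    "support": {"view_audit_log", "refund_credits"},
}


@dataclass
class SessionToken:
    user_id: str
    role: str
    issued_at: datetime
    ttl: timedelta = timedelta(hours=12)

    def is_expired(self) -> bool:
        return datetime.now(timezone.utc) > self.issued_at + self.ttl


def authorize(token: SessionToken, permission: str) -> None:
    """Raise PermissionError unless the token is valid and the role grants the permission."""
    if token.is_expired():
        raise PermissionError("token expired")
    if permission not in ROLE_PERMISSIONS.get(token.role, set()):
        raise PermissionError(f"role '{token.role}' lacks '{permission}'")


if __name__ == "__main__":
    token = SessionToken("user-123", "creator", issued_at=datetime.now(timezone.utc))
    authorize(token, "generate")            # allowed
    try:
        authorize(token, "view_audit_log")  # denied for creators
    except PermissionError as exc:
        print(exc)
```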
Leadership & Collaboration
- Serve as the engineering lead for the MVP build, responsible for system architecture and delivery velocity
- Collaborate cross-functionally with the CPO, founder, and AI/ML engineer to align tech stack decisions with product milestones
- Mentor future hires and establish best practices for backend, frontend, DevOps, and ML integration
- Participate in roadmap planning, sprint ceremonies, and product/infra reviews
What You Bring
- 5+ years of experience as a full-stack or backend engineer, including ownership of architecture in early-stage or startup environments
- Proficiency in building API-first applications using modern frameworks (e.g., Python/FastAPI, Node.js, Go)
- Solid experience with cloud infrastructure (AWS or GCP) and container orchestration (Docker, Kubernetes, EKS/GKE)
- Familiarity with GPU-based systems, inference orchestration, and integrating third-party AI toolchains
- Experience building token-based credit systems and integrating payment processors (e.g., Stripe)
- A practical, collaborative mindset—you enjoy enabling other builders through thoughtful systems design
- Strong code review and mentorship skills across backend, infra, and frontend disciplines
- Bonus: Experience designing AI routing logic or building infrastructure for generative model workflows
Location & Team Structure
- Remote-first, with a preference for U.S. time zones
- Reports directly to the Chief Product Officer (CPO)
- Will manage technical direction and engineering best practices for the early team
- Collaborates closely with the AI/ML Engineer and founder
Why Join Now?
- Architect the system that powers a modular, creator-friendly AI design platform
- Own core infrastructure and integrations for a company built on AI-first workflows
- Build with speed and autonomy—without compromising quality or security
- Help define an expressive, legally sound system for next-generation visual personalization
Apply Now