We are building the AI Operating System for data: a dynamic ontology engine that makes sense of all forms of structured and unstructured data — and a new type of AI worker that executes autonomous workflows with transparent reasoning across real tools.
This is AI middleware designed to help AI understand data the way data teams do: to reason, plan, and act inside real systems.
We are already working with leading companies across fintech, travel, insurance, gaming, and logistics, have paying clients, and have raised funding to scale.
The Role
Join a small, high-caliber engineering team and help build the core infrastructure powering our agentic systems. This is a hands-on, high-impact role for someone excited to ship quickly, solve complex problems, and grow into new responsibilities as we scale.
What You’ll Work On
- Agent Infrastructure: Build orchestration, memory, and execution logic for scalable autonomous systems using Python and open-source LLM frameworks (LangChain, LlamaIndex, etc.).
- Ontology & Semantic Data Layers: Contribute to automated systems that connect structured and unstructured data into a unified “data brain.”
- Reasoning Engines: Help develop pipelines that combine LLMs with symbolic reasoning and long-term memory.
- Data Infrastructure: Build and maintain metadata stores, connectors, ETL, and vectorized storage (Chroma, Pinecone, Weaviate).
- Full-Stack Contributions: Ship APIs and lightweight frontends when needed to move fast and gather customer feedback.
What We’re Looking For
- 3+ years in backend, infrastructure, or AI systems engineering.
- Strong Python expertise; you’ve shipped production systems using modern data/AI tooling.
- Familiarity with LLM frameworks (LangChain, LlamaIndex, Haystack) and vector DBs.
- Experience with data infrastructure (dbt, BigQuery, Airflow) and cloud platforms (GCP or AWS).
- Bonus: Background in knowledge graphs, semantic data modeling, or symbolic AI.
- Startup-Ready: You thrive in ambiguity, move quickly, and enjoy working across the stack.
- Builder Mindset: You use AI tools daily (Cursor, Windsurf, Cody, etc.) to multiply your output.
Why Join Us
- Frontier Tech: Work on core AI infrastructure at the edge of what’s possible.
- Immediate Impact: Small team, big problems — what you build will ship fast and matter immediately.
- Career Growth: Expand responsibilities and shape the direction of our systems as we scale.
- Backed by Conviction: Recently raised $3.1M seed round from top European and US funds.
Ready to apply?
Join Mindroiu Serban-Alexandru PFA and take your career to the next level!