Prometa.ai is a rapidly growing technology company developing deep learning-based recommendation systems and agentic AI solutions powered by Large Language Models (LLMs). We run active projects across a range of industries and continue to strengthen our pioneering position in the field, offering AI-powered products and consultancy services tailored to sectors such as finance, retail, healthcare, insurance, and telecommunications.
At PrometaAI, we are building next-generation AI products using the power of LLMs, RAG pipelines, and agentic workflows. We're looking for a Generative AI Engineer who can bridge the gap between research and production — someone who knows how to turn cutting-edge models into scalable, reliable systems.
Key Responsibilities:
- Build and deploy end-to-end LLM-powered applications, including RAG, QA, and agent-based flows.
- Implement multi-agent systems using LangGraph, LangChain, and LangFlow.
- Optimize prompts, tool usage, memory, and reasoning chains for production use.
- Integrate GenAI components with existing APIs and infrastructure.
- Monitor performance with tools such as Langfuse, and evaluate cost, latency, and accuracy.
- Work closely with the Data Science and MLOps teams to streamline deployments and lifecycle management.
- Handle vector search integration (e.g., FAISS, Qdrant), chunking logic, and embedding evaluation.
- Participate in scalable cloud-based deployments (e.g., GCP, Azure, OpenShift, Kubernetes).
Required Skill Sets:
- Hands-on experience with LLM applications, including prompt design and tool-augmented agents.
- Strong Python skills; ability to rapidly prototype and productionize code.
- Proficiency in frameworks like LangChain, LangGraph, LangFlow, and Langfuse.
- Familiarity with RAG pipelines, vector DBs, and document retrieval strategies.
- Experience working with Docker, Kubernetes, OpenShift, and cloud platforms.
- Exposure to MLOps tools such as MLflow, Airflow, and Kubeflow.
- Understanding of performance tuning, error handling, and latency optimization.
- Bonus: Experience with Speech-to-Text / Text-to-Speech.
- Bonus: Awareness of AI security, privacy, and interpretability.
What We Offer:
- Opportunity to work on leading agentic AI platforms in Turkey
- A forward-thinking, innovation-driven, and growth-oriented organization
- Hands-on experience with scalable AI solutions across multiple industries
- Flexibility to work remotely
- Ongoing support for professional development, including certifications and training
Ready to apply?
Join Prometa AI and take your career to the next level!

