The AI Architect defines and governs the Cognitive AI architecture for enterprise platforms, enabling ministries, system integrators, and internal teams to consume AI capabilities as standardized services. The role designs cloud-agnostic patterns for model inferencing, fine-tuning, RAG/knowledge services, and agentic capabilities, ensuring multi-tenancy, security, observability, and cost controls are built in by design.
Working with platform/cloud, data, security, and vendor teams, the AI Architect sets reference architectures, model governance standards, and performance targets, and provides assurance through design reviews, PoCs, and production readiness gates. The role bridges research-grade AI with production-grade platform engineering.
Duties & Responsibilities
- Define Cognitive AI reference architecture: inference gateway, model serving backends, RAG/KB, fine-tune pipelines, and agent tooling.
- Establish model lifecycle governance: model/dataset cards, approvals, evaluation gates, canary/rollback strategy, and versioning.
- Define multi-tenant AI patterns: isolated workspaces, per-tenant endpoints/keys, quotas, dedicated GPU/TPU options, and data boundary enforcement.
- Specify inferencing service requirements: API compatibility, routing/fallback, batching/caching, safety filters, SLAs/SLOs.
- Define RAG architecture patterns: ingestion, chunking, embeddings, vector indexes, retrieval policies, and grounded responses with citations.
- Define the fine-tuning approach (LoRA/PEFT), orchestration, artifact management, provenance, and promotion pipelines.
- Ensure safety and compliance: guardrails, PII redaction, prompt governance, content filters, and audit trails; coordinate with cybersecurity/privacy teams.
- Define end-to-end observability: metrics/logs/traces/audits for training, inference, and pipelines; anomaly detection and alerting requirements.
- Provide technical assurance for vendor deliverables: design reviews, performance testing plans, acceptance criteria, and operational readiness.
- Support enablement: SDK guidance, sample implementations, documentation, and “golden path” onboarding for developers and integrators.
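The RAG pattern referenced in the duties above (ingestion, chunking, embeddings, a vector index, and retrieval that supports grounded, cited responses) can be sketched end to end in miniature. Everything below is an illustrative assumption, not part of this role's actual stack: the bag-of-words "embedding" and flat in-memory "index" merely stand in for a real embedding model and vector database.

```python
import math
from collections import Counter

def chunk(text: str, size: int = 8) -> list[str]:
    """Split a document into fixed-size word chunks (toy chunking policy)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words term-frequency vector.
    A real pipeline would call an embedding model here."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_index(docs: dict[str, str]) -> list[tuple[str, str, Counter]]:
    """Ingest documents into a flat in-memory 'vector index' of
    (source_id, chunk_text, embedding) triples."""
    return [(doc_id, c, embed(c)) for doc_id, text in docs.items() for c in chunk(text)]

def retrieve(index: list[tuple[str, str, Counter]], query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the top-k chunks with their source ids, so a grounded
    answer can cite which document each passage came from."""
    qv = embed(query)
    ranked = sorted(index, key=lambda e: cosine(qv, e[2]), reverse=True)
    return [(doc_id, text) for doc_id, text, _ in ranked[:k]]

# Hypothetical two-document corpus for illustration only.
docs = {
    "policy.md": "Tenants are isolated by workspace and each tenant gets its own API key",
    "ops.md": "Inference requests are batched on the GPU and cached by prompt prefix",
}
index = build_index(docs)
hits = retrieve(index, "how are tenants isolated", k=1)
print(hits[0][0])  # prints "policy.md": the citation for the best-matching chunk
```

In a production design, each stage above becomes a governed service boundary: ingestion and chunking policies are versioned, embeddings and indexes carry provenance, and the retrieval step returns citations that the generation layer must surface verbatim.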
Skills & Abilities
- Strong understanding of modern AI architectures: LLMs, embeddings, RAG, fine-tuning, evaluation, and guardrails.
- Ability to translate AI requirements into scalable, secure, cloud-agnostic platform designs.
- Knowledge of model serving performance techniques: GPU batching, KV/prompt caching, autoscaling, latency optimization.
- Strong governance mindset: evaluation, safety, provenance, and auditability in regulated environments.
- Excellent communication skills to align research, engineering, security, and business stakeholders.
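Two of the serving-performance techniques named above, GPU batching and KV/prompt caching, can be illustrated with a toy sketch. The `fake_backend` function and all names here are assumptions for illustration: the dictionary cache loosely mirrors KV-cache reuse for shared prompt prefixes, and real servers (e.g. vLLM) batch continuously and flush on latency deadlines, not only on batch size.

```python
# Counter to show how many expensive "GPU" calls each technique saves.
calls = {"backend": 0}

def fake_backend(batch: list[str]) -> list[str]:
    """Pretend model server: one call handles a whole batch of prompts."""
    calls["backend"] += 1
    return [p.upper() for p in batch]

def microbatch(prompts: list[str], max_batch: int = 4) -> list[str]:
    """Group prompts into fixed-size batches so per-call overhead is
    amortized across requests (toy stand-in for GPU batching)."""
    out: list[str] = []
    for i in range(0, len(prompts), max_batch):
        out.extend(fake_backend(prompts[i:i + max_batch]))
    return out

_prefix_cache: dict[str, list[str]] = {}

def cached_generate(prompt: str) -> list[str]:
    """Prompt cache: identical prompts reuse the earlier result,
    loosely analogous to KV-cache reuse for shared prefixes."""
    if prompt not in _prefix_cache:
        _prefix_cache[prompt] = fake_backend([prompt])
    return _prefix_cache[prompt]

# Eight prompts at batch size 4 cost 2 backend calls instead of 8.
microbatch([f"q{i}" for i in range(8)])
print(calls["backend"])  # prints 2
```

The architectural point is that these optimizations live behind the inference gateway, so tenants see stable APIs and SLOs while the platform tunes batching windows and cache policies independently.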
Education & Background
- Bachelor’s degree in Computer Science, Information Technology, Cybersecurity, or a related field; Master’s degree highly preferred.
- 7+ years in AI/ML engineering or architecture, including production deployments of LLM/RAG systems.
- Experience with Kubernetes-based serving and MLOps practices (registry, CI/CD, reproducibility, monitoring).
- Experience with public cloud AI stacks and accelerator infrastructure (GPUs; TPUs advantageous).
- Experience defining AI governance/safety controls and operating models (approvals, evaluations, incident response for AI).
- Preferred: experience in government, telco, or other regulated domains and multi-vendor delivery environments.
Preferred Tools
- Model serving & orchestration: vLLM, KServe/Triton, NVIDIA NIM, Ray/Airflow/Prefect; MLflow (registry)
- RAG & vector: PGVector/Weaviate/Elasticsearch, LangChain/LlamaIndex-style frameworks, document pipelines
- Observability & governance: OpenTelemetry, Dynatrace/Datadog, evaluation harnesses, SBOM/attestation, policy-as-code
Soft Skills
- Pragmatic decision-making balancing accuracy, latency, cost, and compliance
- Strong facilitation of architecture reviews and technical alignment workshops
- Ownership mindset with delivery focus across multiple vendors
- Ability to explain AI concepts clearly to non-AI stakeholders (risk, value, limitations)
- Continuous improvement mindset and curiosity for emerging AI approaches and standards
Join Starlink Qatar and take your career to the next level!