TATWEER MIDDLE EAST AND AFRICA L.L.C
Machine Learning Specialist
United Arab Emirates
Full-time · Design, Consulting +1

Key Responsibilities

Computer Vision Pipeline Development:

  • Design and implement real-time CV pipelines for object detection, tracking, and classification
  • Build multi-object tracking systems across camera feeds with re-identification and trajectory forecasting
  • Develop preprocessing pipelines for video streams (frame extraction, normalization, augmentation) with error handling and backpressure mechanisms (a brief example follows this list)
  • Implement annotation workflows and active learning loops to continuously improve model quality
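
A minimal sketch of the kind of preprocessing-with-backpressure pipeline described above, using OpenCV and a bounded queue.Queue. The stream URL, target resolution, and normalization scheme are illustrative assumptions rather than project specifics:

    # Hypothetical sketch: frame extraction feeding a bounded queue so that a slow
    # consumer applies backpressure to the reader instead of dropping frames silently.
    import queue
    import threading

    import cv2
    import numpy as np

    VIDEO_SOURCE = "rtsp://camera.example/stream"                 # assumed placeholder source
    frames: "queue.Queue[np.ndarray]" = queue.Queue(maxsize=64)   # bounded => backpressure


    def read_frames() -> None:
        """Decode frames and block when the queue is full."""
        cap = cv2.VideoCapture(VIDEO_SOURCE)
        try:
            while True:
                ok, frame = cap.read()
                if not ok:                 # end of stream or decode error
                    break
                frames.put(frame)          # blocks while the consumer catches up
        finally:
            cap.release()


    def preprocess(frame: np.ndarray) -> np.ndarray:
        """Resize, convert BGR to RGB, and scale to [0, 1] for the detector."""
        resized = cv2.resize(frame, (640, 640))
        rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)
        return rgb.astype(np.float32) / 255.0


    def consume() -> None:
        while True:
            batch = preprocess(frames.get())[None, ...]   # add a batch dimension
            # ... run detection / tracking on the batch here ...
            frames.task_done()


    if __name__ == "__main__":
        threading.Thread(target=read_frames, daemon=True).start()
        consume()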

Model Engineering & Optimization:

  • Fine-tune and evaluate SOTA open-source models (YOLO, EfficientDet, DETR families) on domain-specific datasets
  • Optimize inference throughput: batching strategies, model quantization (INT8/FP16), ONNX/TensorRT conversion, and multi-GPU orchestration (sketched after this list)
  • Build A/B testing frameworks to measure model performance (mAP, FPS, recall@IoU) in production
  • Maintain model registry with versioning, lineage tracking, and rollback capabilities
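
As a rough illustration of the export-and-optimize path above, the sketch below exports a torchvision ResNet (standing in for a detector) to ONNX with a dynamic batch axis and times a batched onnxruntime run; the model, shapes, and file names are assumptions for the example only:

    # Hypothetical sketch: PyTorch -> ONNX export, then a quick throughput check.
    import time

    import numpy as np
    import onnxruntime as ort
    import torch
    import torchvision

    # A torchvision ResNet stands in for the real detector here.
    model = torchvision.models.resnet18(weights=None).eval()

    dummy = torch.zeros(1, 3, 224, 224)
    torch.onnx.export(
        model,
        dummy,
        "model.onnx",
        opset_version=17,
        input_names=["images"],
        output_names=["logits"],
        dynamic_axes={"images": {0: "batch"}, "logits": {0: "batch"}},  # enable batching
    )

    # An FP16 TensorRT engine could then be built from the same file, for example:
    #   trtexec --onnx=model.onnx --fp16 --saveEngine=model.engine

    sess = ort.InferenceSession(
        "model.onnx",
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
    batch = np.random.rand(8, 3, 224, 224).astype(np.float32)
    start = time.perf_counter()
    sess.run(None, {"images": batch})
    print(f"batch of 8 in {time.perf_counter() - start:.3f} s")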

Production ML Infrastructure:

  • Architect scalable ML services exposing REST/gRPC APIs with authentication, rate limiting, and circuit breakers (a minimal REST example follows this list)
  • Containerize models and services (Docker) with CI/CD pipelines for automated testing and deployment
  • Implement monitoring dashboards tracking inference latency, GPU utilization, prediction confidence distributions, and data drift
  • Own incident response: debug production issues, conduct root-cause analysis, implement permanent fixes
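
A minimal FastAPI sketch of the kind of REST inference endpoint meant above. The route names, request fields, and placeholder model call are assumptions; authentication, rate limiting, and circuit breaking would typically sit in front of the service rather than inside this snippet:

    # Hypothetical sketch: a typed REST prediction endpoint with a health check.
    import base64

    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI(title="detector-service")


    class PredictRequest(BaseModel):
        image_b64: str                      # base64-encoded image bytes
        confidence_threshold: float = 0.5


    class Detection(BaseModel):
        label: str
        score: float
        box: list[float]                    # [x1, y1, x2, y2]


    class PredictResponse(BaseModel):
        detections: list[Detection]


    @app.get("/healthz")
    def healthz() -> dict[str, str]:
        return {"status": "ok"}


    @app.post("/predict", response_model=PredictResponse)
    def predict(req: PredictRequest) -> PredictResponse:
        try:
            image_bytes = base64.b64decode(req.image_b64, validate=True)
        except Exception as exc:
            raise HTTPException(status_code=400, detail=f"invalid image payload: {exc}")
        # ... decode image_bytes, run the model, filter by req.confidence_threshold ...
        return PredictResponse(detections=[])   # placeholder result

Such a service would be run under a standard ASGI server (for example uvicorn) and containerized as described above.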

Software Engineering Excellence:

  • Write maintainable Python code with type hints, unit/integration tests (pytest), and API documentation (a test sketch follows this list)
  • Design clear data contracts between services; validate schemas with Pydantic/protobuf
  • Conduct thorough code reviews focusing on performance, maintainability, and ML best practices
  • Document system architecture, model cards, and operational runbooks
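
A small pytest sketch of the testing style referenced above. The preprocess helper is a stand-in mirroring the earlier pipeline sketch, not an existing project function:

    # Hypothetical sketch: typed utility code plus unit tests runnable with pytest.
    import numpy as np
    import pytest


    def preprocess(frame: np.ndarray) -> np.ndarray:
        """Scale an HxWx3 uint8 frame to float32 in [0, 1] (resizing omitted for brevity)."""
        if frame.ndim != 3 or frame.shape[2] != 3:
            raise ValueError("expected an HxWx3 frame")
        return frame.astype(np.float32) / 255.0


    def test_preprocess_scales_to_unit_range() -> None:
        frame = np.full((480, 640, 3), 255, dtype=np.uint8)
        out = preprocess(frame)
        assert out.dtype == np.float32
        assert out.max() == pytest.approx(1.0)


    def test_preprocess_rejects_bad_shape() -> None:
        with pytest.raises(ValueError):
            preprocess(np.zeros((480, 640), dtype=np.uint8))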

Collaboration & Mentorship:

  • Partner with data engineers on annotation tooling, dataset pipelines, and feature stores
  • Work with DevOps to optimize Kubernetes deployments, autoscaling policies, and cost efficiency
  • Mentor junior engineers on CV fundamentals, debugging techniques, and production ML patterns
  • Present technical deep-dives to cross-functional stakeholders

Minimum Qualifications

  • Education: Bachelor's degree in Computer Science, Computer Engineering, Electrical Engineering, or a related field
  • Experience: 3-6 years building and deploying ML systems in production environments
  • Computer Vision: Proven track record shipping CV solutions (object detection, segmentation, tracking, or pose estimation) handling real-world data
  • Python Proficiency: Strong software engineering skills—clean code, testing (pytest/unittest), packaging, virtual environments, type hints
  • Model Deployment: Experience serving models via REST/gRPC APIs with frameworks like FastAPI, Flask, or TorchServe
  • Infrastructure: Hands-on with Docker, CI/CD tools (GitHub Actions, GitLab CI), and cloud platforms (AWS/Azure/GCP) or on-prem GPU clusters
  • Performance Tuning: Practical experience profiling code (cProfile, py-spy), optimizing memory usage, and reducing inference latency (illustrated below)
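
For illustration, a typical cProfile workflow of the kind this qualification refers to; the inference loop is a placeholder assumption:

    # Hypothetical sketch: profile a stand-in inference loop and print the hottest calls.
    import cProfile
    import pstats
    import time


    def run_inference_loop(n: int = 50) -> None:
        """Placeholder for a real batched-inference loop."""
        for _ in range(n):
            time.sleep(0.01)        # stands in for model forward passes


    profiler = cProfile.Profile()
    profiler.enable()
    run_inference_loop()
    profiler.disable()

    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)   # top 10 by cumulative time

    # py-spy can attach to a live process without code changes, e.g.:
    #   py-spy top --pid <PID>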

Preferred Qualifications

  • Master's degree in Computer Science, Data Science, Machine Learning, or a related field
  • Advanced CV: Multi-object tracking (SORT, DeepSORT, ByteTrack), trajectory forecasting, or video understanding models
  • Model Serving: Experience with Triton Inference Server, TorchServe, vLLM, or TensorRT optimizations
  • LLM/RAG Systems: Built retrieval-augmented generation pipelines using vector databases (Pinecone, Weaviate, Milvus) and embedding models
  • Edge Deployment: Optimized models for edge devices (NVIDIA Jetson, Coral TPU) with latency/power constraints
  • MLOps Maturity: Worked with experiment tracking (MLflow, Weights & Biases), feature stores (Feast, Tecton), or Kubernetes operators (Kubeflow, Seldon); a short MLflow example follows this list
  • Distributed Training: Experience with multi-GPU training (DDP, DeepSpeed) or large-scale data processing (Ray, Dask)
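
A short MLflow sketch of the experiment-tracking workflow mentioned above; the experiment name, parameters, and metric values are illustrative assumptions:

    # Hypothetical sketch: log parameters and a per-epoch metric for a fine-tuning run.
    import mlflow

    mlflow.set_experiment("detector-finetune")

    with mlflow.start_run(run_name="yolo-int8-baseline"):
        mlflow.log_params({"epochs": 50, "img_size": 640, "quantization": "int8"})
        for epoch in range(50):
            map50 = 0.5 + epoch * 0.005         # placeholder for a real validation mAP
            mlflow.log_metric("map50", map50, step=epoch)
        # mlflow.log_artifact("model.onnx")     # attach the exported model (if the file exists)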
