Pathway
Machine Learning DevOps - Cloud and Compute Cluster
Pathway · France · Posted 20 hours ago
Full-time · Remote Friendly
About Pathway

Pathway is shaking the foundations of artificial intelligence by introducing the world's first post-transformer model that adapts and thinks just like humans.

Pathway's breakthrough architecture (BDH) outperforms Transformer and provides the enterprise with full visibility into how the model works. Combining the foundational model with the fastest data processing engine on the market, Pathway enables enterprises to move beyond incremental optimization and toward truly contextualized, experience-driven intelligence. The company is trusted by organizations such as NATO, La Poste, and Formula 1 racing teams.

Pathway is led by co-founder & CEO Zuzanna Stamirowska, a complexity scientist who assembled a team of AI pioneers, including CTO Jan Chorowski, the first person to apply Attention to speech, who worked with Nobel laureate Geoff Hinton at Google Brain, and CSO Adrian Kosowski, a leading computer scientist and quantum physicist who obtained his PhD at the age of 20.

The company is backed by leading investors and advisors, including TQ Ventures and Lukasz Kaiser, co-author of the Transformer ("the T" in ChatGPT) and a key researcher behind OpenAI's reasoning models. Pathway is headquartered in Palo Alto, California.

The opportunity

We are currently searching for a Machine Learning DevOps Engineer with experience in cloud and compute cluster management and Linux administration.

Our development and production environments run in the cloud across several major providers. We need support in managing and automating our processes and in scaling the infrastructure to meet growing team and production needs.

You Will

  • Optimize infrastructure for ML training and inference (e.g., GPUs, distributed compute)
  • Automate and maintain ML pipelines (data ingestion, training, validation, deployment)
  • Manage model versioning, reproducibility, and traceability
  • Work with terabyte-scale datasets
  • Implement ML-centric CI/CD practices
  • Monitor model performance and data drift in production
  • Collaborate with machine learning engineers, software engineers, and platform teams

The role focuses on operationalizing machine learning models, ensuring scalability, reliability, and automation across the ML lifecycle.

Requirements

What We Are Looking For

  • Strong familiarity with Linux, shell scripting, and cluster configuration scripts as day-to-day working tools
  • Proficiency in workload management, containerization and orchestration (Slurm, Docker, Kubernetes)
  • Solid grasp of CI/CD tools and workflows (GitHub Actions, Jenkins, Gitlab CI, etc.)
  • Cloud infrastructure knowledge (AWS, GCP, Azure) - especially in ML services (e.g., SageMaker, Vertex AI)
  • Familiarity with monitoring/logging tools (e.g., Grafana)
  • Experience with infrastructure as code (Terraform, CloudFormation)
  • Experience with ML pipeline orchestration tools (e.g., MLflow, Kubeflow, Airflow, Metaflow)
  • Programming skills in Python (with exposure to ML libraries like TensorFlow, PyTorch)
  • Willingness to learn

Benefits

Why You Should Apply

  • Intellectually stimulating work environment. Be a pioneer: you get to work with real-time data processing & AI
  • Work in one of the hottest AI startups, with exciting career prospects. Team members are distributed across the world
  • Real responsibility and the ability to make a significant contribution to the company's success
  • Inclusive workplace culture

Further details

  • Type of contract: Permanent employment contract
  • Preferable joining date: Immediate
  • Compensation: Based on profile and location
  • Location: Remote work, with the possibility to work or meet with other team members in one of our offices: Palo Alto, CA; Paris, France; or Wroclaw, Poland. Candidates based anywhere in the EU, United States, or Canada will be considered
