Primary Responsibilities
Cloud Infrastructure & Operations
- Manage, scale, and optimize cloud environments used for data science workloads (e.g., AWS, Azure, GCP).
- Provision, maintain, and optimize compute clusters for ML workloads (e.g., Kubernetes, ECS/EKS, Databricks, SageMaker).
- Implement and maintain high-availability solutions for mission-critical analytics platforms.
- Develop CI/CD pipelines for model deployment, infrastructure-as-code (IaC), and automated testing.
- Build monitoring, alerting, and logging systems for cloud and ML infrastructure (e.g., CloudWatch, Prometheus, Grafana, ELK).
- Automate provisioning, configuration, and deployments using tools such as Terraform, CloudFormation, or Pulumi.
- Ensure smooth data ingestion, transformation, and model execution workflows.
- Support data scientists with reliable, reproducible environments for research and development.
- Collaborate with Data Engineering to maintain seamless integration between data pipelines and cloud systems.
- Implement data science security controls and compliance requirements for cloud operations.
- Conduct periodic risk assessments, patching, and governance reviews.
- Support secure handling of sensitive financial and portfolio company data.
- Partner with data scientists, machine learning engineers, and data engineers to support data-driven initiatives.
- Serve as a technical advisor on cloud architecture, performance optimization, and operational excellence.
Education & Experience
- A bachelor's degree or higher in a STEM field (required).
- 5+ years of experience in cloud operations, DevOps engineering, SRE, or related roles.
- Strong proficiency with at least one major cloud provider (AWS preferred).
- Hands-on experience with IaC tools (Terraform, CloudFormation, or similar).
- Strong scripting skills (Python, Bash, or PowerShell).
- Experience with CI/CD systems (GitHub Actions, Jenkins, CircleCI, GitLab CI).
- Familiarity with container orchestration (EKS, Kubernetes, ECS, AKS).
- Experience supporting data-intensive or ML workloads.
- Experience in financial services, investment management, or other highly regulated industries.
- Knowledge of ML/AI platform tools (Databricks, SageMaker, MLflow, Airflow).
- Understanding of networking, VPC architectures, and cloud security best practices.
- Familiarity with distributed compute frameworks (Spark, Ray, Dask).
Nimble Gravity is a team of outdoor enthusiasts, adrenaline seekers, and experienced growth hackers. We love solving hard problems and believe the right data can transform and propel growth for any organization.
Nimble Gravity is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis forbidden under federal, state, or local law. Nimble Gravity considers all qualified applicants.

