- Arganteal accepts applications from direct candidates only. We do not work with third-party recruiters or staffing agencies
- Required Country Location: Costa Rica, Peru, Argentina, Brazil, Colombia, South Africa, Mexico, or Panama
- This is full-time work at 40 hours per week
Our client seeks a motivated Senior DevOps Engineer, Data & AI to join their team in building a groundbreaking, modular platform from the ground up. This platform digitizes and contextualizes multi-modal sensor data from both digital and physical environments into specialized time-series, graph, and vector databases—powering real-time analytics, compliance, and AI-driven context mapping.
This role is ideal for a DevOps leader with strong expertise in data engineering, distributed systems, and applied AI, who thrives on automation, scalability, and production-grade deployments across hybrid and cloud environments.
Key Responsibilities:
Platform Automation & Infrastructure
- Architect, automate, and manage infrastructure for data ingestion, contextualization, and visualization modules (Data, Access, & Agents)
- Build CI/CD pipelines for sensor collection agents across heterogeneous systems (Windows, Linux, macOS, mobile, IoT)
- Implement and automate real-time ingestion pipelines using Apache Kafka, Apache NiFi, Redis Streams, or AWS Kinesis
- Deploy, scale, and optimize multi-modal databases:
  - Time-series: MongoDB, InfluxDB, TimescaleDB, or AWS Timestream
  - Graph: Neo4j (Cypher, APOC, graph schema design)
  - Vector: Qdrant, FAISS, Pinecone, or Weaviate
- Automate deployment and monitoring of a Database Access Layer (DBAL) to unify queries across multiple database engines
- Experiment with or extend Model Context Protocol (MCP) or similar standards for cross-database and multi-agent interoperability
- Engineer low-latency pipelines for event streams (syslog, telemetry, keystrokes, IoT feeds, cloud service logs)
- Collaborate with frontend engineers to integrate visual mapping UIs with scalable back-end pipelines
- Optimize system and database performance using down-sampling, partitioning, and caching techniques
- Design solutions for horizontal scaling and containerized deployment (Docker, Kubernetes, OpenShift)
- Apply infrastructure-as-code practices to achieve resilience, reproducibility, and rapid iteration under real-world constraints
- Partner with compliance, security, and business stakeholders to ensure systems meet regulatory and operational requirements
- Conduct architecture reviews, lead DevOps best practices, and mentor junior engineers on automation, scalability, and observability
Required Skills & Experience:
- Programming: Strong proficiency in Python and Node.js (C++ a plus)
- Streaming: Proven hands-on experience with Kafka, NiFi, Redis Streams, or AWS Kinesis
- Databases:
  - Time-series: MongoDB, InfluxDB, TimescaleDB, or AWS Timestream
  - Graph: Neo4j (Cypher, APOC)
  - Vector: Qdrant, FAISS, Pinecone, or Weaviate
- AI & Agents: Experience with—or strong interest in—Agentic AI frameworks, multi-agent orchestration, and context-aware data processing
- Data Interchange: Familiarity with MCP-like protocols or standardized APIs for multi-database access
- Cloud & Infrastructure: Hands-on with AWS, Azure, or GCP, plus containerization and orchestration (Docker, Kubernetes, OpenShift)
- DevOps Expertise: Deep understanding of CI/CD pipelines, IaC (Terraform/Ansible), monitoring/observability, distributed systems, and microservices security
- Problem Solving: Strong debugging skills, automation mindset, and ability to balance speed, scalability, and compliance in production systems
Preferred Qualifications:
- Experience integrating machine learning/NLP into multi-modal pipelines
- CI/CD automation and DevOps practices
- Knowledge of enterprise integration patterns, event-driven systems, and zero-trust security models
- Experience with compliance frameworks (NERC CIP, FedRAMP, GDPR, SOX)
Education & Experience:
- Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent hands-on experience)
- 5+ years of professional software development experience with data-intensive or AI-driven systems
- Proven experience designing, deploying, and scaling modular platforms in production