Arganteal Corporation
Senior DevOps Engineer
Arganteal Corporation · Argentina · 14 days ago
Full-time · Remote Friendly · Engineering, Information Technology
  • Arganteal accepts applications from direct candidates only. We do not work with third-party recruiters or staffing agencies
  • Required Country Location: Costa Rica, Peru, Argentina, Brazil, Colombia, South Africa, Mexico, or Panama
  • This is full-time work at 40 hours per week

Overview:

Our client seeks a motivated Senior DevOps Engineer, Data & AI to join their team in building a groundbreaking, modular platform from the ground up. This platform digitizes and contextualizes multi-modal sensor data from both digital and physical environments into specialized time-series, graph, and vector databases—powering real-time analytics, compliance, and AI-driven context mapping.

This role is ideal for a DevOps leader with strong expertise in data engineering, distributed systems, and applied AI, who thrives on automation, scalability, and production-grade deployments across hybrid and cloud environments.

Key Responsibilities:

Platform Automation & Infrastructure

  • Architect, automate, and manage infrastructure for data ingestion, contextualization, and visualization modules (Data, Access, & Agents)
  • Build CI/CD pipelines for sensor collection agents across heterogeneous systems (Windows, Linux, macOS, mobile, IoT)
  • Implement and automate real-time ingestion pipelines using Apache Kafka, Apache NiFi, Redis Streams, or AWS Kinesis

Database & Data Layer Engineering

  • Deploy, scale, and optimize multi-modal databases:
    • Time-series: MongoDB, InfluxDB, TimescaleDB, or AWS Timestream
    • Graph: Neo4j (Cypher, APOC, graph schema design)
    • Vector: Qdrant, FAISS, Pinecone, or Weaviate
  • Automate deployment and monitoring of a Database Access Layer (DBAL) to unify queries across multiple database engines
  • Experiment with or extend Model Context Protocol (MCP) or similar standards for cross-database and multi-agent interoperability

Data Streaming & Integration

  • Engineer low-latency pipelines for event streams (syslog, telemetry, keystrokes, IoT feeds, cloud service logs)
  • Collaborate with frontend engineers to integrate visual mapping UIs with scalable back-end pipelines

Optimization, Reliability & Scalability

  • Optimize system and database performance using down-sampling, partitioning, and caching techniques
  • Design solutions for horizontal scaling and containerized deployment (Docker, Kubernetes, OpenShift)
  • Apply infrastructure-as-code practices to achieve resilience, reproducibility, and rapid iteration under real-world constraints

Collaboration & Leadership

  • Partner with compliance, security, and business stakeholders to ensure systems meet regulatory and operational requirements
  • Conduct architecture reviews, lead DevOps best practices, and mentor junior engineers on automation, scalability, and observability

Required Skills & Experience:

  • Programming: Strong proficiency in Python and Node.js (C++ a plus)
  • Streaming: Proven hands-on experience with Kafka, NiFi, Redis Streams, or AWS Kinesis
  • Databases:
    • Time-series: MongoDB, InfluxDB, TimescaleDB, or AWS Timestream
    • Graph: Neo4j (Cypher, APOC)
    • Vector: Qdrant, FAISS, Pinecone, or Weaviate
  • AI & Agents: Experience with—or strong interest in—Agentic AI frameworks, multi-agent orchestration, and context-aware data processing
  • Data Interchange: Familiarity with MCP-like protocols or standardized APIs for multi-database access
  • Cloud & Infrastructure: Hands-on with AWS, Azure, or GCP, plus containerization and orchestration (Docker, Kubernetes, OpenShift)
  • DevOps Expertise: Deep understanding of CI/CD pipelines, IaC (Terraform/Ansible), monitoring/observability, distributed systems, and microservices security
  • Problem Solving: Strong debugging skills, automation mindset, and ability to balance speed, scalability, and compliance in production systems

Preferred Skills:

  • Machine Learning/NLP integration into multi-modal pipelines
  • CI/CD automation and DevOps practices
  • Knowledge of enterprise integration patterns, event-driven systems, and zero-trust security models
  • Experience with compliance frameworks (NERC CIP, FedRAMP, GDPR, SOX)

Qualifications:

  • Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent hands-on experience)
  • 5+ years professional software development with data-intensive or AI-driven systems
  • Proven experience designing, deploying, and scaling modular platforms in production
