We are seeking a proactive Databricks Engineer to design, build, and maintain scalable data pipelines and solutions in a cloud-native environment. This role offers the opportunity to work closely with data scientists, architects, and analysts to deliver reliable, high-quality data for analytics and machine learning. You will leverage Databricks, Apache Spark, and modern cloud technologies to transform raw data into actionable insights. The position provides exposure to cutting-edge data engineering practices while working in a collaborative, fully remote Latin American team focused on innovation, efficiency, and performance.
Accountabilities:
- Develop and maintain ETL/ELT pipelines using Databricks and Apache Spark
- Transform raw data into structured, analytics-ready formats to support reporting and machine learning
- Optimize performance of Databricks notebooks, workflows, and pipelines for efficiency and scalability
- Integrate Databricks with cloud platforms (AWS, Azure, or GCP) and external data sources
- Implement data validation, monitoring, and quality assurance processes to ensure reliable outputs
- Collaborate with data scientists to support model training, deployment, and operationalization
- Participate in architecture and code reviews, providing input to improve system reliability and maintainability
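The data-validation and quality-assurance responsibility above can be sketched in plain Python. This is a simplified, illustrative stand-in: in a real Databricks pipeline such checks would typically run as Spark DataFrame transformations or Delta Lake constraints, and the function and field names here are hypothetical.

```python
def validate_records(records, required_fields):
    """Partition raw records into valid and rejected sets.

    A record is valid when every required field is present and non-null.
    Rejected records carry the list of missing fields so they can be
    routed to a quarantine table for monitoring (a common pattern in
    data-quality pipelines).
    """
    valid, rejected = [], []
    for rec in records:
        missing = [f for f in required_fields if rec.get(f) is None]
        if missing:
            rejected.append({"record": rec, "missing": missing})
        else:
            valid.append(rec)
    return valid, rejected

# Hypothetical raw events, one of them incomplete
raw = [
    {"user_id": 1, "event": "click", "ts": "2024-01-01T00:00:00"},
    {"user_id": None, "event": "view", "ts": "2024-01-01T00:01:00"},
]
valid, rejected = validate_records(raw, ["user_id", "event", "ts"])
```

In Delta Lake the same invariant could instead be enforced declaratively with a `NOT NULL` column constraint, moving the check from pipeline code into the table definition.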
Requirements:
- 2-5 years of experience in data engineering roles, ideally with Databricks and big data technologies
- Strong proficiency in Apache Spark, Python, Scala, and SQL
- Hands-on experience with Databricks notebooks, Delta Lake, and MLflow
- Knowledge of cloud platforms such as Azure Databricks, AWS, or GCP
- Understanding of data lakes, data warehouses, and big data concepts
- Familiarity with data quality frameworks and monitoring best practices
- English B2 or higher (conversational level)
- Strong analytical, problem-solving, and collaboration skills in remote, cross-functional teams
Benefits:
- 100% remote work across Latin America
- Competitive remuneration in USD
- Exposure to modern data engineering tools and cloud technologies
- Opportunities for professional growth and career development
- Collaborative, flexible, and supportive remote work environment
- Involvement in high-impact projects supporting analytics and AI initiatives
When you apply, your profile goes through our AI-powered screening process designed to identify top talent efficiently and fairly.
- 🔍 Our AI thoroughly evaluates your CV and LinkedIn profile, analyzing your skills, experience, and achievements
- 📊 It compares your profile to the job's core requirements and past success factors to calculate a match score
- 🎯 The top 3 candidates with the highest match are automatically shortlisted
- 🧠 When necessary, our human team may perform additional review to ensure no strong candidate is overlooked
Thank you for your interest!
Ready to apply?
Join Jobgether and take your career to the next level!
Application takes less than 5 minutes

