Data Scientist / Machine Learning Engineer
We are looking for a Data Scientist / Machine Learning Engineer to develop and enhance our data and analytics infrastructure. The position is FULLY REMOTE and based in Latin America. You will collaborate with a dynamic team of talented data scientists working in big data analytics and applied AI. If you have a passion for designing and implementing advanced machine learning and deep learning models, particularly in the Generative AI space, this role is perfect for you.
We are seeking a skilled professional with expertise in Python for production-level projects, proficiency in machine learning and deep learning techniques such as CNNs and Transformers, and hands-on experience working with PyTorch.
We're looking for a versatile Machine Learning Engineer / Data Scientist to join our big-data analytics team. In this hybrid role you'll not only design and prototype novel ML/DL models, but also productionize them end to end, integrating your solutions into our data pipelines and services. You'll work closely with data engineers, software developers, and product owners to ensure high-quality, scalable, maintainable systems.
Key Responsibilities
- Design, train, and validate supervised and unsupervised models (e.g., anomaly detection, classification, forecasting)
- Architect and implement deep learning solutions (CNNs, Transformers) with PyTorch
- Develop and fine-tune Large Language Models (LLMs) and build LLM-driven applications
- Implement Retrieval-Augmented Generation (RAG) pipelines and integrate them with vector databases
- Build robust pipelines to deploy models at scale (Docker, Kubernetes, CI/CD)
Data Engineering & MLOps
- Ingest, clean, and transform large datasets using libraries such as pandas, NumPy, and Spark
- Automate training and serving workflows with Airflow or similar orchestration tools
- Monitor model performance in production; iterate on drift detection and retraining strategies
- Implement LLMOps practices for automated testing, evaluation, and monitoring of LLMs
Software Development Best Practices
- Write production-grade Python code following SOLID principles, backed by unit tests and code reviews
- Collaborate in Agile (Scrum) ceremonies; track work in JIRA
- Document architecture and workflows using PlantUML or comparable tools
Cross-Functional Collaboration
- Communicate analysis, design, and results clearly in English
- Partner with DevOps, data engineering, and product teams to align on requirements and SLAs
Azumo is an innovative software development firm helping organizations make insightful decisions using the latest technologies in data, cloud and mobility.
We combine expertise in strategy, data science, application development and design to drive digital transformation initiatives for companies of all sizes.
If you are qualified for the opportunity and looking for a challenge, please apply online at Azumo/join-our-team or connect with us at ******
Minimum Qualifications
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field
- 5+ years of professional experience with Python in production environments
- Solid background in machine learning and deep learning (CNNs, Transformers, LLMs)
- Hands-on experience with PyTorch or similar frameworks (training, custom modules, optimization)
- Proven track record of deploying ML solutions
- Expertise in pandas, NumPy, and scikit-learn
- Familiarity with Agile/Scrum practices and tooling (JIRA, Confluence)
- Strong foundation in statistics and experimental design
- Excellent written and spoken English
Preferred Qualifications
- Experience with cloud platforms (AWS, GCP, or Azure) and their AI-specific services, such as Amazon SageMaker, Google Vertex AI, or Azure Machine Learning
- Familiarity with big-data ecosystems (Spark, Hadoop)
- Practice with CI/CD and container orchestration (Jenkins/GitLab CI, Docker, Kubernetes)
- Exposure to MLOps/LLMOps tools (MLflow, Kubeflow, TFX)
- Experience with Large Language Models, Generative AI, prompt engineering, and RAG pipelines
- Hands-on experience with vector databases (e.g., Pinecone, FAISS)
- Experience building AI agents and using frameworks such as Hugging Face Transformers, LangChain, or LangGraph
- Documentation skills using PlantUML or similar tools
Benefits
- Paid time off (PTO)
- U.S. holidays
- Training
- Free Udemy Premium access
- Mentored career development
- Profit sharing
- Remuneration in US dollars