We're looking for a Data Engineer (Python) with hands-on experience building and maintaining scalable data pipelines, and exposure to Machine Learning / AI workflows. In this role, you'll work closely with data scientists, ML engineers, and product teams to ensure reliable, high-quality data powers analytics and AI-driven products.
This position is ideal for a strong data engineer who enjoys working with large datasets and wants to deepen their involvement in ML and AI systems.
Requirements
- Advanced English for fluent communication.
- 5+ years of experience as a Data Engineer, with strong proficiency in Python.
- Experience building data pipelines using tools such as Airflow, Prefect, Luigi, or similar.
- Solid understanding of SQL and relational databases.
- Experience working with data warehouses (e.g., BigQuery, Snowflake, Redshift).
- Familiarity with cloud platforms (AWS, GCP, or Azure).
- Experience handling large datasets and optimizing data workflows.
- Basic understanding of machine learning concepts (training, inference, features, evaluation).
- Ability to work collaboratively in cross-functional teams.
- Experience supporting ML pipelines or MLOps workflows.
- Familiarity with libraries such as Pandas, NumPy, Scikit-learn, PyTorch, or TensorFlow.
- Experience with feature stores or model data versioning.
- Knowledge of streaming technologies (Kafka, Pub/Sub, Kinesis).
- Exposure to LLMs, NLP, or AI-driven applications.
- Experience with containerization and orchestration (Docker, Kubernetes).
Responsibilities
- Design, build, and maintain scalable data pipelines using Python.
- Develop and optimize ETL/ELT processes for structured and unstructured data.
- Manage data ingestion from APIs, databases, and streaming sources.
- Collaborate with data scientists to support machine learning model training, evaluation, and deployment.
- Ensure data quality, reliability, and performance across data platforms.
- Implement monitoring, logging, and data validation for pipelines.
- Work with cloud-based data infrastructure and storage solutions.
- Document data flows, schemas, and pipeline logic.
Benefits
- 100% Remote work.
- Competitive salary in USD.
- Type of contract: Independent Contractor with Venon Solutions LLC.
- Contract duration: Long-term.
- 2 weeks of PTO (paid time off).
- Holidays: per the Client's calendar (USA).
- Working hours: full-time in the EST timezone, fully dedicated to this role.
Ready to apply?
Join Venon Solutions and take your career to the next level!
Application takes less than 5 minutes

