Job Description
We’re hiring on behalf of a client for a contract‑to‑hire position. This is a 100% remote role.
🧩 Role Summary
As a Data Scientist (Mid–Senior Level), you’ll build, deploy, and maintain scalable machine learning solutions in a cloud‑native environment. You’ll work across the full ML lifecycle—from exploration and modeling to production deployment, monitoring, and continuous improvement—while collaborating with engineering, product, and data teams to drive measurable business impact.
🔍 Key Responsibilities
- Develop, train, and evaluate machine learning models using Python and modern ML frameworks.
- Perform offline model evaluation, experiment tracking, and A/B testing to validate model performance.
- Deploy ML workflows on Google Cloud Platform, leveraging Vertex AI, Artifact Registry, and related services.
- Build and maintain Airflow DAGs for data ingestion, feature engineering, and model orchestration.
- Implement CI/CD pipelines using GitLab for automated training, testing, and deployment.
- Monitor model performance, detect drift, and maintain production model health dashboards.
- Work with data stored in Snowflake and PostgreSQL, ensuring efficient querying and data quality.
- Collaborate with data engineers to design scalable data pipelines and feature stores.
- Configure and optimize load‑balanced ML endpoints for reliability and scalability.
- Communicate insights, findings, and model behavior to technical and non‑technical stakeholders.
🎓 Qualifications
- Bachelor’s or Master’s degree in Physics, Computer Science, Data Science, Statistics, Mathematics, Engineering, or a related quantitative field.
- 4–8 years of experience in data science, machine learning, or applied analytics.
- Hands‑on experience with GCP, especially Vertex AI and cloud‑native ML tooling.
- Strong proficiency in Python and ML/data libraries (pandas, NumPy, scikit‑learn, TensorFlow/PyTorch).
- Solid experience with SQL and relational databases (PostgreSQL, Snowflake).
- Experience building CI/CD pipelines (GitLab preferred).
- Familiarity with Airflow, model monitoring, and production ML best practices.
- Ability to translate business requirements into scalable ML solutions.
- Excellent communication skills and the ability to work cross‑functionally.
🛠️ Core Technologies
- Python 🐍
- Google Cloud Platform ☁️
- Vertex AI 🔺
- Airflow 🌬️
- GitLab CI/CD 🔧
- Snowflake ❄️
- PostgreSQL 🐘
- Artifact Registry 📦
- Load Balancing & API Deployment
- Model Monitoring & Evaluation
Join Washington Software Inc. and take your career to the next level!

