Key Skills: PySpark, FastAPI, NumPy, SQL, Pandas, Python, Apache Spark, Databricks, GitHub, PyTorch, Snowflake
Roles and Responsibilities:
- Develop and deploy predictive and prescriptive models using statistical and machine learning techniques.
- Analyze complex datasets to extract actionable insights and enable data-driven decision-making.
- Design, train, validate, and optimize models for classification, regression, clustering, and forecasting tasks.
- Build and maintain scalable ML pipelines for model training, evaluation, and production deployment.
- Collaborate with cross-functional teams to define use cases, gather data requirements, and interpret model outputs.
- Implement model monitoring, versioning, and retraining strategies to ensure sustained performance and reliability.
- Apply MLOps practices to streamline model lifecycle management, including CI/CD workflows for ML.
- Leverage Azure ML, AWS SageMaker, and GCP Vertex AI for scalable experimentation and deployment.
- Document methodologies, assumptions, and results to ensure transparency and reproducibility.
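The design-train-validate-optimize workflow described above can be sketched with scikit-learn; the dataset and hyperparameter grid below are purely illustrative, not part of the role's actual stack:

```python
# Minimal sketch of a classification workflow: train, validate, and tune a model.
# All data and parameter choices here are hypothetical examples.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic dataset standing in for real business data
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Pipeline (scaling + model), tuned via cross-validated grid search
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
search = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)

# Evaluate on the held-out split
acc = accuracy_score(y_test, search.predict(X_test))
print(f"test accuracy: {acc:.3f}")
```

In practice the same pipeline object can be versioned and redeployed, which is what makes the monitoring and retraining responsibilities above tractable.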
Skills Required:
- Strong proficiency in Python for data manipulation, model development, and automation.
- Hands-on experience with machine learning libraries including Scikit-learn, TensorFlow, PyTorch, and XGBoost.
- Expertise in data analysis libraries such as NumPy, Pandas, SciPy, and Matplotlib.
- Experience working with PySpark and Databricks for large-scale data processing.
- Proficiency in SQL for data querying and transformation.
- Experience developing APIs and lightweight applications using FastAPI, Dash, or Streamlit.
- Familiarity with CI/CD practices using Jenkins and GitHub Actions.
- Experience with cloud ML platforms such as Azure ML, AWS SageMaker, and GCP Vertex AI.
- Understanding of Apache Spark and Snowflake for distributed data processing and storage.
- Familiarity with tools such as Jupyter Notebook, Dataiku, or MATLAB is advantageous.
- Experience in model monitoring, production support, and solution sustainment for operational ML environments is valuable.
- Strong analytical thinking, communication, and stakeholder management skills.
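The SQL-querying and DataFrame-transformation skills listed above combine naturally; a minimal sketch using Python's built-in sqlite3 with pandas (the table and column names are hypothetical):

```python
# Illustrative sketch: query with SQL, then transform the result with pandas.
# The "sales" table and its columns are invented for this example.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('north', 120.0), ('north', 80.0), ('south', 200.0);
""")

# Aggregate in SQL, then derive a new column in pandas
df = pd.read_sql(
    "SELECT region, SUM(amount) AS total FROM sales GROUP BY region", conn
)
df["share"] = df["total"] / df["total"].sum()
print(df)
```

The same pattern scales up to Snowflake or Databricks connections: push the heavy aggregation into SQL, then finish lightweight transformations in pandas.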
Education: Bachelor's or Master's degree in Computer Science

