Palantir Foundry (Good to Have / Optional)
- Work with Palantir Foundry to develop data pipelines, transform datasets, build the data ontology, and publish data assets (a minimal transform sketch follows this list).
- Utilize Palantir workflows such as Code Workbook, Foundry Transformations, Contour, Quiver, and Foundry Pipelines.
- Collaborate with cross-functional teams to integrate Foundry datasets with analytics and reporting platforms.
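A minimal sketch of a Foundry Python transform along these lines, using Foundry's standard transforms.api interface; the dataset paths are hypothetical placeholders:

```python
# A sketch only: dataset paths are hypothetical placeholders.
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output


@transform_df(
    Output("/Company/curated/orders_clean"),   # hypothetical output dataset
    raw_orders=Input("/Company/raw/orders"),   # hypothetical input dataset
)
def clean_orders(raw_orders):
    # Standardise the raw feed into a curated, publishable dataset.
    return (
        raw_orders
        .filter(F.col("order_id").isNotNull())
        .withColumn("order_date", F.to_date("order_ts"))
        .dropDuplicates(["order_id"])
    )
```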
Data Integration & Modelling
- Collaborate with business and analytics teams to gather data requirements and translate them into technical designs.
- Develop optimized data models, curated datasets, and analytics-ready layers (see the sketch after this list).
- Integrate data from multiple sources, leveraging APIs and batch and streaming frameworks.
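A minimal PySpark sketch of this kind of integration, joining a batch file source with a catalog table into a curated, analytics-ready model; all table and path names are hypothetical:

```python
# A sketch only: table and path names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curated_customer_layer").getOrCreate()

# Batch source: files landed in cloud object storage.
orders = spark.read.parquet("s3://landing-zone/orders/")

# Second source: a reference table already registered in the catalog
# (e.g. loaded from an API extract by an upstream job).
customers = spark.table("ref.customers")

# Curated, analytics-ready model: one row per customer with order metrics.
curated = (
    orders.join(customers, "customer_id")
          .groupBy("customer_id", "customer_name")
          .agg(
              F.count("order_id").alias("order_count"),
              F.sum("order_amount").alias("lifetime_value"),
          )
)

curated.write.mode("overwrite").saveAsTable("analytics.customer_summary")
```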
Deployment & Automation
- Implement CI/CD workflows for Databricks/PySpark jobs.
- Automate workflows using tools such as Airflow, Databricks Jobs, or Foundry Build pipelines (see the sketch below).
- Support production deployments, troubleshooting, and performance enhancements.
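A minimal orchestration sketch for scheduling a Databricks job from Airflow, assuming Airflow 2.4+ with the Databricks provider installed; the connection ID, cluster spec, and notebook path are hypothetical:

```python
# A sketch only: connection ID, cluster spec, and notebook path are
# hypothetical. Assumes Airflow 2.4+ with the Databricks provider.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import (
    DatabricksSubmitRunOperator,
)

with DAG(
    dag_id="daily_curated_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    DatabricksSubmitRunOperator(
        task_id="run_curated_pipeline",
        databricks_conn_id="databricks_default",
        new_cluster={
            "spark_version": "13.3.x-scala2.12",
            "node_type_id": "i3.xlarge",
            "num_workers": 2,
        },
        notebook_task={"notebook_path": "/pipelines/curated_refresh"},
    )
```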
Collaboration & Stakeholder Management
- Work closely with data scientists, analysts, and business stakeholders to deliver data solutions aligned with business needs.
- Provide technical guidance and best practices for Python/PySpark development.
- Prepare clear documentation for pipelines, datasets, and workflows.
Must-Have Skills
- Strong hands-on experience with Databricks.
- Expert-level proficiency in Python, PySpark, and SQL.
- Experience in building scalable ETL/ELT pipelines on distributed systems.
- Strong knowledge of data modelling, data transformation, and optimization techniques (see the sketch after this list).
- Experience working with large datasets in cloud environments (e.g., Azure, AWS, GCP).
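One concrete example of the kind of PySpark/SQL optimization referred to above: broadcasting a small dimension table so a large fact table can be joined without a cluster-wide shuffle. Table names are hypothetical:

```python
# A sketch only: table names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

facts = spark.table("sales.transactions")   # large fact table
dims = spark.table("sales.stores")          # small dimension table

# The broadcast hint ships the small table to every executor, so the
# large table is joined in place rather than shuffled across the cluster.
enriched = facts.join(F.broadcast(dims), "store_id")
enriched.createOrReplaceTempView("enriched_sales")

# The same data queried through SQL, showing PySpark/SQL interop.
spark.sql("""
    SELECT store_region, SUM(amount) AS total_sales
    FROM enriched_sales
    GROUP BY store_region
""").show()
```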
Good to Have (Preferred)
- Experience with Palantir Foundry or similar data platforms.
- Knowledge of data governance, versioning, and ontology in Foundry.
- Experience with Airflow or other orchestration tools.
- Exposure to cloud technologies (Azure/AWS/GCP).
- Understanding of DevOps practices for data engineering.
Ready to apply?
Join Ampstek and take your career to the next level!

