The Data Engineer applies software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a wide variety of storage and computation technologies to handle diverse data types and volumes in support of data architecture design. A Data Engineer designs data products and data pipelines that are resilient to change, modular, flexible, scalable, reusable, and cost effective.
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience), with demonstrated high proficiency in programming fundamentals.
- At least 5 years of proven experience as a Data Engineer or in a similar role working with data and ETL processes.
- Strong knowledge of Microsoft Azure services, including Azure Data Factory, Azure Synapse, Azure Databricks, Azure Blob Storage, and Azure Data Lake Storage Gen2.
- Experience using SQL DML to query modern relational databases (e.g., SQL Server, PostgreSQL) efficiently.
- Strong understanding of Software Engineering principles and how they apply to Data Engineering (e.g., CI/CD, version control, testing).
- Experience with big data technologies (e.g., Spark).
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.
- Strong experience in Python is preferred, but experience in other languages such as Scala, Java, or C# is also accepted.
- Experience building Spark applications using PySpark (a minimal sketch follows this list).
- Experience with file formats such as Parquet, Delta, and Avro.
- Experience efficiently querying API endpoints as a data source.
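To give a flavor of the PySpark and Parquet experience listed above, here is a minimal sketch of the kind of batch transformation a Data Engineer in this role might build. The paths, column names (status, order_ts, amount), and aggregation are illustrative assumptions, not details from the posting.

```python
# Minimal illustrative PySpark sketch: read a Parquet dataset, apply a simple
# transformation, and write the curated result back out. All paths and column
# names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-transform").getOrCreate()

# Read source data (e.g., landed in Azure Data Lake Storage Gen2 or local storage).
orders = spark.read.parquet("/data/raw/orders")

# Simple, testable transformation: filter completed orders and total them by day.
daily_totals = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("total_amount"))
)

# Write the curated output as Parquet (Delta is a common alternative when the
# Delta Lake libraries are configured on the cluster).
daily_totals.write.mode("overwrite").parquet("/data/curated/daily_totals")

spark.stop()
```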
Relocation Options
Relocation could be considered.
International Considerations
Expatriate assignments will not be considered.
Chevron regrets that it is unable to sponsor employment visas or consider individuals on time-limited visa status for this position.
Ready to apply?
Join Chevron and take your career to the next level!
The application takes less than 5 minutes.

