Location: Pittsburgh, PA / Cleveland, OH
Company: PNC Bank
Experience: 1-2 years
Employment Type: Full-Time
Job Description
PNC Bank is seeking a Junior Data Engineer to support the design, development, and maintenance of scalable data pipelines and data platforms that enable analytics, reporting, and regulatory compliance. This role is ideal for early-career professionals eager to build hands-on experience in enterprise data engineering within the financial services domain.
Key Responsibilities
- Assist in building and maintaining ETL/ELT pipelines to ingest, transform, and load data from multiple source systems.
- Support development of batch and near real-time data processing workflows.
- Work with structured and semi-structured data using SQL and Python.
- Participate in data validation, reconciliation, and quality checks to ensure accuracy and completeness.
- Collaborate with senior data engineers, data analysts, and business stakeholders to understand data requirements.
- Help manage data storage solutions such as data warehouses and data lakes.
- Assist with documentation of data models, pipelines, and operational processes.
- Follow data governance, security, and compliance standards relevant to banking and financial services.
- Monitor data pipelines and troubleshoot failures under guidance.
- Support deployment and version control using Git and CI/CD practices.
Required Qualifications
- Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
- 1-3 years of experience or strong academic/project experience in data engineering or data analytics.
- Proficiency in SQL (joins, subqueries, performance tuning basics).
- Working knowledge of Python for data processing.
- Basic understanding of ETL concepts, data modeling, and data warehousing.
- Familiarity with relational databases (Oracle, PostgreSQL, SQL Server, or similar).
- Exposure to cloud platforms (AWS, Azure, or GCP) is a plus.
- Experience with version control tools such as Git.
Preferred / Nice-to-Have Skills
- Exposure to Big Data technologies (Spark, Hadoop).
- Familiarity with cloud data services (AWS S3, Glue, Redshift, Azure Data Factory, Snowflake).
- Understanding of banking or financial data, including transactions, risk, or regulatory reporting.
- Knowledge of data quality frameworks and basic data governance concepts.
- Experience with workflow orchestration tools (Airflow, Control-M).
Soft Skills
- Strong analytical and problem-solving skills.
- Willingness to learn and adapt in a regulated environment.
- Good communication and documentation skills.
- Ability to work effectively in a team-oriented, Agile environment.

