Functional Skills:
- Identifying, designing, and implementing internal process improvements, such as redesigning infrastructure for greater scalability, improving data delivery, and automating manual processes.
- Building analytical tools that use the data flow to provide practical insight into key company performance indicators such as operational effectiveness and customer acquisition (see the sketch after this list).
- Collaborating with stakeholders, including the Executive, Product, Data, and Design teams, to resolve data-related technical issues and support their data infrastructure needs.
- Staying up to date with developments in technology and industry standards to deliver higher-quality results.
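To make the analytical-tooling responsibility above concrete, here is a minimal sketch of computing a customer-acquisition-cost KPI with Pandas; the file names, column names, and the CAC formula used are illustrative assumptions, not details taken from the posting.

```python
import pandas as pd

# Hypothetical inputs: file names and column names are illustrative assumptions.
signups = pd.read_csv("signups.csv", parse_dates=["signup_date"])      # one row per new customer
spend = pd.read_csv("marketing_spend.csv", parse_dates=["month"])      # columns: month, spend

# New customers acquired per calendar month.
monthly_new = (
    signups
    .assign(month=signups["signup_date"].dt.to_period("M").dt.to_timestamp())
    .groupby("month")
    .size()
    .rename("new_customers")
)

# Customer acquisition cost (CAC) = marketing spend / new customers in the same month.
kpi = spend.set_index("month").join(monthly_new, how="inner")
kpi["cac"] = kpi["spend"] / kpi["new_customers"]
print(kpi[["new_customers", "spend", "cac"]])
```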
Technical Skills:
- Analyze large datasets to derive actionable insights and support decision-making processes.
- Develop and maintain data pipelines using PySpark and other data processing tools (see the sketch after this list).
- Write efficient SQL queries to extract, transform, and load data from various sources.
- Implement data models and schemas to organize and optimize data storage and retrieval.
- Perform data normalization and denormalization to ensure data integrity and accessibility.
- Collaborate with data engineers to centralize and manage data assets.
- Ensure data quality through validation and cleansing processes.
- Utilize CI/CD pipelines to streamline data deployment and maintain continuous integration.
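The PySpark and SQL pipeline work described above could start from something like the following minimal sketch, which reads raw order data, applies a SQL transformation, and writes a cleaned, partitioned table; the paths, table names, and schema are assumptions for illustration, not details from the posting.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Hypothetical source path and schema (order_id, customer_id, order_ts, amount, fx_rate, status).
raw = spark.read.parquet("s3://example-bucket/raw/orders/")
raw.createOrReplaceTempView("raw_orders")

# SQL transform: keep completed orders, standardize currency, and deduplicate by order_id.
cleaned = spark.sql("""
    SELECT order_id,
           customer_id,
           CAST(order_ts AS DATE)      AS order_date,
           ROUND(amount * fx_rate, 2)  AS amount_usd
    FROM raw_orders
    WHERE status = 'COMPLETED'
""").dropDuplicates(["order_id"])

# Load: write the cleaned table partitioned by date for efficient downstream queries.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)
```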
Qualifications:
- Proven experience in data analytics and working with large datasets.
- Proficiency in Python, including libraries such as Pandas and NumPy for data manipulation.
- Strong SQL skills for querying and managing databases.
- Experience with PySpark for large-scale data processing.
- Basic understanding of Hadoop and its ecosystem.
- Familiarity with data engineering concepts and best practices.
- Knowledge of data modeling, including schemas, normalization, and denormalization techniques (see the sketch after this list).
- Understanding of data centralization, cardinality, and data quality principles.
- Experience with CI/CD pipelines and tools is a plus.
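As a minimal illustration of the data-modeling knowledge listed above, the sketch below splits a denormalized orders extract into a customer table and an order table, then joins them back into an analysis-ready view with Pandas and NumPy; all table and column names are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical denormalized extract: customer attributes repeated on every order row.
orders_flat = pd.DataFrame({
    "order_id":      [101, 102, 103, 104],
    "customer_id":   [1, 1, 2, 3],
    "customer_name": ["Asha", "Asha", "Ravi", "Meera"],
    "segment":       ["Retail", "Retail", "Corporate", "Retail"],
    "amount":        [250.0, 90.5, 1200.0, np.nan],
})

# Normalize: one row per customer; orders keep only the foreign key.
customers = orders_flat[["customer_id", "customer_name", "segment"]].drop_duplicates("customer_id")
orders = orders_flat[["order_id", "customer_id", "amount"]].copy()

# Basic data-quality step: fill missing amounts before aggregation.
orders["amount"] = orders["amount"].fillna(0.0)

# Denormalize again for analytics: join customer attributes back onto orders.
analysis_view = orders.merge(customers, on="customer_id", how="left")
print(analysis_view.groupby("segment")["amount"].sum())
```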
Banking:
- Deep understanding of banking operations, financial products, and regulatory frameworks
- Experience with data modeling, ETL processes, and statistical analysis
- Prior experience in retail or corporate banking analytics
- Analyze banking data including customer transactions, loan performance, and financial statements (see the sketch after this list)
- Support credit risk analysis and fraud detection initiatives
- Maintain and optimize banking databases and data pipelines
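The transaction-analysis and fraud-detection items above might begin with a simple outlier screen like the sketch below, which flags unusually large transactions per customer using PySpark window functions; the data source, schema, and three-standard-deviation rule are assumptions for illustration rather than KPMG specifics.

```python
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.appName("txn_outliers").getOrCreate()

# Hypothetical transactions table; path and columns (customer_id, transaction_id, amount) are assumed.
txns = spark.read.parquet("s3://example-bucket/banking/transactions/")

# Per-customer mean and standard deviation of transaction amounts.
w = Window.partitionBy("customer_id")
scored = (
    txns
    .withColumn("avg_amount", F.avg("amount").over(w))
    .withColumn("std_amount", F.stddev("amount").over(w))
    # Flag transactions more than 3 standard deviations above the customer's
    # average as candidates for fraud review (an assumed rule of thumb).
    .withColumn(
        "is_outlier",
        F.col("amount") > F.col("avg_amount") + 3 * F.col("std_amount"),
    )
)

scored.filter("is_outlier").select("customer_id", "transaction_id", "amount").show()
```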

