You will:
- Design and implement efficient, scalable web scraping pipelines using Python.
- Analyze and reverse-engineer the structure of various online resources to extract structured and semi-structured data.
- Develop and maintain crawlers and parsers for diverse content types.
- Ensure reliability and stability of scraping solutions (e.g., handling anti-bot protections, proxies, headless browsers).
- Collaborate with data engineers and product teams to deliver clean, normalized, and production-ready datasets.
- Maintain best practices around legal and ethical scraping (robots.txt, rate limiting, terms of service compliance).
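As an illustration, the robots.txt and rate-limiting practices above can be sketched with the Python standard library alone. This is a minimal, hedged sketch, not a production pipeline; the robots rules, paths, and base URL below are hypothetical.

```python
import time
from urllib import robotparser

def make_robots_checker(robots_txt: str, base_url: str = "https://example.com"):
    """Return a function that checks paths against a parsed robots.txt."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())

    def allowed(path: str, agent: str = "*") -> bool:
        return rp.can_fetch(agent, base_url + path)

    return allowed

class RateLimiter:
    """Enforce a minimum delay between successive requests to one host."""
    def __init__(self, min_interval: float):
        self.min_interval = min_interval
        self._last = 0.0

    def wait(self) -> None:
        elapsed = time.monotonic() - self._last
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last = time.monotonic()

# Hypothetical robots.txt content for demonstration only.
robots = "User-agent: *\nDisallow: /private/\n"
allowed = make_robots_checker(robots)
limiter = RateLimiter(min_interval=1.0)  # at most one request per second

print(allowed("/jobs/123"))   # public path
print(allowed("/private/x"))  # disallowed path
```

In a real crawler the checker and limiter would wrap each fetch call, so every request is vetted against robots.txt and spaced out per host.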
Requirements:
- 3+ years of Python development experience, including solid experience with web scraping frameworks (requests, BeautifulSoup, Scrapy, Selenium, or alternatives).
- Strong understanding of web technologies: HTML, CSS, JavaScript, and browser DOM behavior.
- Practical knowledge of parsing complex web pages and dynamic content.
- Experience working with APIs and designing scraping logic for different data formats (JSON, XML, etc.).
- Familiarity with common scraping challenges (e.g., captchas, user-agent spoofing, proxy rotation) and solutions.
- Experience storing and processing scraped data efficiently (SQL/NoSQL databases).
- Good communication skills and ability to work autonomously on mid-sized projects.
- Experience deploying scraping workloads to the cloud (AWS, GCP, Azure).
- Knowledge of distributed scraping architectures.
- Familiarity with Docker and orchestration tools like Airflow.
- Understanding of handling large-scale media or binary data scraping.
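For instance, "designing scraping logic for different data formats" might look like the following standard-library sketch, which normalizes the same record from JSON and XML sources into one schema. The field names and payloads are hypothetical, chosen only to show the pattern.

```python
import json
import xml.etree.ElementTree as ET

def from_json(payload: str) -> dict:
    """Extract a normalized record from a JSON payload."""
    data = json.loads(payload)
    return {"title": data["title"], "company": data["org"]["name"]}

def from_xml(payload: str) -> dict:
    """Extract the same normalized record from an XML payload."""
    root = ET.fromstring(payload)
    return {"title": root.findtext("title"), "company": root.findtext("org/name")}

# Two representations of the same hypothetical record.
json_src = '{"title": "Python Developer", "org": {"name": "GRAI"}}'
xml_src = "<job><title>Python Developer</title><org><name>GRAI</name></org></job>"

record = from_json(json_src)
assert record == from_xml(xml_src)  # both sources normalize identically
print(record)
```

Keeping one per-format parser per source, all emitting the same schema, is one common way to deliver the clean, normalized datasets mentioned above.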
Ready to apply?
Join GRAI and take your career to the next level!
The application takes less than 5 minutes.