Machine Learning Engineer
Parallel · Ukraine
Full-time · Engineering, Information Technology

This job is posted for Tie via Parallel


About Tie


Tie is building the next generation of identity resolution and marketing intelligence. Our platform connects hundreds of millions of consumers across devices, browsers, and channels—without relying on cookies—to power higher deliverability, smarter targeting, and measurable revenue lift for modern marketing teams. At Tie, AI is not a feature—it is a core execution advantage. We operate large-scale identity graphs, real-time scoring systems, and production ML pipelines that directly impact revenue, deliverability, and customer growth.


The Role


We are looking for a Senior AI / Machine Learning Engineer to design, build, and deploy production ML systems that sit at the heart of our identity graph and scoring platform. You will work at the intersection of machine learning, graph data, and real-time systems, owning models end to end, from feature engineering and training through deployment, monitoring, and iteration.

This role is highly hands-on and impact-driven. You will help define Tie’s ML architecture, ship models that operate at sub-second latency, and partner closely with platform engineering to ensure our AI systems scale reliably.


What You’ll Do


  • Design and deploy production-grade ML models for identity resolution, propensity scoring, deliverability, and personalization
  • Build and maintain feature pipelines across batch and real-time systems (BigQuery, streaming events, graph-derived features)
  • Develop and optimize classification models (e.g., XGBoost, logistic regression) with strong handling of class imbalance and noisy labels (a brief illustrative sketch follows this list)
  • Integrate ML models directly with graph databases to support real-time inference and identity scoring
  • Own model lifecycle concerns: evaluation, monitoring, drift detection, retraining, and performance reporting
  • Partner with engineering to expose models via low-latency APIs and scalable services
  • Contribute to GPU-accelerated and large-scale data processing efforts as we push graph computation from hours to minutes
  • Help shape ML best practices, tooling, and standards across the team
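
As a purely illustrative sketch of the kind of work described above, and not Tie’s actual code or API, the example below trains an imbalance-aware XGBoost classifier and serves it through a FastAPI scoring endpoint using libraries from the stack listed later in this posting; names such as ScoreRequest and /score are hypothetical.

import numpy as np
import xgboost as xgb
from fastapi import FastAPI
from pydantic import BaseModel

# Train on synthetic, heavily imbalanced data as a stand-in for real features.
rng = np.random.default_rng(42)
X = rng.normal(size=(10_000, 8))
y = (rng.random(10_000) < 0.03).astype(int)  # roughly 3% positive class

model = xgb.XGBClassifier(
    n_estimators=200,
    max_depth=4,
    learning_rate=0.1,
    # scale_pos_weight offsets class imbalance: negatives divided by positives.
    scale_pos_weight=float((y == 0).sum()) / max(int((y == 1).sum()), 1),
    eval_metric="aucpr",  # PR-AUC is more informative than accuracy on rare positives
)
model.fit(X, y)

app = FastAPI()

class ScoreRequest(BaseModel):
    features: list[float]  # the 8 feature values the model was trained on

@app.post("/score")
def score(req: ScoreRequest) -> dict:
    # Single-row inference path returning a propensity score.
    proba = model.predict_proba(np.asarray([req.features], dtype=float))[0, 1]
    return {"propensity": float(proba)}

In practice, a service like this would typically be run with an ASGI server such as uvicorn and packaged in a container for deployment.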


What You’ll Bring


Required Qualifications


  • 5+ years of experience building and deploying machine learning systems in production
  • Strong proficiency in Python for ML, data processing, and model serving
  • Hands-on experience with feature engineering, model training, and evaluation for real-world datasets
  • Experience deploying ML models via APIs or services (e.g., FastAPI, containers, Kubernetes)
  • Solid understanding of data modeling, SQL, and analytical workflows
  • Experience working in a cloud environment (GCP, AWS, or equivalent)


Preferred / Bonus Experience


  • Experience with graph data, graph databases, or graph-based ML
  • Familiarity with Neo4j, Cypher, or graph algorithms (community detection, entity resolution)
  • Experience with XGBoost, tree-based models, or similar classical ML approaches
  • Exposure to real-time or streaming systems (Kafka, Pub/Sub, event-driven architectures)
  • Experience with MLOps tooling and practices (CI/CD for ML, monitoring, retraining pipelines)
  • GPU or large-scale data processing experience (e.g., RAPIDS, CUDA, Spark, or similar)
  • Domain experience in identity resolution, marketing technology, or email deliverability


Our Technology Stack


  • ML & Data: Python, Pandas, Scikit-learn, XGBoost
  • Graphs: Neo4j (Enterprise, GDS)
  • Cloud: Google Cloud Platform (BigQuery, Vertex AI, Cloud Run, Pub/Sub)
  • Infrastructure: Docker, Kubernetes, GitHub Actions
  • APIs: FastAPI, REST-based inference services


What We Offer


  • Competitive compensation, including salary, equity, and performance incentives
  • Opportunity to work on core AI systems that directly impact revenue and product
