Opal Data Consulting
Data Scientist
Ukraine · 21 hours ago
Contract · Remote Friendly · Engineering, Information Technology

About Opal Data Consulting: 

At Opal Data we combine business and technical expertise to build data and software solutions for businesses. Traditional management consultants offer strategic advice without the technical skills to implement their proposed solutions. Software consultants often build tools that aren’t truly optimized for the underlying business or organizational need. We combine the best of both worlds.


We take on several kinds of projects: building tools that help our clients understand their organizations in real time, building predictive models that improve our clients’ operations, and building custom applications. Our clients are typically small to medium-sized companies across industries ($2M–$100M in revenue) or government agencies with similarly sized annual budgets.


Building a real-time understanding of an organization often involves creating and populating a data warehouse: we use APIs, scrapers, or prebuilt connectors to integrate all of a client’s systems (ERP, CMS, marketing platforms, accounting systems, etc.), write ETL scripts in Python or SQL to shape and combine that data (often necessitating the creation of a cloud environment to host serverless functions), and then build visualizations that let clients see what is happening in real time (in Tableau, Power BI, Looker, etc.). We often do a significant amount of related analytical work: looking for patterns, identifying areas of improvement, and creating software tools to reinforce those learnings (e.g., building notification systems that help operations teams follow the best practices we identify in our analysis, automating tasks, etc.).
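To give a flavor of the "shape and combine" step described above, here is a minimal, purely illustrative ETL sketch. The record layouts and field names (e.g., `customer_id`, `order_date`) are hypothetical stand-ins, not any client's actual schema, and the in-memory list stands in for a warehouse table:

```python
# Illustrative sketch only: a tiny extract/transform/load pass joining
# hypothetical ERP order rows with CRM customer records.
from datetime import date


def extract(raw_rows):
    """Extract: copy raw rows as they might arrive from an API or export."""
    return [dict(r) for r in raw_rows]


def transform(erp_rows, crm_rows):
    """Transform: join ERP order totals to CRM customer names by customer_id."""
    names = {c["customer_id"]: c["name"] for c in crm_rows}
    shaped = []
    for row in erp_rows:
        shaped.append({
            "customer": names.get(row["customer_id"], "unknown"),
            "order_date": date.fromisoformat(row["order_date"]),
            "total": round(float(row["total"]), 2),
        })
    return shaped


def load(rows, warehouse_table):
    """Load: append shaped rows to an in-memory stand-in for a warehouse table."""
    warehouse_table.extend(rows)
    return len(rows)
```

In practice the extract step would call a connector or API and the load step would write to a real warehouse; the structure, however, is the same.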


Building predictive models to improve our clients’ operations involves using machine learning to solve particular organizational challenges or seize opportunities. For instance, we have built models that predict which customers will churn (unsubscribe), both to identify the causal factors behind churn and to prioritize customers for outreach from retention teams. In other cases, we have built models that predict the performance of individual stores within a network, in order to identify and spread best practices from outperforming stores and to find ideal locations for new store expansion.
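The churn-prioritization idea above can be sketched in a few lines. This is a toy, not our production approach: the feature names and weights are invented for illustration, and in real projects the weights would be learned from data (e.g., with scikit-learn) rather than hand-set:

```python
# Toy churn scorer: a logistic model with hand-set (hypothetical) weights,
# used to rank customers for retention outreach.
import math

WEIGHTS = {"days_since_login": 0.05, "support_tickets": 0.4, "tenure_months": -0.08}
BIAS = -1.0


def churn_probability(customer):
    """Predicted probability of churn via the logistic function."""
    z = BIAS + sum(WEIGHTS[k] * customer[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))


def prioritize(customers, top_n=2):
    """Return the top_n customers most at risk, for the retention team."""
    return sorted(customers, key=churn_probability, reverse=True)[:top_n]
```

The same ranking output feeds the kind of notification tooling mentioned earlier: the model scores customers, and operations teams act on the highest-risk list.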


We are a small, nimble team looking to bring on a self-starter who excels at data science. You can read more about us at: www.opal-data.com


Job Summary:

The Data Scientist will report directly to Opal’s founder and technical lead. As a core member of a small team, you will have the opportunity to grow across a wide range of technical skill sets and to gain experience in many industries through our client projects.


This is a fully remote position, but we are seeking candidates exclusively from Ukraine; hiring from Ukraine is our small way to help during the war.


Because of the broad range of work we do, candidates are not expected to be experts in everything. The ideal candidate will have experience in many of the areas listed below under Major Responsibilities and a strong interest in learning the tools and techniques in which they do not already have expertise. Raw intelligence, curiosity, and excitement about experimentation and learning are some of the most important determinants of success in this position. We believe strongly in developing our team members and promoting from within, and we are looking for candidates who want to keep learning and growing within the organization.


In addition to a generous base compensation commensurate with experience, this position also earns profit sharing. Each month, total compensation is the greater of base compensation or that month’s profit share. In good months, our staff typically earn 30-60% more than their base compensation.
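As stated, the monthly rule is simply the greater of the two amounts. A one-line sketch with illustrative figures (not actual offers):

```python
# Monthly pay under the stated rule: the greater of base compensation
# or that month's profit share. Dollar amounts below are illustrative.
def monthly_pay(base, profit_share):
    return max(base, profit_share)
```

For example, with a $3,000 base, a month with a $4,200 profit share pays $4,200 (40% above base), while a slow month still pays the full $3,000 base.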


Major Responsibilities:

  • Use APIs and build scrapers to ingest data
  • Set up and work within cloud environments (Azure, AWS, GCP) to create and populate data warehouses and deploy ETL code to serverless functions
  • Create ETL scripts / data pipelines in Python / SQL to shape data for visualization and automation tasks
  • Visualize data and create dashboards in Tableau, Power BI, Looker, etc
  • Conduct one-off analyses to look for patterns and insights, make suggestions on future improvements to data collection, etc.
  • Create machine learning models, including engineering variables from available data to represent the hypotheses we want to test
  • Work on application backends
  • Write clean, well-documented code
  • Create agents via prompt engineering, fine-tuning, and decision pooling with open-source LLMs


Qualifications:

  • Bachelor’s degree in a computational field and a minimum of 2 years of work experience using Python as a Data Scientist, Data Engineer, or backend Software Developer; or, in lieu of formal education, a minimum of 4 years of technical work experience in those fields
  • Extremely proficient in Python
  • Proficient in SQL
  • Fluency in English
  • In your application, please also mention:
  • Experience with JavaScript, particularly for scrapers
  • Any other programming languages you have used
  • Experience with Tableau, Power BI, DOMO, Looker, or other dashboarding platforms
  • Experience building or expanding data warehouses
  • Experience with DevOps: setting up cloud environments and deploying containerized (Docker) applications or serverless functions
  • Machine learning / predictive modeling experience


Compensation

  • $2,000–$3,250 per month base salary
  • 30% profit sharing on top of base compensation. In good months our staff make 30-60% over their base earnings
  • Full time
  • UKR employment type: independent entrepreneur
