
Professional experience

Let’s connect on LinkedIn · Download my CV (PDF)

Nike (2 years)

  • πŸ‘¨πŸ½β€πŸ­   Tech Lead - Machine Learning Engineering
  • πŸ‡³πŸ‡± Hilversum, NL
  • πŸ—“οΈ Aug 2022 - Aug 2024

My responsibility was to guide and lead all technical aspects of our forecasting system and its development, including solutions architecture, systems design, infrastructure and reliability considerations, CI/CD, speed of iteration, down to repository structure, code patterns, software tooling, and developer experience.

Product: Delivery of reliable insights on Nike’s consumer demand to senior ELT members and multiple store planning and merchandise financial planning teams around the globe on time and on a regular basis.

Deliverables: Medium- to long-term forecasts and demand sensing for multiple metrics, various granularity levels (both temporal and hierarchical), and under different assumptions for leading demand indicators such as promotional plans.

Forecast reconciliation: Our solutions ranged from highly granular bottom-up forecasts using many contextual drivers, to high-level top-down forecasts driven by macroeconomic projections. In all cases, we had to ensure that our forecasts (as well as the forecasts produced by other teams) were aligned and consistent.
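To illustrate the consistency requirement, here is a minimal sketch (in plain Python, with hypothetical region names and weights) of the two simplest reconciliation schemes: proportional top-down disaggregation and bottom-up aggregation. This is the idea only, not the production approach:

```python
def reconcile_top_down(total_forecast, historical_shares):
    """Disaggregate a top-level forecast across children using their
    historical proportions, so the children sum back to the total."""
    s = sum(historical_shares.values())
    return {k: total_forecast * v / s for k, v in historical_shares.items()}


def reconcile_bottom_up(child_forecasts):
    """The aggregate forecast is simply the sum of the child forecasts."""
    return sum(child_forecasts.values())
```

By construction, applying `reconcile_bottom_up` to the output of `reconcile_top_down` recovers the original total, which is exactly the alignment property we needed across granularity levels.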

Forecast evaluation: To build trust with our business stakeholders and help our team iterate faster and more confidently on new model versions, I put forward a set of guidelines and design principles for implementing robust, unbiased, and theoretically sound evaluation strategies for our modelling approaches. First shared as an RFC within the team and later externally, the document covered important considerations such as choosing the right measures of forecast accuracy, cross-validation design choices, aggregation over temporal and hierarchical granularities, and the difference between model selection and model evaluation and its consequences.
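One of those cross-validation design choices is how to split time series data without leaking the future into training. A minimal sketch of rolling-origin (expanding-window) evaluation, with made-up parameters:

```python
def rolling_origin_splits(n, initial_train, horizon, step=1):
    """Yield (train_indices, test_indices) pairs for expanding-window
    time-series cross-validation: the training window always ends
    strictly before the test window, so the model never sees the future."""
    origin = initial_train
    while origin + horizon <= n:
        yield list(range(origin)), list(range(origin, origin + horizon))
        origin += step
```

Each split moves the forecast origin forward, mimicking how the model would actually be used: trained on history, evaluated on the next `horizon` steps.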

Orchestration: Our pipelines were orchestrated and scheduled using Airflow, Kubernetes, and SageMaker. We defined many ETL pipelines but made all inference jobs idempotent and easy to debug by providing clear data provenance and allowing engineers to resolve issues with upstream data sources asynchronously.
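The idempotency property can be sketched with a hypothetical helper that keys each inference run by its logical date and turns re-runs into no-ops; the `store` dict stands in for any durable key-value layer, and all names here are illustrative:

```python
def run_idempotent(run_date, compute, store):
    """Run an inference job at most once per logical date.
    Re-running for the same date returns the stored result unchanged,
    which makes scheduler retries and backfills safe to repeat."""
    key = f"inference/{run_date}"
    if key in store:          # already materialised: no-op
        return store[key]
    store[key] = compute(run_date)
    return store[key]
```

Because each run is addressed by its logical date, an engineer can fix an upstream data issue and simply re-trigger the affected dates without side effects.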

Data governance: Dataset validation checks were implemented for all upstream and downstream datasets and most intermediate transformations. Apart from allowing us to better document all our datasets, this has proved useful many times in detecting upstream regressions and changes that could go unnoticed otherwise.
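A dataset validation check of this kind can be sketched as a plain-Python schema of per-column type and value predicates; the real checks ran against warehouse tables, so this only shows the shape of the idea:

```python
def validate_dataset(rows, schema):
    """Check each row against a schema of {column: (type, predicate)}.
    Returns a list of human-readable failures (empty list == valid)."""
    failures = []
    for i, row in enumerate(rows):
        for col, (typ, pred) in schema.items():
            if col not in row:
                failures.append(f"row {i}: missing column '{col}'")
            elif not isinstance(row[col], typ):
                failures.append(f"row {i}: '{col}' has type {type(row[col]).__name__}")
            elif not pred(row[col]):
                failures.append(f"row {i}: '{col}' failed value check ({row[col]!r})")
    return failures
```

Running such checks on every upstream load is what surfaces silent schema or distribution changes before they reach a model.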

Beat (1yr 2mos)

  • πŸ‘¨πŸ½β€πŸ­   Senior Machine Learning Engineer
  • πŸ‡³πŸ‡± Amsterdam, NL
  • πŸ—“οΈ Jun 2021 - Jul 2022

Beat was the fastest-growing ride-hailing service in Latin America. My mission was to develop, deploy, and maintain Data Science and Machine Learning solutions to detect and prevent fraud, reduce financial losses and abuses, and ensure a safe environment for all users on our platform. All of this while keeping the business metrics healthy and the company growing!

Feature store deployment: Coordinated and led the design and implementation of a feature store solution to serve all ML and Analytics teams at Beat. Started by testing and evaluating managed solutions such as Databricks, SageMaker, and Tecton, but settled on the open-source Feast for the registry and serving layers. Deployed the end-to-end solution on our Kubernetes cluster, using our data lake and Trino as the offline store, and a low-latency gRPC service backed by ElastiCache as the online store. Worked closely with Feast’s dev team by providing feedback and contributing to the open-source project (see my open-source contributions).
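The core correctness guarantee an offline feature store provides is the point-in-time join: each training row only sees feature values observed at or before its own event timestamp, which prevents label leakage. A toy sketch of that join (not Feast’s actual API):

```python
def point_in_time_join(entity_rows, feature_log):
    """For each (entity_id, event_ts) pair, attach the latest feature
    value observed at or before event_ts -- the point-in-time-correct
    join an offline feature store performs to avoid label leakage."""
    out = []
    for entity_id, event_ts in entity_rows:
        candidates = [(ts, val) for eid, ts, val in feature_log
                      if eid == entity_id and ts <= event_ts]
        value = max(candidates)[1] if candidates else None
        out.append((entity_id, event_ts, value))
    return out
```

The online store, by contrast, only needs the latest value per entity, which is why a low-latency cache like ElastiCache fits that side.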

Fraud prevention: Worked on the design and development and led the deployment of a chargeback fraud prevention system by 1) designing the solution’s architecture; 2) validating the assumptions and performance estimations; 3) developing an ML batch workflow (PySpark + Argo Workflows) that computed risk scores and pushed them to a Kafka topic, with asynchronous ingestion into a fast store by a backend system; 4) configuring monitoring, alerting, and data validation for the application and infrastructure with Prometheus and Grafana; and 5) designing and running an online controlled experiment to measure the real impact of the model.

Fraud detection: Collaborated with another Data Scientist on a fraud detection problem where the labels were sparse, biased, noisy, and mostly positive and unlabelled (PU). We considered and explored several modelling options, from naive binary classification with gradient boosting models to more robust active-learning approaches. The final implementation was based on an iterative semi-supervised learning solution that yielded twice as many fraud cases as the previous system while keeping precision extremely high.
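The iterative pseudo-labelling loop behind such a solution can be sketched generically: fit on the current positive set, promote unlabelled examples that score above a high-precision threshold, and repeat. The `fit_fn` and `score_fn` below are pluggable stand-ins, not the actual models used:

```python
def self_train(score_fn, fit_fn, positives, unlabelled, threshold=0.95, rounds=3):
    """Iterative pseudo-labelling for positive-unlabelled (PU) data:
    repeatedly fit on the current positives, then promote unlabelled
    examples whose score clears a high-precision threshold."""
    positives = set(positives)
    unlabelled = set(unlabelled)
    for _ in range(rounds):
        model = fit_fn(positives)
        promoted = {x for x in unlabelled if score_fn(model, x) >= threshold}
        if not promoted:
            break
        positives |= promoted
        unlabelled -= promoted
    return positives
```

Keeping the promotion threshold high is what lets recall grow round by round without sacrificing precision.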

Tiqets (2yrs 5mos)

  • πŸ‘¨πŸ½β€πŸ­   Machine Learning Engineer
  • πŸ‡³πŸ‡± Amsterdam, NL
  • πŸ—“οΈ Jan 2019 - May 2021

As an ML Engineer, I was part of Tiqets’ core Data Team. Working closely with business analysts, data engineers, product owners, and ELT members, I applied software development, data analytics, and machine learning to scale and operationalise statistical models and make the whole organisation more data-driven.

Time series forecasting: Operationalised and automated demand forecasting at Tiqets by developing a generalised time-series forecasting framework from scratch that would support pre-processing, model selection, evaluation, and periodic batch inference jobs for various business contexts and requirements. Each task was distributed across an array of Celery workers deployed on our Kubernetes cluster. Forecasted values and model metadata were pushed to Amazon Redshift and visualised by business stakeholders in our Looker BI instance. We also used DataDog for application and infrastructure monitoring.
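A framework like this hinges on every model exposing the same fit/predict interface, so that model selection and batch inference can swap models per business context. A minimal sketch with one illustrative model, a seasonal-naive baseline (names and interface are illustrative, not the framework’s actual code):

```python
class SeasonalNaiveForecaster:
    """One concrete model behind a shared fit/predict interface:
    forecast each step by repeating the last observed season."""

    def __init__(self, season_length=7):
        self.season_length = season_length
        self.history = []

    def fit(self, series):
        self.history = list(series)
        return self

    def predict(self, horizon):
        s = self.season_length
        last_season = self.history[-s:]  # most recent full season
        return [last_season[i % s] for i in range(horizon)]
```

With every model honouring this contract, cross-validation, evaluation, and the Celery-distributed inference jobs stay model-agnostic.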

Recommender system: Improved recommendations across our platform by integrating with the AWS Personalize service and developing robust heuristics for cold-start instances (taking distance, popularity, and seasonality into account). To help the team iterate faster and with greater confidence, we also implemented a time-dependent offline evaluation for recommender systems, curated for our e-commerce setting.
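A cold-start heuristic combining distance, popularity, and seasonality can be sketched as a weighted blend; the weights below are illustrative placeholders, not the tuned production values:

```python
def cold_start_score(distance_km, popularity, seasonal_boost,
                     w_dist=0.5, w_pop=0.3, w_season=0.2):
    """Blend distance, popularity, and seasonality into one ranking
    score for items with no interaction history yet. Closer and more
    popular items score higher; inputs besides distance are in [0, 1]."""
    proximity = 1.0 / (1.0 + distance_km)  # decays smoothly with distance
    return w_dist * proximity + w_pop * popularity + w_season * seasonal_boost
```

Such a heuristic gives every new item a sensible rank on day one, until enough interactions accumulate for the personalised model to take over.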

Learning-to-Rank & Reinforcement Learning: Implemented online Bayesian Reinforcement Learning-to-Rank bandit strategies (e.g. Thompson sampling), which balance exploration and exploitation to continuously learn and improve the rankings of product variants on product pages. An Airflow pipeline was scheduled to frequently update all item rankings by re-sampling from updated posterior distributions as new data came in.
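For a binary click/no-click signal, Thompson sampling over Beta posteriors looks roughly like this; it is a toy sketch, and the production setup tracked richer per-variant statistics:

```python
import random


def thompson_rank(stats):
    """Rank variants by one draw from each Beta posterior, with
    alpha = clicks + 1 and beta = impressions - clicks + 1.
    Uncertain variants occasionally rank high (exploration) while
    proven variants usually dominate (exploitation)."""
    draws = {v: random.betavariate(c + 1, n - c + 1)
             for v, (c, n) in stats.items()}
    return sorted(draws, key=draws.get, reverse=True)
```

Re-running this ranking on a schedule, with posteriors refreshed from the latest data, is exactly the re-sampling step the Airflow pipeline performed.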

Supervised Learning-to-Rank: Improved product rankings by framing the task as a Supervised Machine-Learned Ranking problem, and comparing predicted rankings to a defined ideal ranking (e.g., using nDCG).
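nDCG compares the discounted cumulative gain of the predicted order against that of the ideal order, so a perfect ranking scores exactly 1.0. A compact sketch:

```python
import math


def dcg(relevances):
    """Discounted cumulative gain of relevance grades in ranked order."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))


def ndcg(predicted, ideal):
    """Normalised DCG: divide by the DCG of the best possible ordering."""
    best = dcg(sorted(ideal, reverse=True))
    return dcg(predicted) / best if best > 0 else 0.0
```

Here `predicted` is the list of true relevance grades in the order the model ranked them, and `ideal` is the same grades sorted best-first.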

πŸ‘¨πŸ½β€πŸ’Ό Leadership / Management / Soft Skills

  • Helped management (CEO, COO, and CTO) and product owners in brainstorming sessions and in defining company OKRs and team KPIs, and led meaningful reporting initiatives and important ad-hoc analyses.
  • The team grew from 3 to 15 people during my time there. I helped with the recruiting, assessing, and interviewing of candidates (including the current Head of Data) as well as organising and attending career fairs.
  • Provided guidance and supervision to two University students working on their Master’s Thesis. Both students produced valuable projects for Tiqets and finished with outstanding grades (8/10 and 8.5/10).
  • Integrated ideas from methodologies such as CRISP-DM to bring structure to the Data Science project lifecycle. Helped create and prioritise the Data Science backlog, as well as making weekly meetings more fruitful and actionable.
  • Led several company-wide training sessions on general analytical competency and advanced BI (Looker) practice.

Accelogress (1yr 10mos)

  • πŸ‘¨πŸ½β€πŸ­   Machine Learning Engineer
  • πŸ‡¬πŸ‡§ Guildford, UK
  • πŸ—“οΈ Jun 2016 - Mar 2018

Worked closely with the CEO and Lead Developer at Accelogress on the Save-a-Space project, where I led the development and implementation of time-series forecasting models and scheduled batch inference jobs for predicting car park availability for multiple locations around the UK. I also developed and deployed a REST API to expose historical, real-time, and forecasted availability to our mobile app and web dashboards.

Technologies used: Docker, nginx, Gunicorn, Django REST Framework, JavaScript, AngularJS, scikit-learn, MySQL