Data Engineer at Prediktive

Closed job - No longer receiving applicants

Prediktive is the premier Technology Business Partner for Startups and Digital Companies.
We are dedicated to building technology teams and connecting them with companies that are looking to hire only the best! Prediktive specializes in executing software product development projects and strategic business programs.

👉 About the position 👈

We are looking for a Data Engineer to work on a long term project for one of our clients, a Data Analytics and Business Intelligence services company based in Los Angeles.

The person in this role will work with a team of 3 full-stack engineers based in Los Angeles to help our Client build a new version of their existing Sales product, which has ~10k users on both desktop and mobile. This role will focus on building the data pipelines that consume and transform the end customer's data, received in CSV format, and feed it downstream into various data stores including Google BigQuery, Google Firestore, and Cloud Storage. Our Client is building the entire stack cloud-native from front to back, including the data pipelines. On the data pipeline side, our Client will be using Google Cloud Platform's Cloud Composer (managed Airflow), Dataproc (managed Spark), and BigQuery products. All development for the data pipelines will be in Python and SQL.
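As a rough illustration of the CSV-to-warehouse work described above, here is a minimal Pandas sketch of a transform step. The sample data, column names, and `transform` function are hypothetical, and the actual BigQuery load step (e.g. via the google-cloud-bigquery client) is omitted:

```python
import io

import pandas as pd

# Hypothetical sample of an end customer's CSV feed (illustrative only).
raw_csv = io.StringIO(
    "Order ID,Sale Date,Amount USD\n"
    "1001,2023-01-15,250.00\n"
    "1002,2023-01-16,125.50\n"
)

def transform(csv_file):
    """Normalize headers and parse types before loading downstream."""
    df = pd.read_csv(csv_file)
    # Snake-case the headers so they are valid BigQuery column names.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    # Parse the date column into proper timestamps.
    df["sale_date"] = pd.to_datetime(df["sale_date"])
    return df

df = transform(raw_csv)
# From here, df could be written to BigQuery, Firestore, or Cloud Storage.
print(df.head())
```

In production such a transform would typically run as a task inside a Cloud Composer (Airflow) DAG or as a Spark job on Dataproc rather than as a standalone script.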

The most exciting aspect of this position is that the candidate will learn how to build an entire cloud-native stack on Google's latest platform products. Beyond the initial data pipeline work, this role will also have opportunities to build out data microservices and serverless functions, machine learning algorithms and services, and data infrastructure automation.

Responsibilities ❗

  • Build data pipelines with Python and SQL that consume and transform the end customer's data, received in CSV format, and feed it downstream into data stores including Google BigQuery, Google Firestore, and Cloud Storage
  • Build out data microservices, serverless functions, and machine learning algorithms and services
  • Contribute to data infrastructure automation

Requirements ❗

  • Bachelor's Degree in Computer Science, Systems Engineering, or a related field
  • Advanced level of English
  • 4+ years of experience working as a Data Engineer
  • Expertise in Python and SQL development
  • Experience building data pipelines using tools such as PySpark, Pandas, and SQLAlchemy
  • Experience working with ETL processes
  • Familiarity with cloud technologies such as Google Cloud Platform and its services

Bonus Points 💯

  • Experience working with Docker
  • Experience with DataProc, BigQuery, Apache Airflow, Apache Spark, Cloud Composer

Benefits 💌

  • Paid time off
  • USD Compensation
  • 100% remote

  • Fully remote: You can work from anywhere in the world.
  • Flexible hours: Flexible schedule and freedom to attend to family needs or personal errands.
  • Informal dress code: No dress code is enforced.