Interested in joining us?

At Data Build Company, we’re happy to tell you more about who we are, what we stand for, and the specifics of the role, so we can see whether we’re the right fit for each other.

Senior Engineering Consultant

Job Description

Interview Process

At Data Build Company, we value your time. As such, we have clear interview steps. 

  1. Intro & culture fit
  2. Use case
  3. Final presentation


The process is typically completed within 1-2 weeks, depending on the candidate’s availability.

The role

  • Design, implement and manage data pipelines that ingest vast amounts of data from various sources

  • Enhance end-to-end workflows with automation, CI/CD processes, proper orchestration and monitoring

  • Innovate and advise on the latest technologies and standard methodologies in Data Engineering and be able to identify software solutions that can address hurdles in client organizations

  • Act as a technical leader when resolving problems, communicating with both technical and non-technical audiences

  • Mentor, coach, and steer colleagues through technical challenges

  • Work independently and self-steer initiatives, influencing the community of developers

  • Take ownership of project work and develop client relationships

  • Be a confident self-starter! Independently plan, design, code, debug and test major features, ensuring issues are identified early and requirements are delivered

  • Identify technical areas for improvement and create business cases for improvement

Desired skills & experience

  • BS/MS in Computer Science or a related field, with 7+ years of software/data engineering experience

  • In-depth understanding of data lake architectures and experience implementing them; data mesh architecture a plus

  • Experience working across cloud providers (e.g., AWS, Azure, GCP) a plus

  • Experience in orchestration technologies (e.g., Airflow, AWS Step Functions)

  • Excellent knowledge of programming languages (e.g., Python, Scala, SQL)

  • Hands-on experience in application deployment (e.g., Docker, container registry, AKS, etc.)

  • Hands-on experience with CI/CD tooling (e.g., GitHub Actions, GitLab CI/CD, Travis, etc.)

  • Technical expertise with data modeling and mining techniques

  • Experience within the Apache Hadoop ecosystem (e.g., Kafka, Spark, Hive)

  • Experience with data warehousing technologies (e.g., Snowflake, BigQuery, Synapse)

  • Experience managing and provisioning Infrastructure-as-Code (e.g., Terraform, Ansible)

  • Experience implementing proper platform/pipeline logging and monitoring, and an understanding of its importance

  • Experience with data governance initiatives and/or integrating data quality/data catalogue/MDM solutions a plus

  • Proficiency with modern software development practices, such as Agile, source control, and project management and issue tracking with JIRA

  • You are fluent in English; Dutch is a great plus.