Senior Data Engineer

We are seeking an experienced Senior Data Engineer for one of our clients on a contract basis. In this role, you will design, build, and optimise data pipelines and architectures, leveraging Databricks, AWS/GCP/Azure, SQL, and Python (or comparable languages such as Scala or Java) to enable efficient data processing and analytics. This role offers the opportunity to work on innovative projects in a collaborative and fast-paced environment.

Key responsibilities:

  • Design, develop, and maintain scalable data pipelines and workflows using Databricks, PySpark, and other relevant technologies.
  • Implement and optimise ETL/ELT processes to extract, transform, and load data from various sources into centralised data repositories.
  • Build and manage data architectures, including data lakes and warehouses, to support analytics and business intelligence needs.
  • Write efficient, scalable, and maintainable SQL queries for data modelling, analysis, and reporting.
  • Collaborate with data scientists, analysts, and cross-functional teams to deliver data solutions that drive business decisions.
  • Monitor, troubleshoot, and optimise data pipelines for performance and reliability.
  • Ensure data quality, security, and compliance with industry standards and regulations.
  • Document systems, processes, and workflows to support knowledge transfer and ongoing development.
  • Stay up to date with emerging technologies and best practices in data engineering.

Required skills and qualifications:

  • 5+ years of experience in data engineering or related roles.
  • Proficiency in at least one major programming language (e.g., Python, Scala, Java) for data processing.
  • Advanced SQL skills for querying, modelling, and optimising complex datasets.
  • Experience working with cloud platforms (AWS, GCP, Azure) and their data-related services (e.g., S3/BigQuery/ADLS, Glue/Dataflow/Data Factory, Redshift/Snowflake/Synapse).
  • Strong hands-on experience with PySpark (or a similar distributed data processing framework).
  • Experience designing and managing ETL/ELT workflows and pipelines.
  • Familiarity with big data frameworks and technologies for handling large-scale datasets.
  • Strong understanding of data modelling, normalisation, and schema design principles.
  • Experience with version control systems (e.g., Git) and CI/CD pipelines.
  • Excellent problem-solving and troubleshooting skills, with the ability to work independently and collaboratively.

Desirable skills:

  • Experience with real-time data streaming tools like Apache Kafka or Kinesis.
  • Familiarity with orchestration tools like Apache Airflow or Prefect.
  • Knowledge of data visualisation platforms (e.g., Tableau, Power BI).
  • Understanding of machine learning workflows and integrating models into data pipelines.
  • Knowledge of data security and compliance regulations (e.g., GDPR, HIPAA).

If you believe you’re a great fit and share our passion, we’d love to hear from you! Even if we don’t contact you for this specific opportunity, your profile will be considered for future roles. Our team continuously matches top IT experts with leading companies, ensuring you get the best opportunities when they arise.
