We are looking for a skilled and proactive Data Engineer with strong expertise in the Microsoft Azure data ecosystem, including Microsoft Fabric, Synapse Analytics, and Delta Lake. This role requires hands-on experience building and optimising data pipelines and notebooks, transforming large datasets, and enabling data-driven decision-making across the organisation.
The successful candidate will play a key role in designing and implementing scalable data solutions, collaborating with both technical and non-technical stakeholders, and helping shape our modern data architecture.
Key Responsibilities
- Design, develop, and maintain robust data pipelines and notebooks using Synapse Analytics and Microsoft Fabric.
- Build and optimise complex data ingestion and transformation logic to support enterprise-level reporting and analytics.
- Leverage Delta Lake architecture to support both batch and streaming data processing.
- Develop high-quality, reusable code in Python and PySpark, following best practices and ensuring maintainability.
- Create insightful, interactive reports and dashboards using Power BI.
- Use Git and Azure DevOps (ADO) for version control and deployment workflows.
- Continuously improve the performance, reliability, and scalability of data systems.
- Advocate for best practices in data governance, data quality, and documentation.