We are currently based at Metropolis, Buona Vista, Singapore. This is a 12-month fixed-term contract role, with the possibility of extension.

The Senior Data Engineer will be part of the Data Engineering team, which creates, maintains, scales, and improves the enterprise data platform that provides data for AI/Data Science solutions, applications/tools, and other digital use cases.

Job Responsibilities:

  • Design, develop, and maintain robust, scalable, and sustainable data products, and build and optimize data pipelines and infrastructure.
  • Collaborate with stakeholders to understand their data requirements and translate them into technical solutions.
  • Identify and implement data quality monitoring and validation processes to ensure data integrity.
  • Implement data quality frameworks and ensure data governance best practices, including data lineage, data documentation, and data security.
  • Build out the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Synapse, ADF, Spark, Kafka, or similar technologies.
  • Work closely with Data Analysts and the Data Architect to support their data needs and enable advanced analytics and machine learning initiatives.
  • Contribute to the development of the organization's data strategy, including evaluating new technologies, tools, and frameworks to enhance the data engineering ecosystem.

Educational Qualification & Experience:

  • BA/BS degree in Computer Science, Computer Engineering, Electrical Engineering, or a related technical field
  • 4-8 years of total IT experience, preferably in the field of data engineering
  • 4+ years’ experience with Azure services, including IAM, Synapse, Data Lake, SQL Server, ADF, etc.
  • 2+ years’ experience in creating and deploying Docker containers on Kubernetes.
  • 2+ years’ experience in supporting development teams on Kubernetes best practices, troubleshooting, and performance optimization.
  • 2+ years’ experience with CI/CD pipeline tools such as Jenkins and GitHub Actions
  • 2+ years’ experience with Synapse data warehousing and data lake solutions
  • Strong programming skills in Python, PySpark, and SQL
  • 4+ years of experience in scripting and automation using languages such as Bash, Python, or Go
  • 2+ years of experience with infrastructure-as-code tools such as Terraform, Ansible, or CloudFormation and containerization technologies (e.g., Docker, Kubernetes).
  • Knowledge of Agile methodologies and software development lifecycle processes
  • Proven experience in designing and implementing large-scale data solutions, including data pipelines and ETL processes on Azure.


Required Knowledge:

  • Ability to troubleshoot and resolve data-related issues, performance bottlenecks, and scalability challenges on Azure
  • Solid understanding of DevOps principles and experience with infrastructure automation using tools like Terraform or CloudFormation.
  • Hands-on experience with the Azure cloud platform and related services (e.g., Synapse, Data Lake).
  • Understanding of data warehousing concepts and best practices
  • Ability to work closely with stakeholders to understand business requirements and translate them into data solution designs.
  • Strong understanding of data architecture principles, data modelling techniques, and data integration patterns.

Ref ID: JT