
Senior DevOps/Data Engineer – SAS / Cloud / Data Platforms
Location: Sydney – Surry Hills (Hybrid)
Employment Type: 12-month contract with potential for a 12-month extension
Rate: Market rate (negotiable)
Citizenship: Australian citizens only
Security Clearance: Must be eligible to obtain a Baseline clearance
About the Role
We are seeking a highly skilled Senior DevOps/Data Engineer to join a major federal data platform project. This is a hands-on role where you will lead the development, deployment, and optimisation of enterprise data solutions across multiple platforms. You will collaborate with architects, developers, and analysts to ensure data pipelines, ETL processes, and analytics systems are robust, secure, and scalable.
You will also contribute to DevOps practices, including CI/CD, version control, and containerisation, while mentoring team members and implementing best practices across the data delivery lifecycle.
Key Responsibilities
- Develop, audit, and optimise SQL and ETL code across multiple data warehouses (MS SQL, Teradata, Snowflake, Redshift, Databricks).
- Implement, customise, and maintain SAS solutions (Viya, Grid, 9.4) and SAS Visual Investigator workflows.
- Apply DevOps practices including CI/CD pipelines, version control (GitHub, GitLab, Azure DevOps), and release management.
- Work with cloud and container platforms (AWS, Azure, OpenShift, Kubernetes) to modernise and scale data solutions.
- Review and approve data designs, data models, and integration frameworks.
- Facilitate knowledge transfer, mentoring, and training for developers and team members.
- Ensure compliance with data governance, security standards, and Australian Government frameworks.
Key Deliverables
- Well-documented and optimised SQL/ETL code and data models.
- Functional and maintainable SAS solutions supporting analytics and investigative workflows.
- Effective CI/CD pipelines, version-controlled codebases, and secure data platforms.
- Data integration frameworks that support rapid delivery of new data sources.
Required Skills & Experience
- Proven experience as a Data Engineer, DevOps Engineer, or similar, with hands-on SAS expertise (Viya/Grid/9.4).
- Strong SQL/ETL development skills and experience with data warehouses (Teradata, Snowflake, Redshift, Databricks).
- Programming experience in Python, SAS, R, or Java for building ETL pipelines.
- Knowledge of DevOps tools, CI/CD pipelines, and version control practices.
- Experience with cloud platforms (AWS, Azure) and container orchestration (OpenShift, Kubernetes).
- Experience in data modelling, metadata management, and data governance.
- Ability to understand user requirements, prioritise work items, and deliver high-quality code.
- Exposure to government or enterprise environments handling sensitive data is highly desirable.
Desirable Skills
- Experience monitoring and troubleshooting system health using SAS tools.
- Familiarity with cloud data integration services (AWS Glue, S3, Databricks).
- Tertiary qualifications in mathematics, statistics, computer science, or equivalent experience.
Additional Information
- Hybrid work arrangement: minimum three days in-office per week, with flexible work-from-home for the remaining days.
- Occasional travel may be required depending on project needs.
- Candidates must be Australian citizens and able to obtain a Baseline security clearance.
- Resumes should be sent to: Priya.gabriel@talentinternational.com