Opportunity to work for this dynamic, fast-paced FMCG company as a Data Engineer, where you will be responsible for the preparation, development, testing and maintenance of data engineering solutions, including building data pipelines, data warehouse integration, analysis, profiling, data development and production infrastructure.
In this brand-new role you will also build data ingestion and data transformation infrastructure, while playing an integral part in building, managing and supporting an enterprise data platform that empowers business stakeholders to derive value and insights from data, and ultimately underpins enterprise data operations.
- Translate strategic requirements into effective solutions that meet business needs
- Perform systems and applications performance characterization and trade-off studies through analysis and simulation
- Embed business intelligence best practices (e.g. dimensional modelling, large-scale distributed ETL pipelines) to enable enterprise data insights and large-scale machine learning
- Create and maintain analytics data pipelines that generate the data and insights that power business decision-making
- Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
- Create data models and speak to the trade-offs of different modelling approaches
- Design, model and maintain data structures for analytics purposes
- Maintain and configure data engineering and ETL/ELT tools
- Improve the data availability by acting as a liaison between analytics teams and source systems
- Collect, blend and transform data for batch and streaming use cases using ETL tools, a variety of database types, and code development (SQL, Spark, AWS Glue, etc.)
Skills and experience
- 5–7+ years' experience in a similar data engineering position
- Extensive knowledge of BI concepts (e.g. ETL, dimensional modelling, data warehouse design, dashboarding)
- Strong experience with cloud-based data warehousing, data architecture and data lake concepts
- Knowledge of query languages (e.g. SQL, Spark), database design, query optimisation, and query planner internals
- Experience in managing data pipelines with tools and packages such as Talend, Informatica, Airflow
- Experience with delivering on AWS or Azure cloud environments
- Proficient in the AWS data engineering technical stack: AWS Glue, SQL, Python, dbt (Cloud or Core), AWS S3, Kinesis, RDS, DynamoDB, Athena, and Linux/Unix and Windows shell scripting
- Experience in database technologies such as Snowflake, AWS Redshift, MS SQL Server, Oracle, Teradata, etc.
- Experience with a DevOps tech stack supporting CI/CD pipelines, including documentation
Apply now to secure an interview or contact Josh D’Monte on 9236 7723 for a confidential discussion.