About the role
Senior Data Engineer role in AWS's Sales Insights, Analytics, Data Engineering and Science (SIADS) organisation, responsible for building and maintaining the data infrastructure and analytics products that power business decisions across Asia Pacific and Japan (APJ). The role involves designing data pipelines, data lakes, and visualisation platforms, and collaborating with Sales, Sales Operations, and Finance stakeholders.
Business · Full-time · Corporate Operations
Key Responsibilities
- Own the design, development, and maintenance of the APJ SIADS data analytics platform, including the data pipelines, data lakes, and analytical and visualisation platforms that power SMGS business insights
- Build and maintain APJ-specific data models and pipelines, ensuring alignment between worldwide (WW) definitions and APJ business logic, and validating data quality against WW business standards
- Design and maintain the backend data infrastructure for end-user last-mile analytics tools, implementing ETL processes that ensure data is clean, fresh, and optimised for APJ analytical requirements
- Develop monitoring capabilities for data quality and query performance to keep platforms reliable and responsive
- Collaborate on the design and delivery of metrics, reports, analyses, and dashboards that drive key business decisions across APJ, in partnership with key stakeholders such as Sales, Sales Operations, and Finance
- Ensure pipelines and data products meet the requirements of field teams across key verticals and business units
- Maintain compliance with information security policies and data governance standards for all infrastructure and software used by the team
Requirements
- 8+ years of data engineering experience
- Competent in performing data transformation using SQL
- Experienced in managing and maintaining data pipelines
- Proficient in cloud computing with Infrastructure-as-Code (IaC)
- Skilled in designing system solutions at scale
- Familiar with leveraging AI-assisted coding tools
- Strong written and verbal communication skills
- Knowledgeable in Apache Spark for large-scale data workloads
- Proficient in serverless ETL using AWS Glue for data ingestion, transformation, and cataloguing
- Skilled in automating data quality validation to detect anomalies and enforce data integrity