About the role
Apple's AiDP team is seeking a Software Engineer to build scalable distributed systems for cloud analytics and data pipelines. The role involves designing, developing, and maintaining data platforms using technologies like Kafka, Spark, and Snowflake, and collaborating with internal customers to streamline data solutions.
Technology · Full-time
Key Responsibilities
- Build high-quality, scalable, and resilient distributed systems that power Apple's cloud analytics platforms and data pipelines.
- Engineer high-quality, scalable, and resilient distributed systems in the cloud that power data exploration, analytics, reporting, and production models.
- Build solutions that integrate open source software with Apple's internal ecosystem.
- Drive development of new components and features from concept to release: design, build, test, and ship at a regular cadence.
- Work closely with internal customers to understand their requirements and workflows, and propose new features and ecosystem changes that streamline their experience using the solutions on our platform.
- Spend a large part of your time writing code and designing and developing applications in the cloud, with the remainder spent tuning and debugging the codebase, supporting production applications, and supporting application end users.
Requirements
- Knowledge of BI concepts and implementation experience in the cloud with databases such as Snowflake or BigQuery.
- Programming experience with Python, Scala or Java.
- Experience developing highly optimized SQL queries, procedures, and semantic processes for distributed data applications.
- Bachelor's degree in Computer Science or equivalent experience.
- 3 or more years of experience building enterprise-level data applications on distributed systems.
- Hands-on experience designing and developing cloud-based applications involving compute services, database services, RESTful APIs, ETL, queues, and notification services.
- Experience with cloud data warehousing platforms such as Snowflake is highly valued.
- Hands-on knowledge of the Spark cluster-computing framework and Kubernetes or similar container orchestration technologies.
- Experience developing big data applications using Java, Spark, and Kafka is a huge plus.
- Understanding of the fundamentals of object-oriented design, data structures, algorithm design, and problem solving.
- Cloud technology experience on platforms such as AWS, Microsoft Azure, and Google Cloud.
- Data Visualization Tools: experience with software such as Streamlit, Superset, Tableau, Business Objects, and Looker.
- Data Insights and KPIs: hands-on experience generating and visualizing data insights, metrics, and KPIs.
- Experience using basic ML models for anomaly detection, forecasting, and GenAI.