Apple

Software Engineer (Data), AI & Data Platforms

Technology · Singapore · Onsite · Posted 3 months ago

About the role

AI summary

Apple's AI and Data Platforms team is seeking a Software Engineer to build scalable, resilient distributed systems for cloud-based analytics platforms and data pipelines. The role involves designing, developing, testing, and shipping new components and features, working closely with internal customers to understand requirements and improve user experience. The engineer will focus on coding, debugging, tuning production applications, and supporting end users while integrating open-source technologies with Apple's internal ecosystem.


Key Responsibilities

  • Build high-quality, scalable, and resilient distributed systems that power Apple's cloud analytics platforms and data pipelines
  • Drive development of new components and features from concept to release: design, build, test, and ship at a regular cadence
  • Work closely with internal customers to understand their requirements and workflows, and propose new features and ecosystem changes to streamline their experience of using the solutions on our platform
  • Spend the majority of engineering time writing code and designing and developing cloud applications
  • Spend the remainder tuning and debugging the codebase, supporting production applications, and supporting application end users
  • Integrate open source software with Apple’s internal ecosystem
  • Independently learn new technologies and contribute to the success of various initiatives
  • Develop solutions that integrate with proprietary and open source technologies including Kafka, Spark, Iceberg, Airflow, Presto
  • Focus on ease of use, ease of maintenance, and scalability of infrastructure solutions both on-prem and in cloud
  • Support business functions like Sales, Operations, Finance, AppleCare, Marketing and Internet Services through data analytics platforms

Requirements

  • Knowledge of BI concepts
  • Implementation experience on cloud with databases such as Snowflake or BigQuery
  • Programming experience with Python, Scala or Java
  • Experience developing highly optimized SQL, stored procedures, and semantic processes for distributed data applications
  • Bachelor’s degree in Computer Science or equivalent experience
  • 3 or more years of experience building enterprise-level data applications on distributed systems
  • Hands-on experience designing and developing cloud-based applications involving compute services, database services, RESTful APIs, ETL, queues, and notification services
  • Experience in cloud data warehousing platforms like Snowflake is highly valued
  • Hands-on knowledge of the Spark cluster-computing framework, plus Kubernetes or similar container orchestration technologies
  • Experience developing big data applications using Java, Spark, or Kafka is a huge plus
  • Understanding of fundamentals of object-oriented design, data structures, algorithm design, and problem solving
  • Cloud technology experience on platforms like AWS, Microsoft Azure, Google Cloud
  • Experience with BI and visualization tools such as Streamlit, Superset, Tableau, Business Objects, and Looker
  • Working experience generating and visualizing data insights, metrics, and KPIs
  • Experience applying basic ML models for anomaly detection, forecasting, and GenAI