Seatrium

Data Architect

Advanced Manufacturing & Electronics · Seatrium (SG) Pte. Ltd. · Onsite · Posted 2 weeks ago

About the role


The Data Architect is responsible for designing and implementing an enterprise data platform across all layers, from ingestion to consumption. This role involves defining data standards, ensuring data quality and governance, and enabling AI/ML use cases through reliable data pipelines and curated datasets. The architect collaborates with business, engineering, and infrastructure teams to deliver scalable, secure, and supportable data solutions.

Industrial · Onsite

Key Responsibilities

  • Design data architecture and implement enterprise data platform across ingestion, storage, transformation, serving, and consumption layers
  • Define and drive standards for data modelling, metadata, lineage, ownership, security, quality, governance, and compliance (PDPA)
  • Collaborate with business stakeholders and cross-functional teams to define problems, identify key metrics, and present data-driven recommendations
  • Design scalable data integration patterns across enterprise applications, operational systems, industrial systems, and analytics environments
  • Design, develop, and maintain trusted, reusable, and efficient data pipelines, curated datasets, and enterprise data assets
  • Partner with AI Engineers to ensure the data platform supports AI/ML use cases, including historical data retention, feature-ready datasets, trusted training data, reproducibility, and consistency between training and production usage
  • Work closely with engineering and infrastructure teams to ensure solutions are practical, scalable, secure, and supportable
  • Provide insights to business units through advanced analytics and visualization
  • Contribute to roadmap planning, architecture governance, technology evaluation, and continuous improvement of the enterprise data platform

Requirements

  • Bachelor’s degree in Computer Science, Information Technology, Computer Engineering, Data Engineering, Information Systems, or a related discipline
  • Minimum 8 years of experience in data architecture, data engineering, data platform development, or similar enterprise data roles
  • Experience working with ERP, operational, industrial, or engineering data sources
  • Exposure to data foundations supporting AI/ML, predictive analytics, or intelligent automation
  • Strong background in data architecture, platform architecture, lead data engineering, or other senior hands-on technical roles
  • Proven experience designing enterprise-scale data platforms, integration patterns, and analytical data structures
  • Hands-on experience building or guiding implementation of data pipelines, platform components, and data engineering solutions
  • Solid understanding of enterprise data architecture, including data warehouse, data lake/lakehouse, dimensional modelling, metadata, lineage, governance, and security
  • Strong knowledge of data engineering and integration practices such as ETL/ELT, batch processing, CDC, APIs, event-driven integration, schema evolution, observability, idempotency, and reprocessing
  • Good understanding of data platform requirements for AI/ML workflows, including historical data management, feature engineering, training data preparation, and reproducibility
  • Strong communication and stakeholder management skills, with the ability to collaborate across architecture, engineering, infrastructure, analytics, and AI teams
  • Ability to manage multiple projects and deliverables effectively in a dynamic environment
  • Experience with data warehousing platforms such as Snowflake, BigQuery, Redshift, or Synapse
  • Experience with data lake/lakehouse ecosystems such as Databricks, Delta Lake, Apache Iceberg, or Hudi
  • Experience with data integration/orchestration tools such as Airflow, Dagster, dbt, Kafka, or Spark
  • Familiarity with enterprise databases and integration technologies such as SQL Server, Oracle, PostgreSQL, MySQL, SAP HANA, CDC tools, and API/event-based integration
  • Experience working with cloud or hybrid environments such as AWS, Azure, or GCP
  • Exposure to data governance, cataloging, lineage, and data quality tools