About the role
Data Scientist intern at Infineon, a semiconductor company. The role involves monitoring and maintaining deployed AI models, developing new use cases, enhancing data pipelines, and supporting deployment and scaling. The intern will gain hands-on experience in machine learning, deep learning, and AI workflow optimization.
IDM · Full-time · BE
Key Responsibilities
- Establish and maintain monitoring metrics for deployed AI models to ensure accuracy, reliability, and stability in production.
- Identify performance deviations, investigate root causes, and implement corrective actions (e.g., re-training, calibration, fine-tuning).
- Support ideation, data exploration, feasibility assessment, and prototyping for new AI use cases aligned with business needs.
- Assist in selecting suitable approaches (ML/DL/GenAI) and conducting experiments to validate technical feasibility and KPI impact.
- Contribute to transitioning prototypes toward pilot/production readiness through reproducible workflows and documentation.
- Refine datasets and improve data preprocessing pipelines to ensure robust and repeatable model performance.
- Support data quality checks, feature engineering, augmentation (where applicable), and labeling/ground-truth alignment.
- Benchmark models against KPIs and validate model outputs to ensure adherence to business requirements and risk mitigation.
- Document evaluation outcomes, assumptions, limitations, and recommendations.
- Work closely with cross-functional teams to sustain current AI applications and align new developments with business needs.
- Contribute to internal documentation, demos, and knowledge-sharing to improve team efficiency and reuse.
- Explore and propose tools, techniques, and emerging best practices to improve model performance, maintainability, and development efficiency.
Requirements
- Pursuing a Master's or Bachelor's degree in Business Analytics, Computer Science, Data Science, or a related field.
- Basic to intermediate knowledge of machine learning fundamentals (e.g., supervised learning, model evaluation, overfitting, cross-validation).
- Proficiency in Python.
- Working knowledge of at least one deep learning framework (TensorFlow and/or PyTorch).
- Basic knowledge of SQL for data extraction and analysis.
- Familiarity with Git or similar version control systems.
- Exposure to compute environments involving GPU usage and/or HPC concepts (e.g., job scheduling, resource constraints, performance considerations).
- Awareness of containerization and orchestration concepts, including Kubernetes (basic understanding acceptable).
- Preferred Internship Period: Jun 26 - Dec 26