About the role
Staff DevOps Engineer at Illumina, a biotech company, responsible for optimizing deployment and management of global cloud-based informatics assets. The role involves automating CI/CD pipelines, managing AWS environments, and collaborating with global teams to ensure service continuity and security.
Biotech | Full-time | General
Key Responsibilities
- Automate and implement highly available, resilient platforms in the cloud.
- Implement and maintain workflows and procedures that ensure consistency and auditability in a managed infrastructure.
- Maintain and extend the code base for an on-market SaaS genomics application, extending its capabilities, improving robustness and quality, and optimizing performance.
- Write and peer-review automation code with an eye to creating a flexible, scalable codebase.
- Develop solutions that optimize cloud operations for cost management, time to market, security and privacy, and delivery of worldwide capability.
- Troubleshoot incidents and create tools to resolve them.
- Work within, and enforce, security and regulatory compliance procedures.
- Document and communicate procedures, configurations, and standards effectively.
- Apply state-of-the-art cloud best practices (e.g. stateless applications; automated analytical logging, diagnostics, and optimization; automated 'one-click' creation of new geographic AWS enterprise application instances; application autoscaling; and automated testing of global cloud instances) to deliver on operational objectives.
- Be part of our follow-the-sun on-call rotation.
Requirements
- Programming skills in Python, Go, Ruby, Bash, and/or other scripting languages.
- Hands-on experience building enterprise AI solutions using LLMs, RAG architectures, and AWS Bedrock, with strong expertise in cloud-native development, integrations, and production-grade AI platforms.
- Expertise in administering, configuring, optimizing, and monitoring Linux at scale. CentOS, Ubuntu, and Amazon Linux are our base distributions.
- Solid understanding of networking, network routing, and troubleshooting.
- Skilled in implementing monitoring tools such as Grafana, the ELK stack, CloudWatch, CloudTrail, and others.
- Experience with public (AWS or Azure) or private (OpenStack, etc.) cloud-based deployment and support.
- Experience with modern container systems such as Kubernetes, ECS, EKS, and Fargate preferred.
- Experience with one or more automation tools such as Ansible, Chef, Puppet, Salt, Terraform, CloudFormation.
- The ability to pick up new technologies quickly and deep-dive rapidly.
- Experience with large-scale data processing tools such as Hadoop and Elasticsearch.
- A strong preference for collaborative teamwork and work/knowledge sharing.
- Typically requires a minimum of 8 years of related experience with a Bachelor's degree; or 6 years and a Master's degree.