MLOps Engineer


Be part of the Day 1 team building the future of enterprise AI

kAIgentic Private Limited is a Singapore-headquartered start-up with a presence across Singapore, India, the UK, Germany, the US, and Japan. We are on a mission to help enterprises elevate outcomes by uniting human insight with technological innovation, empowering everyone to unlock real value.

Most enterprises struggle to transform because their tacit knowledge is hidden, their systems are fragmented, and their risk tolerance is low. The kAIgentic platform provides an end-to-end environment that captures knowledge, designs future workflows, and runs them as governed agentic operations. The outcome is an organization that evolves faster than technology itself.

Now, we’re assembling our engineering team in India to build the future of enterprise operations. As an MLOps Engineer, you will design and own the ML operational infrastructure that enables scalable, secure, and reliable model deployment.

 

Why Join Us?

  • High-Impact Work – Build the ML infrastructure powering enterprise-grade AI at scale.
  • Leadership & Mentorship – Direct access to architects and engineering leaders.
  • Compensation – Competitive pay with performance-linked rewards.
  • Team & Culture – Flat hierarchy, values-driven leadership, and an opportunity to shape the culture.
  • Global Exposure – Work with distributed teams across India and Singapore.
  • In-office with Flexibility – Collaborative workplace in India with flexible hours to balance focus and flow.

 

What You’ll Do

  • Build and maintain end-to-end MLOps pipelines: training → evaluation → deployment → monitoring.
  • Design scalable, secure deployment architectures for LLMs and other ML models.
  • Develop CI/CD pipelines, automated testing systems, and reproducible build workflows.
  • Manage cloud ML infrastructure using Kubernetes, Docker, and cloud services.
  • Build and maintain the scalable infrastructure required to support ML workflows.
  • Implement observability for inference performance, data drift, and model degradation.
  • Set up experiment tracking, lineage, versioning, and registries using MLflow/Metaflow/Flyte/Kubeflow.
  • Partner with ML and AI engineers to operationalize research models.
  • Contribute to long-term system architecture for AI runtime reliability.

 

What You’ll Bring

  • 4–8 years of experience in MLOps or ML systems engineering.
  • Strong experience with Kubernetes, Docker, and GPU workload orchestration.
  • Experience with MLflow, Metaflow, Flyte, Kubeflow, Helm, Ray, or similar frameworks.
  • Experience building scalable automation and data-ingestion pipelines.
  • Experience with CI/CD pipelines and IaC tools such as Terraform.
  • Experience deploying LLM or RAG-based systems is a strong plus.
  • Experience with monitoring tools (Prometheus, Grafana, ELK, Sentry).
  • Solid scripting skills (Python, Bash, Go preferred).
  • Extensive DevOps background is a plus.
  • Strong understanding of cloud infrastructure and distributed systems.

Apply Now