MLOps Live

MLOps Engineer

SG CONSULTING

Experience Band

5–8 years

Work Mode

Onsite

The MLOps Engineer will be responsible for designing and implementing MLOps as Code methodologies, ensuring pipelines and infrastructure are versioned and reproducible. This role requires expertise in managing deep learning orchestration platforms and optimizing model lifecycle management using MLflow. The candidate will work within a hybrid cloud environment leveraging Azure and AWS, building robust CI/CD pipelines to automate training and deployment workflows. A strong proficiency in Python and container orchestration is essential.

Position – MLOps Engineer
Required Qualifications:
* Orchestration: Deep experience with Valohai (Preferred), Kubeflow, Airflow, or AWS SageMaker Pipelines.
* Model Lifecycle: Expert-level knowledge of MLflow for tracking experiments and managing model registries.
* Cloud Proficiency: Hands-on experience with both Azure and AWS ecosystems.
* Coding: Strong proficiency in Python and shell scripting.
* Containers: Docker and container orchestration.

Key Responsibilities:
MLOps as Code & Orchestration
* Design and implement MLOps as Code methodologies: pipelines, infrastructure, and configurations must be versioned, reproducible, and automated (GitOps).
* Manage and optimize deep learning orchestration platforms (specifically Valohai, or similar tools like Kubeflow/SageMaker Pipelines) to automate training, fine-tuning, and deployment workflows.
* Standardize execution environments using Docker and ensure reproducibility across local, dev, and production environments.
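To illustrate the environment-standardization responsibility above, a pinned Dockerfile along these lines (the image tag, file names, and entry point are hypothetical, not taken from the posting) is one common way to keep local, dev, and production runs reproducible:

```dockerfile
# Pin the base image by tag (or, stricter, by digest) so every environment
# builds the same interpreter and OS layer.
FROM python:3.11-slim

# Install pinned dependencies; unpinned installs are the usual source of drift
# between a data scientist's laptop and the production cluster.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Bake the training code into the image so the orchestrator
# (Valohai, Kubeflow, SageMaker Pipelines) runs a versioned artifact.
COPY src/ /app/src/
WORKDIR /app
ENTRYPOINT ["python", "-m", "src.train"]
```

The same image can then be referenced by tag from the orchestration layer, which is what makes a run reproducible end to end.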
Central Registry & Governance
* Own the Central Model Registry strategy using MLflow. Ensure strict versioning, lineage tracking, and stage transitions (Staging to Prod) for all models.
* Enforce governance policies for model artifacts, ensuring security and compliance across the model lifecycle.
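The registry and governance duties above hinge on immutable versioning and lineage. A minimal standard-library sketch of what a lineage record captures (the model name, stage labels, and field names here are illustrative; in practice this metadata lives in MLflow's model registry):

```python
import hashlib
import json
import time


def fingerprint_artifact(payload: bytes) -> str:
    """Content hash used as an immutable identifier for a model artifact."""
    return hashlib.sha256(payload).hexdigest()


def make_lineage_record(model_name, version, stage, artifact_bytes, parent_run_id=None):
    """Minimal lineage entry: which model, which stage, and exactly which bytes."""
    return {
        "model": model_name,
        "version": version,
        "stage": stage,  # e.g. "Staging" or "Production"
        "artifact_sha256": fingerprint_artifact(artifact_bytes),
        "parent_run_id": parent_run_id,  # links the model back to the run that produced it
        "recorded_at": time.time(),
    }


# Hypothetical model; any change to the artifact bytes changes the fingerprint.
record = make_lineage_record("churn-classifier", 3, "Staging", b"model-bytes")
print(json.dumps(record, indent=2))
```

Because the hash is derived from the artifact itself, a "Staging to Prod" transition can verify that the bytes being promoted are the ones that were evaluated.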
Multi-Cloud Architecture (Azure & AWS)
* Operate in a hybrid cloud environment. You will leverage Azure (AI Foundry, OpenAI Service) and AWS (SageMaker, Bedrock, EC2/GPU instances) based on workload requirements.
* Design seamless integrations between cloud storage (S3/Blob), compute, and the orchestration layer.
* Create custom execution environments for specialized hardware (NVIDIA GPUs, TPUs).
CI/CD & Automation
* Build robust CI/CD pipelines (GitHub Actions/Azure DevOps) that trigger automatic training or deployment based on code or data changes.
* Automate the hand-off process between Data Scientists and production environments.
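A trimmed GitHub Actions workflow along these lines (branch, paths, and the launch command are illustrative assumptions, not part of the role description) shows the trigger-on-change pattern described above:

```yaml
name: retrain-on-change
on:
  push:
    branches: [main]
    paths:
      - "src/training/**"    # code changes
      - "data/manifest.json" # data-version pointer (e.g. a DVC or lakeFS file)

jobs:
  train:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Launch training pipeline
        # Hypothetical entry point; in practice this step submits the run to
        # Valohai/Kubeflow/SageMaker Pipelines and registers the result in MLflow.
        run: python -m src.training.launch
```

Filtering `on.push.paths` to both code and a data-version pointer is what lets the same pipeline react to code changes and data changes alike.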

Experience - 5–8 years
No. of positions - 2
Duration - 12 months
Location - Bangalore (WFO)
Budget - 1.05 lakh per month (LPM)
