MLOps Engineer
Careers Integrated Resources Inc
Houston, TX (Onsite)
Contractor
Job Title: MLOps Engineer
Job Location: Houston, TX, 77002 (Hybrid - 4 Days a week in office)
Job Contract: 8 Months+ contract (with possible extension)
Note: W2 only
Job Description:
- Must-have: Hands-on experience with AWS, Microsoft Azure, and Snowflake in building or supporting production ML/data platforms.
Job Summary:
- We are seeking an MLOps Engineer to design, deploy, monitor, and maintain machine learning solutions in production across AWS, Microsoft Azure, and Snowflake environments. This role will partner with data scientists and cloud teams to operationalize ML models, automate pipelines, and build reliable, secure, and scalable ML platforms.
- The ideal candidate has strong experience in the end-to-end ML lifecycle, cloud-native deployment, CI/CD automation, model monitoring, and production data pipelines, with hands-on expertise in AWS, Azure, and Snowflake.
Key Responsibilities:
- Design and implement end-to-end ML pipelines for data ingestion, feature engineering, model training, validation, deployment, and monitoring.
- Deploy and manage ML models in production across AWS, Azure, and Snowflake-based ecosystems.
- Build batch and real-time inference pipelines using cloud-native and platform-native services.
- Automate model packaging, testing, release, and rollback using CI/CD best practices.
- Integrate ML workflows with services such as AWS SageMaker, AWS Lambda, Azure Machine Learning, Azure Data Factory, and Snowflake.
- Build and maintain orchestration workflows using tools such as Airflow, Azure Data Factory, or similar platforms.
- Implement experiment tracking, model registry, and model governance processes.
- Monitor model accuracy, drift, latency, throughput, pipeline failures, and infrastructure usage.
- Establish deployment strategies such as canary, shadow, blue-green, and rollback mechanisms.
- Collaborate with cross-functional teams to move models from research to production.
- Ensure security, compliance, traceability, and access control for models and data across cloud environments.
- Optimize platform performance, reliability, and cost across AWS, Azure, and Snowflake.
- Document architecture, deployment standards, and operational procedures.
Required Qualifications:
- Master’s degree or PhD in Computer Science, Computer Engineering, or a similar field
- Five or more years of relevant experience
- Proven experience in MLOps, ML engineering, platform engineering, or DevOps
- Strong hands-on experience with AWS, Microsoft Azure, and Snowflake
- Strong programming skills in Python and SQL
- Experience deploying and managing ML models in production
- Experience with cloud ML services such as AWS SageMaker and Azure Machine Learning
- Experience building data pipelines and integrating with Snowflake
- Knowledge of CI/CD pipelines, infrastructure automation, and model versioning
- Experience with containerization and orchestration tools such as Docker and Kubernetes
- Experience with workflow orchestration tools such as Airflow, Azure Data Factory, or similar
- Familiarity with model monitoring, logging, alerting, and observability
- Solid understanding of data engineering concepts, APIs, and distributed processing
- Strong troubleshooting, communication, and cross-team collaboration skills
Preferred Qualifications:
- Experience with Snowflake Cortex AI, Snowpark, or ML workloads in Snowflake
- Experience with AWS Bedrock, Azure OpenAI, or production LLM workflows
- Experience with real-time inference, event-driven pipelines, and serverless architectures
- Familiarity with feature stores, vector databases, and RAG-based systems
- Experience with Terraform, AWS CloudFormation, or Azure infrastructure-as-code tools
- Understanding of security, compliance, and governance requirements for regulated environments
- Experience with production A/B testing, shadow deployment, and rollback strategies