Software Development Engineer in Test

Integrated Resources Inc – Orlando, FL (Onsite) – Contractor
Job Title: Software Development Engineer in Test (SDET)
Location: Orlando, FL / Anaheim, CA / Seattle, WA (Hybrid – 2–3 days onsite, subject to change)
Duration: 22 months

Pay Rate: $76/hr – $89.30/hr on W2
 
Position Summary:
The Software Development Engineer in Test (SDET) will be responsible for designing and maintaining automated testing frameworks and quality assurance processes for AI/ML and Generative AI platforms. The role emphasizes testing model accuracy, reliability, and safety while ensuring robust automation and integration throughout the AI/ML lifecycle.
This position sits within the Gen AI organization under Technology & Digital, collaborating closely with cross-functional application teams.
The SDET will report to the Delivery Lead, Gen AI / ML QA, Guardrails, & Evaluations. Approximately 50% of the role involves hands-on programming.
 
Key Responsibilities
AI/ML Testing & Quality Assurance
  • Design, build, and maintain automated testing frameworks for backend and frontend components of AI/ML and GenAI platforms.
  • Develop and execute comprehensive test strategies to validate generative AI models, ML pipelines, and knowledge base integrations.
  • Implement automated testing for ML performance metrics (accuracy, precision, recall, F1-score, custom business metrics).
  • Create automated validation systems for model training, inference, and data preprocessing stages.
  • Conduct A/B testing to compare model performance and generative AI output quality.
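As an illustration of the metric-validation work described above, the following is a minimal pure-Python sketch of an automated quality gate that computes accuracy, precision, recall, and F1 for a binary classifier and flags any metric below a required threshold. Function names and threshold values are illustrative assumptions, not part of this posting.

```python
def classification_metrics(y_true, y_pred):
    """Return accuracy, precision, recall, and F1 for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

def metric_gate(y_true, y_pred, thresholds):
    """Flag any metric that falls below its required threshold.

    Returns (all metrics, dict of failing metrics) so a CI job can
    fail the build when the failures dict is non-empty.
    """
    metrics = classification_metrics(y_true, y_pred)
    failures = {k: v for k, v in metrics.items()
                if v < thresholds.get(k, 0.0)}
    return metrics, failures
```

In practice these computations would typically be delegated to a library such as scikit-learn; the hand-rolled version here only shows the shape of an automated gate.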
 
AI/ML Safety & Evaluation
  • Evaluate generative AI and ML outputs for hallucinations, bias, and factual accuracy using tools like Arize and Langfuse.
  • Implement and test guardrails ensuring safe, compliant, and responsible AI model behavior.
  • Conduct adversarial and red-teaming exercises to identify vulnerabilities in AI/ML systems.
  • Develop monitoring systems to detect model drift, data drift, and performance degradation.
  • Ensure compliance with fairness, interpretability, and explainability requirements.
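The drift-monitoring responsibility above can be sketched with the Population Stability Index (PSI), which compares a feature's production distribution against its training baseline. This is a minimal sketch assuming ten equal-width bins and the conventional 0.2 alert threshold; names and the crude zero-count smoothing are assumptions for illustration.

```python
import math

def psi(baseline, current, bins=10):
    """Population Stability Index between two samples of a numeric feature.

    Bins are equal-width over the baseline's range; zero counts are
    smoothed to one observation so the log ratio stays defined (which
    means the smoothed fractions may not sum exactly to 1).
    """
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(max(int((x - lo) / width), 0), bins - 1)
            counts[idx] += 1
        return [max(c, 1) / len(sample) for c in counts]

    b, c = fractions(baseline), fractions(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

def drift_alert(baseline, current, threshold=0.2):
    """True when PSI exceeds the alert threshold (a common rule of thumb)."""
    return psi(baseline, current) > threshold
```

A monitoring job would run such a check per feature on a schedule and page the team when the alert fires; production systems usually layer this onto an observability platform rather than hand-rolled code.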
 
Collaboration & Integration
  • Collaborate with Data Scientists, ML Engineers, Developers, and Product Managers to refine requirements and enhance testing coverage.
  • Integrate testing into ML deployment pipelines and model lifecycle management.
  • Work with the System Integration Testing and Performance Engineering teams on load and performance testing.
  • Partner with Data Engineering to validate data pipelines, feature stores, and ML training infrastructure.
 
Testing Excellence & Process Improvement
  • Conduct functional, regression, and performance testing for AI/ML workloads.
  • Participate in code reviews and agile planning sessions to improve testability and overall code quality.
  • Develop and maintain detailed test plans, cases, and automation scripts.
  • Enhance CI/CD processes, automate model versioning and rollback validation, and improve test coverage continuously.
  • Lead defect documentation, root cause analysis, and contribute to debugging complex AI/ML issues.
 
Innovation & Best Practices
  • Research and adopt industry best practices in AI/ML testing, automation, and responsible AI deployment.
  • Build and validate guardrails to prevent unsafe, biased, or inaccurate outputs.
  • Define and monitor new KPIs for AI/ML testing and platform performance.
  • Drive continuous process improvements across AI/ML QA systems and tools.
 
Core Development Responsibilities
  • Define and validate acceptance criteria using BDD/Gherkin.
  • Design and implement test automation code, tools, unit tests, and harnesses for Windows and/or mobile applications.
  • Manage and enhance CI tools such as Jenkins, Xcode Server, and VS Build Server.
  • Participate in technical design, specification creation, and code reviews.
  • Contribute to feature development and automation-related programming efforts.
  • Generate and maintain technical and process documentation.
  • Experience with iOS development or Quality Engineering is required.
 
Required Qualifications & Experience
Education
  • Bachelor’s Degree in Computer Science, Software, Electrical, or Electronics Engineering, or equivalent professional experience.
 
Professional Experience
  • Proven experience as an SDET with exposure to AI/ML testing or as a Machine Learning/AI Engineer.
  • Strong background in testing traditional ML models (supervised, unsupervised, reinforcement learning) and generative AI systems.
  • Experience with ML model validation, statistical testing, and cross-validation techniques.
  • Hands-on experience deploying, monitoring, and maintaining ML models in production.
 
Technical Skills
  • Advanced proficiency in Python and libraries such as PyTorch, TensorFlow, Pandas, Scikit-learn, NumPy, Matplotlib.
  • Experience with ML frameworks for model development, training, and evaluation.
  • Familiarity with cloud ML platforms: GCP (Vertex AI, BigQuery ML), Azure (Azure ML, Cognitive Services), or AWS (SageMaker, Bedrock).
  • Proficiency with automation tools, CI/CD pipelines, and MLOps practices.
  • Experience with RESTful API testing tools (e.g., Postman) for ML model endpoints.
  • Knowledge of SQL, database testing, and data validation for ML datasets.
  • Experience with Docker and Kubernetes for ML model deployment and testing.
AI/ML-Specific Knowledge
  • Understanding of model lifecycle management, versioning, and A/B testing frameworks.
  • Familiarity with feature engineering validation, data quality assessment, and ML pipeline testing.
  • Experience with interpretability tools (e.g., SHAP, LIME) and observability tools (e.g., MLflow, Weights & Biases).
  • Understanding of AI/ML security, privacy, and compliance testing.
  • Experience with distributed ML training and inference testing.
 
Preferred Qualifications
  • Passion for improving AI/ML reliability, safety, and ethics.
  • Strong collaboration skills in agile, fast-paced environments.
  • Experience building evaluation frameworks for generative AI outputs and responsible AI systems.
 
About the Team
The Gen AI Platform is an advanced system designed to host and orchestrate models, agents, MCP servers, and knowledge bases. It enables users to visualize, analyze, and collaborate on data-driven insights through multi-model AI capabilities—supporting strategic business intelligence and predictive analytics.

Job Snapshot
  • Employee Type: Contractor
  • Location: Orlando, FL (Onsite)
  • Job Type: Information Technology
  • Experience: Not Specified
  • Date Posted: 10/11/2025
  • Job ID: 25-62720
