About Awign Expert:
Awign Expert is an enterprise-focused platform that helps businesses hire, assess, and manage highly skilled resources for gig-based projects. We give our Experts a gateway to work with large-scale enterprises and build a freelance/consulting career. We are a newly launched business division of Awign, a pioneer and currently the largest player in India's gig economy. Here at Awign, we are changing how the world works, with a vision to uplift millions of careers.
About the client:
Our client is a global IT services and software development company delivering tailored technology solutions. Specializing in mobile app development, web development, enterprise software, and digital transformation, they serve diverse industries with a focus on quality and innovation. Leveraging modern technologies such as AI, cloud computing, and blockchain, they craft scalable solutions. Committed to agile methodologies and client-centric delivery, they prioritize transparency and collaboration, ensuring efficient project execution and establishing themselves as a trusted technology partner for businesses in the digital era.
Role - AI Tester
Experience - 3-6 years
Work Location - Remote
JD -
We are looking for an experienced AI Tester with 3-6 years of expertise in software testing and AI/ML testing. The ideal candidate will validate the performance, accuracy, and ethical compliance of AI systems while ensuring high-quality deliverables.
Responsibilities:
• Develop and execute test plans for AI/ML models, ensuring functionality, performance, and reliability.
• Validate data integrity, model outputs, and preprocessing pipelines.
• Perform functional, regression, and performance testing for AI models, focusing on bias, fairness, and scalability.
• Test AI integrations in end-to-end systems and workflows.
• Build and maintain automation frameworks for AI testing using tools like TensorFlow Testing Library and PyTest (a minimal sketch of such a check follows this list).
• Analyze key metrics such as accuracy, latency, and resource utilization.
• Collaborate with cross-functional teams to enhance model quality and deployment processes.
• Document test cases, scenarios, and results, reporting issues using tools like Jira.
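For illustration only, below is a minimal PyTest-style sketch of the kind of automated model check described in the responsibilities above. It assumes a scikit-learn-style model exposing predict(); the dataset, accuracy threshold, and latency budget are hypothetical placeholders, not requirements of this role or of any client system.

# Minimal PyTest sketch of automated AI-model checks (illustrative only).
# The dataset, thresholds, and fixture below are hypothetical placeholders.
import time

import pytest
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

@pytest.fixture(scope="module")
def model_and_data():
    # Stand-in for loading a trained model and a held-out evaluation set.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return model, X_test, y_test

def test_accuracy_meets_threshold(model_and_data):
    # Functional/quality gate: accuracy on held-out data must clear a floor.
    model, X_test, y_test = model_and_data
    acc = accuracy_score(y_test, model.predict(X_test))
    assert acc >= 0.80, f"Accuracy {acc:.3f} is below the agreed threshold"

def test_batch_prediction_latency(model_and_data):
    # Performance gate: batch inference must stay within a time budget.
    model, X_test, _ = model_and_data
    start = time.perf_counter()
    model.predict(X_test)
    elapsed = time.perf_counter() - start
    assert elapsed < 1.0, f"Batch prediction took {elapsed:.3f}s, exceeding the budget"

Checks of this kind are typically wired into CI so that model regressions are caught before deployment.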
Skills Required:
• Proficiency in Python, AI/ML frameworks (e.g., TensorFlow, PyTorch), and testing tools.
• Solid understanding of software testing principles and AI/ML algorithms.
• Experience with model evaluation metrics (e.g., F1 Score, ROC-AUC) and cloud platforms (AWS, Azure, GCP); see the short metric sketch after this list.
• Strong communication, analytical, and problem-solving skills.
• Preferred: Experience with MLOps, ethical AI testing, and tools like SHAP or LIME.
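As a quick illustration of the evaluation metrics named above, the sketch below computes F1 and ROC-AUC with scikit-learn; the label and score arrays are dummy values, not output from any real model.

# Illustrative computation of F1 and ROC-AUC on dummy predictions.
from sklearn.metrics import f1_score, roc_auc_score

y_true = [0, 1, 1, 0, 1, 0, 1, 1]                     # ground-truth labels
y_pred = [0, 1, 0, 0, 1, 0, 1, 1]                     # hard class predictions
y_score = [0.2, 0.9, 0.4, 0.1, 0.8, 0.3, 0.7, 0.95]   # predicted probabilities

print(f"F1 score: {f1_score(y_true, y_pred):.3f}")        # harmonic mean of precision and recall
print(f"ROC-AUC:  {roc_auc_score(y_true, y_score):.3f}")  # ranking quality of the scores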