An innovative method evaluates assessments for AI resilience by testing whether generative AI can complete them under varying levels of prompting. Derived from the Critical Thinking Project at UQ, the process rates each assessment's strengths and vulnerabilities against the cognitive skills and inquiry values it targets, and offers insight into how well AI can mimic values such as accuracy and coherence. Educators can use these findings to redesign assessments so they are more resistant to AI completion, for example by changing cognitive verbs, rubrics, or question types.