Abstract
Artificial intelligence is increasingly embedded in learning environments, shaping how learners access information, receive feedback, and engage in problem-solving. While AI-enhanced systems promise efficiency and personalization, they also risk encouraging cognitive offloading if design choices prioritize automation over engagement. This paper examines how AI-mediated learning environments can be intentionally designed to foster critical thinking by balancing pedagogical structure, heutagogical autonomy, and cognitive integrity.
Pedagogy emphasizes structured guidance and sequencing, whereas heutagogy foregrounds learner self-determination and agency. In AI-enhanced contexts, this tension becomes a design challenge rather than a purely instructional choice. Drawing on the learning sciences and neuroscience, this paper introduces cognitive integrity as a design criterion requiring that learners retain responsibility for reasoning, reflection, and judgment.
The study presents a design framework operationalizing these concepts and applies it to multiple undergraduate and graduate course implementations on an AI-supported learning platform, including a graduate-level Strategic Sourcing course completed in Fall 2025. Preliminary observations focus on learner interaction patterns: hint usage, revision behavior, persistence under challenge, and reflective engagement.
The findings suggest that AI systems designed with optional, graduated support; delayed feedback; and revision-oriented interactions can sustain productive struggle and learner agency without undermining cognitive engagement. The paper concludes with design implications for AI-enhanced learning systems, positioning critical thinking as a design outcome rather than an assumed learner trait.