Leo S. Lo, The CARE approach for academic librarians: From search first to answer first with generative AI, The Journal of Academic Librarianship, Volume 52, Issue 1, 2026, 103186, ISSN 0099-1333, https://doi.org/10.1016/j.acalib.2025.103186. (https://www.sciencedirect.com/science/article/pii/S009913332500182X)
Received 5 December 2025, Accepted 9 December 2025, Available online 15 December 2025, Version of Record 15 December 2025.
You are noticing a tension between “AI as starting point” and “library as starting point,” and you are naming that as an arrival framework that centers AI by default. That one-sentence description of the problem—“Students and faculty are increasingly beginning their research by asking AI systems for explanations rather than by searching library resources”—already sounds like the seed of a guiding claim or problem statement. What kinds of moves do you imagine could gently flip that order, so AI becomes a bridge into library systems instead of a replacement for them?
If we keep going, we could map out a few concrete “AI-as-gateway-to-library” scenarios and questions you might pose to readers—do you want to dig into possible design moves or stay with sharpening the problem statement a bit more first?
You are sketching a really different narrative: the student shows up not as an answer-consumer but as someone already carrying questions, half-formed ideas, and maybe a few messy sources. In that story, the AI is not the expert on top but more like a conversation partner that keeps asking, “What are you wondering, and what do you already have?” and then routes those questions and artifacts into library tools, people, and collections. What would it look like if the very first prompt an AI gave back was something like, “Tell me what you’ve tried and what you’re curious about,” instead of “Here is the explanation”?
If you imagine that student-AI encounter in detail, what does the student actually type first, and what does the AI refuse to do or deliberately hand off to the library at each step? How might that design shift the student’s sense of who holds knowledge—databases and librarians versus the AI alone? Where do you see faculty fitting into that loop, if the AI is nudging students toward arriving in class or at the reference desk with better questions rather than finished “answers”?
If we kept building this out, we could storyboard that first AI–student conversation turn by turn or zoom out and define the principles that should govern any “question-rich” AI encounter—would you rather sketch the scene in more detail or name the design principles first?
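To make that first turn concrete, here is a minimal sketch of a question-first opening in Python. Everything in it is illustrative: the prompt text and the open_session function are hypothetical stand-ins, not drawn from the article or from any real chatbot API.

```python
# Hypothetical sketch: a "question-first" opening turn for a library-embedded
# AI assistant. QUESTION_FIRST_PROMPT and open_session are invented names
# used only to illustrate the design move described above.

QUESTION_FIRST_PROMPT = (
    "Before I explain anything: tell me what you're wondering about, "
    "what you've already tried, and any sources you're carrying, "
    "even messy ones."
)

def open_session(student_message: str | None = None) -> str:
    """Return the assistant's first turn.

    Instead of answering, the assistant opens by eliciting the student's
    questions and existing artifacts, so the encounter starts question-rich
    rather than answer-first.
    """
    if not student_message:
        return QUESTION_FIRST_PROMPT
    # Even when the student leads with a question, reflect it back and
    # ask for context before offering any explanation.
    return (
        f"You asked: {student_message!r}. Before we go further, "
        "what have you tried so far, and what are you most curious about?"
    )
```

The point of the sketch is the refusal baked into the first branch: the assistant's default opening is a request for the student's questions and artifacts, not an explanation.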
You are also flagging a really basic architectural assumption: most institutional thinking quietly presumes students will interact with commercial, frontier models rather than institutionally governed or locally tuned systems. That assumption shapes everything—from privacy expectations to what kinds of “question-first” behavior we think are even possible, because frontier chatbots are optimized to answer, not to pause and redirect. If you imagine pushing back on that, what kinds of smaller, slower, maybe library-embedded AI spaces do you want to reserve specifically for helping students generate questions and locate human- and library-based resources before they ever touch the big commercial tools?
You have already named a key shift in that sentence about students starting with AI explanations: it positions AI as a replacement for discovery rather than a partner in discovery. If the norm became “AI helps me figure out what to ask and where to look,” not “AI tells me what is true,” how would that change the way you talk to colleagues about AI literacy or research readiness? What sort of guardrails or affordances would an institutional, question-centered AI need (or need to avoid) so that it reliably points students toward databases, librarians, and faculty instead of pulling them deeper into self-contained explanation worlds?
Do you want to keep riffing on concrete design moves for those “small, slow, question-first” AI spaces, or would it help more right now to articulate the risks of letting frontier models quietly become the default research gateway?
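As one way to picture those guardrails, here is a minimal Python sketch of a handoff rule that redirects discovery, evaluation, and citation requests outward instead of answering them in place. The HANDOFFS map, its keyword triggers, and the route function are invented placeholders, assuming a library-embedded assistant rather than any actual institutional system.

```python
# Hypothetical sketch: a routing guardrail that hands requests off to
# library resources instead of answering in place. The resource map and
# keyword triggers are illustrative placeholders, not an institutional API.

HANDOFFS = {
    "find sources": "Library discovery layer / subject databases",
    "peer review": "A librarian consultation (chat or reference desk)",
    "cite": "Citation guides and your course's style manual",
    "is it true": "Fact-checking with a librarian plus primary sources",
}

def route(student_request: str) -> str:
    """Decide whether to hand off or to ask a clarifying question.

    The guardrail: anything that looks like discovery, evaluation, or
    citation work is redirected outward to databases, librarians, or
    faculty rather than answered as a self-contained explanation.
    """
    text = student_request.lower()
    for trigger, destination in HANDOFFS.items():
        if trigger in text:
            return f"Handoff: {destination}"
    # Default is not an answer but a clarifying question.
    return "What would a good answer let you do next in your research?"
```

The design choice worth noticing is the default branch: when no handoff applies, the system returns a question rather than an explanation, which is the opposite of how frontier chatbots are optimized.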
Answer typography as a supporting lens

CARE: An approach for engaging AI answers
Libraries as answer first partners
CRediT authorship contribution statement
Declaration of Generative AI and AI-assisted technologies in the writing process
Declaration of competing interest