Head Space

Calm productivity for academics

AI for academics: From prompting to professional practice

The landscape of AI for academics has expanded rapidly in the last couple of years, and many colleagues have incorporated it into their daily work. Almost every academic I speak to has tried ChatGPT, Gemini, or Claude, and some use these tools regularly to summarise papers, draft emails, or generate teaching materials. When they use AI in these ways, they generally report good outcomes.

But here’s a pattern I keep observing: even academics who use AI frequently and consider themselves reasonably AI-literate tend to use it in ad hoc, disconnected ways. Each session starts from scratch. There’s no accumulation of context, no systematic approach to different types of work, no clear sense of when AI use adds genuine value versus when it’s just performative productivity (for example, generating more words than necessary simply because it’s easy to generate words). The result is that AI remains an occasional tool, used when you remember to, rather than an integrated professional practice, and its potential value remains frustratingly out of reach for most academics.

Contrast this with the colleagues who seem most comfortable with AI. They aren’t necessarily the ones who use it most frequently or who’ve mastered the fanciest prompting techniques. They’re the ones who’ve moved beyond ad hoc use into something more systematic: a different relationship with these tools that extends beyond episodic engagement.

This distinction matters because generative AI is already embedded throughout academic life. The question isn’t whether AI will become part of academic work; it already is. The question is whether you’ll develop the capabilities to engage with it systematically and meaningfully, or whether you’ll continue with ad hoc approaches that deliver occasional value but never quite integrate into your actual workflow.

The progression that most approaches miss

Most introductions to AI for academics focus on prompting techniques. Learn to write better prompts, the logic goes, and you’ll get better outputs. This isn’t wrong: structured prompts do produce better results than naive queries. But it misses something essential about how academics actually need to work with AI. A narrow focus on prompting keeps AI use isolated from the broader systems of work that define scholarship.

The prompting trap

Consider how you develop any professional capability. When you learned to teach, you didn’t just master techniques for delivering information. You probably also developed judgement about which approaches would best serve your students, how to adapt in the moment, and which pedagogical choices aligned with your values. The techniques mattered, but the judgement about when and how to apply them might have mattered more.

Working with AI requires similar development:

  • Practical skills: Knowing how to write effective prompts
  • Systematic approaches: Engaging AI for complex scholarly challenges
  • Professional judgement: Discerning meaningful engagement from superficial use

Most courses and tutorials stop at the first level. They teach prompting techniques and maybe some practical applications, then assume you’ll figure out the rest. But the progression from practical skill to professional judgement doesn’t happen automatically. It requires deliberate development through three distinct stages.

Three stages of AI engagement

To be honest, there’s no firm evidence for these three stages; they’re simply the ones I’ve come up with through my own experience of running workshops, doing presentations, and getting feedback from colleagues. While the stages may not be grounded in the literature, the value is in recognising that there’s a progression for developing AI use across contexts and scales. And understanding these stages helps you recognise where you are and what developing the next level of capability requires.

Stage 1: Substitution

Let’s start with a familiar example: Padlet. In the first stage you use Padlet as a drop-in activity during a lecture; students post responses to a prompt, you acknowledge a few contributions, then continue with your planned session exactly as before. Your teaching design, assessment approach, and preparation process remain unchanged; you’ve simply digitised an existing activity.

In the context of exploring AI use in your practice, you begin with a specific, bounded task: summarise this paper, generate quiz questions on this topic, draft an email with this information. You write a prompt, AI generates a response, you evaluate whether it’s useful, and you move on. This is episodic engagement where AI serves as a sophisticated information processor. You’ve replaced one way of doing a fairly routine task with another, but everything else around the task remains the same.

What substitution is good for:

  • Saving time on routine tasks
  • Working through large volumes of information
  • Freeing cognitive resources for more demanding work

What substitution can’t do:

  • Help you think through complex problems
  • Build genuine competence in unfamiliar areas
  • Develop extended understanding that rigorous scholarly work requires

Practical takeaway: Substitution is genuinely valuable for many tasks. The problem isn’t using substitution. The problem is when you stop there and never develop more sophisticated engagement patterns.

Stage 2: Adaptation

Let’s continue with the more familiar example of using Padlet. You might design pre-session Padlets to surface student questions, and reshape your teaching based on the patterns you observe in their responses rather than simply following a predetermined plan. Your preparation process has changed: you’re now creating thoughtful prompts and building sessions responsively, and students experience different participation patterns, with contributions happening across multiple modes.

In this stage, AI becomes intellectual infrastructure that supports rigorous inquiry. Rather than asking for quick outputs, you engage in extended conversations that help you decompose complex problems, evaluate conflicting evidence, build working competence in adjacent fields, or test the assumptions underlying your arguments. The engagement is now a sustained collaboration on intellectually demanding challenges rather than ad hoc, episodic interactions to solve bounded problems.

Example of the difference: When you’re trying to understand whether a particular methodological approach suits your research question, you don’t just ask AI to explain the methodology. You engage in an iterative dialogue where you:

  1. Explore multiple ways to frame the problem
  2. Understand how different framings shape what you’ll discover
  3. Test whether your evidence actually supports your emerging conclusions
  4. Surface assumptions you hadn’t examined

This takes more time than substitution, but it extends your intellectual capacity in ways that quick summaries don’t.

Practical takeaway: Next time you face a complex methodological or conceptual challenge, try a 30-minute extended dialogue with AI instead of asking for a quick summary. Document what you learn about the problem itself, not just what AI tells you.

Stage 3: Transformation

In the transformation stage of the Padlet example, you might now maintain ongoing Padlets across the entire term as a persistent space for student thinking, fundamentally restructuring how participation and formative assessment work in your course. Your professional identity has shifted: you now think about teaching as orchestrating learning across synchronous and asynchronous modes, your course planning assumes this infrastructure exists, and your judgement about which kinds of thinking work better in which mode shapes your pedagogical decisions.

This is the stage where AI becomes persistently integrated into your scholarly work, not as a tool you occasionally deploy but as ongoing cognitive infrastructure. You’re no longer thinking about individual interactions with AI but about how AI integration shapes your professional practice over time.

This requires developing three capabilities:

  • Context sovereignty: Controlling the professional context AI accesses so it understands your work without repetitive prompting
  • Cultivating taste: Professional judgement about when and how AI engagement serves your goals
  • Evaluative capacity: Assessing the quality of collaborative outputs across different contexts

Important note: These three stages aren’t rigid categories. You might use substitution for content creation while engaging at the adaptation level with AI for research design, all while gradually developing transformation-level judgement about both. But understanding the stages helps you recognise where you are and what developing the next level requires.

Now that we’ve established these three stages, let’s explore how two critical capabilities develop across this progression: context sovereignty and cultivating taste. Understanding how these capabilities manifest at each stage helps you recognise not just where you are in your own progression, but what specific practices will help move you forward.

Context sovereignty across the stages

Context sovereignty, controlling the professional context that AI accesses, develops progressively as you move through the stages. What starts as a frustrating bottleneck at the substitution stage becomes a powerful capability at transformation.

Stage 1 (Substitution): The repetitive prompting bottleneck

At the substitution stage, context sovereignty doesn’t exist. Every conversation with AI requires establishing context from scratch. You need to explain:

  • Your field and expertise
  • Your current project
  • Your methodological preferences
  • The specific question you’re exploring

By the time you’ve provided enough context for AI to offer useful responses, you’ve written several paragraphs. And at the next session, you’ll need to do it all again.

Compare these two prompts to see why context matters:

  • Generic: “Help me analyse this interview data”
  • With context: “As a qualitative researcher with expertise in grounded theory, working on a study about academic workload in UK universities, with interest in how systemic pressures shape individual practices, help me analyse this interview data”

The second prompt will produce genuinely better outcomes; the first will produce a fairly generic, superficial output. But at the substitution stage, you’re either providing all that context repeatedly or accepting generic responses. Neither is optimal.

This repetitive context-setting creates a fundamental limitation. As AI systems become more capable, the limiting factor isn’t what they can do but whether they understand your professional context well enough to do it meaningfully, and generate outputs that create real value.

Stage 2 (Adaptation): Beginning systematic context curation

At the adaptation stage, you start addressing the bottleneck systematically. You’re no longer accepting repetitive prompting as inevitable; instead, you’re capturing professional context in reusable forms.

What this looks like in practice:

  • You create a document describing your expertise, current projects, and methodological preferences
  • You copy this context into AI sessions when working on substantive challenges
  • You notice the quality difference when AI has this context versus when it doesn’t
  • You begin refining what context matters most for different types of work

What you might capture:

  • Your specialisations and areas of expertise
  • Current projects and research questions
  • Methodological preferences and commitments
  • CPD trajectory and areas of growing expertise
  • Typical scholarly challenges you face

The key insight in this stage is that context is where personal meaning lives. When you control the context that AI has access to, you control how it interprets your requests and shapes its responses. This isn’t about making AI remember facts about you. It’s about establishing the intellectual territory within which human-AI collaboration happens.

At this stage, you might still be copying and pasting context manually, or you might have set up some Custom Instructions or Projects with saved documentation. It’s better than starting from scratch every time, but it’s not yet systematic infrastructure. You’re adapting your practice around context provision, but you haven’t transformed it.

Practical takeaway: Create a one-page professional context document this week. Use it in your next three substantial AI sessions. Document what changes about the quality of engagement when you don’t have to re-establish context each time.
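
To make that takeaway concrete, here’s a minimal sketch (in Python, as one illustrative approach among many) of keeping a professional context document in a file and prepending it to task-specific requests, so you aren’t re-typing the same paragraphs each session. The file name, headings, and helper function are my own assumptions, not a prescribed format.

```python
# context_prompt.py - a minimal sketch of reusing a professional context document.
# File name, headings, and function names are illustrative assumptions.
from pathlib import Path

# A one-page context document you write once and refine over time.
CONTEXT_FILE = Path("professional_context.md")

EXAMPLE_CONTEXT = """\
## Who I am
Qualitative researcher with expertise in grounded theory.

## Current project
A study of academic workload in UK universities.

## Methodological commitments
Interested in how systemic pressures shape individual practices.
"""


def build_prompt(task: str) -> str:
    """Prepend the saved professional context to a task-specific request."""
    context = CONTEXT_FILE.read_text() if CONTEXT_FILE.exists() else EXAMPLE_CONTEXT
    return f"{context}\n## Request\n{task}"


if __name__ == "__main__":
    # Paste the result into whichever chat interface or API you use.
    print(build_prompt("Help me analyse this interview data."))
```

The mechanism is trivial; the value lies in the document itself, and in noticing how much better the responses become when it travels with every request.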

Stage 3 (Transformation): Context sovereignty

At the transformation stage, context sovereignty becomes foundational infrastructure. You’re no longer manually providing context because you’ve built systematic approaches to context that work across all your AI engagements.

What makes this level different

To be honest, while the technical infrastructure for full context sovereignty exists now, I think this is probably beyond what’s easily manageable for most academics. I believe that I’m skirting around the edges of this capability through a combination of software development tools (e.g. Cursor, VS Code) with AI integration (e.g. Claude Code) and API access to multiple language models, including local and frontier models. And protocols like the Model Context Protocol allow you to create machine-readable professional contexts that work with any AI provider: local or commercial models, current or future systems. This means that you’re not locked into one company’s system or capabilities. Your professional context remains yours, portable across whatever language models you choose to use.
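
As a more concrete illustration of that portability, here’s a minimal sketch of exposing a professional context document through the Model Context Protocol, assuming the official MCP Python SDK (the `mcp` package) and its FastMCP helper. The server name, resource URI, and file path are my own assumptions, and the broader setup (installing the SDK, registering the server with a client) is left out.

```python
# context_server.py - a sketch of serving professional context over MCP.
# Assumes the official MCP Python SDK (pip install mcp); names are illustrative.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("professional-context")


@mcp.resource("context://profile")
def professional_context() -> str:
    """Return the professional context document for any MCP-aware client to read."""
    return Path("professional_context.md").read_text()


if __name__ == "__main__":
    # Runs over stdio by default; any MCP-compatible client, local or commercial,
    # can be pointed at this server, so the context stays with you.
    mcp.run()
```

The point isn’t the handful of lines of code; it’s that the context lives in a file and a protocol you control, rather than inside one provider’s chat history.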

Context sovereignty at this stage positions AI collaboration fundamentally differently. You’re not learning to use a specific tool that might be obsolete next year. And you’re not sitting around hoping that companies like OpenAI or Anthropic build features into their chatbots that you might like to use. Instead, you’re developing the capability to maintain persistent cognitive partnerships that evolve as both AI technology and your practice (and context) develop.

The work of curating and maintaining professional context therefore becomes an ongoing professional practice, similar to maintaining your teaching resources and research projects, or updating your CV. But unlike those activities, context curation has compounding returns. The more sophisticated your professional context becomes, the more sophisticated your AI engagements can be.

The philosophical shift: When AI lacks persistent understanding of your professional context, what distinguishes your use of AI from anyone else’s? Generic prompting produces generic results. But when AI collaborations are shaped by your accumulated expertise, your specific scholarly concerns, your methodological commitments, and your personal preferences and values, then your professional context shapes collaborative outcomes into something that’s truly unique.

Your value doesn’t come from what only humans can do. It comes from how you orchestrate human-AI collaboration toward meaningful ends through systematic context sovereignty. At the transformation stage, context sovereignty isn’t something you think about for individual sessions. It’s infrastructure that all your AI work builds on, evolving alongside you.

Cultivating taste across the stages

Think about developing taste in wine, design, or music. Beginners might be able to tell good from terrible, but experts perceive subtle distinctions that novices might miss entirely. That expertise doesn’t come from memorising rules but instead develops through repeated practice, careful attention, and accumulated judgement that’s often based on reflection about what works in different contexts.

Professional taste with AI follows the same pattern. If context sovereignty is about what AI knows about your work, cultivating taste is about developing judgement about what constitutes meaningful engagement. Like context sovereignty, taste develops progressively across the three stages, from basic evaluation to sophisticated professional judgement. For me, the value of this concept became clear when I realised that, while AI was enabling me to significantly increase my output, I had growing concerns about the value of that output. Just because something can exist doesn’t mean it should exist.

Let’s explore how taste, or evaluative judgement, can be developed at each stage.

Stage 1 (Substitution): Basic evaluation

At the substitution stage, taste is minimal. Your evaluation is straightforward: Did this work? Did I get what I needed?

You’re asking questions like:

  • Did the summary capture the key points?
  • Are these quiz questions usable?
  • Is this email draft acceptable?

This isn’t sophisticated judgement; it’s binary assessment. The AI output either meets your immediate need or it doesn’t. If it doesn’t, you revise your prompt and try again. If it does, you move on.

To be clear, there’s nothing wrong with this level of evaluation for substitution-level tasks. But it doesn’t develop the judgement you need for more sophisticated human-AI collaboration. You’re not yet thinking about whether AI engagement was the right choice, whether it produced meaningful versus superficial value, or how this interaction compares to working independently.

Stage 2 (Adaptation): Developing judgement through reflection

At the adaptation stage, taste begins developing. You’re no longer just asking “did this work?” You’re noticing patterns about when and how AI engagement produces genuine value versus when it feels hollow or counterproductive.

What’s different: You’re paying attention to your experience with AI engagement. After sessions, you notice:

  • Some interactions left you thinking more clearly about a problem
  • Others produced outputs you could use but didn’t deepen your understanding
  • Some prompts led to genuine dialogue that shaped your thinking
  • Others felt like you were just extracting content

You begin recognising the difference between:

  • Using AI because it’s available vs using AI because doing so produces genuine value
  • Quick substitution that suffices vs adaptation-level engagement that’s worth the time investment
  • AI contributions that enhance your thinking vs those that short-circuit necessary intellectual work

The writing example: Consider academic writing. AI can help at multiple points: generating initial outlines, providing feedback on argument structure, suggesting clearer phrasing, identifying assumptions you haven’t examined, challenging your reasoning from different theoretical perspectives.

At the adaptation stage, you’re learning through practice that all of these can be valuable, but they’re not always valuable:

  • Sometimes you need to sit with unclear thinking until it crystallises
  • Sometimes feedback too early prevents you from developing your own analytical voice
  • Sometimes the struggle to articulate an argument is where you discover what you actually think

This is where you’re clearly starting to develop taste: the ability to tell the difference between using AI for valuable work and using it because it’s there. This isn’t rule-following. It’s accumulated pattern recognition about what serves your work in different contexts, and no instruction manual or best-practice guide can tell you what to do in each situation.

How taste develops at this stage:

  1. Doing the work with AI across different contexts
  2. Paying attention to your experience (meaningful vs superficial)
  3. Reflecting on why particular engagements helped while others felt hollow
  4. Building intuitions about effective engagement through accumulated practice

The reason taste matters is that AI introduces genuine ambiguity into professional practice. There are no universal rules for “correct” AI use. The same task might call for different levels of engagement depending on your goals, timeline, current understanding, and what you’re trying to achieve. Taste is what helps you navigate this ambiguity.

Practical takeaway: For the next week, after each AI interaction, spend two minutes writing: “This engagement was meaningful because…” or “This felt superficial because…”. You’re not looking for universal rules. You’re developing awareness of what meaningful engagement feels like in your specific work contexts.

Stage 3 (Transformation): Professional judgement shapes practice

At the transformation stage, taste becomes professional-level judgement that shapes all your decisions about AI engagement. You’re no longer consciously evaluating each interaction; you’ve developed intuitions that operate at the level of professional practice.

What this looks like: The academics who’ve reached this stage don’t follow rules about AI use. They’ve built professional judgement about meaningful engagement. They know:

  • When a twenty-minute dialogue with AI will help them think more clearly about a methodological choice
  • When they need to work through that choice independently first
  • When AI’s challenge to their argument reveals genuine weakness versus when it’s misunderstanding their position
  • Which tasks benefit from AI collaboration and which require working alone
  • How different types of AI engagement serve different scholarly purposes

Taste at this level is contextual and sophisticated. You don’t have universal “AI taste” and you don’t get to a point where you’ve “mastered AI use”; you have developed judgement in specific domains:

  • You might have strong taste about using AI for literature review but still be developing taste for AI-assisted writing
  • The taste you develop for research work might differ from taste for teaching applications
  • Taste develops through sustained engagement with particular kinds of challenges

Why policies miss the point: This is why institutional policies that try to prescribe “correct” AI use often miss the mark. They’re attempting to create rules where judgement is required. Policies can establish boundaries, e.g. what violates academic integrity, where transparency is required, and so on. But they can’t tell you when engaging AI will serve your scholarly thinking versus when it will undermine it. That requires cultivated judgement that develops through practice, not compliance with regulations. This is why effective AI for academics programmes focus on developing judgement rather than teaching rules.

At the transformation stage, your taste shapes how you approach all your work. You’re not thinking “should I use AI for this?” as a conscious decision each time. Your professional judgement about meaningful engagement is integrated into how you work. You’ve developed the capability to orchestrate human-AI collaboration toward meaningful ends, guided by sophisticated judgement about what serves your scholarly purposes.

Why AI for academics requires this progression

Academic work has particular characteristics that make this progression from practical skill to professional practice especially important. The complexity of academic work is precisely why AI for academics requires more sophisticated frameworks than general-purpose prompting advice.

Multiple timescales

Academic work operates across different timescales simultaneously:

  • Immediate tasks: Responding to emails, preparing next week’s lecture, reviewing a paper by Friday
  • Medium-term projects: Developing a research proposal over several months, writing a paper across a semester
  • Long-term development: Building expertise in a new area, developing a research programme over years, establishing yourself within scholarly conversations

AI can support work at all these timescales, but it requires different approaches at each level:

  • Quick substitution helps with immediate tasks
  • Adaptation supports medium-term projects
  • Transformation enables long-term intellectual development

Without understanding this progression, you’re likely to use AI primarily for immediate tasks while missing how it could support deeper work.

Breadth and depth requirements

Academic work requires both breadth and depth:

  • Breadth: Working knowledge across multiple domains to engage in interdisciplinary conversations, supervise students in adjacent fields, or review work that draws on literature you don’t regularly follow
  • Depth: Genuine depth in your primary areas to make original contributions, evaluate methodological choices, or recognise when findings challenge existing frameworks

AI can help with both, but requires different approaches:

  • Context sovereignty becomes particularly valuable for depth work, where AI needs to understand your accumulated expertise to engage meaningfully
  • Taste becomes crucial for breadth work, where you need judgement about building sufficient understanding without claiming expertise you don’t have

Intellectual risk

Academic work involves significant intellectual risk:

  • Exploring new research directions where you don’t yet know what matters
  • Developing arguments by following lines of thinking that might not pan out
  • Learning new methodologies by struggling with concepts until they click

AI can support this exploratory work, but it can also short-circuit it if you’re not careful. The risk is treating AI as providing answers when what you actually need is help thinking through questions. This is where adaptation approaches and cultivated taste become essential: not to avoid AI, but to engage it in ways that support genuine inquiry rather than replace it.

Collaborative complexity

Academic work is increasingly characterised by collaborative complexity:

  • Working with colleagues across disciplines, institutions, and countries
  • Supervising students with diverse backgrounds and needs
  • Contributing to conversations involving multiple theoretical traditions and methodological approaches

This collaborative complexity is exactly where AI could be most valuable, by helping you navigate different disciplinary languages, bridge theoretical frameworks, or understand methodological approaches outside your training. But realising this value requires transformation-level capabilities. You need to maintain sophisticated engagements across projects, and to recognise when AI helps with collaboration versus when it introduces friction. And you need to assess collaborative outputs in contexts where you can’t fully evaluate them yourself.

Summary: These characteristics of academic work (multiple timescales, breadth and depth, intellectual risk, collaborative complexity) are why developing transformation-level capabilities with AI matters more than just learning prompting techniques. The techniques help with immediate tasks, but transformation enables you to work differently across all aspects of your academic life.

From knowing about AI to using AI

Reading about how AI can enhance scholarship is one thing. Actually integrating it into your daily academic practice is another. The gap between understanding AI’s potential and developing the capabilities to realise that potential in your own work can feel overwhelming, particularly when you’re already stretched thin.

This isn’t about lacking motivation or technical ability. It’s about the fundamental difference between conceptual knowledge and practical competence. You can read extensively about AI’s possibilities without developing the judgement to recognise when AI will serve your work and when it won’t. You can understand the structure of effective prompting without cultivating the taste to distinguish meaningful from superficial interaction. And you can appreciate AI’s potential for scholarship without building the systematic approaches that make that potential sustainable in your work.

Starting where you are

The path from reading to practice doesn’t require a complete overhaul of your academic workflow. It begins with recognising that developing AI capabilities happens in stages, each building on the previous one.

If you’re completely new to AI, your first step is simply having structured conversations with it about your work. Not as a replacement for your thinking, but as a cognitive partner that catches different errors than you do. This means understanding what AI actually is: a language-based cognitive extension, not a magic oracle or a threat to academic integrity. Start by using AI for a single routine task where efficiency creates headspace: summarising a paper you need to understand quickly, drafting routine emails, or creating initial outlines for teaching materials. The goal isn’t to transform your practice immediately, but to build confidence through successful completion of bounded tasks where you’re simply substituting one thing for another.

If you’re already using AI occasionally, the next stage involves developing more systematic approaches. This means moving beyond “does this output work?” to “is this engagement serving my scholarly goals?” You’ll need to experiment with different ways of structuring your prompts, compare AI assistance across different types of reading (monitoring literature versus building genuine competence), and begin to develop judgement about what constitutes meaningful engagement versus superficial efficiency. At this stage, you’re not just extracting information; you’re starting to adapt your practice, using AI to think with, to develop arguments, and to explore alternative interpretations.

If you’re a confident AI user, the transformation stage involves restructuring your workflows to integrate AI as infrastructure rather than as a supplement to what you already do. This requires actively controlling what professional context AI can access about your work, your standards, your preferences, and your values and beliefs. It means cultivating professional taste: the judgement to recognise what meaningful engagement looks like in different domains of your practice. Research, teaching, and administration each demand different approaches, and taste helps you navigate these differences without rigid rules.

Building capability systematically

The challenge many academics face isn’t finding time to learn about AI; it’s finding a structured path through the overwhelming volume of advice, tools, and possibilities. What helps is a framework that acknowledges where you’re starting from and provides clear progression through identifiable stages.

This is why I’ve updated the original Generative AI for Academics online course, which is now an 11-lesson course that takes you from complete novice to confident professional practitioner. The course is built around three progressive stages: Substitution (adding AI to existing workflows), Adaptation (reshaping practice for rigorous scholarly inquiry), and Transformation (integrating AI as infrastructural professional practice).

Rather than teaching you isolated skills, the course develops integrated capabilities:

You’ll learn effective prompting not through memorising templates, but by understanding the RGID framework (Role, Goal, Instruct, Discuss) and recognising which level of engagement (extraction, conversation, or context) serves different tasks. And you’ll develop this through structured practice, not passive reading.
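
As an illustration only (the course’s own materials may frame it differently), here’s a sketch of how the four RGID components might be assembled into a single prompt; the wording and helper function are hypothetical.

```python
# rgid_prompt.py - an illustrative sketch of a Role, Goal, Instruct, Discuss prompt.
# The example wording is my own, not the course's template.

def rgid_prompt(role: str, goal: str, instruct: str, discuss: str) -> str:
    """Assemble a prompt from the four RGID components."""
    return (
        f"Role: {role}\n"
        f"Goal: {goal}\n"
        f"Instructions: {instruct}\n"
        f"Discussion: {discuss}"
    )


print(rgid_prompt(
    role="You are an experienced reviewer of qualitative research methods.",
    goal="Help me decide whether grounded theory suits my research question.",
    instruct="Ask clarifying questions before offering any recommendations.",
    discuss="Challenge my assumptions and flag where my evidence seems thin.",
))
```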

You’ll engage with AI-assisted reading that goes beyond simple summarisation to genuine scholarly engagement. This means understanding when summary suffices (e.g. routine monitoring of evidence) versus when you need deeper reading (e.g. when you need to build competence), how to use AI as a reading companion for complex arguments, and how to maintain critical thinking while using AI assistance.

You’ll create academic content more effectively across teaching, presentations, and scholarly materials. But crucially, you’ll develop judgement about when efficiency serves your goals and when deeper engagement matters. Not all content creation deserves the same investment, and developing taste helps you allocate your attention appropriately.

Throughout the course, you’ll build context sovereignty and professional taste: the two capabilities that distinguish sustainable AI integration from episodic tool use. Context sovereignty means actively controlling what AI knows about your work, your standards, your preferences, and your values. Professional taste means developing judgement about what constitutes meaningful engagement in different domains of your practice.

The self-directed course takes 15-20 hours over 6-8 weeks, with multiple entry points depending on your current capabilities. If you’re completely new to AI, start with the foundations. If you’re already using AI occasionally but want more systematic approaches, begin with Module 2 on Adaptation. The structure is flexible enough to meet you where you are while providing clear progression toward where you want to be.

What makes structured development different

You might wonder: why not just experiment on your own? Many academics do, with varying degrees of success. The difference with structured capability development is that you’re not just accumulating tips and tricks; you’re building systematic approaches that evolve with your practice and with AI’s continuing development.

Each lesson includes practical activities you complete within the context of your actual work. You’re not learning about AI in the abstract; you’re developing capabilities through direct application to your everyday academic tasks. The activities are designed to be completable in 5-30 minutes, making them feasible even in busy academic schedules.

More importantly, the course builds evaluative judgement: the ability to assess not just whether AI outputs are usable, but whether your engagement with AI is serving your scholarly goals. This metacognitive capability, knowing what meaningful engagement looks like in different contexts, is what enables sustainable integration rather than episodic experimentation.

Taking the next step

If the vision of AI-enhanced scholarship resonates with you, and you want to move from being curious about AI’s potential to actually realising it in your daily practice, the question is: how do you bridge that gap systematically?

The updated Generative AI for Academics course provides that structured path. It won’t solve every challenge you face in academic work, but it will provide a framework that helps you develop the capabilities for integrating AI thoughtfully into your practice. Not as another burden to manage, but as a cognitive partner that helps create the headspace you need for your most meaningful work. Sign up for the newsletter below to be notified when the course is available.

The integration of AI into scholarship isn’t inevitable or automatic. It requires intentional development of new capabilities, systematic approaches, and ongoing reflection. But for academics willing to invest in that development, the potential rewards are significant: more impactful research, more effective teaching, broader engagement, and ultimately, more sustainable academic careers.

The future of your scholarship isn’t about AI versus traditional approaches. It’s about developing the judgement to know when AI serves your work and when it doesn’t, the taste to recognise meaningful engagement, and the systematic practices that make AI integration sustainable. That development begins with a single step, and that step might be choosing to move from reading about AI to actually using it.


Scholar: Making sense of our complex world.

My upcoming book teaches systematic thinking for navigating complex decisions in the workplace, family choices, and community issues, with no academic training required.

Get book updates and more practical tools: Join the newsletter

