Better Prompts, Better People: Rethinking How We Lead

When it comes to supervising or delegating, it’s easy to assume we’re being clear, only to be surprised or frustrated by the outcome. The task doesn’t get done the way we expected, or the result misses the mark entirely. Our first reaction? Often, it’s to take it back and do it ourselves.

I’ve been there. As a student leader, my instinct used to be: “Forget it—I’ll just do it.” Instead of offering clarity or grace, I’d grab the responsibility again, convinced I’d been perfectly clear the first time. But the truth is, if the outcome wasn’t right, maybe the input wasn’t either. This became even clearer to me the more I used ChatGPT. Like a lot of folks, I was both impressed and confused by it at first. The earliest versions of the tool had their quirks. You’d type in a prompt, and the response might be vaguely useful—or wildly off. As the platform evolved (and as I got better at using it), I started noticing a trend: the better my prompts were, the better the outcomes became.

Now, ChatGPT knows me and my context so well that I can type, “Write a bio,” and she (yes, she—I’ve gifted her feminine pronouns because she seems to know everything) gives me a nearly perfect snapshot of who I am professionally, academically, and personally. It feels like magic—but it’s really just pattern recognition and context cues.

There are two strategies I always lean on when using ChatGPT that have unexpectedly become relevant to how I delegate as a leader:

  1. Start with questions. I’ll often say, “Here’s what I’m trying to do, but ask me some questions before you begin.” That back-and-forth helps the system fill in the gaps.
  2. Invite critique. After I get a draft, I’ll ask, “What’s missing? What doesn’t land? What assumptions am I making?”

It’s made me wonder: how often do we give our student staff or colleagues the space to do either of those things? How often do we actually invite clarifying questions, rather than just expecting them to magically know?

There have been plenty of times when ChatGPT gave me a completely off-base response. But when I reviewed my prompt, I usually found the problem wasn’t the AI—it was me. I’d left out key details. I didn’t fully explain a switch in context. I relied too much on jargon. Or maybe I was multitasking and didn’t proofread. And yet, I keep working with the tool. I try again. I give it more detail. I show patience.

Am I giving this non-sentient, emotionally neutral chatbot more grace and clarity than I offer to actual people? Does my patience stop short of the students who are often learning in real time, juggling multiple roles, and relying on me for the very context I failed to give?

That question stung a bit.

The truth is, good delegation is not just about being efficient—it’s about being clear, curious, and collaborative. If we want students to grow, we have to treat them like partners in the process. We need to slow down, provide full context, and encourage them to ask for what they need—before assuming they should already know.

So, next time you assign a task, imagine you’re prompting ChatGPT:

  • Did you give the full context?
  • Did you invite questions?
  • Did you check for clarity before expecting perfection?
  • Did you allow for re-dos and learning, or did you jump to correction?

Delegation isn’t just about getting things done. It’s about building capacity in others—and that means we have to be intentional with our inputs. Whether it’s a chatbot or a student employee, the principle is the same: clarity + curiosity = better outcomes.

Let’s lead like we prompt: with more patience, more detail, and more room for growth.