Domain 3: Use AI Effectively
Choosing the right tools, crafting strong prompts, and evaluating results
Introduction
Being an effective AI user means more than just knowing what AI is or where it shows up. It requires practical skill: choosing the right tool, using it thoughtfully, and evaluating the output critically. These are habits that support academic success and career readiness.
Many students jump into AI tools without asking an essential question: Is this the right tool for my task? Just as you wouldn’t use a calculator to design a slide deck, you shouldn’t use a chatbot when you really need a data visualization tool. The first step in using AI effectively is understanding which tool fits your purpose. The next is knowing how to interact with that tool in ways that produce relevant, trustworthy, and ethical results.
This domain will help you develop practical strategies for selecting, using, and evaluating AI tools with confidence.
Choosing the Right AI Tool
AI tools vary widely in purpose, functionality, and underlying design. Selecting the right one starts with understanding what the task requires.
According to the Student Guide to AI (2025), generative AI tools can be grouped into several categories:
- Text-based tools: These include ChatGPT, Copilot, Gemini, and Claude. They help with writing, editing, summarizing, explaining, and brainstorming.
- Research tools: Tools like Perplexity, Consensus, and Elicit are designed to help find real, cited sources, summarize academic work, or identify knowledge gaps. Some library databases are beginning to include AI research assistants that draw on the articles and other materials in the collection being searched.
- Creative tools: DALL·E, Adobe Firefly, and Runway ML are used for producing visuals, videos, or creative assets.
- Analytical tools: Power BI, Wolfram Alpha, Tableau, and RapidMiner assist with data interpretation, visualization, and modeling.
- Embedded tools: Microsoft 365 Copilot, Zoom AI, and Google Docs AI assistants offer in-tool support for routine tasks like writing, formatting, and scheduling.
Each of these categories overlaps. For example, Microsoft Copilot can summarize text and generate visuals within Word or Excel. However, understanding the primary design of a tool helps you assess whether it’s trustworthy for a particular outcome (Student Guide to AI, 2025; Digital Promise, 2023).
The ethical dimension matters too. If your task involves academic writing, you may need a tool that can help find credible sources, not just generate polished paragraphs. If you’re working on something sensitive—like mental health or financial information—choose tools that prioritize privacy and transparency (UNESCO, 2021).
When choosing a tool, ask:
- What task am I trying to complete?
- Does this tool specialize in that task?
- Does it provide sources or data transparency?
- How does this tool treat privacy and bias?
- Will it help me build skills, or just produce content?
The tool you choose shapes the quality of your thinking. Select wisely.
Prompting Basics: Better Input Leads to Better Output
Prompting is the act of telling an AI system what to do. According to Ben Hylak’s model, endorsed by OpenAI’s Greg Brockman, an effective AI prompt includes a clear task, context, format, constraints, and role for the AI (Student Guide to AI, 2025).
Better prompts produce better results because they give the AI more structure to predict what you actually need. Prompt engineering is not about tricking the tool—it’s about communicating clearly.
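The five components above can be sketched as a reusable structure. This is a minimal illustration in Python, not tied to any particular AI tool or API; the function and field names are our own:

```python
def build_prompt(role, task, context, output_format, constraints):
    """Assemble a structured prompt from the five components named
    above: role, task, context, format, and constraints."""
    return "\n".join([
        f"You are {role}.",
        f"Task: {task}",
        f"Context: {context}",
        f"Format: {output_format}",
        f"Constraints: {constraints}",
    ])

# Example: a hypothetical resume-review request built from the template.
prompt = build_prompt(
    role="a career advisor",
    task="review my resume and suggest three improvements",
    context="I am applying for entry-level data analyst roles",
    output_format="a numbered list",
    constraints="do not invent experience I did not mention",
)
print(prompt)
```

Writing the components out separately like this makes it easy to see which part of a weak prompt is missing, and to revise one component at a time while iterating.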
Effective Prompting Tips (Student Guide to AI, 2025; UNESCO, 2021):
- Be specific: Include relevant details about your task, audience, format, and goals.
- Give context: Describe your role, who the output is for, or what you’re trying to achieve.
- Use placeholders: Create flexible prompts that can be reused (e.g., “[career title]”).
- Ask for questions: Prompt the AI to clarify before generating an output.
- Iterate: Try different phrasings or refine the output with follow-up prompts.
Prompt Comparison Table
Context: You want to improve your resume.

| Prompt Type | Example | Output Quality | Why |
| --- | --- | --- | --- |
| Weak | “Tell me how to improve my resume” [upload resume to tool, without identifiable information] | Generic, vague | No context, no details |
| Better | “Improve my resume for a [career title] with skills in [insert skills]. Highlight experience in [insert experience]. Target this for an entry-level role.” [upload resume to tool, without identifiable information] | Focused, job-specific | Gives structure and intent |
| Reflective | “You are a career advisor. Ask me clarifying questions about my experience and goals until you are 99% certain you understand my professional experience. Then draft a tailored resume for a [career title] position based on my responses. Do not fabricate any experiences/roles.” | Highly interactive | Promotes reflection and better alignment |
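The placeholder idea can be made concrete: a reusable template whose bracketed fields, such as [career title], are filled in before the prompt is sent. A minimal Python sketch using the standard library's `string.Template`; the wording paraphrases the "Better" prompt above and is not specific to any AI tool:

```python
from string import Template

# Reusable prompt with named placeholders, mirroring the bracketed
# fields (e.g., "[career title]") used in the comparison table.
RESUME_PROMPT = Template(
    "Improve my resume for a $career_title with skills in $skills. "
    "Highlight experience in $experience. "
    "Target this for an entry-level role."
)

# Fill in the placeholders for one specific use of the template.
prompt = RESUME_PROMPT.substitute(
    career_title="data analyst",
    skills="SQL and Excel",
    experience="student research projects",
)
print(prompt)
```

Because the template is separate from the values, the same prompt can be reused across different job applications by changing only the substitutions.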
Strong prompting encourages dialogue. You shape the AI’s output through iteration, revising your prompt based on what you receive, just as you would revise a draft in writing.
Evaluating Output: Accuracy, Relevance, and Hallucinations
AI tools are known to generate hallucinations: confident-sounding but false, misleading, or fabricated information. This is one of the most important risks students need to understand. AI systems do not know facts; they predict what words or phrases are likely to come next based on patterns in data. As a result, they sometimes invent citations, misquote statistics, or offer answers that seem logical but are factually wrong (Student Guide to AI, 2025; UNEP, 2024).
Examples of AI Hallucinations
- Citing a study from a journal that doesn’t exist
- Attributing a quote to the wrong historical figure
- Summarizing an article incorrectly, leaving out key limitations
- Recommending strategies based on fake job market data
This matters. Using hallucinated content in academic or workplace settings can damage your credibility and mislead others.
Evaluation Checklist (adapted from Student Guide to AI, 2025):
- Accuracy
- Can you verify claims with multiple trusted sources?
- Are statistics backed by real reports or peer-reviewed work?
- Are citations real and linked to actual documents?
- Relevance
- Does the response match your specific question or context?
- Is the tone appropriate for the audience?
- Are any parts off-topic or generic?
- Logical Consistency
- Are the arguments clear and connected?
- Are there contradictions, oversimplifications, or unsupported claims?
- Bias and Perspective
- Does the output reflect diverse viewpoints?
- Is anything missing that would normally be included?
- Could the output reinforce stereotypes or cultural assumptions?
Use these questions every time you work with AI. Evaluating output is not optional—it is a core responsibility of ethical, capable AI users (Digital Promise, 2023; UNESCO, 2021).
Career Use Cases: Applying These Skills
Education Example
You are a middle school science teacher preparing a lesson on climate change.
Weak Prompt: “Create a lesson plan on climate change for 8th graders.”
Better Prompt:
“Create a lesson plan on climate change for 8th graders, including activities that are accessible to students with diverse learning needs and cultural backgrounds. It should meet [insert state or common core standard]. The class is made up of [insert number] students. The lesson should last [insert class duration]. Incorporate different teaching strategies to support [insert specific learning needs].”
The revised prompt includes academic requirements, student context, learning goals, and time frame, leading to more usable and inclusive results.
Healthcare Example
You are preparing patient education materials for individuals recently diagnosed with hypertension.
Prompt:
“Explain hypertension in plain language to a patient who reads at a 5th-grade level. Include causes, risks, and two lifestyle tips. The audience includes [insert cultural or linguistic background if relevant].”
This prompt builds health literacy, matches patient needs, and supports equity in healthcare communication.
Business Example
You are launching a product campaign and need ad copy for digital platforms.
Prompt:
“Write three Instagram captions to promote [insert product] with verified statistics about [insert issue, e.g., plastic pollution]. The tone should be professional but engaging, and each caption should stay under 150 characters. Include a call to action.”
This prompt combines format, content expectations, tone, and audience to guide output effectively.
Final Reflection
Being able to use AI effectively is not about memorizing tools. It’s about asking better questions, evaluating what you receive, and using the tool to extend your thinking. As the Student Guide to AI (2025) reminds us, your goal is not just to get answers—but to build skills and self-awareness through responsible use.
As a future professional, your ability to prompt well and evaluate rigorously will help you:
- Avoid misinformation and academic risks
- Collaborate productively with AI across different industries
- Communicate ideas clearly and ethically
- Stay adaptive as tools evolve
Using AI is a process of learning through feedback. As you grow, your ability to use it with integrity will reflect your readiness to lead and contribute in your field.
Extended Reflection Prompt
Choose a recent moment when you used (or could have used) AI in a school, work, or personal context. Rewrite your original prompt to be more specific and reflective. Then, write 200–250 words describing:
- What changes you made and why
- How the improved prompt changed the output
- What you learned about prompting or evaluating AI
- How you might apply this skill in your future academic or career setting
References
American Association of Colleges and Universities & Elon University. (2025). Student Guide to Artificial Intelligence V2.0. https://studentguidetoai.org
Digital Promise. (2023). AI Literacy Framework. https://digitalpromise.org
Padmanabhan, B., Zhou, B., Gupta, A. K., Coronado, H., Acharya, S., & Bjarnadóttir, M. (2025). Artificial Intelligence and Career Empowerment [Online course]. University of Maryland. Canvas LMS.
United Nations Educational, Scientific and Cultural Organization. (2021). Recommendation on the Ethics of Artificial Intelligence. https://unesdoc.unesco.org
United Nations Environment Programme. (2024). AI Environmental Impact Issues Note. https://www.unep.org