"

Domain 4: Using AI Responsibly

Applying ethics, privacy, and fairness in every interaction with AI

Introduction

AI is a powerful tool, but with that power comes responsibility. As AI tools become more deeply embedded in education, the workplace, healthcare, social media, and government systems, understanding how to use them ethically is essential. This means not only knowing what AI can do, but also recognizing its risks—especially around fairness, privacy, misinformation, and environmental impact.

AI is not magic. It may seem advanced, but fundamentally, it is a system that identifies patterns in data and regurgitates what it has learned in response to prompts or problems. The results can feel intelligent, but they are ultimately rooted in patterns, probabilities, and training data. That means you are still the decision-maker. You must bring your values, judgment, and humanity to your use of AI. Literacy means knowing when AI is helpful—and when your critical thinking matters more.

Responsible use of AI isn’t just about personal integrity. It’s also about contributing to a society where technology supports equity, safety, and truth. This chapter will help you develop the awareness, reasoning, and habits needed to engage with AI tools thoughtfully and ethically.

Note: This is one of the most important chapters of the microcredential. Being responsible with AI will influence how you use every tool in every context. Take your time here—even if it goes beyond the one-hour mark.

What Makes AI Use “Responsible”?

Responsible AI use means:

  • Understanding Data Bias: All AI tools are trained on data, and that data often reflects real-world inequalities, omissions, or stereotypes. As a responsible user, you should not accept AI responses at face value. Pause and ask: Who might be missing from this data? Whose experiences are not represented? Bias in training data can lead to unfair or inaccurate outputs, especially for marginalized groups (Digital Promise, 2023; UNESCO, 2021).

But bias is only part of the issue. AI tools can also generate false or misleading information, known as hallucinations. These may include made-up quotes, fake sources, or incorrect statistics that sound believable but are not grounded in fact (Student Guide to AI, 2025).

Before using AI-generated content, evaluate the output carefully:

  • Accuracy: Can you confirm key facts with credible sources?
  • Sources: Do the citations exist and come from trusted organizations?
  • Relevance: Does the output actually address your question?
  • Timeliness: Is the information current and still valid?
Use AI as a starting point, not a final product. Your ability to guide, question, and verify is what makes you a literate and ethical AI user.
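
One way to make the "do these sources exist" check routine is to script part of it. The sketch below is a minimal, hypothetical helper (the function name and example URLs are made up for illustration) that reports whether each URL an AI tool cited actually resolves. It can catch fabricated links, but it cannot judge credibility or whether a real source supports a claim, so human review is still required.

# Minimal sketch: check whether URLs cited by an AI tool actually resolve.
# This only tests that a page exists; it says nothing about credibility.
# Function and variable names here are illustrative, not a standard API.
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

def check_citation(url: str, timeout: float = 10.0) -> str:
    """Return a short status string for one cited URL."""
    req = Request(url, headers={"User-Agent": "citation-checker/0.1"})
    try:
        with urlopen(req, timeout=timeout) as resp:
            return f"OK ({resp.status})"
    except HTTPError as e:
        return f"exists but returned HTTP {e.code}"  # e.g., 404 suggests a fabricated page
    except URLError as e:
        return f"did not resolve ({e.reason})"

if __name__ == "__main__":
    # URLs an AI assistant claimed as sources (hypothetical examples).
    cited = [
        "https://unesdoc.unesco.org",
        "https://example.com/made-up-study-2024",
    ]
    for url in cited:
        print(f"{url}: {check_citation(url)}")

A resolving URL is only the first filter: you still have to read the source and judge whether it actually supports what the AI claimed.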

  • Practicing Transparency: Transparency means being honest about when, where, and how you use AI. If you use AI to help generate content but do not acknowledge it, others may assume the work is entirely your own. This can create confusion in group projects, lead to mistrust with instructors, or cause ethical problems in academic or professional settings. It also limits your own growth as a learner.

Citing AI

If you use AI in academic work, you are expected to cite it properly. Montgomery College’s Library provides clear guidance on how to cite AI-generated content using MLA, APA, and Chicago styles. You can find this information on the Cite Sources guide (link).

Being transparent about your use of AI builds trust, supports academic integrity, and reinforces your role as an ethical and reflective user.

  • Respecting Consent and Privacy: Uploading someone else’s work, likeness, or personal data into an AI tool—even as an example—can be a violation of their privacy, with many employers outright banning this practice. Once data is submitted, it can be hard or impossible to trace where it is going. It may be stored, used for training, or even leaked. Responsible users ask: Do I have permission to share this? What does the AI company say they’ll do with it?
  • Evaluating Impact: Responsible users consider how their use of AI affects others and how it reflects on themselves. If you submit AI-generated work without acknowledgment, the issue is not just about grades. It misrepresents your thinking, effort, and voice. This can affect how instructors and classmates understand who you are, what you value, and how you approach learning.

Ask yourself:

  • Does this reflect my own thinking?
  • Am I being honest about what I contributed and what AI helped with?
  • How might this shape others’ trust in my work?

AI use also affects others. An image generator might reinforce harmful stereotypes. A chatbot reply could spread misinformation or cause harm. Thinking ahead and using AI with care is part of ethical, informed participation in any community.

  • Choosing When Not to Use AI: Some situations call for human judgment, empathy, or creativity. A condolence message, a love letter, a personal reflection, or a decision affecting someone’s future may lose meaning or cause harm if written or made by AI. Knowing when not to use AI is a sign of both literacy and maturity.

Case Study: Predictive Policing

In cities like Chicago and Los Angeles, predictive policing algorithms were used to determine which neighborhoods were most likely to experience crime. These systems relied on past arrest records and police reports (Gorner, 2020).

  • Type of AI: Predictive AI using supervised learning
  • Function: Analyzes historical crime and arrest data to forecast “hotspots” of future crime
  • Risk: Because policing has historically been heavier in Black and Brown neighborhoods, these areas were more heavily represented in the data. The AI ended up recommending even more policing in these same areas, regardless of actual crime rates. This created a feedback loop of surveillance and arrests.
  • Outcome: Public criticism, academic studies, and local organizing led some departments to suspend or terminate these tools. In 2020, LAPD stopped using PredPol after an internal audit revealed that the tool led to disproportionate targeting of specific communities.
  • What could’ve been done: AI developers and police departments could have tested the system for racial bias, cross-checked forecasts with non-policing data, and included oversight from civil rights organizations before deployment.

Takeaway: Users, whether developers, public servants, or informed citizens, must critically examine the training data, the assumptions built into algorithms, and the real-life impacts. You don’t have to be a software engineer to ask the right questions.
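
To see why this feedback loop is so stubborn, consider the toy simulation below. It is a minimal sketch, not PredPol or any real system: it assumes two neighborhoods with identical true crime rates, patrols allocated in proportion to past arrest records, and arrests that can only be recorded where patrols are sent.

# Toy simulation of a predictive-policing feedback loop (not any real system).
# Two neighborhoods have the SAME true crime rate, but neighborhood A starts
# with more recorded arrests because it was patrolled more heavily in the past.
import random

random.seed(42)

TRUE_CRIME_RATE = 0.1          # identical in both neighborhoods
arrests = {"A": 60, "B": 40}   # historical records skewed toward A
TOTAL_PATROLS = 100

for year in range(1, 6):
    total = sum(arrests.values())
    # "Predictive" step: send patrols where past arrests were recorded.
    patrols = {n: round(TOTAL_PATROLS * arrests[n] / total) for n in arrests}
    # Arrests can only be recorded where officers are present,
    # so the skewed allocation produces skewed new data.
    for n in arrests:
        new = sum(random.random() < TRUE_CRIME_RATE for _ in range(patrols[n]))
        arrests[n] += new
    share_a = arrests["A"] / sum(arrests.values())
    print(f"Year {year}: patrols={patrols}, share of records in A={share_a:.0%}")

Even though both neighborhoods offend at the same rate, the share of records pointing at A never corrects toward 50 percent; the system keeps validating its own history. Real deployments are more complicated, but Lum and Isaac (2016) documented essentially this self-reinforcing dynamic in predictive policing data.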

Recognizing and Preventing Harm: Practical Actions

As a Student:

  • Cite AI when you use it for academic work. This shows respect for your audience and keeps you in control of your learning.
  • Don’t let AI do your thinking. It can support your ideas, but if you skip the thinking process, you also skip learning.
  • Double-check everything. AI makes mistakes and can hallucinate facts. Being a responsible user means verifying information.

As a Consumer:

  • Be skeptical of content that feels emotionally manipulative or “too perfect.” AI is often used to generate clickbait, misinformation, and deepfakes.
  • Verify before you share. AI-generated media can spread fast. Responsible users take a moment to check sources.

As a Future Employee or Leader:

  • Ask tough questions about the tools your company uses. Who tested them? For whom do they work well, and for whom do they not?
  • If you notice bias or harm, speak up. Ethical workplaces need voices that question the status quo.
  • Advocate for fairness. Promote testing AI systems across different groups, especially those historically marginalized. The sketch after this list shows one simple form such testing can take.
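
What might "testing across different groups" look like in practice? A common starting point is a disaggregated evaluation: compute a model's performance separately for each group instead of reporting one overall number. The sketch below uses made-up records and group labels purely for illustration; a real audit would use real outcomes and more than one metric.

# Minimal sketch of a disaggregated evaluation: report a model's accuracy
# per demographic group instead of one overall number. Data is made up.
from collections import defaultdict

# Each record: (group label, true outcome, model's prediction) - hypothetical.
results = [
    ("group_1", 1, 1), ("group_1", 0, 0), ("group_1", 1, 1), ("group_1", 0, 1),
    ("group_2", 1, 0), ("group_2", 0, 0), ("group_2", 1, 0), ("group_2", 0, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in results:
    total[group] += 1
    correct[group] += int(truth == pred)

overall = sum(correct.values()) / sum(total.values())
print(f"Overall accuracy: {overall:.0%}")                   # can look acceptable...
for group in sorted(total):
    print(f"{group}: {correct[group] / total[group]:.0%}")  # ...while hiding gaps

Here the overall accuracy (about 62 percent) hides a 25-point gap between the two groups, which is exactly the kind of pattern worth surfacing and raising, even if you never touch the model yourself.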

Frameworks for Ethical Use

Ethical frameworks help you make sense of AI’s role in society and support you in making choices that reflect your values. In this section, you’ll explore three frameworks: the UNESCO Recommendation on the Ethics of Artificial Intelligence, the Digital Promise AI Literacy Framework, and Montgomery College’s RESPECT Framework. Each one offers a different perspective on what responsible AI use looks like.

These frameworks highlight the importance of critical thinking, transparency, and ethical decision-making. Reviewing them can help you better understand how to choose tools, use them with care, and reflect on their broader impact in academic, professional, and everyday settings.

1. UNESCO AI Ethics Framework

UNESCO’s framework, adopted by 193 member states, lays out an international agreement on what ethical, inclusive, and sustainable AI should look like. It includes 10 guiding principles and 11 policy recommendations that touch nearly every part of civic and educational life.

Key pillars that strengthen AI literacy:

  • Human Agency and Oversight: You are always the final decision-maker. AI should not make decisions for you; it should support decisions that remain yours. For students, this means using AI to support thinking and creativity, not to replace your ideas or voice.
  • Fairness and Non-Discrimination: Systems should be audited and adjusted to reduce bias. Ask: Does this tool reflect diverse communities? Could it unintentionally stereotype or exclude?
  • Transparency and Explainability: Responsible AI is understandable. If a tool doesn’t explain its process, don’t accept its outputs blindly. When AI tools are opaque, users lose control of their work, education, and personal data.
  • Data Governance and Privacy: You should understand how your data is used, stored, and possibly reused for training models. Often, this process is invisible, and once shared, you may never regain control.
  • Education and AI Literacy: The UNESCO framework explicitly calls for nations to build AI literacy in schools and higher education. You are part of this vision. It’s not enough to use AI; you must learn to question it, improve it, and hold it accountable.
  • Sustainability and Environmental Impact: Choose tools with smaller carbon footprints when possible. Challenge institutions to think about the hidden energy costs of AI use.

As a student, apply the UNESCO framework by:

  • Asking for transparency in tools you’re assigned or expected to use
  • Noticing when systems seem to treat people unfairly, and voicing concern
  • Refusing to use AI in ways that erase human experience or amplify bias
  • Advocating for inclusive datasets and culturally respectful content in classroom tools

 

2. Digital Promise AI Literacy Framework

Digital Promise’s framework is tailored for learners and educators in today’s classrooms. It outlines four pillars of AI literacy that connect deeply to practice:

  1. Understanding AI Concepts and Applications:
    1. Know the difference between predictive, generative, and analytical AI.
    2. Practice identifying AI in your daily tools: chatbots, social feeds, grammar checkers.
  2. Evaluating AI and Its Impact:
    1. Think systemically. Who made the AI tool? Who benefits from it? Who might be harmed?
    2. Use examples like facial recognition or hiring tools to trace how AI affects real lives.
  3. Practicing Ethical Use:
    1. Acknowledge when you’ve used AI in your work.
    2. Use it to deepen understanding, not to cut corners.
    3. Follow academic honesty and digital citizenship standards.
  4. Participating in AI-Driven Communities:
    1. Speak up when you notice harmful patterns in AI use at school, work, or online.
    2. Help peers navigate tools critically, and share strategies for responsible use.
    3. Advocate for AI tools that reflect your community’s needs, values, and languages.

Digital Promise sees AI literacy as more than knowing how a model works; it’s about becoming a participant in ethical technology use, design, and critique.

 

3. Montgomery College’s RESPECT Framework

Developed by Dr. Paul Miller of Montgomery College’s Center for Teaching and Learning, the RESPECT framework helps students build ethical awareness, critical thinking, and responsible AI practices across disciplines. It complements the UNESCO and Digital Promise frameworks by grounding AI literacy in everyday student use and digital citizenship.

Each letter in RESPECT represents a core skill or mindset:

  • Research Skills: Check facts, compare sources, and verify AI outputs.
  • Ethical Use: Be honest and transparent about how you use AI. Avoid plagiarism and follow academic guidelines.
  • Safety Online: Recognize misinformation and algorithmic manipulation.
  • Privacy: Understand what personal data AI tools collect and how it might be stored or reused.
  • Effective Communication: Use AI to help express ideas clearly, but review for tone, audience, and accuracy.
  • Critical Thinking: Don’t take AI outputs at face value—analyze them for bias, quality, and logic.
  • Technology Basics: Learn how AI tools work, including their strengths, weaknesses, and limits.

The RESPECT framework invites you to go beyond passive use. It positions you as an active, ethical participant in shaping how AI is used in learning, work, and civic life.

Reviewing these kinds of frameworks helps you develop a clearer picture of what responsible, informed AI use looks like. They offer guidance for evaluating tools, making ethical decisions, and using AI in ways that reflect your values and goals. As your AI literacy grows, let these principles shape how you engage with AI—not just as a user, but as someone who influences its role in the world around you.

AI in Your Life: Thinking Ethically

Being AI literate means recognizing where AI is used, thinking critically about its role, and using it in ways that reflect your values. Without that awareness, you risk:

  • Being influenced by algorithms designed for engagement, not truth
  • Reinforcing bias or exclusion without realizing it
  • Sharing false or misleading information
  • Replacing your unique voice and thinking with automation

Let’s revisit common tools and explore what ethical use looks like:

  • Grammarly / Copilot / ChatGPT
    • Use these tools to spark ideas, refine structure, or improve clarity. Do not rely on them to think or write for you. Your ideas, voice, and reasoning should always come through.
    • You might say: “I used AI to help organize my outline, but all final ideas and wording are my own.”
    • Evaluate the output before using it:
      Ask yourself:

      • Is the information accurate? Can I verify key facts or terms?
      • Are any sources cited? Do they actually exist and come from credible places?
      • Does the tone or suggestion match what my assignment or audience expects?
      • Does the response align with course rubrics, academic integrity policies, or professional standards?
    • When in doubt, check with your instructor or refer to the college’s academic honesty policies.
    • Using AI well means staying in control, thinking critically, and being honest about your process.
  • Image/Video Generators
    • Avoid stereotyping. For example, don’t use AI to create images of “professional people” that only include one race or gender.
    • Add disclaimers to visual content: “This image was generated using DALL-E and is not a real person.”
    • Don’t use likenesses or cultural symbols without context or permission.
  • Social Media and Recommender Systems
    • Actively curate your feed. Follow creators from different backgrounds and perspectives.
    • Don’t repost AI-generated content that you haven’t verified.
    • Understand the algorithm: The more you interact with a type of content, the more it feeds you the same. Break the loop (the sketch after this list shows the dynamic in miniature).
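
To make the loop concrete, here is a toy sketch of an engagement-driven recommender. The topic names and the weight-update rule are invented for illustration; real platforms are far more sophisticated, but the basic dynamic is the same: each interaction raises the weight of similar content in the next feed.

# Toy engagement loop: clicking a topic raises its weight, so the "feed"
# shows more of it. Topic names and the update rule are made up.
import random

random.seed(7)
weights = {"sports": 1.0, "news": 1.0, "cooking": 1.0, "outrage": 1.0}

def next_feed(weights, k=5):
    """Sample k items, with probability proportional to current weights."""
    topics = list(weights)
    return random.choices(topics, weights=[weights[t] for t in topics], k=k)

for step in range(4):
    feed = next_feed(weights)
    print(f"Feed {step}: {feed}")
    # Simulated user: always clicks 'outrage' when shown; each click
    # multiplies that topic's weight, narrowing the next feed.
    for item in feed:
        if item == "outrage":
            weights[item] *= 1.5

After a few rounds, one topic dominates the feed even though the user never searched for it. Breaking the loop means deliberately clicking, following, and searching outside the pattern the system has learned from you.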

By applying frameworks like UNESCO’s (considering dignity, inclusivity, and transparency) and Digital Promise’s (focusing on participation and empowerment), you can begin to use AI not just safely, but justly. You become someone who not only consumes AI, but helps shape its future.

Conclusion: Be Curious. Be Critical. Be Literate.

You are entering a world where AI will influence nearly every aspect of life. The goal is not to avoid AI; it’s to use it with clarity, confidence, and care. Ask questions. Push back. Experiment. Share knowledge. Responsible AI use is a lifelong skill, and it starts here.

Reflection Prompt (Optional)

Choose an AI tool you regularly use. Write a short ethical use policy for yourself: When will you use it? When won’t you? How will you explain your use to others?

References

Digital Promise. (2023). AI Literacy Framework. https://digitalpromise.org

Gorner, J. (2020, April 21). LAPD ends use of controversial predictive policing program. Los Angeles Times. https://www.latimes.com/california/story/2020-04-21/lapd-ends-use-of-predictive-policing-program

Lum, K., & Isaac, W. (2016). To predict and serve? Significance, 13(5), 14–19.

Miller, P. D. (2023). From RESPECT to co-creation: Integrating AI into higher education course design through backward design. Montgomery College.

Padmanabhan, B., Zhou, B., Gupta, A. K., Coronado, H., Acharya, S., & Bjarnadóttir, M. (2025). Artificial intelligence and career empowerment [Online course]. University of Maryland. Canvas LMS.

Student Guide to AI. (2025). Student Guide to Artificial Intelligence V2.0. https://studentguidetoai.org

UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. https://unesdoc.unesco.org

United Nations Environment Programme. (2024). AI Environmental Impact Issues Note. https://www.unep.org

 

