"

Domain 1: Understand What AI Is and Isn’t

Separating the myths from the reality of AI

Introduction

Artificial intelligence, or AI, refers to technologies designed to perform tasks that usually require human intelligence. These tasks can include recognizing speech, making decisions, understanding language, and even generating creative content. AI does not think or feel like a human, but it mimics human decision-making processes using data and algorithms.

Whether you’re a frequent user of AI tools or just beginning to explore them, developing foundational AI literacy skills is essential. These skills will empower you to be a discerning and strategic user of AI technologies. This course is designed to help you understand how to use and shape AI tools to meet your needs, rather than being unknowingly shaped or manipulated by them.

What Is Artificial Intelligence?

At its core, AI is the science of building machines that can perform tasks that typically require human intelligence. These machines operate based on data, logic, rules, and patterns. There are different types of AI, and each serves different functions.

Key Vocabulary

  • Supervised Learning: A type of machine learning where the model is trained using labeled data (data that already includes the correct answer).
  • Generative AI: A category of AI that creates new content, such as text, images, and audio, based on patterns in data. ChatGPT, Gemini, and Copilot are examples of generative AI tools.
  • Artificial Intelligence: Technology that mimics human thinking to complete tasks.
  • Machine Learning (ML): A method for training AI using data.
  • Predictive AI: AI that forecasts outcomes using existing data.
  • Prompt Engineering: The skill of crafting effective inputs to get useful outputs from AI.
  • Hallucination: When AI generates incorrect or fabricated information.
  • Large Language Models (LLMs): AI systems trained on massive text datasets to understand and generate language.
  • Embeddings: Numerical representations of text that help AI capture relationships between concepts (see the sketch after this list).
  • Transformer Architecture: A model design that allows AI to analyze context and sequence in language data. It powers tools like ChatGPT.
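
To make one of these terms more concrete, here is a small Python sketch of the idea behind embeddings: each word becomes a list of numbers, and words with related meanings end up with similar lists. The vectors below are invented for illustration; real systems learn them from enormous amounts of text.

    import math

    # Toy embeddings: each word is a short list of numbers. Real models use
    # hundreds or thousands of dimensions learned from data; these values are
    # invented purely for illustration.
    embeddings = {
        "doctor": [0.9, 0.8, 0.1],
        "nurse":  [0.85, 0.75, 0.2],
        "guitar": [0.1, 0.2, 0.95],
    }

    def cosine_similarity(a, b):
        """Measure how closely two vectors point the same way (1.0 = identical direction)."""
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    # Related concepts score higher than unrelated ones.
    print(cosine_similarity(embeddings["doctor"], embeddings["nurse"]))   # close to 1
    print(cosine_similarity(embeddings["doctor"], embeddings["guitar"]))  # much lower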

Real-World Applications Beyond Generative AI

AI is not just about generating text or images. Many industries use non-generative AI tools in critical functions.

Healthcare Example: Predictive AI and Supervised Learning

Hospitals and insurance providers use predictive AI to forecast patient readmission risks and automate billing workflows. They also use supervised learning to detect anomalies in insurance claims. For instance, a predictive model may flag a claim as potentially fraudulent based on historical data patterns. While this increases efficiency, it can also reinforce bias if the training data reflects inequitable care patterns.
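
To show what “flagging a claim based on historical data patterns” can look like in practice, here is a minimal, hypothetical Python sketch using the open-source scikit-learn library. The claim amounts, labels, and threshold are invented; a real claims model would use many more features, far more records, and ongoing audits for bias.

    from sklearn.linear_model import LogisticRegression

    # Hypothetical historical claims: [claim amount in thousands of dollars,
    # number of prior claims from the same provider]. Labels: 1 = confirmed
    # fraudulent, 0 = legitimate.
    X_train = [
        [0.5, 1], [0.8, 2], [1.2, 1], [0.9, 3],        # legitimate examples
        [9.0, 12], [7.5, 9], [11.0, 15], [8.2, 10],    # fraudulent examples
    ]
    y_train = [0, 0, 0, 0, 1, 1, 1, 1]

    # Supervised learning: the model is fit on data that already contains the answers.
    model = LogisticRegression()
    model.fit(X_train, y_train)

    # A new claim arrives; the model estimates how likely it is to be fraudulent.
    new_claim = [[8.7, 11]]
    probability_fraud = model.predict_proba(new_claim)[0][1]
    print(f"Estimated fraud probability: {probability_fraud:.2f}")

    # The model only produces a score; deciding what happens next should stay with people.
    if probability_fraud > 0.8:
        print("Claim flagged for human review.")

Notice that the model simply reproduces whatever patterns appear in its training data, which is exactly how inequitable care patterns can be carried forward into its predictions.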

Logistics and Retail Example: Predictive AI

Retailers use predictive AI to forecast supply chain needs, optimize delivery routes, and analyze customer purchasing behavior. A company like Amazon uses predictive algorithms to route packages more efficiently based on prior data. However, this same data can lead to invasive consumer tracking or over-surveillance of warehouse employees.

Customer Service Example: Natural Language Processing and Sentiment Analysis

AI chatbots use natural language processing (NLP), often powered by supervised learning, to triage customer requests. Some systems include sentiment analysis tools that attempt to gauge customer frustration. These tools can streamline service but may misread cultural or emotional cues, leading to inaccurate support responses.
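
As a rough illustration only, the toy Python sketch below mimics one very simple form of sentiment analysis: counting positive and negative words in a message. The word lists are invented, and production systems rely on trained models rather than hand-written lists; even so, both approaches can misread sarcasm, slang, or cultural context.

    # Toy sentiment scorer built from small, invented word lists.
    NEGATIVE_WORDS = {"angry", "terrible", "refund", "broken", "worst", "frustrated"}
    POSITIVE_WORDS = {"thanks", "great", "love", "helpful", "perfect"}

    def sentiment_score(message: str) -> int:
        """Rough score: below zero suggests frustration, above zero satisfaction."""
        words = message.lower().split()
        return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)

    def triage(message: str) -> str:
        """Route apparently frustrated customers to a person instead of an automated reply."""
        if sentiment_score(message) < 0:
            return "escalate to human agent"
        return "send automated response"

    print(triage("My order arrived broken and I am frustrated"))  # escalate to human agent
    print(triage("Thanks a lot, this update is great"))           # send automated response

A message like “Thanks a lot” could just as easily be sarcastic, which is exactly the kind of cue that simple word counting misses.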

Education Example: Machine Learning and Recommendation Systems

Learning platforms incorporate machine learning and recommendation systems to customize content and flag students who may need extra support. For example, an AI in a learning management system might suggest additional resources based on prior quiz performance. If not carefully calibrated, such systems could unintentionally label students unfairly or overlook those who learn differently.
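
Here is a simplified, hypothetical sketch of that idea in Python: a rule that recommends review material whenever a quiz score falls below a threshold. The scores and cutoff are invented, and the fixed cutoff itself illustrates the calibration problem described above, since it treats every learner the same way.

    # Hypothetical quiz results for one student, by topic (percent correct).
    quiz_scores = {"fractions": 45, "decimals": 82, "percentages": 58}

    # The cutoff is a design choice; where it sits determines who gets flagged.
    REVIEW_THRESHOLD = 60

    def recommend_resources(scores: dict[str, int]) -> list[str]:
        """Suggest extra practice for any topic scored below the threshold."""
        return [f"Extra practice: {topic}" for topic, score in scores.items()
                if score < REVIEW_THRESHOLD]

    print(recommend_resources(quiz_scores))
    # ['Extra practice: fractions', 'Extra practice: percentages']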

Creative Tools Example: Generative AI in Writing and Media

Tools like Grammarly use machine learning and natural language processing to help users improve their writing, though over-relying on these tools can lead to plagiarism. Image and video generation tools like DALL-E, Adobe Firefly, and Runway ML are increasingly used in advertising, design, and entertainment. These tools allow users to bring creative visions to life quickly. However, they can also be used to mislead or manipulate. In recent elections, AI-generated political videos have circulated online, mimicking real candidates’ voices and faces to spread misinformation. As AI-generated media becomes more realistic, it’s important to critically evaluate the authenticity of what we see and share.

Why This Matters

AI tools are increasingly embedded in the platforms we use: Google Maps, Netflix, Grammarly, and LinkedIn all rely on AI. In the workplace, you may already be interacting with AI without realizing it.

Understanding how these systems function helps you:

  • Ask critical questions about how they affect decisions.
  • Identify limitations and risks.
  • Use them responsibly for your own goals.

But it also helps you protect yourself. AI systems can unintentionally reinforce harmful stereotypes or make decisions that are not inclusive. For example, a hiring algorithm developed by Amazon was discontinued after it was found to deprioritize resumes that included the word “women’s,” such as “women’s chess club captain.” Without proper oversight, AI can magnify existing inequalities (Padmanabhan et al., 2025).

Being unaware of how AI operates can also make you vulnerable to manipulation. Recommendation algorithms that prioritize click-through rates, such as those behind social media platforms like TikTok and Instagram, might feed you misleading or biased information simply because it keeps you engaged. Developing AI literacy allows you to question, critique, and redirect these systems toward more equitable and transparent outcomes.

Environmental Concerns

AI systems have an environmental cost. Training large models like LLMs requires enormous amounts of data and computational power, which in turn consumes significant energy and water. According to the United Nations Environment Programme (UNEP), the global water demand for AI could reach up to 6.6 billion cubic meters by 2027, more than half the UK’s annual consumption in 2023. Manufacturing and disposing of the powerful hardware that runs AI, such as advanced chips and servers, adds to electronic waste and depletes valuable natural resources (UNEP, 2024). As a responsible AI user, it’s important to understand these impacts and consider advocating for more environmentally sustainable ways to develop and use AI.

AI Privacy Concerns

Many AI tools collect and store the information you input, even if they don’t make it obvious. Some tools use your data to improve their systems, train future models, or serve ads. This creates risks around consent, transparency, and long-term data control. Once you submit information into an AI system—whether it’s a personal story, academic writing, or someone else’s data—you may not be able to retrieve or delete it.

Responsible AI use requires understanding how tools handle your information. The UNESCO (2021) framework emphasizes the importance of data governance and privacy, calling for clear guidelines on how data is stored, processed, and reused. The Student Guide to AI (2025) encourages users to ask critical questions:

  • What does this tool collect?
  • Who owns the data?
  • Can I opt out of data use?

Before entering personal, academic, or sensitive information into any AI tool, review its privacy policy and terms of use. If you’re working in a classroom or workplace setting, make sure you have permission to input others’ data. When in doubt, don’t upload it.

A Note on Tools

Throughout this course, you’ll encounter AI tools like ChatGPT, Gemini, and Copilot. These are examples of tools built on large language models, and their features may change over time. New tools will emerge. The examples in this course are designed to teach concepts that remain relevant, even as the tools evolve.

Remember, AI tools change quickly. Focus on understanding how they work, not memorizing tool names.

 

Reflection Prompt (Optional)

Think of an AI-powered tool you’ve used recently, even if it wasn’t obvious (like a music playlist, a GPS app, or Grammarly). What task did the AI help you complete? Did it provide accurate information and perform the way you expected?

References

American Association of Colleges and Universities & Elon University. (2025). Student Guide to Artificial Intelligence V2.0. Retrieved from https://www.studentguidetoai.org

Digital Promise. (2023). AI Literacy Framework. Retrieved from https://digitalpromise.org

Padmanabhan, B., Zhou, B., Gupta, A. K., Coronado, H., Acharya, S., & Bjarnadóttir, M. (2025). Artificial intelligence and career empowerment [Online course]. University of Maryland. Canvas LMS.

United Nations Educational, Scientific and Cultural Organization. (2021). Recommendation on the Ethics of Artificial Intelligence. Retrieved from https://unesdoc.unesco.org

United Nations Environment Programme. (2024). AI Environmental Impact Issues Note. Retrieved from https://www.unep.org


License

AI Literacy for Career & College Success Copyright © by Daniel Umana, MSEd. All Rights Reserved.