Guidance for the Safe and Effective Use of Artificial Intelligence in California Public Schools
Learning With AI, Learning About AI
Please note: This document is meant to provide helpful guidance to our partners in education and is in no way required to be followed. It is intended to be informative rather than prescriptive. The information is merely exemplary, and compliance with any information or guidance in this document is not mandatory. (See Education Code Section 33308.5.) While this document is guidance, the requirements set forth in federal and state law, including, but not limited to, the Family Educational Rights and Privacy Act (FERPA), the Children’s Online Privacy Protection Act (COPPA), and the California Education Code and Business and Professions Code are legal mandates that local educational agencies (LEAs) must follow regardless of this guidance.
Introduction
This guidance was developed by the California Department of Education (CDE) with input from educational stakeholders, including the Artificial Intelligence Working Group. It is intended to support transitional kindergarten through grade 12 (TK–12) educators in effectively addressing artificial intelligence within the TK–12 school setting. Its primary focus is to enhance student learning and engagement through the thoughtful integration of AI into instructional practices. The guidance is not designed to address applications beyond the educational context, such as broader societal or community use.
Artificial Intelligence (AI) is rapidly transforming the TK–12 learning environment, expanding the tools available to schools, including, but not limited to, the following:
- Adaptive learning platforms: Digital learning systems that adjust the difficulty, pacing, and type of instructional content in real time based on each learner’s performance, needs, and preferences. These platforms use data analytics and algorithms to personalize the learning experience and provide targeted support.
- Intelligent tutoring systems: AI-driven instructional tools that offer step-by-step guidance, feedback, hints, and explanations. They monitor learner behavior, diagnose misunderstandings, and adapt instruction to support mastery of specific skills or concepts.
- Grading assistants: Automated or AI-powered tools that help educators evaluate student work more efficiently. They can score assignments, analyze written responses, provide rubric-based feedback, and flag areas needing human review, allowing teachers to focus on deeper instructional tasks.
- Classroom chatbots: Conversational AI tools designed to interact with students and teachers in real time. They can answer questions, provide reminders, guide students through activities, support language learning, and enhance classroom engagement through natural language dialogue.
- Immersive simulations: Interactive learning environments, often using virtual reality (VR), augmented reality (AR), or advanced game-based technologies, that allow learners to experience realistic scenarios. These simulations help students practice skills, experiment safely, and engage deeply with complex concepts.
- New assessment models: Innovative approaches to evaluating learning that move beyond traditional tests. These models may include performance-based assessments, competency-based evaluations, continuous formative feedback, AI-enhanced analytics, and authentic tasks that measure real-world skills and understanding.
As these technologies advance, TK–12 educators will continue to play an essential role in fostering students’ creativity, critical thinking, authentic voice, and healthy human connection, while also addressing equity, bias, and access concerns that directly affect young learners.
Effective and responsible integration of AI in TK–12 settings will rely on ongoing professional learning for educators, clear and student-centered governance structures, and infrastructure that ensures transparency, security, and equitable access. As AI continues to evolve, educator adaptability and a commitment to continuous improvement will be essential.
The World Economic Forum Future of Jobs Report 2025 (PDF) reinforces this urgency, identifying AI and big data as the fastest-growing skill areas, alongside networks, cybersecurity, and technology literacy. Foundational competencies such as creative thinking, flexibility, resilience, curiosity, and lifelong learning are also expected to rise sharply through 2030—skills TK–12 schools must intentionally foster.
California’s approach emphasizes both learning about AI—how it functions, its benefits, and its risks—and learning with AI, ensuring students and educators use emerging tools effectively and ethically. This dual focus prepares TK–12 learners not only to navigate an AI-driven world, but to shape it. Future developments—including AI-powered fact-checking, personalized and immersive learning experiences, AI-assisted data analysis for student grouping, and new assessment formats supported or evaluated by AI—signal a shift in how schools can personalize learning and support student growth.
Ensuring equitable access to these technologies across TK–12 schools, especially those in rural, urban, and underresourced communities, will be essential to preventing new digital divides. By maintaining a human-first approach, California can integrate AI in ways that elevate teaching, expand opportunity, and create inclusive, supportive, and innovative TK–12 learning environments.
What Is Artificial Intelligence?
In accordance with Section 33328.5 of the Education Code:
“Artificial intelligence” means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer, from the input it receives, how to generate outputs that can influence physical or virtual environments.
At its core, AI refers to computer systems designed to perform tasks that typically require human intelligence. These tasks include understanding language, recognizing patterns, making decisions, generating images and video, and learning from experience.
AI systems begin by processing large volumes and varied types of data, such as text, images, and numbers. The AI then uses algorithms (sets of instructions) to analyze and interpret the data. Many AI systems also incorporate a learning component, allowing them to improve over time by analyzing new data, a process known as machine learning.
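For readers who want a concrete picture of the distinction between a fixed algorithm and a learning component, the short sketch below is a hypothetical, standard-library Python illustration (not a real AI system): it contrasts a rule written by a person with a rule inferred from a handful of labeled examples.

```python
# Hypothetical illustration only: a hand-written rule vs. a rule inferred from data.
# Real AI systems learn from far larger datasets with far more complex models.

def fixed_rule(temperature_f):
    """An explicit algorithm: a set of instructions written by a person."""
    return "coat" if temperature_f < 60 else "no coat"

def learn_threshold(examples):
    """A tiny 'learning' step: estimate a temperature threshold from labeled data
    by splitting the difference between the warmest 'coat' day and the coldest
    'no coat' day seen so far. Adding new examples changes the learned rule."""
    coat_temps = [t for t, label in examples if label == "coat"]
    no_coat_temps = [t for t, label in examples if label == "no coat"]
    return (max(coat_temps) + min(no_coat_temps)) / 2

# Labeled training data: (temperature, what people actually wore)
training_data = [(38, "coat"), (45, "coat"), (58, "coat"), (64, "no coat"), (72, "no coat")]
threshold = learn_threshold(training_data)

print(fixed_rule(55))                              # rule authored by a person: "coat"
print("coat" if 55 < threshold else "no coat")     # rule inferred from data (threshold = 61.0)
```

The point of the sketch is only that the second rule changes as new examples arrive, which is the sense in which machine learning systems "improve over time."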
Once the data is processed, the AI produces an output. For example, in a language translation application, the input could be a sentence in one language, and the output is a language translation in another specified language. Generative AI represents a particularly significant dimension of AI for teaching and learning, as it creates new and original content that may differ with each query. As AI becomes more integrated into daily life and educational practice, it is essential to understand not only how these systems function, but also how they shape human interaction, creativity, and decision-making. This understanding lays the groundwork for thoughtful, human-centered implementation in schools.
Types of Artificial Intelligence
| Type | Definition | Examples of Current Uses in Education | Examples of Potential Future Uses in Education |
|---|---|---|---|
| Generative AI | AI that creates new content—text, images, audio, code, or video—by learning from large datasets and generating original outputs. | Drafts essays, generates lesson materials, or creates visuals for projects. | Integrated classroom assistants that cocreate lessons with teachers, student storytelling tools that turn prompts into multimedia projects, and real-time translation or accessibility supports. |
| Agentic AI/Autonomous AI | AI that operates autonomously when deployed by humans, makes decisions, analyzes situations, adapts to changes, and continuously improves. | Personalizes learning experiences without relying on fixed algorithms and prearranged responses. | Proactive, real-time, dynamically adjusted content, teaching strategies, and learning pathways based on the evolving needs of individual students. |
| General AI | A hypothetical form of AI that would match or surpass human intelligence across all domains, with the ability to reason, plan, and learn like a person. | N/A | Support for complex decision-making; simulated human-like tutors or administrators capable of creatively collaborating with students and teachers. Because current AI models draw on the inputs of humans, efficacy would require addressing ethical issues such as dark patterns (manipulative or deceptive interfaces), bias (outputs based on data that is not objective or is skewed), and hallucinations (outputs that are misleading or inaccurate). |
Human-Centered AI
While AI offers powerful opportunities to personalize learning, expand access, and improve efficiency in education, without thoughtful implementation it can unintentionally undermine student development, educator–student relationships, and professional expertise.
This guidance emphasizes keeping human connection, ethical decision-making, and adaptable system design at the center of AI’s role in transitional kindergarten through grade 12 (TK–12) education. Human relationships are foundational to learning. No technology, including AI, can replace the value of a caring educator who connects with students on a human level.
AI as an Enhancer, Not a Substitute
AI should enhance, not replace, the educator’s role. Thoughtfully integrated AI can reduce administrative burden and automate routine tasks, freeing educators to focus on the deeply human aspects of teaching: fostering emotional connection, guiding ethical reflection, and supporting personalized learning.
This approach aligns with California’s commitment to equitable, standards-based achievement by keeping educators focused on foundational skills such as literacy, numeracy, and multilingualism. Human involvement remains essential at every stage of AI use, from generating inputs to evaluating outputs. Educators’ professional judgment ensures that technology serves learning rather than directing it.
Supporting Student Well-Being
AI can process information efficiently, but it cannot empathize, care, or make nuanced ethical judgments. Preserving human empathy and judgment is especially important as AI tools such as chatbots and digital coaches enter schools. These technologies should be carefully vetted for emotional safety, privacy, and ethical use to protect student well-being and uphold parents’ rights to access and review student records.
Overpersonalizing AI, such as designing systems to mimic student voices or simulate friendship, can diminish opportunities for authentic communication, empathy, and creativity. Emotional intelligence cannot be outsourced; it should be cultivated through genuine human interaction. Any AI system engaging students in reflection, goal setting, or emotional support should be reviewed by educators, counselors, and mental health professionals to ensure it complements, not replaces, human care.
Educators play a vital role in fostering belonging and well-being, particularly for LGBTQ+ students and those facing mental health challenges. Districts are encouraged to align AI practices with the California Department of Education Mental Health Support Initiative.
AI algorithms can inadvertently narrow perspectives, fostering isolation or frustration, particularly through social media and entertainment platforms. Building strong peer relationships, carefully setting parameters for acceptable and unacceptable AI use, providing AI training and resources, and integrating social–emotional learning help counter these effects.
Cognitive and Developmental Considerations
As AI becomes more embedded in children’s lives, it is vital to understand how it influences brain development and social interaction. Neuroscience shows that authentic problem-solving and human connection form core neural pathways. While AI may strengthen pattern recognition and adaptability, unbalanced use may limit sustained attention, emotional regulation, and interpersonal communication.
To maximize benefits and minimize risks, educational systems should approach AI use with intentionality. Just as calculators changed how students approached computation, and digital communication transformed social dynamics, AI’s impact on thinking and behavior will evolve with use. Educators and policymakers face the challenge of designing learning experiences that leverage AI’s strengths while intentionally nurturing uniquely human capacities that machines cannot replicate.
Educators should design learning experiences that balance working with, alongside, and without AI. For instance, a student might complete one essay unaided, another using AI for brainstorming, and a third with AI feedback. Reflecting on these experiences helps students and teachers analyze how AI affects creativity and reasoning.
Because research on AI’s long-term effects is still emerging, schools should preserve opportunities for independent thinking, writing, and problem-solving. This balance ensures that AI supports rather than supplants human cognition.
Risks of Human-Replacement AI
It is vital that educational systems mitigate the risks of human-replacement AI. Practices that could undermine educational quality include, but are not limited to:
- AI that simulates emotional support without human oversight
- AI-generated feedback that lacks educator review
- Overreliance on AI tools in ways that diminish autonomy, skills, and trust
- Implementation of AI tools that outpaces policy, increasing risks such as inequity, inadequate data privacy protection, and misuse
Designing Flexible, Human-Centered Systems
Educational systems must evolve alongside AI, maintaining flexibility, equity, and human-centered design. Vetting, adoption, and implementation of AI tools should be a community-wide, inclusive activity that ensures innovation supports all learners—particularly those in underserved communities. AI tools should serve to close educational gaps rather than exacerbate them. Ethical frameworks guiding AI use should reflect diverse perspectives and lived experiences.
At the classroom level, educators remain the cornerstone of learning. AI should amplify their impact while local educational agencies ensure transparency, structured feedback, and inclusive participation in adoption decisions.
With educator and student voices at the center, the aim is to ensure AI adoption strengthens learning environments, accelerates achievement, and prepares all learners to thrive in an evolving world.
Guiding Questions for LEAs
Questions that LEAs should consider when vetting, adopting, and implementing AI tools include, but are not limited to:
- How will we ensure compliance with local policy and state/federal laws related to data privacy, transparency, equity, and parental/student rights?
- How will we safeguard and elevate teacher expertise, ensuring that professional judgment remains the cornerstone of AI-supported instruction and assessment?
- In what ways will AI integration align with instructional standards and strengthen existing instructional goals, including multilingualism, critical thinking, and college and career readiness?
- What professional learning opportunities will equip educators to use AI ethically, effectively, and confidently in support of student learning?
- How will AI be used to strengthen, rather than replace, relationships among educators, students, and families, especially for newcomers and other historically underserved groups?
- How might we establish ongoing opportunities for students, educators, and families to stay engaged in decisions regarding AI in schools?
- How are we vetting these tools and chatbots to ensure they support, rather than hinder, mental health and connection?
- As AI technologies continue to evolve, how will adaptability be maintained while keeping human connection and student well-being central to implementation?
AI Literacy
In accordance with Education Code Section 33548, AI literacy refers to the knowledge, skills, and attitudes related to how AI works—its principles, concepts, and applications—as well as its limitations, implications, and ethical considerations. As students, educators, parents, and school community members demystify AI and learn its safe use, they build understanding that supports skill development, responsible use, and the ability to identify inaccuracies.
AI literacy is essential for all members of the school community, including educators and students. Because AI increasingly shapes how people learn, communicate, and make decisions, understanding how AI tools work helps educators use them responsibly and creatively while supporting students in becoming informed, independent thinkers.
For educators, AI literacy strengthens instruction, supports personalization, and helps promote equity. For students, it builds awareness of how AI influences their daily lives—from search results to social media—and develops their ability to evaluate AI outputs, understand limitations, and consider ethical and societal impacts. Together, AI literacy and AI fluency help schools engage with AI thoughtfully, ethically, and effectively.
AI literacy should be embedded across content areas—not limited to computer science—and introduced as early as elementary grades. Early instruction focuses on awareness and grows over time in ways that match students’ developmental readiness. Guidance from resources such as the Computer Science Teachers Association (CSTA) and AI4K12 report AI Learning Priorities for All K–12 Students (login required) can help educators set appropriate expectations by grade span (CSTA and AI4K12 2025).
AI literacy prepares students and educators to understand and responsibly use a range of AI systems, including algorithmic social media feeds, generative AI, proactive or agentic AI, AI companions or characters, and AI-generated audio, video, or images. It also involves knowing when these tools can enhance learning and when human reasoning must remain central. Staying aware of current research supports informed classroom use.
AI Literacy for Students
AI literacy develops progressively across grade levels—from recognizing AI in everyday tools to analyzing, creating, and making responsible decisions with AI. A developmentally appropriate framework can outline learning goals, sample activities, and sample prompts, helping educators connect technical understanding with social responsibility. This ensures all students understand how AI systems are designed, who they affect, and how equitable and ethical participation in an AI-driven world can start in their own classrooms.
| Grades | Focus/Learning Goal | Sample Activity | Evaluation Questions |
|---|---|---|---|
| Transitional Kindergarten (TK)–2 | Notice & name: distinguish people from simple AI, identify examples in daily life | “Is it a person or a machine?” picture sort; teacher-led story with a friendly chatbot character | Who made the tool? Who does it help? |
| 3–5 | Interact & question: use simple voice assistants, practice asking good questions; basic concept of data | Compare answers from two assistants: unplugged activity showing how data shapes outputs | What might the assistant not know? Who is missing from the data? |
| 6–8 | Experiment & detect bias: basic machine learning concepts (patterns), evaluate outputs for fairness | Small group project: train a simple classifier with toy data; critique biased outputs | What groups are harmed? How could we change the data? |
| 9–10 | Create & protect: prompt design, citation practices, privacy basics, impact case studies | Multimedia project: produce and annotate an AI-assisted video; privacy checklist for inputs | Who owns the output? What data did you give the system? |
| 11–12 | Design & influence: participatory design, policy proposals, model audits | Capstone: audit an app used in school; propose redesigns or governance recommendations | Who should be in the design team? What policy would protect students? |
Building AI literacy goes beyond learning to operate AI tools. It involves helping students understand how these systems work, why they produce certain outputs, and how to think critically and creatively about their benefits and challenges. When students unpack how AI makes decisions, they are better prepared to use it safely, recognize its limitations, and spot potential errors.
Students should have opportunities to examine risks such as bias, environmental impact, and misinformation while also using AI to support their learning. Simple, hands-on activities—like designing AI-powered characters, blending AI with block-based coding, or creating collaborative projects—make abstract concepts concrete and show how AI connects to real-world tasks. These experiences help students build confidence, adaptability, and problem-solving skills that support California’s goal of accelerating achievement for all learners.
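One way to make this kind of hands-on bias exploration concrete is a short coding demonstration. The snippet below is a minimal, hypothetical Python sketch (standard library only) of a one-feature classifier whose mistake comes directly from unrepresentative training data rather than from the sorting logic itself; it is offered as a possible starting point, not a prescribed activity.

```python
# Hypothetical classroom sketch: a tiny classifier that "learns" from labeled examples.
# Its error below is caused by skewed training data, not by the algorithm.
from collections import Counter

def train(examples):
    """For each ear shape seen in training, remember the most common label."""
    votes = {}
    for features, label in examples:
        votes.setdefault(features["ears"], []).append(label)
    return {ears: Counter(labels).most_common(1)[0][0] for ears, labels in votes.items()}

def predict(model, features):
    """Look up the label the model associates with this ear shape."""
    return model.get(features["ears"], "unknown")

# Skewed training data: every dog the model saw happened to have floppy ears.
training = [
    ({"ears": "pointy"}, "cat"),
    ({"ears": "pointy"}, "cat"),
    ({"ears": "floppy"}, "dog"),
    ({"ears": "floppy"}, "dog"),
]

model = train(training)
husky = {"ears": "pointy"}       # a pointy-eared dog breed missing from the training set
print(predict(model, husky))     # prints "cat" because the bias lives in the data
```

Students can then "fix" the system by adding pointy-eared dogs to the training list and rerunning it, mirroring the critique-and-retrain cycle described in the grades 6–8 row of the framework above.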
The environmental impacts of AI affect people and communities. High energy and water use can strain local resources, emissions contribute to public health and climate problems, and these burdens often fall unevenly on different social groups. In short, AI’s environmental footprint directly shapes social well-being and equity.
Project-based civic inquiry adds another layer of relevance. Students can investigate how AI shows up in everyday systems, such as content-filtering tools at school, identify who is affected, and brainstorm ways to make those systems more fair or effective. When projects meet real community needs, such as translation support for multilingual families or accessibility tools for neurodiverse learners, AI becomes a vehicle for inclusion rather than a novelty. It is important that all uses of AI include human checks and balances to ensure that outputs are accurate and support inclusion of all members of the school community.
Integrating community relevance into AI learning ensures that classroom practice, professional development, and school or districtwide supports reinforce a shared vision of equitable, human-centered AI in transitional kindergarten through grade 12 (TK–12) education.
AI Literacy for Educators and School Communities
Local educational agencies are encouraged to offer ongoing, role-specific professional learning for paraeducators, administrators, counselors, coaches, teachers, classified staff, school volunteers, and parents. Tailoring training to each role strengthens instructional practice and supports the effective, pedagogically sound use of AI. The Quality Professional Learning Standards can guide this work. While all members of the school community should have a basic understanding of AI and its acceptable and unacceptable uses for educational purposes in the school/district, LEAs are encouraged to build capacity, ensuring intermediate and advanced training for staff and educators who can serve as mentors and support the creative, ethical, and safe use of AI tools.
Below are training and resource recommendations for ensuring that school communities have the collective baseline knowledge and local expertise to select and use AI tools ethically, effectively, and in compliance with laws and best practices.
| Training Level | Audience(s) | Learning Objectives | Sample Activities |
|---|---|---|---|
| Basic | All members of the school community, including educators, students, staff, families, and volunteers | Understand how AI is defined and used in the school/district, how data (especially student data) is protected, how outputs are evaluated for accuracy, how equity/ethics/humans remain at the center of AI adoption, AI-related laws and policies that inform parent and student rights, and where to go for AI resources and assistance | 5-minute AI Lightning Talks at school assemblies and family events (back-to-school night, PTA meetings, open house), highlights in the school newsletter/on the school website, quarterly trainings in the school library, student-written articles in the school newspaper, links to resources on official district/school social media feeds |
| Intermediate | District staff, educators, and students getting started with AI | Understand policies and procedures for gaining approval for use of AI tools; tips for enhancing learning with AI; tips for automating simple tasks with AI to decrease burnout and free up time for human connection; spotting and managing misuse of AI tools, hallucinations, and inaccurate/biased AI outputs | Interactive teacher workday in-service designed and facilitated by teachers/staff at the advanced level, advanced-level teachers/staff partner with and mentor intermediate staff, 5-minute AI Lightning Talks at staff meetings, AI presentations for teachers researched, developed, and facilitated by students |
| Advanced | District staff, educators, and students leading district AI efforts | Develop AI trainings for members of the school community, vetting educational AI tool features and limitations, networking with industry experts and educators outside the school/district to monitor AI trends and tools, mentoring school community members interested in learning about or using AI | Starting or being the advisor for an “AI Club” for students interested in learning about and using AI, preparing or presenting 5-minute AI Lightning Talks and trainings for various members of the school community, representing the school/district when AI is being considered or discussed at school board meetings, developing/participating in AI educator professional learning communities (conferences, online forums) |
While some members of the school community may not be interested in or available for a deep dive into AI tools, every school community member should have opportunities to ask questions and gain clarity on how AI is and is not being used by their school/district. Parents, students, and all members of the school community should have access to resources that demystify what AI is and how it supports learning. Parents and students should be given multiple opportunities to learn about the laws and policies that impact their rights when AI tools are used in the classroom.
As AI becomes a ubiquitous part of the human experience, educators have an obligation to prepare students to safely interact with and use these tools. Because this signals a shift in traditional ways of teaching, it is important that educators be given hands-on opportunities to explore AI tools in supportive, collaborative environments. Such opportunities build confidence, reduce hesitation, and model the kind of inclusive learning spaces students should experience as they use AI alongside their teachers. Flexible professional learning options—such as short introductory videos, discussion-based modules, community-specific resources, and yearlong pathways—allow schools and districts to meet local needs while staying aligned statewide. Professional learning networks and communities of practice extend this support by sharing strategies, solving problems together, and helping educators learn from one another. Centralized, vetted resources such as privacy guidance, approved tools, lesson plans, and curated training materials help schools innovate safely and avoid duplicating effort.
To use AI meaningfully, educators need ongoing, interactive learning experiences that encourage reflection and practical application. Helpful approaches include structured discussions about appropriate and inappropriate AI use, scenario-based activities, and strategies for communicating with students and families. Educators should also have chances to codesign lessons and units that support both teaching with AI and teaching about AI.
Adult AI literacy includes understanding how AI systems process data, make predictions, and generate outputs—as well as recognizing limitations, errors, and potential bias. Educators should be able to assess AI outputs, practice responsible use (including citing AI tools and respecting intellectual property), and integrate AI in ways that align with standards and existing instructional practices. Modeling transparency, explaining how AI tools are used, and encouraging students to question outputs helps build a classroom culture grounded in responsible, thoughtful AI use.
LEAs are encouraged to prioritize in-person professional learning for AI instruction. Investing in adult AI literacy empowers educators to use AI confidently and ethically, preparing both themselves and their students for a rapidly evolving digital world.
Frameworks and Guidelines for AI Integration
Frameworks and guidelines help ensure consistency while allowing flexibility for local adaptation. Integrating AI into standards and existing instructional practices across multiple subjects makes learning relevant to the real world and prepares students for careers where AI plays an increasing role. LEAs are also encouraged to consider reviewing existing guiding documents such as profiles of a graduate and strategic plans so that AI integration is in service of achieving those goals.
Curriculum Standards Alignment
AI inevitably impacts current and future workforce requirements, which in turn inform the skills our students need. To best prepare students to thrive and function in a highly connected world with AI, teaching and learning with AI must go beyond superficial, substitution-level use and instead be fluidly integrated throughout curriculum standards.
Similar to any other best teaching practice, learning about and with AI is not done in isolation or silos, but rather meaningfully embedded into the overall learning experience.
Assembly Bill (AB) 2876 requires the Instructional Quality Commission to consider incorporating AI literacy content into the mathematics, science, and history–social science curriculum frameworks when those frameworks are revised, including in the criteria for evaluating instructional materials in those subjects.
The following activity ideas align to California standards to demonstrate integrated AI literacy. The ideas presented address two main topics of AI literacy as reflected in AB 2876: how AI works and ethical considerations.
How AI Works
| Grade | Focus / Learning Goal | Sample Activity | Guiding Question |
|---|---|---|---|
| K | Comparing human observation with machine rules | Students look out the window to decide whether they need a coat today (human observation), then use picture cards to “program” a partner with a strict rule: “If you see a rain cloud card, then you must pick up the umbrella card.” Next Generation Science Standards (NGSS) K-ESS2-1 | How do you decide what to wear when you look outside, and how is that different from how we teach a robot to follow a rule? |
| 1 | Pattern-based classification rules | Students freely sort animal picture cards into groups, then build a physical decision tree on the floor using arrows and Yes/No questions (e.g., “Does it have wings?”) to guide a “robot” classmate to correctly classify a mystery animal according to its survival characteristics. NGSS 1-LS1-1 | Why does the “robot” student need a specific list of Yes/No questions to know where to put the card, while you can just look at it and decide? |
| 2 | Learning from training data | Students sort shape cards into “Triangles” and “Not Triangles” to create a training dataset, then challenge a partner acting as the computer to discover the pattern (three sides) needed to correctly label a new mystery shape. California Common Core State Standards (CA CCSS) 2.G.A.1 | If we used only red triangles to teach the computer, would it be able to learn the pattern that blue shapes can be triangles too? |
| 3 | Evolving models with new data | One student acts as the “Trainer” providing labeled cards (e.g., apple core = compost, foil = trash) to a partner acting as the “Recycle Bot,” who must analyze the examples to build a rule list for sorting a final mystery item. NGSS 3–5-ETS1-2 | Why did your first rule fail when you saw the purple card, and why does an AI need many different examples to learn the truth? |
| 4 | Pattern-based prediction | Students train a “Travel Bot” (a partner) by showing it labeled pictures of the Mojave Desert (sand, cactus) and the Pacific Coast (sand, water), challenging the “Bot” to learn the rule that while both have sand, the presence of “water” is the required clue to classify the Pacific Coast. California History Social Science (CA HSS) 4.1 | Why did the “Bot” get confused by the sandy beach picture at first, and how did pointing out the water help it learn the right rule? |
| 5 | Impact of biased data on output | Students role-play as an “AI Historian” provided only with British Loyalist sources to learn about the Boston Tea Party, discovering that their resulting summary describes the event as a “criminal act” rather than a “protest” because their training data was one-sided. CA HSS 5.5 | If an AI is only “fed” stories from one side of a conflict, why will its answers always be unfair, even if the machine isn’t trying to be mean? |
| 6 | Reverse engineering hidden logic | Students interact with a “Human Function Machine” (a student with a secret math rule card, like “multiply by 2, then add 1”) by giving it numbers and analyzing the answers to predict the secret algorithm inside the “black box.” CA CCSS 6.EE.9 | Since you can’t see the rule card inside the machine, why do you need to test multiple different numbers to be sure you understand how it works? |
| 7 | Improving model accuracy through diverse data | Students simulate a “Habitat ID” bot that mistakenly labels deserts as “lifeless” because it was trained only on rainforest photos and then fix this error by adding images of healthy arid landscapes to the training pile. NGSS MS-LS2-2 | Why did the bot fail to recognize the desert as a habitat at first, and how did adding new examples help it understand that nature doesn’t always look green? |
| 8 | Identifying safety limitations and bias | Students design a “Warning Label” (model card) for a hypothetical “Car Safety AI” trained only on adult male crash-test dummies, explicitly writing that the model has a critical limitation: it cannot accurately predict injuries for women or children. NGSS MS-PS2-1 | Why is it dangerous to trust a safety tool if you haven’t read the “ingredients label” to see who it was actually built to protect? |
| 9 | Handling data outliers | Students act as real estate agents pricing a regular house in a neighborhood that includes one celebrity mansion, justifying the selection of a “Median-Based” algorithm (finding the middle value) over a “Mean-Based” algorithm (average) because the mansion is an outlier that breaks the average. CA CCSS S-ID.2 | Why is the “average” (mean) the wrong tool to use when one giant number hides the truth about all the others? |
| 10 | Privacy risks of immutable data | Students act as “Data Guardians” for a hospital, rejecting a dataset for a research AI because it contains patients’ full DNA profiles linked to their home addresses, arguing that if this data is stolen, the patients can never change their DNA like they can change a password. NGSS HS-LS3-1 | Why is a data leak involving your genetics much more dangerous than a data leak involving your credit card number? |
| 11 | Modeling decision-making logic | Students act as “Cold War Architects” by creating a physical flowchart using “If/Then” rules based on the Truman Doctrine (e.g., “If a country faces communist rebellion -> Then authorize military aid”) to simulate how the US government algorithmically determined intervention in the 1950s. CA HSS 11.9 | How does turning a complex political philosophy into a rigid set of instructions (a model) force us to simplify the nuanced reality of international relations? |
| 12 | Justifying transparency over complexity | Students act as bank regulators choosing a method to approve home loans, justifying the selection of a simple, transparent “Income Math Formula” (rule-based) over a complex “Social Media Behavior Scanner” (machine learning), arguing that consumers have a legal right to know exactly why they were denied money. CA HSS 12.3 | Why might we ban a “smarter” AI that predicts who will pay back a loan if it can’t explain why it rejected a specific person? |
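Some of the activities above extend naturally into quick coding demonstrations. As one hedged example tied to the grade 9 row, the hypothetical Python snippet below (standard library only) shows how a single outlier drags a mean-based price far from the typical house while leaving a median-based price nearly unchanged.

```python
# Hypothetical sketch of the grade 9 outlier activity: mean vs. median pricing rules.
from statistics import mean, median

# Sale prices (in thousands of dollars) for a neighborhood with one celebrity mansion
prices = [350, 360, 365, 370, 380, 9500]

print(f"Mean price:   {mean(prices):,.0f}")    # about 1,888 (dragged up by the mansion)
print(f"Median price: {median(prices):,.0f}")  # about 368 (reflects the typical house)
```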
Ethical Considerations
| Grade | Focus / Learning Goal | Sample Activity | Guiding Question |
|---|---|---|---|
| K | Distinguishing natural growth from human engineering | Students hold a real leaf and a toy robot (or calculator) to compare origins, discussing how the leaf grew from a seed with water but the robot had to be built by a person using tools. CA NGSS K-LS1-1 | Why can’t we plant a tablet in the garden and wait for a new one to grow? |
| 1 | Strict logic vs. personal preference | Students act as a “Sorting Machine” organizing a pile of buttons strictly by color (Blue vs. Red), then switch to a “Human Designer” picking out buttons simply because they are “cool” or “pretty,” showing that machines follow rules well but humans understand opinions. CA CCSS 1.MD.4 | Why is a machine good at putting all the blue buttons in a cup but bad at picking the “coolest” button for a costume? |
| 2 | Improving efficiency and accuracy | Students race to manually write down and add up the prices of 10 classroom items to see how slow and error-prone it is, using this frustration to explain why humans invented barcode scanners (computing technology) to make grocery lines move faster. CA CCSS 2.MD.8 | Why did people invent a machine to “beep” at groceries instead of just doing the math in their heads? |
| 3 | Processing speed for safety | Students act as “Weather Watchers” trying to predict a storm by analyzing 50 separate paper maps spread across the floor (slow and overwhelming), leading to a discussion on why we built computers to analyze that same amount of data in one second to warn people of danger. NGSS 3-ESS3-1 | Why isn’t it safe enough to just look out the window to predict a hurricane, and how does a computer help us “see” the weather before it arrives? |
| 4 | Experiential learning vs. numerical input | Students discuss how they learned to avoid touching a hot stove (pain/physical experience) and compare this to how a robot learns the same concept (by processing a list of temperature numbers labeled “Safe” or “Unsafe”). NGSS 4-LS1-2 | Why does a human need to feel the heat to learn a lesson, while a computer just needs to read the data? |
| 5 | Emotional value vs. mathematical efficiency | Students simulate packing a pioneer wagon using a “Survival Bot” checklist that forces them to discard family heirlooms to save weight, debating why the bot’s mathematically correct decision feels wrong to a human. CA HSS 5.8 | Why is a computer the best tool for calculating how much a wagon can carry, but the wrong tool for deciding which family treasures are worth saving? |
| 6 | Efficiency vs. cultural exchange | Students simulate a “Trade Bot” that draws the fastest, most direct line across a map to deliver silk (optimizing for speed), comparing it to a “Human Merchant” who stops at dangerous oasis cities to rest and talk, analyzing the trade-off between the machine’s fast delivery and the loss of the idea-sharing (cultural diffusion) that actually changed history. CA HSS 6.6.7 | If we had used a machine to make the Silk Road perfectly fast and efficient, why might the world have never learned about Chinese inventions like paper or the compass? |
| 7 | Efficiency vs. displacement of labor | Students compare the experience of being a “Medieval Scribe” hand copying a paragraph (slow, artistic) to a “Printing Press” student using a rubber stamp, analyzing the trade-off between the machine’s speed and the loss of the scribe’s unique craftsmanship and job. CA HSS 7.8.4 | Why might a medieval scribe hate the invention of the printing press, and how is that similar to how an artist today might feel about AI creating art in seconds? |
| 8 | Efficiency vs. uniformity | Students compare the output of a “Factory Algorithm” group that uses stencils to mass-produce identical paper shirts in seconds versus a “Master Tailor” who draws one unique shirt slowly, debating whether the machine’s speed is worth the loss of creativity and customization. CA HSS 8.6.1 | Why do we often accept “good enough” products just because a machine can make them fast and cheap, and are we making that same trade-off when we use AI to write for us? |
| 9 | Unintended consequences of optimization | Students role-play as engineers deploying an “AI Ocean Cleaner” with the single command to “remove all foreign objects,” analyzing the disaster that follows when the machine efficiently removes boats and divers along with the trash because the humans failed to regulate its definitions. CA NGSS HS-LS2-7 | If the machine did exactly what you told it to do, why are you responsible for the damage it caused to the ecosystem? |
| 10 | Defining the boundaries of “Universal Rights” | Students review the Universal Declaration of Human Rights to select which articles (e.g., Freedom of Thought) must remain exclusively human, debating why granting “Freedom of Speech” to an algorithm could be dangerous for society. CA HSS 10.9 | Why do we believe that human rights are “inalienable” (cannot be taken away), but that we must always keep the right to delete or reprogram an AI? |
| 11 | Algorithmic amplification of historical bias | Students overlay a 1930s “redlining” map with a map generated by a hypothetical “AI Mortgage Assistant” that uses zip codes to determine risk, analyzing how the human choice to train the AI on historical data causes the machine to reinforce the segregation of the past. CA HSS 11.10.2 | If we train an AI to predict value based on history, why does that choice make it impossible for the AI to fix the mistakes of the past? |
| 12 | Defining legal personhood vs. property | Students simulate a Supreme Court hearing to argue whether a hypothetical “conscious” AI should be granted the legal status of a “person” under the 14th Amendment or remain classified as “property” to be bought and sold. CA HSS 12.2 | Why does our legal system require a being to have a human “consciousness” to possess rights, and what happens to our democracy if we decide that software can be a citizen? |
This guidance acts as a reminder for educators to verify that AI integration is aligned to curriculum standards. The existing California Department of Education curriculum frameworks can be reviewed for reference. Revised frameworks will be posted on completion.
Similarly, LEAs should verify curriculum standards alignment if and when AI tools are procured. LEAs should also comply with student data privacy laws and policies, as cross-referenced in the Data Privacy, Security, and Procurement section.
Pedagogical Implications
As AI tools become more advanced, educators will need to develop assignments and assessments grounded in real-world contexts where AI is not necessarily the driver but is instead a support. Educators may need to balance student opportunities to explore AI with “AI-resilient” learning experiences, encouraging authentic reasoning and student voice. For example, students who research a topic thoroughly by applying critical thinking and research strategies prior to turning to AI are better equipped to identify inaccuracies, biases, and hallucinations in AI outputs.
In designing learning experiences for students, educators are encouraged to refer to Technological Pedagogical Content Knowledge: A Framework for Teacher Knowledge (TPACK; Mishra and Koehler 2006). Technological knowledge can be built as students learn about AI, content knowledge can deepen as students learn with AI, and pedagogical knowledge can be exhibited as students learn through meaningful, relevant AI experiences. The visual model of TPACK is shown below.

Reproduced by permission of the publisher, © 2012 by TPACK.org
Effective AI-supported instruction requires personalized approaches across disciplines, grade levels, and educator roles to meet student needs. Educators are encouraged to consider how AI is used beyond TK–12 education, investigating how a diverse variety of professionals employ AI in their work. Subject-specific applications in schools might include the following.
- History–Social Science may use AI to analyze primary sources or simulate historical debates.
- Science may leverage AI for data modeling and experimentation.
- Environmental Science may use AI to research and analyze data on the energy consumption and water usage needed to power and cool AI server farms and how these relate to climate change.
- English Language Arts might explore AI-assisted writing tools while emphasizing original voice and authorship.
- Administrators and counselors may use AI for data-informed decision-making and student support.
- World Language may use AI for real-time translation, conversational practice with chatbots, pronunciation feedback, and cultural immersion experiences.
- Special Education may use AI to provide personalized learning supports, text-to-speech and speech-to-text tools, adaptive communication systems, and behavior or engagement tracking to support individualized education plans (IEPs).
- Arts may use AI to generate music, visual art, or choreography as part of creative projects, support digital portfolio creation, and analyze artistic styles or techniques.
- Mathematics might use AI to provide adaptive problem sets, step-by-step feedback, pattern recognition exercises, and predictive tools for data analysis or statistics.
- Career Technical Education may use AI for industry simulations, coding and robotics applications, design and manufacturing optimization, and preparing students for AI-driven career fields.
By aligning AI with subject-specific standards, as well as college, career, civic, community, and workforce needs, schools can ensure that the use of AI supports relevance, engagement, and future readiness.
AI Literacy Integration
To help students become well prepared to navigate an AI-powered world, the AI for K–12 initiative (AI4K12) outlines 5 Big Ideas of AI that provide foundational guidelines for deeper curriculum integration. These ideas can be incorporated across grade levels in ways that align with students’ developmental stages, supporting growth in cognitive, computational, and ethical skills. By introducing these concepts progressively, educators can help learners understand the principles of AI, explore its applications, and reflect on its societal impacts. Grade band progression charts for each Big Idea are available at ai4k12.org, offering guidance for integrating AI literacy across K–12 education.
The 5 Big Ideas of AI
Perception
Computers perceive the world using sensors. Perception is the process of extracting meaning from sensory signals. Making computers “see” and “hear” well enough for practical use is one of the most significant achievements of AI to date.
Representation & Reasoning
AI maintains representations of the world and uses them for reasoning. Representation is one of the fundamental problems of intelligence, both natural and artificial. Computers construct representations using data structures, and these representations support reasoning algorithms that derive new information from what is already known. While AI can reason about intricately complex problems, it does not think the way a human does.
Learning
Computers can learn from data. Machine learning is a kind of statistical inference that finds patterns in data. Many areas of AI have progressed significantly in recent years thanks to learning algorithms that create new representations. For the approach to succeed, tremendous amounts of data are required. This “training data” must usually be supplied by people but is sometimes acquired by the machine itself.
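As a concrete, hedged illustration of this idea (a minimal, standard-library Python sketch with made-up numbers, not an AI4K12 resource), the snippet below infers a simple linear pattern from example data rather than being given the rule directly.

```python
# Hypothetical sketch: statistical inference of a pattern from (made-up) training data.
from statistics import mean

minutes = [10, 20, 30, 40, 50]   # training data: minutes spent studying
scores  = [55, 62, 71, 78, 86]   # training data: quiz scores observed

mx, my = mean(minutes), mean(scores)
slope = sum((x - mx) * (y - my) for x, y in zip(minutes, scores)) / \
        sum((x - mx) ** 2 for x in minutes)
intercept = my - slope * mx

# The "learned" pattern is just two numbers inferred from examples.
print(f"score is roughly {slope:.2f} * minutes + {intercept:.2f}")      # 0.78 * minutes + 47.00
print(f"predicted score for 60 minutes: {slope * 60 + intercept:.0f}")  # about 94
```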
Natural Interaction
AI requires many kinds of knowledge to interact naturally with humans. AI must be able to converse in human languages, recognize facial expressions and emotions, and draw on knowledge of culture and social conventions to infer intentions from observed behavior. All of these are difficult problems. Today’s AI systems can use language to a limited extent but lack the general reasoning and conversational capabilities to critically evaluate outputs to identify bias, misinformation, and inaccuracies.
Societal Impact
AI can impact society in both positive and negative ways. AI technologies are changing the ways we work, travel, communicate, and care for each other. But we must be mindful of the harms that can occur. For example, biases in the data used to train an AI system could lead to some people being less well served than others. Thus, it is important to discuss the impacts that AI is having on our society and develop criteria for the ethical design and deployment of AI-based systems.
Guiding Questions for LEAs
- What professional learning opportunities will provide educators and other staff the tools to integrate AI literacy into their practice?
- What ongoing supports can be provided to educators to support planning to meaningfully integrate AI literacy into all content areas?
- How will students be guided to engage in authentic tasks learning with and about AI, evaluating when to use and not to use AI, and assessing AI outputs, to prepare them for success in a world rapidly adopting AI use?
Equitable Access to AI in Teaching and Learning
AI technologies, tools, and applications can enhance many aspects of equitable teaching and learning when applied thoughtfully. Educators can use AI to adapt lessons, model prompt design, and engage in peer-led professional learning that strengthens practice while prioritizing equity.
The California Department of Education (CDE) encourages educators to approach AI with an equity lens. As students and teachers explore potential inaccuracies, bias, and social impacts of AI, they develop skills to promote responsible technology use. Schools should ensure equitable access, provide AI literacy training, and establish guardrails that reinforce sound pedagogy. California’s vision for AI’s use in transitional kindergarten through grade 12 (TK–12) aligns with the State Superintendent of Public Instruction’s initiatives to build safe, healthy learning environments, accelerate achievement for all learners, and prepare students for meaningful futures. While implementation may vary locally, equity and ethics must remain central.
A clear leadership vision at both district and site levels is essential to ensure accountability for teaching and learning about and with AI. By articulating a shared vision, leaders set expectations for responsible implementation, support professional learning, and promote policies that ensure equitable access for all students.
Special Education and Accessibility
AI tools are increasingly used to differentiate instruction, reduce repetitive paperwork, and provide assistive supports that enhance accessibility and equity. Speech-to-text applications help students with learning differences or physical disabilities gain independence, while AI-powered captioning, transcription, and language translation tools remove barriers for multilingual learners and individuals with hearing or visual impairments. These tools, when aligned with the neuroscience-backed principles of Universal Design for Learning (UDL), support inclusive classrooms where all students can engage meaningfully and develop as expert learners.
Beyond accessibility, AI can generate personalized materials—such as worksheets, quizzes, and reading assignments—tailored to students’ goals, strengths, and interests. AI analytics provide educators with insights into progress, helping identify areas that need additional support and enabling targeted interventions during small group instruction. When integrated thoughtfully, AI empowers educators and students to tailor learning experiences to the unique needs of each classroom while supporting culturally responsive pedagogy that connects content to students’ lives. Assistive technologies such as screen readers, voice recognition, and text-to-speech tools ensure every learner can fully participate, reinforcing human-centered, inclusive teaching practices.
Guiding Questions for LEAs
- How might AI tools be aligned with UDL and with accessible, assistive, and inclusive technology?
- How might guidelines for equitable student use and support be articulated?
- How will educators be trained to utilize and implement AI tools safely, effectively, and equitably?
- How will confidential student information be protected when using AI tools?
- How can parents/caregivers and community members be invited to use AI tools that support multilingual learners and families?
- For multilingual learners, how can the benefits of AI be leveraged beyond translation?
Digital Equity / Closing the Digital Divide
AI tools can expand opportunities for learners with disabilities, newcomers, and those in underresourced areas; however, inequities persist where reliable access to broadband and devices is lacking. Targeted investments in infrastructure, devices, and professional learning are essential so all districts, particularly in rural and underserved communities, can benefit equitably.
When implemented thoughtfully, AI can help bridge the digital divide by offering students opportunities to develop digital literacy and use technology responsibly. By centering educators and providing them with the knowledge and tools to apply AI effectively, schools ensure that technology enhances learning while maintaining human connection and pedagogical integrity.
Guiding Questions for LEAs
- What are the guidelines for equitable software adoption, including rationale and ongoing training?
- What federal, state, and local resources are available to increase equitable access (e.g., broadband, internet access)?
- What barriers to equitable access exist, and how can such barriers be eliminated?
- How is professional learning provided equitably for educators serving in various roles, and how is it differentiated?
Empower Learners to Design Systems to Solve Challenges
AI offers students opportunities to easily design, test, and iterate solutions to real-world challenges. These experiences build computational and design thinking skills while encouraging collaboration and ethical reflection.
When students explore AI systems through hands-on learning, they learn how algorithms work, identify potential bias, and understand the impact of data on outcomes. Applying AI concepts in project-based learning allows them to design tools that address authentic school or community issues while strengthening relevance and purpose.
AI learning should be inclusive and developmentally appropriate, ensuring all students can participate meaningfully regardless of background or prior experience. By creating pathways that include design, reflection, and iteration, educators empower students not only to use AI but to shape its future responsibly.
Guiding Questions for LEAs
- How might input from diverse students and educational partners be invited to shape local AI policy?
- What local challenges might benefit from AI solutions?
- How can AI resources and training opportunities be tailored to meet students, educators, and school community members where they are?
Expand Access to STEM Pathways and Careers
Opportunities for students to engage deeply with AI in K–12 schools can serve as a powerful means to address systemic bias and expand access for traditionally marginalized groups in computer science and science, technology, engineering, and mathematics (STEM) fields. Integrating AI education with a focus on diversity and inclusion helps pave the way for a more equitable future in these disciplines. A lack of representation in these fields has long been a concern. For example, a 2021 Pew Research Center report highlighted that LatinX and Black workers are underrepresented in STEM, while White and Asian workers are overrepresented (Fry, Kennedy, and Funk 2021). Women hold only about a quarter or fewer of all computer and engineering jobs. Addressing these gaps requires early intervention in the educational pipeline, making K–12 schools an ideal setting for nurturing diversity, inclusion, and equal access.
Computer science often serves as a gateway to STEM at the postsecondary level, particularly for underrepresented populations. College Board data from 2019 indicates that the Advanced Placement (AP) Computer Science Principles course was a first AP course for 68 percent of Black students, 59 percent of LatinX students, and 60 percent of first-generation college students. Students who take this course are more likely to pursue a STEM major in college, with the effect particularly pronounced for LatinX students and women.
While computer science provides a strong foundation for AI education, AI literacy/fluency can and should extend beyond computer science classrooms. Interdisciplinary opportunities, such as exploring algorithmic bias in social studies, data ethics in mathematics, or AI-generated art in language arts, can engage a broader range of students and help reduce barriers to participation. By embedding AI learning across subjects, schools can demystify technology and make these fields more inclusive and relevant for every learner.
Guiding Questions for LEAs
- How might learning about AI be designed to emphasize ethics, racial literacy, disparities, and the environmental/societal impacts of AI?
- How can opportunities be created to equitably invite students from all backgrounds to participate in computer science and other STEM fields?
- How can expanded learning programs, after-school activities, or similar opportunities be leveraged to increase access to learning about AI and learning with AI?
Academic Integrity and Responsible Use
AI is not a perfect tool, and it likely never will be, but it can be an extremely useful one when used appropriately. Because of this, it is essential that educators and students balance its use with authentic human interaction. Schools are increasingly adopting responsible use policies that define acceptable applications of AI, promote integrity beyond concerns about plagiarism, and support students with instructional strategies rather than relying solely on detection tools.
Responsible use of AI includes training students to evaluate AI outputs with a critical eye to identify bias and inaccuracies. A rubric such as the one below can be utilized to support student evaluation of AI outputs.
Accuracy — “Is this true?”
| 4 – Very accurate | 3 – Mostly accurate | 2 – Questionable | 1 – Unreliable | Consider: |
|---|---|---|---|---|
| Facts are correct and match reliable sources. No mistakes I can find. | Mostly correct, but I found a few small errors or unclear parts. | Some facts don’t sound right or need checking. | A lot seems wrong, made up, or outdated. | Did I verify key facts with a trusted source (books, websites, teachers)? Does the AI explain where its info came from? |
Relevance — “Does it actually answer what I asked?”
| 4 – Totally relevant | 3 – Mostly relevant | 2 – Somewhat relevant | 1 – Off topic | Consider: |
|---|---|---|---|---|
| It clearly answers my question or solves my problem. | It’s helpful but slightly off topic or too general. | It kind of connects, but misses big parts of my question. | It doesn’t answer my question or connect to what I asked. | Does this help me meet my assignment goals? Did I ask a clear question or do I need to reword my prompt? |
Clarity — “Is it easy to understand?”
| 4 – Very clear | 3 – Mostly clear | 2 – Hard to follow | 1 – Confusing | Consider: |
|---|---|---|---|---|
| Ideas are easy to follow and make sense. | Understandable, but a few parts are confusing. | Some sentences or ideas don’t connect well. | I can’t tell what it’s saying. | Could I explain this to someone else in my own words? Are any words or phrases too advanced or vague? |
Fairness — “Does it show bias or leave things out?”
| 4 – Fair and balanced | 3 – Mostly fair | 2 – Some bias | 1 – Unfair | Consider: |
|---|---|---|---|---|
| Shows multiple perspectives and feels respectful. | Generally balanced, but I notice some bias. | Seems one-sided or leaves out important views. | Clearly biased, offensive, or inaccurate about a group or idea. | What might be missing? |
Equitable, codeveloped policies that emphasize human connection help ensure AI enhances learning while safeguarding honesty, fairness, and relationships. Educators and students should collaborate to define ethical use, promoting agency and shared responsibility in leveraging AI effectively while maintaining academic integrity.
Academic Integrity in the Age of AI
Academic integrity extends beyond traditional plagiarism, which is defined as presenting another person’s work without proper citation. The unauthorized use of AI tools, while not always technically considered plagiarism, can still violate academic integrity by undermining honesty, fairness, and the intended learning goals of transitional kindergarten through grade 12 (TK–12) classrooms. Clear distinctions between plagiarism and broader integrity violations are essential for establishing fair policies and consistent grading practices. By aligning with the State Superintendent of Public Instruction’s goal of healthy, safe learning environments, these policies safeguard trust in classrooms and promote equitable treatment of students across diverse contexts.
The age of AI links media literacy and digital citizenship as overlapping dimensions of modern learning. Media literacy emphasizes evaluating sources and recognizing misinformation, while AI fluency expands this by exposing the algorithms that influence what information appears. Connecting source-evaluation lessons with activities that examine recommender systems and generative models helps students ask not only “What is true?” but also “Why am I seeing this?” strengthening their discernment in digital spaces.
Digital citizenship in the age of AI includes lessons on privacy, safety, and respectful online interaction, expanded to cover data footprints, consent, and the implications of sharing content, including student work, with AI systems. A growing number of no-cost digital citizenship and AI literacy lessons for varied grade levels are available from the nonprofit Common Sense Media.
Cross-disciplinary teaching can deepen connections to AI. For example, lessons within English language arts and media literacy might ask students to critique an AI-generated article for accuracy of facts, bias, evidence use, and authenticity of voice. Activities like these reinforce literacy skills while building students’ ability to analyze AI-generated media. When media literacy, digital citizenship, and AI literacy are integrated, students become informed consumers of information and empowered participants in shaping the digital world.
Guiding Questions for LEAs
- How will teachers, students, and families be supported in understanding that AI is a fallible, predictive tool that can enhance learning when used thoughtfully?
- What strategies will be implemented to build AI literacy among students and educators, helping them understand both the capabilities and the limitations of AI while promoting ethical use?
- How will the school/district verify that students and educators have the requisite skills and knowledge to responsibly use AI prior to gaining access to AI tools?
- How can policies and classroom practices be codeveloped with students and educators to ensure AI supports learning outcomes, critical thinking, and creativity while maintaining trust and equity?
Responsible Use Rubrics
Students encounter digital tools and platforms from a young age, making it essential to provide ethical use guidelines that promote responsible, respectful, and safe online behavior. Local educational agencies, schools, and classrooms across the state are developing rubrics and guidelines that define acceptable versus unacceptable uses of AI, moving away from blanket prohibitions. Instead of relying on AI detection tools, which may be unreliable or biased, schools can prioritize instructional strategies that reduce incentives for academic dishonesty. Educators should clearly communicate which uses of AI are permitted and explain the rationale for these decisions. Assignments can include indicators, such as a “Drafted with AI” badge, to signal where AI contributed, supporting transparency and honesty. Rubrics clarify when AI may support learning, for example, during brainstorming or revision, and when it must not be used to produce a final product.
Consider the following sample rubric.
| 1: No AI Assistance | 2: AI Idea Organization | 3: AI Supported Drafting | 4: AI Infused Creation | 5: AI as Co-creator |
|---|---|---|---|---|
| Students complete their work entirely on their own, without using any AI tools. They rely solely on their own knowledge and abilities. | Students may use AI tools to help sort, organize, or clarify their early thinking. AI may provide prompts, examples, or ways of grouping ideas or initial responses. However, they are required to produce the final work themselves without direct AI input and must cite any AI support used. | Students use AI to draft initial content. They then significantly revise and refine that content themselves. There must be clear separation between what the AI contributed and what the student added. | Students may include AI-generated elements in their work, but they must critically review and edit those contributions. Use of AI must be transparent, and proper attribution given. | Students work in partnership with AI, using it intensively as a collaborator. Students provide a rationale for AI use, ensure that their own original thinking remains central, and maintain academic integrity through clearly citing AI involvement. |
Educators model transparency by using AI tools in front of students and disclosing their own AI use for instructional tasks.
Consistent language across districts and school sites supports coherence while allowing teachers to tailor AI use according to assignment goals. Decisions can be documented in graphic organizers that indicate acceptable uses and disclosure expectations. Multiple versions of these organizers allow adaptation for different subjects, grade levels, and teacher preferences.
Student involvement remains critical. Beginning-of-year conversations can collaboratively establish class “norms” or agreements for AI use. These documents serve as living resources, revisited as new situations arise, keeping both students and educators engaged in ongoing learning about ethical AI use.
The following steps may be helpful in developing ethical use policies for AI use with students.
- Provide students with educational resources and discussions on digital ethics, covering topics like online privacy, cyberbullying, plagiarism, and responsible sharing.
- Facilitate brainstorming sessions or focus groups with students to collect their thoughts, concerns, and ideas regarding online behavior.
- Work with students to draft the ethical use guidelines collaboratively. Encourage them to express their views and concerns and guide them in turning those ideas into actionable rules. Each school can draft acceptable norms, and then each class can help generate more specific classroom norms regarding proper versus improper use of AI.
- Integrate real-world scenarios and case studies into discussions to help students apply ethical principles to practical situations.
- Encourage students to review and provide feedback on the drafted guidelines. Peer review fosters a sense of ownership and accountability.
- Launch the guidelines formally, communicate them to all educational partners, and provide training or workshops to help students understand and embrace them.
- Establish a process for continuous review and updates to keep the guidelines relevant and responsive to evolving digital challenges.
By integrating common language, student voice, teacher flexibility, and transparent practices, schools can establish a responsible AI use framework that enhances learning while upholding respect, responsibility, safety, and academic integrity.
Guiding Questions for LEAs
- How will students and educators clearly understand the contexts in which AI can be used without violating the acceptable use policy?
- How can transparency be defined and communicated, including citing the level of AI involvement in student work?
- How can students be provided opportunities to explain, justify, or reflect on their AI use when completing assignments?
- How are ethical AI use concepts explicitly connected to meaningful student learning experiences, including critical thinking and creativity?
- What processes or practices will educators follow to address suspected cases of AI misuse or academic dishonesty while maintaining fairness, trust, and learning growth?
- How can classroom and school guidelines remain flexible and responsive, allowing norms and agreements to evolve as students and teachers encounter new AI tools or situations?
Unacceptable Uses of AI in Academic Settings
Unacceptable uses are also often reflected in acceptable use policies. To maintain a safe and equitable learning environment, AI guidance should reflect unacceptable uses of AI tools such as bullying, generation of deepfakes, and failure to transparently report the use of AI to generate content.
| Unacceptable Use | Example |
|---|---|
| Submitting AI-generated work as your own | Using AI to write essays, solve problems, produce code, or create projects without disclosure or required citation is considered plagiarism, academic dishonesty, or misrepresentation of authorship |
| Using AI during tests or exams when not allowed | Examples include asking AI to solve test questions, using AI-enabled tools (phones, hidden devices) during in-person exams, generating answers during take-home exams when the instructions prohibit it |
| Fabricating citations, sources, or references | Some AI tools can produce fake books or articles, incorrect page numbers, nonexistent authors |
| Using AI to impersonate others | Writing emails pretending to be someone else, generating deepfake audio/video of classmates or teachers, falsifying application essays or recommendation letters |
| Using AI to harass, bully, or harm others | Misuse includes generating harmful messages, creating offensive images of classmates, educators, or others, doxing or spreading false information with AI tools |
Guiding Questions for LEAs
- How can the concept of academic integrity be expanded beyond plagiarism to include responsible behavioral use of AI, ensuring honesty, fairness, and authentic engagement in learning?
- How will ethical AI use be clearly, equitably, and actionably defined to include distinctions between acceptable assistance, misuse, and plagiarism?
- How can violations of the acceptable use policy be opportunities for learning and restorative justice?
- What approach will be used to discuss digital ethics, covering topics like online privacy, cyberbullying, plagiarism, and responsible sharing?
Data Privacy, Security, and Procurement
The responsible use of AI in California’s schools calls for alignment with student-centered values, robust privacy safeguards, and the broader educational mission of equity and trust in public education. While AI offers significant opportunities to personalize learning, improve accessibility, and streamline instruction, its adoption must be grounded in transparency, ethical use, and protection of student data and intellectual property. LEAs play a central role in reviewing and approving AI tools that meet security standards, ensuring educators remain at the heart of the learning process, and helping students and families understand how data is collected, used, and shared in all educational contexts, including AI. Protecting and securing the integrity of student data must be the integral foundation on which all decisions about using AI are made. By integrating AI in ways that reinforce California’s commitment to safe, inclusive, and equitable learning environments, LEAs can strengthen—not replace—the essential role of teachers while maintaining the public’s confidence in digital learning systems. The following outlines key principles and practical steps LEAs can take to ensure AI implementation remains policy aligned, transparent, and centered on student well-being.
Student-Centered, Policy-Aligned AI Use
Before adopting AI tools, LEAs must ensure alignment with local policies, California’s goals for safe, inclusive learning, and state/federal laws related to data privacy, data use, and student/parental rights. Personally identifiable information (PII) should only ever be entered into closed AI systems, since open AI systems lack the safeguards required by law and best practice to protect PII. Comprehensive, ongoing AI literacy training and resources should be offered to ensure that all members of the school community are transparently informed of their rights and responsibilities related to the use of AI tools.
Parent/Guardian and Eligible Student Rights and Student Records
AI tools collect and often store student-generated content (typed, spoken, or uploaded). This data is part of the student’s educational record and—in accordance with state and federal law—can be accessed and (if found to be inaccurate) corrected by parents, guardians, and eligible students (i.e., students over 18 and college students under 18). When using AI tools, any data entered by the student that is captured and/or used to generate an output is included in information that can be accessed/corrected by parents and eligible students.
Below is a summary of the laws that impact AI use, policy, and parental/student rights.
| Law | Description | Time-Sensitive Compliance Requirements |
|---|---|---|
| California Consumer Privacy Act (CCPA) | This law and its amendments and rules include the right to know what data is being collected to train AI models and how data is being used/shared. The CCPA also includes the right to delete personal information/opt out of the sale or sharing of personal information, the right to nondiscrimination, the right to correct inaccurate data, and the right to limit disclosure of personal information. CCPA prohibits the use of dark patterns (interface features that subvert/impair consumer’s autonomy, decision-making, or choice), use of automated decision-making technologies (ADMT) without prenotification, and physical/biological profiling (e.g., analysis of facial expressions or gestures to infer emotional state) to inform significant decisions (e.g., college admission, hiring, loan approval). CCPA further prohibits generation of deepfakes (i.e., manipulation of real images/audio to create fake content that is presented as authentic) and sharing the data of children (under 16) without parental consent. The CCPA can levy fines against companies found to be in violation of the law. | Requests for access to personal data held by any business must be acknowledged within 15 days, and access to personal data must be granted within 45 days. Businesses can request a 45-day extension to comply with a request for access. Consumers who opt out of personal data collection cannot be asked to opt back in for at least one year. |
| Student Online Personal Information Protection Act (SOPIPA) | SOPIPA prohibits both use of student data for creating a profile of the student and use of student data to market to the student and/or their parent/guardian. These prohibitions extend to AI. | Because SOPIPA asserts that student data remains the property of the LEA, the eligible student, and their parent/guardian, LEAs are responsible for production of records in accordance with FERPA (i.e., parents/guardians and eligible students must be granted access to education records within 45 days of requesting such access). |
| California Education Code Section 49073.1 (AB 1584) | Passed as a companion to SOPIPA, AB 1584 requires contracts containing student data to include specific provisions to protect student data. Contracts without such provisions are null and void in California. In part, the law requires that vendors attest that data can only be collected and used for specific purposes outlined in the contract and requires vendors to affirm that data belongs to the student, their parent/guardian, and/or the school/district and cannot be retained by the vendor. | As with SOPIPA, Education Code Section 49073.1 asserts that student data remains the property of the LEA, the eligible student, and their parent/guardian; LEAs are responsible for production of records in accordance with FERPA (i.e., parents/guardians and eligible students must be granted access to education records within 45 days of requesting such access.) |
| California Business and Professions Code Section 22601 (Guardrails for Companion Chatbots) | In accordance with Section 22601 of the Business and Professions Code (Senate Bill 243), to align with California law and ensure the emotional and psychological safety of minors, all AI companion chatbot services procured or utilized by the LEA must adhere to the requirements of the law. The law establishes specific safeguards for any generative AI system designed to simulate a human-like relationship with a user who is a minor. The law requires chatbot operators to implement crisis protocols, provide clear disclosures reminding the user that they are interacting with AI and not a human, and provide content filtering; it also prohibits a chatbot from representing itself as a health care professional. Further, the law allows families to pursue legal action if the safeguards are violated. | Beginning January 1, 2026, any LEA using chatbots must prepare and submit an annual compliance report to the California Department of Public Health. The first annual report is due no later than July 1, 2027. |
| Family Educational Rights and Privacy Act (FERPA) | FERPA prohibits both the release and rerelease of personally identifiable student data unless very nuanced, legally complex conditions are met. This is especially pertinent when using open AI tools since data entered into open AI systems is reshared to train the AI model. This resharing of data is a violation of FERPA, meaning that personally identifiable student information should never be shared outside of a closed system without parental/eligible student consent. All student information used in open AI systems should be anonymized to prevent a violation of FERPA. Further, FERPA grants parents/guardians and eligible students (those over 18 or attending postsecondary institutions) rights to inspect (within 45 days of requesting) and amend records that are found to be inaccurate. FERPA also allows parents/guardians and eligible students to request a hearing to correct records. Schools/districts can be fined by the US Department of Education for noncompliance. | FERPA requires that parents/guardians and eligible students be allowed access to inspect and amend education records within 45 days of submitting such a request. |
| Children’s Online Privacy Protection Act (COPPA) | COPPA limits data collection from children under 13 and requires apps, social media companies, and others to obtain parental consent prior to collection of personal information from children under 13. | COPPA requires that data be retained only long enough to provide the service that has been requested/consented by the parent/guardian. For educational purposes, LEAs are allowed to provide consent for students to use a tool/service in an educational setting. In this instance, any data collected or records maintained are education records under FERPA and—while they should be deleted once the service is no longer used in the educational context—access to inspect/amend must be granted to parents/guardians and eligible students within 45 days. |
| Protection of Pupil Rights Amendment (PPRA) | The PPRA requires parental consent prior to the administration of surveys that collect student information on political affiliations/beliefs, mental or psychological health, sex behavior/attitudes, illegal/self-incriminating or demeaning behavior, critical appraisals of close family relations, legally recognized privileged information, and family income not related to eligibility for participation in a program. Because AI tools can be used to infer affiliations and beliefs, surveys in a school context should employ only closed AI tools if using AI at all. | FERPA protections and access requirements (i.e., parents/guardians and eligible students must be granted access within 45 days of request) apply to any student-level information collected through surveys administered by an LEA. When surveys contain topics articulated in the PPRA, LEAs must receive explicit parental consent prior to the administration of such surveys. |
Local Educational Agency Action Steps
In addition to reviewing platforms and tech tools for AI features that would violate state and federal law, it is also important to practice data minimization (i.e., collecting and managing only student data that is essential to providing educational services). LEAs are also encouraged to establish data governance (i.e., formalized processes to document, protect, and manage data) and ensure transparency. This includes an annually required FERPA notification to parents regarding what data the LEA considers “directory information” (i.e., information generally shared with the school community through publications like a yearbook) so that parents can opt out of the public sharing of directory information if they choose to do so. Processes for evaluating and monitoring compliance are also important to maintain good standing with existing laws/best practices and new laws/best practices in an evolving AI landscape.
To ensure legal compliance and student safety, school IT support is encouraged to block unapproved features, enforce secure data transmission, and scan for vulnerabilities. LEAs are also encouraged to institute processes and systems for regularly auditing, updating, and testing policies and breach readiness. Publishing audit findings is recommended to bolster community confidence. To foster a culture of privacy and digital literacy, LEAs can supplement regular training sessions and resources by considering offline or locally hosted platforms for sensitive data and by providing regular opportunities for all members of the school community to share feedback on AI use.
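To make the ideas of data minimization and anonymization concrete, the sketch below shows one possible approach: stripping recognizable personally identifiable information from a prompt before it is sent to any AI tool outside a closed, district-approved system. This is a minimal, hypothetical illustration only; the function name (`redact_pii`) and patterns are invented for this example, simple pattern matching will miss many forms of PII, and LEAs should rely on vetted, district-approved tools and legal review rather than this sketch.

```python
# Hypothetical sketch: remove obvious PII from text before it leaves a closed,
# district-approved environment. Not a vetted or approved redaction tool.
import re

# Illustrative patterns only; real PII (names, addresses, free-text details)
# cannot be reliably caught with simple regular expressions.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "STUDENT_ID": re.compile(r"\b\d{6,10}\b"),  # assumes a numeric local ID format
}


def redact_pii(text: str) -> str:
    """Replace recognizable PII with placeholder tokens."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text


if __name__ == "__main__":
    draft_prompt = (
        "Write feedback for student 20481377. "
        "Contact the family at parent@example.com or 916-555-0123."
    )
    print(redact_pii(draft_prompt))
    # Prints: Write feedback for student [STUDENT_ID REDACTED]. Contact the
    # family at [EMAIL REDACTED] or [PHONE REDACTED].
```

Even with a filter like this in place, the safer default remains the one described above: keep PII inside closed systems and enter only anonymized, minimized data into any open AI tool.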
Procurement and Governance
With the rapid evolution of AI, LEAs are increasingly challenged to evaluate how emerging—and often untested—AI tools can best support students, educators, and school operations. As with any district purchase, adopting AI tools should follow established procurement practices that ensure transparency, feasibility, and alignment with educational goals.
Because AI tools vary widely in purpose, design, and risk, the selection process often involves multiple stakeholders across instructional, technical, legal, and administrative teams. This checklist is intended to serve as a practical guide to help LEAs thoughtfully assess and select AI tools that promote safe, effective, and ethical use of AI within their educational communities.
Guiding Questions for LEAs
- Instructional & Operational Alignment
- Does the tool clearly support curriculum goals, instructional practices, or operational needs?
- Does the tool align with California’s curriculum standards and frameworks?
- Is there evidence that the tool improves learning outcomes or educator efficiency?
- Does the tool align with district policies on AI use, academic integrity, and responsible technology use?
- Has input been gathered from relevant instructional staff, curriculum leads, or program directors?
- Functional Evaluation
- What AI capabilities does the tool offer (e.g., generative AI, predictive analytics, adaptive learning)?
- Are the AI functions transparent and understandable to educators and students?
- Does the tool allow educator oversight and the ability to edit, override, or turn off AI-generated content?
- Are limitations, error risks, and intended use cases clearly documented by the vendor?
- Data Privacy & Security
- Does the tool comply with federal and state privacy laws (FERPA, COPPA, state student data privacy regulations)?
- What student or staff data is collected, stored, or inferred?
- Does the vendor use personal data to train or improve its AI models?
- Are data retention, deletion, and access policies clearly defined?
- Has the district reviewed the vendor’s security practices, encryption standards, and breach response procedures?
- Is a signed Data Privacy Agreement (DPA) required and available?
- Do AI chatbots that the LEA procures or develops meet the compliance requirements of SB 243?
- Equity, Accessibility & Ethical Use
- Has the vendor described how the tool mitigates bias in its AI outputs or models?
- Is the tool accessible for all users, including those with disabilities and English language learners?
- Does the tool promote ethical use and avoid reinforcing harmful stereotypes or inequities?
- Are accommodations available for students who cannot or should not use AI tools?
- Transparency & Explainability
- Does the vendor provide clear explanations of how the AI works and how decisions/recommendations are generated?
- Are there mechanisms to audit, review, or trace AI outputs for accuracy and fairness?
- Are users notified when they are interacting with AI?
- Implementation & Integration
- Is the tool compatible with existing district systems (learning management systems, student information systems, device platforms)?
- Does it require additional infrastructure, IT support, or network bandwidth?
- Are training materials, onboarding resources, and professional learning available?
- What ongoing maintenance or updates will be required?
- Vendor Reliability & Support
- Does the vendor have a proven track record in education or comparable sectors?
- Are service-level agreements (SLAs) available and reasonable?
- Is technical support responsive and accessible?
- Does the vendor provide documentation on AI model updates or changes that may affect functionality?
- Risk Assessment
- Are there potential risks of misinformation, hallucinations, or inappropriate content?
- Does the tool include safeguards such as content filtering, monitoring, or educator controls?
- Has the district evaluated potential legal, ethical, or reputational risks?
- Cost, Licensing & Sustainability
- What is the total cost of ownership (license, support, training, implementation)?
- Is the pricing model sustainable and aligned with district budgets?
- Are there hidden costs (data storage, API usage, premium features)?
- Does the contract allow for pilot programs or early termination if the tool underperforms?
- Procured AI tools warrant frequent evaluation to determine sustainable use. Are there opportunities for departments, divisions, and executive leadership to provide feedback and review tools?
- Pilot & Evaluation Plan
- Has the district established success metrics and evaluation criteria for a pilot?
- Who will monitor the pilot and gather educator/student feedback?
- Does the tool include analytics or reports that support ongoing evaluation?
- Is there a clear decision pathway for scaling, modifying, or discontinuing use?
Appendix
Glossary
Artificial Intelligence (AI)
An engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer, from the input it receives, how to generate outputs that can influence physical or virtual environments. (SB 1288 [Becker, 2024])
AI Literacy
The knowledge, skills, and attitudes associated with how artificial intelligence works, including its principles, concepts, and applications, as well as how to use artificial intelligence, including its limitations, implications, and ethical considerations. Defined in AB 2876 (Berman, 2024), AI literacy encompasses both learning about AI (understanding how it functions and affects society) and learning with AI (using tools effectively, ethically, and creatively).
Algorithm
A step-by-step set of instructions or rules used by computers to process data and solve problems. Algorithms are the foundation of AI systems and determine how information is analyzed and outputs are generated.
Bias (Algorithmic Bias)
Systematic and unfair outcomes that occur when AI models are trained on incomplete, unbalanced, or prejudiced data. Bias can perpetuate stereotypes or inequities if not intentionally mitigated through inclusive design and human oversight.
Data Privacy
The right of individuals to control how their personal information is collected, stored, shared, and used. In education, data privacy ensures that student and educator information is handled ethically and in compliance with laws such as FERPA and COPPA.
Digital Citizenship
A diverse set of skills related to current technology and social media, including the norms of appropriate, responsible, and healthy behavior. Digital citizenship promotes respectful communication, privacy protection, and ethical participation in online spaces.
Digital Divide
The gap between individuals and communities that have access to modern information and communication technology and those that do not. Bridging this divide is vital to ensuring equitable AI literacy and access to learning tools.
Educator
A certificated or classified employee of a local educational agency or charter school.
Ethical AI
The development and use of AI technologies guided by fairness, transparency, accountability, and respect for human rights. Ethical AI in education prioritizes safety, equity, and the preservation of human connection.
Generative AI
A type of artificial intelligence that creates new content such as text, images, audio, or video, based on patterns learned from large datasets. Examples include chatbots, image generators, and writing assistants.
Graphics Processing Unit (GPU)
A computer chip that excels at performing many calculations at the same time, making it ideal for graphics and AI tasks.
Human-Centered AI
An approach to AI design and use that places human values, relationships, and judgment at the center of technological integration. In education, human-centered AI supports, rather than replaces, the role of teachers and human connection in learning.
Large Language Model (LLM)
A neural network trained on massive amounts of text data to learn patterns in language.
Local Educational Agency (LEA)
Local educational agency means a school district, charter school, or county office of education.
Machine Learning
A subset of AI that enables computer systems to improve performance over time by analyzing data and identifying patterns without explicit programming for every outcome.
Media Literacy
The ability to access, analyze, evaluate, and use media and information; encompasses the foundational skills that lead to digital citizenship. Media literacy helps students evaluate content, understand how media messages are constructed, and recognize how algorithms shape what information they see online.
Open/Closed AI Systems
Open AI systems allow users to input prompts and receive responses that are generated from broad, publicly trained models. These systems are typically accessed through cloud-based platforms, respond to a wide variety of questions, and are not limited to a specific dataset or task. They offer flexibility and creativity but may produce inaccurate or inappropriate outputs if not used with oversight. Examples include general-purpose generative AI tools that anyone can interact with.
Closed AI systems are trained on restricted or curated datasets and are designed for a specific purpose within a defined environment—such as assessment tools, learning management systems, or district-approved applications. They include guardrails and content controls to limit risk, operate within known parameters, and may store data locally or in protected ecosystems. These systems offer greater consistency and privacy but are less flexible for open-ended use.
Universal Design for Learning (UDL)
A framework for designing instruction that meets the diverse needs of all learners by providing multiple means of engagement, representation, and expression. UDL aligns closely with AI’s potential to personalize and enhance accessibility.
Acknowledgments
The California Public Schools: Artificial Intelligence Working Group, established under Senate Bill 1288 (2024), brought together educators, students, experts, and educational partners to develop this guidance for the safe, equitable, and effective use of artificial intelligence in schools.
Convened by the California Department of Education, the California Public Schools: Artificial Intelligence Working Group included educators, students, administrators, classified staff, higher education representatives, and industry experts. Members met publicly to develop statewide guidance.
Thank you to California Public Schools: Artificial Intelligence Working Group members.
| Name | Organization(s) |
|---|---|
| Tina R. Austin | University of California Los Angeles, University of Southern California |
| Dr. Robert (Bob) Bauer | Portola Valley School District |
| Jesse Braun | Beverly Hills Unified School District |
| Juliano Calvo | Glendale Unified School District |
| Jayson Chang | California Teachers Association |
| Merek Chang | Hacienda La Puente Unified School District |
| Angela Chavez | Computer Science Virtual Academy/Los Angeles Unified School District |
| Brittany Conrad | Corona-Norco Unified School District |
| Tarquinn Curry | Long Beach Unified School District |
| Gabby DeVilla | California Federation of Teachers |
| Erin Earnshaw | Folsom Cordova Unified School District |
| Dr. Jennifer Elemen | 21CSLA State Center, UC Berkeley School of Education, Leadership Programs |
| Dr. Todd Farley | KIPP NorCal |
| Elisa Frias | Antelope Valley Union High School District |
| Catherine Gilbert | Willow Creek Elementary School |
| Jody Green | La Habra City School District |
| Airic Guerrero | Lower Lake High School |
| Dr. Helen Heinrich | California State University Northridge |
| Laura Hinton | Kapor Foundation |
| Matt Johnson | Whitney High School, ABC Unified School District |
| Kevin Kiyoi | Amador Valley High School, Pleasanton Unified School District |
| Mary Lang | Center for Leadership, Equity, and Research (CLEAR) |
| Bria Larson | Del Norte High School |
| Samhita Laxman | Henry M. Gunn High School |
| Patricia Love | San Mateo County Board of Education |
| Tami Lundberg | Fresno Unified School District |
| Jose Maldonado | Delano Union School District |
| David Malone | Alameda County Office of Education |
| Joe Marquez | Clovis Unified School District |
| Amanda McCraw | Panoche Elementary School |
| Lisa Moe | Corona-Norco Unified School District |
| Nicole Naditz | San Juan Unified School District |
| Kevin Palkki | San Bernardino Community College District |
| Dr. Sonal Patel | San Bernardino County Superintendent of Schools |
| Christian Pinedo | aiEDU |
| Nathen Ramezane | Santa Clara Unified School District |
| Dr. Brandee Ramirez | Saddleback Valley Unified School District |
| Zahra Razi | Fontana Unified School District |
| Jen Roberts | Point Loma High School, San Diego Unified School District |
| Marcella Rodriguez | Tad Health Inc. |
| Daphne Russell | Rialto Unified School District |
| Daniel Ryan | California Federation of Teachers |
| Alan Sitomer | Mastery Coding |
| Chris Swanson | El Monte Middle School |
| Efrain Tovar | Selma Unified School District |
| Apolonio Valdovinos | West Contra Costa Unified School District |
| Erinn VanderMeer | Petaluma City Schools |
| Jecenia Vera | Anaheim Union High School District |
| Dan Whitlock | Navigator Schools |
| Rebecca Yang | Orange County School of the Arts |
References
AI for K–12 (AI4K12). 2025. https://ai4k12.org. (AI4K12 is a joint project of the Association for the Advancement of Artificial Intelligence [AAAI] and the Computer Science Teachers Association [CSTA], funded by National Science Foundation Award DRL-1846073.)
California Department of Education. 2015. Quality Professional Learning Standards. Sacramento: California Department of Education. https://www.cde.ca.gov/ci/pl/qpls.asp.
California Department of Education. 2021. Computer Science Standards for California Public Schools: Kindergarten Through Grade 12. Sacramento: California Department of Education. https://www.cde.ca.gov/be/st/ss/computerscicontentstds.asp.
CAST. 2018. CAST UDL Guidelines. https://udlguidelines.cast.org.
Common Sense Media. 2025. https://www.commonsense.org.
Computer Science Teachers Association (CSTA) and AI4K12. 2025. AI Learning Priorities for All K–12 Students. New York, NY: Computer Science Teachers Association. https://csteachers.org/ai-priorities.
Fry, Rick, Brian Kennedy, and Cary Funk. 2021. STEM Jobs See Uneven Progress in Increasing Gender, Racial and Ethnic Diversity. Pew Research Center.
K–12 Computer Science Framework. 2016. https://k12cs.org.
Mishra, Punya, and Matthew J. Koehler. 2006. “Technological Pedagogical Content Knowledge: A Framework for Teacher Knowledge.” Teachers College Record 108 (6): 1017–1054. https://doi.org/10.1111/j.1467-9620.2006.00684.x.
TPACK.org. 2012. https://tpack.org.
World Economic Forum. 2025. Future of Jobs Report 2025. Cologny/Geneva: World Economic Forum.
Wyatt, Jeff, Jing Feng, and Maureen Ewing. 2020. AP Computer Science Principles and the STEM and Computer Science Pipelines. College Board.