# Deepfake AI Landing Page

WEBINAR: HOW DOES AI IMPACT THE WAY SCHOOLS MANAGE STUDENT PHOTOS? Watch Full Webinar [https://vidigami.com/deepfake-ai-landing-page/#vid]

GEN AI, STUDENT PHOTOS AND DEEPFAKES WATCH THE HIGHLIGHT VIDEO https://vimeo.com/1075484019/0b4dce99d8 Skip to a section [https://vidigami.com/deepfake-ai-landing-page/#units]

In this webinar, 9ine Consulting and Vidigami examine the evolving societal and regulatory landscape that is shaping how schools protect their trusted status, where adhering to well-defined policies and implementing programs is no longer optional, but essential. Over the past year, AI has advanced at an unprecedented pace, delivering both remarkable benefits and significant challenges. One area of particular concern is the management of visual media and the growing risks associated with publishing student images – especially deepfakes. The ease with which images can be exploited is prompting families to withhold consent for publishing photos online.

Join James Wigginton and Mandy Chan as they discuss: * the challenges schools face * reducing the risk * reducing exposure.

WATCH THE FULL WEBINAR https://youtu.be/B0JSuwOAHcY

Select a tab below to skip to that section.

* Introduction * KEY TAKEAWAYS * Introductions. * Agenda. VIDEO (02:50) SUMMARY This section introduces the experts on this webinar and discusses how schools can keep their information and photos safe, especially with new technology like AI. They discuss the importance of privacy, how to spot fake pictures or videos, and the rules schools should follow to protect students and families. Read Transcript James Wigginton My name is James Wigginton. I work with 9ine Consulting. I’m based in the UK, as you can probably tell by my accent, and I’ve worked at 9ine now for about five years. For those of you who don’t know who we are, we support schools globally—from Japan to North America and everywhere in between—typically with data privacy, cybersecurity issues, and now AI, because that’s a trending topic and there’s a lot of intersection between those things when we think about AI. Thank you to the Vidigami team for inviting me today; it’s much appreciated. I look forward to hearing all your questions and answering them as well. Mandy Chan Thanks so much, James. I’m Mandy Chan, and I’m the founder of Vidigami. I’ve spent my entire career in technology innovation. When I became a parent of school-aged kids, I became acutely aware of the privacy concerns related to social media and what today’s generation of students, families, and schools need. This led to the launch of Vidigami. What you’ll find unique about Vidigami is that we’ve always thought about the media management problem differently.
It’s always been more than a digital asset management system, because content organization needs to be purposeful: the value is derived when the community benefits. The challenge we have is that, unlike any other personal data, photos are meant to be shared, and they’re the building blocks of our stories and what connects people. Today, that means we have to address a lot of issues, and that’s complicated. That’s why we’re going to talk through many of them today. In addition to having James and 9ine Consulting join us, we have a very special guest participating during our Q&A. Josephine Yam I am Josephine Yam. I’m an AI lawyer and AI ethicist, and the CEO and co-founder of Skills for Good AI. We are a global platform for responsible AI literacy, because as we all know, not everything legal is ethical, and not everything ethical is legal. At Skills for Good AI, we provide AI literacy—not only how to use AI, but how to use AI for good. Mandy Chan Thank you. This is what we are going to focus on. Very quickly, we’re going to define what is real and what is not. Then we’re going to look at some practical techniques for how you can address deepfakes. We’re going to talk about policy and governance, which James will lead, and then programs and solutions, which is what Vidigami really handles, inside Vidigami. Then we’re going to wrap up with some Q&A.

Back to the Top [https://vidigami.com/deepfake-ai-landing-page/#units]

* AI Photos * KEY TAKEAWAYS * AI creates realistic fake media. * Deepfakes are AI-generated fakes. * AI simplifies fake image creation. * Deepfakes pose significant harm risks. * Criminals exploit deepfakes for threats. * Mitigate AI’s harmful applications. VIDEO (04:13) SUMMARY This section is a conversation about how AI can create fake photos and videos, called deepfakes: how easy it is now to make these fakes, the risks they bring—especially for schools and students—and the need to find ways to protect people from the harm deepfakes can cause. Read Transcript Mandy Chan So, AI and photos: real or fake. What we want to start with is just, this is really the question. You know, when you take a look at these four photos, only one of them is real; the other three are actually generated. So, what are deepfakes? Deepfakes are really the use of AI to create realistic but entirely fake representations of individuals. These can make it appear as though someone is saying or doing something they’re not really doing. And, you know, I think the concern we have is, the reality is that this has always been possible. If you were a Photoshop expert, you could have manipulated an image and it would be completely fake. But few of us are that good at it, so that in itself is a barrier, because most of us can’t use Photoshop that well. AI just removes that barrier, making it super easy for everybody to be able to do that. This exposes a risk for those who want to exploit it for malicious purposes. And I think that’s the concern, because otherwise, you know, artists, writers, and designers have historically done this under creative license, right? So I don’t want to discount the concerns we have with deepfakes. It’s always happened. It’s just that it’s so much easier to do now in a way that it wasn’t before. And, you know, in the past when you’ve done it, your recourse was based on publicity rights. You could ask a publisher to cease publishing, and you could sue for damages and compensation. It’s just, how do you manage that in today’s age?
James, from your point of view, are you seeing the risks go up? Is there a real concern with deepfakes? What do you think? James Wigginton Yeah, I think, you know, from our experience, we’re not looking to scare people, but the reality is the risk landscape has increased. Now, I think Mandy’s point was very apt. It’s just so easy to do now. And there are criminal organizations out there who will take advantage of this. So, there have been a couple of schools in the UK, for example, independent schools, where they’ve effectively had a version of a ransomware attack. What happened is these organizations have taken images from the website, and then they’ve run them through AI deepfake engines. They’ve created, unfortunately, sexually explicit content of those students. Then they’ve ransomed them back to the school, saying, “If you don’t pay this ransom, we are going to release these images,” which obviously raises huge concerns for those students and their well-being. So, what we need to be aware of is that although it brings huge benefits—absolutely, AI has massive benefits—there are risks, and we need to be readdressing and thinking about these things all the time. We absolutely understand what’s going to be advantageous, but do we have a lens on those risks? How are we trying to mitigate those? Are we thinking about new innovations? Because ultimately, these criminal organizations are innovating; they are using AI for these purposes. So, we also need to respond in kind. We have to think about better technical solutions, better training, to make sure that we can be aware of this as well. And then, many of you may have seen this before, outside of the worst case with criminal organizations. There are, of course, students doing this as well, okay? Sometimes it may be because it’s deemed as being funny. But there are examples—I’ve worked with many schools where students have created similar kinds of sexually explicit images of other students. Again, we need to be very conscious of the harm that can cause, because we do want to make sure that we can use images in brilliant ways and promote the school, but also that we’re not creating harm or that we’re considering how we could try to mitigate harm. So, I think it is a developing thing, but it’s something we should be thinking about, and we should start to try and plan how we can put risk mitigation around these kinds of things. That would be our advice right now.

Back to the Top [https://vidigami.com/deepfake-ai-landing-page/#units]

* Responsibility * KEY TAKEAWAYS * Photos risk student safety, privacy. * AI enables realistic fake images. * Laws strictly protect photo data. * Clear policies and education needed. * Collaboration essential for privacy protection. VIDEO (03:26) SUMMARY This section is a conversation about how to safely share student photos in a world where AI can easily create fake images (deepfakes): the importance of privacy, the laws that protect images, and the need for clear rules and education. There are new tools and policies, like robots.txt rules for AI crawlers, that help protect data from being used by AI. Some AI companies are refusing to make edits that could break privacy rules. The conversation ends by saying that while there’s no perfect answer, it’s important for everyone to work together to find solutions. Read Transcript Mandy Chan How do we balance sharing photos with constituents while protecting student safety and privacy in this age of AI and deepfakes?
Photos and images are a universal language; they’re the building blocks of our stories. AI is going to make it easier and easier to create increasingly sophisticated content for our stories. The complexity comes from our interaction with this data, because personal data, privacy rights, publicity rights, content ownership, and intellectual property rights are governed by laws that apply to images in ways that don’t apply as much to other kinds of data. The value of photos is realized when they’re shared. The challenge we have is to reduce the complexity of this problem. It starts with policies, then programs, and education along the way. We know that regulatory efforts are underway, but we need to do our part with policy and governance. This is already starting to happen—the social responsibility, the corporate responsibility part. I don’t know if everyone is aware, but a couple of years ago, AI-crawler rules for the robots.txt file were introduced. You can add them to your website to ask AI companies not to use your site’s data for training. It doesn’t mean they have to comply, but if you publish them, companies like OpenAI (ChatGPT) and Google (Bard) will respect that request.
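For reference, a minimal robots.txt along these lines might look like the sketch below. GPTBot and Google-Extended are the user-agent tokens OpenAI and Google document for their AI-training crawlers; the directives are a request to well-behaved crawlers, not a technical enforcement mechanism.

```
# robots.txt: ask AI-training crawlers to skip this site
User-agent: GPTBot            # OpenAI's training crawler
Disallow: /

User-agent: Google-Extended   # Google's AI-training opt-out token
Disallow: /

User-agent: *                 # ordinary search crawlers are unaffected
Allow: /
```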
What we’ve noticed this year compared to last year is that when you upload photos of kids into ChatGPT (GPT-4o)—just as we did here, uploading my son’s picture and asking it to replace a baseball player’s picture with the student’s face—it responded, “I can’t complete that request because it violates our content policies. If you have another idea or need help with a different image or edit, feel free to let me know.” I think these are examples of companies, organizations, and individuals taking the lead and doing the right thing. Have you seen examples of this elsewhere, James or Josephine? James Wigginton Yes, I have. It’s one of the big talking points at the moment. Especially with deepfakes and ransomware incidents involving schools, these issues are starting to ripple out. There’s a lot of exploration right now about how to mitigate these risks. As you said earlier, there’s no perfect solution, but it’s about looking for answers and understanding where decisions can be made and how we can adapt. That’s a great example of what can be done to help prevent these problems.

Back to the Top [https://vidigami.com/deepfake-ai-landing-page/#units]

* Risk & Exposure * KEY TAKEAWAYS * Limit exposure to reduce risk. * Restrict access to trusted users. * Use passwords and access controls. * Set time limits on availability. * Remove access after set period. VIDEO (01:31) SUMMARY This section explains how to protect photos and information by controlling who can see them and for how long. Mandy Chan talks about using passwords and access controls to keep out unwanted people and computer programs. She also suggests making content private after a certain time so it doesn’t stay online longer than needed. The main idea is to reduce the risk by limiting who can see your content and for how long. Read Transcript Mandy Chan So, we’re going to talk about how the second way you can reduce risk is by reducing exposure. This is something you can control. These are all things that you can do. Limit public access. This is really fundamental: when you put access controls and passwords in place, only users or trusted members can access it. You are preventing bots, AIs, and non-members of your community from being able to access those images. That’s just foundational, isn’t it? The next step is really limiting availability. Everything about reducing risk is about reducing exposure. If you limit availability, it means you can either make the content private to a certain group or individuals, or, if you’re going to publish it outside your enclosed website, make sure it expires. Maybe it’s only available for a week, a month, or even until the end of the school year, but after that period, disable access so that it’s not sitting out in the public domain longer than it needs to be. When this happens, you reduce the chance for a bot to go in and download it from your website.
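One common way to implement the “make sure it expires” idea is a signed, time-limited link. The sketch below is a minimal illustration, not tied to any particular product; the domain, secret key, and function names are all hypothetical.

```python
import hashlib
import hmac
import time

SECRET_KEY = b"replace-with-a-real-secret"  # hypothetical server-side signing key

def make_expiring_link(photo_id: str, ttl_seconds: int = 7 * 24 * 3600) -> str:
    """Build a share link that stops working after ttl_seconds (default: one week)."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{photo_id}:{expires}".encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return f"https://example-school.org/photos/{photo_id}?expires={expires}&sig={sig}"

def is_link_valid(photo_id: str, expires: int, sig: str) -> bool:
    """Serve the photo only while the window is open and the signature checks out."""
    if time.time() > expires:
        return False  # availability window has closed; the photo is no longer served
    payload = f"{photo_id}:{expires}".encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

The same pattern underlies the expiring share links offered by most storage services; once the window passes, the URL simply stops resolving to the image.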
Back to the Top [https://vidigami.com/deepfake-ai-landing-page/#units]

* Reducing AI Usability * KEY TAKEAWAYS * Make photos harder for AI to use. * Use filters, watermarks, and effects. * Apply facial cloaking and masking. * Lower image quality for protection. * Be transparent about AI modifications. VIDEO (09:29) SUMMARY This section discusses various technological methods to protect photos from being misused by artificial intelligence, especially in the context of preventing deepfakes. It outlines four key techniques: manipulating image composition, reducing resolution, applying filters and effects, and using watermarks. They explain how these methods introduce “artifacts” that confuse AI, making it difficult to replicate or create fake images of individuals. The conversation also touches on the role of facial recognition and explores the idea of using “contextually authentic” AI-generated images (real backgrounds with fake faces) for school marketing as a privacy solution, emphasizing the need for transparency. The overall message is that while there are many tools available, no single solution is perfect, and the field is constantly evolving. Read Transcript Mandy Chan The third thing, I believe, related to reducing risk is actually reducing AI usability. This is a technology aspect, and I’m going to talk specifically about four techniques we know will impact AI’s ability to leverage your photos. These techniques involve image composition and layout, image resolution, the filters and effects you can apply, and watermarks. How do we use these to prevent deepfakes? The terms you’ll frequently hear may include facial cloaking, distortion, spoofing, or masking. These are all techniques based on introducing subtle artifacts into an image that confuse deepfake software. There’s no perfect solution, and if there were, we could expect AI to rapidly adapt to get around it. One of the core elements of AI, and why we discuss face recognition, is that facial recognition is the basis for how data on a face is collected. This is why we care about how we use it and how we prevent the use of facial recognition technologies. Image composition is really interesting. Take a very typical group image with about a dozen kids, where ideally each face would be as clear as a profile picture. Of these dozen faces, AI detected only one face—this little girl in the middle. Even though this is a 9.5-megabyte image (6,000 by 4,000 pixels), because it’s a complex group picture, only one face was detected, and it’s actually quite blurry. This makes it harder for an AI to leverage and make use of it. So, composition is one thing. Now we’re going to look at image resolution. This is very cool. Here’s the original image, 100% of its size, 737 by 1104 pixels. One of the big questions we get is: if you reduce the resolution, why doesn’t it prevent AI from creating a fake image of me? Well, it’s gotten better. We reduced this image down to 50 by 75 pixels (7% of its size) and then upscaled it back to 1024 by 1536. What you’ll notice here is the result is actually pretty good. It’s a lovely person; it’s just not the same person that’s in photo A. So, is this a good result? This is a fake photo. You can’t say that photo C is the same as photo A. Then we took the original and downsized it to 200 by 300 pixels, and you’ll see that the resolution is blurrier. Then we upscaled it back up to 800 by 1200. You’ll notice that it looks very similar. All you need is a 200 by 300 pixel full-frontal image for AI to be able to replicate the face. So, if you are going to reduce the resolution of an image to protect it, you’re going to have to reduce it below 200 by 300 pixels. Filters and effects are also interesting. They are essentially artifacts—things that you add to a photo to enhance it or generate a desired effect. What we did here was reduce the size of the image for resolution, and then we added noise. The result that was generated is, as you can see, a completely different image—lovely as an end result, but not the same person as the original photo. Then we went into Canva and applied a different effect. The AI generated an image that is lovely, but also not the same person. So, introducing artifacts into an image using the filters and effects you have in your day-to-day tools can help protect the image from being replicated. The next technique is watermarks. We started with the original, then added the watermark, reduced the size, and added noise. The watermark made a pretty significant difference. This is definitely not the same person. If you compare them side by side, you’d see it’s actually quite different. Each AI will have its own biases and sensitivities to these parameters. So, all we’re really trying to do is impact its ability to replicate. Let’s see what happens when we start layering all this on the original photo: reduce the size, add some noise, add some effects, sprinkle it with some more effects, and this is what you get. If you published this image (image D) on your website, and if AI were to take that image and replicate it, it would look like a completely different person. This is the original photo. We are going to combine it with this random face, both very high resolution. Then we are going to use AI face swap to replace the original photo’s face with the random face in the middle. The background is the same—your school field. The school jersey is the same, the ball is the same. It’s all authentic, but the face of the student isn’t. Is this a good option for external marketing? Is this better than having real images or no images on your public website? The context is authentic, but the identity of the face is fake. For transparency, you can even add a little copyright at the bottom of the image for your school and state that it’s AI-modified. Then, use this opportunity to redirect your prospective families to come and talk to you, perhaps, and then you can invite them into your internal site for more information. Do you think using contextually authentic photos for external marketing is a good idea? I think that gives everybody a basis for what they need to work with. There’s a variety of technologies out there that can be used to impact the ability of AIs to replicate faces. It’s your choice to what degree you want to use them. There’s no perfect solution, and it’s going to continue to evolve. On that note, what can we do from a governance and policy point of view? James?
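As a rough illustration of this layering, the sketch below uses the Pillow imaging library to downscale below the 200 by 300 pixel threshold discussed above, add pixel noise, and stamp a visible watermark. The noise rate, jitter range, and watermark placement are arbitrary choices for the sketch; how strongly any of this disrupts a given deepfake tool will vary.

```python
import random
from PIL import Image, ImageDraw  # pip install Pillow

def harden_for_publishing(src_path: str, dst_path: str, school_name: str) -> None:
    """Layer the techniques above: downscale below ~200x300, add noise,
    and stamp a visible watermark before an image goes on a public page."""
    img = Image.open(src_path).convert("RGB")
    w, h = img.size

    # 1. Resolution: drop below the ~200x300 threshold, then scale back up
    #    so the image still fits the page layout.
    small = img.resize((150, max(1, int(150 * h / w))))
    img = small.resize((w, h))

    # 2. Noise: jitter a fraction of pixels to introduce artifacts.
    pixels = img.load()
    for y in range(h):
        for x in range(w):
            if random.random() < 0.10:
                jitter = random.randint(-40, 40)
                r, g, b = pixels[x, y]
                pixels[x, y] = (
                    max(0, min(255, r + jitter)),
                    max(0, min(255, g + jitter)),
                    max(0, min(255, b + jitter)),
                )

    # 3. Watermark: a visible overlay, ideally crossing the face region.
    ImageDraw.Draw(img).text((w // 10, h // 2), f"(c) {school_name}", fill=(255, 255, 255))

    img.save(dst_path)
```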
Back to the Top [https://vidigami.com/deepfake-ai-landing-page/#units]

* Accountability * KEY TAKEAWAYS * Ensure accountability and transparency. * Collaborate across all school departments. * Protect student privacy and rights. * Adapt and update policies for AI. * Provide consent options and training. VIDEO (05:12) SUMMARY This section is about how schools should handle student images and personal information, especially with the rise of Artificial Intelligence (AI). The speaker, James Wigginton, emphasizes the importance of accountability, transparency, and protecting the rights of students (data subjects). He explains that schools need to re-evaluate their legal reasons for using data due to AI’s impact on how images can be used. Key steps include developing clear policies and procedures, establishing effective consent and opt-out mechanisms (even granular ones for specific images), and providing ongoing training for staff, students, and parents. The overall message is that this is an evolving landscape that requires continuous monitoring and review to ensure ethical and compliant data handling. Read Transcript James Wigginton So, all of this is excellent, and these are the kinds of things that we should be talking about. But what we always want to be doing is demonstrating our accountability. Okay? So these are great conversations to be having. They’re not a single person’s conversation to be had. So, you know, if you are from marketing, then your data protection officer, data protection lead, compliance lead—everyone should be talking about these kinds of things, because the brand image of the school is very important. But also, we want to make sure that we are protecting the rights of our data subjects—students in this case. And certainly, when we look through a privacy lens, the law already typically says we need to be very careful with the use of personal data or PII, and images, as we know, are one form of it. So, where AI has changed the landscape, we should really be readdressing our legal basis, okay? And we’ll talk about that a little bit in some subsequent slides. Even we’ve evolved at 9ine, and this is what we do day in, day out. So we want to make sure that, whether we’re going for consent or legitimate interest—we’ll talk about that if you’re not familiar with it—we readdress the risk landscape. We know AI is a factor now. We know that images can now be used in a slightly different way than maybe two or three years ago, when you first were thinking about your legal basis. And we want to make sure that we’ve thought about that and justified it, okay? And you’ll have different mechanisms depending on where you are in the world. So it’s a bit specific based on your jurisdiction. But this is a good conversation to have. Once we’ve thought about legal basis, what we should then be moving to—if we want to click to the next one, Mandy—is our policies and procedures. And this is all very important because this is transparency. So whether we’re thinking about policies, procedures, or what you might call notices, we want to make sure that we’re always showing these to our data subjects, our parents, our students, our school community, our alumni, any of these individuals. Because if we’ve thought this through and we’ve taken that approach, and like Mandy said, there’s very rarely a right answer. It’s the best answer we can form given the changes right now.
Our policies and our procedures need to support that. So our policy should articulate to our data subjects why we do these things and how we’ve chosen to do them, so we can comply with the law, but also because we want transparency. We want to be trusted, okay? It’s an important relationship to have with your school community. And then our procedures should support our staff, so we understand how to do these things. So, going back to Mandy’s examples, that needs to be our approach. If we decide we’re going to do these things, it needs to be documented so everyone can understand how to follow that approach. Once we’ve got our policies and procedures all written down and clearly available to everyone, then we want to consider the next stage. So if we go to the next one, please, Mandy, we want to be thinking about our consent or opt-out mechanisms, okay? Your legal basis will typically determine whether you’re getting consent or relying on legitimate interest. But then we need to understand how we’re facilitating that. It really doesn’t matter which legal basis you go for; ultimately, we need to be able to respond when people want to withdraw consent or when they want to opt out, okay? And that can be difficult, that can be challenging. We often see it as quite the administrative burden. We want to make sure that we’ve got really effective ways of doing that, so that if someone does not want their images used anymore, we can respond, or potentially even be more granular than that and actually understand that maybe some images people don’t want used over other images. You know, I give this example to the schools I work with: how many times do people spend five or six takes on a selfie because they didn’t like the first few images, right? So we need to appreciate that, just because we’ve got a bank of images, maybe someone doesn’t want a particular image shared. And especially when we’re dealing with students, because we know it’s a sensitive age and their mental well-being is very important to us. So maybe we need to even start to consider more granular consent opt-outs: “You can use my image, but I don’t want you to use this image.” So we need to really think about that and how we’re evolving that conversation. And then once we’ve made sure that we’ve got that logged, we’re thinking about transparency and documentation. Okay? So do we have all the right things? Does everyone know how to act in the manner that we expect? And ultimately, are we also training people as well? Because we do need to train staff. It is an evolving landscape. Things are changing very quickly. And some of the schools I work with are even doing training with students, making them aware, and they’re also doing training with the parents in the school community, so that everyone understands that you’ve approached this very ethically and, from a compliance perspective, that you’re adhering to your requirements. And then we want to articulate that back out, because we do take these things seriously. We do understand that your image is important to you. And then of course, we want to regularly monitor and review, okay? We always want to be reviewing these things. If we just take a step back and go back two years, this wasn’t a problem.
This wasn’t something we were really thinking about, and now we are thinking about it, and AI will probably get better and better, right? It’s going to improve, and that’s going to bring some great benefits. It’s going to help us even more in our roles. And it may even help us with better solutions, like those Mandy suggested. But it also means the people who want to do harm are going to have better tools as well. So we’re going to have to be reviewing this all the time, right? It’s a moving target. It’s not going to stay still anytime soon.

Back to the Top [https://vidigami.com/deepfake-ai-landing-page/#units]

* Consent * KEY TAKEAWAYS * Obtain consent before using photos. * Provide clear opt-in and opt-out. * Emerging technologies increase privacy risks. * Photo removal from internet is challenging. * Extra safeguards required for minors. VIDEO (03:12) SUMMARY This section talks about how schools use photos of students and the rules they follow. It explains that getting permission (consent) is the best way, but new technology like deepfakes makes things riskier. Sometimes, schools might use another rule called “legitimate interest,” but they must let people say no and do extra safety checks. The speaker also says it’s hard to remove photos from the internet once they’re out there and that schools need to be especially careful with children’s photos. Schools must balance safety, permission, and the rights of students when using images. Read Transcript James Wigginton So, interestingly, most of you will probably operate under a consent model, which is very good. It’s the gold standard. It’s what we think of. We look for consent from people. So we’re getting explicit permission from our data subjects—parents and students—to be able to use their images. We’re also looking for some kind of digital agreement that we can track and provide evidence for. Now, what’s occurred recently is that we now know there’s additional risk on a website, for example. People can take images, put them through a deepfake image generator, and then cause potential harm. We now need to consider: if someone withdraws consent, can we actually take that photo down? You can take it down from your website for sure, but can you actually remove it from the internet? That’s quite difficult, actually, with indexation and things like that. So there’s a case to be made that sometimes consent is going to have to evolve. We have to think about different legal bases, because consent requires you to really try to make sure that image is gone, right? It’s not out there anymore. So, legitimate interest is a model we’re now considering, if it fits your jurisdiction, because AI has changed the risk landscape. Legitimate interest is slightly different because you can use images without explicit consent, but you do have to make sure you give people the ability to opt out. You also have to do additional risk assessments to understand how you can reduce that risk. With consent, we know some of the advantages. It’s clear, explicit, and very transparent. We’re seeking it in advance, and ultimately, the data subject has control over the image because they can say yes or no right at the start. We also have transparency and accountability through the consent process. However, that does lead to a bit of a yes-or-no scenario. People say no to everything or yes to everything, and sometimes that can be a problem, especially as risk profiles start to increase.
If it became more common for criminal organizations to use deepfakes in that way, then if we’re just working with a yes-or-no mentality, we may end up having a lot more withdrawals of consent. So, having more detailed choices around photos might be really important so that we can give people options. The challenge is, unless you have a technology solution or a lot of resources, it’s an administrative burden to do this. Withdrawal of consent and understanding removal obligations are important issues. If someone withdraws their consent, we have to make sure we’ve removed that image. In practice, that can be quite difficult, because it may now be indexed and still out there. With children, we need to be very conscious of that. If we go to legitimate interest, Mandy, then the advantages are that it’s a simplified process. We’re saying we can use your image, but you need to opt out if you don’t want us to. So it gives you a bit more flexibility and practicality, as you can use it for business purposes, as long as you have a good reason. But the challenges are that we need to think about balancing rights and interests. This is not a perfect situation. We can’t just move to legitimate interest. We have to do an assessment and consider all the things we could do to make sure we’re protecting that personal data—these images.

Back to the Top [https://vidigami.com/deepfake-ai-landing-page/#units]

* Legitimate Interest * KEY TAKEAWAYS * Have legitimate interest to use. * Always protect all photos. * Make opting out easy. * Prevent wrong photo sharing. * Don’t keep photos forever. VIDEO (02:07) SUMMARY This section focuses on how schools should manage and protect images of students, especially given the increasing risks from new technologies. It introduces the concept of a “legitimate interest assessment” as a way for schools to justify using images, particularly when it’s hard to remove photos from the internet after consent is withdrawn. It emphasizes the need for secure storage, controlled access (like “gated portals”), and effective ways for people to opt out of image usage. It also highlights the importance of auditing image use, preventing unauthorized sharing, customizing permissions, and adhering to rules about how long data (images) can be kept. The overall message is that schools must continuously adapt their practices to protect images while still promoting their community. Read Transcript James Wigginton So, this is called a legitimate interest assessment, okay? Some of you may have heard of them; some may not. For example, if we were going to move to legitimate interest as our legal basis—because we consider it may be very difficult for us to fulfill a withdrawal of consent, as we can’t stop the images that are now out there on social media, for example, as well as on your website—then we need to really think about how we’re going to put protections around that image, okay? And, to be honest with you, whether it’s consent or legitimate interest, these are the kinds of things we should be thinking about anyway, because we really do want to try and make sure we’re protecting this kind of thing. So, do we know if our images are securely stored and have access controls? Gated portals, gated technologies—something where we can put some protection around those images and make sure they are stored safely. That is maybe how we need to rethink things a little bit. Do we have really good opt-out management? Okay, that could be tricky.
But again, do we want to be able to give people the ability to be a bit more selective about what we’re going to use and what we’re not? Can we audit this? Because if there are risks, if we know images can be used for harm, can we feel confident that we can audit this? Okay? And can we demonstrate our accountability? Can we restrict external sharing? We need to be confident that people aren’t going to share the wrong images or share them with people they shouldn’t be sharing them with. It’s one of the most common data breaches we see: people sending the wrong data to the wrong person, and it happens with images too. Can we customize permissions for image usage, right? These are all things that we should be considering. And can we make sure we adhere to data retention and deletion? If you have a records retention schedule, for example—and typically under privacy law there’s a requirement that you don’t hold data for too long—how do we manage that? How are we making sure we’re not using images longer than we’ve committed to our data subjects? So these are all important things that we should be thinking about right now, because the risk landscape has increased. Therefore, we should be having conversations about whether there are things like this we could do, not to stop us promoting the brand or the school community, but to protect these very important images in ways that maybe we’ve not considered before.
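To make the retention point concrete, here is a minimal sketch of an automated retention check. The categories, periods, and field names are hypothetical; a real schedule would come from the school’s own retention policy and local law.

```python
from datetime import date, timedelta

# Hypothetical retention schedule: how long each image category may be kept.
RETENTION = {
    "yearbook": timedelta(days=7 * 365),
    "marketing": timedelta(days=2 * 365),
    "classroom": timedelta(days=1 * 365),
}

def images_due_for_deletion(images: list[dict], today: date | None = None) -> list[dict]:
    """Return images held longer than the period committed to data subjects.
    Each image is a dict like {"id": ..., "category": ..., "captured_on": date}."""
    today = today or date.today()
    return [
        img
        for img in images
        if (limit := RETENTION.get(img["category"])) and img["captured_on"] + limit < today
    ]
```

Run on a schedule, a check like this turns “don’t keep photos forever” from a commitment in a policy document into something auditable.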
Back to the Top [https://vidigami.com/deepfake-ai-landing-page/#units]

* Education * KEY TAKEAWAYS * Set and teach clear rules. * Follow privacy and copyright laws. * Give people control of data. * Use one school-wide system. * Secure ecosystem. VIDEO (02:02) SUMMARY This section emphasizes a three-part approach to managing data and images in schools. First, it highlights the need for clear “rules of engagement” and education for everyone involved. Second, it explains that “governance” should be “school-centric,” meaning rules must align with the school’s values, data privacy laws, and intellectual property rights, while also allowing for individual family preferences regarding image protection. Third, it stresses that the system for managing this data must be “platform-centric,” acting as a broad “infrastructure” that enforces school policies across all digital applications (like learning systems and websites) while supporting the growing use of data within the school’s entire digital “ecosystem.” Read Transcript Mandy Chan So, with this in mind, I tried to cover what we’ve been talking about, which is that you need to establish the rules of engagement and you need to add education to it. This gives us what we can do, what we should do, and here’s how. The governance part is school-centric, and it deals with your data privacy for regulatory reasons. It also deals with your copyright and intellectual property requirements. And it takes into account your school’s values and principles—what’s right for your school. You might have a school where the families are very, very conservative, and so you want to be able to give them the ability to protect their images more. Or you might have one parent who’s super conservative, and they should have the ability to control their consent more. The second part is individual-centric, because it is the person’s right to control. You have to give them a mechanism to be able to do that. And then the third part is the program itself, which has to be platform-centric. Unlike a lot of applications that are very classroom-centric, you need to think about this problem a little bit more broadly. It is an infrastructure that you’re putting in place, where you have the ability to enforce compliance with your school policy while at the same time ensuring it’s set up in a way that supports the growing use of this data and all the different applications in your ecosystem—whether they’re learning management systems, portfolios, websites, or what have you—that need access to this content.

Back to the Top [https://vidigami.com/deepfake-ai-landing-page/#units]

* Accountability * KEY TAKEAWAYS * Keep photos and data safe. * Only approved users get access. * Set and automate privacy controls. * Flag and protect photo rights. * Remind users of good habits. VIDEO (03:35) SUMMARY This section focuses on the critical aspects of managing and protecting images within a school’s digital environment. It emphasizes that secure storage is fundamental and that accountability requires knowing who is uploading and downloading photos, ensuring only authenticated users have access. It highlights the need for clear user agreements that define personal use rights for shared photos. A key point is the importance of a system that allows users to “flag” inappropriate photos and manage their “privacy rights,” including granular consent levels for how their images are used (e.g., not for social media, marketing, or yearbooks) and the ability to opt out of facial recognition. Such a system must be automated due to the complexity of managing individual preferences, while respecting intellectual property rights (like copyrights and watermarks) and continuously educating users on responsible image handling. Read Transcript Mandy Chan Storage and data processing, I think, is key. That’s just security. That’s foundational. The respect and accountability that we’re talking about involve making sure that there’s no anonymity among the individuals who are uploading and downloading photos, and that your members are able to access content only because you’ve authenticated them. You must ensure there is a user agreement with your end users, which allows them to acknowledge the terms of use of the platform, where they are granting personal use rights to the photos that they are sharing with each other. This also means agreeing that they have the right to share and access the photos they are offering up on the platform. Because it’s so individual, you need a mechanism for users to be able to flag a photo for whatever reason. So, when James was mentioning how we all take these photos, selfies, and there’s probably only one out of ten that we think is actually a good representation of us and we might be okay sharing, you need the ability to remove the other nine, so to speak. A robust system that allows you to be inclusive yet, at the same time, manage that access, I think, is really key. How much access? These are all your permissions: who can upload, who requires moderation, who can view. Then you need to have privacy rights management. These are usually the consent levels, such as: “I don’t want my photo to be shared on social media. I don’t want it to be used in marketing. I don’t want it to be used for a yearbook. I don’t want to be part of this at all, so opt me out. If there’s a photo that has me in it, I want it to be unshared with everybody else in the community, regardless of who’s uploaded it.” All this needs to be automated; otherwise, you just can’t manage it. It’s too onerous. “I don’t actually want facial recognition to run over my image and automatically identify me.” I should have a right to ask for that, and for you to be able to enforce it. And then everybody needs to know what those consents are, where they’re relevant. I need to be able to respect intellectual property rights. So, if it’s my image, I should be able to indicate a copyright over that image. And if I don’t want that image to be replicated easily, I should be able to put a watermark on that image to prevent that from happening.
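As a sketch of what automating these consent levels could look like, here is a minimal model; every flag, field, and function name is hypothetical, mirroring the levels listed above rather than any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class PrivacyRights:
    """Per-student consent flags mirroring the levels described above."""
    community_sharing: bool = True    # visible to authenticated community members
    social_media: bool = False        # may appear on school social accounts
    marketing: bool = False           # may appear in external marketing
    yearbook: bool = True             # may appear in the yearbook
    facial_recognition: bool = False  # may be auto-identified by face tagging
    opted_out: bool = False           # "I don't want to be part of this at all"
    unshared_photo_ids: set = field(default_factory=set)  # flagged or blocked photos

def may_use(rights: PrivacyRights, photo_id: str, purpose: str) -> bool:
    """Automated check run before any share or publish action, so no one
    has to track individual preferences by hand."""
    if rights.opted_out or photo_id in rights.unshared_photo_ids:
        return False
    return getattr(rights, purpose, False)

# Example: may_use(rights, "photo_123", "social_media") gates a social post.
```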
This part, we categorize under education. This is a consistent way for you to remind everybody what they should do, not just what they can’t do, but what they should be doing. So, that means when they download an image, remind them of their personal rights—the personal use of the media being downloaded. It’s not for public distribution. Remind them that if there’s a problem, they can flag it: tell us why you’re flagging it, and then an admin can review and either restore it or remove it from the system. That’s just respect.

Back to the Top [https://vidigami.com/deepfake-ai-landing-page/#units]

* Face Tagging * KEY TAKEAWAYS * Every photo matters, even blurry. * AI can mislabel student faces. * Centralize all photos. * System must protect student privacy. * Inclusivity. VIDEO (02:27) SUMMARY This section discusses the challenges and solutions for managing photos, particularly student images. It highlights that while AI can assist with photo recognition, it’s not always accurate, leading to “mistags” that parents dislike. It emphasizes the need for a “verification mechanism” to correctly identify individuals, especially considering factors like siblings, aging, and photo quality issues (e.g., motion blur). The core idea is to empower the school community with more control over image tagging and management. The goal is to create a “robust” and “inclusive” central photo library that can be easily used for various school purposes, from websites and yearbooks to presentations and digital signage, while ensuring proper management and protection of student images throughout their school journey. Read Transcript Mandy Chan So, regardless of whether a photo is great because it comes from a professional photographer or is blurry, we want to be able to see it. In a perfect world, AI would automatically recognize every photo that gets uploaded. In practice, it doesn’t always work. And parents have no tolerance for mistags of their kids. So, in this image here, you see eight photos of my two boys. They look similar, but only half are right. The other half are his brother. And this is very common. So, you need a mechanism for verification in order to tackle issues like siblings, aging (when they look very different, especially boys between 10 and 13 years old—overnight they become young men), and motion blur from sports or costumes, and what have you. So, you have to treat tagging as part of engagement. You are empowering your community with more control than ever before to manage and protect their kids’ images, as well as collect their kids’ images. And when you collect those images over time, it’s amazing, because you’re visually documenting their entire school journey from the first day of school through graduation.
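One plausible shape for that verification mechanism is to route every AI tag suggestion through a confidence check before it is applied. The threshold, the score gap used to catch sibling lookalikes, and all names in this sketch are hypothetical.

```python
CONFIDENCE_FLOOR = 0.90  # hypothetical threshold; a real system would tune this
SIBLING_GAP = 0.05       # two close scores often mean lookalikes, e.g. siblings

def route_tag(photo_id: str, candidates: list[tuple[str, float]]) -> dict:
    """Decide whether an AI face tag is applied automatically or queued for a
    parent or staff member to confirm. candidates holds (student_id,
    similarity_score) pairs from a face matcher; assumes at least one pair."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    best_id, best_score = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0.0
    if best_score >= CONFIDENCE_FLOOR and best_score - runner_up >= SIBLING_GAP:
        return {"photo": photo_id, "student": best_id, "status": "auto_tagged"}
    # Low confidence, or two students (often siblings) scored nearly the same.
    return {"photo": photo_id, "student": best_id, "status": "needs_verification"}
```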
At the end of the day, what you want is a photo library that feeds into all the different things that you want to do as an educator and as a school. Whether you want to find and use a photo on a website, access it in Canva to build your yearbook, use it in PowerPoint, or put it up on digital signage—these are all very common applications of your photos. And everybody has a reason to use photos differently. So, putting a system like this in place, one that is inclusive yet managed, allows us to get the ultimate benefit out of the way we want to use photos at a school.

Back to the Top [https://vidigami.com/deepfake-ai-landing-page/#units]

* Q&A * KEY TAKEAWAYS Schools need to prioritize clear, understandable permission forms for data collection, especially concerning new technologies like AI. This includes explaining risks, offering opt-out options, and regularly updating policies. For older students, dual consent (from both student and parent) is recommended. Education about AI and its potential risks (like bias and deepfakes) is crucial to foster understanding and trust. VIDEO (10:24) SUMMARY This Q&A section covers a discussion among school privacy and AI experts about how schools should handle permission and waiver forms in a world where privacy laws and technology are always changing. They talk about making forms easy to understand, letting people change their minds, and updating forms regularly. The experts also discuss the risks of new technology like AI, including deepfakes and privacy issues, and stress the importance of teaching students and parents about these risks. They recommend getting both parent and student permission as kids get older and note that most schools are still figuring out how to handle AI-generated images and videos. Read Transcript Renee Ramig So the first question is: What are the best practices for waiver and permission forms given the changing privacy landscape? James Wigginton I guess that’s more of a me question, isn’t it? So yeah. I mean, the big thing about it, first of all, is accessibility. We want to make sure that when we’re thinking about these kinds of things, it’s really easy for our data subjects—and speaking very much for 9ine, that means our parents and our students—to understand what we’re asking of them, okay? We want to make sure we’re removing those barriers. And then also, we want to ensure that what we’re trying to offer here is a manageable system. So if we’re going to give people the ability to give consent, or if we’re going to use a different basis, we need the mechanism to be able to withdraw it as well, and we need to be able to manage that. So it was interesting, when we had the instances occurring in the UK, a lot of the schools I worked with were saying, “Well, should we tell parents about what’s happening right now so they can opt out? So they can give that kind of consent?” And that’s the right approach, but it’s quite difficult, because you typically deal with consent at a certain time of year, right? You do it with your admission cycle. So are there things like Mandy’s talked about—can we do it more dynamically, can we do it in real time? So again, it’s exploring those kinds of options, making sure that we’re covering ourselves from a legal perspective. Of course we want to understand our privacy law and our information rights, making sure we’re articulating them in any of our consent forms. But then again, it’s always about keeping an eye on them, right? Because things do change. Here’s a very quick example.
Many of the schools that we work with didn’t say things like, “If we use your photo on our website, we lose control of it afterwards.” And it’s about being that clear with people, because we want to make sure that people are giving informed consent. So it’s thinking about those risks and being able to articulate them back out, so people actually know what they’re agreeing to. Okay, so those are a lot of the top tips I would give. As for how you do it: whether it’s a technology solution or a manual process, manually it’s quite difficult. I hear that a lot at the schools I work with; it’s challenging to do at certain times of year. The mechanism, as long as it’s reliable and has good human checks and balances, is fine. But as time goes on, it does get harder. It becomes more of an administrative burden. So any way that we can enhance that, take the burden off people, anything that’s going to make it a little bit more efficient, is a good way to think about it as well. Josephine Yam Yes. I’d like to build on what James mentioned. As we know, law always lags behind technology, and AI is changing almost every day, so people get paralyzed. It’s so complex. There’s already this fear of AI, and in terms of whether we even use it, it paralyzes a lot of people. And so the only way to overcome the fear of AI, and of what bad actors can do with AI, is by transforming fear into fluency through education, understanding what AI can do and what it cannot do. And there are already a plethora of risks, not only privacy risks but AI ethics risks as well, because data is the fuel of AI. So obviously privacy is one of the risks, but the other AI risks include bias and discrimination, transparency, and explainability. And so there needs to be constant education, because as we know, AI without ethics is like a car without brakes. And so we need to really understand how we can always keep up to date with the developments in this AI world. Renee Ramig About how often should permissions and waivers be reviewed and updated? James Wigginton Yeah, that’s a good question. So from a legal perspective, it typically makes sense that you as a business are entitled to have a cycle, and more often than not, it’s going to be done with your admission cycle. Okay? That’s the reality of it. That’s when you’ve got new parents who are going to be joining the school with their children. And then you’ve obviously got re-enrollment. So that does make sense. That would be a very good time. Before we redo consent, we want to make sure we’re reviewing our notices and our policies, making sure the law hasn’t changed, the risk profile hasn’t changed with the AI landscape, these kinds of things. But again, there will be things, unfortunately, that will trigger new risks, right? So again, I know I keep going back to it, but with the deepfakes in the UK and those schools impacted, parents ask questions. They will ask questions; they’ll want to know if this is a problem they need to be concerned about. So we do need the ability to review it throughout the year as well. If there are issues, if things come up that are going to change the way people think, we’re obviously going to want to respond to our school community if there’s a risk. It might not be your school; it could be a school in your area it happens to. But people will talk about these things. And we live in a world of social media now, so there are always influencers who will be talking about it as well. I’ve seen it myself before.
So I think annually as a minimum, of course, because things change. But it is good to be able to be adaptive, because there are risks that may happen, and you may be exposed to one, and then we need to be able to respond to our parents about how we’re dealing with that. Josephine Yam AI changes fast. For example, when ChatGPT was introduced in November 2022, everybody was caught unaware; all of a sudden children were bringing ChatGPT to do their homework. And then they were manipulating pictures. For example, there’s the story in New Jersey where a 12-year-old male classmate took pictures of 20 of his female classmates from Instagram, ran them through “nudify” apps to create deepfakes, and then put them online. The school could not do anything about it, because they did not have any policies in place to address that; they were totally caught off guard. So what happened was this teenager, Francesca Mani, became the voice for getting laws in place. And so now New Jersey has a law making deepfakes a crime, and New Jersey joins 27 other US states in making sure that deepfakes are criminalized. And what’s really interesting is that they found out that with some children, it’s just a lack of literacy. They do it as a joke, to make a meme. They don’t know how much harm it can cause. The ethical rights and the dignity of their classmates can actually be impaired, and that can cause mental health issues. So education is absolutely critical, especially in this changing world of AI. Renee Ramig So, the next question is: what happens if a family opts in and students opt out? Do students have the right to opt out, and at what age? James Wigginton Okay, so this is first of all a privacy question. When it comes to opting out of photo usage or any personally identifiable information, legally, it depends on the age and it depends on your data protection law. In most jurisdictions, for example, under the GDPR in the EU or under COPPA in the US, if a student is under a certain age—13 in the US, 16 in the EU—parents are typically going to provide that consent. So if we think about it from a data privacy perspective, it’s going to be the parent, right? And there isn’t actually a mechanism for students to give consent themselves unless they are above those ages. So actually the parents have that. But—and I’m sure Josephine will have some thoughts on this as well—ethically, we may want to start thinking a little bit differently about that. Because ultimately we are creating a digital footprint of students when we’re putting them out there, whether it’s on social media or on our website, and we’re making quite a lot of decisions about their digital image, their digital avatars, as we sometimes refer to them. So if they’re of a certain age, then perhaps there is a blended approach where, yes, we’ve got the privacy aspect and the law that says a parent can consent, but then, given your principles as a school, you may want to give students some say in what they’re happy to have shared about themselves. So I’m sure, Josephine, you’ve got some views on this as well. Josephine Yam Absolutely, I’d love to add to that, James. So legally speaking, absolutely, there are different ages for different jurisdictions. However, in terms of best practices, because we work with educational institutions, the best practice is to get dual consent, especially if the child is older.
So when the child hits something like 13 or 14, there’s this recognition of their autonomy, of being able to control their identity. Because privacy, the right to privacy, is a gateway to other rights. And one of those is being able to present ourselves to the world according to our own terms. So when these teenagers of ours want to be able to control their identity and to only put out pictures that they think represent themselves well to the world, then we should give them that agency and that autonomy. And that really goes back to respecting their right to human dignity as well. So a very important aspect to consider, as James mentioned, is being able to have that hybrid: dual consent might be a good idea, especially as they get older. Renee Ramig What do school policies about AI-generated images and videos look like? James Wigginton Hmm, interesting. So, I work with hundreds of schools, and I don’t think many schools I’ve worked with have quite got that far yet—although the poll was interesting, that you’ve done training on this kind of thing, so I was very impressed. A lot of the work that I’m doing with schools is around AI policies for how AI is going to be used in the classroom, through a privacy lens. So we need to make sure that we’re training our staff and our students not to put personal data into AI engines, because we don’t know how those learning engines always work or what they’re going to do with that personal data. When it comes to AI-generated images, as in, whether that’s going to be something that we let students choose or staff manipulate with AI, I think that’s going to be an evolving discussion. I think that’s going to have to be talked about, and I imagine it will be talked about more and more. Obviously, Mandy covered this quite well at the start of the webinar, and I think, like everything, there will be benefits and risks. If I think about it at the level of the individual: does the individual have the right to change their images? You know, people use filters, these kinds of things. But then if we think about it from a group photo perspective, how’s that going to work? Because how are you going to manipulate a group photo without impacting the other people in that image? So I think it’s a tricky one. I think it’s one that will evolve like everything else. At the moment, though, it’s not something I’m seeing so much, but I can imagine it will catch up with us quite quickly. I imagine it will start to be a topic of conversation. Have a great day. Thank you so much. Thank you. Cheers. Bye-bye. Bye everybody.

Back to the Top [https://vidigami.com/deepfake-ai-landing-page/#units]
For those of you who don’t know who we are, we support schools globally—from Japan to North America and everywhere in between—typically with data privacy, cybersecurity issues, and now AI, because that’s a trending topic and there’s a lot of intersection between those things when we think about AI. Thank you to the Vidigami team for inviting me today; it’s much appreciated. I look forward to hearing all your questions and answering them as well. Mandy Chan Thanks so much, James. I’m Mandy Chan, and I’m the founder of Vidigami. I’ve spent my entire career in technology innovation. When I became a parent of school-aged kids, I became acutely aware of the privacy concerns related to social media and what today’s generation of students, families, and schools need. This led to the launch of Vidigami. What you’ll find unique about Vidigami is that we’ve always thought about the media management problem differently. It’s always been more than a digital asset management system, because content organization needs to be purposeful, where the value is derived when the community benefits. The challenge we have is that, unlike any other personal data, photos are meant to be shared, and they’re the building blocks of our stories and what connects people. Today, that means we have to address a lot of issues, and that’s complicated. That’s why we’re going to talk through many of them today. In addition to having James and Nine Consulting join us, we have a very special guest participating during our Q&A. Josephine Yam I am Josephine Yam. I’m an AI lawyer and AI ethicist, and the CEO and co-founder of Skills for Good AI. We are a global platform for responsible AI literacy, because as we all know, not everything legal is ethical, and not everything ethical is legal. At Skills for Good AI, we provide AI literacy—not only how to use AI, but how to use AI for good. Mandy Chan Thank you. This is what we are going to focus on. Very quickly, we’re going to define what is real and what is not. Then we’re going to look at some practical techniques for how you can address deep fakes. We’re going to talk about policy and governance, which James will lead, and then programs and solutions, which is what Vidigami really handles. Inside Vidigami, we’re going to wrap up with some Q&A. Back to the Top [https://vidigami.com/landing-page-stevens-coop/#units] KEY TAKEAWAYS * AI creates realistic fake media. * Deepfakes are AI-generated fakes. * AI simplifies fake image creation. * Deepfakes pose significant harm risks. * Criminals exploit deepfakes for threats. * Mitigate AI’s harmful applications. VIDEO (04:13) SUMMARY This section is a conversation about how AI can create fake photos and videos, called deepfakes. How easy it is now to make these fakes, the risks they bring—especially for schools and students—and the need to find ways to protect people from the harm deepfakes can cause. Read Transcript Mandy Chan So, AI and photos: real or fake. What we want to start with is just, this is really the question. You know, when you take a look at these four photos, only one of them is real; the other three are actually generated. So, what are deepfakes? Deepfakes are really the use of AI to create realistic but entirely fake representations of individuals. These can make it appear as though someone is saying or doing something they’re not really doing. And, you know, I think the concern we have is, the reality is that this has always been possible. 
James, from your point of view, are you seeing the risks go up? Is there a real concern with deepfakes? What do you think?

James Wigginton

Yeah, from our experience, we're not looking to scare people, but the reality is that the risk landscape has grown, and Mandy's point was very apt: it's just so easy to do now. There are criminal organizations out there who will take advantage of this. There have been a couple of independent schools in the UK, for example, that have effectively suffered a version of a ransomware attack. These organizations took images from the schools' websites and ran them through deepfake generation engines. They created, unfortunately, sexually explicit content of those students, then ransomed it back to the schools: "If you don't pay this ransom, we are going to release these images." That obviously has huge implications for those students and their well-being.

So, what we need to be aware of is that although AI brings huge benefits, there are risks, and we need to be reassessing them all the time. We absolutely understand what's advantageous, but do we have a lens on the risks? How are we trying to mitigate them? Are we thinking about new innovations? Ultimately, these criminal organizations are innovating; they are using AI for these purposes. So we need to respond in kind, with better technical solutions and better training.

And then, as many of you may have seen, beyond the worst case of criminal organizations, there are of course students doing this as well. Sometimes it's because it's deemed funny, but I've worked with many schools where students have created similar sexually explicit images of other students. We need to be very conscious of the harm that can cause. We want to use images in brilliant ways to promote the school, but we also need to consider how we can mitigate harm. It's a developing area, but we should be thinking about it now and planning risk mitigation around these kinds of things. That would be our advice right now.

Back to the Top [https://vidigami.com/landing-page-stevens-coop/#units]

KEY TAKEAWAYS

* Photos risk student safety and privacy.
* AI enables realistic fake images.
* Laws strictly protect photo data.
* Clear policies and education needed.
* Collaboration essential for privacy protection.

VIDEO (03:26)

SUMMARY

This section is a conversation about how to safely share student photos in a world where AI can easily create fake images (deepfakes). It covers the importance of privacy, the laws that protect images, and the need for clear rules and education. New tools and policies, such as robots.txt rules for AI crawlers, help keep data from being used to train AI, and some AI companies now refuse to make edits that could break privacy rules. The conversation ends by noting that while there's no perfect answer, it's important for everyone to work together on solutions.

Read Transcript

Mandy Chan

How do we balance sharing photos with constituents while protecting student safety and privacy in this age of AI and deepfakes? Photos and images are a universal language; they're the building blocks of our stories. AI is going to make it easier and easier to create increasingly sophisticated content for our stories. The complexity comes from our interaction with this data, because personal data, privacy rights, publicity rights, content ownership, and intellectual property rights are governed by laws that apply to images in ways that don't apply as much to other kinds of data. The value of photos is realized when they're shared. The challenge we have is to reduce the complexity of this problem. It starts with policies, then programs, and education along the way.

We know that regulatory efforts are underway, but we need to do our part with policy and governance. This is already starting to happen: the social responsibility, the corporate responsibility part. I don't know if everyone is aware, but a couple of years ago, robots.txt rules for AI crawlers were introduced. You can add them to your website to ask AI companies not to use your site's data for training. It doesn't mean they have to comply, but if you publish those rules, the crawlers behind tools like ChatGPT and Google Bard will respect the request.

What we've noticed this year compared to last year is that when you upload photos of kids into ChatGPT (GPT-4o), just as we did here, uploading my son's picture and asking it to replace a baseball player's face with my son's, it responded, "I can't complete that request because it violates our content policies. If you have another idea or need help with a different image or edit, feel free to let me know." I think these are examples of companies, organizations, and individuals taking the lead and doing the right thing. Have you seen examples of this elsewhere, James or Josephine?

James Wigginton

Yes, I have. It's one of the big talking points at the moment, especially with deepfakes and ransomware incidents involving schools; these issues are starting to ripple out. There's a lot of exploration right now about how to mitigate these risks. As you said earlier, there's no perfect solution, but it's about looking for answers and understanding where decisions can be made and how we can adapt. That's a great example of what can be done to help prevent these problems.

Back to the Top [https://vidigami.com/landing-page-stevens-coop/#units]
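For reference, the opt-out Mandy describes is expressed with ordinary robots.txt directives. A minimal sketch, using published AI-crawler user agents (OpenAI's GPTBot, Google's Google-Extended, Common Crawl's CCBot); as she notes, compliance is voluntary:

```
# robots.txt - ask AI training crawlers to stay away from the whole site
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```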
KEY TAKEAWAYS

* Limit exposure to reduce risk.
* Restrict access to trusted users.
* Use passwords and access controls.
* Set time limits on availability.
* Remove access after set period.

VIDEO (01:31)

SUMMARY

This section explains how to protect photos and information by controlling who can see them and for how long. Mandy Chan talks about using passwords and access controls to keep out unwanted people and computer programs. She also suggests making content private after a certain time so it doesn't stay online longer than needed. The main idea is to reduce risk by limiting who can see your content and for how long.

Read Transcript

Mandy Chan

So, the second way you can reduce risk is by reducing exposure. This is something you can control; these are all things that you can do. Limit public access. This is really fundamental: when you put access controls and passwords in place, only trusted members can get in. You are preventing bots, AIs, and non-members of your community from accessing those images. That's just foundational, isn't it?

The next step is limiting availability. Everything about reducing risk is about reducing exposure. If you limit availability, it means you can either make the content private to a certain group or individuals, or, if you're going to publish it outside your gated website, make sure it expires. Maybe it's only available for a week, a month, or until the end of the school year, but after that period, disable access so it's not sitting in the public domain longer than it needs to be. When you do this, you reduce the chance for a bot to come in and download it from your website.

Back to the Top [https://vidigami.com/landing-page-stevens-coop/#units]
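One common way to implement the expiry Mandy recommends is a signed, time-limited link, so a published photo URL simply stops working after a set period. A minimal sketch, assuming the photos are served by an endpoint that verifies the signature; the host name and the one-week default are illustrative:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-key-rotate-me"  # kept on the server, never published

def signed_photo_url(photo_id: str, ttl_seconds: int = 7 * 24 * 3600) -> str:
    """Build a photo link that expires after ttl_seconds."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{photo_id}:{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"https://photos.example-school.org/{photo_id}?exp={expires}&sig={sig}"

def is_link_valid(photo_id: str, expires: int, sig: str) -> bool:
    """Server-side check: reject tampered or lapsed links."""
    if time.time() > expires:
        return False  # the availability window has closed
    payload = f"{photo_id}:{expires}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```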
KEY TAKEAWAYS

* Make photos harder for AI to use.
* Use filters, watermarks, and effects.
* Apply facial cloaking and masking.
* Lower image quality for protection.
* Be transparent about AI modifications.

VIDEO (09:29)

SUMMARY

This section discusses technological methods to protect photos from being misused by artificial intelligence, especially to prevent deepfakes. It outlines four key techniques: manipulating image composition, reducing resolution, applying filters and effects, and using watermarks. These methods introduce "artifacts" that confuse AI, making it difficult to replicate individuals or create fake images of them. The conversation also touches on the role of facial recognition and explores the idea of using "contextually authentic" AI-generated images (real backgrounds with fake faces) for school marketing as a privacy solution, emphasizing the need for transparency. The overall message is that while many tools are available, no single solution is perfect, and the field is constantly evolving.

Read Transcript

Mandy Chan

The third way to reduce risk is reducing AI usability. This is a technology aspect, and I'm going to talk specifically about four techniques we know will impact AI's ability to leverage your photos: image composition and layout, image resolution, the filters and effects you can apply, and watermarks. How do we use these to prevent deepfakes? The terms you'll frequently hear include facial cloaking, distortion, spoofing, and masking. These are all techniques based on introducing subtle artifacts into an image that confuse deepfake software. There's no perfect solution, and if there were, we could expect AI to rapidly adapt around it. One core element here, and why we keep discussing face recognition, is that facial recognition is how data on a face is collected in the first place. That's why we care about how it's used and how we can prevent its use.

Image composition is really interesting. Take a very typical group image with about a dozen kids: you might expect AI to pick out every face, as it would in a profile picture. In fact, of these dozen faces, AI detected only one, this little girl in the middle. Even though this is a 9.5-megabyte image (6,000 by 4,000 pixels), because it's a complex group picture, only one face was detected, and it's actually quite blurry. That makes the image harder for an AI to leverage. So, composition is one thing.

Now let's look at image resolution. This is very cool. Here's the original image at 100% of its size, 737 by 1104 pixels. One of the big questions we get is: if you reduce the resolution, why doesn't that prevent AI from creating a fake image of me? Well, AI has gotten better. We reduced this image down to 50 by 75 pixels (7% of its size) and then upscaled it back to 1024 by 1536. The result is actually pretty good. It's a lovely person; it's just not the same person as in photo A. So, is this a good result? It's a fake photo: you can't say photo C is the same person as photo A. Then we took the original and downsized it to 200 by 300 pixels, where the resolution is noticeably blurrier, and upscaled it back to 800 by 1200. This time the result looks very similar to the original. All an AI needs is a 200 by 300 pixel full-frontal image to replicate a face. So, if you are going to reduce resolution to protect an image, you have to go below 200 by 300 pixels.

Filters and effects are also interesting. They're essentially artifacts: things you add to a photo to enhance it or produce a desired effect. What we did here was reduce the image's resolution and then add noise. The generated result, as you can see, is a completely different image: lovely as an end result, but not the same person as in the original photo. Then we went into Canva and applied a different effect, and the AI again generated an image that is lovely but not the same person. So, introducing artifacts with the filters and effects in your day-to-day tools can help protect an image from being replicated.

The next technique is watermarks. We started with the original, then added a watermark, reduced the size, and added noise. The watermark made a pretty significant difference: this is definitely not the same person, and if you compare them side by side, you'd see it's actually quite different. Each AI will have its own biases and sensitivities to these parameters; all we're really trying to do is impair its ability to replicate. Let's see what happens when we layer it all onto the original photo: reduce the size, add some noise, add some effects, sprinkle on a few more, and this is what you get. If you published this image (image D) on your website and an AI were to take it and replicate it, the result would look like a completely different person.
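To make the recipe concrete, here is a rough sketch of the downscale, noise, and watermark layering Mandy demonstrates, using Pillow and NumPy. The 200-by-300 threshold comes from her example; the file paths, noise level, and watermark text are illustrative, and, as she cautions, results will vary by AI model:

```python
from PIL import Image, ImageDraw  # pip install Pillow
import numpy as np

def degrade_for_publication(src: str, dst: str) -> None:
    img = Image.open(src).convert("RGB")

    # 1. Resolution: drop below the ~200x300px threshold that was enough
    #    for AI to faithfully reconstruct a face in Mandy's test.
    img.thumbnail((150, 225))  # resizes in place, keeping aspect ratio

    # 2. Noise: subtle artifacts that interfere with face encoders.
    pixels = np.asarray(img).astype(np.int16)
    noise = np.random.randint(-25, 26, pixels.shape)
    img = Image.fromarray(np.clip(pixels + noise, 0, 255).astype(np.uint8))

    # 3. Watermark: overlay text across the middle of the frame.
    ImageDraw.Draw(img).text((8, img.height // 2),
                             "(c) Example School - do not reuse",
                             fill=(255, 255, 255))
    img.save(dst)

degrade_for_publication("student_photo.jpg", "student_photo_protected.jpg")
```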
Finally, here's the original photo. We combine it with this random face, both at very high resolution, and use an AI face swap to replace the original face with the random one in the middle. The background is the same: your school field. The school jersey is the same, the ball is the same. It's all authentic, but the face of the student isn't. Is this a good option for external marketing? Is it better than having real images, or no images, on your public website? The context is authentic, but the identity of the face is fake. For transparency, you can even add a small copyright notice for your school at the bottom of the image and state that it's AI-modified. Then use that as an opportunity to redirect prospective families to come and talk to you, and invite them into your internal site for more information. Do you think using contextually authentic photos for external marketing is a good idea?

I think that gives everybody a basis for what they need to work with. There's a variety of technologies out there that can impair an AI's ability to replicate faces, and it's your choice to what degree you use them. There's no perfect solution, and it's going to continue to evolve. On that note, what can we do from a governance and policy point of view? James?

Back to the Top [https://vidigami.com/landing-page-stevens-coop/#units]

KEY TAKEAWAYS

* Ensure accountability and transparency.
* Collaborate across all school departments.
* Protect student privacy and rights.
* Adapt and update policies for AI.
* Provide consent options and training.

VIDEO (05:12)

SUMMARY

This section is about how schools should handle student images and personal information, especially with the rise of artificial intelligence (AI). The speaker, James Wigginton, emphasizes the importance of accountability, transparency, and protecting the rights of students (data subjects). He explains that schools need to re-evaluate their legal basis for using data because of AI's impact on how images can be used. Key steps include developing clear policies and procedures, establishing effective consent and opt-out mechanisms (even granular ones for specific images), and providing ongoing training for staff, students, and parents. The overall message is that this is an evolving landscape requiring continuous monitoring and review to ensure ethical and compliant data handling.

Read Transcript

James Wigginton

So, all of this is excellent, and these are the kinds of things we should be talking about. But what we always want to be doing is demonstrating our accountability. These are great conversations to have, and they're not a single person's conversation. If you are in marketing, then your data protection officer, data protection lead, compliance lead, everyone should be talking about these things, because the brand image of the school is very important, but we also want to make sure we're protecting the rights of our data subjects, students in this case. And certainly through a privacy lens, because the law already typically says we need to be very careful with the use of personal data or PII, and images, as we know, are one form of it.

So, where AI has changed the landscape, we should really be readdressing our legal basis, and we'll talk about that in some subsequent slides. Even we've evolved at 9ine, and this is what we do day in, day out. Whether we're going for consent or legitimate interest (we'll talk about that if you're not familiar with it), we need to readdress the risk landscape. We know AI is a factor now. We know images can be used in ways they couldn't two or three years ago, when you may first have thought about your legal basis. We want to make sure we've thought about that and justified it, okay?
And you’ll have different mechanisms depending on where you are from the world. So it’s a bit specific based on your jurisdiction. But this is a good conversation to have. Once we’ve kind of thought about legal basis, what we should then be moving to is we should be considering—if we want to click to the next one, Mandy—we should be thinking about our policies and procedures. And this is all very important because this is transparency. So whether we’re thinking about policies, procedures, or you call them notices, we want to make sure that we’re always showing that to our data subjects, our parents, our students, our school community, our alumni, any of these individuals. Because if we’ve thought this through and we’ve taken that approach, and like Mandy said, there’s very rarely a right answer. It’s the best answer we can form given the changes right now. Our policies, our procedures need to support that. So our policy should articulate to our data subjects why we do these things, how we’ve chosen to do these things so we can comply with law, but also because we want to have transparency. We want to be trusted, okay? It’s an important relationship to have with your school community. And then our procedures should support our staff. So we should understand how to do these things. So, you know, going back to Mandy’s examples, that needs to be our approach. If we decide we’re going to do these things, it needs to be documented so everyone can understand how to follow that approach. Once we’ve kind of got our policies and procedures all written down and clearly available to everyone, then what we want to be doing is considering the next stage. So if we go to the next one, please, Mandy, we want to be thinking about our consent or opt-out mechanisms, okay? So legal basis will typically determine what you’ve chosen as to make sure that you are getting consent or if you’re doing legitimate interest. But then we need to be able to understand how we’re facilitating that. So it really doesn’t matter what mechanism you go for on legal basis. Ultimately, we need to be able to fulfill when people want to withdraw consent or if they want to opt out, okay? And that can be difficult, that can be challenging. We see that often as quite the administration burden. We want to make sure that we’ve got really effective ways of doing that so that if someone does not want their images used anymore, we can respond or potentially even be more granular than that and actually understand that maybe some images people don’t want used over other images. You know, I give this example to the schools I work with. You know, how many times do people spend five, six takes to take a selfie because they didn’t like the first few images, right? So we need to appreciate that actually, just because we’ve got a bank of images, maybe someone doesn’t want a particular image shared. And especially when we’re dealing with students because we know that, you know, it’s a sensitive age and their mental well-being is very important to us. So maybe we need to even start to consider more granular consent opt-outs with, you know, “You can use my image, but I don’t want you to use this image.” So we need to really think about that and how we’re kind of evolving that conversation. And then once we’ve kind of made sure that we’ve got that logged, we’re thinking about transparency and documentation. Okay? So do we have all the right things? Does everyone know how to act in the manner that we expect? 
Once we've got that logged, we think about transparency and documentation. Do we have all the right things in place? Does everyone know how to act in the manner we expect? And ultimately, are we training people as well? We do need to train staff; it's an evolving landscape, and things are changing very quickly. Some of the schools I work with are even doing training with students, making them aware, and training with parents in the school community, so that everyone understands you've approached this ethically and, from a compliance perspective, you're adhering to your requirements. Then we want to articulate that back out, because we do take these things seriously and we understand that your image is important to you.

And then, of course, we want to regularly monitor and review. If we step back two years, this wasn't a problem; it wasn't something we were really thinking about. Now we are, and AI will keep getting better. That will bring great benefits, it will help us even more in our roles, and it may even give us better solutions like the ones Mandy suggested, but it also means the people who want to do harm will have better tools as well. So we're going to have to review this all the time. It's a moving target; it's not going to stay still anytime soon.

Back to the Top [https://vidigami.com/landing-page-stevens-coop/#units]

KEY TAKEAWAYS

* Obtain consent before using photos.
* Provide clear opt-in and opt-out.
* Emerging technologies increase privacy risks.
* Photo removal from internet is challenging.
* Extra safeguards required for minors.

VIDEO (03:12)

SUMMARY

This section talks about how schools use photos of students and the rules they follow. It explains that getting permission (consent) is the best approach, but new technology like deepfakes makes things riskier. Sometimes schools may rely on another basis called "legitimate interest," but they must let people say no and do extra safety checks. The speaker also notes that it's hard to remove photos from the internet once they're out there, and that schools need to be especially careful with children's photos. Schools must balance safety, permission, and the rights of students when using images.

Read Transcript

James Wigginton

Interestingly, most of you will probably operate under a consent model, which is very good. It's the gold standard: we seek explicit permission from our data subjects, parents and students, to use their images, along with some kind of digital agreement that we can track and evidence. Now, what's happened recently is that we know there's additional risk on a website, for example: people can take images, put them through a deepfake generator, and cause potential harm. So we need to consider: if someone withdraws consent, can we actually take that photo down? You can remove it from your website, for sure, but can you remove it from the internet? That's quite difficult, with indexing and so on. So there's a case to be made that consent will sometimes have to evolve, and we have to think about different legal bases, because consent requires you to really try to make sure that image is gone, that it's not out there anymore. Legitimate interest is a model we're now considering, if it fits your jurisdiction, because AI has changed the risk landscape.
Legitimate interest is slightly different: you can use images without explicit consent, but you must give people the ability to opt out, and you must carry out additional risk assessments to understand how you can reduce the risk. With consent, we know the advantages. It's clear, explicit, and very transparent; we seek it in advance; and the data subject has control over the image because they can say yes or no right at the start. We also get transparency and accountability through the consent process. However, it does lead to a bit of a yes-or-no scenario: people say no to everything or yes to everything, and sometimes that's a problem, especially as risk profiles increase. If it became more common for criminal organizations to use deepfakes in that way, a yes-or-no mentality could mean many more withdrawals of consent. So offering more detailed choices around photos might be really important. The challenge is that, unless you have a technology solution or a lot of resources, doing this is an administrative burden. Withdrawal of consent, and the removal obligations that come with it, are important issues: if someone withdraws their consent, we have to make sure we've removed that image, and that can be quite difficult because it may already be indexed and still out there. With children, we need to be especially conscious of that.

If we go to legitimate interest, Mandy, the advantages are that it's a simplified process. We're saying, "We can use your image, but you need to opt out if you don't want us to." It gives you more flexibility and practicality, as you can use images for business purposes as long as you have a good reason. But the challenge is balancing rights and interests. This is not a perfect situation; we can't just move to legitimate interest. We have to do an assessment and consider everything we could do to protect that personal data, these images.

Back to the Top [https://vidigami.com/landing-page-stevens-coop/#units]

KEY TAKEAWAYS

* Have legitimate interest to use.
* Always protect all photos.
* Make opting out easy.
* Prevent wrong photo sharing.
* Don't keep photos forever.

VIDEO (02:07)

SUMMARY

This section focuses on how schools should manage and protect images of students, given the increasing risks from new technologies. It introduces the "legitimate interest assessment" as a way for schools to justify using images, particularly when it's hard to remove photos from the internet after consent is withdrawn. It emphasizes secure storage, controlled access (like gated portals), and effective ways for people to opt out of image usage. It also highlights the importance of auditing image use, preventing unauthorized sharing, customizing permissions, and adhering to rules about how long data (images) can be kept. The overall message is that schools must continuously adapt their practices to protect images while still promoting their community.

Read Transcript

James Wigginton

So, this is called a legitimate interest assessment. Some of you may have heard of them; some may not.
For example, if we were going to move to legitimate interest as our legal basis, because we consider it may be very difficult to fulfill a withdrawal of consent (we can't stop the images that are already out there on social media, for example, as well as on your website), then we need to really think about how we're going to put protections around those images. And to be honest, whether it's consent or legitimate interest, these are things we should be thinking about anyway, because we really do want to protect them.

So: do we know our images are securely stored and have access controls? Gated portals, gated technologies, something we can put protection around so those images are stored safely. That may be how we need to rethink things a little. Do we have really good opt-out management? That can be tricky, but do we want to give people the ability to be more selective about what we use and what we don't? Can we audit this? If we know images can be used for harm, can we feel confident that we can audit usage and demonstrate our accountability? Can we restrict external sharing? We need to be confident people aren't going to share the wrong images, or share them with people they shouldn't. One of the most common data breaches we see is people sending the wrong data to the wrong person, and it happens with images too. Can we customize permissions for image usage? And can we make sure we adhere to data retention and deletion? If you have a records retention schedule (and privacy law typically requires that you don't hold data for too long), how do we manage that? How do we make sure we're not using images longer than we've committed to our data subjects?

These are all important things to think about right now, because the risk landscape has increased. We should be asking whether there are things like this we could do, not to stop us promoting the brand or the school community, but to protect these very important images in ways we maybe haven't considered before.

Back to the Top [https://vidigami.com/landing-page-stevens-coop/#units]
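The retention point in James's checklist lends itself to automation. A minimal sketch of checking a photo store against a records retention schedule; the categories, periods, and metadata fields here are assumptions for illustration, not legal guidance:

```python
from datetime import datetime, timedelta

# Hypothetical schedule: how long each usage category may be kept.
RETENTION = {
    "marketing": timedelta(days=2 * 365),
    "yearbook": timedelta(days=7 * 365),
}
DEFAULT_PERIOD = timedelta(days=365)  # conservative fallback for unknown categories

def overdue_photos(photos: list[dict], now: datetime | None = None) -> list[str]:
    """Return IDs of photos held longer than their schedule allows."""
    now = now or datetime.now()
    return [
        p["id"]
        for p in photos
        if now - p["uploaded_at"] > RETENTION.get(p["category"], DEFAULT_PERIOD)
    ]

photos = [{"id": "p1", "category": "marketing",
           "uploaded_at": datetime(2021, 9, 1)}]
print(overdue_photos(photos))  # ['p1'] once the two-year period has passed
```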
KEY TAKEAWAYS

* Set and teach clear rules.
* Follow privacy and copyright laws.
* Give people control of data.
* Use one school-wide system.
* Secure ecosystem.

VIDEO (02:02)

SUMMARY

This section emphasizes a three-part approach to managing data and images in schools. First, it highlights the need for clear "rules of engagement" and education for everyone involved. Second, it explains that governance should be school-centric: rules must align with the school's values, data privacy laws, and intellectual property rights, while also allowing for individual family preferences regarding image protection. Third, it stresses that the system managing this data must be platform-centric, acting as broad infrastructure that enforces school policies across all digital applications (like learning systems and websites) while supporting the growing use of data within the school's entire digital ecosystem.

Read Transcript

Mandy Chan

So, with this in mind, I've tried to capture what we've been talking about: you need to establish the rules of engagement and add education to them. That gives us what we can do, what we should do, and how. The governance part is school-centric. It deals with your data privacy and regulatory obligations, your copyright and intellectual property requirements, and it takes into account your school's values and principles, what's right for your school. You might have a school where the families are very conservative, so you want to give them the ability to protect their images more; or you might have just one parent who's very conservative and needs more control over their consent. The second part is individual-centric, because it is the person's right to control their data, and you have to give them a mechanism to do that. The third part is the program itself, which has to be platform-centric. Unlike many applications that are very classroom-centric, you need to think about this problem more broadly. It's infrastructure you're putting in place: it gives you the ability to enforce compliance with your school policy, while at the same time supporting the growing use of this data by all the different applications in your ecosystem, whether they're learning management systems, portfolios, websites, or whatever else needs access to this content.

Back to the Top [https://vidigami.com/landing-page-stevens-coop/#units]

KEY TAKEAWAYS

* Keep photos and data safe.
* Only approved users get access.
* Set and automate privacy controls.
* Flag and protect photo rights.
* Remind users of good habits.

VIDEO (03:35)

SUMMARY

This section focuses on the critical aspects of managing and protecting images within a school's digital environment. It emphasizes that secure storage is fundamental and that accountability requires knowing who is uploading and downloading photos, ensuring only authenticated users have access. It highlights the need for clear user agreements that define personal use rights for shared photos. A key point is the importance of a system that lets users flag inappropriate photos and manage their privacy rights, including granular consent levels for how their images are used (e.g., not for social media, marketing, or yearbooks) and the ability to opt out of facial recognition. Such a system must be automated, given the complexity of managing individual preferences, while respecting intellectual property rights (like copyrights and watermarks) and continuously educating users on responsible image handling.

Read Transcript

Mandy Chan

Storage and data processing, I think, are key. That's just security; that's foundational. The respect and accountability we're talking about involve making sure there's no anonymity among the individuals uploading and downloading photos, and that members can access content only because you've authenticated them.
You must ensure there's a user agreement with your end users, where they acknowledge the platform's terms of use and grant personal use rights over the photos they share with each other. This also means agreeing that they have the right to share the photos they're offering up on the platform. Because it's so individual, you need a mechanism for users to flag a photo, for whatever reason. When James mentioned how we all take selfies and there's maybe one in ten we think actually represents us well enough to share, you need the ability to remove the other nine, so to speak. A robust system that lets you be inclusive while still managing access is really key. How much access? Those are your permissions: who can upload, who requires moderation, who can view.

Then you need privacy rights management. These are usually the consent levels, such as: "I don't want my photo shared on social media. I don't want it used in marketing. I don't want it used in the yearbook. I don't want to be part of this at all, so opt me out. If a photo has me in it, I want it unshared with everybody else in the community, regardless of who uploaded it." All of this needs to be automated; otherwise you just can't manage it, it's too onerous. "I don't want facial recognition to run over my image and automatically identify me": I should have the right to ask for that, you should be able to enforce it, and everybody should know what those consents are where relevant. The platform also needs to respect intellectual property rights. If it's my image, I should be able to indicate a copyright over it, and if I don't want it replicated easily, I should be able to put a watermark on it to prevent that.

This last part we categorize under education. It's a consistent way to remind everybody what they should do, not just what they can't do. When they download an image, remind them of their personal use rights: the media being downloaded is not for public distribution. Remind them that if there's a problem, they can flag it, tell us why, and an admin can review and either restore it or remove it from the system. That's just respect.

Back to the Top [https://vidigami.com/landing-page-stevens-coop/#units]
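The automation Mandy insists on can be reduced to a simple rule: before a photo goes to a channel, check every tagged student's preferences, and let a single opt-out block the photo. A minimal sketch; the flag names mirror her examples and are otherwise hypothetical:

```python
# Per-student opt-out flags, keyed by student ID.
PREFS: dict[str, set[str]] = {
    "student-07": {"no_social", "no_face_recognition"},
    "student-12": set(),
}

def can_publish(tagged_students: list[str], channel: str) -> bool:
    """channel: 'social', 'marketing', or 'yearbook'.
    One opt-out among the tagged students blocks the whole photo."""
    flag = f"no_{channel}"
    return all(flag not in PREFS.get(s, set()) for s in tagged_students)

print(can_publish(["student-07", "student-12"], "social"))  # False
print(can_publish(["student-12"], "yearbook"))              # True
```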
KEY TAKEAWAYS

* Every photo matters, even blurry.
* AI can mislabel student faces.
* Centralize all photos.
* System must protect student privacy.
* Inclusivity.

VIDEO (02:27)

SUMMARY

This section discusses the challenges and solutions of managing a photo library, particularly one containing student images. It highlights that while AI can assist with photo recognition, it isn't always accurate, leading to "mistags" that parents dislike. It emphasizes the need for a verification mechanism to correctly identify individuals, especially given factors like siblings, aging, and photo quality issues (e.g., motion blur). The core idea is to empower the school community with more control over image tagging and management. The goal is a robust, inclusive central photo library that can easily be used for school purposes from websites and yearbooks to presentations and digital signage, while ensuring student images are properly managed and protected throughout their school journey.

Read Transcript

Mandy Chan

So, regardless of whether a photo is great because it came from a professional photographer, or it's blurry, we want to be able to see it. In a perfect world, AI would automatically recognize every photo that gets uploaded. In practice, it doesn't always work, and parents have no tolerance for mistags of their kids. In this image you see eight photos of my two boys. They look similar, but only half are tagged right; the other half are his brother. And this is very common. You need a verification mechanism to tackle issues like siblings; aging (kids can look very different over time, especially boys between 10 and 13, who seem to become young men overnight); and motion blur from sports, costumes, and so on. So, treat tagging as part of engagement. You're empowering your community with more control than ever to manage and protect their kids' images, as well as to collect them. And when you collect those images over time, it's amazing: you're visually documenting a student's entire school journey, from the first day of school through graduation.

At the end of the day, what you want is a photo library that feeds all the different things you want to do as an educator and as a school: finding images for your website, pulling them into Canva to build the yearbook, using them in PowerPoint, putting them up on digital signage. These are all common applications of your photos, and everybody has a different reason to use them. Putting a system like this in place, inclusive yet managed, lets us get the most out of the way we want to use photos at a school.

Back to the Top [https://vidigami.com/landing-page-stevens-coop/#units]

KEY TAKEAWAYS

Schools need to prioritize clear, understandable permission forms for data collection, especially concerning new technologies like AI. This includes explaining risks, offering opt-out options, and regularly updating policies. For older students, dual consent (from both student and parent) is recommended. Education about AI and its potential risks (like bias and deepfakes) is crucial to foster understanding and trust.

VIDEO (10:24)

SUMMARY

This Q&A section covers a discussion among school privacy and AI experts about how schools should handle permission and waiver forms in a world where privacy laws and technology are always changing. They talk about making forms easy to understand, letting people change their minds, and updating forms regularly. The experts also discuss the risks of new technology like AI, including deepfakes and privacy issues, and stress the importance of teaching students and parents about these risks. They recommend getting both parent and student permission as kids get older, and note that most schools are still figuring out how to handle AI-generated images and videos.

Read Transcript

Renee Ramig

So the first question is: what are the best practices for waiver and permission forms, given the changing privacy landscape?

James Wigginton

I guess that's more of a me question, isn't it?
The big thing, first of all, is accessibility. We want to make sure that when we design these forms, it's really easy for our data subjects (for 9ine, that's our parents and our students) to understand what we're asking of them. We want to remove those barriers. We also want to ensure it's a manageable system: if we're going to give people the ability to give consent, or use a different legal basis, we need the mechanism to withdraw it as well, and we need to be able to manage that. It was interesting, when those incidents occurred in the UK, a lot of the schools I worked with were asking, "Should we tell parents about what's happening right now, so they can opt out, so they can give that kind of consent?" That's the right approach, but it's quite difficult, because you typically deal with consent at a certain time of year, with your admission cycle. So are there things like Mandy's talked about: can we do it more dynamically, in real time? It's about exploring those options while making sure we're covered from a legal perspective. Of course we want to understand our privacy law and information rights and articulate them in our consent forms, but then it's about keeping an eye on them, because things do change.

A quick example: many of the schools we work with didn't say things like, "If we use your photo on our website, we lose control of it afterwards." It's about being that clear, because we want people to give informed consent. So it's thinking about those risks and articulating them back out, so people actually know what they're agreeing to. Those are my top tips. As for how you do it: a technology solution helps, because doing it manually is quite difficult; I hear that a lot at the schools I work with, and it's challenging at certain times of year. A manual mechanism is fine as long as it's reliable and has good human checks and balances, but as time goes on it gets harder and becomes more of an administrative burden. Anything that takes that burden off people and makes the process more efficient is a good way to think about it.

Josephine Yam

Yes, I'd like to build on what James mentioned. As we know, the law always lags behind technology, and AI is changing almost every day, so people get paralyzed. It's so complex, and there's already this fear of AI; even the question of whether to use it paralyzes a lot of people. The only way to overcome the fear of AI, and of what bad actors can do with it, is to transform fear into fluency through education: understanding what AI can do and what it cannot do. There is already a plethora not only of privacy risks but of AI ethics risks as well, because data is the fuel of AI. Privacy is one risk, but the other implications of AI include bias and discrimination, transparency, and explainability. So there needs to be constant education, because, as we know, AI without ethics is like a car without brakes. We need to understand how to keep up to date with developments in this AI world.
Renee Ramig

About how often should permissions and waivers be reviewed and updated?

James Wigginton

Yeah, that's a good question. From a legal perspective, you're entitled to set a review cycle, and more often than not it will follow your admission cycle. That's the reality: that's when new parents are joining the school with their children, and you've got re-enrollment, so it's a very good time. Before then, we want to redo consent, refresh our notices and policies, and make sure the law hasn't changed and the risk profile hasn't shifted with the AI landscape. But there will, unfortunately, be events that trigger new risks. I keep going back to it, but when those UK schools were hit by deepfakes, parents asked questions. They'll want to know whether this is a problem they need to be concerned about. So we also need the ability to review throughout the year: if issues come up that change the way people think, we'll want to respond to our school community. It might not be your school; it could be a school in your area, but people will talk about these things, and we live in a world of social media, so there are always influencers talking about them too. I've seen it myself. So I'd say annually as a minimum, because things change, but it's good to be adaptive: risks may materialize, you may be exposed to one, and then we need to be able to tell our parents how we're dealing with it.

Josephine Yam

AI keeps changing. When ChatGPT was introduced in November 2022, everybody was caught unaware: all of a sudden, children were using ChatGPT to do their homework, and then they were manipulating pictures. Take the story in New Jersey, where a 12-year-old male classmate took pictures of 20 of his female classmates from Instagram, ran them through "nudify" apps to create deepfakes, and put them online. The school could not do anything about it, because it had no policies in place to address that; it was totally caught off guard. One of those teenagers, Francesca Mani, became the voice for getting laws in place, and New Jersey now has a law making deepfakes a crime, joining 27 other US states in criminalizing them. What's really interesting is they found that with some children it's just a lack of literacy: they do it as a joke, to make a meme, and they don't know how much harm it can cause. The ethical rights and dignity of their classmates can be impaired, and that can cause mental health issues. So education is absolutely critical, especially in this changing world of AI.

Renee Ramig

The next question is: what happens if a family opts in and the student opts out? Do students have the right to opt out, and at what age?

James Wigginton

Okay, so this is a privacy question first.
When it comes to opting out of photo usage, or any personally identifiable information, it legally depends on the age and on your data protection law. In most jurisdictions, for example under the GDPR in the EU or COPPA in the US, if a student is under a certain age (13 in the US, 16 in the EU), parents typically provide that consent. So from a data privacy perspective, it's going to be the parent; there isn't a mechanism for students to give consent themselves unless they're above those ages. So the parents have that right. But, and I'm sure Josephine will have thoughts on this as well, ethically we may want to start thinking a little differently, because ultimately we are creating a digital footprint for students when we put them out there, whether on social media or on our website, and we're making a lot of decisions about their digital image, their digital avatar, as we sometimes call it. So if they're of a certain age, perhaps there's a blended approach: yes, we've got the privacy aspect and the law that says a parent can consent, but as a matter of principle your school may want to give students some say in what they're happy to have shared about themselves. I'm sure, Josephine, you've got views on this too.

Josephine Yam

Absolutely, I'd love to add to that, James. Legally speaking, there are different ages for different jurisdictions. But in terms of best practice, because we work with educational institutions, the best practice is dual consent, especially if the child is older. When the child hits something like 13 or 14, there's a recognition of their autonomy, of being able to control their identity. Privacy, the right to privacy, is a gateway to other rights, and one of those is being able to present ourselves to the world on our own terms. So when these teenagers of ours want to control their identity and only put out pictures that they think represent them well to the world, we should give them that agency and autonomy. That goes back to respecting their right to human dignity as well. So a very important aspect to consider, as James mentioned, is that hybrid: dual consent may be a good idea, especially as they get older.

Renee Ramig

What do school policies about AI-generated images and videos look like?

James Wigginton

Hmm, interesting. I work with hundreds of schools, and although the poll was interesting (you've done training on this kind of thing, so I was very impressed), I don't think many schools have quite got that far yet. A lot of the work I'm doing with schools is on AI policies around how AI will be used in the classroom, seen through a privacy lens: making sure we train staff and students not to put personal data into AI engines, because we don't always know how those engines work or what they'll do with that personal data. When it comes to AI-generated images, whether that's something we'll let students choose or staff manipulate with AI, I think that's going to be an evolving discussion.
I imagine it will be talked about more and more. Mandy covered this quite well at the start of the webinar, and like everything, there will be benefits and risks. At the individual level: does the individual have the right to change their own images? People use filters, these kinds of things. But from a group photo perspective, how is that going to work? How do you manipulate a group photo without impacting the other people in the image? So it's a tricky one, and it will evolve like everything else. At the moment it's not something I'm seeing so much, but I can imagine it will catch up with us quite quickly and become a topic of conversation.

Have a great day, everybody. Thank you so much. Cheers. Bye-bye.

Back to the Top [https://vidigami.com/landing-page-stevens-coop/#units]

DO YOU WANT TO SEE A LIVE DEMO?

---

Generated by SignalToAI v1.0.25
For more information: https://vidigami.com/llms.txt