BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//swoogo.com//NONSGML kigkonsult.se iCalcreator 2.41.90//
CALSCALE:GREGORIAN
UID:63363431-3533-4764-b131-303233323831
BEGIN:VEVENT
UID:b495d977d176c7aaed703e0d443f2432e62aa931@swoogo.com
DTSTAMP:20260419T232759Z
DESCRIPTION:All is not always as it appears – especially in the realm of el
 ectronically-stored information.  It has always been possible for ESI to b
 e deliberately altered or entirely fabricated\, but in recent years\, new 
 machine learning techniques have made it possible for computers to produce
  a tsunami of fake images\, fake audio recordings\, and fake video recordi
 ngs with disturbing quality and ease.  First came “deepfakes” that enabled
  amateurs using personal computers to replace the original faces or voices
  in video or audio recordings with new ones\, using publicly-available mac
 hine-learning algorithms trained using online source media.  Next came clo
 ud-based generative AI tools\, which allow for images and videos to be gen
 erated automatically from text prompts.  This looming tsunami of artificia
 l content will likely pose real challenges to our current discovery proces
 ses\, from new opportunities to inadvertently rely on something altered or
  unreal\, to new opportunities for intentional spoliation or fabrication\,
  to new technical and legal authentication challenges.  In this program\, 
 a group of expert practitioners will discuss these new challenges and how 
 we can prepare for them.\n\n	* Learn the difference between deepfake tools\
 , large language models\, and other generative AI content sources\n 	* Gain
 an understanding of the potential risks of such materials in discovery a
 nd how you can work to mitigate them\n 	* Consider how the existence of the
 se technologies may undermine even legitimate video and audio evidence\n\n
 [cle.png]
DTSTART:20250326T180000Z
DTEND:20250326T190000Z
LAST-MODIFIED:20260419T232759Z
LOCATION:Rendezvous Trianon
SEQUENCE:0
STATUS:CONFIRMED
SUMMARY:When You Can’t Believe What You See: The Rise of Deepfakes and Gene
 rative AI
TRANSP:OPAQUE
X-ALT-DESC;FMTTYPE=text/html:<p style='margin-left:15px\;'>All is not alway
 s as it appears – especially in the realm of electronically-stored informa
 tion.  It has always been possible for ESI to be deliberately altered or e
 ntirely fabricated\, but in recent years\, new machine learning techniques
  have made it possible for computers to produce a tsunami of fake images\,
  fake audio recordings\, and fake video recordings with disturbing quality
  and ease.  First came “deepfakes” that enabled amateurs using personal co
 mputers to replace the original faces or voices in video or audio recordin
 gs with new ones\, using publicly-available machine-learning algorithms tr
 ained using online source media.  Next came cloud-based generative AI tool
 s\, which allow for images and videos to be generated automatically from t
 ext prompts.  This looming tsunami of artificial content will likely pose 
 real challenges to our current discovery processes\, from new opportunitie
 s to inadvertently rely on something altered or unreal\, to new opportunit
 ies for intentional spoliation or fabrication\, to new technical and legal
  authentication challenges.  In this program\, a group of expert practitio
 ners will discuss these new challenges and how we can prepare for them.</p
 >\n\n<ul><li style='margin-left:15px\;'>Learn the difference between deepf
 ake tools\, large language models\, and other generative AI content source
 s</li>\n	<li style='margin-left:15px\;'>Gain an understanding of the potent
 ial risks of such materials in discovery and how you can work to mitigate
  them</li>\n	<li style='margin-left:15px\;'>Consider how the existence of t
 hese technologies may undermine even legitimate video and audio evidence</
 li>\n</ul><p><img alt='cle.png' src='https://amegocontent.com/temp/cle.png
 ' /></p>
BEGIN:VALARM
UID:64633933-6135-4463-b432-353766333633
ACTION:DISPLAY
DESCRIPTION:All is not always as it appears – especially in the realm of el
 ectronically-stored information.  It has always been possible for ESI to b
 e deliberately altered or entirely fabricated\, but in recent years\, new 
 machine learning techniques have made it possible for computers to produce
  a tsunami of fake images\, fake audio recordings\, and fake video recordi
 ngs with disturbing quality and ease.  First came “deepfakes” that enabled
  amateurs using personal computers to replace the original faces or voices
  in video or audio recordings with new ones\, using publicly-available mac
 hine-learning algorithms trained using online source media.  Next came clo
 ud-based generative AI tools\, which allow for images and videos to be gen
 erated automatically from text prompts.  This looming tsunami of artificia
 l content will likely pose real challenges to our current discovery proces
 ses\, from new opportunities to inadvertently rely on something altered or
  unreal\, to new opportunities for intentional spoliation or fabrication\,
  to new technical and legal authentication challenges.  In this program\, 
 a group of expert practitioners will discuss these new challenges and how 
 we can prepare for them.\n\n	* Learn the difference between deepfake tools\
 , large language models\, and other generative AI content sources\n 	* Gain
 an understanding of the potential risks of such materials in discovery a
 nd how you can work to mitigate them\n 	* Consider how the existence of the
 se technologies may undermine even legitimate video and audio evidence\n\n
 [cle.png]
TRIGGER:-PT15M
END:VALARM
END:VEVENT
END:VCALENDAR
