Today is International Fact-Checking Day. Time to refresh your AI-spotting skills.
AI-generated content is everywhere, making it increasingly hard to separate fact from fiction, especially when it comes to breaking news.
Take the Iran war. Since the United States and Israel attacked Iran on Feb. 28, researchers have identified an unprecedented volume of AI-generated false and misleading imagery that has already reached countless people around the world. It includes fake videos of bombings that never happened, images of soldiers said to have been captured, and Iranian-made propaganda videos depicting U.S. President Donald Trump and others as blocky, Lego-like miniature figures.
Today marks the 10th annual International Fact-Checking Day, a good occasion to take stock of these evolving challenges.
Misinformation created with AI is being shared from countless sources at an unprecedented pace. From the start of the Iran war, accounts on every side of the conflict have been promoting such content.
The Institute for Strategic Dialogue, which tracks misinformation and online extremism, has been reviewing social media posts from the Iran war. Among its findings: a group of accounts on X (formerly Twitter) that regularly post AI-generated content and together racked up more than 1 billion views after the conflict broke out. That reach came from roughly two dozen accounts, many of them carrying blue-check verification.
Here are some tips for telling AI-generated content from reality online, even as doing so keeps getting harder.
When AI-generated images first began spreading widely online, there were often obvious tells that could identify them as fabricated. Perhaps a person had too few — or too many — fingers or their voice was out of sync with their mouth. Text may have been nonsensical. Objects were frequently distorted or missing key components. As the technology continues to evolve, these clues aren’t as common as they once were, but it’s still worth looking for them. Watch for inconsistencies such as a car that is in a video one moment and gone the next or actions that aren’t possible according to the laws of physics. Some images may also be overly polished or have an unnatural sheen.
Seek out a source
AI-generated images get shared over and over again. One way to determine their authenticity (or lack thereof) is to hunt for their origin. Using a reverse image search is a simple way to do this. If you’re looking at a video, take a screenshot first. This can lead to a social media account that specifically generates AI content, an older image that is being misrepresented, or something entirely unexpected.
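To see why a reverse image search can match a recirculated picture back to its origin, consider a perceptual "average hash": visually similar images produce nearly identical fingerprints even after mild edits like brightening or recompression, while unrelated images do not. The sketch below is a toy illustration only, not the method any actual search engine uses; the `average_hash` and `hamming` helpers are hypothetical names.

```python
def average_hash(pixels, size=8):
    """Fingerprint a grayscale image (a list of rows of 0-255 ints).
    Assumes the image's width and height are multiples of `size`."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for r in range(size):
        for c in range(size):
            # Average the block of pixels that maps onto this cell.
            block = [pixels[i][j]
                     for i in range(r * h // size, (r + 1) * h // size)
                     for j in range(c * w // size, (c + 1) * w // size)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    # One bit per cell: 1 if the cell is brighter than the overall mean.
    return sum(1 << i for i, v in enumerate(cells) if v > mean)

def hamming(a, b):
    """Count differing bits; a small distance suggests the same picture."""
    return bin(a ^ b).count("1")

# A 16x16 gradient, a slightly brightened copy, and an unrelated image.
original = [[i * 16 + j for j in range(16)] for i in range(16)]
brightened = [[min(255, p + 10) for p in row] for row in original]
different = [[(15 - i) * 16 + (15 - j) for j in range(16)] for i in range(16)]

print(hamming(average_hash(original), average_hash(brightened)))  # small
print(hamming(average_hash(original), average_hash(different)))   # large
```

Real search engines index far more robust fingerprints at scale, which is why a screenshot of a single video frame is often enough to surface the original post.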
Listen to the experts
Look for multiple verified sources that can help authenticate the image. For example, that can mean a fact-check from a reputable media outlet, a statement from a public figure, or a social media post from a misinformation expert. These sources may have more advanced techniques for identifying AI-generated content or access to information about the image that is not accessible by the general public.
Make use of technology
There are many AI detection tools that can be a helpful place to start, but be wary: they are not always correct in their assessments. Images generated or altered with AI in Google's Gemini app carry an invisible digital watermark called SynthID, which the app can detect. Other AI creation tools add visible watermarks to the content they generate, but these are often easy to remove, meaning the absence of a watermark is not proof that an image is genuine.
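Beyond dedicated detectors, one quick first check is whether a file's own metadata names a generator. The sketch below, using only Python's standard library, reads the tEXt metadata chunks defined by the PNG specification; the demo file and its "Software" tag are fabricated here for illustration. As with visible watermarks, metadata is trivial to strip, so its absence proves nothing.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data):
    """Return (keyword, value) pairs from a PNG's tEXt metadata chunks."""
    assert data[:8] == PNG_SIG, "not a PNG file"
    pairs, pos = [], 8
    while pos < len(data):
        length = struct.unpack(">I", data[pos:pos + 4])[0]
        ctype = data[pos + 4:pos + 8]
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":  # keyword, NUL separator, then the text
            key, _, value = body.partition(b"\x00")
            pairs.append((key.decode("latin-1"), value.decode("latin-1")))
        pos += 12 + length  # 4-byte length + 4-byte type + data + 4-byte CRC
        if ctype == b"IEND":
            break
    return pairs

def _chunk(ctype, body):
    """Assemble one PNG chunk with its CRC."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

# Build a minimal 1x1 grayscale PNG tagged by a made-up generator.
demo = (PNG_SIG
        + _chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
        + _chunk(b"tEXt", b"Software\x00HypotheticalImageGen 1.0")
        + _chunk(b"IDAT", zlib.compress(b"\x00\x00"))
        + _chunk(b"IEND", b""))

print(png_text_chunks(demo))  # [('Software', 'HypotheticalImageGen 1.0')]
```

SynthID, by contrast, is embedded in the pixels themselves rather than in metadata, so it cannot be read this way.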
Slow down
Sometimes it’s just about going back to basics. Stop, take a breath and don’t immediately share something you don’t know is real. Bad actors often count on people letting their emotions and existing viewpoints guide their reactions to content. Looking at the comments may provide clues about whether the image you’re looking at is real or not; another user might have noticed something you didn’t or been able to find the original source. Ultimately, though, it’s not always possible to determine with 100% accuracy whether an image is AI-generated, so remain alert to the possibility it might not be real.
See something that looks false or misleading? Email us at [email protected].
Find AP Fact Checks here: