Watermark for LLM-Generated Text
Researchers at Google have developed a watermark for LLM-generated text. The basic idea is fairly obvious: the LLM chooses between tokens partly based on a cryptographic key, and someone with knowledge of the key can detect those choices. What makes this hard is (1) how much text is required for the watermark to work, and (2) how robust the watermark is to post-generation editing. Google’s version looks pretty good: it’s detectable in text as short as 200 tokens.
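A minimal sketch of what keyed token selection can look like, assuming a simplified “green list” style scheme rather than Google’s actual SynthID tournament-sampling algorithm: a secret key plus the recent context pseudorandomly marks part of the vocabulary as “green,” generation is nudged toward green tokens, and a detector holding the key counts how often tokens land in the green set. All names, parameters, and the toy uniform “model” below are illustrative assumptions.

```python
# Toy keyed watermark: bias sampling toward a key-dependent "green" token set,
# then detect by counting green tokens. Not Google's SynthID; an illustration only.
import hashlib
import hmac
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary
GREEN_FRACTION = 0.5                      # fraction of vocab marked "green"
BIAS = 4.0                                # logit boost applied to green tokens

def green_set(key: bytes, context: tuple[str, ...]) -> set[str]:
    """Pseudorandomly pick the 'green' tokens from the key and the previous token."""
    seed = hmac.new(key, " ".join(context[-1:]).encode(), hashlib.sha256).digest()
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(GREEN_FRACTION * len(VOCAB))))

def generate(key: bytes, n_tokens: int) -> list[str]:
    """Sample from a toy (uniform-logit) model, with green tokens boosted by BIAS."""
    out: list[str] = []
    rng = random.Random(0)
    for _ in range(n_tokens):
        greens = green_set(key, tuple(out))
        weights = [math.exp(BIAS) if t in greens else 1.0 for t in VOCAB]
        out.append(rng.choices(VOCAB, weights=weights, k=1)[0])
    return out

def detect(key: bytes, tokens: list[str]) -> float:
    """z-score: how far the green-token count exceeds the no-watermark expectation."""
    hits = sum(1 for i, t in enumerate(tokens)
               if t in green_set(key, tuple(tokens[:i])))
    n = len(tokens)
    expected = GREEN_FRACTION * n
    stddev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / stddev

if __name__ == "__main__":
    key = b"secret watermark key"
    text = generate(key, 200)  # ~200 tokens, matching the detectability claim above
    print("with the right key, z =", round(detect(key, text), 1))   # large
    print("with the wrong key, z =", round(detect(b"other", text), 1))  # near 0
```

The two hard problems in the post show up directly in this sketch: the z-score only becomes statistically convincing once enough tokens have been seen, and an editor who rewrites or reorders tokens erodes the green-token excess the detector relies on.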
Tags: academic papers, artificial intelligence, cryptography, Google, identification, LLM
Posted on October 25, 2024 at 9:56 AM