Tech buzzword panic


EFF has run a valuable campaign on the risks of leaking metadata to third parties. If you upload a photo to a public website, for instance, the file might contain information on exactly when and where you took the picture. A stalker can make use of that information, especially if you upload photographs wherever you go. If you make a phone call or send a text message, the associated metadata may get less privacy protection than what you said or typed. The US government has claimed that warrantless searches of communication metadata, which might identify the sender, receiver, and time of a message, are OK. Unfortunately, this has led some people to think that metadata themselves (I’m standing by “data” as a plural) are evil.
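The point about files silently carrying metadata is easy to demonstrate with nothing but the standard library. Even before you get to EXIF tags inside a photo, the filesystem itself attaches metadata to every file: size, timestamps, ownership. A minimal sketch (the file here is a throwaway stand-in, not a real photo):

```python
import os
import tempfile
import time

# Create a throwaway file to inspect; in practice this could be any photo or document.
with tempfile.NamedTemporaryFile(delete=False, suffix=".jpg") as f:
    f.write(b"not really a photo")
    path = f.name

# The filesystem records metadata automatically, without the user doing anything.
info = os.stat(path)
print("size in bytes:", info.st_size)
print("last modified:", time.ctime(info.st_mtime))

os.remove(path)
```

EXIF data embedded by a camera (GPS coordinates, timestamp, device model) works the same way, just at a richer level: information attached to the content without the user ever typing it in.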

A 2014 post by Bruce Schneier, who ought to know better, was titled “Metadata = Surveillance.” You might as well say “pictures = surveillance” or even “bits = surveillance.” This became a popular slogan in computer privacy circles for a while.

As a library geek, I first became familiar with the intricacies of metadata from a different side. In archives and libraries, metadata are crucial to organizing and finding files. They can tell you an item’s subject matter, origin, format, source, copyright status, access restrictions, and much more. One of my projects, the Filk Book Index, consists of nothing but metadata about songbooks.

Information can be useful or dangerous. The fact that it’s metadata tells you nothing. When I posted a reminder of this on Bluesky in response to a somewhat one-sided EFF post, someone answered, “Metadata can tell you where someone lives, where he works, what medical condition he has, what are his political convictions, if he cheats on his partner.” True but strangely put. You could replace “metadata” with “data” in that sentence, and it would still be true and vacuous.

What’s going on here is what I’d like to call “tech buzzword panic.” People who don’t understand a technical term but hear it in a certain context see that something can be misused and rage against the term. Using a fancy word gives others the impression they know something, so they too get angry and act as if they understand it. The word “computer” itself was once used that way, as people feared that devices with 16K memory would take over the world.

Another example is “algorithm.” An algorithm is a well-defined procedure for solving a mathematical or computational problem. All coding is based on algorithms. The only alternative is to throw lines of code together on a whim and hope they work. But to many people, “algorithms” are evil things that determine what they can see on social media sites, and they demand that people write code without algorithms. Worse, some people who know better try to reassure the idiots that they don’t use any algorithms. If I believed them, I’d have to stay miles away from their bug-laden systems.
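To see how unthreatening the word is, consider that Euclid’s method for finding the greatest common divisor, more than two thousand years old, fits the definition of an algorithm exactly: a well-defined procedure that is guaranteed to terminate with the right answer. A minimal illustration (not tied to any particular social media site):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    until b reaches zero; the remaining a is the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # -> 6
```

Every feed-ranking system, spam filter, and spell checker is built from procedures of exactly this kind, differing only in complexity.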

Lately there’s been a panic over “AI.” Artificial intelligence has no clear definition. It includes chess playing, speech-to-text conversion, translation between languages, processing of natural language queries, and a lot more. Whatever is ahead of what most people think computers can do counts as AI until everyone is doing it. When ChatGPT and similar bots became available, some people got excited and others got mad. Software competes with hack writers, generating barely passable filler for less money than humans. It sometimes generates bizarre output, from made-up facts to Nazi rhetoric. Authors are concerned that these bots are taking their writing, putting it through a blender, and offering it as original material. That straddles the line between research and plagiarism and sometimes steps well over it. Most annoying to me are sites that keep asking me if I want their software to rewrite my posts. No, LinkedIn, you don’t get to generate posts and put my name on them!

But this is just one type of AI, the kind based on large language models (LLMs). AI has many other uses, some of which have become routine. It’s used in voice recognition, spam and malware detection, vehicle navigation, mobile robot operation, and a lot more. Some uses of AI are certainly dangerous and can trigger destructive actions or false accusations. Science fiction has dealt with these possibilities for decades. But the AI panic has led some to think that any use of such technologies is evil. Applied consistently, that would mean renouncing a lot of modern conveniences along with the misuses.

At Dragon Con a couple of days ago, the convention called the police because a dealer was selling AI-generated art. According to what I’ve read, that didn’t violate convention policy. Even if it did, I’ve seen no explanation of why the police were necessary. They were apparently brought in from the beginning, not because the exhibitor made any trouble. But AI-generated images are so dangerous for some reason that they have to be met with armed force. The comments I saw on one fan site were all delighted at this action.

I could talk about panics over other fancy terms, like GMO and mRNA, but computer stuff is what I know best, and this article could become a book if I kept going on other buzzwords. Whatever is in question, you have to judge your ability to grasp the issue and the qualifications of the people talking about it. Non-experts can bamboozle people by tossing around technical terms. Sometimes they even get into normally reputable publications. It can be hard to tell who the real experts are, and you might have to admit you just aren’t sure.
