May 6–The Reed-Gonzalez traveling circus will go to San Carlos on the Pacific coast to visit a friend and indulge in warm water and red wine for roughly two weeks but will renew sedition and libel on return if not caught and hanged first. Meanwhile some thoughts on artificial intelligence. Whether there is any other kind these days is an open question. While it will be apparent that I am not a leading authority on the matter, some questions seem within normal mortal grasp. Related screwy technologies are included though not technically AI.
First, a flurry of alarm has broken out around the web because it is now becoming conveniently possible to make convincing but fraudulent video of people complete with their natural and naturally inflected voices. This offers delightful possibilities to wags, who might make video of Hillary being affectionate with a burro in a Tijuana nightclub. More seriously, we could have Trump confessing to an attempt to overthrow the government by force. China reportedly requires that such artificial video be so marked.
Next, we have ChatGPT that can, among other things, write essays in the styles of different authors–Shakespeare, for example, or Mark Twain. This sounds cute, and I guess is, but has implications. One result is that a substantial number of books, mostly for kids, have been written by ChatGPT and offered for sale on Amazon. Questions arise. Should these books be marked as such? Will they put human writers out of business? Here it is worth remembering that this sort of software is in its first infancy. In ten years…?
If I were to have ChatGPT write a story in the style of Twain, it would be for the thoughtful and cultivated…disturbing, but just a clever trick. But if I had it write stories in the style of the author of the Harry Potter books, mightn’t they compete successfully with her books? She or her publishers own the copyright to her books, but how about her characters and her style?
This seems a real issue. Reportedly (a weasel word, but I have not researched the details of all of these things) some outfit had ChatGPT write a song, if that is the word, in the style of some famed rapper and then had voice-imitation software perform it in the rapper’s voice but, I think, under another name.
This offers unsettling possibilities. Elvis could be retrieved from the dead to sing in his voice and style songs written by ChatGPT. Hillary could sing Hound Dog, as could Maria Callas. All right, I am enjoying myself, but it does seem that increasingly all aspects of artistic performance are readily fabricated. As such software becomes more and more accessible and easy to use, how will Warner Brothers keep teenagers from flooding the market with knockoffs?
Next, it is no secret that publications now use software to write much of their copy. I don’t know what I mean by “much of,” but I have read robowritten output and it is of normal quality. As Large Language Models such as ChatGPT advance, they will produce increasingly sophisticated copy–and they already are alarmingly good. Will a newspaper pay a human writer a large annual salary to spend hours writing a story that an LLM can produce in ten seconds?
There is also now text-to-image software that allows you to type “three Comanche warriors with iPhones” or “Viking warrior,” whereupon it produces these images with enough quality for use in magazines. The images, at least the ones I asked for, are not cartoonish or characterless. I suspect that even now, with all of this in early stages of development, a clever editor could put out an entire magazine with no artists or writers.
Computer voices like that of, say, Siri sound natural and naturally inflected. Computers now understand spoken language well. I will wager that soon we will have toys for small children that will hold conversations with them. And, no doubt these days, inculcate Appropriate Values.
Some argue that AI doesn’t really understand anything. But if a computer can be shown to understand, then obviously it can understand. If I say to my iPhone, “Hey Siri, in Spanish how do you say, If grandmother had wings instead of arms, she could fly like a bird?” Siri translates perfectly, getting both the subjunctive and the conditional. This is not simple word replacement, which doesn’t work with languages. Computer translation is now good enough to replace humans for most purposes. When this column was appearing at the Unz Review, I sometimes translated it into Spanish with the translation routine built into the site. I virtually never found mistranslation.
Here we come to an aspect not often touched upon: AI is likely not just to replace employees, but to leave humans unable to do those jobs because no one remembers how. If AI can write stories instantly and at least as well as humans, few people will put in the time and effort to learn to write. AI routines are now reported to read chest x-rays better than humans. Does this mean that hospitals will send these x-rays to an x-ray reader in the cloud, unemploying radiologists? If AI can do tax accountancy or legal research faster and with fewer mistakes than humans, why would anyone study these things?
Recently the Chinese, apparently with the US not too far behind, had a fighter plane piloted entirely by AI fly against a manned fighter. Guess which won hands down. This isn’t surprising because computers have essentially instantaneous reflexes and make mathematical judgements. An AI-piloted plane costs far less than a manned plane, can maneuver far more sharply, and can be used to do risky things since there is no pilot to be killed.
An interesting branch of AI is called Deep Learning. It relies on layers of artificial neurons working somewhat like human brain cells and involving much technoglop like feed-forward and back propagation and error functions and stochastic gradient descent. The upshot is that if you feed these things absolutely huge amounts of data, they can figure out useful things, such as who is likely to get a certain type of cancer, or to repay a loan. But if it can replace loan officers, then before long no one able to judge loan applicants will exist. There would be no money in it.
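The technoglop above–feed-forward, back propagation, gradient descent–can be shown in miniature. Here is a hedged sketch, with entirely invented data: a single artificial neuron learning a loan-repayment-style yes/no judgment by gradient descent, which is the same machinery deep networks use, stacked in layers.

```python
import math
import random

def sigmoid(z):
    # Squashes any number into the range (0, 1), read as a probability.
    return 1.0 / (1.0 + math.exp(-z))

# Invented toy data: (income, debt) -> repaid the loan? (1 = yes, 0 = no).
data = [((5.0, 1.0), 1), ((4.0, 0.5), 1), ((1.0, 3.0), 0), ((0.5, 4.0), 0)]

random.seed(0)
w = [random.uniform(-0.1, 0.1) for _ in range(2)]  # weights
b = 0.0                                            # bias
lr = 0.5                                           # learning rate

for epoch in range(2000):
    for x, y in data:
        # Feed-forward: the neuron's current prediction.
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        # Back propagation for one neuron: the error signal nudges
        # each weight downhill along the gradient of the loss.
        err = p - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

preds = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
print(preds)  # after training: [1, 1, 0, 0] -- it has learned the rule
```

Feed a real version of this a million loan files instead of four invented ones, and you have roughly what the banks are doing.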
An unnerving aspect of neural networks is that you can tell whether they get the right answer but you can’t tell how they did it. In traditional programming languages like Python or assembly language, you can go through a program step by step and see how it decided what. With neural networks, this doesn’t work. If you can’t tell how it came up with what it came up with, you can’t be sure that under other circumstances it won’t do something unexpected.
In one incident I have encountered, a research group trained a neural net to distinguish between German Shepherds and huskies. To do this they fed the network huge numbers of photos of the two breeds, and were greatly pleased at how very fast it learned to tell the difference. Then it turned out that all the huskies had been photographed against fields of snow, and the German Shepherds against vegetation. The network had learned how to tell white from green.
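The snow-versus-vegetation failure is easy to reproduce in miniature. In this hedged sketch, with invented features standing in for photos, each “picture” is reduced to a fuzzy breed cue and a background-whiteness number; because every training husky happens to be on snow, the classifier latches onto the background and then misreads a husky photographed on grass.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(1)
# Each "photo" is two invented features:
#   f0: a fuzzy breed cue (deliberately near-useless noise)
#   f1: background whiteness (1.0 = snow, 0.0 = vegetation)
# Label: 1 = husky, 0 = German Shepherd. Every training husky is on snow.
train = [([random.gauss(0.5, 0.2), 1.0], 1) for _ in range(50)] + \
        [([random.gauss(0.5, 0.2), 0.0], 0) for _ in range(50)]

w, b, lr = [0.0, 0.0], 0.0, 0.1
for epoch in range(500):
    for x, y in train:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# A husky photographed on grass: strong breed cue, no snow.
husky_on_grass = [0.9, 0.0]
p = sigmoid(w[0] * husky_on_grass[0] + w[1] * husky_on_grass[1] + b)
print(round(p))  # 0: the net confidently calls it a German Shepherd
```

The classifier scores perfectly on its training photos, which is exactly why nobody notices it has learned white-versus-green instead of dog-versus-dog.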
OK, off to San Carlos for the warm ocean and the red wine. If on my return I find that I have been replaced by software, I will sit on a street corner with a sign saying, “Will Commit Libel and Sedition for Food.”