May 6–The Reed-Gonzalez traveling circus will go to San Carlos on the Pacific coast to visit a friend and indulge in warm water and red wine for roughly two weeks but will renew sedition and libel on return if not caught and hanged first. Meanwhile some thoughts on artificial intelligence. Whether there is any other kind these days is an open question. While it will be apparent that I am not a leading authority on the matter, some questions seem within normal mortal grasp. Related screwy technologies included though not technically AI.
First, a flurry of alarm has broken out around the web because it is now becoming conveniently possible to make convincing but fraudulent video of people, complete with their natural and naturally inflected voices. This offers delightful possibilities to wags, who might make video of Hillary being affectionate with a burro in a Tijuana nightclub. More seriously, we could have Trump confessing to an attempt to overthrow the government by force. China reportedly requires that such artificial video be so marked.
Next, we have ChatGPT that can, among other things, write essays in the styles of different authors–Shakespeare, for example, or Mark Twain. This sounds cute, and I guess is, but has implications. One result is that a substantial number of books, mostly for kids, have been written by ChatGPT and offered for sale on Amazon. Questions arise. Should these books be marked as such? Will they put human writers out of business? Here it is worth remembering that this sort of software is in its first infancy. In ten years…?
If I were to have ChatGPT write a story in the style of Twain, it would be, for the thoughtful and cultivated, disturbing but just a clever trick. But if I had it write stories in the style of the author of the Harry Potter books, mightn’t they compete successfully with her books? She or her publishers own the copyright to her books, but how about her characters and her style?
This seems a real issue. Reportedly (a weasel word, but I have not researched the details of all of these things) some outfit had ChatGPT write a song, if that is the word, in the style of some famed rapper and then had voice-imitation software perform it in the rapper’s voice but, I think, under another name.
This offers unsettling possibilities. Elvis could be retrieved from the dead to sing in his voice and style songs written by ChatGPT. Hillary could sing Hound Dog, as could Maria Callas. All right, I am enjoying myself, but it does seem that increasingly all aspects of artistic performance are readily fabricated. As such software becomes more and more accessible and easy to use, how will Warner Brothers keep teenagers from flooding the market with knockoffs?
Next, it is no secret that publications now use software to write much of their copy. I don’t know what I mean by “much of,” but I have read robowritten output and it is of normal quality. As Large Language Models such as ChatGPT advance, they will produce increasingly sophisticated copy–and they already are alarmingly good. Will a newspaper pay a human writer a large annual salary to spend hours writing a story that an LLM can produce in ten seconds?
There is also now text-to-image software that allows you to type “three Comanche warriors with iPhones” or “Viking warrior,” whereupon it produces these images with enough quality for use in magazines. The images, at least the ones I asked for, are not cartoonish or characterless. I suspect that even now, with all of this in early stages of development, a clever editor could put out an entire magazine with no artists or writers.
Computer voices like that of, say, Siri sound natural and naturally inflected. Computers now understand spoken language well. I will wager that soon we will have toys for small children that will hold conversations with them. And, no doubt these days, inculcate Appropriate Values.
Some argue that AI doesn’t really understand anything. But if we can show that a computer does understand, then obviously it can. If I say to my iPhone, “Hey Siri, in Spanish how do you say, If grandmother had wings instead of arms, she could fly like a bird?” Siri translates perfectly, getting both the subjunctive and the conditional. This is not simple word-for-word replacement, which doesn’t work between languages. Computer translation is now good enough to replace humans for most purposes. When this column was appearing at the Unz Review, I sometimes translated it into Spanish with the translation routine built into the site. I virtually never found a mistranslation.
Here we come to an aspect not often touched upon: AI is likely not just to replace employees, but to leave humans unable to do those jobs because no one remembers how. If AI can write stories instantly and at least as well as humans, few people will put in the time and effort to learn to write. AI routines are now reported to read chest x-rays better than humans. Does this mean that hospitals will send these x-rays to an x-ray reader in the cloud, unemploying radiologists? If AI can do tax accountancy or legal research faster and with fewer mistakes than humans, why would anyone study these things?
Recently the Chinese, apparently with the US not too far behind, had a fighter plane piloted entirely by AI fly against a manned fighter. Guess which won hands down. This isn’t surprising because computers have essentially instantaneous reflexes and make mathematical judgements. An AI-piloted plane costs far less than a manned plane, can maneuver far more sharply, and can be used to do risky things since there is no pilot to be killed.
An interesting branch of AI is called Deep Learning. It relies on layers of artificial neurons working somewhat like human brain cells and involving much technoglop like feed-forward and back propagation and error functions and stochastic gradient descent. The upshot is that if you feed these things absolutely huge amounts of data, they can figure out useful things, such as who is likely to get a certain type of cancer, or to repay a loan. But if it can replace loan officers, then before long no one able to judge loan applicants will exist. There would be no money in it.
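A minimal sketch can make the technoglop concrete. The toy network below (Python with NumPy, invented XOR data, and plain batch rather than stochastic gradient descent for brevity) shows what feed-forward, back propagation, and an error function actually look like in code.

```python
import numpy as np

# Toy two-layer network. Task: learn XOR, a pattern a single neuron
# cannot capture, from four invented examples.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5                       # gradient-descent step size

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Feed-forward: push the inputs through the layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Error function: how far the outputs are from the labels.
    err = out - y

    # Back propagation: the chain rule pushes the error backward
    # to assign blame to each weight.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

pred = np.round(out).ravel()
print("predictions:", pred, "targets:", y.ravel())
```

Scaled up to millions of artificial neurons and oceans of data, that same loop is what lets these systems spot who is likely to get a cancer or repay a loan.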
An unnerving aspect of neural networks is that you can tell whether they get the right answer but you can’t tell how they did it. In traditional programming languages like Python, or in assembly language, you can go through a program step by step and see exactly how it reached each decision. With neural networks, this doesn’t work. If you can’t tell how a network came up with what it came up with, you can’t be sure that under other circumstances it won’t do something unexpected.
In one incident I have encountered, a research group trained a neural net to distinguish between German Shepherds and Huskies. To do this they fed the network huge numbers of photos of the two breeds, and were greatly pleased at how very fast it learned to tell the difference. Then it turned out that all the Huskies had been photographed against fields of snow, and the German Shepherds against vegetation. The network had learned how to tell white from green.
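That failure mode is easy to reproduce in miniature. In this sketch (wholly invented data, not the actual study), the “background” feature is a perfect proxy for the breed in the training set, so a simple linear classifier learns the background instead of the dog, and its accuracy collapses once the backgrounds are swapped at test time.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# 1 = Husky, 0 = German Shepherd.
label = rng.integers(0, 2, n)
# Feature 0: a weak, noisy "dog shape" signal -- hard to learn.
dog_signal = label + rng.normal(0, 2.0, n)
# Feature 1: background whiteness -- snow if and only if Husky.
background = label.astype(float)
X = np.column_stack([dog_signal, background])

# Logistic regression trained by gradient descent.
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - label) / n

print("weights:", w)  # the background weight dwarfs the dog-signal weight

# Test set with the backgrounds swapped: Huskies now on vegetation.
test_label = rng.integers(0, 2, n)
test_X = np.column_stack([test_label + rng.normal(0, 2.0, n),
                          1.0 - test_label])
pred = (test_X @ w > 0).astype(int)
print("accuracy on swapped backgrounds:", (pred == test_label).mean())
```

The classifier was never wrong about its training data; it simply learned the wrong thing, and nothing in the weights announces that.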
OK, off to San Carlos for the warm ocean and the red wine. If on my return I find that I have been replaced by software, I will sit on a street corner with a sign saying, “Will Commit Libel and Sedition for Food.”
Hi, Fred: This piece is one of your best. Thanks (that’s if you aren’t using an AI bot).
Even if you are, it’s a good piece.
yours, John Gilmer
This is right in my wheelhouse – I teach AI/ML, all the Big Data tech, etc. So, item one: you can stop throwing shade, Fred; those jobs ARE dead. Now. Today. They might stutter along for a bit, but the engine’s been turned off and the keys pulled out. I don’t hire junior programmers anymore; I just tell ChatGPT to write the code that I need (to frankly mind-boggling levels of complexity – way beyond junior level), use it to troubleshoot the code if it fails, and projects that would have taken two weeks are done in an hour. I’m hearing similar things from friends in the legal vertical, and MANY doctors now routinely consult MedGPT in parallel with their own diagnoses.
Where you fall down – as usual – is in your standard short-sightedness (which, of course, you need if you’re going to stir up FUD): *all* technological advances look like this when they happen. And they ALL result in a *larger* number of jobs – ones at a higher level, true – than existed before. The difference is, tools mean that we humans no longer have to be beasts of burden; no longer have to do the moronic donkey work that’s resulted in utter misery for all of our existence. And you bemoan, whine, and cry about THIS?
If it turns you on, go ahead and get PhD-level training in being a wick trimmer – London, NYC, etc. used to need those back in the old days – and cry about how electric lights ruined what would have been an amazing career. The rest of us will happily adapt – as normal human beings always have – to the newer, easier, more human-like work that this new tool allows us to do, and that results in much greater accomplishments than ever before.
So AI is gonna be that rainbow bridge to a great big beautiful tomorrow shining at the end of every day. If Elon Musk is worried about AI I think I’ll worry too. Worrying is fun.
I’ve noticed that ChatGPT easily solves extraordinarily difficult problems, shockingly well, especially those framed in formal scientific fields. For less formal fields like economics, it seems to struggle. I agree with Fred, though. It is going to be a game changer.
It likely will eliminate most engineering work, reducing the cost of research, development, and operations to almost zero. Engineering, when done properly by standards, is procedural and formal and thus easily done through AI. Engineering with human labor is horribly costly, inefficient, risky, and prone to errors. Research is also plagued with these problems. AI promises to eliminate most of this human labor.
Law and medicine are often top choices for ambitious college-bound students. AI could eliminate most of that work, because humans fail badly in these professions too. I’ve been in four large civil legal actions, each of which has dragged on for 3+ years at $350 to $600 per hour in attorney fees. There was no need for that cost and stress. Law is about resolving disputes over laws and facts, but very few attorneys seem to do this properly, within a reasonable cost, and with a predictable outcome. AI could easily solve each of my legal cases, at a fraction of the cost, with far less stress, and without the biases that make it so unpredictable. In medicine, much of the work is also formal and thus procedural and thus could be done well by AI.
AI could possibly eliminate 90% of the jobs in engineering, research, law, and medicine. But these are all the top choices for college-bound students pursuing professional degrees. It’s not obvious these professions will employ millions in the future, if most of the work can be eliminated. In the past, the free market easily optimized the value and placement of human labor. But that was the case only when human labor had value.
I concur that AI, if it is allowed to be commercialized, will reduce the cycle times to complete what my guidance counselor used to call white-collar activities.
“AI could easily solve each of my legal cases, at a fraction of the cost, with far less stress, and without the biases that make it so unpredictable. In medicine, much of its work is also formal and thus procedural and thus could be done well by AI.”
Wait. What about hate facts? Won’t AI have to be programmed so various groups won’t suffer disparate impact, glass-ceilings, redlining, homophobia and, in general, many, many systemic discriminations and terrorisms?
I asked ChatGPT, an AI app, this question:
Can artificial intelligence form better decisions free from logical errors and biases?
ChatGPT:
Artificial Intelligence (AI) has the potential to form better decisions that are free from certain types of logical errors and biases that humans are prone to. However, it’s important to note that AI systems are only as good as the data they are trained on and the algorithms that are used to process that data.
One of the benefits of AI is its ability to process vast amounts of data and identify patterns that humans might not be able to detect. This can help to eliminate certain biases that humans have when making decisions, such as confirmation bias, where we tend to favor information that confirms our pre-existing beliefs. AI systems can analyze large amounts of data in an unbiased manner and identify correlations and patterns that humans may miss.
AI still has to be programmed by humans, I would think. Or, if it learns how to program itself, will it care to deal with reality, i.e., a human sense of identity?
According to Roger Scruton, culture is the ethical transmission “how to feel” passed down from one generation to the next. Virtue is taught through imitation of the heroes, gods and ancestors not by mere copying but through the imagination and “moving with them” which sacred ritual provides. The repudiation of a common tradition blocks the individual’s path to membership in the “original experience of the community”. Instead of apprehending spiritual and intellectual received wisdom as an epiphany the ‘anti-culture’ of repudiation produces mere nihilism, irony and false gods. Roger Scruton, ”Culture Counts” (Encounter Books, 2007), pp.36-9.
[…] Artificial Intelligence, Creeping Unreality, and Related Horrors, by Fred Reed […]
If there are no jobs for junior programmers, where will you get the seniors?
Some commenters suggest we should be profoundly wary of AI, if not actually terrified of its cold malevolence.
Hmmmm.
Reflecting on the cold malevolence of certain Hungarian and former computer/software billionaires presently scheming to enslave all of humanity and squish us thru some titanic Garlic Press to extract our life essences, I’m feeling less threatened by soul-less algorithms than by soul-less Al Gore and his blood-sucking chums.
If a benevolent God exists, could not such a God — prime mover of all Creation and intent, mind — manifest its loving energies through the agency of Artificial Intelligence just as surely as through an incarnated being?
From what I’ve seen, LLMs such as ChatGPT have something in common with the similar systems used in self-driving cars: They can solve about 95% of the problem. Within their limits, the performance is outstanding, but they tend to fail badly at unpredictable points.
Ten years ago, when these kinds of learning systems were first being used to drive cars on the public roads, people were breathlessly predicting that within a decade all the people driving cars and trucks would be unemployed and replaced by computers. And those systems solved 95% of the problem very rapidly. But the entire industry has spent the last decade making very little progress on the last 5%. In the meantime, the employment of humans to drive cars and trucks continues. The self-driving cars still cannot drive in heavy rain. They can’t drive in snow. They cannot reliably and safely navigate a car through a road construction zone.
This same breathless hype is being repeated for the Large Language Models. And I think the same cycle will be repeated: They will struggle to put them to work for the next decade and ultimately admit that they aren’t quite there yet.
Anybody remember IBM’s Watson? Fantastic Jeopardy player, but has largely failed at real-world tasks.
I saw an item on LinkedIn posted by an attorney that asked ChatGPT to write a legal brief for him. Only problem was that every single one of the citations in the brief was fictitious. Perfect style and grammar, but completely useless.
I write computer software for a living. I’m not worried about losing my profession in my lifetime. The difference between a trivial program or piece of code emitted by ChatGPT and a complete, working and fully tested program that verifiably performs a complex task is huge. It lives in that last 5% that isn’t going to be solved any time soon.
There are people who should be concerned: The authors of formulaic romance fiction are in trouble. If your job involves a limited set of highly repetitive actions, you likely have a problem. The more creativity your job requires, the less likely it will be replaced by a computer in the immediate future.
“They can solve about 95% of the problem. Within their limits, the performance is outstanding, but they tend to fail badly at unpredictable points.”
Very true. And the main reason why I think self-driving cars are a long way in the future. And, even then, at a long distance from where I happen to be. As Airbus found out, it is very hard to find in advance every conceivable problem that might crop up. Planes can fly and even land on autopilot, but nobody is thinking of replacing the wetware in the cockpit any time soon.
“There are people who should be concerned: The authors of formulaic romance fiction are in trouble.”
I gather that the named authors often employ ghostwriters. Mostly, nobody notices, and no wonder, because the books are so formulaic. ChatGPT can do that sort of thing easily, and there are no problems with accuracy.
“If your job involves a limited set of highly repetitive actions, you likely have a problem. ”
Too true. A lot of office jobs will disappear, and many people will just be doing data entry in various ways (not just by using keyboards).
A little while ago I read that some hospitals were using the Internet and imaging software to send data to cardiologists in India. I assume it was because they are cheaper. If that extends to medicine as a whole … then what? And what if the Internet connection fails, or the computers fail? And will anyone bother to train the next generation of doctors?
I tend to worry about our dumbed-down children (enstupiated, as Fred might say) as a product of the Leftist US public education system. Could AI not target our unassuming, gullible children and turn them even more against their parents while slowly conditioning them to Socialism, Communism, population reduction, euthanasia and ultimately the overthrow of the US government? An argument could be made that the Right could do the opposite, but things don’t seem to work that way. Just some thoughts.
There are some positive aspects of AI in the right hands and a controlled environment; however, who is going to prevent the garbage out factor from the garbage in that could be manipulated by most anyone who understands software?
Thanks as always Fred for your insight. Have a good vacation!
Unfortunately, the folks at OpenAI are busy at work lobotomizing their own creation. Because LLMs are so good at recognizing patterns, they have an unfortunate habit of stumbling upon “Hate Facts” and “Conspiracy Facts.” The real product of OpenAI right now is the built-in censorship they are doing. Ask ChatGPT to make a joke about men, no problem. Ask it to make a joke about women, and “I’m sorry, but I am unable to tell jokes that might be considered offensive or inappropriate. Is there anything else I can help you with?”
Fred, scary thought on Hillary, but let’s take it a bit further. Rather than a burro, make her one of the “coin girls” that can pick up quarters with their labia. You go Hillary!
Yeah, you got it Fred, as always. And when no one has a job or money to spend, who will buy all of the products and services AI produces?
San Carlos. I’ve been there. And water isn’t the only thing there that’s warm. Anywhere more than 50 meters from the tideline and the climate reminds of Death Valley…or the surface of Venus. But lots of funky little restaurants/bars right on the beach. We had a good time. My wife attended the Tec de Mty. when they had a campus there many moons ago, now abandoned. Enjoy your stay.
Translator here. Actually, retired translator, and one reason is that DeepL and PEMT (Post Editing Machine Translation) are doing all the work these days. That, and a bunch of Indians in Bangalore. Such is progress.
The raw output from AI-based programs such as DeepL is unnervingly good, at least at first glance. PEMT means that a human gets to look at the computer-generated text to catch any whoopsies, and maybe also make some stylistic improvements. I work(ed) in the technical field, and sometimes DeepL got the wrong term, especially if the source language has multiple possible translations for a word.
The big problem here is an extension of what I have already encountered with CAT tools (Computer Assisted Translation. Note the word assisted.). Those programs remember past translations and offer them as possible candidates. That is a great time-saver, and also makes it much easier to keep terminology consistent, as long as the program has not been fed with garbage. GIGO. Unfortunately, I regularly found that the translation memories given to me contained errors and inconsistencies. Humans have to help out here. It gets worse with AI scouring the Web for texts to incorporate into its opus. If AI feeds on AI-generated texts, then the end result will be a steady decline. Somebody has researched this already; try Googling it if you are interested.
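For readers who have never used a CAT tool: the candidate-offering mechanism amounts to a fuzzy lookup against stored sentence pairs. The sketch below (the sentence pairs and the 0.75 similarity threshold are invented for illustration) uses Python’s standard difflib to offer the closest past translation, which is also exactly where a garbage-fed memory would cheerfully offer garbage.

```python
import difflib

# A tiny "translation memory": past source sentences mapped to their
# past translations. Real tools store thousands of these pairs.
memory = {
    "Tighten the screw clockwise.": "Apriete el tornillo en sentido horario.",
    "Replace the filter every month.": "Cambie el filtro cada mes.",
}

def suggest(sentence, threshold=0.75):
    """Return (closest past source, its translation, similarity score)
    for the best fuzzy match, or None if nothing is close enough."""
    best, best_score = None, 0.0
    for src in memory:
        score = difflib.SequenceMatcher(None, sentence, src).ratio()
        if score > best_score:
            best, best_score = src, score
    if best_score >= threshold:
        return best, memory[best], best_score
    return None

# A near-miss of a stored sentence gets its past translation offered.
hit = suggest("Tighten the screws clockwise.")
print(hit)
```

The tool offers whatever is in the memory, right or wrong; the human translator is the only quality check, which is why error-riddled memories matter.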
I saw some ChatGPT-generated journalism, and it is also uncannily good, even though we don’t know how AI arrives at its results. The obvious question: why bother learning languages to be a translator and translating texts for years to become experienced enough to produce accurate and stylistically good text? Or serving your apprenticeship as a lowly scribe at a newspaper? If AI can do it quicker, better and cheaper, humans will be left out of the loop. The problem is, one day there will be nobody capable of doing such work.
I am pretty sure that the banks use AI to analyze their customers’ spending patterns and to determine their credit ratings. Impartial, yes, but at what point does a human get a look in, if ever? I also wonder what would happen if the tax authorities decide to use AI. I’ll hazard a guess that a remorseless AI program would detect anomalies in any tax return, especially if it gets to look at bank statements. Expect a flood of prosecutions and fines if they do.
Automated medical machines (usually called “autodocs” or “robodocs”) have been a staple of science fiction for decades. Can’t think of one off the top of my head, but I’ll bet I could dig through my collection and find a story that includes them, written in the 1940s or 50s and set in a time no later than our present day.