Old Computer Challenge for July 19th

At least, I glanced over all the other Old Computer Challenge entries
(https://occ.deadnet.se). The breadth and depth of writing is
breathtaking. I can't do anyone justice by responding to them in the
remaining hours of July 19th, so please do read them directly; I will
be digesting them for some time. Sad to hear about the IRC incidents,
which are not the norm.

If you will forgive me for writing it, I did find I was able to use my
GNU Emacs emacs-server Leonardo system setup from XEmacs:

$ emacsclient --eval '(eepitch-send "*slime-repl ECL*" "`zaf")'

which echoed ZAF in the REPL over there as expected, run from XEmacs'
M-x eshell buffer. I guess this is enough to work in, though I did not
stretch much further with it.

Today otherwise, I hand-constructed a 9-step sequence in my Leonardo
system plant/insect/bird simulation where a plant [] runs away from an
insect to its west, sees a plant in front of it and dodges north, sees
more plants over there and dodges west, sees more plants there too,
heads back south and is eaten by the insect it was originally running
from, which also alerts the insect to other plants to the north to
pursue.

Constructing that was an example of a deeply involving, logical,
literate and graphical (unicode art) activity of hours that really
does not benefit from computing moderne.

One of the points Sandewall made about the Leonardo system in 2004 was
that Leonardo software-individuals are about three megabytes in size
(well, mine is now nine megabytes). The Leonardo system itself has no
dependencies, and the host lisp exposed in it is naught but ANSI CL.
You basically wouldn't notice if you were using an individual
inhabiting an eight-megabyte USB drive on a 2005 personal computer.

Admittedly, this meant I needed a different program to actually
interpret the unicode character codes and render them (since the
Leonardo system uses latin1).
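Since the Leonardo side only speaks Latin-1, the Unicode glyphs have to travel as numeric code points and something else must turn them into characters. A minimal sketch of that kind of helper in Python; the list-of-integers input format and the function name are my assumptions for illustration, not Leonardo's actual interface:

```python
# Hedged sketch: convert numeric Unicode code points (which a
# Latin-1-only system can pass around as plain digits) into a
# renderable string. The input format here is an assumption.
def render_codepoints(codes):
    """Map each code point to its character and join them."""
    return "".join(chr(c) for c in codes)

# Two box-drawing characters a plain Latin-1 buffer cannot hold:
print(render_codepoints([0x2588, 0x2591]))
```

Anything in the Latin-1 range passes through unchanged, since Latin-1 is the first 256 code points of Unicode.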

This reminds me of the thirty-year-old adage that performance was a
problem of thirty years ago, which sums to a bit less than sixty years
ago now, I guess.

Still, not everyone reported that performance on 20-year-old computers
is no problem. Someone shucked OpenBSD from their old computer for
being slow, and I do actually have that experience on my old computer
running at sysctl hw.setperf=0: it really takes annoyingly long for
the kernel to reorder itself after the system boots!

And game graphics from 30 years ago look like game graphics from 30
years ago (though surely they were better...!), you can't run modern
games even if you can start them, and modern video compression is
likewise untenable. Never mind web-moderne sites at all.

I guess LLMs are the signature 2025 software that would not fit on a
circa 2000 computer's drive, would not fit at all in its RAM and would
not meaningfully process on its processor.
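A back-of-envelope check, using round figures I'm assuming for illustration rather than measuring (a 7-billion-parameter model at two bytes per weight, against a generous circa-2000 desktop):

```python
# All figures below are assumptions for illustration, not measurements.
params = 7_000_000_000        # a mid-sized 2025 LLM
bytes_per_weight = 2          # 16-bit weights
model_gb = params * bytes_per_weight / 1e9   # model size in gigabytes

ram_gb = 0.256                # 256 MB of RAM, generous for 2000
disk_gb = 10.0                # 10 GB hard drive, typical for 2000

print(model_gb)               # 14.0
print(model_gb > ram_gb)      # True: dozens of times the machine's RAM
print(model_gb > disk_gb)     # True: larger than the whole drive
```

And that is before counting activations, context, or the software stack itself.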

At the same time, a friend of mine in the NZ government told me about
the new shit-sandwich training they had just been given. The idea is
that reasoning, decisions and writing are now performed by the
government's chatbot subscriptions rather than by employees. However,
the employees must take responsibility for the problems that arise
from the LLM's New Zealand government policies. (The idea was that the
AI bullshit is the sandwich filling, and the
culpable-for-what-happens bread is the human.)

Outside of the Old Computer Challenge, we have unreasonably
consumptive software, doing a worse job, that human employees must
take responsibility for when it kills someone.