There is No Antimemetics Division

There is No Antimemetics Division cover

by qntm, 2020.

I first became aware of this book when I noticed that the good folks over at The SCP Foundation have kept themselves busy in the years since I last looked over their peerless and endlessly enthralling wiki of dry and tantalizing protocols with which to "Secure, Contain, Protect" a catalog of creepy anomalous artifacts.

One of the arcs to break the churning surface of that crowd-sourced fictional milieu is that of the Antimemetics Division:

"An antimeme is an idea which, by its intrinsic nature, discourages people from spreading it. Think of ideas that you wouldn't share - passwords, taboos, shameful secrets.

Anomalous antimemes are another matter entirely. How do you contain something you can't record, or remember? How do you fight an enemy when you can never even know you're at war?

Welcome to the Antimemetics Division.

No, this is not your first day."

Intrigued, I started clicking around and, as is the way of SCP, discovered many hours had passed. Shortly afterwards, I realized the arc had been collected into this book, and immediately purchased it, along with everything else the author had available for sale.

spoilers

Occasional SCP entries, in the format we know and love, intertwine with a short but intense tale of memetic hazards, populated by a few of the Division's finest: those rare individuals who can go from a standing start (a mere hint that their memories can't be trusted) to a well-executed plan, built on a stack of assumptions about what might be needed and what their past selves might already have done, all without ever retaining any knowledge of their situation or what they're up against. I don't think I've ever been so impressed by the sheer depth of a character's resourcefulness and initiative.

The journey does contain world-ending hellscapes, with some gore, which might not be everyone's cup of tea. But it's short and somehow manages to stay light in tone, so isn't emotionally arduous on that front.

Probably my favorite fiction of the year.

For those like me with an epub fetish: you can get There is No Antimemetics Division in many formats, including Amazon, EPUB, and free-to-read online, via the author's site.


Lena

Lena

Lena, a fabulous short story by qntm, riffs on SCP's idea of a work of fiction presented as a wiki entry. But this one is entirely standalone, not part of the SCP universe, and more closely resembles a Wikipedia article. It seems both genuinely scary and almost inevitable.

The title stems from the image above: a head-and-shoulders crop, hurriedly torn from the centerfold of a colleague's issue of Playboy in 1972 during a search for an image to use in an image-processing conference paper. From this we can infer the proper pronunciation: "Lenna", as in the name of the model.

This image has been widely re-used in scientific journals for decades. It has some objectively useful properties: good dynamic range, both detail and flat regions, shading and texture. However, its viral popularity most probably stems overwhelmingly from its racy nature: a picture of an attractive woman, circulating in male-dominated fields. This was in contrast to other common test images of the time, derived from dull 1960s television standards work.

This decades-long use of the image happened without the knowledge or permission of Playboy magazine (the copyright holder) or of Lena herself. Over the years, each became aware of it, and made their peace with it. In 2019, though, Lena stated that the image, like her, ought to be retired.


Intelligence: A Very Short Introduction

Intelligence cover

by Ian J. Deary (1st Ed, 2001)

An expert's overview for the layman, describing how and why people differ in their thinking powers. It is very data driven and, by necessity, largely concerned with how to measure different aspects of intelligence, and therefore with the aspects into which intelligence can be teased apart, such as working memory, linguistic comprehension, perceptual organization, and speed of operation.

Each chapter tackles a key scientific question, describing the experiments that were done to determine the answers, showing the actual key experimental datasets.

Such questions include: Is intelligence determined by genes or by the environment? The answer is about 50/50, although, surprisingly, little of the environmental influence is due to the family raising the child. Also, the effect of genetics increases with age.

How does a person's intelligence change as they age? Some skills show a straight decline from age 25 to 80, such as inductive reasoning, spatial awareness, perceptual speed, and verbal memory. Other skills peak in middle age, with only a small decline at high ages, such as verbal reasoning and numerical ability. The amount of mental decline with age varies greatly between people. Those whose abilities decline the least have no cardio issues or chronic disease, are of high social class, live in complex and stimulating environments, and are generally satisfied with life and unstressed through middle age.

Does intelligence, especially as measured using existing tests, correlate with life outcomes, such as doing well at a job? Depending on the job, yes, a great deal. What sorts of tests are good for predicting who will do well? Work samples, structured interviews, and psychometric tests all show correlations slightly over 0.5. That isn't stellar, but it's the best we've got. At the other end of the scale, graphology (handwriting analysis) and rankings by age showed no correlation.

It's a short book, with a lively style, densely packed with important conclusions, and descriptions of how the field has arrived at them. Edifying.


TIL: Git Annotated Tags

I've previously only ever used git's regular lightweight tags, created with git tag TAGNAME.

Today I learned about annotated tags, created with git tag -a -m "MESSAGE" TAGNAME. If you don't specify -m MESSAGE, git annoyingly prompts you for one, but it will accept -m "".

Annotated tags store the creator, created timestamp, and the message. This might occasionally be useful for understanding what happened. A release tagged this way shows us who created the release, and when, which might differ from when the commit was created.

But more important is the different handling of lightweight versus annotated tags when pushing to the server.

Habitually, I've been using git push --tags. But this is slightly broken, in that it pushes all tags. Some tags might be intended as my private local development state. Others might point at commits which aren't reachable from any branch in the origin repo.

To address these issues, newer versions of git push introduced --follow-tags, which only pushes annotated tags which are on ancestors of the commit being pushed, so that no unreachable tags are created on origin.

Hence, a better workflow is:

  1. Use regular lightweight tags for local state. Keep them private by never using git push --tags.
  2. Use annotated tags to share state with other developers.
  3. To share annotated tags, either push them directly, with git push TAGNAME, or use git push --follow-tags.
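Putting it together, a release might look like this (the tag names here are illustrative):

# private local breadcrumb: lightweight, never pushed
git tag wip-perf-experiment

# shared release: annotated, then pushed along with the branch
git tag -a -m "Release 1.2.3" v1.2.3
git push --follow-tags

# later: see who created the release, and when
git show --no-patch v1.2.3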

TIL: ‰ is per mille

‰, or 'per mille', means parts per thousand, just as per cent means parts per hundred. So 25‰ = 25/1000 = 2.5%.

I wasn't aware of it until today, when I idly Googled "permil", my imagined variation on "percent", only to find that it is one of the many spellings of a real thing. It's rarely used in English, but more common in other European languages.

Now that I see the symbol, I remember seeing it as a child, on an old typewriter that my Grandfather used. What's old is new.

The Structure of Scientific Revolutions

This post rescues content from a series of tweets I wrote in 2018.


The Structure of Scientific Revolutions cover

by Thomas S. Kuhn, 1962.

I loved The Structure of Scientific Revolutions. It was recommended to me as having an impact similar to undergraduate classics like Gödel, Escher, Bach, or The Blind Watchmaker. I'm going to just summarize the content here, so: Spoilers.

It begins by observing that discredited scientific theories, even those which seem laughable today, such as phrenology, or the Ptolemaic model of the heavens, were not crackpot theories with shaky evidence. Earnest, hardworking practitioners refined them using sensible processes, which by the 17th century were converging on the modern scientific method.

This process of "normal" science excels at the incremental refinement of established scientific theories. But in practice it is unable to perform the revolutionary transitions required to overcome an outmoded theory and replace it with another, no matter how bizarre and wrongheaded the initial theory looks to us now with hindsight.

So what is the unarticulated process that is responsible for these transitions, i.e. how do scientific revolutions happen? We have intuitive visions of this occurring overnight. An individual experiment yields unexpected results, contradicting conventional theory, while irrefutably supporting an alternate theory to take its place. But in practice, this never happens.

At first, and often for years or centuries, no discrepancy between theory and experiment is noticed, because the prevailing theories of the time have a massive shaping effect on which questions it is valid to ask, and which experiments are deemed useful to do.

For practitioners to turn their backs on an established theory at such a time is never productive. They are shunned as having turned their backs on science itself.

We see this vividly today with homeopaths (my own example, not the book's). Often, and incorrectly, homeopathy is mocked because the theories sound ridiculous to one steeped in a conventional understanding of chemistry. People will jeer at how total dilution can 'obviously' have no effect, or at the idea of water exhibiting some sort of 'memory'. But such jeering is as scientifically illiterate as the quacks it contends with. The argument from personal incredulity has no place in determining scientific truth. No newer theory makes sense in the light of the more limited, and often contradictory, paradigm that it eventually replaces. The only useful criterion is to try it out. Does it actually work? This is the axis upon which homeopathy should be judged (and upon which it has decisively been found wanting).

All contradictions to conventional science suffer a similar ignominious treatment, regardless of how right they might later turn out to be. Before any revolution of theory can overturn conventional understanding, the stage must be set, the community prepared.

The process begins as the incremental advances of "normal" science gradually increase the scope and precision of accepted theories. Until this point, measurements in which experiment does not conform to theory are either ignored as erroneous artifacts, or are dismissed as indicative of some separate, unknown phenomena. They are never interpreted to mean prevailing theory is wrong.

However, the growing scope and precision of theory and measurement gradually uncover more of these discrepancies, or reveal them in finer detail. Eventually they become too prominent to ignore, and a kind of phase transition occurs.

Eventually, the discrepancies become so prominent and concerning that they are judged to be a valid area of study in themselves, rather than just annoying aberrations. Leading practitioners devote themselves to the task. Foundations of the specialisation that were once accepted without question now come under scrutiny.

To partially explain the discrepancies, people introduce many incompatible variations on current theories. The once unified field divides into cliques, supporting different theoretical variations. The field, formerly a united mass, calves into fragments.

If one of these variations on existing theory manages to explain all observations, then this gradually gains mindshare, until the whole community has migrated to this new, incrementally improved theory.

However, in cases where a truly revolutionary change is required, such incrementalism is insufficient, and none of the theoretical variations are fully successful in explaining all observations. The factions' differing underlying assumptions give them no common ground upon which to arbitrate their differences, so their debates are irreconcilable. The fragments are melting, into a churning liquid of disagreement.

This state is notably similar to the state of a nascent field before any established scientific theories have taken hold.

All is chaos, with different groups supporting different ideas, agreeing on nothing. The field is in turmoil, its practitioners in genuine emotional distress. Their personal identities are undermined. What does it mean to be a practitioner when nobody can agree on what the field even is? Is what we do even science at all? A crisis has arrived. We are at boiling point.

Kuhn compares this to individuals in psychological experiments, given cunningly contradictory sensory stimuli. At first they don't notice anything wrong about a brief glimpse of a playing card showing a red king of clubs. As the length of their glimpse expands, and the stimulation becomes more intrusive, the subject starts to hesitate, and stumble on words. Suddenly it impinges on their consciousness, and they cry out, distressed, uncertain of even basic facts. "My God! What did I see? Are clubs always red? What's happening here?"

Kuhn also compares scientific revolutions to their social and political counterparts, in a chillingly familiar passage:

"Political revolutions aim to change political institutions in ways that those institutions themselves prohibit. Their success therefore necessitates the partial relinquishment of one set of institutions in favor of another, and in the interim society is not fully governed by institutions at all.

Initially it is crisis alone that attenuates the role of political institutions [...] In increasing numbers, individuals become increasingly estranged from political life, and behave more & more eccentrically within it.

Then, as the crisis deepens, many individuals commit themselves to [...] some new institutional framework. At that point, society is divided into competing camps or parties, one seeking to defend the old institutional constellation, others seeking to institute some new one.

Once that polarization has occurred, political recourse fails. Because they differ about the political matrix within which political change is to be achieved and evaluated, and acknowledge no common supra-institutional framework for adjudication of differences, the parties to a revolutionary conflict must finally resort to the techniques of mass persuasion, often including force."

At any point, the boldest practitioners, often those with the least invested in the previous status quo, such as the relatively young, or those entering from adjacent fields, will introduce strikingly different sets of theories. But only now that the stage is set, amongst such distressing chaos, is the community ready to entertain truly revolutionary ideas.

Occasionally, one of these new ideas will succeed in explaining all the observations, but in order to do so, it requires incommensurable changes in the underlying philosophy of the field, from the axiomatic definitions, to the set of questions that are valid to ask. One can no longer ask, of a spherical Earth, "What happens if you fall off?"

Notably, many revolutionary changes are not an unalloyed good. Gains in explicative power in one area are often balanced by losses elsewhere.

As in evolution, the new theory is not necessarily more correct, so much as a better fit for the current circumstances, i.e. providing greater predictive power in an area that is currently pertinent. Maybe scientific progress is more obviously useful to society in that area, or instruments are more capable of making measurements in that area. The two often coincide, since influences we are unable to detect or manipulate are also unlikely to be of much direct use to society. So as the social and technological context evolves, so does the relative fitness of competing paradigms.

Nobody understands this trade-off more deeply than the field's most invested practitioners, who feel the loss of the old model most keenly, and who therefore may resist the new paradigm for the remainder of their careers. The new paradigm will not achieve total dominance until the field is populated by a whole new generation.

I am reminded of the dark priesthood of command-line programmers, although I note with no little joy that our merry band includes some of the best and brightest of the next generation (as judged by my own paradigm's criteria.)


TIL: Format Python Snippets with Black.

Black, the opinionated Python code formatter, can easily be invoked from your editor to reformat a whole file. For example, from Vim:

" Black(Python) format the whole file
nnoremap <leader>b :1,$!black -q -<CR>

But often you'd like to reformat just a section of the file, while leaving everything else intact. In principle, it's easy to tell Vim to just send the current visual selection:

" Black(Python) format the visual selection
xnoremap <Leader>b :!black -q -<CR>

(Note that both of the above Vim configuration snippets map the same key sequence: leader (commonly comma) followed by lower-case 'b'. They can be defined simultaneously because the second uses 'xnoremap', so it applies only while a visual selection exists, while the first uses 'nnoremap', so it applies in normal mode.)

But if the given code starts with an indent on the first line, for example if it comes from lines in the middle of a function, then this won't work. Black parses the given code into a Python abstract syntax tree (AST), and a leading indent is a syntax error - it's just not valid Python.
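The failure is easy to reproduce at the command line, since black reads stdin when given "-" (a quick demonstration; the exact error message varies by black version):

$ printf 'x  =  1\n' | black -q -       # top-level code: reformats fine
x = 1
$ printf '    x  =  1\n' | black -q -   # leading indent: parse error, non-zero exit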

I filed a hopeful issue with Black, suggesting they could handle this case, but it was a long shot and hasn't gained much enthusiasm.

So, I present a tiny Python3 wrapper, enblacken, which:

  • Unindents the given code such that the first line has no indent.
  • Passes the result to Black.
  • Reindents Black's output, by the same amount as the original unindent.

See enblacken on github
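As a hypothetical session (enblacken, like black, reads stdin and writes stdout, which is what makes it usable as a Vim filter):

$ printf '    if x:\n            y=1\n' | enblacken
    if x:
        y = 1

The visual-selection mapping above can then invoke enblacken in place of black -q -.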

LXD for Development Environments.

@hjwp asks:

I would be interested in seeing some example lxd config files, bash command history when creating, etc?

Here goes then.

I have one LXD container running for each nontrivial development project I'm working on.

$ lxc ls
|    NAME     |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
| devicegw    | RUNNING | 10.44.99.228 (eth0) |      | CONTAINER | 0         |
| ident       | RUNNING | 10.44.99.4 (eth0)   |      | CONTAINER | 0         |
| revs        | RUNNING | 10.44.99.151 (eth0) |      | CONTAINER | 0         |
| siab        | RUNNING | 10.44.99.128 (eth0) |      | CONTAINER | 0         |
| tartley-com | RUNNING | 10.44.99.161 (eth0) |      | CONTAINER | 0         |

Out of the gate we see one source of confusion. "LXD", the daemon, is a newer project that builds on top of "LXC", the container technology. However, the user interface to all the new LXD goodness is a command-line tool called "lxc", which is not the same as the older LXC project's lxc-* tools. :-/

To create a new one:

$ time lxc launch ubuntu:16.04 -p default -p jhartley demo
Creating demo
Starting demo
real    0m9.593s

Once created, they take about 3 seconds to stop and 0.5 seconds to start.
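For reference, that stop/start cycle is just the standard commands:

lxc stop demo
lxc start demo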

Those "-p" options cause the container to use two profiles. They are:

  1. The default profile, which I've never touched. It's just doing whatever it always does.

  2. The jhartley profile, which I created in a one-off step by running a Bash script derived from instructions one of my colleagues passed around. I'll describe it at the end.

Once a new container is up, we can execute commands directly on it:

$ lxc exec demo hostname
demo
$ lxc exec demo whoami
root

Or SSH to them using their IP address:

jhartley@t460 $ lxc ls demo
| NAME |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
| demo | RUNNING | 10.44.99.162 (eth0) |      | CONTAINER | 0         |
jhartley@t460 $ ssh 10.44.99.162
...
Warning: Permanently added '10.44.99.162' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 5.4.0-25-generic x86_64)
jhartley@demo $

Better than using IP addresses, you can run a DNS server to recognize {containername}.lxd hostnames. (This part is from here.)

Find your lxd bridge IPv4 address

lxc network show lxdbr0

Create file /etc/systemd/network/lxd.network:

[Match]
Name=lxdbr0

[Network]
Address=IPADDR/24
DNS=IPADDR
Domains=~lxd

Where IPADDR is the lxdbr0 IPv4 address.

sudo systemctl enable systemd-networkd
sudo reboot now

Then:

jhartley@t460 $ ssh demo.lxd
jhartley@demo $ # \o/

One nice thing is that DNS works both from the host and on the containers, so your services can be configured by default to talk to each other at SERVICE1.lxd, SERVICE2.lxd. Then, running them in containers on your host, they just find each other. We don't actually do this, but it seems trivially easy. I should ask why we don't.

In practice I wrap up the ssh command with my accumulated foibles:

jhartley@demo $ type -a lssh
lssh is a function
lssh ()
{
    TERM=xterm-color ssh -A -t "$1.lxd" -- "cd $PWD && exec $SHELL -l";
}

I forget exactly why -A and -t were required. (-A forwards your SSH agent into the session; -t forces a pseudo-terminal, which ssh won't allocate by default when given a command, even though the command here execs an interactive shell.) The rest is mostly just to start the shell on the container in the same directory as I was in on the host. There is probably a simpler way.


The booooooring bits:

When we started the container, we mentioned a one-off setup script.

The script does a few things:

  1. Creates a new key pair, specifically for SSHing to the container.
  2. Creates the custom jhartley profile, which causes all containers started with it to:
       • create a new user on the container, with user and group IDs mapped to those of my user on the host, presumably so that file permissions work for...
       • mount my $HOME directory on the container. Might not always be what you want, but it works for me right now.
  3. Doubtless due to my own misunderstanding somewhere, in order to get working IPv4 connections to my containers, I had to disable IPv6 connections to them.
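The profile part of that script boils down to something like the following sketch (the profile name, the uid/gid of 1000, the device name, and the bridge name are all from my setup; the key-pair and user-creation steps are omitted):

# create the profile
lxc profile create jhartley

# map my host uid/gid onto the same IDs inside the container,
# so that file permissions on the shared directory agree
lxc profile set jhartley raw.idmap "both 1000 1000"

# mount my host $HOME into containers that use this profile
lxc profile device add jhartley homedir disk source=$HOME path=/home/jhartley

# the IPv6 workaround from item 3
lxc network set lxdbr0 ipv6.address none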

Dina font as an OTF.

The Dina font, converted to an OpenType Font (see screenshots at the bottom of the page):

📦 Dina-v2.93-otf.tar.gz

Pango dropped support for native bitmap fonts in v1.44 -- i.e. from Ubuntu 20.04, Focal, onwards.

So any bitmap font now needs converting into a format that will render, i.e. a vector format such as OpenType that allows bitmaps to be embedded. (This is not a conversion of the bitmaps into outlines, which would lose the advantages of the crisp, tiny bitmaps.)

For most bitmap fonts, this conversion will be done for you, by packagers or font authors.

But you'll need to do it yourself for any peripheral fonts that you love more than your distribution does. Here's how I did it for my beloved Dina.

1. Identify the font file.

fc-list | grep Dina

2. Convert.

Use either command line tools, or fontforge.

2.1 Using fontforge

A GUI tool.

  1. Open up fontforge, and paste the font path into its open dialog.
  2. File / Generate Fonts.
  3. In the left dropdown, select "OpenType (CFF)".
  4. In the right dropdown, select "In TTF/OTF".
  5. Generate.

The results have some problems. I'm using it in gnome-terminal:

  • People converting other fonts report issues with ugly gaps between characters. But I don't see that, perhaps because it's a monospace font?
  • The converted font is invisible in font selection dialogs, making it look like the process did not work. But once selected, by clicking around blindly, the font displays fine in applications.
  • Using a font size which is not defined in the font displays a blank terminal, instead of falling back to some other font.
  • Using ctrl-+/- to select font sizes cycles through three of the four defined sizes. I don't know why it skips one. But all four are usable if you explicitly select a size.

2.2 Using command-line tools

The process is described at https://fedoraproject.org/wiki/BitmapFontConversion.
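At its core is a single fonttosfnt invocation, which wraps one or more bitmap strikes into a single sfnt container (the filenames here are illustrative):

fonttosfnt -o Dina.otb Dina-8.bdf Dina-9.bdf Dina-10.bdf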

Ubuntu's released version of fonttosfnt (1.0.4) produces unusable results:

  • Only the 1st and 2nd smallest font sizes are preserved.
  • In the 2nd smallest size, all variations are too bold, so that 'bold' variations look 'double-bold'. (Italics look really ugly too, though this may just be a result of the enboldening.)

TODO: Consider trying the latest fonttosfnt (1.1.0), from https://gitlab.freedesktop.org/xorg/app/fonttosfnt, or at least filing an issue there to try to get some help.

3. Install

  • Copy to ~/.local/share/fonts (or ~/.fonts, right?)
  • fc-cache -f

The result

I know, it doesn't look like much.

But compare it with a regular vector font. Here's Ubuntu Mono, the best of the vector fonts I could find at these sizes. Blurry and inconsistent and hard to read:

Conky.

In the typical mad-scientist thrills-per-minute that is the Linux way, adding a CPU meter to my desktop involved crafting my own conky configuration file.

As always, building your own is a chore that crops up when you least expect it. But on the other hand, the opportunity for functional and aesthetic work results in something artisanally crafted to exactly meet your own personal needs. Something you can feel a little pride about. An elegant weapon, for a more... civilized age.
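For the record, a minimal CPU-meter config in the modern (conky 1.10+) Lua syntax looks something like this (a sketch, not my exact file):

conky.config = {
    alignment = 'top_right',
    update_interval = 2.0,
    own_window = true,
    use_xft = true,
    font = 'Dina:size=8',
};

conky.text = [[
CPU ${cpu cpu0}% ${cpubar cpu0}
]];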
