Git Annotated Tags

I've previously only ever used git's regular lightweight tags, created with git tag TAGNAME.

Today I learned about annotated tags, created with git tag -a -m "MESSAGE" TAGNAME. If you don't specify -m MESSAGE, git annoyingly prompts you for one, but it will accept -m "".

Annotated tags store the creator, created timestamp, and the message. This might occasionally be useful for understanding what happened. A release tagged this way shows us who created the release, and when, which might differ from when the commit was created.

But more important is the difference in how lightweight and annotated tags are handled when pushing to the server.

Habitually, I've been using git push --tags. But this is slightly broken, in that it pushes all tags. Some tags might be intended as my private local development state. Some of them might be unreachable in the origin repo.

To address these issues, newer versions of git push introduced --follow-tags, which only pushes annotated tags which are on ancestors of the commit being pushed, so that no unreachable tags are created on origin.

Hence, a better workflow is:

  1. Use regular lightweight tags for local state. Keep them private by never using git push --tags.
  2. Use annotated tags to share state with other developers.
  3. To share annotated tags, either push them directly, with git push TAGNAME, or use git push --follow-tags.
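As a quick sketch of that workflow, in a throwaway repo (the repo and tag names here are invented for illustration):

```shell
# Throwaway repo to play in.
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m "initial commit"

# 1. Lightweight tag: private local state, kept private by never
#    running 'git push --tags'.
git tag wip-experiment

# 2. Annotated tag: a real tag object storing tagger, date, and message.
git -c user.name=me -c user.email=me@example.com \
    tag -a -m "Release 1.0" v1.0

# The difference shows up in the object types: an annotated tag is a
# 'tag' object, while a lightweight tag is just a ref pointing straight
# at a commit.
git cat-file -t v1.0            # tag
git cat-file -t wip-experiment  # commit

# 3. Share only annotated tags reachable from the pushed commits:
# git push --follow-tags
```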

‰ is per mille

‰ or 'per mille' means parts per thousand.

I wasn't aware of it until today when I idly Googled "permil", my imagined variation on "percent", to find that this is one of the many spellings of a real thing. Rarely used in English, but more common in other European languages.

Now that I see the symbol, I remember seeing it as a child, on an old typewriter that my Grandfather used. What's old is new.

The Structure of Scientific Revolutions

This post rescues content from a series of tweets I wrote in 2018.


The Structure of Scientific Revolutions cover

by Thomas S. Kuhn, 1962. spoilers

I loved The Structure of Scientific Revolutions. It was recommended to me as having similar impact to undergraduate classics like Gödel, Escher, Bach, or The Blind Watchmaker. I'm going to just summarize the content here, so: Spoilers.

It begins by observing that discredited scientific theories, even those which seem laughable today, such as phrenology, or the Ptolemaic model of the heavens, were not crackpot theories with shaky evidence. Earnest, hardworking practitioners refined them using sensible processes, which by the 17th century were converging on the modern scientific method.

This process of "normal" science excels at the incremental refinement of established scientific theories. But in practice, it is unable to perform the revolutionary transitions required to overthrow outmoded theories and replace them with others, no matter how bizarre and wrongheaded the initial theory looks to us now with hindsight.

So what is the unarticulated process that is responsible for these transitions, i.e. how do scientific revolutions happen? We have intuitive visions of this occurring overnight. An individual experiment yields unexpected results, contradicting conventional theory, while irrefutably supporting an alternate theory to take its place. But in practice, this never happens.

At first, and often for years or centuries, no discrepancy between theory and experiment is noticed, because the prevailing theories of the time have a massive shaping effect on which questions it is valid to ask, and which experiments are deemed useful to do.

For practitioners to turn their backs on an established theory in such a time is never productive. They are shunned for turning their backs on science itself.

We see this vividly today with homeopaths (my own example, not the book's). Often, and incorrectly, homeopathy is mocked because its theories sound ridiculous to one steeped in a conventional understanding of chemistry. People will jeer at how total dilution can 'obviously' have no effect, or at the idea of water exhibiting some sort of 'memory'. But such jeering is as scientifically illiterate as the quacks it contends with. The argument from personal incredulity has no place in determining scientific truth. No newer theory makes sense in the light of the more limited, and often contradictory, paradigm that it eventually replaces. The only useful criterion is to try it out. Does it actually work? This is the axis upon which homeopathy should be judged. (And upon which it has decisively been found wanting.)

All contradictions to conventional science suffer a similar ignominious treatment, regardless of how right they might later turn out to be. Before any revolution of theory can overturn conventional understanding, the stage must be set, the community prepared.

The process begins as the incremental advances of "normal" science gradually increase the scope and precision of accepted theories. Until this point, measurements in which experiment does not conform to theory are either ignored as erroneous artifacts, or are dismissed as indicative of some separate, unknown phenomena. They are never interpreted to mean prevailing theory is wrong.

However the growing scope & precision of theory and measurement gradually uncovers more of these discrepancies, or reveals them in finer detail. Eventually they become too prominent to ignore, and a kind of phase transition occurs.

Eventually, the discrepancies become so prominent and concerning that they are judged to be a valid area of study in themselves, rather than just annoying aberrations. Leading practitioners devote themselves to the task. Foundations of the specialisation that were once accepted without question now come under scrutiny.

To partially explain the discrepancies, people introduce many incompatible variations on current theories. The once unified field divides into cliques, supporting different theoretical variations. The field, formerly a united mass, calves into fragments.

If one of these variations on existing theory manages to explain all observations, then this gradually gains mindshare, until the whole community has migrated to this new, incrementally improved theory.

However, in cases where a truly revolutionary change is required, such incrementalism is insufficient, and none of the theoretical variations are fully successful in explaining all observations. The factions' differing underlying assumptions give them no common ground upon which to arbitrate their differences, so their debates are irreconcilable. The fragments are melting, into a churning liquid of disagreement.

This state is notably similar to the state of a nascent field before any established scientific theories have taken hold.

All is chaos, with different groups supporting different ideas, agreeing on nothing. The field is in turmoil, its practitioners in genuine emotional distress. Their personal identities are undermined. What does it mean to be a practitioner when nobody can agree on what the field even is? Is what we do even science at all? A crisis has arrived. We are at boiling point.

Kuhn compares this to individuals in psychological experiments, given cunningly contradictory sensory stimuli. At first they don't notice anything wrong about a brief glimpse of a playing card showing a red king of clubs. As the length of their glimpse expands, and the stimulation becomes more intrusive, the subject starts to hesitate, and stumble on words. Suddenly it impinges on their consciousness, and they cry out, distressed, uncertain of even basic facts. "My God! What did I see? Are clubs always red? What's happening here?"

Kuhn also compares scientific revolutions to their social and political counterparts, in a chillingly familiar passage:

"Political revolutions aim to change political institutions in ways that those institutions themselves prohibit. Their success therefore necessitates the partial relinquishment of one set of institutions in favor of another, and in the interim society is not fully governed by institutions at all.

Initially it is crisis alone that attenuates the role of political institutions [...] In increasing numbers, individuals become increasingly estranged from political life, and behave more & more eccentrically within it.

Then, as the crisis deepens, many individuals commit themselves to [...] some new institutional framework. At that point, society is divided into competing camps or parties, one seeking to defend the old institutional constellation, others seeking to institute some new one.

Once that polarization has occurred, political recourse fails. Because they differ about the political matrix within which political change is to be achieved and evaluated, and acknowledge no common supra-institutional framework for adjudication of differences, the parties to a revolutionary conflict must finally resort to the techniques of mass persuasion, often including force."

At any point, the boldest practitioners, often those with least invested in the previous status quo, such as the relatively young, or those entering from adjacent fields, will introduce strikingly different sets of theories. But only now that the stage is set, amongst such distressing chaos, is the community ready to entertain truly revolutionary ideas.

Occasionally, one of these new ideas will succeed in explaining all the observations, but in order to do so, it requires incommensurable changes in the underlying philosophy of the field, from the axiomatic definitions, to the set of questions that are valid to ask. One can no longer ask, of a spherical Earth, "What happens if you fall off?"

Notably, many revolutionary changes are not an unalloyed good. Gains in explicative power in one area are often balanced by losses elsewhere.

As in evolution, the new theory is not necessarily more correct, so much as it is a better fit for the current circumstances, i.e. providing greater predictive power in an area that is currently pertinent. Maybe scientific progress is more obviously useful to society in that area, or instruments are more capable of making measurements in that area. The two often coincide, since influences we are unable to detect or manipulate are also unlikely to be of much direct use to society. So as the social and technological context evolves, so does the relative fitness of potential competing paradigms.

Nobody understands this trade-off more deeply than the field's most invested practitioners, who feel the loss of the old model most keenly, and therefore may resist the new paradigm for the remainder of their careers. The new paradigm will not achieve total dominance until the field is populated by a whole new generation.

I am reminded of the dark priesthood of command-line programmers, although I note with no little joy that our merry band includes some of the best and brightest of the next generation (as judged by my own paradigm's criteria.)


Format Python Snippets with Black.

Black, the opinionated Python code formatter, can easily be invoked from your editor to reformat a whole file. For example, from Vim:

" Black(Python) format the whole file
nnoremap <leader>b :1,$!black -q -<CR>

But often you'd like to reformat just a section of the file, while leaving everything else intact. In principle, it's easy to tell Vim to just send the current visual selection:

" Black(Python) format the visual selection
xnoremap <Leader>b :!black -q -<CR>

(Note that both the above Vim configuration snippets map the same key sequence -- leader (commonly comma) followed by lower case b. These can be defined simultaneously, because the second one uses 'xnoremap', meaning it is used only while a visual selection exists, while the first uses 'nnoremap', so is used all other times.)

But if the given code starts with an indent on the first line, for example if it comes from lines in the middle of a function, then this won't work. Black parses the given code into a Python abstract syntax tree (AST), and a leading indent is a syntax error - it's just not valid Python.

I filed a hopeful issue with Black, suggesting they could handle this case, but it was a long shot and hasn't gained much enthusiasm.

So, I present a tiny Python3 wrapper, enblacken, which:

  • Unindents the given code such that the first line has no indent.
  • Passes the result to Black.
  • Reindents Black's output, by the same amount as the original unindent.
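Those three steps can be sketched as a small shell function. This is not the real enblacken, which is a Python script; the function name and its formatter argument are my own invention, and it naively assumes space-only indentation:

```shell
# reindent_through CMD: dedent stdin by the first line's indent, pipe the
# result through CMD (e.g. "black -q -"), then restore the indent.
reindent_through() {
    local tmp indent
    tmp=$(mktemp)
    cat > "$tmp"
    # The leading spaces of the first line, e.g. 8 spaces mid-function.
    indent=$(head -n 1 "$tmp" | sed 's/[^ ].*$//')
    # Strip that indent everywhere, format, then put it back.
    # (Blank lines gain the indent too; a sketch-level shortcoming.)
    sed "s/^$indent//" "$tmp" | $1 | sed "s/^/$indent/"
    rm -f "$tmp"
}
```

Piping an indented snippet through `reindent_through "black -q -"` should then return it formatted, with its original indent intact.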

See enblacken on github.

LXD for Development Environments.

@hjwp asks:

I would be interested in seeing some example lxd config files, bash command history when creating, etc?

Here goes then.

I have one LXD container running for each nontrivial development project I'm working on.

$ lxc ls
|    NAME     |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
| devicegw    | RUNNING | 10.44.99.228 (eth0) |      | CONTAINER | 0         |
| ident       | RUNNING | 10.44.99.4 (eth0)   |      | CONTAINER | 0         |
| revs        | RUNNING | 10.44.99.151 (eth0) |      | CONTAINER | 0         |
| siab        | RUNNING | 10.44.99.128 (eth0) |      | CONTAINER | 0         |
| tartley-com | RUNNING | 10.44.99.161 (eth0) |      | CONTAINER | 0         |

Out of the gate we see one source of confusion. "LXD", the daemon, is a newer project that builds on top of "LXC", the container technology. However, the user interface to all the new LXD goodness is a command-line tool called "lxc", not to be confused with the older LXC command-line tools such as lxc-start and lxc-attach. :-/

To create a new one:

$ time lxc launch ubuntu:16.04 -p default -p jhartley demo
Creating demo
Starting demo
real    0m9.593s

Once created, they take about 3 seconds to stop and 0.5 seconds to start.

Those "-p" options cause the container to use two profiles. They are:

  1. The default profile, which I've never touched. It's just doing whatever it always does.

  2. The jhartley profile, which I created in a one-off step by running a Bash script derived from instructions one of my colleagues passed around. I'll describe it at the end.

Once a new container is up, we can execute commands directly on it:

$ lxc exec demo hostname
demo
$ lxc exec demo whoami
root

Or SSH to them using their IP address:

jhartley@t460 $ lxc ls demo
| NAME |  STATE  |        IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
| demo | RUNNING | 10.44.99.162 (eth0) |      | CONTAINER | 0         |
jhartley@t460 $ ssh 10.44.99.162
...
Warning: Permanently added '10.44.99.162' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 5.4.0-25-generic x86_64)
jhartley@demo $

Better than using IP addresses, you can run a DNS server to recognize {containername}.lxd hostnames. (This part is from here.)

Find your lxd bridge IPv4 address

lxc network show lxdbr0

Create file /etc/systemd/network/lxd.network:

[Match]
Name=lxdbr0

[Network]
Address=IPADDR/24
DNS=IPADDR
Domains=~lxd

Where IPADDR is the lxdbr0 IPv4 address.

sudo systemctl enable systemd-networkd
sudo reboot now

Then:

jhartley@t460 $ ssh demo.lxd
jhartley@demo $ # \o/

One nice thing is that DNS works both from the host and on the containers, so your services can be configured by default to talk to each other at SERVICE1.lxd, SERVICE2.lxd. Running them in containers on your host, they would then just find each other. We don't actually do this, but it seems trivially easy to do. I should ask why we don't.

In practice I wrap up the ssh command with my accumulated foibles:

jhartley@demo $ type -a lssh
lssh is a function
lssh ()
{
    TERM=xterm-color ssh -A -t "$1.lxd" -- "cd $PWD && exec $SHELL -l";
}

I forget why -A and -t were required. (Likely: -A forwards the SSH agent into the container, and -t forces a pseudo-terminal, which ssh otherwise skips when given a command to run.) The rest is mostly just to start the shell on the container in the same directory as I was in on the host. There is probably a simpler way.


The booooooring bits:

When we started the container, we mentioned a one-off setup script.

The script does a few things:

  1. Creates a new key pair specifically to SSH to the container.
  2. Creates the custom jhartley profile, which causes all containers started with it to:
       • Create a new user on the container, with user and group IDs mapped to those of my user on the host, presumably so that file permissions work for...
       • Mount my $HOME directory on the container. Might not always be what you want, but works for me right now.
  3. Doubtless due to my own misunderstanding somewhere, in order to get working IPv4 connections to my containers, I had to disable IPv6 connections to them.
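For what it's worth, here is a hypothetical reconstruction of roughly what such a setup looks like as lxc commands. This is not the actual script; the key path, profile name, and id values are illustrative, and the IPv6 workaround is shown as a bridge-level setting:

```shell
# Hypothetical sketch of the one-off setup; names and ids are illustrative.

# 1. A key pair used only for SSHing into containers.
ssh-keygen -t ed25519 -f ~/.ssh/lxd_ed25519 -N ""

# 2. The custom profile.
lxc profile create jhartley

# 3. Map my host uid/gid onto the container user's (1000), so file
#    permissions agree across the shared $HOME mount.
lxc profile set jhartley raw.idmap "both $(id -u) 1000"

# 4. Mount $HOME into every container created with this profile.
lxc profile device add jhartley home disk source="$HOME" path="/home/$USER"

# 5. Disable IPv6 on the bridge entirely (my IPv4 workaround).
lxc network set lxdbr0 ipv6.address none
```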

Dina font as an OTF.

The Dina font, converted to an OpenType Font (see screenshots at the bottom of the page):

📦 Dina-v2.93-otf.tar.gz

Pango dropped support for traditional bitmap fonts in v1.44 -- i.e. from Ubuntu 20.04, Focal, onwards.

So all bitmap fonts need to be converted into a format that will render, i.e. a vector format such as OpenType that allows bitmaps to be embedded. (This is not a conversion of the bitmap into an outline, which would lose the advantages of the crisp, tiny bitmaps.)

For most bitmap fonts, this conversion will be done for you, by packagers or font authors.

But you'll need to do it yourself for any peripheral fonts that you love more than your distribution does. Here's how I did it for my beloved Dina.

1. Identify the font file.

fc-list | grep Dina

2. Convert.

Use either command line tools, or fontforge.

2.1 Using fontforge

A GUI tool.

  1. Open up fontforge, paste the font path in.

  2. File / Generate Fonts.

  3. Select:
       • Left dropdown: "OpenType (CFF)"
       • Right dropdown: "In TTF/OTF"

  4. Generate.

The results have some problems. I'm using it in gnome-terminal:

  • People converting other fonts report issues with ugly gaps between characters. But I don't see that, perhaps because it's a monospace font?
  • The converted font is invisible in font selection dialogs, making it look like the process did not work. But once selected, by clicking around blindly, then the font displays fine in applications.
  • Using a font size which is not defined in the font displays a blank terminal, instead of falling back to some other font.
  • Using ctrl-+/- to select font sizes cycles through three of the four defined sizes. I don't know why it skips one. But all four are usable if you explicitly select a size.

2.2 Using command-line tools

The process is described at https://fedoraproject.org/wiki/BitmapFontConversion.

Ubuntu's released version of fonttosfnt (1.0.4) produces unusable results:

  • Only the 1st and 2nd smallest font sizes are preserved.
  • In the 2nd smallest size, all variations are too bold, so that 'bold' variations look 'double-bold'. (Italics look really ugly too; this may just be a result of the emboldening.)

TODO: Consider trying the latest fonttosfnt (1.1.0) https://gitlab.freedesktop.org/xorg/app/fonttosfnt or at least filing an issue there to try and get some help.

3. Install

  • Copy to ~/.local/share/fonts (or ~/.fonts, right?)
  • fc-cache -f

The result

I know, it doesn't look like much.

But compare it with a regular vector font. Here's Ubuntu Mono, the best of the vector fonts I could find at these sizes. Blurry and inconsistent and hard to read:

Conky.

In the typical mad-scientist thrills-per-minute that is the Linux way, adding a CPU meter to my desktop involved crafting my own conky configuration file.

As always, building your own is a chore that crops up when you least expect it. But on the other hand, the opportunity for functional and aesthetic work results in something artisanally crafted to exactly meet your own personal needs. Something you can feel a little pride about. An elegant weapon, for a more... civilized age.


Vonnegut on software development teams.

So here's a thing. Spotted this in some of Kurt Vonnegut's personal correspondence, talking about an instructor of his named Slotkin:

What Slotkin said was this: no man who achieved greatness in the arts operated by himself; he was top man in a group of like-minded individuals. This works out fine for the cubists, and Slotkin had plenty of good evidence for its applying to Goethe, Thoreau, Hemingway, and just about anybody you care to name.

If this isn't 100% true, it's true enough to be interesting—and maybe helpful.

The school gives a man, Slotkin said, the fantastic amount of guts it takes to add to culture. It gives him morale, esprit de corps, the resources of many brains, and—maybe most important—one-sidedness with assurance.

Reminds me powerfully of my growing impressions of the environment a person needs to be in in order to do great things in software. There is no doubt some effectiveness in assembling great individuals to create a great team.

But in my personal experience, there is a whole lot more value in creating a great team by instilling the right values, and then watching the members visibly level each other up, producing a succession of great individuals, and only subsequently attracting more of the same.

Swamp Thing, Vol 1: Saga of the Swamp Thing

Swamp Thing cover

by Alan Moore, John Totleben, & Steve Bissette. spoilers

Moore's deconstruction of existing characters continues. Originally Swamp Thing was Alec Holland, miraculously transformed by an infusion of artificially stimulated plant matter. When Alan Moore takes over the writing, Swamp Thing's ostracization and existential dread is compounded by the discovery that this origin story has been a delusion all along. Alec Holland was killed outright in the accident, and an accumulation of plant matter grew around his decaying form, integrating the physical remains of his memories into a creature that yearned to recapture its human form, but was never human in the first place.


Miracleman, books 1, 2 & 3

Miracleman cover

by Alan Moore, Alan Davis, & John Totleben. spoilers

I spent a little time digging out earlier works of Alan Moore. These inter-library loans didn't disappoint.

Originally published as Marvelman by Mick Anglo, from 1954-59. Legal battles rebranded the character as Miracleman in 1985.

The opening pages reprint one of those campy early stories, involving primary-colored moralizing while flying around to punch time-travelling Nazi super-scientists.

They then continue with Alan Moore's postmodern 1980s reboot. This recasts the simplistic tales of the original period as a placating dream, fed to a captured Miracleman by his nemesis. His hokey origin story is similarly re-ploughed. The ensuing tales are dark and introspective.

One thread follows the emotional stresses placed on Miracleman when incarnated as his human alter-ego, the frail and fallible half of a godlike being. He's unable to conceive a child with his wife, although Miracleman can, and succumbs to self-loathing and jealousy, culminating in a touching scene in which he climbs a mountain, leaves a forlorn monument, and changes into Miracleman one last time, never to change back.

Yes, this is more uneven than Moore's later works. Yes, it's unashamedly an underwear-on-the-outside superhero story. But nonetheless I loved it, and scenes like the above stayed with me for months.

Also this month:

The Atrocity Archives by Charles Stross. The conceit of Lovecraftian horror rationalized as a mathematical, computable topic is appealing to me, and kept the pages turning, but I didn't ultimately find it life-changing.

Nightwings by Robert Silverberg. A fantastical far-future tale of humanity split into occupational castes, guarding the world against prophesied invasion. Not my thing.