Ferris Bueller's Day Off

Ben Stein in Ferris Bueller's Day Off

Directed and written by John Hughes, 1986. IMDB

So, way back in my teen years, we had a VHS tape of this, which friends and I played and played and played, probably racking up more rewatches than any other movie in my life. So it was a pleasure to break it out for our 11-year-old, some 37 years later (!), to see whether it still holds up, and find that it really does.

By chance this was the week after we'd just watched The Blues Brothers, so we got to compare and contrast two movies set in Chicago - a privileged white story, and a poverty-stricken, largely Black one - which even share scenes filmed in the very same restaurant.

Back then, I had no idea who Ben Stein was, so it was amusing to see him now and suddenly join the dots. Apparently his infamous "voodoo economics" speech had no script and was ad-libbed.

Rewatching Rooney's comical attempts to break into the Buellers' house made me realize for the first time that this was Hughes' dry run at what would become Home Alone.

I had always been frustrated that I'd never been able to lay my hands on the "You're not dying" song that Cameron plays while sick in his bedroom (here are the few seconds of it on YouTube, exactly as it appears in the movie.)

Now we have the Internet, I can see that this failure wasn't exactly my fault - there is no such song. The few bars we hear were whipped up by Ira Newborn specially for the film, based on an old Louis Armstrong song, Let My People Go. Fortunately for us, one man was obsessed enough to recreate a full-length song based on the snippets from the movie. Daniel Simone's Let My Cameron Go is full of a lush Pink Floyd sound, and ripe with the sort of ecstatic anticipation that even Roger Waters would be proud of.

Duly added to my rotation for next time I'm sick.

Resolution and The Endless


Resolution (2012)

The Endless

The Endless (2017)

I only had a hazy awareness of Justin Benson and Aaron Moorhead, the writing and directing duo, before watching these movies. But having discovered them, I now realize that they are doing just about my favorite thing in film: quirky, intense, psychological drama wound around some high-concept science fiction.

Going in, I hadn't realized the two films are related. But then they contain the same scene, viewed from two different angles (pictured above), and it starts to become clearer. As it happens, I watched them out of order - my enthusiasm for The Endless caused me to look up their earlier Resolution. But with hindsight, I think this is actually the best order to view them. If Resolution has a weakness, it's that the science fictional elements seem a bit arbitrary. Why should this supernatural entity focus its narrative-obsessed attentions on these two men, here in this cabin, out in the middle of nowhere? But in The Endless, this particular brand of supernatural outlandishness is revealed to be just part of a wider pattern, affecting many people in this geographical area. Although this is the bigger, weirder story, it is more fully fleshed out and becomes more believable, creating a setting which recontextualizes and improves the earlier film.

Rating: 10/10 if you like mindbending SF horror, 0/10 if you prefer something a bit more polished and comfortable.


Aurora

Aurora cover

by Kim Stanley Robinson, 2015

It has long been held by fans of science fiction that fantasy is a lowly subset of science-fiction, or perhaps a disreputable cousin, one for whom the normal rules of discernment do not apply. If such unlikely and unrealistic things as dragons and magic are allowed, the reasoning goes, then the book cannot be relied upon to deliver any kind of coherent narrative experience, since the lapsed rule-set now allows for any old ex machina plot twists to save the day. A magical "defeat the evil" spell? No problem. A new mythical creature capable of defeating the previously unassailable one? Why not? All reason is gone.

It's more useful, though, to invert the hierarchy of this received wisdom, and consider science fiction as a subset of fantasy. Mentioning this in fandom circles blows mental fuses. Does not compute. But the speculative flights of science fiction are also fantasies. Just fantasies that a particular type of person finds especially beguiling, compelling, and believable. To some extent, I concede that on occasion they are believable because they seem to be a reasonable extrapolation of our current situation. But no matter how reasonable your extrapolation seems to be, it's always possible that reality will zig instead of zag, and even the most humdrum tale of a rocket man's life will find itself at odds with the unexpected reality of suspended human spaceflight in the face of spiraling real-world costs. The vision that one is selling is always, to a greater or lesser extent, a wishful one - a fantasy.

This becomes immediately apparent once we stray beyond the confines of low Earth orbit, to take in the wider scope of science fiction, the vast majority of which encompasses tales across the galaxy, nay, the universe, including time travel, teleportation booths, aliens of every color, quantum reality displacement, and multiversal escapades in which literally everything is possible. These are very clearly fantasies, and it is intensely curious to me why this sort of fantasy is considered more "realistic" or "believable" than, say, flying lizards with fiery breath - even though the narrative hand-waving that explains away the former ("It's an alternate universe, where different rules apply") is more than adequate to explain anything in the fantasy realm.

I once asked my guru science fiction critic Damien Walter what makes people consider some stories believable, while others are not. He replied with a statement that has stuck with me ever since: People are willing to invest the effort to provide the conceptual scaffolding around an idea to make it seem believable (e.g. to speculate on the mechanism that might allow for a faster-than-light hyper-drive) when the story fulfills some deeper psychological need for them.

Hence, a story on the same topic as Aurora, of a generation ship sent to colonize an Earth-like planet orbiting the nearby star Tau Ceti, is (usually) a story about the triumph of modernism. Such stories leverage the sources of strength in the modern world - science and technology and colonialism - and a reader who is invested in a modern world-view will feel validated and empowered by this type of fantasy. They will be willing to exercise whatever extracurricular creative effort is required to make the story believable. Doing so will inspire them with the feeling that their world all makes sense, is leading to something, and that their daily grind is a part of the heroic story of how humanity transcends its planetary origins. This is much more fulfilling than investing any effort getting on board with the waning powers of superstition that are represented by the fantasy genre.


In keeping with this, Aurora's colonists are granted every conceivable boon that science and industry can supply. A ship fully ten kilometers across, enclosing twenty four massive biomes, each stuffed full of hills and lakes, soil and forests, microbes and wildlife. A population of well over a thousand human beings. Miraculous nanotech fabricators, and megatons of elemental feedstock to run them. A miraculous acceleration laser, fired from Titan for decades after departure, allowing the ship to coast up to 0.1c, making the journey in only seven generations, while retaining enough fuel to decelerate for arrival. A miraculous magnetic shield protects the ship from catastrophic collisions with stray particles along the way. A benign AI runs the ship, amusingly pressed into service as the narrator of the tale, and grows visibly more sentient, emotionally robust and capable as the years pass.

And hot damn, they are going to need all these things, because in this story, human interstellar colonization is revealed for the fantasy it really is. Nothing works out, and the problems encountered are far bigger than anything the ship's designers planned for. Although the ship does limp into orbit around the destination planet, soon after that people start dying, major disagreements emerge which descend into catastrophic riots, and the ship's society falls apart - when have humans ever invented a reliable form of governance?

There is a rip-roaring final act that stretches my credulity, but revives the stakes and entertainment value in what might otherwise be a relentless downer of a read.

The book, and interviews with the author, caused quite a stir in science fiction circles. People were extremely angry. The book attacked their deeply held belief that the future of humanity is as a successful space-faring species. They had invested their identity in this world-view, because of how it serviced their psychological needs for fulfillment and meaning. They had developed a religious conviction around this particular kind of fantasy.

The point of Aurora is to highlight the idea of human interstellar colonization as a dangerous distraction from the very real project of taking care of the long-term health of our planet and our society right here on Earth. Even starting to think seriously about interstellar travel will take beyond-miraculous levels of technology and resources. If, by some miracle, we make it to a year-10,000 utopia, with infinite resources and the wisdom to manage them, then sure, we can worry about interstellar travel. But for now, can we just focus on some of the very basic problems of existence here on Earth, like how to keep everyone fed and liberated, educated and fulfilled, without killing our planet to do it? Maybe invent some sort of government that is reliably able to do that? That would be nice.

Fully Operational

Now witness the power of this fully armed and operational battle station.

My desk featuring too many computers

New job means new laptop means it's time to clean and re-org the desk.

Leftmost blue skies
Linux laptop (a free hand-me-down from a job ten years ago). Acting as the house Plex / streaming media server, usually tucked away more discreetly than this.
Left top green forest
Heavy duty work / gaming Linux laptop ("hardware bonus" from my last employer). Has been my primary work machine, but sounds like it's getting replaced by...
Left bottom spaceship drawing
MacBook Pro (brand new! Just unwrapped yesterday. Thank you, new employer Lambda!) Looks like this means I'm returning to developing on a Mac and VMs, after a full decade on Ubuntu & derivatives. I'm told Docker Desktop now behaves better than it used to.
Left bottom, under the Mac
You can sort of see the 10" whiteboard I use to combat ADHD by writing a sentence about what I'm supposed to be working on, so I can spot it every few minutes and drag my mind back to the task at hand (a technique described in the fabulous Self Command by Chris DeLeon).
Center
Main monitor and wireless tenkeyless mechanical keyboard & mouse combo, all switchable to any of the laptops. Under the keyboard you can sort of see the Magic: The Gathering 13x24" gaming mat (free from the local gaming store's MtG lessons) pressed into duty as the world's most gigantic, beautiful, and luxurious mouse mat.
Right monitor, keyboard and mouse
are wired to the Windows gaming PC under the desk (not visible). The kiddo's current Astroneer session is visible. The monitor is switchable to any of the laptops.
Right tab
Absolute workhorse 12.6" Android tablet on which I do most of my reading, laid in the picture here just to be gratuitous.

Illustrating Uses of IBM Cloud Security Groups

I wrote this high-level public-facing guide while employed by IBM, creating the security groups feature for IBM Cloud. It used to reside on the IBM blog, but has recently been replaced by newer content, so I've preserved it here for posterity.

This article illustrates a few possible uses of IBM Cloud Security Groups, a per-instance firewall for IBM Cloud virtual instances.

Why Use Security Groups?

Security groups firewall your IBM Cloud applications from nefarious network traffic, protecting you and your company from the efforts of “industrious users” trying to bring down your application, or make off with your customers' credit card details. If those sound like sub-optimal outcomes for your situation, read on…

Allow Incoming SSH Connections

The simplest use of security groups is to allow a single type of network connection to your instances, blocking all other traffic. For example, you might allow only incoming SSH connections, which are TCP connections on port 22. All other types of traffic, such as ICMP ‘ping' connections, or TCP connections on other ports, are blocked.

A diagram representing SSH connections being allowed from support engineer to instances within security group 1, while ping connections are denied

Fig 1. A security group configured to allow incoming SSH.

When instances 1 & 2 are added to your security group, firewalls are created directly on those instances, configured to allow or deny the corresponding traffic. Hence, your support engineer can create SSH connections, but cannot send arbitrary network traffic.

Allow SSH from a Specified IP Address

The above scenario allows SSH connection attempts from any IP address. To increase security, you might allow connections only from a particular instance.

A diagram representing SSH connections being allowed from a known IP address to instances within security group 1, while connections from a disgruntled employee at a different IP address are denied

Fig 2. A security group configured to allow incoming SSH connections (TCP port 22) from a particular IP address.

The security group has been configured with the IP address used by a support engineer – the single instance that is authorized to make a connection. Connections from other instances are blocked.

As well as allowing traffic from a single IP address, security groups can be configured with a CIDR block, to allow traffic from all instances on that subnet.

A diagram representing SSH connections being allowed from a known subnet to instances within security group 1, while connections from a hacker at an IP address outside the subnet are denied

Fig 3. A security group configured to allow incoming SSH connections (TCP port 22) from all instances on a given subnet.

The diagram shows instances on an authorized subnet (deploy1 & deploy2), representing a project's CI/CD infrastructure, all being able to make SSH connections to our protected instances, for example to deploy updates to our application. Other instances, such as those of an enterprising hacker, are blocked.

Allow Application Instances to Access a Distributed Data Store

Another use case would be to allow application servers to access the nodes of a distributed data store. To do this, we'll make a few changes to the above security group configuration.

Firstly, we modify the open port from SSH's 22, to MongoDB's default query API port of 27017.

Secondly, security group 1 is now allowing the creation of outgoing network connections, from our application instances, where previously it was allowing incoming connections. Security group rules can manage traffic in either direction.

Thirdly, we don't want our data store instances to be unprotected, so we'll put them into a security group of their own. Since there are now two security groups, we'll give them names: “app” and “db”.

For now, security group “db” allows all incoming MongoDB queries (TCP connections on 27017), without restricting the IP addresses allowed to make connections. We'll fix that soon.

A diagram representing queries from instances in security group "app" being allowed to connect to port 27017 of the subnet of instances in security group "db", which similarly allows incoming connections on TCP port 27017

Fig 4. Two security groups configured to allow application servers to send queries to MongoDB nodes on a subnet.

For clarity, these diagrams don't show the many connections which are blocked by this setup - which are, of course, the whole point of security groups. On the above diagram, blocked connections would include:

  • On app instances:

    • All incoming connections.
    • Outgoing connections that aren't TCP, or are on the wrong port.
    • Outgoing connections to anything other than a DB instance.
  • On DB instances:

    • All outgoing connections.
    • Incoming connections that aren't TCP, or are on the wrong port.

This setup does have a couple of problems. It relies on our MongoDB instances all residing on a single subnet. Worse, as mentioned earlier, it allows any IP address in the world to make queries to MongoDB. We'll fix both of these next.

Using Remote Groups to Specify Arbitrary IP Addresses

Specifying allowed instances using a CIDR block can be inappropriate. It's often preferable to specify a set of arbitrary IP addresses instead. We can do this by using a second security group – known as a remote group – to contain the set of allowed instances. Our first security group can then refer to the remote group to specify which instances are allowed.

In our example above, we would modify the “app” security group by dropping the CIDR block, and replacing it with a reference to “db” as a remote security group.

Similarly, we would modify the “db” group to use “app” as a remote group, only allowing connections from the members of that group.

A diagram representing instances in security group "app" being allowed to connect to instances in security group "db", via port 27017 only

Fig 5. Two security groups configured to allow application servers to send queries to MongoDB nodes using remote groups.

This has several advantages. Firstly, our data store no longer accepts malicious queries from hackers all over the internet – only from our app instances.

Secondly, our data store instances no longer need to occupy a subnet; they can have arbitrary IP addresses.

Because the security groups now specify allowed hosts by referencing each other, when the membership of either group changes, the instance-level firewalling rules on all instances are updated automatically, to allow or deny traffic based on the new membership.

This configuration starts to show what makes security groups a flexible, dynamic, low-maintenance solution.

Accepting Web Requests

Our application instances aren't any use without a web front end. We put our web instances into their own security group (“web”), which allows incoming requests from users on the web, and outgoing API requests to our app instances, on port 61516.

Similarly, we need to add a second rule to the “app” security group, to allow incoming requests from “web”.

A diagram representing a user connecting via port 80 to instances in security group "web", which connect to instances in group "app" via port 61516, which connect to group "db" via port 27017

Fig 6. A traditional three-tier application using remote security groups.

The more types of server we add to our setup, the more benefit “remote” groups provide, by minimizing setup configuration, and by automatically keeping firewalling rules up to date when group membership changes.

A future blog post will discuss how to set up this three-tier scenario, and describe details such as how to use multiple network interfaces on an instance, such as the web instances which will use a public IP address exposed to users, versus a private IP exposed to the API instances.

Add a Bastion

We need some way to access our servers, so that we can, for example, deploy new versions of our application. Commonly this is achieved using a bastion server, which provides a single, carefully hardened point of access.

Here we add a bastion server, and modify all our security groups to allow SSH access from the bastion to all our instances. The bastion instance itself would be configured to allow incoming connections only from the appropriate points in a CI/CD infrastructure (not shown).

A diagram representing the same elements as figure 6 above, with the addition of a "bastion" group which connects to all instances via SSH on port 22

Fig 7. Adding a bastion server with SSH access to all other instances.

This is starting to look like a realistic setup for a modest but scalable multi-tier application.


Security groups are a flexible and powerful way to firewall network traffic to and from your system's instances. We've shown how they might be used in a few typical scenarios, and hopefully demonstrated that they are flexible enough to accommodate many others. For more information, see Getting Started with Security Groups.

Jonathan Hartley's smiley face
Jonathan Hartley
Senior Cloud Developer

TIL: git push --force-with-lease

Don't ever type git push --force. Yes, there are times we have to hold our nose and do a force push. Maybe the project requires contributions to be rebased or squashed. Maybe we pushed the nuclear launch codes. But there are failure modes:

  • You might be accidentally pushing to or from the wrong branch, and hence are about to blow away valuable work at the remote. Yes, it's unlikely, and it can be fixed after the fact, but who knows how much embarrassing disruption and confusion you'll cause the team before you realize what you've done.
  • Do you always remember to check the state of the remote, to make sure there aren't unexpected extra commits on the remote that you'll unknowingly blow away when you push? Do you enjoy always having to type those extra commands to pull and check the remote commits?
  • No matter how conscientious we are about checking the above, there is a race condition. We might check the remote, then someone else pushes valuable changes, then we force push and blow them away.

Although there are conventions that can help with all the above (e.g. only ever force pushing to your own fork, to which nobody else ever pushes), they aren't generally watertight. (e.g. you might have pushed something yourself, before vacation, and forgotten about it.)

So the generally agreed method to avoid the above failure modes is "be more careful", which sounds to me like the common prelude to failure. What we need are push's newer command-line options:

--force-with-lease

Like --force, but refuses to push if the remote ref doesn't point at the same commit that our local remote-tracking branch 'origin/mybranch' thinks it should. So if someone else pushes something to the remote's 'mybranch' just before we try to force push, our push will fail until we pull (and, in theory, inspect) the commits that we were about to blow away.
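To see that protection in action, here's a throwaway sketch of my own (not from the original post), which builds a bare "remote" and two clones in a temp directory. Alice force pushes after Bob has pushed, and --force-with-lease rejects it:

```shell
# Sketch: reproduce the race that --force-with-lease protects against.
dir=$(mktemp -d)
cd "$dir"
git init -q --bare remote.git

git clone -q remote.git alice 2>/dev/null
cd alice
git -c user.email=a@example.com -c user.name=alice commit -q --allow-empty -m one
git push -q origin HEAD
cd ..

git clone -q remote.git bob 2>/dev/null
cd bob
git -c user.email=b@example.com -c user.name=bob commit -q --allow-empty -m two
git push -q origin HEAD
cd ../alice

# Alice amends without fetching, then tries to force push over Bob's commit.
git -c user.email=a@example.com -c user.name=alice commit -q --amend -m one-amended
if git push --force-with-lease origin HEAD 2>/dev/null; then
    result=pushed
else
    result=rejected    # the remote ref moved since Alice last fetched
fi
echo "$result"
```

The push is rejected because the remote branch no longer matches Alice's remote-tracking ref; a plain --force here would have silently discarded Bob's commit.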

It turns out that this is inadequate. We might have fetched an up-to-date remote branch, but somehow or other ended up with our local HEAD on a divergent branch anyway:

     C  origin/mybranch
     |
    B¹   B²  HEAD -> mybranch
      \ /
       A

In this situation, --force-with-lease will allow us to push, not only blowing away the original commit B¹, as we intended, but also C, which was maybe pushed by someone else before we fetched.

To guard against this, we can use the even newer option:

--force-if-includes

This makes --force-with-lease even more strict about rejecting pushes, using clever heuristics on your local reflog to check that the remote ref being updated doesn't include commits which have never been part of your local branch.

Upshot is, I plan to default to always replacing uses of --force with:

git push --force-with-lease --force-if-includes ...

That's a lot to type, the options don't have short versions, and it's easy to forget to do. Hence, shadow git to enforce it, and make it easy. In .bashrc or similar:

# Shadow git to warn against the use of 'git push -f'
git() {
    local is_push=false is_force=false
    for arg in "$@"; do
        [ "$arg" = "push" ] && is_push=true
        [ "$arg" = "-f" -o "$arg" = "--force" ] && is_force=true
    done
    if [ "$is_push" = true ] && [ "$is_force" = true ]; then
        # Suggest alternative commands.
        echo "git push -f: Consider 'git push --force-with-lease --force-if-includes' instead, which is aliased to 'gpf'"
        return 1
    fi
    # Run the given command, using the git executable instead of this function.
    command git "$@"
}

# git push force: using the new, safer alternatives to --force
gpf() {
    # Older versions of git don't have --force-if-includes. Fall back to omitting it.
    if ! git push --quiet --force-with-lease --force-if-includes "$@" 2>/dev/null; then
        git push --quiet --force-with-lease "$@"
    fi
}

Then trying to do it wrong tells you how to easily do it right:

$ git push -f
git push -f: Consider 'git push --force-with-lease --force-if-includes' instead, which is aliased to 'gpf'
$ gpf

(The [1] is my prompt telling me that the last command had an error exit value.)

Structural Pattern Matching in Python

I read through descriptions of structural pattern matching when it was added in Python 3.10 a couple of years ago, and have studiously avoided it ever since. It seemed like a language feature that's amazingly useful in one or two places, like writing a parser, say, and a horrifically over-complicated mis-step just about everywhere else.

Update: A day after writing this I see that Guido van Rossum wrote exactly that, a parser, to showcase the feature. I'm guessing he writes a lot of parsers. I definitely don't write enough of them to think this language feature is worth the extra complexity it brings.

Regardless, I really ought to remember how it works, so this is my attempt to make the details stick, by writing about it.

If you're not me, you really ought to be reading about it from the source instead:

Basic structure

match EXPRESSION:
    case PATTERN1:
        ...
    case PATTERN2:
        ...
    case _:
        ...

This evaluates the match EXPRESSION, then tries to match it against each case PATTERN, executing the body of the first case that matches, falling back to the optional final _ default case. (match and case are not keywords, except in the context of a match...case block, so you can continue using them as variable names elsewhere.)

But what are PATTERNs, and how are they tested for a match?


Patterns can be any of the following. As becomes increasingly obvious down the list, the real power of this feature comes from composing each of these patterns with the others. For complicated patterns, parentheses can be used to indicate order of operations.


Literals

Like other languages' traditional switch statement:

match mycommand:
    case 'start':
        ...
    case 'stop':
        ...
    case _:
        raise CommandNotFoundError(mycommand)

Such literal case patterns may be strings (including raw and byte-strings, but not f-strings), numbers, booleans or None.

Such cases are compared with equality:

match 123:
    case 123.0:
        # matches!

except for booleans and None, which are compared using is:

class Any:
    def __eq__(self, _):
        return True

myfalse = Any()

match myfalse:
    case False:
        # Doesn't match, even though myfalse == False
        assert False

Variable names

We can replace a literal with a variable name, to capture the value of the match expression.

match command:
    case 'start':
        ...
    case 'stop':
        ...
    case unknown:
        # New variable 'unknown' is assigned the value of command
        ...

The 'default' case pattern _ is just a special wildcard which matches anything and binds no name.

Beware the common error of using "constants" as the case pattern:


match error:
    case NOT_FOUND: # bad

The above case is intended to test for error == NOT_FOUND, but instead assigns the variable NOT_FOUND = error. The best defense is to always include a default catch-all case at the end, which causes the above NOT_FOUND case to produce a SyntaxError:


match error:
    case NOT_FOUND:
    case _:
SyntaxError: name capture 'NOT_FOUND' makes remaining patterns unreachable

To use a 'constant' in a case pattern like this, qualify it with a dotted name, such as by using an enum.Enum:

match error:
    case errors.NOT_FOUND:
        # correctly matches


Sequences

Using a list-like or tuple-like syntax, a case matches sequences with the right number of items, like Python's existing iterable unpacking feature. Use * to match the rest of a sequence. Included variable names are bound if the case matches on all other criteria.

match command:
    case ('start', name):
        # New variable name=command[1]
    case ('stop', name):
        # New variable name=command[1]
    case ('stop', name, delay):
        # New variables name=command[1], delay=command[2]
    case ('stop', name, delay, *extra):
        # New variables name=command[1], delay=command[2] & extra=command[3:]
    case _:
        raise BadCommand(command)


Mappings

Using a dict-like syntax. The match expression must contain a corresponding mapping, and can contain other keys, too. Use ** to match the rest of a mapping.

match config:
    case {'host': hostname}:
        # 'config' must contain key 'host'. New variable hostname=config['host']
    case {'port': portnumber}:
        # 'config' must contain key 'port'. New variable portnumber=config['port']
        # Remember we only use the first matching case.
        # If 'config' contains 'host', then this 'port' case will not match.
    case {'scheme': scheme, **extras}:
        # new variables 'scheme' and 'extras' are assigned.

Case patterns may contain more than one key-value pair. The match expression must contain all of them to match.

    case {
        'host': hostname,
        'port': portnumber,
    }:

Objects and their attributes

Using class syntax, the value must match an isinstance check with the given class:

match event:
    case Click():
        # handle click
    case KeyPress():
        # handle key press

Beware the common error of omitting the parentheses:

match myval:
    case Click: # bad
        # handle clicks

The above case is intended to test isinstance(myval, Click), but instead creates a new variable, Click = myval. The best defense against this error is to always include a default catch-all case at the end, which makes the Click pattern produce a SyntaxError by making subsequent patterns unreachable.

Attribute values for the class can be given, which must also match.

match event:
    case KeyPress(key_name='q', release=False):
        ...
    case KeyPress():
        ...

Values can also be passed as positional args to the class-like case syntax:

    case KeyPress('q', True):

If the class is a namedtuple or dataclass, then positional args to a class-like case pattern can automatically be handled using the unambiguous ordering of its attributes:

from dataclasses import dataclass

@dataclass
class Dog:
    name: str
    color: str

d = Dog('dash', 'golden')

match d:
    case Dog('dash', 'golden'):
        # matches

But for regular classes, the ordering of the class attributes is ambiguous. To fix this, add a __match_args__ attribute on the class, a tuple which specifies which class attributes, in which order, can be specified in a case pattern:

class KeyPress:
    __match_args__ = ('key_name', 'release')

    def __init__(self, key_name, release):
        self.key_name = key_name
        self.release = release

event = KeyPress(key_name='q', release=False)

match event:
    case KeyPress('q', False):
        # matches!

As you might expect, the literal positional args can be replaced with variable names to capture attribute values instead:

match event:
    case KeyPress(k, r): # names unimportant, order matters
        handle_keypress(k, r)

Positional sub-patterns behave slightly differently for builtins bool, bytearray, bytes, dict, float, frozenset, int, list, set, str, and tuple. A positional value is matched by equality against the match expression itself, rather than an attribute on it:

match 123:
    case int(123):
        # matches
    case int(123.0):
        # would also match, since 123 == 123.0, but the first case already did

Similarly, a positional variable is assigned the value of the match expression itself, not an attribute on that value:

match 123:
    case int(value):
        pass

assert value == 123

The values passed as keyword or positional args to class-like case patterns can be more than just literals or variable names. In fact they can use any of the listed pattern types. For example, they could be a nested instance of this class-like syntax:

class Location:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class Car:
    def __init__(self, location):
        self.location = location

mycar = Car(Location(11, 22))

match mycar:
    case Car(location=Location(x=x, y=y)):
        # matches, and captures 'x' and 'y'

assert x == 11
assert y == 22

Combine patterns using |

To match either one pattern or another:

    case 1 | True | 'true' | 'on' | 'yes':
        # matches any of those values

Capture sub-patterns using as

We've seen how we can either match against a value, or capture the value using a variable name. We can do both using as:

    case 'a' | 'b' as ab:
        # matches either value, captures what the value actually was

This might not be much use when capturing the whole match expression like that. If the match expression is just a variable, then we could instead simply refer to that variable. But using as can be useful when the match expression is lengthy or has side-effects:

match events.get_next():
    case KeyDown() as key_event:

or to capture just a component of the whole expression. Contrived example:

    case ('a' | 'b' as ab, 'c'):
        # matches ['a', 'c'] or ['b', 'c'], and captures the first letter in 'ab'

An if guard clause

Add arbitrary conditions to the match:

    case int(i) if i < 100:
        # matches integers less than 100

Or, alternatively:

    case int() as i if i < 100:
        # matches integers less than 100


This feature seems rife with complexity. The flexible syntax of case patterns forms a new mini-language, embedded within Python. It has many similarities to Python, but also many initially unintuitive differences.

Consider a class-like case pattern such as case Click():. Anywhere else in the language, an expression like Click(...) would create an instance of the Click class. In a case pattern, it instead performs things like isinstance and attribute checks.

Similarly, a variable name in a pattern doesn't look up the variable's value as it would in ordinary Python. Instead, the matched value is bound to that name. This is the source of the annoying gotcha described above: bare "constants" like NOT_FOUND behave very unexpectedly when used in case patterns.

There are a few places in real-world code where structured pattern matching will produce nicer code than the equivalent using nested elifs. But equally, there are a lot of places where the elifs are a more natural match. Developers now get to choose which they're going to use, and then later disagree with each other about it, or simply change their mind, and end up converting code from one to the other.

If this was a simple feature, with low overheads, then I'd forgive its inclusion in the language, accepting the costs in return for the marginal and unevenly distributed benefits.

But it's really not simple. In addition to Python programmers all having to do an exercise like this post just to add it to their mental toolbox, it needs maintenance effort, not just in CPython but in other implementations too, and needs handling by tools such as syntax highlighters and type checkers. It really doesn't seem like a net win to me, unless you're writing way more parsers than the average programmer, which no doubt the champions of this feature are.


Ur-Fascism cover

by Umberto Eco, 1995.

Eco's prose has left me in the dust on occasion in the past. I missed so many references, or failed to keep up with the relentlessly nested layers of meaning, that I was simply holding on for the ride. This essay matches Eco's characteristically dense, intellectual prose, studded with foreign phrases, and references to contemporary thinkers, historical movements and causes and revolutions and dictatorships, and their antecedents. But it is short, and perhaps in contrast to the flourishes of brilliance that comprise his fiction, this is a factual piece, and is written to be understood rather than to dazzle. It begins by establishing Eco's credentials to speak on this topic, with the first of several entrancing first-hand indications of what it was like to grow up as an intellectual young child in Italy under Mussolini, and the revelations that followed at the opening up of his world at the end of WWII.

He differentiates the truly totalitarian fascism of, say, Nazism, which subordinated all of life to the state, from the looser, less coherent Italian fascism, noting that this looseness did not derive from any increment of tolerance, merely from the absence of a sufficiently encompassing underlying philosophy. Despite this, it is the Italian mode, the first right-wing dictatorship in modern history, from which subsequent dictators seem to have drawn most stylistic inspiration, and from which our generic term "fascism" derives.

The last half of the essay describes how fascism means different things in different contexts, but the various incarnations through history have exhibited sufficiently overlapping sets of symptoms as to form a family resemblance. Eco enumerates 14 identifying traits, noting that the presence of even one of them can be sufficient to allow fascism to coagulate around it. Mostly for my own benefit (with my own parenthesised observations), they are, briefly:

  1. Fascism incorporates a cult of tradition. This can be deployed as an automatic refutation of any undesirable new ideas, enshrining in their place the immutable wisdom of a mythical past. In addition, traditionalism undermines the perceived value of learning in itself - pre-emptively thwarting troublesome intellectuals. This anti-intellectual received wisdom, in order to provide whichever justifications are required of it, requires the syncretistic combination of various ancient beliefs. As a result it must tolerate contradictions. Indeed, the more stark they are, the better they serve the purpose of selecting followers who will obediently think whatever they are told.

  2. The rejection of modernism. This is a powerful recruiting tool, enabling the fascist to leverage any dissatisfaction of the populace, laying claim to the emotionally appealing universal solution of a regression to simpler, happier times, while simultaneously rewinding societal progress in equality or liberty. While Nazis and fascists love their technology, this is a superficial tool, used in support of a deeply regressive project, namely the restoration of power to those with the strength and the will to take it. This irrationality goes hand in hand with anti-intellectualism.

  3. Value vigor and action above reflection. Thinking is emasculation. Culture is suspect insofar as it aligns with any sort of critical theory or values. Regard centers of analysis or learning such as libraries or universities with suspicion for harboring - or even indoctrinating - people of opposing political viewpoints. Again, this is deeply intertwined with (1) & (2), and its prevalence pre-emptively defuses any kind of mainstream understanding or critique.

  4. All of the above make it inevitable that any given fascist regime will be rife with internal contradictions. While modernity achieves its intellectual prowess through the nurturing of diverse thought, fascism cannot possibly withstand any critical analysis. Hence, disagreement is treason.

  5. The fascist appeal to popularity exploits and exacerbates the natural fear of the other, and hence is always inherently racist. Expect demonization of immigrants, foreigners, other nation states, as well as other marginalized groups, taking advantage of whatever local contemporary biases and fears might be present.

  6. The above exploitation takes the form of an appeal to the frustrations of the middle class - or whichever class can be most useful and readily mobilized by persuasion that their problems are caused by some identifiable other.

  7. Modernity genuinely does disintegrate traditional social bonds, along with sources of identity and meaning. Fascism's solution to this is to unify the disaffected under the only remaining banner common to them all, that of patriotism and nationalism. This unity is strengthened by emphasis on the country's enemies, and especially by conspiracy theories of secret international plots against the nation. Followers must feel besieged (as Trump advised the January 6th crowd that "America is under siege"). Eco makes special mention, in the U.S., of Pat Robertson's The New World Order, but potential sources of conspiracy are innumerable.

  8. Followers can be riled into a frenzy of humiliation at their enemies' wealth or power. Jews control the world and its money. Instead of coastal liberals, refer to coastal elites. But at the same time, the instinct to action requires that enemies can easily be defeated. Hence enemies are simultaneously too weak and too strong. Herein lies one of fascism's greatest weaknesses, responsible for several lost wars: it is constitutionally incapable of objectively assessing an enemy's strength.

  9. Goad followers into violent action with rhetoric not just of a struggle for survival, but by declaration that life is struggle, and hence pacifism is conspiring with the enemy.

  10. While fascism appeals for the participation of the population by promising empowerment for the majority, its naked power lust is a fundamentally aristocratic endeavor. The leader takes power from those too weak to oppose him, disdaining both conquered rivals and the subjugated population. Power struggles within the Party are vicious, and the party rides roughshod over the citizens, who likewise are leagues above the disenfranchised. The hierarchy is strict, steep, and ruthless. Elitism abounds, as does fear of losing one's status.

  11. The redress of modernity's threadbare social fabric, by emphasizing nationalism and strength, further erodes interpersonal solidarity. Each individual must become their own hero: strong, independent, and utterly without recourse in times of need. The cult of the hero is intimately entwined with a cult of death. Having only the narrow causes of the nation and the Party to live for, the hero yearns for a heroic death - or, better, to demonstrate their power and status by sending others to their death.

  12. The preeminent will to power, so often frustrated in an aristocratic, dog-eat-dog social order, manifests alternately in things like machismo, disdain for women, and phallic fetishization of weapons. Repressed insecurity breeds an outward contempt for unconventional sexuality, including chastity.

  13. Under fascism, the people have no innate rights, and hence no material preferences or expression. Instead, the leader pretends to interpret the Will of the People. This charade requires the party apparatus to select and amplify some emotional response of the people, and present it as representative, so that the party can be empowered to act on behalf of that supposed majority. Consider the amplification of manufactured outrage about culture war issues, so that elected representatives are empowered to act decisively on their own preferences, against the majority of the population's wishes. This leads directly to confrontation with institutions such as a parliament. Fascism will therefore cast aspersions on any properly functioning parliament's legitimacy.

  14. All fascisms make use of their own varieties of Newspeak, using an impoverished vocabulary and syntax, in order to limit the instruments for critical reasoning. This may appear in apparently innocent forms, from schoolbooks to popular talk shows.

TIL: Makefiles that are self-documenting, and process all extant files.

Self-documenting Makefiles

A trick from years ago, but I copy it around between projects enough that it deserves calling out. Add a target:

help: ## Show this help.
    @# Optionally add 'sort' before 'awk'
    @grep -E '^[a-zA-Z_\.%-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-10s\033[0m %s\n", $$1, $$2}'
.PHONY: help

Decorate other targets with a descriptive '##' comment, like "Show this help" above. Now calling the 'help' target will summarize all the things the Makefile can do, e.g.:

$ make help
help       Show this help.
setup      Install required system packages using 'apt install'.
%.pdf      Generate given PDF from corresponding .tex file.
all        Convert all .tex files to PDF.

You might choose to make 'help' the first target in the Makefile, so that it gets triggered when the user runs make without arguments.

Process all extant files

Make's canonical paradigm is that you tell it the name of the file to generate, and it uses the tree of dependencies specified in the Makefile to figure out how to build it. Typically you'll use automatic variables like $<, which expands to the first prerequisite (the matched .tex file here):

%.pdf: %.tex ## Generate given PDF from corresponding .tex file.
    pdflatex $<

The pitfall is that when invoking this, you have to name all the PDF files you want to generate. If the names are a fixed part of your build, they can be embedded in the Makefile itself:

all: one.pdf two.pdf three.pdf

But if their names are dynamic, you have to specify them on the command line, which is a pain:

$ make one.pdf two.pdf three.pdf

This is easy enough when re-generating all the PDFs that already exist:

$ make *.pdf

but is no help when you have a bunch of .tex files and just want Make to build all of them. This is going the opposite way to canonical make usage: we want to specify the existing source files (*.tex, in this case), and have Make build the resulting products.

To do it, we need our Makefile to enumerate the existing source files:

TEX_FILES = $(wildcard *.tex)

Using the 'wildcard' function here behaves better than a bare wildcard expansion, e.g. it produces no output when there are no matches, rather than outputting the unmatched wildcard expression.

Then use a substitution reference to generate the list of .pdf filenames, and make them the prerequisites of an 'all' target:

PDF_FILES = $(TEX_FILES:%.tex=%.pdf)

all: $(PDF_FILES)

Now make all will generate one .pdf file for each extant .tex file, regardless of whether the corresponding .pdf files already exist.
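Putting this section's pieces together, a minimal complete Makefile might look like the following sketch (recipe lines must be indented with tabs, and pdflatex is assumed to be installed):

```make
TEX_FILES = $(wildcard *.tex)
PDF_FILES = $(TEX_FILES:%.tex=%.pdf)

all: $(PDF_FILES) ## Convert all .tex files to PDF.

%.pdf: %.tex ## Generate given PDF from corresponding .tex file.
	pdflatex $<

help: ## Show this help.
	@grep -E '^[a-zA-Z_\.%-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-10s\033[0m %s\n", $$1, $$2}'

.PHONY: all help
```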