Ivan Reese 2022-10-30 02:02:35

Future of Coding • Episode 59

Richard P. Gabriel • Worse is Better

🔗 futureofcoding.org/episodes/059

Following our previous episode on Richard P. Gabriel's Incommensurability paper, we're back for round two with an analysis of what we've dubbed the Worse is Better family of thought products.

Next episode, we've got a recent work by a real up-and-comer in the field. While you may not have heard of him yet, he's a promising young lad who's sure to become a household name.

I am usually really thorough in my editing of the show, but this one I sort of had to rush out the door because the month is rapidly drawing to a close. If anyone spots any weird edits, or anything that sounds out of place, let me know. In particular, the sponsors (which now come at the end of the episode) might be a little rough. Oh well — pays the bills, amirite?

Jim Meyer 2022-10-30 04:44:35

i Land 😂

Personal Dynamic Media 2022-10-30 05:09:15

Thank you, this was a fun listen. I also really appreciate that you have dropped the disrespectful nicknames. Thank you.

When comparing the priorities of the authors of Unix versus the authors of ITS, I think it's worth remembering some of the technical differences between the hardware.

Unix grew up on a minicomputer, the PDP-11 (yes, it started life on a PDP-7, but the real growth into the operating system we would recognize today occurred on the PDP-11), where the kernel had to fit in 64K and each program had to fit in its own 64K (later models let you use 64K for code and a separate 64K for data). This environment naturally encourages one to prioritize performance and simplicity of implementation.

On the other hand, ITS grew up on a mainframe, the PDP-6 (later PDP-10), which had a 36-bit word and 18-bit addressing, making it possible for a single address space to contain substantially more memory. It's much easier to put more complexity into your kernel in this environment.

As a result, I'm not convinced that the differences in prioritization were fundamentally the results of the two cultures in question. I suspect the different priorities may have arisen partly from the technologies in question.

With respect to the question of how well the Unix interface hides complexity, I would argue that many Unix tools provide a good, simple interface for utilizing some pretty deep complexity. In fact, I think the relevant comparison is not Lisp versus C, but Lisp versus sh. The ubiquitous data type in Lisp is the list, while the ubiquitous data type in sh is the file full of variable-length, one-line records. Pipelines in sh fill the role of function composition in Lisp. The equivalent of C in ITS was the MIDAS assembler, which WAS really nice for an assembler. See wiki.c2.com/?SymbioticLanguages for some elaboration on where I'm coming from with this comparison.
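To make the pipeline-as-composition point concrete, here is an illustrative sketch (the file name input.txt is hypothetical) in the spirit of the classic word-frequency one-liner: each stage consumes and produces a stream of one-line records, much as nested Lisp calls pass a list from function to function.

```
# Count the ten most frequent words in a (hypothetical) input.txt.
tr -cs '[:alpha:]' '\n' < input.txt |  # split text into one word per line
  tr '[:upper:]' '[:lower:]' |         # normalize case
  sort |                               # bring identical words together
  uniq -c |                            # count each run of identical lines
  sort -rn |                           # order by count, descending
  head -10                             # keep the ten most frequent
```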

For example, make provides a relatively easy way to make use of a topological sort, without having to understand the implementation details or even what a topological sort is.
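A minimal sketch of that point (file names hypothetical; recipe lines must begin with a literal tab): make works out that the object files have to be built before the program, which is to say it topologically sorts the dependency graph for you.

```
cat > Makefile <<'EOF'
app: main.o lib.o
	cc -o app main.o lib.o
main.o: main.c
	cc -c main.c
lib.o: lib.c
	cc -c lib.c
EOF
make    # builds main.o and lib.o before linking app, in dependency order
```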

The sort command allows you to sort files substantially larger than RAM, handling all issues of breaking files into chunks that can be sorted within RAM, merging those chunks into larger chunks, storing intermediate results on disc, etc.
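For instance (the -S buffer-size flag is GNU-specific, and the file name is hypothetical), you can cap the in-memory buffer and point the temporary spill files at a directory of your choosing, and sort quietly does the run generation and merging for you:

```
# Sort a file much larger than RAM; sort spills sorted runs to temporary
# files in the given directory and merges them behind the scenes.
sort -S 64M -T /tmp huge.log > huge.sorted
```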

The diff command provides a simple interface for finding the longest common subsequence between two sequences of lines.
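The interface really is just this (file names hypothetical):

```
# Unified diff: the lines diff does NOT mark with +/- are, roughly, the
# longest common subsequence it found between the two files.
diff -u before.conf after.conf
```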

lex and yacc pack a lot of powerful computer science into a relatively straightforward interface for specifying tokens and grammars.

join, comm, awk, dc, bc, and many other tools that were already available in v7 Unix also present simple interfaces for making use of powerful code.
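As a rough illustration (assumed file layouts: people.txt as "id name" and salaries.txt as "id salary", both hypothetical), join plus awk gives you a small relational query without a database:

```
sort -k1,1 people.txt   > people.sorted     # join needs its inputs sorted on the key
sort -k1,1 salaries.txt > salaries.sorted
join people.sorted salaries.sorted |        # joined lines: id name salary
  awk '{ total += $3 } END { print "total payroll:", total }'
```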

Speaking as a huge Smalltalk and Lisp fan, as well as a huge Unix fan, I think the question of how Unix won extends far beyond Gabriel's analysis, though I appreciate the factors that he identified.

Konrad Hinsen 2022-10-31 06:22:06

Interesting episode, once more! I had read these papers many years ago, with mixed feelings about the relevance of the "worse is better" idea. Your discussion framing it as "where does the complexity go" is illuminating here. But I agree with @Personal Dynamic Media that it's not so much "developer vs. user" as "where on the many layers of a real-life software system does the complexity go?" Unix at the shell programming level is indeed free from the low-level considerations that Gabriel mentions. Which explains why I found the topic only moderately relevant, since my own focus as a power user (rather than software developer) is on levels clearly above the Linux kernel APIs. For me, Lisp machine vs. Unix is about Lisp vs. shell as the layer that defines the coherence and accessibility of system features. With Lisp clearly "winning" here, but at the cost of much higher resource usage.

William Taysom 2022-10-31 07:00:57

On Ruby, Matz put it this way, "Actually, I'm trying to make Ruby natural, not simple."

Concrete example. The keyword alias is for giving multiple names to the same method so that you can call the synonym that feels the most natural. Examples from the often-used Enumerable module:

  • include? and member?
  • to_a and entries
  • detect and find
  • select and filter and find_all
  • map and collect
  • flat_map and collect_concat
  • reduce and inject

The -ect names (select, reject, detect, inject) come from Smalltalk.

Ivan Reese 2022-10-31 14:06:21

@William Taysom I'm not sure what you're responding to. Something from a previous comment? Something from the episode? (Perhaps Jimmy's mention that Ruby is difficult to parse?)

William Taysom 2022-11-01 02:15:20

Yes, Jimmy's comment from the episode that Ruby is complex in implementation and complex in API. Yet somehow using it often feels good. How? It comes from the complexity being in service of a kind of naturalness. The syntax, for example: if enough interested people think something should work, eventually it does.

Now does anyone actually know the syntax of the language? Not me! And I've been writing this language for twenty years. I was today years old when I learned you can use :: for method calls, as in Object.new::is_a?(Object).