An interesting quote from an HN thread about Xanadu: "The brilliance of TBL is the W3 is half assed in just the right ways to make it not yet another unused visual language or mind map format." What's a term for "half assed in just the right ways"? Is there really such a thing?
I think that’s the wrong sentiment, in the same way that many misunderstand the real lesson of “worse is better.” Things that appear ideal on some axes are often poor on others, and depending on the context, perfect on axis A and bad on axis B can be much worse than just OK on both.
Being “half-assed” compared to the perfect solution often allows for more freedom of movement on axes that may be far more important to the audience.
The word “tractable” comes to mind, though I’m not sure if it’s appropriate.
See also JSON eventually winning out over all the XMLs for day-to-day use.
A thought: the original web browser for NeXT had an editor too; other ports skipped the editor and shipped the browser only. The fact that they could do that, plus the fact that HTML was so "half-assed"/simple, made it easier to port the browser to more platforms, which in turn made it easier to adopt, use, and contribute to, feeding its momentum.
The downside is that it cemented the web as a consumption medium for most 😕
The NeXT version had the editor because NeXT made it easy to add one; other platforms didn't, so nobody made the effort, and a read-only browser was "good enough". We may push the blame one level down: if other platforms had made it easier to embed an editor, then the browser ports would have had one.
Careful what you make hard/impossible to do, it may bite you one layer above 😛
Bidirectional links require coordination and more storage, and they open the gate to spam and DoS attacks. It's not that it wasn't attempted on the web: https://html.spec.whatwg.org/#ping
It's kind of ironic that searching for the original backlink concept on blogs turns up page after page of SEO "hacks" and can't find the original non-spammy content 🙂
In an unrestricted network, bidirectional links are indeed problematic. In a bounded network, such as a Wiki, they are great. Something I'd like to see explored is the space in between. For example, a Wiki federation with shared bidirectional links managed as a commons. Just imagine such a federation around Wikipedia, with the bar to entry set very high.
Google indexes all the links too, so you can search for incoming links, e.g. https://www.google.com/search?q=link%3Afutureofcoding.org+-site%3Afutureofcoding.org
As @Shalabh suggests, we could say that Google (PageRank and its million extensions in particular) is in the business of recognizing bidirectional ham.
First thing that came to mind was the notion of a minimum viable product, which I'm not sure Xanadu ever was.
@Shalabh I don’t want to see all links pointing to Wikipedia. Only links from sites that Wikipedia considers worthy of it. Wikibooks would be a good candidate.
In other words: coarse-grained social networks. Not between people, but between communities.
being “half-assed” compared to the perfect solution often allows for more freedom of movement on axes that may be far more important to the audience
There’s the notion of designing something only as far as you need to, which I’ve been exploring as one of the main themes in Christopher Alexander’s work. Ryan Singer calls it “design latitude”. It’s what pattern languages really are about: describing a design only as far as you need to in that context, leaving all the lower-level implementation details as open as possible.
In software we nowadays default to spelling everything out in as much detail as possible. Partly because we have to: that CPU isn’t going to do anything until you present it with a proper stream of instructions, so you’re required to fill in all the blanks somehow, even if you haven’t figured them out in the design yet. Or (what a concept) even if you’d prefer not to fill those details in, but leave it to others downstream to do so.
Being able to distinguish the decisions that you need to make now from the ones you want to leave open is something our tools today are really bad at helping us with. They usually push us towards deciding everything, even if we don’t want to.
I totally agree, design should be done in the form of specifications (which can be incomplete), not implementation (which has to be executable). Better yet, aim for composable specifications. That has worked out very well in mathematical descriptions (see https://blog.khinsen.net/posts/2020/12/10/the-structure-and-interpretation-of-scientific-models/).
Konrad Hinsen How does incompleteness work with specifications? I thought they’re only incomplete in the sense that they are a model and not reality, so they might just completely miss certain aspects (or deliberately leave them out), but they still need to be fully coherent and precise within themselves.
I don’t know enough about that to judge whether I perhaps mean a different kind of incomplete. Alexander is pretty clever in pattern languages, where he uses ambiguity of language to choose words that create the right picture in our mind’s eye, but such that we (the “user”) fill in the blanks and not him (the designer). It feels like there’s a (subtle?) difference there in that an incomplete specification misses something completely (or chooses to leave it out), whereas a pattern language very deliberately describes something, but in a way that is intentionally ambiguous. I’m having a hard time squaring precision of specifications with ambiguity.
Take mathematical equations, which are specifications for their solutions. More specifically, equations are constraints on the solutions. You can compose as many such constraints as you want. At worst, you overconstrain the solution to the point that there is no solution any more. Which means that your specifications/constraints are incoherent. But that is detectable and thus avoidable.
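The composition-and-overconstraint point can be made concrete. A minimal sketch in Python (my own illustration, not from the discussion): treat each constraint as a predicate, a "solution" as anything satisfying all of them, and watch the solution set vanish when an incompatible constraint is composed in.

```python
# Specifications as composable constraints: each constraint is a predicate,
# and a "solution" is any candidate that satisfies every one of them.
from itertools import product

def solutions(constraints, domain):
    """All (x, y) pairs over the domain satisfying every constraint."""
    return [(x, y) for x, y in product(domain, repeat=2)
            if all(c(x, y) for c in constraints)]

dom = range(10)
constraints = [lambda x, y: x + y == 2,   # like the equation x + y = 2
               lambda x, y: x - y == 0]   # like the equation x - y = 0
print(solutions(constraints, dom))        # [(1, 1)]

# Composing one more, contradictory constraint overconstrains the spec:
# the solution set becomes empty, which is detectable (and thus avoidable).
constraints.append(lambda x, y: x == 5)
print(solutions(constraints, dom))        # []
```

Brute-force search stands in here for a real solver; the point is only that constraints compose freely and incoherence shows up as an empty solution set.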
Ah yes, that makes sense. I was already thinking in the direction of type systems, which allow you to express such arbitrary constraints on values, making them just as specific as you need/want them.
Yes, that's a good start. Next would be constraints on relations between values. Both same-time (e.g. two arguments to a function) and different-time (e.g. input and output of a function).
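Those two kinds of relational constraints can be sketched as runtime contracts (all names here are illustrative, not an established API): a precondition relating two arguments ("same-time") and a postcondition relating input to output ("different-time").

```python
# Sketch: constraints on relations between values.
# "Same-time": a relation between two arguments.
# "Different-time": a relation between a function's input and its output.
def with_contract(pre, post):
    def wrap(f):
        def inner(a, b):
            assert pre(a, b), "same-time constraint violated"
            out = f(a, b)
            assert post(a, b, out), "different-time constraint violated"
            return out
        return inner
    return wrap

@with_contract(pre=lambda a, b: b != 0,              # relation between args
               post=lambda a, b, out: out * b == a)  # relation across time
def divide(a, b):
    return a // b  # floor division: wrong for non-divisible inputs,
                   # and the postcondition catches exactly that case

print(divide(8, 2))  # 4: both constraints hold
# divide(7, 2) would trip the postcondition, since 3 * 2 != 7.
```

Checking contracts at runtime is of course weaker than discharging them statically, but it shows the shape of the constraints Konrad describes.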
Konrad Hinsen I agree we want curated bidi links. I didn't mean to say "we can do bidi links already" but rather "bidi links can be extracted, with some effort, from unidirectional links", and perhaps this idea can be used to build curated lists.
One big shortcoming of the web is that there is no link-content stability. The link represents a way to get the content, not the content itself. A unique link should always give you the "same thing". In some cases this might mean the latest version of that thing, but prior versions should be linked to the latest one as well. OTOH, the original author of some content should not be required to fund availability of their stable content on some server in perpetuity.
Perhaps we want a system where authors publish stable content links but availability is provided by other organizations. For large globally relevant content, large publicly funded organizations could fund availability (~wikipedia). However smaller communities could form their own organizations and fund availability of content relevant to and curated by them. As things become more relevant, content would get pinned into the zones of more and more orgs, small and large. As they become less relevant, many orgs might stop persisting the content, but archival orgs like newspapers and http://archive.org might take them on. I believe IPFS (and maybe DAT?) or something similar can be a foundation of what I am describing.
This is only part of the problem though. In IPFS for instance, a link will give me a blob of bytes but making sense of it is still left to me. What's the guarantee I'll be able to assemble the perfect combination of programs that extract meaning from that blob? In 5 years? In 50 years? How can we enrich the system to do this easily and reliably?
A big problem of the Web is indeed that it has no notion of "lifetime". There is no way to ensure persistence, nor erasure of information.
Content-addressing as in IPFS (not DAT, which is based on UUIDs for sharing mutable data) is a very useful ingredient for better information management. Preserving the semantics of data is a much harder problem, one that people (including myself) are actively working on in the context of preserving digital scientific knowledge. 5 years is doable today. If you aim for 50 years, there are techniques that make it possible under reasonable assumptions, such as the continued existence of virtual x86 machines.
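The link-content stability Shalabh asks for is exactly what content addressing provides: the link is a hash of the bytes, so it can only ever resolve to the content it was minted for. A minimal sketch (a toy in-memory store, not the actual IPFS API):

```python
# Minimal sketch of content addressing: the "link" is the SHA-256 hash
# of the bytes, so a link can never silently point at different content.
import hashlib

store = {}  # availability can be provided by anyone holding the bytes

def publish(content: bytes) -> str:
    """Mint a stable link for some content and make it available."""
    link = hashlib.sha256(content).hexdigest()
    store[link] = content
    return link

def fetch(link: str) -> bytes:
    """Resolve a link; any mirror can verify it serves the right bytes."""
    content = store[link]
    assert hashlib.sha256(content).hexdigest() == link
    return content

link = publish(b"stable content")
assert fetch(link) == b"stable content"
```

Because verification needs only the link and the bytes, the orgs Shalabh describes can pin, drop, or re-host content without any trust in each other, which is the property IPFS builds on.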
Konrad, on the topic of long-term preservation of semantics: I was thinking, what if we could store the mapping of bytes to other structures in the long-term stabilized storage as well? Maybe that could work.
For the short term, something like MIME type tagging can work, but a MIME type is just a string tag. If a blob is annotated with image/png, we're likely to find decoders easily. However, it gets harder for text/some-custom-format. So instead of a string tag, what if we tag a file A with a link to another permanent file B, which we call the class of file A? The B file would describe how to parse the content of A. But how do we know how to parse B? Either it is well known, or it links to another class file C, perhaps. At some point we have to agree on a small language for the axiomatic description B* that all classes eventually link up to.
The main thing is that these descriptions would have to be machine-agnostic (no x86-specific stuff). However, as an optimization, the class descriptions could also link to x86 implementations of the parsers (stored in other files). So if you're running on architecture X, you could find the class for the content (A -> B) and then look up implementations for X (B --(X)--> F). Parsers for future architectures could be added later, and content files could be re-encoded into instances of newer classes if needed.
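The class-link scheme described above can be sketched in a few lines (file names, the store, and the B* sentinel are all hypothetical stand-ins): every file links to its class, and resolution follows the chain until it bottoms out at the agreed axiomatic description.

```python
# Sketch of the class-link chain: every file carries a link to its "class"
# file, which describes how to parse it; chains bottom out at the agreed
# axiomatic description B*. All identifiers here are illustrative.
AXIOM = "B*"

files = {
    "A": {"class": "B", "content": b"...custom bytes..."},
    "B": {"class": "C", "content": b"...how to parse A..."},
    "C": {"class": AXIOM, "content": b"...how to parse B..."},
}

# Optimization: per-architecture parser implementations, e.g. B --(x86)--> F.
parsers = {("B", "x86"): "F"}

def class_chain(file_id):
    """Follow class links until the axiomatic description is reached."""
    chain = []
    cls = files[file_id]["class"]
    while cls != AXIOM:
        chain.append(cls)
        cls = files[cls]["class"]
    chain.append(AXIOM)
    return chain

print(class_chain("A"))  # ['B', 'C', 'B*']
```

In a content-addressed store the dict keys would be hashes, so the chain itself would be as stable as the content it describes.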
I think the key issue is "agree on". We know enough by now to build semantic stacks with a minimalist basis that is easy to document for future generations. But there are many ways to do it, and many ideological but not very fundamental arguments to distinguish between them. Humans just like bikeshedding too much to make substantial progress.