
Breck Yunits 2021-05-19 20:10:59

An interesting quote from an HN thread about Xanadu "The brilliance of TBL is the W3 is half assed in just the right ways to make it not yet another unused visual language or mind map format." What's a term for "half assed in just the right ways"? Is there really such a thing?

Chris Granger 2021-05-19 20:20:46

I think that’s the wrong sentiment in the same way that many misunderstand the real lesson of “worse is better.” Things that appear ideal on some axes are often poor on others and depending on the context, perfect on axis A and bad on axis B is much worse than just ok in both.

Chris Granger 2021-05-19 20:21:53

being “half-assed” compared to the perfect solution often allows for more freedom of movement on axes that may be far more important to the audience

Justin Blank 2021-05-19 21:22:01

The word “tractable” comes to mind, though I’m not sure if it’s appropriate.

William Taysom 2021-05-20 01:14:14

See also JSON eventually winning out over all the XMLs for day-to-day use.

Mariano Guerra 2021-05-20 10:33:10

A thought: the original web browser for NeXT included an editor; other ports skipped the editor and shipped only the browser. The fact that they could do that, plus the fact that HTML was so "half-assed"/simple, made the browser easier to port to more platforms, which made it easier to adopt, use, and contribute to, feeding its momentum.

The downside is that it cemented the web as a consumption medium for most 😕

The NeXT version had the editor because NeXT made it easy to add one; other platforms didn't, so nobody made the effort, and a browser alone was "good enough". We could push the blame one level down: if other platforms had made it easier to embed an editor, the browser ports would have had one.

Careful what you make hard/impossible to do, it may bite you one layer above 😛

Mariano Guerra 2021-05-20 10:42:01

Bidirectional links require coordination and more storage, and they open the gate to spam and DoS attacks. It's not that it wasn't attempted on the web: https://html.spec.whatwg.org/#ping

It's kind of ironic that searching for the original backlink concept on blogs turns up pages full of SEO "hacks" and can't find the original non-spammy content 🙂

Konrad Hinsen 2021-05-20 12:31:59

In an unrestricted network, bidirectional links are indeed problematic. In a bounded network, such as a Wiki, they are great. Something I'd like to see explored is the space in between. For example, a Wiki federation with shared bidirectional links managed as a commons. Just imagine such a federation around Wikipedia, with the bar to entry set very high.

Shalabh 2021-05-20 18:50:02

Google indexes all the links too so you can search for incoming links eg. https://www.google.com/search?q=link%3Afutureofcoding.org+-site%3Afutureofcoding.org

William Taysom 2021-05-21 01:05:33

As @Shalabh suggests, we could say that Google (PageRank and its million extensions in particular) is in the business of recognizing bidirectional ham.

Chris Rabl 2021-05-21 02:28:42

First thing that came to mind was the notion of a minimum viable product, which I'm not sure Xanadu ever was.

Konrad Hinsen 2021-05-21 05:02:39

@Shalabh I don’t want to see all links pointing to Wikipedia. Only links from sites that Wikipedia considers worthy of it. Wikibooks would be a good candidate.

Konrad Hinsen 2021-05-21 06:50:51

In other words: coarse-grained social networks. Not between people, but between communities.

Stefan Lesser 2021-05-21 10:47:35

being “half-assed” compared to the perfect solution often allows for more freedom of movement on axes that may be far more important to the audience

There’s the notion of designing something only as far as you need to, which I’ve been exploring as one of the main themes in Christopher Alexander’s work. Ryan Singer calls it “design latitude”. It’s what pattern languages really are about: describing a design only as far as you need to in that context, leaving all the lower-level implementation details as open as possible.

In software we nowadays default to spelling everything out in as much detail as possible. Partly because we have to: that CPU isn't going to do anything until you present it with a proper stream of instructions, so you're required to fill in all the blanks somehow, even if you haven't figured them out in the design yet. Or if, what a concept, you'd prefer not to fill those details in but to leave them to others downstream.

Being able to distinguish the decisions you need to make now from the ones you want to leave open is something our tools today are really bad at helping us with. They usually push us toward deciding everything, even when we don't want to.

Konrad Hinsen 2021-05-21 11:20:00

I totally agree, design should be done in the form of specifications (which can be incomplete), not implementation (which has to be executable). Better yet, aim for composable specifications. That has worked out very well in mathematical descriptions (see https://blog.khinsen.net/posts/2020/12/10/the-structure-and-interpretation-of-scientific-models/).

Stefan Lesser 2021-05-21 11:33:51

Konrad Hinsen How does incompleteness work with specifications? I thought they’re only incomplete in the sense that they are a model and not reality, so they might just completely miss certain aspects (or deliberately leave them out), but they still need to be fully coherent and precise within themselves.

I don’t know enough about that to judge whether I perhaps mean a different kind of incomplete. Alexander is pretty clever in pattern languages, where he uses ambiguity of language to choose words that create the right picture in our mind’s eye, but such that we (the “user”) fill in the blanks and not him (the designer). It feels like there’s a (subtle?) difference there in that an incomplete specification misses something completely (or chooses to leave it out), whereas a pattern language very deliberately describes something, but in a way that is intentionally ambiguous. I’m having a hard time squaring precision of specifications with ambiguity.

Konrad Hinsen 2021-05-21 13:25:13

Take mathematical equations, which are specifications for their solutions. More specifically, equations are constraints on the solutions. You can compose as many such constraints as you want. At worst, you overconstrain the solution to the point that there is no solution any more. Which means that your specifications/constraints are incoherent. But that is detectable and thus avoidable.

Stefan Lesser 2021-05-21 13:53:43

Ah yes, that makes sense. I was already thinking in the direction of type systems, which allow you to express such arbitrary constraints on values, making them just as specific as you need/want them.

Konrad Hinsen 2021-05-21 16:28:07

Yes, that's a good start. Next would be constraints on relations between values. Both same-time (e.g. two arguments to a function) and different-time (e.g. input and output of a function).
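Both kinds of constraint Konrad mentions can be sketched as a contract decorator in Python (the names `contracted`, `pre`, and `post` are invented for illustration): `pre` is a same-time constraint relating the arguments to each other, `post` a different-time constraint relating inputs to output.

```python
# A toy design-by-contract sketch: pre relates the arguments at call time,
# post relates the inputs to the eventual output.

def contracted(pre, post):
    def wrap(f):
        def checked(*args):
            assert pre(*args), "same-time constraint violated"
            result = f(*args)
            assert post(*args, result), "different-time constraint violated"
            return result
        return checked
    return wrap

@contracted(pre=lambda a, b: b != 0,                     # between the two arguments
            post=lambda a, b, q: abs(q * b - a) < 1e-9)  # between input and output
def divide(a, b):
    return a / b

print(divide(6, 3))  # 2.0
```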

Shalabh 2021-05-21 16:57:13

Konrad Hinsen I agree we want curated bidi links. I didn't mean to say "we can do bidi links already" but rather "bidi links can be extracted, with some effort, from unidirectional links", and perhaps this idea can be used to build curated lists.

One big shortcoming of the web is that there is no link-content stability: the link represents a way to get the content, not the content itself. A unique link should always give you the "same thing". In some cases that might mean the latest version of the thing, but prior versions should link to the latest one as well. OTOH, the original author of some content shouldn't be required to fund availability of their stable content on some server in perpetuity.

Perhaps we want a system where authors publish stable content links but availability is provided by other organizations. For large globally relevant content, large publicly funded organizations could fund availability (~wikipedia). However smaller communities could form their own organizations and fund availability of content relevant to and curated by them. As things become more relevant, content would get pinned into the zones of more and more orgs, small and large. As they become less relevant, many orgs might stop persisting the content, but archival orgs like newspapers and http://archive.org might take them on. I believe IPFS (and maybe DAT?) or something similar can be a foundation of what I am describing.

This is only part of the problem though. In IPFS for instance, a link will give me a blob of bytes but making sense of it is still left to me. What's the guarantee I'll be able to assemble the perfect combination of programs that extract meaning from that blob? In 5 years? In 50 years? How can we enrich the system to do this easily and reliably?
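The content-addressing property Shalabh is leaning on can be sketched in a few lines of Python using a dict as a stand-in for a network of pinning organizations. The `publish`/`fetch` names are invented; IPFS itself uses multihash-based CIDs rather than raw SHA-256 hex digests.

```python
import hashlib

# Minimal content-addressing sketch: the link IS a hash of the bytes, so a
# link can only ever resolve to the exact content it was created for, no
# matter which organization happens to be hosting it.

store = {}  # stand-in for the set of orgs pinning content

def publish(content: bytes) -> str:
    """Store content under its own hash and return the stable link."""
    link = hashlib.sha256(content).hexdigest()
    store[link] = content
    return link

def fetch(link: str) -> bytes:
    content = store[link]
    # Any fetcher can verify integrity without trusting the host.
    assert hashlib.sha256(content).hexdigest() == link
    return content

link = publish(b"some stable content")
assert fetch(link) == b"some stable content"
```

Availability and meaning remain separate problems, as the rest of the message notes: the hash guarantees you got the right bytes, not that anyone still hosts them or that you can interpret them.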

Konrad Hinsen 2021-05-22 04:47:42

A big problem of the Web is indeed that it has no notion of "lifetime". There is no way to ensure persistence, nor erasure of information.

Content-addressing as in IPFS (not DAT, which is based on UUIDs for sharing mutable data) is a very useful ingredient for better information management. Preserving the semantics of data is a much harder problem, one that people (including myself) are actively working on in the context of preserving digital scientific knowledge. 5 years is doable today. If you aim for 50 years, there are techniques that make it possible under reasonable assumptions, such as the continued existence of virtual x86 machines.

Shalabh 2021-05-22 17:10:21

Konrad, on the topic of long-term preservation of semantics: what if we stored the mapping from bytes to other structures in the long-term stabilized storage as well? Maybe that could work?

For the short term, something like MIME type tagging can work, but a MIME type is just a string tag. If a blob is annotated with image/png we're likely to find decoders easily; it gets harder for text/some-custom-format. So instead of a string tag, what if we tag a file A with a link to another permanent file B, which we call the class of A? The B file would describe how to parse the content of A. But how do we know how to parse B? Either it is well known, or it links to another class file C, perhaps. At some point we have to agree on a small language for the axiomatic description B* that all classes eventually link up to. The main constraint is that these descriptions would have to be machine-agnostic (no x86-specific stuff). As an optimization, though, the class descriptions could also link to x86 implementations (stored in other files) of the parsers. So if you're running on architecture X, you could find the class for the content (A -> B) and then look up implementations for X (B --(X)--> F). Parsers for future architectures could be added later, and content files can be re-encoded into instances of newer classes if needed.
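The class-chain idea can be sketched as a tiny resolver (everything here, including the `File` record and the `class_chain` helper, is invented to illustrate the scheme, not an existing system):

```python
from dataclasses import dataclass
from typing import Optional

# Each stored file links to a "class" file describing how to parse it;
# class files chain up to a well-known axiomatic root description B*.

@dataclass
class File:
    content: bytes
    class_link: Optional[str]  # link to this file's class; None for the root

store = {
    "B*": File(b"axiomatic description language", None),
    "C":  File(b"generic record format, described in B*", "B*"),
    "B":  File(b"text/some-custom-format, described in C", "C"),
    "A":  File(b"actual document bytes", "B"),
}

def class_chain(link: str) -> list:
    """Follow class links until the axiomatic root is reached."""
    chain = []
    while link is not None:
        chain.append(link)
        link = store[link].class_link
    return chain

print(class_chain("A"))  # ['A', 'B', 'C', 'B*']
```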

Konrad Hinsen 2021-05-23 07:27:08

I think the key issue is "agree on". We know enough by now to build semantic stacks with a minimalist basis that is easy to document for future generations. But there are many ways to do it, and many ideological but not very fundamental arguments to distinguish between them. Humans just like bikeshedding too much to make substantial progress.

Breck Yunits 2021-05-21 20:29:21

Another good quote from a few months ago on HN (https://news.ycombinator.com/item?id=26034000):

Speed has a moral dimension: to be fast is to be in tune with the facts of the world as it truly is, as the Atman has provided, without illusion.

Speed has a social dimension: to make the user wait unnecessarily is to express disrespect, even contempt.

Speed has an architectural dimension: to be fast, the operations have to match the parts of the system and their relationships.

Speed has a spiritual dimension: to achieve speed demands that you humble yourself before the structures of the machine as it truly is, not some comfortable abstraction.

Daniel Garcia 2021-05-21 21:27:11

Lately, I have been thinking about how to do some programming without screens.

I came up with a representation of nodes & wires with LEGO.

If each function and constant has its own color, I can represent the average function like this.

  • Does anyone have experience with OpenCV or computer vision, so that I can create code from the above images?

  • Any thoughts on how to make use of 3D? (There's a slight use of 3D in an image in the thread, but I think the third dimension is underused.)

Daniel Garcia 2021-05-21 21:31:39

Trying something a bit more complex, like an XOR function, I run into the problem of wires crossing. I think wires could be made explicit with colors.

📷 xor.jpg

📷 xor-2.jpg

Breck Yunits 2021-05-21 21:52:55

There are some very active LEGO subreddits (https://www.reddit.com/r/lego/)

Breck Yunits 2021-05-21 21:53:29

One idea is to start with LDraw or similar (https://www.ldraw.org/)

Breck Yunits 2021-05-21 21:53:53

And work on nailing your 3D language virtually

Breck Yunits 2021-05-21 21:54:28

There are likely CV packages out there for converting a photo of LEGO into an LDraw file

Vijay Chakravarthy 2021-05-22 02:32:52

If you are using color-coded bricks you can likely just do it with image thresholding and maybe a Hough transform for initial alignment. OpenCV would be overkill in this case.
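The thresholding step Vijay suggests can be sketched in plain NumPy, no CV library needed: with one brick color per function, a per-pixel color threshold is enough to segment bricks of a known color. The target colors and tolerance below are made up for the example.

```python
import numpy as np

def mask_for_color(image: np.ndarray, target: np.ndarray, tol: int = 40) -> np.ndarray:
    """Boolean mask of pixels whose RGB values are all within `tol` of target."""
    return np.abs(image.astype(int) - target.astype(int)).max(axis=-1) <= tol

# Tiny synthetic "photo": a 2x2 red brick on a white background.
img = np.full((4, 4, 3), 255, dtype=np.uint8)
img[1:3, 1:3] = (200, 30, 30)  # the brick

red = np.array([200, 30, 30])
print(mask_for_color(img, red).sum())  # 4 brick pixels found
```

In a real photo you'd want to threshold in HSV rather than RGB to be robust to lighting, and then take connected components of each mask to find individual bricks.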

Vijay Chakravarthy 2021-05-22 02:34:37

On a related note, I assume you have looked at the dynamicland stuff in some detail?