The plot thickens: Why OpenDoc failed, and then failed 3 more times?
A summary of reasons found around the web, comparing them with other implementations of the same concept:
- ActiveX
- KParts
- Bonobo
- and a short mention of Web Components
https://instadeq.com/blog/posts/why-opendoc-failed-and-then-failed-3-more-times/
LinkBack is an open source framework for Mac OS X that helps developers integrate content from other applications into their own. A user can paste content from any LinkBack-enabled application into another and reopen that content later for editing with just a double-click. Changes will automatically appear in the original document again when you save.
Not exactly the same, but it made me think of Ray Ozzie’s Live Clipboard ideas (a sort of shared schema for web copy/paste). Perhaps you’ll find it interesting: https://scripting.wordpress.com/2006/03/07/ray-ozzies-clipboard-for-the-web/
https://channel9.msdn.com/Blogs/Charles/Live-Clipboard-What-How-Why -> the mid-quality WMV still works.
Aside from the design of the site, here's a clue that this is quite an old project (sadly). At the bottom of http://linkbackproject.org/about/:
> Last Revised: 28 Jun 2004
I say sadly because that Nelsonian dream of transclusion is still a dang dream. Here's hoping the bidirectional link hype bubble inches us a little closer.
Philosophy of Programming, Simulating the Commodore 64, and More with Tomas Petricek
> I posit that a truly comprehensible programming environment - one forever and by design devoid of dark corners and mysterious, voodoo-encouraging subtle malfunctions - must obey this rule: the programmer is expected to inhabit the bedrock abstraction level. And thus, the latter must be habitable (http://akkartik.name/post/habitability).
http://www.loper-os.org/?p=55 (inline link mine)
I predict that if you actually try to build a machine with an instruction set isomorphic to a high level language, it will never reach the reliability we demand from hardware. Most likely, it will be implemented in something like microcode and we'll immediately land right back where we started. I'd rather see a focus on formal verification, probably of higher-level virtual machines built on simple and therefore easy-to-model hardware.
I think the reference to "atomic operations" is quite deep. A layer of abstraction that provides truly atomic operations is indistinguishable from a bedrock layer to anything built on it. A lot of my thinking on how to layer things is built on this idea...
I'm curious what reliability you are referring to. A language isomorphic to the instruction set can be made reliable in the sense of (1) consistent execution and (2) predictable execution, every bit as much as we count on the hardware in those two ways. It probably won't look exactly like what we think of as a high level language today. In fact, it may be quite different. You can only allow certain "clearly understandable" abstractions and maintain the isomorphism. I think this is what Kartik Agaram is attempting to explore with Mu, and why there is so much focus on the hardware, on the general 1:1 relationship between language statements and their translation into hardware instructions, and on other design constraints.
My question is: can such an isomorphic language be made in such a way that it is high level enough and useful enough to be commonly used within a well understood domain?
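As a rough illustration of that 1:1 idea (a hypothetical toy translator; the statement forms and target "instructions" below are made up and are not Mu's actual syntax or any real ISA), translation is a table lookup, so the generated code can be read off the source line by line:

```c
/* Hypothetical sketch of a 1:1 statement-to-instruction translator.
 * The statement forms and the target "instructions" are invented for
 * illustration; this is not Mu's syntax or a real ISA. The point is that
 * translation is a table lookup: no statement ever expands into a hidden
 * sequence of instructions. */
#include <stdio.h>
#include <string.h>

struct rule { const char *stmt; const char *insn; };

static const struct rule table[] = {
    { "x <- copy y", "mov x, y" },
    { "x <- add y",  "add x, y" },
    { "x <- sub y",  "sub x, y" },
    { "x <- shl y",  "shl x, y" },
};

static const char *translate(const char *stmt) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(stmt, table[i].stmt) == 0)
            return table[i].insn;
    return "<unknown statement>";
}

int main(void) {
    const char *program[] = { "x <- copy y", "x <- add y", "x <- shl y" };
    for (size_t i = 0; i < sizeof program / sizeof program[0]; i++)
        printf("%-14s => %s\n", program[i], translate(program[i]));
    return 0;
}
```

The constraint is that nothing in the table ever expands a statement into hidden work; whether a language can obey that constraint and still be pleasant to use at scale is exactly the open question above.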
> A bedrock abstraction level is found in every man-made system. No recoverable failure, no matter how catastrophic, will ever demand intelligent intervention below it.
My experience with programming early 8-bit microprocessors is that, when programming in assembly language, you did indeed have access to a bedrock abstraction level, as defined above.
There is no accessible bedrock abstraction level in modern computers. Machine code programming on a modern Intel-based motherboard happens at an abstraction level far above the bedrock, and below you are many dark corners and mysterious, voodoo-encouraging subtle malfunctions. The UEFI is stealing cycles from the OS to do who knows what, the firmware for the microcode and for the mysterious Intel Management Engine is encrypted, and security flaws like Spectre and Meltdown require intelligent intervention at a level that is inaccessible to the owner of the computer. I think the author agrees with this.
I don't agree that the invention of compilers in the 1950s was a mistake. Twenty years later, in the 1970s, CPU instruction set architectures were still being designed with the needs of assembly language programmers in mind. A key design goal was "orthogonality" [https://en.wikipedia.org/wiki/Orthogonal_instruction_set](https://en.wikipedia.org/wiki/Orthogonal_instruction_set). The existence of compilers didn't prevent architectures like the PDP-11 from being designed. I think the author agrees, since they mention RISC as the beginning of "braindead architectures".
But RISC wasn't primarily about compilers, it was primarily about making CPUs faster and more efficient, and prioritizing that goal above the goal of making the ISA comfortable for assembly programmers.
So here's my question. Suppose we start over, and build a new computer architecture from scratch. Is there not a fundamental tradeoff between making the new system as fast as an Apple M1, vs providing a bedrock abstraction level that is both accessible to the programmer, and habitable?
> Suppose we start over, and build a new computer architecture from scratch. Is there not a fundamental tradeoff between making the new system as fast as an Apple M1, vs providing a bedrock abstraction level that is both accessible to the programmer, and habitable?
Probably. For me the inescapable implication is: think about habitability (and safety), and don't focus on performance to the exclusion of all else.
I don't understand why people get so excited about performance and forget Wirth's Law:
> Software is getting slower more rapidly than hardware is becoming faster.
You think the M1 is fast? Just wait a couple of years!
A substrate that will run so fast that you don't have to think about what you run on it is the very definition of an externality. Exponential curves consume all slack. No matter how large the supply of buffalo is, it's finite. Thinking of a resource as infinite makes no sense. That way lies religion and the Singularity.
Has Apple said anything about how they've tried to mitigate side-channel attacks on hardware optimizations? If they've just focused on making everything faster like everyone else, they're likely open to similar attacks?
"Reality is that which, when you stop believing in it, doesn't go away." -- Philip K Dick
Wondering if the notion of "bedrock" still makes sense in a world where most computers are virtual to some degree. From the exchanges above, I'd conclude that the bedrock level is the first programmable level of abstraction just above non-programmable hardware. In some contexts (e.g. cybersecurity), that's relevant. For many others, it isn't.
I'd be perfectly happy to fully inhabit a higher level of abstraction, and leave the lower programmable levels to other species of inhabitants. I see the main problem with today's platforms in the unclear borderlines between levels and in the intentional obfuscation of lower levels.
Konrad Hinsen: I'm thinking about a version of my language that runs in WASM, using (some subset of) WASI to interface to the hardware and OS. In that context, WASM and WASI are the "bedrock" abstraction level, since you can't go any lower.
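A minimal sketch of what "WASI as bedrock" looks like from C (assuming a wasm32-wasi toolchain such as the wasi-sdk's clang, and a runtime like wasmtime): inside the module, the guest can only reach the outside world through the WASI imports the host chooses to expose.

```c
/* Minimal sketch of a program whose bedrock is WASM + WASI.
 * Compiled for wasm32-wasi (e.g. with the wasi-sdk's clang), stdio here
 * bottoms out in WASI imports such as fd_write; what those actually do is
 * decided entirely by the host runtime (wasmtime, wasmer, a browser shim,
 * ...), and the guest cannot reach below them. */
#include <stdio.h>

int main(void) {
    printf("hello from the WASM/WASI bedrock\n");
    return 0;
}
```

Built and run with something along the lines of `clang --target=wasm32-wasi -o hello.wasm hello.c && wasmtime hello.wasm` (exact flags depend on the toolchain).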
Stumbled into this open source personal CRM tool, Monica (https://github.com/monicahq/monica); a few people in the DevRel community I’m in mentioned it. I always enjoy seeing end-user applications open sourced (I’m often also equally confused, b/c most users won’t be able to self-host it 😕).
I’m very keen to figure out how to do open source that isn’t code / github oriented, for end-user applications
Charming little post on complexity and simplicity: https://macwright.com/2021/03/16/return-of-fancy-tools.html
> The friction of having to write, to structure thoughts in plain text, to remember the name of the person I need to reference on this page: that is the point. Frictionless note-taking produces notes, but it doesn’t - for me - produce memory.
There is some utility to having the computer figure out what you've referenced and build the interlinks for you, but it doesn't seem like the note tool people have much overlap with the NLP + semantics crowd...
> There’s no “directory listing” in my editor. I hit ctrl-p and fzf helps me find the file by name. This is obviously not the future of coding.
Funny, this is strikingly similar to the approach Unison takes (and even carries it out to a few more decimal places): an append-only collection of syntax trees identified by hashes, all stored inside of a single directory 😉
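For anyone who hasn't seen that model, here's a rough content-addressed, append-only store sketch (hypothetical directory layout and hash function; Unison's real codebase format and hashing are different): each definition lands in a file named by the hash of its contents, and identical definitions collapse to a single entry.

```c
/* Hedged sketch of an append-only, content-addressed definition store,
 * loosely in the spirit of the Unison remark above. The .store/ layout and
 * the FNV-1a hash are stand-ins, not Unison's actual format. */
#include <stdio.h>
#include <stdint.h>
#include <sys/stat.h>   /* mkdir (POSIX) */

/* FNV-1a 64-bit: a simple non-cryptographic hash, used only for illustration. */
static uint64_t fnv1a64(const char *s) {
    uint64_t h = 1469598103934665603ULL;
    for (; *s; s++) {
        h ^= (unsigned char)*s;
        h *= 1099511628211ULL;
    }
    return h;
}

/* Append a definition under .store/<hash>. Mode "wx" refuses to overwrite,
 * so the store only ever grows, and identical definitions are stored once. */
static void store_definition(const char *source) {
    char path[64];
    snprintf(path, sizeof path, ".store/%016llx",
             (unsigned long long)fnv1a64(source));
    FILE *f = fopen(path, "wx");
    if (f) { fputs(source, f); fclose(f); }
    printf("%s <= %s\n", path, source);
}

int main(void) {
    mkdir(".store", 0755);
    store_definition("square x = x * x");
    store_definition("inc x = x + 1");
    store_definition("square x = x * x");   /* same content, same hash, one file */
    return 0;
}
```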