I finally got my list of high-level goals in semi-presentable shape. I'm most interested in feedback on whether they make sense from a communication standpoint, but also on which parts sound feasible (there are a few, even ones not marked "stretch", that I suspect won't be possible). If nothing else, maybe they inspire you, contrast interestingly with your own goals, or whatever. I've just pulled out the very high-level philosophical goals from my list, plus some terminology that I use in unusual ways. The full doc has a lot of technical ideas, but it's still, um, raw. I'm not sure how the part I linked comes across, but the rest definitely reads more like an insane screed.
https://gist.github.com/andrewf/c6774d1bc8ad793b4b3e3172ba13f6b0 (Oh, about the title: "Fern" has been my codename for this project for a good ten years. If you stalk my GitHub you'll see some of my first-ever attempts. 🌿)
Some of these goals are partially in contradiction. It might be useful to identify such pairs as lines of tension.
Example: "good defaults" vs. "don't hide things from the user". Unless all defaults are always visible (e.g. pre-filled entry fields), using a default effectively hides the existence of a choice from the user.
That's a tricky UX problem for sure, but I don't think it's a contradiction. You should be able to inspect your defaults, even if you don't feel like changing them. Somewhere down the line there's probably a "see how this thing was generated" UI operation that you can use on generated content of all kinds, including defaults, materialized views, etc.
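To make that concrete, here's a minimal sketch of what an inspectable default might look like (all names here are invented for illustration, nothing from the actual doc):

```python
from dataclasses import dataclass

@dataclass
class Provenance:
    source: str       # e.g. "default", "user-set", "materialized view"
    explanation: str  # the "see how this was generated" text

@dataclass
class Setting:
    value: object
    provenance: Provenance

def default(value, explanation):
    """Wrap a built-in default so its origin stays inspectable."""
    return Setting(value, Provenance("default", explanation))

font_size = default(12, "app-wide default; no user preference recorded")

# A generic inspector UI can always ask where a value came from:
print(font_size.provenance.source)       # → default
print(font_size.provenance.explanation)
```

The point is just that the default carries its own explanation, so "inspect" is one generic operation rather than per-setting UI.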
Would it be better to say something like, the information should be possible to access by some reasonably short and predictable route (depending on how obscure the info is), not that it needs to be on display all the time?
That sounds better, but it only pushes the limits of the contradiction a bit further out. In the end it all depends on the number of default values. One or two are easy to handle. Ten are a UX challenge, but doable. When you reach a thousand, the sheer number makes it impossible to keep them all easily accessible. You need to sort them by relevance, which is already a judgment that developer and user may not agree on.
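A toy illustration of that scaling point (everything here is made up): once there are a thousand settings, access has to go through some search or ranking function, and the ranking itself encodes a judgment:

```python
# A thousand hypothetical defaults, keyed by dotted setting names.
defaults = {f"editor.option{i}.enabled": (i % 2 == 0) for i in range(1000)}

def find_defaults(query, settings, limit=10):
    # "Relevance" here is just substring match + alphabetical order --
    # already a choice the developer made on the user's behalf.
    hits = sorted(key for key in settings if query in key)
    return hits[:limit]

matches = find_defaults("option42", defaults)
```

Even this trivial ranking silently decides which of the eleven matching settings the user never sees.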
This is an essential feature of computing, which philosophers call "epistemic opacity". Once an information system has features that exceed the cognitive capacity of an individual, new rules apply.
Sure, at some point you're just reading the source code, and no clever UX can save you from how hard that is. But before too long you've zoomed in far enough that the defaults and specific preferences are just data with no particular relationship, and the defaults/visibility tension is no longer even well-defined. Honestly, any way of finding out what an app is thinking that's more accessible than hooking up a debugger is an improvement on the state of the art.
I have trouble commenting on these sorts of high-level goals. They all sound reasonable, but many of them are vague or extremely broad, and the measure of success is unclear. The document reads like a general guiding light for the human race, rather than the set of goals for any one project.
I would fare better at commenting on narrower, more concrete goals for a particular project, especially a language design project.
If you're looking to refine this brainstorm into a presentation, try to: (1) tighten up the list, (2) provide specific good/bad examples (this helps with vagueness), and (3) show connections between your bulleted items.
I imagined it as a sort of purely conceptual piece, but that's a classic blunder in math pedagogy, so maybe examples are in order here too.
The intended context is that these are the first section of a longer text document that gets progressively more concrete and detailed, going from these philosophical goals to (very much WIP) technical ones, with each layer referring back to previous ones for justification (the chain of reasoning is important, because some of them get pretty weird). That's where most of the meat is, and maybe I've been assuming that the later layers will serve as examples that clarify the very abstract parts. So I'm reluctant to make this part longer. Maybe one of those side-by-side formats would work, with examples and explanations off to the right?
With what I've given you, maybe it's enough that people aren't actively confused about what I'm even saying. Like I said up front, this is primarily an exercise in whether I can convey the ideas at all; I've historically had trouble putting them into words. I know it's a little weird using this Slack as basically a writing critique group, but that's where I am right now. :)
I almost posted, and still might, an even more abstract question about what even can serve as justification for, e.g., one set of data modeling primitives over another.
I'm not confused by the wording of the broad principles you've outlined, but I am confused about the justification behind them, whether you plan to act on any or all of them, and how 🤷‍♂️.
i'm surprised how many of the intuitions line up given there are practically no hints on implementation. 🙂 only thing that jumps out is the "platonic realm" - worth noting even that's encoded in physics (as are bits), and ultimately intent is a world-derived model. i wonder if you resonate with this writeup: https://alexanderobenauer.com/articles/os/1/
Yes, my concept of the end user UI is extremely similar to the one in your link. If that OS existed today in a way that was interoperable with the rest of the world I would have to consider shelling out serious money for it.
I'm realizing I have a hole in my definitions: "intent as written in bits" versus "platonic intent". The practical upshot is that I want to write my intents as if in terms of a non-physical platonic intent (and I don't want to get into the philosophical question of whether that's meaningful when all or most of our intents arise in the real world :D). Certainly, programs written that way (with exceptions for low-level code) should be independent of any particular encoding of the things they operate on.
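As a sketch of what that encoding-independence could look like in practice (a hypothetical toy, not anything from the doc): the program's intent is expressed against an abstraction, and each concrete encoding plugs in underneath it:

```python
from abc import ABC, abstractmethod

class TextDoc(ABC):
    """The 'platonic' text: programs talk to this, never to bytes."""
    @abstractmethod
    def paragraphs(self) -> list[str]: ...

class Utf8Doc(TextDoc):
    """One concrete encoding among many possible ones."""
    def __init__(self, raw: bytes):
        self.raw = raw
    def paragraphs(self) -> list[str]:
        return self.raw.decode("utf-8").split("\n\n")

def word_count(doc: TextDoc) -> int:
    # The intent ("count the words") never mentions any byte layout;
    # swapping Utf8Doc for some other encoding leaves it untouched.
    return sum(len(p.split()) for p in doc.paragraphs())
```

Only the adapter at the bottom would need to change per encoding; `word_count` is the part written "as if" about the platonic document.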