...
I've also heard that all the JetBrains IDEs have very good search built in. VSCode also has a separate symbol search, but its effectiveness depends on the language server powering it for the language you use.
The key word is 'separate'. Apart from potential performance considerations, I don't know why you would implement file and symbol search in different places, with different keystrokes. I know what I need to find; just give me a list of all the things......
To be fair, I haven't tried JetBrains, though I intend to start using CLion on Mac to see how that goes....
...
Very interesting! 🙂 Can't wait to see what you will build.
Have you started on an implementation? What do/will you use? HTML/JavaScript? Something else?
What do you think of "reactive" frameworks like Vue.js? Reading your intro makes me think this kind of framework could be a good fit for implementing tools with bindings between several representations (the spatial canvas, the sheet, others...).
Also, I am just starting to dig into generative art (in fact, I am completely new to it), but my feeling is that your way of expressing constraints could be very interesting for it. And I was wondering how randomness could fit into your system, e.g. a spread with random intervals between magnets, or with some magnets deleted, etc.
Very excited to see where this goes!
The snapping being an extension to/equivalent to dragging seems promising from a UX perspective, particularly if it can be made clear after the fact what 'type' of drag created it.
-> is there an exhaustive list of the types of snaps? e.g. translate, scale, rotate... if it's a small list then there may be some scope for replacing the circular nodes with appropriate icons (or adding icons somewhere...)
spreads seems like an intuitive approach to one of the CS fundamentals - repetition
conditionals seem more abstract, and I'm not sure to what extent you can represent them, particularly if the other visual variables (colour, opacity, texture, position...) are all reserved for the user... I imagine they'd probably need a separate view, either for abstract conditionals in general or looking at a particular group (however that's defined), and showing all the potential alternatives
...
@Cameron Yick that polyhedra viewer is so neat. Instead of clicking on operations, want to see the space of them expanded out in VR.
Joshua Horowitz If you had any specific thoughts on this talk by the way, I'd love to hear them!
Konrad Hinsen is that you? https://www.nature.com/articles/d41586-020-02462-7 More like Past of Coding than Future of Coding. Or retrofuturistic? forepasséistic?
That's me indeed! It's about the past, but with the goal of learning for the future. Let's say the future of the past 😀
...
Jack Rusher thanks for that recollection and story about tappers/listeners... this is a great reminder of why teachers, educators (and 'communicators') are professionals whose skills should be recognised and appropriately valued (within society), as that process of bridging between different points of understanding is (often) by no means easy, and as you say, can be easy to lose sight of from one's own point of understanding... #fairpayforteachers! 😉
I've been doing more and more writing lately, and have been wishing for a tool that allows me to write my outlines, drafts, and final compositions in the same editor window with the ability to toggle any of those "layers" on and off at will, merge them, copy selections to new layers, etc. It would work sort of like Photoshop but for writing... I have a feeling these principles could also work as an IDE extension (imagine being able to hide the "code" layer and show only the "comments" layer, or the "documentation" layer). Curious to hear your thoughts, or whether anyone else is working on something similar?
This is a fascinating idea and I've never seen anything like it. I love the approach of taking something we take for granted in the GUI tools world and folding it back into the textual tools world. There's an open-endedness to this that is exciting to me the way Roam was exciting when I first saw it. I hope you keep exploring this space and share what you learn about it.
Yeah this is rad. 🙂 I designed and built a prototype of what you’re describing back when I was on Visual Studio - I even called it editor layers. Lots of our designs have trickled out over the last decade, but sadly that one hasn’t yet. We did all sorts of cool stuff with it, turned types on and off, reduced things to signatures, embedded profiling information in, unit testing coverage/passing, git blame-like stuff, callers/references, even real-time inline diffing, all toggled by this little series of buttons attached to the top right of the screen. The design came from thinking of the editor a bit like an overhead projector and you just kept putting transparencies over top whatever was there - different sets gave you very different views on the thing you were looking at. That same idea eventually was the source for the name “Light Table.”
Years later I was surprised to see Gary Bernhardt basically produce the same design, with many of the same examples! 🙂 https://www.destroyallsoftware.com/talks/a-whole-new-world
We did something similar in the Eve editor as well where you could control the sections of the document that were visible at any point in time, though that’s a little less layer-y and a bit more “build the document out of the pieces that are relevant.” We did also have a performance layer, but we never exposed the panel to turn it on.
The thing I get from this, that I didn't get from A(n) Whole New World (for instance) is that these Rabl layers can be nothing more than a named text style with a visibility toggle. In the video, the style is just a color. The style could be a little fancier — maybe different fonts, why not. But the process of defining a new layer ought to be just like photoshop: add a layer on the fly, name it (or not), put stuff in it. The barrier being so low, you don't even acknowledge how powerful it is.
Sure, we could start imagining all the built-in layers a code editor could add, and that's cool... but that's a different thing than a tool just for a writer to define their own conventions around. Like the difference between adjustment layers (PS) or nondestructive filter layers (Pixelmator Pro) and regular layers (every nontrivial image, video, music, etc editor).
I want both, but they're separate ideas.
Great idea! I also like that the user simply creates a layer and starts adding text to it, without any meaning other than the one she decides according to her workflow.
In Glamorous Toolkit, we utilized both ideas. Here is an example that covers both dimensions:
🐦 Tudor Girba: The idea of a layered editor was recently proposed by @crabl: https://twitter.com/crabl/status/1298793388817313798
@spiralganglion wants it everywhere: https://twitter.com/spiralganglion/status/1298843644728717313
We already employ this in #gtoolkit. Eg, for showing the markup only when needed or showing previews live directly in the editor👇
🐦 Ivan Reese: This is a perfect Future of Coding demo — a super simple proof of concept, of something I’ve never seen before, that is so obvious in hindsight, that pulls disparate ideas together, that opens the door to much further exploration. https://twitter.com/crabl/status/1298793388817313798
Very appealing idea. I'd love to write my polished, unpolished and draft snippets etc into different layers.
Oh hell yes, give it to me now. My current workflow for writing fiction sort of emulates this in plain text: I often put an outline thingy (list of beats, anyway) at the start of a scene and delete them later. I also tend to mark things for future deletion with square brackets if I'm not sure yet whether I want to (or to give me time to cope with killing a favorite line). I would love to instead shunt them to a different layer. I crave layers every day, and so I've thought a lot about them, for outlining as in your demo, for coding as in Bernhardt's talk, and for data representation like Nick Smith and @Orion Reed have been talking about, e.g. if you're extracting and transforming data from a word doc, using layers as a way to separate info that is portable between formats and Word-proprietary details that just need to stay around if we want to round-trip processed data into the original format.
So yes, I've been working on it in the sense of thinking and wishing really hard. Still working out the details.
This looks very nice! I use org-mode in Emacs in much the same way, but it's a lot more cumbersome to use because org-mode, like all outline/folding editors, supposes a hierarchy of detail which isn't always there. That's also true of the markup display technique in GToolkit that Tudor Girba just mentioned. That's a very nice implementation of detail-hiding, but not the same as unrelated layers.
I simply said that the mechanism can be used in many ways across the two classes mentioned in the thread: either to hide/unhide or to augment with external content. The example exercises both. Indeed, for a different use case than the initial one.
I already experimented with this idea for markup and inline graphics plugins in my text editor. It's my hope that I can build a 'markdown wiki' hosted entirely in the editor, and single click deployed to github. One day I can use it to write documentation or update my website. I'm surprised more editors don't automatically format and let you edit markdown tags live inside documents; it feels like a natural extension that should have been done a while ago. The layers in the demo are more like 'folds', yes? i.e. the layers aren't independent of the text paragraphs on other layers - they are stacked with them? It's powerful stuff. I can imagine writing emails this way; but maybe that could result in embarrassing mistakes!!
This idea has some potential overlap with Progressive Summarization https://fortelabs.co/blog/progressive-summarization-ii-examples-and-metaphors/ (with a large community interested in new tools)
I am so excited by this idea of layers for arbitrary author-determined organization, I'm going to just rattle out some ways I can imagine it being used. All of these are written from a writer's perspective:
Yup I'm now thinking about how to add it to my editor. This opens up a whole new dimension. Kinda like transclusion, but simpler, because the content isn't "somewhere else, transcluded here" but rather "in another layer, but right here". There is an implicit, global organization (the flattened document), but then you can expand the flatness into the different dimensions as needed.
In my editor, it's possible to compact/expand different header levels - going to think about commands to make this straightforward
I've been kicking around an idea for a while in the same vein, but coming from the direction of reading instead of writing. I like to take different kinds of notes--structural, commentary, cross-reference (within a work/to other works), summary, etc. I've been thinking about building something around an e-book reader that would allow me to move between and freely intersperse different forms of notes.
So, I could start with a view of my structural notes (my "map" of the book; possibly involving visualizations) and quickly drill down to the right section for deeper reading. Then I could add a cross reference by searching through my corpus for the right information, then pull it up side-by-side and add shared commentary--which can then serve as a new work that I can then structure, commentate, cross-reference, and so on.
It's a view of writing (or note-taking) that emphasizes the connection between the stimulus (the original work) and the response (the layers of annotation), always keeping them linked. So, theoretically, if I published a blog post and a reader owned a book I reference in the post, they'd be able to move directly from a reference in the post to the relevant passage in the book, then be able to surf my layers of annotation (and add their own) if they so desire.
Haven't gotten around to doing anything yet, though.
I pitched the idea at some friends in a writing slack and they were also excited.
That's at least 2 more people working on editors; I'd love to see these projects.
Wow! Thank you all for your kind words and feedback, I certainly wasn't expecting to strike such a chord 🙂 I'm super excited to see this concept appearing in some projects around here (especially the work that Chris Granger and Tudor Girba have done!) if you're interested in talking about things like this, my DMs are always open. Like Christopher Galtenberg mentioned, it would be so crazy to incorporate some element of progressive summarization (or spaced repetition!), so I'll definitely be exploring that as I start building this out.
Curious to hear people's thoughts about how different this is from using nested content. Seems very similar to me, except that children are inlined when visible. When nesting, for example:
- 1
  - 2
    - 3
- 1
  - 2
    - 3
      - 4
the user would need a shortcut or UI to choose a global depth (level) which would open all nodes up to a certain depth (1, 2, 3, or 4).
I think either approach (flat & nested) is possible in Roam today by using tags & filters. The user would have to tag a block like #ideas and then apply the appropriate filter.
You could implement one in terms of the other, as long as you allowed arbitrary showing/hiding, rather than needing to open nested levels strictly in order. Beyond that, the difference is entirely in the UI. Nesting mixes content and presentation, layers (as imagined in this thread) don't. What seems nice about nesting is that it's easy to move things between layers — just indent/outdent to the level you want.
The way I understood it, this sort of layering is very different from hierarchical nesting. The individually showable/hideable segments in an org-mode style outline are confined to mutually exclusive regions (wrt their "level"), whereas layers can freely intermingle through the full extent of the document. To recover layers from an outline, you would need to not only have an operation that hides/shows everything at, say, level 2 without changing the parent's visibility, but you would need to have other "levels" under level 1 that actually sort of compare the same as 2, in the sense that they are always siblings, but distinct for the purposes of the global show/hide command. I'm not sure that would be enough, actually. Once you add the idea of arbitrary visibility, it's hard for me to think of it as hierarchical at all.
Looked at another way, nesting induces a total order on the "layers" that may not be desirable.
So, I would argue layers are more general than nesting. Nesting is easily implemented in terms of layers (one per node), but nesting is not enough to implement layering without at least using weird hacks.
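To make the comparison concrete, here's a minimal sketch (hypothetical design, all names mine) of the layers model being discussed: a document is an ordered sequence of segments, each tagged with a layer name, and rendering flattens only the visible layers. Because segments on different layers can freely intermingle, no hierarchy or total order is imposed.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    layer: str  # author-chosen layer name, e.g. "draft" or "final"
    text: str

def render(segments, visible):
    """Flatten the document, keeping only segments on visible layers,
    in document order."""
    return "".join(s.text for s in segments if s.layer in visible)

doc = [
    Segment("draft", "An outline note. "),
    Segment("final", "The polished sentence. "),
    Segment("draft", "Another beat to hit."),
]

# Toggle layers freely -- unlike nesting, "draft" and "final" segments
# interleave through the full extent of the document.
print(render(doc, {"final"}))
print(render(doc, {"draft", "final"}))
```

Nesting then falls out as a special case (one layer per level, with a show-up-to-depth rule), but the reverse direction needs the hacks described above.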
My mind keeps going back to this idea of layered text and everything you could do with it. What about true overlapping layers? Would it make sense? E.g. one layer may have a text snippet and another layer may have a link that overlaps exactly with that snippet. It can subsume all embedded markup. You just dump all style info into the style layer, exactly overlapping with the parts you're styling.
OK if layered text, and not plain flat text, was the lingua franca of Unix, what would programming languages look like?
It would be cool to have OS support, but I worry about pipe-based composability. I think that's essential to being a Unix "lingua franca". To process a piece of layered text, you have to know what layers exist and what they mean. Most of them are probably very application-specific, even stream-specific. You could generically pass them through, but then an entire ecosystem of utilities has to deal with format complexity they can't benefit from and can only mess up. At that point it's probably better to build a full object system like Powershell, since you're already paying for most of it.
Pipes could still be flat text (you'd select or flatten the layers when you pipe), but what I was getting at was how the 'medium' affects what we design. We design PLs with a 'flat' grammar because we are designing languages represented in flat text. Layered text is still very free form and Unix like, but would lead to different kinds of PLs that make use of the layers.
I'm trying to get the implementation of this clear in my head, since I'm seriously considering adding it to my text editor to help with other issues. One thing I've been thinking about is annotating shaders with widget information. For example, what if my flattened code looked like this:
/* Layer:1 Widget:Slider, Min:0, Max:10 */ float val;
Effectively showing layer 1 would enable editing of how the float value widget would work. Hiding layer 1 just gives you the shader code with the widget popup.
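A rough sketch of how that annotation convention could be parsed (hypothetical; the `Layer:`/`Widget:` field names are just the ones from the example line above). Since the metadata lives in an ordinary comment, the flattened file still compiles as plain shader code:

```python
import re

def parse_annotation(line):
    """Split '/* Layer:N key:val, ... */ code' into layer number,
    metadata dict, and the remaining code. Returns None if the line
    carries no annotation."""
    m = re.match(r"/\*\s*Layer:(\d+)\s+(.*?)\s*\*/\s*(.*)", line)
    if not m:
        return None
    layer, fields, code = m.groups()
    meta = dict(kv.split(":") for kv in re.split(r",\s*", fields))
    return {"layer": int(layer), "meta": meta, "code": code}

ann = parse_annotation("/* Layer:1 Widget:Slider, Min:0, Max:10 */ float val;")
```

Hiding layer 1 would then render only `ann["code"]`, while showing it exposes the widget metadata for editing.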
As I understand it layers aren't nested, but I think you'd need to be able to 'expand' deeper layers into upper layers without opening intermediate layers. For example:
L0: -----------|---
L1: ------|-------
L2: ----|----------
The | represents the location in Layer1 of layer2. Expanding layer 1 would show layer 2, and inside layer 2 there is another point where a further layer lives. But if L2 is shown but L1 is not, then L2 would effectively appear where the | is on layer 0. I think?
And of course, edits to the L0 layer would wind up 'moving' the insertion point of Layer1. But what about deleting the region around the expand point? I guess the next layer just drops into the nearest sensible position.....
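That "drops into the nearest sensible position" rule can be sketched as a small resolver (hypothetical names; offsets below are illustrative, not measured from the diagram). Each layer records an insertion point in its parent; if the parent is hidden, the layer surfaces at the parent's own insertion point, recursively:

```python
def anchor_position(layer, insert_points, visible):
    """insert_points maps layer -> (parent_layer, offset_in_parent);
    the base layer maps to (None, 0). Returns the (layer, offset)
    where this layer's content should actually appear."""
    parent, offset = insert_points[layer]
    if parent is None or parent in visible:
        return parent, offset
    # Parent hidden: bubble up to wherever the parent itself anchors.
    return anchor_position(parent, insert_points, visible)

# The L0/L1/L2 example: L2 anchors inside L1, L1 anchors inside L0.
points = {"L0": (None, 0), "L1": ("L0", 11), "L2": ("L1", 6)}
```

So with L1 hidden, `anchor_position("L2", points, {"L0"})` lands L2 at L1's own insertion point in L0, matching the behavior described above.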
I do something similar with Markers in my code, which can be used for things like compile errors. Which brings up the next issue of compiling code and referring back to locations in the source that may not be expanded, etc. 😉
So in short, interesting issues that might lead to a powerful system......
Implementation note: with layers that contain extra information comes the possibility that a flattened file could not be just dropped into a game engine without sanitizing. So for this case, comments in layers makes sense because it keeps the code compiling and keeps it as a plain text file. But it's just a convention.
And collapsing Layer1 from my code sample above could have a special display; I was thinking some kind of mark in the code to indicate points where layers are embedded; but suppose the widget was a color picker.... then the collapsed layer 1 would be a color swatch. Nice....
And not forgetting that layer 1 is effectively several fragments, since there can be multiple points in the text where layer 1 pieces are inserted. Hmm, this might keep me going for a week or 2!
OK, I'll stop designing out loud......:)
Shalabh Chaturvedi One use case would be to make type annotations a different layer, to reduce visual clutter. And that would open the way to having multiple annotations on code, not just a single rigid type system.
Chris Maughan feel free to keep designing out loud :) It's really helpful to hear this sort of feedback because there are just so many ways this idea can be interpreted. My proof-of-concept was deliberately open-ended for this reason: I really like to see the sorts of assumptions folks make once they start implementing a concept such as this and how practicalities like the choice of underlying data structure influence which affordances become possible or impossible as a result.
A year ago I was working on VR Visual Scripting in Facebook Horizon. They've recently started to share some more information leading up to Facebook Connect. I figured the scripting system would either be largely the same, or entirely rewritten since I left. It seems like it's mostly intact based on documentation shared (https://support.oculus.com/872960043229069/)
https://twitter.com/oculus/status/1299032769201405957 for some examples (this is all new content from when I was working on the project)
🐦 Oculus: Hello from Horizon! We’re creating a new social VR experience that ignites your imagination and brings people together. As our invite-only beta community builds new worlds, follow along at @FacebookHorizon for a peek behind the curtain.
Unfortunately it looks like it didn't really advance much past when I was there, which is not a good thing imo 🙂
Reminds me of https://en.wikipedia.org/wiki/Dreams_(video_game) or https://en.wikipedia.org/wiki/Roblox (or the grandparent of those kinds of systems, https://en.wikipedia.org/wiki/Second_Life) for VR.
It's a great idea that's well suited for VR. If it's even passingly similar to those other platforms I bet it will be a wild commercial success (which VR desperately needs anyway, so good for them).
It unfortunately doesn't seem to be pushing the boundaries of that kind of platform very much though. It's very status quo in a way.
It was extremely challenging (and according to some people I know that are still there, still challenging) to propose reimplementing direct features from any of the products that inspired it
If there was something about this that was particularly interesting, that survived through to the public (or that you feel comfortable talking about), what would it be?
I'm still not 100% sure how much survived based on the documentation, the basics are there
There are also no screens or videos of the final UI, but that's probably what will disappoint me the most, the description in the documentation makes it sound like it's mainly the same as when I left
so I designed the system to be a 2.5D s-expression-like editor, inspired by scheme bricks and scratch, it was meant to use mostly the same tools as the world building tools (which are featured in some of the videos)
from the documentation it looks like that didn't happen, and it's mostly a big list view with lots of buttons and drop downs
so probably the most interesting thing about it is it allows for semi-collaborative editing and live coding by default (although when I left objects reset when scripts changed)
the core of it was an object oriented message passing system. So each scripted object has a single script associated with it, and communicates with other objects via message passing
originally it was a lot more dynamic, but while I was there it was pushed to be more and more static (I wanted super loose coupling; my manager wanted really tight coupling that looked like a more "traditional" OOP language, and explicitly used "method" instead of "message" for all communication)
it seems like the tighter-coupled version won out based on the UI description, but it's hard to tell; there might be multiple ways to do it
Scott Anderson what do you think of Neos VR? It seems to have a delightfully, awkwardly, awesomely weird, hm, flexibility to it.
Some members of the Horizon team were really pushing a node and wire UI
and we had a working version of it at one point, and Horizon can still use wires for object references
In two dimensions managing node and wire complexity can be hard, but with zooming and organization it can be ok... I guess
Dreams uses node and wire and solves it by making it a zoomable 2D UI constrained to a plane and clipped (basically nested floating 2D windows in 3D space)
Yeah, the 2d-in-3d seems to be the best of both worlds, for my money.
I do want to experiment with very 3D programming languages, but occlusion is an issue. Two ideas I thought of are a 3D wire world/redstone variant, and tangible programming inspired by modular robotics kits
Building a 3D esolang in VR would also be fun https://esolangs.org/wiki/Category:Multi-dimensional_languages
Thanks for your thoughts Scott Anderson. I've been messing with a 3D esolang — though progress certainly ground to a halt with the ending of the world. My superficial constraint has been no text. The real constraint is to make it "non-symbolic" in that representations cannot be "arbitrary." Representations need to be structure preserving in a sort of uniform way.
In case the two minute summary was too much in too little time, here are longer versions I just did: https://www.youtube.com/watch?v=TFQsNCm5AMM and https://www.youtube.com/watch?v=b2lZ2Zjbr_k
What started with me reverse engineering Notion became a data-first recursive UI resolver I called https://github.com/den1k/root.
Here's how it differs from most common technologies today:
It packs a few more punches. The best example is probably this https://den1k.github.io/root/rich-document.html in about 200 LoC (https://github.com/den1k/root/blob/master/dev/examples/rich_document/views.cljs).
Also this JSON->app workflow from yoshiki https://twitter.com/yoshikischmitz/status/1176642448077967362
🐦 yoshiki: I've been jamming on this concept for making data-driven designs. Given some JSON, this app will provide you with an interface to describe how you want each entry styled, allowing you to gradually create a more complicated design. Here I create an airbnb-ish app.
side note, the biggest project I wrote with root is http://theshopgrid.com (unfortunately not OSS yet). As with any approach data-first comes with its own set of tradeoffs. Happy to elaborate if anyone is curious.
I love how root renders all data by default. This helps with the 'dataphobia' being discussed over at https://futureofcoding.slack.com/archives/C5T9GPWFL/p1598534303161500
thank you! I commented on that thread. Yes, it’s like a data browser incrementally becoming a UI. Lots can be done there to improve DX, for example analyzing strings for urls/images that can be embedded automatically. All kinds of remembered hiding and showing of state and so forth.
this looks really cool! can you talk a bit more about how the data directs the ui tree?
Sorry, I was on the phone when I wrote my previous comment. http://joshuahhh.com/projects/pane. The author posted new work here recently: https://futureofcoding.slack.com/archives/CCL5VVBAN/p1598126579060300
[August 22nd, 2020 1:02 PM] joshuah: I'm prototyping a new approach to drawing dynamic pictures with direct manipulation. So far, I've been calling it "Apparatus with Magnets". We'll see if the name... sticks. :wink:
Here's a "project proposal": https://www.notion.so/Apparatus-with-Magnets-077e72bc1ebf4f7a9ec512ef76d47994. Progress is slow but steady. Feedback is welcome!
Garth Goldwater sure! When you define your ui-root you give it a `lookup` function and `content-keys`. From there it works like reduce: you pass it a `root-id` that it passes to `lookup`. `lookup` returns data that root wraps in your defined UI components or the default component. It also looks for `content-keys` on `lookup`'s return value, passes those to `lookup` and recurses down all the paths. Upfront, root is not aware of the structure of the data. Given the id, `lookup` can run arbitrary computation. It can, for example, issue an HTTP request or read from IndexedDB. It can return data or a promise that root will resolve and eventually render. It's dumb by design and that gives it lots of flexibility.
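The recursion described there can be sketched roughly like this (a Python analogue with hypothetical names, not root's actual API; root itself is ClojureScript): `lookup` resolves an id to data, and `content_keys` tells the resolver which fields hold child ids to recurse into.

```python
def resolve(root_id, lookup, content_keys, render=None):
    """Recursively resolve root_id via lookup, descending into any
    fields named in content_keys. The resolver knows nothing about
    the data's shape up front."""
    data = lookup(root_id)
    children = {
        key: [resolve(child_id, lookup, content_keys, render)
              for child_id in data.get(key, [])]
        for key in content_keys if key in data
    }
    node = {"id": root_id, "data": data, "children": children}
    # render (if given) wraps each node in a UI component.
    return render(node) if render else node

# lookup here is just a dict access, but it could hit a DB or the network.
db = {
    "doc": {"title": "Home", "content": ["p1", "p2"]},
    "p1": {"text": "hello"},
    "p2": {"text": "world"},
}
tree = resolve("doc", db.__getitem__, ["content"])
```

(Promise resolution and the default component are omitted for brevity.)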
thanks! checking my understanding here: so lookup is in charge of grabbing the data from content-keys and returning it in some shape, and ui-root decides how to wrap that unit of data in UI markup, and then runs lookup on the returned shape’s content keys field?
and root-id is just where ui-root starts that process in the data tree?