Did programming go through the "PC revolution" too? Because to me, programming pre-PC, during PC, and now post-PC (tablet, etc) seems to have changed far less than hardware and other software, especially with respect to the qualitative nature of the changes the PC revolution brought about.
Change my mind? Anyone else ever talk about this with this lens?
This seems relevant here: http://www.winestockwebdesign.com/Essays/Eternal_Mainframe.html
Anyone here know about in-browser ML tools for building models using images? In this case, we'd like to annotate ("draw" where relevant lines are present) some subset of an image corpus to build a model, and to do so interactively, feeding this training data into some easily configurable NN tool. Is there anything like this that is simple from a UI point of view?
Not exactly. I want something simpler that just does images (and something that's not a commercial product). The intended use is for academic work: more or less, annotating images and using ML to "find the lines" on scanned forms.
Not sure, but you might find something among these - https://github.com/heartexlabs/awesome-data-labeling
This is a little prototype I whipped up with a PyTorch model behind a web socket. Interaction is done via click events on an HTML canvas.
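To make the interaction concrete, here's a minimal sketch of the annotation side only: turning canvas click events into pixel-level line labels that could feed a model. The click format and function names are my assumptions, not the actual prototype's code, and it uses no ML library so it stays self-contained.

```python
# Hypothetical sketch: consecutive canvas clicks define line segments,
# which are rasterized into a binary label mask for training.

def rasterize_segment(p0, p1):
    """Return the integer pixels on the segment p0 -> p1 (simple DDA walk)."""
    (x0, y0), (x1, y1) = p0, p1
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    return [(round(x0 + (x1 - x0) * t / steps),
             round(y0 + (y1 - y0) * t / steps)) for t in range(steps + 1)]

def clicks_to_mask(clicks, width, height):
    """Each consecutive pair of clicks is a segment; mark its pixels 1."""
    mask = [[0] * width for _ in range(height)]
    for p0, p1 in zip(clicks, clicks[1:]):
        for x, y in rasterize_segment(p0, p1):
            if 0 <= x < width and 0 <= y < height:
                mask[y][x] = 1
    return mask
```

The real prototype presumably ships these masks over the socket to the PyTorch side; this just shows the client-side shape of the data.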
Inspired by the recent post on Crosscut, I started thinking about geometric programming, and what it might look like.
I came up with the idea of using areas and line segments of 2D shapes as inputs and outputs.
For example, a rectangle gives us addition (the half circumference), multiplication (the area), square root (using area as an input and measuring one of the sides):
(image 1)
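The correspondences in that diagram can be written down directly; a minimal sketch (names are mine) treating the rectangle's measurements as the operations:

```python
# Rectangle-as-arithmetic: a and b are the two side lengths.

def rect_add(a, b):
    """Addition, read off as half the circumference of an a-by-b rectangle."""
    return 2 * (a + b) / 2

def rect_multiply(a, b):
    """Multiplication, read off as the area of an a-by-b rectangle."""
    return a * b

def rect_sqrt(area):
    """Square root: feed the area in, measure one side of the square back out."""
    return area ** 0.5
```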
One could imagine an iteration construct à la the one in Bret Victor's computational drawing program, or similar to the one in Crosscut, that allowed expressing e.g. Fibonacci iteratively:
(image 2)
Here the green arrows connect input lines to output lines, and the dots show where line segments begin and end.
The dotted rectangles represent iteration, where the first rectangle is used to specify the relation to previous rectangles (exactly how is a bit vague, I know). The green line sets the distance between each iteration.
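Read operationally, the iterated-rectangles diagram computes Fibonacci the usual way; a sketch of that reading, where each iteration's rectangle has the two previous numbers as sides and its half-circumference becomes a side of the next one:

```python
def fib_by_rectangles(n):
    """n-th Fibonacci number, phrased as the diagram's iteration."""
    a, b = 0, 1
    for _ in range(n):
        # (a + b) is the half-circumference of an a-by-b rectangle,
        # i.e. the geometric "addition" from the earlier diagram
        a, b = b, a + b
    return a
```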
Similarly, a unit circle could show the angle as a bent line (in radians, as the circumference is 2*pi), and sin/cos as shown. Using sin/cos as inputs and measuring the bent line you get asin and acos:
(image 3)
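In code, the unit-circle reading is just the radian convention; a sketch assuming the "bent line" is arc length on a circle of radius 1:

```python
import math

def angle_to_sin_cos(arc_length):
    """On a unit circle the arc length *is* the angle in radians."""
    return math.sin(arc_length), math.cos(arc_length)

def measure_arc_from_sin(s):
    """Inverse reading: given the sine as input, measure the arc back (asin)."""
    return math.asin(s)
```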
Thoughts? Maybe it already exists?
Hey, this is pretty cool!
Whenever thinking about geometrically constructing math formulae, I like to reference the section Constructing Reals from Rudy Rucker's wonderful book Infinity and the Mind — https://www.rudyrucker.com/infinityandthemind/#calibre_link-318
It's important to ask whether the geometric representations you use actually contribute to understanding.
Take your Fibonacci example. I don't think representing addition as the half-perimeter of a rectangle provides me with any intuition or insight here. To me, this particular diagram would be better off if you just stacked the input segments to form the output segment.
Better still might be a Fibonacci spiral construction, as shown below. No clue (offhand) how to represent this construction in a programming system, but it says a lot more, geometrically, than half-perimeters of rectangles /or/ stacked segments.
Yeah, that's something we ran into with Crosscut and didn't really engage with. The freedom to draw "whatever" at the concrete level is really pleasing; being able to manipulate the "whatever" using a dynamic system is appealing. But having vector math be the only way to do that dynamic manipulation is such a narrow channel through which to work — it's very unlikely to match the character of the "whatever" you drew.
I'd like to lean away from geometric representation entirely. That means no vector math — probably no math at all. I want to see what that dynamic / programming system might look like.
People admire Lisp's "elegance" in that it only has a single built-in data structure: lists, (a b c). Clojure, on the other hand, is generally admired for being more "practical". One of the main things Clojure introduced is vectors, [a b c], and maps, {:a b, :c d}, as first-class syntax. I was reading some Lisp this week and my brain kept grating/complaining because I was seeing what are conceptually hashmaps written as what I was interpreting as lists of pairs, something which I "know" to be different.
Conversely, I have always thought it was annoying that C++ has three different operators (::, ., and ->) that all essentially mean "member of". Would C# be better if you had to say System::Console.Print() instead of System.Console.Print()? No; I think most people would rarely feel the need to conceptually distinguish between these things, and the C++ syntax is just annoying noise to me.
What is it that distinguishes Clojure's brilliant decision to expand the syntax from C#'s brilliant decision to compress the syntax?
It implies to me that if there is a scale of simplicity-to-expressiveness, then humans just happen to sit at a particular point on it. There's no particular "reason" for why these changes were right other than "they'd gone too far that way, go back this way". There's a local maximum somewhere in the middle.
I have been thinking about this simplicity-to-expressiveness scale recently as it irrationally annoys me that on WikiData, "instance of" is "just another relationship" (it's P31 - "citizen of" is P27!).
I think that RDF is far too far towards the "elegance" end of the spectrum and would greatly benefit from a Clojure-style acknowledgement that some things are more different, and should be more differentiated. Yeah, it's mildly interesting that Node-Rel-Node triples is all you need to describe an ontology, but that's not actually how people think about the world...
In the C++ case it's a distinction without a difference. In the lisp case it's a difference without a distinction.
Daniel Jackson's concept modeling work does a good job of giving a framework for why
Tools, tasks, and people are a triple. If you keep two the same you can find a local maximum for the third. But what you learn doing that isn't transferable to different tasks, tools, and people. And maximizing only one would have to be justified by expecting the other two never to change. It's never just one spectrum, like "elegance". With Clojure and C#, it might be that they were opposite sides of the same local maximum, because the task and people are similar. Maybe not. RDF has a completely different triple, so the lessons aren't transferable, I would expect. I would also say that if a tool doesn't reflect how people think about the world, that doesn't necessarily mean there is anything wrong with the tool. It might suggest the utility of another, different tool, aimed at different (or just more) people.
IMO...
The beauty of assembler, lisp, lambda calculus, triples, etc. is that they have “no syntax” and don’t restrict what you can do.
The beauty of C++, Smalltalk, Clojure, etc. is that they “have syntax” and restrict what you can do.
“Local maximum” is just that - local. There are many local maxima. Schmooing all possible notations into “one language to rule them all” results in complexity, poor UX, epicycles, watered-down unions of features, etc., etc.
Disclaimer: I am a Lisper and a PEGer (Ohm-JS). Syntax is cheap. Deprecate programming languages.
Expanding on Chris Granger's point a bit, C++'s various member/scope operators all have basically the same signature of (group, selector) -> thing. So it doesn't hide or obscure any semantics to use the same syntax. Lists and hash maps do not have the same signature; a list is flat, but there are internal relationships in the arguments to a hash map. So it helps to have different syntax to remind you of that.
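A sketch of that shared signature (the names here are mine, not any real API): one accessor can cover every kind of "group" coherently, precisely because the signature never changes; this is the sense in which C#'s single dot loses nothing.

```python
def member(group, selector):
    """(group, selector) -> thing, regardless of what kind of group it is."""
    if isinstance(group, dict):
        return group[selector]       # mapping "member"
    return getattr(group, selector)  # module/object member

import math
member(math, "pi")        # module member
member({"pi": 3}, "pi")   # dict member, same signature, same call shape
```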
Chris Granger’s take, absolutely. C++ adds complexity that doesn't provide any (useful) expressiveness. So it isn't even on the Pareto frontier of the simplicity/expressiveness tradeoff space.
I personally take Lisp vs Clojure as a better example of a legitimate tradeoff close to the frontier.
I also think it's bonkers that the lowest-numbered Wikidata property, P6, is "head of government". Shows you what Wikidata thinks is most important. 😛