
Ope 2020-08-17 13:40:02

What is future of hardware and how will it impact the future of code? What are the most promising approaches to breaking free of Von Neumann bottleneck? <- is this still a thing?

I have heard Simon Peyton Jones say a couple of times that they tried to create new architectures for functional languages but they failed.

Would these architectures succeed today? Are the constraints (business, technical, community) different enough, or will they differ enough in 10 years, that they could succeed?

Mariano Guerra 2020-08-17 13:43:09

the memristor and non-volatile RAM should change how we code: no more need to think about state persistence as a separate step. The problem is that then we can't fix things by rebooting 😄

Orion Reed 2020-08-17 13:46:55

I’m unrelentingly hyped for https://en.wikipedia.org/wiki/Field-programmable_gate_array and anything resembling graph-processing. Merging memory, storage, and computing into the same substrate is also exciting.

More than the specific possible hardware, my real hope is to remember computation is king, and hardware should be in support of the computation we’d like to do quickly. If you have a way of computing that may be awesome but is hard to reconcile with hardware, that’s a sign hardware should adapt to fit.

Intel has had long enough dictating the terms of programming, and we certainly don’t want to replace them with an ARMs race to the next monopoly.

“If hardware is unjust, change hardware!”

— stealing from a quote about naturalism

Ope 2020-08-17 13:57:54

I was led down this path by wondering whether there are enough people writing Python, for example, that a Python interpreter compiled to hardware (a machine that only knows how to interpret Python 3.7, say, and is optimised to do so) would be a great idea, i.e. one that makes sense from a unit-cost perspective and is efficient in other ways.

We probably don't want to compile user code to hardware, since we still want to be able to change code quickly, but for things like the interpreter, which are relatively fixed and used by enough people, maybe it makes sense.

Should hardware be a more significant portion of the stack? Does the tradeoff make sense? Flexibility for power. (Just checked Wikipedia: yeah, an ASIC or FPGA that is just for Python, for example.) I wonder if anyone has tried it and whether it would become more widespread in future.

Doug Moen 2020-08-17 14:02:01

It looks like GPUs are the way we break free of the Von Neumann bottleneck. Desktop discrete GPUs have thousands of processing units. Every personal computer (desktop, laptop, mobile) now has a powerful GPU. Machine learning has made GPUs even more relevant. The future of code needs to take GPUs into account, and not ignore this powerful hardware. My project, Curv, lets you write simple, high level code that executes either on the CPU or on the GPU. Curv is a pure functional language because it's easier to compile pure functional code into massively parallel GPU code. I think that GPUs are viable as an architecture for pure functional programming. I do think that you need to design your pure functional language with GPUs in mind, and that wasn't a consideration for the design of Haskell. I'm not claiming that Haskell is the ideal GPU programming language.
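
To make that concrete with a toy sketch (plain Python here, not Curv): a pure elementwise map has no dependencies between elements, which is exactly the shape that parallelizes well on a GPU.

    # A pure function applied independently to each element: since no
    # element depends on another, every iteration can run in parallel,
    # e.g. as one GPU thread per element.
    def brighten(pixel):
        return min(pixel * 2, 255)

    pixels = [10, 40, 200, 255]
    result = [brighten(p) for p in pixels]  # conceptually a parallel map
    print(result)  # [20, 80, 255, 255]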

Doug Moen 2020-08-17 14:08:26

"If you have a way of computing that may be awesome but is hard to reconcile with hardware, that's a sign hardware should adapt to fit."

That hasn't worked out for the Haskell people:

I have heard Simon Peyton Jones say a couple of times that they tried to create new architectures for functional languages but they failed.

Orion Reed 2020-08-17 14:09:25

Doug Moen +1

I hope one day in the future we can design programming systems on top of computational constructs like concurrency, parallelism, uncertainty, etc. and have hardware like GPUs, CPUs or networked machines be chosen dynamically without human design interventions for specific architectures.

These are all technically challenging and won't always succeed, but for it to have a chance we'll need to turn the direction of funding around, so it flows from code requirements to hardware imperatives.

Doug Moen 2020-08-17 15:53:06

My criticism of Haskell is that ubiquitous lazy evaluation is not a good idea. This feature means that understanding and tuning the performance of a program is much harder in Haskell than in any other language. Debugging is also messed up: it doesn't really make sense to ask for a stack trace. Attempts to build high performance Haskell hardware have failed. All these facts are closely related. Haskell is somehow lacking in mechanical empathy.

Doug Moen 2020-08-17 15:59:28

Modern CPUs are designed to execute single threaded C programs as quickly as possible. There is a ton of silicon devoted to superscalar out-of-order speculative execution, etc. It isn't an efficient use of hardware.

"My real hope is to remember computation is king, and hardware should be in support of the computation we'd like to do quickly."

Modern GPUs are designed to maximize the percentage of silicon that contains ALUs, relative to the amount of control logic. It is an efficient way to use silicon, if your goal is to maximize the amount of compute that can be accomplished with a given transistor budget.

So we start with this super powerful hardware with monster compute abilities, and we build languages that let us easily program it.

Ray Imber 2020-08-18 01:35:30

https://youtu.be/ubaX1Smg6pY was recommended to me when I first joined the community by Garth Goldwater. That talk was very profound for me; it just keeps providing nuggets of wisdom! Thanks again Garth!

At one point in the presentation, Alan Kay talks about how, when he was at Xerox PARC, they could just tell the Intel guys to adjust the microcode on the CPUs. He goes on to say that modern FPGAs are the closest thing we have to that in the modern era.

As usual, Alan Kay is way ahead of his time! I hope he is right.

"My real hope is to remember computation is king, and hardware should be in support of the computation we'd like to do quickly."

I disagree with this statement on some fundamental points. This statement implies that there is a fundamental adversarial hierarchy: Us Vs. Them; Software Vs. Hardware; One must submit to the other! This is a counterproductive view imo.

My ideal future of hardware will come from better symbiosis of computation and hardware. Not domination of one over the other.

Referring to the Alan Kay talk again, Hardware and Software design used to be much closer together. A Programmer and a Hardware engineer would sit down and solve problems together, fighting the tyranny of Physics together 😛.

My hope for the future is that this kind of symbiosis has a renaissance.

Garth Goldwater 2020-08-18 01:59:01

alan kay often cites Bob Barton as a huge inspiration for the hardware/software overlap, if anyone who knows more about hardware than me was looking for a productive name to google

Kartik Agaram 2020-08-18 02:03:44

I tend to be bearish on this whole idea of a "Von Neumann bottleneck".

  • There have been many, many attempts to create processors that operate on graphs. One of the more recent ones was by my research group in grad school 10 or so years ago: https://www.cs.utexas.edu/~trips. But see also Dataflow architectures (https://en.wikipedia.org/wiki/Dataflow_architecture), and MIT's Raw processor (http://groups.csail.mit.edu/cag/raw). While it's possible we're just one breakthrough away from making it work, it is an Open Research Problem.

  • Hardware people know what they're doing, far more than us software people. Whether that's because they're Superior Human Beings, or because their domain is easier, you seek to make hardware more like software at your peril. Imagine having to fix a security vulnerability in a chip instead of a compiler.

  • Hardware has less open source, and you seek to implement layers of software in hardware at your peril. Imagine having to convince your processor supplier of the value of open source. Imagine having to create a new programming language in nights and weekends in hardware. Now ask yourself why you would want to ask others to do something that hobbyists wouldn't do, to put up with a less malleable medium. The best ideas in computing have come from hobbyists. I want a more inclusive future, not less.

  • Python is Python only because all the hard stuff has native libraries that work with existing processors. If you plan to migrate those to pure Python, that gives up much of the benefit of using an existing language with a large install base. Might as well use a better language.

  • The term "von neumann bottleneck" is fundamentally about improving performance. Improving performance isn't in my top 10 major problems facing the future of software.

Orion Reed 2020-08-18 11:38:43

Ray Imber

"My ideal future of hardware will come from better symbiosis of computation and hardware. Not domination of one over the other."

I realise how what I said may have come across, and want to clarify: I totally agree! I say computation in reference to the abstract notion of doing things with computers, whether through hardware or code. It was actually Alan Kay who inspired me to give that response, though I forget from where.

Another way to put it might be to say that both hardware and software are things we invent to serve some computing goal, and we only sometimes know how they’ll help.

Will Crichton 2020-08-18 17:13:10

There’s a group at Stanford called the “Agile Hardware Group” (https://aha.stanford.edu/). I’m not involved, but my advisor + many people in my group are.

My 10k ft perspective: hardware tools suck. Like, orders of magnitude worse than any software stack.

  • Production tools for hardware are literally Tcl/Perl scripts that generate strings of Verilog code and smash them together.
  • Place&route (lowering a design into wires on a board) takes potentially hours to run. No notion of incremental compilation, small change -> hours to run.
  • Verilog is a trash fire. It’s a programming language designed by hardware engineers, so it has insane, poorly specified semantics.
  • Everything is closed source!! From circuit pieces to compilers, every tool is considered valuable IP. It's 1970s era computing, but today.

These issues need to be fixed for hardware to really take off beyond its current state.

Ope 2020-08-18 18:42:24

The maker movement was poised to make hardware more accessible but seems to have fizzled out. I used to be so involved with it. Wonder what 2.0 would look like.

Jack Rusher 2020-08-18 20:09:11

On the open source side, I remain cautiously interested in https://riscv.org

Doug Moen 2020-08-18 20:21:48

There is also the http://Libre-SOC.org project, which is Open Hardware as well. They are building a system-on-a-chip that contains an integrated CPU, GPU, and VPU, inspired by the CDC 6600 architecture. I gather that their architecture is more efficient than RISC-V, and that programming the GPU is much simpler than with a conventional GPU due to close integration with the CPU, but I don't follow the project closely.

Kartik Agaram 2020-08-18 20:57:42

Yeah, very bullish on open-source hardware. I'd like for it to get commoditized and lose leverage.

I'm also interested in attempts to route around pre-firmware backdoors: https://puri.sm/learn/avoiding-intel-amt; https://openpowerfoundation.org (latter via https://www.talospace.com/2019/11/talos-ii-and-talos-ii-lite-officially.html)

Orion Reed 2020-08-17 14:40:56

BOOKS! I’m running dangerously low on books and would love some FoC related recommendations. I feel I barely know what’s out there.

Edit: am now out of books, please save me.

Christopher Galtenberg 2020-08-17 15:24:09

My suggestion is off-topic for FoC, but for FoB (future-of-books)... I highly recommend Reality Hunger by David Shields. Read that and see what it does to your brain. And then I think that new brain will have some fresh perspectives on FoC.

Reality Hunger is a thought-experiment about employing collage for new types of composition. I think it's relevant to this group because so many FoC meta-patterns correlate to collage - placing in free-space, implicit connection, higher-order effects, ease of creation.

Chris Knott 2020-08-17 15:30:51

Not really FoC but I enjoyed The Idea Factory which is about Bell Labs

Ivan Reese 2020-08-17 16:00:57

Can't ever read too much Borges.

Speaking of Borges — if you want a collection of FoC-relevant short-form pieces (fiction and non), this textbook can be had for like $10 used on Amazon — http://www.newmediareader.com. Worth the price just for the cute ways they make the book interlinked, in an attempt to treat trad. media like hypermedia.

Orion Reed 2020-08-17 16:18:50

Ivan Reese ordered the new media reader, looks like treasure! (I also have Computer Lib/Dream Machines and it feels like I’m collecting historical artefacts, I love it!)

I’m completely unfamiliar with Borges, any recommendations to start off?

Ivan Reese 2020-08-17 16:28:17

Garden of Forking Paths is included in the NMR. I'd also recommend The Library of Babel (another short story). You may want to just grab the Ficciones collection, or the Labyrinths collection (they have a lot of overlap, mind)

Eddy Parkinson 2020-08-18 01:04:36

Paul Graham's essays http://www.paulgraham.com/articles.html

A few on programming languages, lots on innovation.

Rob Fitzpatrick - The Mom Test - I have posted about this here before. A system for measuring the impact of an innovation.

https://www.youtube.com/watch?v=0LwbFZkyRKk

High Output Management - about management, but it gets recommended by several IT founders (Google, Dropbox...). Explains how managers can add value - covers about 10 ways a manager can add value, all with examples. About 6 hours, as 3 videos here: https://www.youtube.com/channel/UCjPe7ShLW8Mau1qXt5ledTw?feature=emb_ch_name_ex

The Innovator's Dilemma - Clayton Christensen - an evidence-based book describing innovations that fail when you try to implement them in a large business. He shows that some innovations work in a new business rather than an established one, and explains how and why with examples.

Nick Smith 2020-08-18 03:22:44

A book on FoC feels like a contradiction to me. We can't write a book about the future until we have a design for it, and once we do, we'd probably choose to build it instead!

What does that leave? Just history and criticisms of the discipline (both backwards-looking), plus science fiction? (You may find value in those, of course.)

Nick Smith 2020-08-18 03:26:25

"Inventing on Principle", for example, is a great mix of criticism and science fiction.

Ivan Reese 2020-08-18 05:26:50

To the contrary — I think most of what we're doing here in this community, when it comes to looking forward (so ignoring all our exploration of history), is generating or curating speculative fictions about the future of computing, and then collectively working out how to reify some parts of those fictions.

(I prefer the term speculative fiction to science fiction, since not all aspects of FoC fall under the umbrella of science, but that's just semantics).

So it's not that a book on FoC would be a contradiction — a book on FoC would be indistinguishable from the prime purpose of the community.

Chris Rabl 2020-08-19 03:26:29

Computer Lib and Notes on the Synthesis of Form are what I would consider essential reading in this area. Tools for Thought by Howard Rheingold is an excellent chronology, spanning many centuries (basically from Charles Babbage and Ada Lovelace to the end of Xerox PARC). Lately I've been reading through Philosophical Essays of Leibniz, mostly trying to understand the concept of a characteristica universalis that he introduced in one of his many papers. This has a strong relation to the concept of a "pattern language", which is what currently captivates me. Here is my list of FoC-adjacent books I've aggregated over the last year or so: https://www.goodreads.com/review/list/100125761-chris-rabl?shelf=future-of-coding&utf8=✓

Orion Reed 2020-08-19 03:56:19

@Chris Rabl ordered the Leibniz essays! I’ve read the others and love them all, seems we have overlapping interests.

Your linked collection is awesome and there’s quite a few there I’m going to have to read soon.

Nick Smith 2020-08-18 03:51:14

I'm currently searching for the right terms to use for some proposed programming constructs, and "class" is precisely the word I want to use for a specific construct: it makes the most sense, in terms of its non-software meaning.

Unfortunately, the term "class" has been absolutely butchered by OOP. Do I dare reappropriate it? I'm not sure I have a good alternative. I need a term that can mean both a label for things (e.g. "X has class Y"), but also the collection/aggregation of things having that label (e.g. "class Y has 7 items").

Christopher Galtenberg 2020-08-18 04:07:43

Everything is subject to disruption - actually feels overdue for someone to take a new swing at class - do it

Nick Smith 2020-08-18 04:29:01

I did think along those lines, but imagine the search engine clashes if I ever was to release something popular, eek. People would be pulling up tons of bad resources.

Nick Smith 2020-08-18 04:30:16

And of course, everyone who first learns one definition and then has to learn another would have to suffer through that confusion.

Nick Smith 2020-08-18 04:31:36

"Group" is another alternative that I'm evaluating

Ivan Reese 2020-08-18 05:23:28

How about.... "category"

/ smoke bomb /

Nick Smith 2020-08-18 05:24:45

My answer: too many syllables 😛. The aesthetics of a term is important to me.

Nick Smith 2020-08-18 05:25:19

People would be saying/reading/writing it a lot!

Ivan Reese 2020-08-18 05:30:55

How about... "type"

/ superman dive /

Okay, jokes aside — some people prefer to reuse terms ("tag", for instance, is used in a billion and one different contexts). Other people prefer to invent terms ("hypertext"), or pull terms out of obscurity (looking at you, "complect"). Each person has their own taste. There's no harm in any of these approaches as long as the context is adequately defined.

Nick Smith 2020-08-18 05:34:32

I definitely fall into the first "class" 🙂. I want to tap into people's existing intuitions. I had a phase where I invented new terms, but soon I was confusing even myself about the semantics of the underlying constructs!

Nick Smith 2020-08-18 05:36:19

Using existing terms with existing connotations gives me a clear red flag when the semantics start drifting (a.k.a. I'm getting confused). I'd actually strongly recommend other designers follow a similar naming scheme for that reason. Until the dust settles, at the very least!

Nick Smith 2020-08-18 05:42:20

Roam Research has actually been crucial for helping me re-terminologize. I make every usage of a term a link, and as the semantics change, I can perform a simple rename operation and then review all the sites where I discussed the construct!

Chris Maughan 2020-08-18 08:06:36

Blob? Block? Set? The problem is all the preconceived ideas behind these words. Perhaps make up a word, and force users to think about what it means?

Gort, Bur, Pag 😉

Nick Smith 2020-08-18 08:37:32

Oh god no 😧 imagine the learning curve

Nick Smith 2020-08-18 08:40:00

As I mentioned in an earlier post (and got pushback for), I think we want to bring PLs as close to human intuition as possible. To me that entails finding very carefully crafted words that map back to widely-understood concepts.

Nick Smith 2020-08-18 08:40:24

Even if there’s a slight lossiness to the mapping

Jared Windover 2020-08-18 13:33:25

I’ve been reading “Women, Fire and Dangerous Things,” which is about (amongst other things) meaning and categories. I think it’s tempting to think that class is precisely the semantics you want, but if the point is for existing human intuition to apply as directly as possible, there are meanings associated with class that are likely unrelated to your use case (and which could cause confusion). A class of students, for example, which was likely motivated by class as category, but has taken on significant other components. Or class as social standing, which I suspect followed a similar path. I think group has fewer associations and might be better for that reason.

Chris Maughan 2020-08-18 16:29:52

Regarding learning curve; don't all new languages have a learning curve? Does using an existing concept get in the way of understanding something new (especially if the new thing isn't quite the same as the old) ? Perhaps a new, unique, word would help the concept stick in the mind more easily....

That said, I was only half serious about making up names 😉

Michael Coblenz 2020-08-18 19:13:06

“cluster”?

Nick Smith 2020-08-18 23:24:51

@Jared Windover It was actually my intent to connote both "class of animal" and "class of students": in my current PL design a category is indistinguishable from a collection ("classroom").

The social status connotation is not intended, but given the context is coding, I doubt people would start thinking their source code is describing wealth and power inequality. The context of use should provide clarity.

But regardless, I'll probably prefer "group" for now, to avoid OOP at the very least.

Nick Smith 2020-08-18 23:49:50

Chris Maughan An existing concept gets in the way of something new when you choose a poor word for it. For example, "objects" in traditional OOP are nothing like our everyday understanding.

..."An object is instantiated from a class (template), which is its inherent and sole category. You can't re-categorize/re-purpose an existing object. Just like in the real world!"

..."An object has public and private members, because that makes them more 'secure' or something. Just like real life!"

..."An object has a fixed set of methods (behaviours) inherent to them. Systems don't have behaviour as a whole; behaviour belongs to individual entities, and they are all pre-defined before the system is turned on. Wow, reminds me of reality! Animals and pencils are both like this."

..."Objects change when other objects call their methods, and the world freezes whilst they change. Every object has total authority on whether its state changes; no external force has the power to change its state. Just like the real world!"

And so on. That's when existing intuitions aren't helpful.

Nick Smith 2020-08-18 23:54:29

Traditional objects are just a big knot of everything some computer scientists came up with after reading about biology whilst on an acid trip. No single term can intuitively describe that knot. The solution is to avoid knotting together half-baked metaphors in the first place.

Nick Smith 2020-08-18 23:55:07

(OOP rant over)

Chris Maughan 2020-08-19 05:21:56

I think you might be making my argument for me though 😉 If an existing concept is getting in the way, and none of our existing concepts accurately describe what is happening, time for a new one.....

Nick Smith 2020-08-19 06:20:50

Oh you mean "an existing PL concept" rather than "an existing real-world concept"? In that case, sure, having a pre-existing PL concept that has already claimed the term for itself (and poisoned it with a complicated meaning) definitely does pose a problem.

William Taysom 2020-08-20 12:46:59

Certainly, my inner logician understands the label sense of "X has class Y" as a predicate and the aggregation sense of "class Y has 7 items" as a set. Of course, the logician starts using the words type and class when predicate/set paradoxes come up.
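
(In symbols, my reading of the two senses: the label is the predicate $Y(X)$, while the aggregation is the set $\{x \mid Y(x)\}$, so "class Y has 7 items" says $|\{x \mid Y(x)\}| = 7$.)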

William Taysom 2020-08-20 13:52:35

Really tag is good: "X has tag Y" or "X is tagged as a Y" and also "7 items are tagged Y" or "the tag Y has 7 items."

🕰️ 2020-07-27 07:37:24

...

Tom Lieber 2020-08-18 20:58:51

Data in Wolfram Language is tagged in various ways that tell you where it came from and what it represents. For example, if you download one of their stock datasets and select data from it, the data is going to present itself as a table even after operations that manipulate it, unless you do something that changes its format, such as plotting it, or using https://reference.wolfram.com/language/tutorial/TextualInputAndOutput.html#12368. I'm not an expert on it, unfortunately, but I thought it'd be a useful reference if you're looking at how other languages do it.

🕰️ 2020-07-27 14:35:44

...

Tom Lieber 2020-08-18 21:00:29

Its input window is still pretty small, but if the text you want to feed in is also pretty small, then it ought to do a fair job at the tasks you mention. Usual caveats here, that you'll never really know if the answer is right or wrong unless you check its work. :P

Nick Smith 2020-08-19 04:28:15

I'm slowly convincing myself that the future of programming includes verbalizability (and thus, natural language).

The ubiquity and utility of natural language is well known: almost all of the information we read, write/type, speak and hear every day is communicated via a natural language, not a specially-crafted one. Natural language is our primary means of understanding the world and interacting with other humans, especially in the absence of supplementary tools such as pen-and-paper (e.g. in face-to-face conversation). When was the last time you described a cause-and-effect phenomenon (a story, event, or task instructions) to someone without extensive use of natural language? At best, we use other artifacts (diagrams, formal models) as aids.

Despite the ubiquity and utility of natural language, if one person tries to "speak" a Java/C/JS/Rust program to another person, they have to go through an extremely complicated and lossy translation process from code to words. Based on some of my recent design work (not shared yet), I'm beginning to believe we could actually design programming languages whose "source code" consists solely of terse natural language sentences/structures with very specific syntax restrictions (to deny multiple interpretations / ambiguities). The language would not be accompanied by any supplementary symbols (!@#$%^&]/->) with domain-specific meaning, since those would inhibit verbalization. Writing code in this language still requires careful deliberation (to develop the logic), but reading and discussing code with full verbalization becomes trivial: no translation is required at all!

Yes, I'm not the first person to think about putting natural language in PLs. Someone might bring up COBOL, HyperTalk, SQL, or similar. I'm not saying verbalizability is "the solution" to programming alone, but along with a well-crafted semantics (information model etc.), it could lead to something extraordinary.

If anyone wants to think further about this, Felienne Hermans has a relevant talk: https://youtu.be/CgR5mSAGxtA?t=2663 (video is timestamped to that bit). Quote: "how does code sound?"

I searched this Slack's http://history.futureofcoding.org/, and I was surprised to discover the word "verbalization" has never been used.

Kartik Agaram 2020-08-19 05:33:50

[July 2nd, 2020 3:29 PM] ogunks900: It's also the same thing with stuff like operators in Haskell - how do you pronounce this '@&£' operator (I made it up btw)? Sure it is brief, but it's three characters in my head and doesn't have a real name. (I am convinced that with Haskell it should be possible to give operators aliases; funny thing is, '>>=' is 'bind'.) Being able to pronounce things is vital for chunking I think. Again, wish we had a more data driven approach to these questions.

Felix Kohlgrüber 2020-08-19 06:07:14

Sounds interesting! I'm looking forward to learning more.

Chris Granger 2020-08-19 06:08:36

Our latest work on a front-end revolves pretty much entirely around this idea. 🙂 The big shift for us came from moving our focus away from “simple” to “simple to understand.” The results have been encouraging so far. Here’s the obligatory “incrementer button”

an incrementer is a button with a count of 0

    the text is bound to its count



when an incrementer is clicked

    increment the count

None of that is builtin, e.g. here’s how increment is defined:

to increment a number

    set the number to the number + 1

And here’s a button:

a button is an element that has a text string

    its tag is "button"

    always set "textContent" of the button to its text

Andrew F 2020-08-19 06:09:04

Verbalizability (I'm going to go with pronounceability) is an entirely different prospect from natural language. Pronounceability is definitely a great thing, for accessibility if nothing else. I still can't imagine a system of "terse natural language sentences" that isn't hostile to at least one of precision or concision. I'd love to be convinced otherwise. Can you give a napkin-example of what you have in mind?

For pronounceability I would still follow the lead of mathematics as pronounced, e.g. "f of x equals forall y in d g of x and y". Whatever approach you take needs to have enough expressive power to balance out the bandwidth limit of at least speech, if not word-based notation (I like the idea of a conventional-looking system with operators that happen to all have reasonable pronunciations, which would keep the written syntax relatively compact).
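
(In conventional notation, that spoken sentence transcribes to something like $f(x) = \forall y \in d.\, g(x, y)$ - my reading, assuming $d$ is a set and $g$ a predicate.)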

OTOH: if you can just avoid the trap of phrases that look like multiple words but parse like a single token (in that they always have to be together in the same order), if you just let words be independently composable, you'll be head and shoulders above COBOL et al.

Nick Smith 2020-08-19 06:09:13

Kartik Agaram ah, good work! I searched for “verbal-” and “vocal-“ but not “pronounce”.

Nick Smith 2020-08-19 06:40:15

@Andrew F The only reason I'd say "verbalize" rather than "pronounce" is that the latter usually refers to individual words, whereas the former can refer to whole systems. I want readers to be able to verbalize an entire system, as if they were hearing a (detailed, accurate) description from a friend.

Regarding "following the lead of mathematics", my language is going to be extraordinarily light on functions (if I can get away with it), and thus on mathematical notation. Code re-use occurs through via interrelated rules that observe and produce abstracted data through pattern-matching. These act like polymorphic functions, so for most computations there is nothing to "call" and no need for mathematical primitives, no "f(x)". We can still retain that classical syntax for mathematical (numeric) expressions, though.

Regarding "multiple words that parse like a token", that shouldn't be a problem, because programs will be built through structured editing (AST etc) with appropriate autocomplete and syntax highlighting.

I'll see if I can come up with a napkin example that I can actually defend! I'm still working on the semantics of my system, and I'm worried that an example wouldn't make sense to anyone else yet.

Nick Smith 2020-08-19 06:45:57

Chris Granger Yeah I think that a "simple model" and an "easy to read language" (in terms of Rich Hickey's definitions of "simple" and "easy") are both crucial. I've also been focusing too narrow-mindedly on the former.

Your code example demonstrates not just a language, but obviously an information model and a time model as well. I like the readability of the language, but now you've got me curious about the underlying models. 🤔

Nick Smith 2020-08-19 07:00:32

Chris Granger Also, it looks like your prototype is building on HTML? The thought scares me tbh. It looks like it could warp the design of your information and time models. Your language semantics become tied to HTML/CSS semantics.

Konrad Hinsen 2020-08-19 07:15:05

It's interesting to compare operator-heavy languages like Haskell with math notation. The latter has always been a shorthand notation for a highly codified jargon. It is pronounceable because mathematicians have communicated orally all the time. Occasional excursions into unpronounceable notations have been made, but they never made it into the mainstream. This mechanism hasn't worked for digital notations because people don't communicate about them orally.

A cultural practice of the Lisp community (in the widest sense, including Scheme etc.) that I have always appreciated is to value pronounceability of names. Which may seem surprising to all those who never take a closer look at Lisp because of all those parentheses. For me, Lisps are the PLs closest to natural language (but I have never used Cobol), and that's their strong point.

Ope 2020-08-19 11:19:39

To buttress your point Konrad Hinsen: I miss being able to name variables 'open?'. This is an interesting example, actually, because when said out loud it is the same word as 'open' but with a different tone - or maybe 'open?' should be pronounced 'is open'.

Is that a strike against using variable names like this 🤔.

Konrad Hinsen 2020-08-19 12:39:17

Ope Indeed, the question mark was one of the main innovations coming from Scheme. Once you have seen it, you can't help but consider the -p prefix from earlier Lisp days incredibly clumsy. I read the question mark with a different tone (to myself, I hardly speak Lisp to others), and that sounds very natural to me.

Stefan Lesser 2020-08-19 14:25:35

When trying to use natural language as a basis for programming, I suppose a lot of effort needs to go into dealing with ambiguity.

I'm all for this, because I think problems with complexity and with understandability come from the level of precision that is required to do programming today, which caters to logic and math types, and is far beyond what non-programmers are used to from their daily lives.

I wonder, Chris Granger, can you talk more about how you explicitly deal with ambiguities in that prototype, or is that not a concern at all?

Alex Wein 2020-08-19 15:22:06

There was a good useR keynote last month on some of this by Amelia McNamara called "Speaking R" (https://www.youtube.com/watch?v=ckW9sSdIVAc&feature=youtu.be&t=676, https://www.amelia.mn/SpeakingR/#1).

Chris Granger 2020-08-19 16:41:17

Nick Smith Nothing is builtin, so drawing to HTML in this instance has no implications on the semantics of the language itself. You could just as easily replace the definition of element with something that draws to GL. This is all you need for the def of element:

an element is a thing with a tag "div"



to create an element

    dom.elem(the element, its tag)

It’s just an FFI call. The language was designed to be able to go all the way down to bare metal if desired.

The full semantics of the model is probably best left for another discussion 🙂

Chris Granger 2020-08-19 16:51:58

Stefan Lesser In previous work on sophisticated natural language querying, we spent a bunch of time trying to understand what ambiguity really is and where it comes up. In the end we realized that as long as you're not writing full prose (e.g. limiting yourself to individual sentences rather than full paragraphs), the only form of ambiguity you really have to deal with is the edges between nodes in the sentence. For example, which object's count does "the count" refer to? There are lots of simple heuristics we employ as humans to resolve that, and in the end the only thing you really need to provide is a way to correct the edge if it's wrong. In past projects we did that by exposing a formal representation of these linkages as a really simple nested list :)

In this particular case, we've designed the language such that ambiguities are pretty rare, but certainly not impossible. The editor we're building alongside this has tools for helping you understand and fix ambiguity if it comes up. We also leverage things like autocomplete to guide you towards unambiguous formulations to begin with. Reducing everything down to simple declarative statements and single line actions gives us a ton of leverage to make both the meaning of things clear and keep the editing experience really natural.

Chris Granger 2020-08-19 16:54:21

Where things get really interesting is in the mixture of this way of talking about nouns and verbs in the world coupled with an editor that helps you understand the whole system concretely. I can’t wait to show you guys what that’s going to let us do. 🙂

Chris Granger 2020-08-19 17:03:27

It’s also worth pointing out that despite this being an extremely high level language, that doesn’t have to come at the cost of performance. The backend was able to match handwritten and carefully tuned rust in some of our experiments. The only reason the rust ended up being faster was because I dropped down to SIMD intrinsics and we’re generating JS right now, which doesn’t have access to SIMD.

Jimmy Miller 2020-08-19 19:16:15

Chris Granger I love hearing so much about where things are going. Definitely can't wait till you can share more 🙂

I am interested to know, who is we? Is this work you are doing in your spare time or at a company?

Chris Granger 2020-08-19 19:25:37

Josh (from Eve) and I left our jobs and are doing this off of savings at the moment. Our goal is to try to put something impactful enough together that we can create a sustainable small business around it. 🙂

Lucian Ursu 2020-08-20 09:03:53

do you know about https://www.storyscript.com/

I discovered it in this slack, but I haven’t played with it

William Taysom 2020-08-20 13:56:57

Konrad Hinsen your comment "operator-heavy languages like Haskell with math notation" makes me ponder whether scenarios where operators feel best might be those where a diagram would do better. 🤔

Nick Smith 2020-08-21 03:32:10

@Lucian Ursu Oh, I did see Storyscript at one point! I should probably look at it more closely.

🕰️ 2020-07-02 21:53:29

...

Ope 2020-08-19 10:17:35

Nick Smith thanks for this link! https://youtu.be/CgR5mSAGxtA . Posting here cos it’s super relevant to this thread

Nick Smith 2020-08-19 10:19:28

Ope No worries. I probably wouldn't pollute the main channel with this link though (unless you want to post it to #linking-together). Perhaps re-post it just in this thread?

Ope 2020-08-19 10:21:26

Don’t think there is any way to undo the sending to the channel. I wanted it called out since whoever was on this thread would definitely be interested in this too.

Nick Smith 2020-08-19 10:22:39

Delete and re-post should do it.

It's definitely right for this thread, but not for the main #thinking-together feed. (People on the thread get a notification, so will still see it).

Ope 2020-08-19 10:25:52

For next time? I don’t think it’s that big a deal.

Nick Smith 2020-08-19 10:26:42

Sure.

Ivan Reese 2020-08-19 15:41:47

Removed :)

🕰️ 2020-06-22 16:45:48

...

Yoz Grahame 2020-08-19 21:11:15

I contributed to the Xiki Kickstarter, even met up with Craig to try some pairing on it (though we didn’t get that far), and then watched the mailing list traffic as the project gradually lost support. (The most recent thread was two years ago, when his pitch for a second Kickstarter was shot down by the backers of the first one: https://groups.google.com/forum/#!topic/xiki/Dvzo14Lhoyg )

It’s still a lovely idea, buried by its creator sacrificing near-term assistance in pursuit of a long-term dream that will probably not be reached. It’s yet another lesson in the difference between an Open Source license and an actually open project. If he’d properly shared his work and made it easy to contribute, Xiki would probably be usable today.

Joe Nash 2020-08-20 12:09:21

So Hacktoberfest is coming up (https://hacktoberfest.digitalocean.com/), which always draws a lot of new contributors, and is a great opportunity to help people get started in open source and discover some new projects. Thinking it’s probably a good opportunity for this community to attract and support new participants interested in these topics, but also maybe in championing each other’s projects and driving some contributions. Does anyone have Hacktoberfest plans already?

Daniel Garcia 2020-08-21 23:15:13

I know it might be heresy, but I'm curious why people like the interaction of scrubbing numbers.

IMO it's hardly discoverable, it lacks reference values to know how far to drag and feels like poor UX.

What others think about it?

The first time I saw it was in Bret Victor's Learnable Programming. It would also be great to know more about its history, and whether anybody knows of earlier uses of it.

http://worrydream.com/LearnableProgramming/Movies/Vocab12.mp4

Ivan Reese 2020-08-21 23:20:41

My initial angle on this — they're a bit like a 1D, text-code version of the https://futureofcoding.slack.com/archives/C5U3SEW6A/p1597389554026100 style of 2D GUI input (see also the https://en.wikipedia.org/wiki/Kaoss_Pad). I don't much care for the inline scrubbing numbers a la Bret's http://worrydream.com/Tangle/, since they feel like the tiniest step away from text code toward a richer graphical/visual coding environment. I'd prefer that we just take the whole entire step and start building rich scrubbers in a truly graphical/visual context.

All that said — basic scrubbers are nice if the medium you're working in is purely text. That's what's nice about Tangle. It's not for programming, it's for prose.

Tim Babb 2020-08-21 23:35:18

Number-scrubbers are pretty common in visual effects software. You might have a text expression that drives something visual, and dialing in something to taste is terrible UX if you have to type out numbers.

Usually the dragging "scale" is adjusted for the magnitude of the number that you're scrubbing. Some apps also let you manually choose the sensitivity with a context menu.
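
A toy sketch of that magnitude-scaled sensitivity (illustrative only, not any particular app's exact rule):

    import math

    def scrub(value, drag_pixels):
        # Step size tracks the magnitude of the current value: scrubbing
        # 3.0 moves in ~0.01 steps, scrubbing 3000 moves in ~10 steps.
        exponent = math.floor(math.log10(abs(value))) if value != 0 else 0
        return value + drag_pixels * 10 ** (exponent - 2)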

Doug Moen 2020-08-21 23:35:25

The domain where I want to use it is writing programs that generate graphics. What makes this interaction awesome is that you get continuous feedback in the graphics output area while you are scrubbing the value. So much more productive than: type in a number, render, type in another number, render.

Doug Moen 2020-08-21 23:39:01

This UI isn't restricted to 1D. In the other thread, I referenced the https://github.com/patriciogonzalezvivo/glslEditor project on github, which supports colour scrubbers and vector scrubbers.

yoshiki 2020-08-22 01:20:31

Seconding their applicability in graphics stuff. Also note how Bret uses it to tune the jump factor in the demo game in "Inventing on Principle". Scrubbers are great for situations like that, when you're trying to find "magic numbers" - parameters where you don't care about the exact value as much as how it affects your simulation in relation to others.

yoshiki 2020-08-22 01:30:29

re: discoverability, this can be a valid critique of some systems. After Effects, however, signals scrubbability by giving the numbers an underline + blue text to signal interactivity, and when you hover over them your cursor turns into that horizontal scrubbing icon <-> (will edit later with the correct name).

Ivan Reese 2020-08-22 04:18:44

Further on discoverability — there are a lot of tools (3d programs, for instance) where all number inputs are scrubbable, and it's clear from the styling whenever a field is just an output/measurement and not an input.

Other tools will put a vertical up/down control next to the field, and (like Yoshiki said) the cursor changes to give you a hint that it can be scrubbed.

On the other hand: number-type inputs in some web browsers are automatically scrubbable using the scroll wheel! So it's regrettably easy to accidentally scrub them when trying to scroll through the page. Argh!

Ivan Reese 2020-08-22 04:23:56

Reference values is another interesting facet that Daniel raised. Ignoring traditional sliders or input->output graph views (the Curves adjustment in Photoshop, say), I can't think of any text-based scrubbers that do a good job hinting at ranges or reference values.

I guess one way would be to express a value as a percentage.

Going further — how do you hint that the value has some degree of nonlinearity to its effect, just using text?

yoshiki 2020-08-22 04:32:17

Yeah, there are a lot of cases I can think of where reference range values are part of the tacit knowledge you have when approaching the system, for example https://medium.com/@jxnblk/mathematical-web-typography-d69186780a41. So I think this limitation of scrubbable numbers can't be discussed absent specific contexts like that. I can also imagine situations where they aren't appropriate (and also cases where people get overly enthusiastic in using them in the wrong context - on reflection, I've done this before!!)

Chris Maughan 2020-08-22 07:38:54

As some of you know, I've worked a bit on this stuff and plan to do more. I'm hoping to give this feature some love over the next couple of weeks; mainly because recent discussions here have inspired me to do better.

This is what I did the last time I looked at it; auto discover the variable and add a slider in a space somewhere around it.

You can imagine a color swatch or some other style of widget to make it obvious; and allowing the user to add a comment to help identify the UI for ambiguous cases.

float val; // widget-slider

I agree that plain numbers aren't as discoverable, but the way they are drawn is different in the demo posted by Dominik Jančík - with their black backgrounds. I think you only have to 'discover' them once. And a pop-up indicator with limits, etc. would be easy enough to do.

That said, I think I still prefer adding UI widgets inside the document, giving it more of the feel of a jupyter doc. Indicating non-linearity or limits is much easier if you aren't restricted to text.

Another idea I had was to have the widgets 'cover' the text and fade back to editable text based on context, but I think this is probably too confusing.

Mariano Guerra 2020-08-22 10:17:34

I always found the discoverability hard and for the stuff I do the problem is that if a thing can be scrubbed, then it can't be dragged 😕

Dominik Jančík 2020-08-22 11:50:55

Mariano Guerra if you have ways to toggle the scrubbing then standard text interactions (selection, dragging) are still possible.

Here's two possibilities:

Toggling (Ctrl+Shift): https://codepen.io/domjancik/pen/XWdjrQv

Hold to enable (Ctrl): https://codepen.io/domjancik/pen/VwamOyJ

Mariano Guerra 2020-08-22 12:14:58

yes, there are ways, but they become less discoverable even 🙂

Mariano Guerra 2020-08-22 13:23:09

yes, that's the way I'm solving the discoverability problems in my app, tooltip (almost) all the things, but instead of showing them as tooltips I show them always in the same place with some extra metadata

Mariano Guerra 2020-08-22 13:34:40

Dominik Jančík you linked twice to the same link, maybe you copied the wrong one? 🙂

Daniel Garcia 2020-08-22 16:07:03

"Scrubbers are great for situations like that when you're trying to find 'magic numbers'"

To me the real solution would be to fix the problem of magic numbers; I know it's a hard thing to do. But scrubbers just feel like a bandaid over the real problem.

Maikel van de Lisdonk 2020-08-22 16:34:20

The solution that I want is more like a real knob/slider which you can turn (or slide). A knob can be endless if that fits the situation, so that you can be precise if needed, or jump to a much higher value by turning the knob faster. These kinds of knobs are available on MIDI controllers, and I've seen them in software (Ableton and VSTs come to mind) as well. It would be cool if the HTML range input supported this behavior more or less out of the box.

🕰️ 2020-08-05 17:26:02

...

Joshua Horowitz 2020-08-22 08:13:41

Extremely shallow feedback, but: the project name & logo are fantastic. 😁

Ray Imber 2020-08-22 19:42:02

Imperative programming and math pedagogy:

TL;DR: converting an algorithm or concept from a research paper into imperative code is often seen as tedious, but I think the process helps with learning and clarifying understanding. This is an underrated advantage of imperative code.

Thoughts? Do you agree or disagree? How can this idea of "reification" of algorithms be extended to other paradigms?

The long version:

I am a big fan of Coding Adventures by Sebastian Lague. He just released a new video: https://www.youtube.com/watch?v=DxfEbulyFcY

He skims over the details (the videos are more edutainment than true in-depth education), but a strategy he seems to follow is to look at the top research papers on a subject, take the key equations, and implement those equations as shaders in Unity with HLSL, i.e. a C-derivative language.

This is a technique I'm very familiar with. I assume many people here are familiar with it as well. It can be tedious work to do this kind of translation, but it's hugely useful. I find that after doing a translation like that, I often have a much stronger understanding of the concept.

The way I truly understood integrals and summations was through implementing them as for loops.
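
For example, here's what I mean by an integral becoming a for loop (a minimal Python sketch):

    import math

    # Riemann sum: the integral of f over [a, b] becomes a loop
    # accumulating f(x) * dx over n small slices.
    def integrate(f, a, b, n=100_000):
        dx = (b - a) / n
        total = 0.0
        for i in range(n):
            total += f(a + i * dx) * dx
        return total

    print(integrate(math.sin, 0, math.pi))  # ~2.0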

I didn't understand monads and combinators until I implemented a parser combinator in an imperative language. In PHP no less. I was young and naive 😛.
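
The same idea in miniature (Python here rather than my old PHP): a parser is just a function from input text to a (result, remaining text) pair, or None on failure, and combinators are ordinary functions that glue parsers together.

    def char(c):
        # Parser matching a single literal character.
        def parse(s):
            return (c, s[1:]) if s.startswith(c) else None
        return parse

    def seq(p, q):
        # Run p, then run q on whatever input p left over.
        def parse(s):
            r1 = p(s)
            if r1 is None:
                return None
            v1, rest = r1
            r2 = q(rest)
            if r2 is None:
                return None
            v2, rest2 = r2
            return ((v1, v2), rest2)
        return parse

    ab = seq(char("a"), char("b"))
    print(ab("abc"))  # (('a', 'b'), 'c')
    print(ab("xbc"))  # None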

Many people in this community are fans of functional programming; there have been quite a few threads about ways to encode algorithms more effectively or "true to their mathematical form:" everything from APL to natural language to category theory.

This is a counterpoint against always looking for the "most efficient way" to encode an algorithm.

There is something about taking something "functional" and reifying it into a series of imperative steps that helps understanding a concept, at least for my personal learning style.

I have very little experience with visual programming, but things like Factorio (mentioned recently) make me think that reification as a learning tool can be just as effective in the visual paradigm. How can this idea of "reification" of algorithms be extended to other paradigms, or taken into account more generally?

Doug Moen 2020-08-22 20:34:23

I'll rewrite your statement slightly: "Implementing an algorithm or concept from a research paper as code that you have written from scratch is a process that helps with learning and clarifying understanding." No need to restrict this to imperative code. One trick I use to understand a difficult research paper is to write out the concepts in my own language (in English, I mean).

Doug Moen 2020-08-22 20:36:28

Of course, when you write code as a way of learning or understanding someone else's ideas, you should code in a language and style that you understand very well and feel most comfortable in. For many people, that will be imperative code. That may be because the first programming language most people learn is imperative. Whether imperative programming is inherently easier to understand than functional programming is a separate issue.

Kartik Agaram 2020-08-22 20:38:26

Yeah, I agree with Doug. While my personal experience agrees with Ray's, I have to conclude after looking around that the phrase "personal learning style" is key.

There's some overlap between this thread and my pet pedagogical approach of learning things bottom up. I found it easier to learn Haskell by ignoring layout rules and infix operators, and explicitly specifying the bounds of each function call.

Ray Imber 2020-08-22 21:13:43

These are all great points. You are right to emphasize "personal learning style". (Maybe I should have emphasized that more.) The more I contemplate, the more I think that the important part is reification; making a concrete "thing".

Chris Maughan 2020-08-23 07:41:39

FWIW, I watched this coding adventure yesterday; I'm a big fan of his and always wonder how he puts together such great videos - he seems to bend Unity to his will for the teaching parts as well as the rendering parts. I have coincidentally implemented the approach from the same paper he used, as part of a game prototype I was playing with (some eye candy enclosed - must revisit this project at some point!)

My approach is probably slightly different though. I'm more of an 'implementer' than a scientist/academic, and I often struggle with hard math. My approach is typically to iterate towards a goal in very small steps without getting bogged down in the detail; and it can take several days to get through the technical challenge. I'm in awe of anyone who can read a paper full of Integrals and translate it to code. That would be highly efficient! I rely on following my nose and looking at sample code far more than the math.

Part of the reason I'm always messing with visualization tools and visual programming is that they are the only chance I have to understand.

"The way I truly understood integrals and summations was through implementing them as for loops." This + 1000, basically 😉

And Monads.... I've read a few articles, but until I actually get to use one in a program I won't understand them.

Perhaps Sebastian is just really smart (I'm sure he is), but what intrigues me is: has he got to that place by building a set of tools and technologies around himself, such that he can break any problem up into pieces he understands? i.e. the fact that he can plot and interact with graphs while building his code gives him a deeper/quicker understanding of each step? Just more validation of Bret Victor's approach, I guess....

Konrad Hinsen 2020-08-23 08:08:24

Quoting Donald Knuth (source: http://www.jstor.org/stable/2318994): "It has often been said that a person does not really understand something until he teaches it to someone else. Actually a person does not really understand something until he can teach it to a computer, i.e. express it as an algorithm. The attempt to formalize things as algorithms leads to a much deeper understanding than if we simply try to comprehend things in the traditional way."

Jack Rusher 2020-08-23 11:30:24

Re: personal style, I find mathematical concepts to be much more naturally expressed using functional programming rather than imperative, but no matter one's preferred style it seems clear to me that programming is a better vehicle for teaching these concepts (https://www.bootstrapworld.org) than the standard mathematical pedagogy. Likewise, I'm in strong agreement with Sussman that traditional mathematical notation is strictly inferior to writing everything down in an unambiguous, executable notation (https://en.wikipedia.org/wiki/Structure_and_Interpretation_of_Classical_Mechanics).

Konrad Hinsen 2020-08-23 19:06:05

I used to agree with Sussman, but I have changed my mind a bit: unambiguous, yes, but not necessarily executable. Much of what you write down in mathematics and mathematically formulated science is specifications - for example, differential equations such as Newton's. You really want to be able to write such specifications without necessarily doing something specific with them.