
šŸ•°ļø 2020-08-26 08:57:17

...

Robbie Gleichman 2020-08-30 22:18:13

I didn't have RSI.

šŸ•°ļø 2020-08-30 18:07:21

...

Robin Allison 2020-08-31 01:20:31

This is such a complicated and cool question and I have so many thoughts I don't even know where to begin. I remember watching your talk about the thing you're making, and I think you mentioned something along these lines being one of the big problems. I was surprised you wanted to be able to prove theorems in this system, because of course a visual environment and proofs have this tension. I'm curious what kind of proofs you would have in this system. In the video you mentioned a theorem in quantum computing(?) but I couldn't find it. As an elementary but nontrivial example, how would you prove the Pythagorean theorem? I think you would have to do this abstractly, but as long as it is "constructive" you can unfold the abstract proof at various stages and apply it to specific vectors to visualize it. Also, if you are doing proofs I imagine these would be "formal proofs", and isn't that a really tough problem? Or do I misunderstand, or is there some way to get around it? E.g. just have a more expressive and dynamic means to write informal proofs. Anyway, I certainly wouldn't shy away from general variables as long as you have a means to move up and down the ladder of abstraction 😉.

William Taysom 2020-08-31 05:21:19

Starting from concrete values, you can approach abstraction by lifting from a single value to many at once. Remember Bret's Ladder of Abstraction. For extra fun, have interactions between the multiple values. For instance, instead of a solution set of values that would work in a given context, have a probability distribution. I was pretty into these propagation networks at one point: https://dspace.mit.edu/handle/1721.1/49525. Don't know if more progress has been made.
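The "lifting" move described above can be sketched in a few lines: a scalar function applied to one concrete value, then lifted to a whole collection of candidates, then to a discrete probability distribution. This is a toy illustration, not any particular propagation-network implementation.

```python
def lift(f, values):
    """Lift a scalar function over many candidate values at once."""
    return [f(v) for v in values]

def lift_dist(f, dist):
    """Lift over a discrete distribution {value: probability}:
    probabilities of inputs mapping to the same output are summed."""
    out = {}
    for v, p in dist.items():
        out[f(v)] = out.get(f(v), 0.0) + p
    return out

square = lambda x: x * x

# One concrete value...
assert square(3) == 9
# ...lifted to a set of values at once:
assert lift(square, [1, 2, 3]) == [1, 4, 9]
# ...and to a distribution over values:
assert lift_dist(square, {-2: 0.5, 2: 0.5}) == {4: 1.0}
```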

Tudor Girba 2020-08-31 07:07:12

In Glamorous Toolkit, examples play a key role. We went so far as to replace classic tests with examples (a test that returns an object). This leads to a nicer way to compose examples, but most importantly, examples offer concrete objects you can program against. As in our environment every object can present itself through custom views, and as these views can also be woven into larger narratives, the examples also offer a nice infrastructure for documentation purposes. Here is a short article about them:

https://medium.com/feenk/an-example-of-example-driven-development-4dea0d995920

hamish todd 2020-09-01 21:12:42

@Robin Allison When you say that it is "obvious" that visual environments and proofs have a tension, I guess (?) you mean that visual proofs are criticized as being not as rigorous as algebraic, because it seems to be more about intuition. This is an interesting philosophical issue and may end up being a problem for me, but I'm willing to bet that it will be sufficiently non-small

The kinds of proofs I am interested in giving are (as you'd expect) fundamentally GA-related: for example, proving that rxr* is a rotation when r is a rotor, r* is its reverse, and x is a vector. Or, expanding a bit, classical mechanics, so proving that ellipses are solutions to the two-body problem. And yes, if I get to it, quantum computing! Subalgebra closure is probably what you were thinking of. It wouldn't be super surprising if the system ends up somewhere in between "strong enough to prove the Pythagorean theorem" and "strong enough to prove QM subalgebra closure"

"How would you prove the pythagorean theorem?" Thanks a lot for asking me that question, it's a great example to start with and I'd not thought about it :D and now I've thought about it and I know how to prove it. It's a combination of defining a "right angled triangle" by the constraints that are on it (which are visual) and then applying the rules of geometric algebra, which are all visual, to that constraint. I can say more but it might just sound weird/spoil the fun when you eventually see it šŸ˜›

Robin Allison 2020-09-02 04:20:51

Hamish Todd By the tension I meant just the problem you brought up in the original post in this thread, and as I was watching your video that was the question that crossed my mind. Can't wait to see what you cook up!

Garth Goldwater 2020-09-04 11:53:09

incidentally, some kind of geometric algebra conference popped up on my subscribed youtube feed this morning: https://www.youtube.com/user/EnkiOrigami

🎥 enki mute

hamish todd 2020-09-04 22:27:07

Yes, it is hoped that that conference will have a significant impact!

Garth Goldwater 2020-09-05 01:23:07

let me know when the "GA for absolute morons" lecture comes out and i will become an enthusiastic proponent!

hamish todd 2020-09-05 18:30:40

It will be a while, I want to make it really good 😅

šŸ•°ļø 2020-08-27 03:37:49

...

Nick Smith 2020-08-31 01:45:22

Robbie Gleichman I think you misunderstand my interest in natural language. I'm interested in using it strictly as a primary programming interface. Few people would argue that we should have deep learning models interpreting all of our code (plaintext) to take action based on educated guesses about what we wanted to say (if you think that, that's a separate discussion). I'm just exploring a syntax based on logic (i.e. explicitly broken into logical units with a well-defined semantics) but expressed in natural language (so that it is human-readable without having studied a course on logic).

šŸ•°ļø 2020-08-29 07:01:21

...

William Taysom 2020-08-31 04:12:23

I mean making change over time more explicit: more directly observable, manipulatable, and constrainable. This can take many forms. Here's an example.

Step 0: Setup

Imagine a fairly conventional imperative system. We have a bunch of boxes (variables) in which we can put values.

Step 1: Observable

We show which values go in which boxes in what order. Often systems let us check state only in the moment, or keep dubious logs about what happened in the past. Forget asking about the future. Imagine lining up the boxes. We might be able to scrub forward and backward through time, or add a timeline showing what values were in each box when.

Step 2: Manipulatable

Good old structured programming is kind of nice on this front. Each assignment statement records how the contents of a box change. Suppose we directly manipulate the boxes in some other way, then we can record a script of the assignments made. Glue scripts together. Good, clean fun.

Step 3: Constrainable

Except we don't abstract cleanly from the step-by-step to composable recipes. With functions/procedures, we keep track of arguments and return values, but we don't keep track of which boxes get examined or updated. We can't easily tell if the ordering of calls matters, and we cannot easily require that things always happen in a certain order.
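The three steps above can be sketched together: boxes whose assignments land on a timeline (observable), recorded as a script (manipulatable), with read/write sets tracked so ordering constraints become checkable (constrainable). A toy sketch, with all names invented:

```python
class Boxes:
    """Imperative 'boxes' whose every assignment is recorded on a timeline,
    so past state is observable and read/write sets are inspectable."""
    def __init__(self):
        self.history = []                 # recorded script: (step, box, value)
        self.state = {}
        self.reads, self.writes = set(), set()

    def put(self, box, value):
        self.writes.add(box)
        self.history.append((len(self.history), box, value))
        self.state[box] = value

    def get(self, box):
        self.reads.add(box)
        return self.state[box]

    def at(self, step):
        """Scrub backward: reconstruct the state as of a given step."""
        s = {}
        for _, box, value in self.history[:step + 1]:
            s[box] = value
        return s

b = Boxes()
b.put("x", 1)
b.put("y", 2)
b.put("x", b.get("y") + 1)
assert b.at(1) == {"x": 1, "y": 2}        # state before the last assignment
assert b.state == {"x": 3, "y": 2}
assert b.reads == {"y"} and b.writes == {"x", "y"}
```

The read/write sets are exactly what conventional procedures throw away; keeping them is what would let a system check that calls happen in a required order.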

Chris Granger 2020-08-31 16:41:45

If you try to create a fully declarative, reactive semantics, I think you eventually run into the bind/commit distinction no matter what you do. Fundamentally, it's a question of how the lifetime of an assertion is controlled. Commit is saying the lifetime of this assertion is unconditional, whereas bind is conditioned on the other information this assertion is derived from. You'll want both, but it's awkward trying to make them play together.
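The lifetime distinction can be illustrated with a toy fact store (this is not Eve's actual semantics, just a minimal sketch of the idea): committed facts persist unconditionally, while bound facts exist only while the facts they are derived from hold.

```python
class Store:
    """Toy fact store contrasting 'commit' and 'bind' lifetimes."""
    def __init__(self):
        self.committed = set()   # commit: persists once asserted
        self.rules = []          # bind: (premise_fact, derived_fact)

    def commit(self, fact):
        self.committed.add(fact)

    def bind(self, premise, derived):
        self.rules.append((premise, derived))

    def facts(self):
        """Bound facts are recomputed; they live only while premises hold."""
        out = set(self.committed)
        changed = True
        while changed:
            changed = False
            for premise, derived in self.rules:
                if premise in out and derived not in out:
                    out.add(derived)
                    changed = True
        return out

s = Store()
s.commit("door-open")
s.bind("door-open", "alarm-on")    # bound: conditioned on door-open
assert s.facts() == {"door-open", "alarm-on"}
s.committed.discard("door-open")   # retract the premise...
assert s.facts() == set()          # ...and the bound fact vanishes with it
```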

Chris Granger 2020-08-31 16:43:29

Not being able to come up with something better is a big part of what convinced me that we needed to find something in the middle of the declarative/imperative spectrum, rather than constantly looking at the ends.

Chris Granger 2020-08-31 16:46:18

programming with just rules or just procedures sucks, but being able to freely mix them both together is pretty magical 🙂

Chris Granger 2020-08-31 16:52:22

our implementation of bind/commit led to a lot of complexity and we thought that we had hidden the implications of the different timelines from people, but as @William Taysom said, it turned out there were cases where trying to get the right sequence of things to happen exposed you to that complexity and it was unequivocally worse than what you would normally do. Eve was much better than conventional languages on some axes, but on the axis of expressing process-like things it was significantly worse.

Chris Granger 2020-08-31 16:54:07

Part of that comes from bind/commit naturally wanting to happen at different "times," the other part came from blocks being islands that weren't obviously tied together in any meaningful way. Discovering the forest was pretty hard to do just by looking at the trees.

Chris Granger 2020-08-31 16:56:08

If you don't mind exposing users to the actual semantics/complexity of time, I would look at Statelog as a better approach to the problem

Chris Granger 2020-08-31 16:58:01

It makes time fully explicit, though I don't know that it really makes it that much better when compared to something that can just express a procedure cleanly

William Taysom 2020-09-01 06:29:00

In case the Statelog link doesn't work http://users.sdsc.edu/~ludaesch/Paper/moc98.pdf.

šŸ•°ļø 2020-07-25 14:08:04

...

Jack Rusher 2020-08-31 06:57:52

@Andreas S. The version of that concept with which I'm familiar is a sort of "horizontal transfer" of knowledge within orgs. Can you elaborate on how it would fit into your vision here?

Andreas S. 2020-09-01 13:32:38

I think it's not "my vision", but I can try. So the problem is that too many things are too complex, and educational institutions as well as organisations (even companies) might be unfit at times to provide a context for the individual to learn what they need. So I think this resource here outlines many good aspects of #p2p-learning: https://wiki.p2pfoundation.net/Category:Education

Andreas S. 2020-09-01 13:36:53

For me it's the simplest possible context in which a knowledge transfer (student-teacher relationship) can occur. It may seem very vague, even unstructured, but I think the concept is just what we need in these times. Many people need to learn about many things on such different levels. But then you also have to acknowledge that attention is finite, and possibly cultural constraints too. This already places tough constraints on the, let's call it, "base layer", because I think a lot of learning has to be local. There is also a lot which can be online, but I'm uncertain whether it's more or less than what can only be done offline/locally.

Andreas S. 2020-09-01 13:42:21

Mariano Guerra hey 👋 I like this particular thread very much. Do we already have a markdown export of it, so I can link to it statically from outside (without a Slack account) via URL? Ivan Reese Where would be the place to discuss how to organize some of our FoC gems as a Zettelkasten in Markdown? I think I would volunteer a bit of time for that.

Mariano Guerra 2020-09-01 13:48:32

since this is a long living thread I would have to update the history dump to make it display newer messages, but here it is: https://marianoguerra.github.io/future-of-coding-weekly/history/?fromDate=2020-07-25&toDate=2020-07-26&channel=general#2020-07-25T14:08:04.036Z

Andreas S. 2020-09-01 13:48:50

Jack Rusher For further context on P2P learning, I'm currently reading "Education in a Time Between Worlds" by Zak Stein, which is related to the GameB movement (Jordan Hall and many others), which is related to John Vervaeke's "Awakening from the Meaning Crisis" / the religion that is not a religion, in which people try to build a culture which has better relationships to meaning and sustainability than our current one. I hope this is not too vague or "woo-woo" for you; there are a lot of scientists working on this, but as you can imagine it's a monumental task and the scientists can only be one layer of it. So I think it's interesting how this plays out in the larger parts of society, and what role "we", the FoC community, play in this infinite game of creating meaning in relationship to society.

Jack Rusher 2020-09-01 15:20:56

@Andreas S. My friend Samim https://twitter.com/samim calls this "open understanding", in the spirit of "open source". I'm in fundamental support of the concept.

Andreas S. 2020-09-01 15:33:07

Yes, it's such a fundamental concept. I mean, really, if you try to re-learn or to make sense bottom-up, climbing up the Maslow hierarchy of needs with all the infinity of information available, it's a humbling experience. But it can also be very personal and human. For example, I find joy in learning new things about food, health and cooking which are unknown to my culture. And yet I enjoy it very much!

šŸ•°ļø 2020-08-28 21:25:49

...

Konrad Hinsen 2020-08-31 07:21:00

Jack Rusher Same for me. My biggest hope for a worthy successor to Emacs is https://gtoolkit.com/. It addresses what is for me the biggest limitation of Emacs: the lack of graphics.

Jack Rusher 2020-08-31 11:57:48

Konrad Hinsen I also really like Glamorous Toolkit. It's always good to see any system that embraces the old Smalltalk/D-Lisp philosophy of interaction and malleability! 🙂

šŸ•°ļø 2020-08-27 13:18:23

...

Duncan Cragg 2020-08-31 12:53:33

(i.e., it's not a "problem" to be "solved", it's what most "normals" think of first)

Shalabh Chaturvedi 2020-08-31 17:55:31

The problem arises when you think of consistency and how the programs change and read state.

Duncan Cragg 2020-08-31 19:24:33

Can you give an example of the problem of consistency under read/write, in an end-user application?

As techies we are all aware of the issues with parallel access to replicated databases, but that's about optimisation for speed. What about an end-user-focused programming environment where all that is hidden?

Andrew F 2020-08-31 20:09:37

Anything involving collaborative editing of the same document, especially if some collaborators are sporadically online. Possibly a cheap answer, but I'd argue it really just emphasizes the need to build around state.

Duncan Cragg 2020-08-31 20:32:12

@Andrew F 😄 you landed on the exact example I was thinking of as one I hoped no-one would pick, which happens to be the one my own solution to end-user state management tends to punt on! 😄

Duncan Cragg 2020-08-31 20:55:37

.. I guess my point is that, for most end-user applications, state doesn't cause issues even when building a distributed system of any sort, but I'm happy to be thrown counter-examples, alongside collaborative editing.

Eddy Parkinson 2020-09-05 13:26:12

Rich is talking about a general solution to two very different problems. For example, Grace Hopper describes how record keeping and projections are two big problems. She describes how computers help solve the problems of record keeping (database) and projections (planning) for the military, how they need very different solutions, and how one problem is very calculation-heavy while the other is very data-heavy. ... Maybe one day computers will be fast enough that we can create a single general solution to both problems, but for now, we tune solutions to fit the problems.

Drewverlee 2020-08-31 20:50:17

To what extent do our natural biases leak into the models (data structures, algorithms) we use? E.g. is it possible that a tree data structure is more appealing to both users and developers because it mirrors a hierarchy (everything has a parent/cause), whereas a graph can have a loop (which came first, the chicken or the egg?) and is considered less intuitive?

It seems a relevant design choice to consider not only the universal truthiness of something but also the cognitive load it takes to use it.

Chris Maughan 2020-08-31 21:01:17

For me it usually comes down to cognitive load. If I can get away with a vector I'll use one, then a map, then a tree, then a graph... I also believe in iterating towards a goal in an agile fashion and not over-engineering a solution. But yeah, I don't lightly use a tree or a graph, because complexity == time

Garth Goldwater 2020-08-31 21:20:01

I suspect that material biases wrt existing libraries matter a lot more: I'd prefer a graph for most of my applications, but it's much easier to write hierarchies. Since relational databases depend on slow joins to emulate graph queries, we don't see a lot of many-linked relationships in apps and pages, which I think limits people's imaginations (part of why Roam is taking off IMO)

Eric Gade 2020-08-31 22:07:01

This is where it would be good to get takes from psychology, anthropology, and sociology. Hierarchies make sense to people for a lot of reasons, and some of these are deeply cultural (think: organization of societies).

On the other hand, people are generally (though we all know some exceptions) good with spaces. Perhaps what in CS is called a "graph" structure is better thought of as a "map" or "the layout of rooms in a giant house" -- in such a case it's very easy for us to understand how you can leave a room but somehow take a path that leads back to that room over and over, etc.

Eric Gade 2020-08-31 22:09:59

Garth Goldwater There are two things about the world of graph structures / databases I still don't understand. One is the failure of OODBs to catch on (something like Gemstone inherently uses graphs), and the other is the whole RDF/Semantic Web community and any datastores they use. You rarely see these things outside of niche or academic contexts, and I don't understand why. Any insight?

Garth Goldwater 2020-09-01 13:28:27

with the semantic web i'd say a combination of economic factors (eg, google's dominance, lack of a business model) and ux factors (never saw a really appealing end-user app for creating or browsing rdf data). unfortunately i'm at the "read the wikipedia five years ago" level of understanding of OODBs so i can't contribute much on that front -- but i'd also note that path dependence is really, really hard to overcome, especially for foundations of applications like databases

Eric Gade 2020-09-01 13:33:36

I see what you mean.

Eric Gade 2020-09-01 13:34:51

One thing I find inspiring about the enthusiasm around tools like Roam is not only the return to the memex-like origins of thinking about computing, but an assumption that we expect more from users of computing systems. It is a sign that we might be able to break free from the current era of "expecting the least"

Shalabh Chaturvedi 2020-09-01 16:16:27

Trees are appealing in some cases because nesting and containment are something we understand metaphorically/cognitively via 'spaces', and they also give a nice coordinate system for 'naming via location' for each node. E.g. earth < solar system < galaxy < alpha quadrant, and second < minute < hour < day.

Graphs are appealing when we think of peers and relationships (the set of buildings in a city). You can have arbitrary relationships but you don't get the canonical path to each node. I don't think one is always more intuitive.

IMO the bigger problem is our biases in 'meta models'. We have the idea of what a "data structure" is; we pick exactly one and get 'locked in' permanently to that one view. We may want multiple parallel views of the information based on what we're trying to do, but none of the usual models or type systems do this well. If you look at programming, a lot of it is 'changing the shape' without adding any new information. E.g. transform a JSON object into a struct; load an array of structs into a dict for faster lookup; extract one field of many structs into an array, etc. There's conflation between data structure and information structure, and we haven't figured out how to separate these nicely.
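The 'changing the shape without adding information' point is easy to make concrete; each step below reshapes the same information for a different access pattern, adding nothing:

```python
import json

# The same information, repeatedly reshaped without gaining any content:
raw = '[{"id": 1, "name": "ada"}, {"id": 2, "name": "grace"}]'

records = json.loads(raw)                # JSON text -> list of dicts
by_id = {r["id"]: r for r in records}    # list -> dict for O(1) lookup
names = [r["name"] for r in records]     # one field extracted into an array

assert by_id[2]["name"] == "grace"
assert names == ["ada", "grace"]
# Every variable above encodes identical information; only the 'shape'
# (and therefore the access pattern) differs.
```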

Eric Gade 2020-09-01 16:18:48

I think that is a good explanation

Eric Gade 2020-09-01 16:20:32

One issue here is that being "computer people" in 2020 we have kind of already poisoned our thinking. I'm wondering how a regular, non-programmer would organize information with only the barest of tools. Would some spatial/relationship based thing come out of it?

Eric Gade 2020-09-01 16:21:15

Also, how do we "hijack" those really deep and ancient human instincts in order to create systems that are more intuitive and full of possibility (rather than, say, exploitative)?

Eric Gade 2020-09-01 16:21:51

The past few years I've leaned hard into the idea that metaphors are the most important aspect of computing for precisely this reason

Eric Gade 2020-09-01 16:23:27

To build on something small you said above, imagine a computing system described entirely in the metaphor of it being a city

Shalabh Chaturvedi 2020-09-01 16:23:34

Yes, I think the appeal of 'graphs' as a universal structure is that in the high-level 'informational space', we think our minds organize it as graphs. This is where RDF etc. come in. I too am surprised we don't have mainstream programming languages that deal directly with RDF-like information structures (only graphs and various views of them); instead we get low-level data structures (arrays, dicts, lists, structs). This is the idea of separating design from optimization (picking a data structure involves both, which implies conflation). I want to be able to design the information structure separately from the implementation and optimization views.
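A minimal sketch of that separation (toy data, not any real RDF tooling): the information lives once as RDF-style triples, and the 'low-level' structures become derived views of it.

```python
# One information structure (RDF-style subject/predicate/object triples),
# several derived views.
triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "lives-in", "paris"),
]

# View 1: adjacency by predicate (a graph view)
knows = {}
for s, p, o in triples:
    if p == "knows":
        knows.setdefault(s, []).append(o)

# View 2: property dict per subject (a record/struct view)
props = {}
for s, p, o in triples:
    props.setdefault(s, {})[p] = o

assert knows == {"alice": ["bob"], "bob": ["carol"]}
assert props["alice"]["lives-in"] == "paris"
```

The triples are the designed information structure; which view to materialize is purely an optimization decision.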

Shalabh Chaturvedi 2020-09-01 16:28:25

Re poisoned our thinking - definitely. We've internalized many invented structures (~ data structures) while there may be fewer cognitive information structures (and still rigorous enough to be formalized, if that's what we want).

Eric Gade 2020-09-01 16:29:06

I'm spitballing the following without real evidence, though I'm imagining research might back it up:

Shalabh Chaturvedi 2020-09-01 16:29:33

Re metaphors, I found Lakoff's "Metaphors We Live By" very interesting. The idea that we understand via 'metaphor'.

Eric Gade 2020-09-01 16:29:46

People are fundamentally good at dealing with complex relationships between things, be that institutions, or even and especially personal relations (families, friends, friends of friends, etc)

Eric Gade 2020-09-01 16:30:06

Or even, like you say, how to get around a city or how parts of the city relate to each other and function.

Eric Gade 2020-09-01 16:30:27

We are also good at dealing with ambiguity around those relations, and working around them

Eric Gade 2020-09-01 16:30:53

The current computing environments are not good with this ambiguity, and that's where some of entry level HCI gets stuck

Eric Gade 2020-09-01 16:31:51

I do not think this is for purely technical reasons, but rather because most developers ("computer people") do not think socio-technically

Eric Gade 2020-09-01 16:32:24

I will have to read that Lakoff book, thanks for the ref

Shalabh Chaturvedi 2020-09-01 16:41:59

Re ambiguity yes it seems the software we build is way too strict.

Shalabh Chaturvedi 2020-09-01 16:48:54

I wonder if there are good examples to study where user mental models with ambiguity can be fluidly represented in the software UI available to them. Most of us just copy what came before: every definition has fixed, strict fields and so on. Ambiguities are hidden away in a 'comments' field.

Eric Gade 2020-09-01 16:52:56

There is a book by the anthropologist Lucy Suchman called "Human-Machine Reconfigurations" that might be good on some of this

Eric Gade 2020-09-01 16:53:15

I have not read it in a decade, and perhaps I should pick it up again (I have learned a lot about computing in the time since)

Andrew 2020-09-01 19:37:20

@Eric Gade

One issue here is that being "computer people" in 2020 we have kind of already poisoned our thinking. I'm wondering how a regular, non-programmer would organize information with only the barest of tools. Would some spatial/relationship based thing come out of it?

If you know anyone in high school or college who isn't studying CS, try looking at their personal notebooks or journals.

I think people often store information in narrative form. The narrative gives many contact points with the same piece of information.

A particularly adept learner might write about what they learned in many ways: how it applies to their life, where they have seen it in the world, the strict technical definitions, and some metaphors that capture the essence of the subject.

We want to look at the same information in as many ways as possible so we can get an intuitive sense for it. Something that we understand deeply can represent itself as a felt-sense in the body, as opposed to an intellectual thought, like a grandmaster chess player analyzing his position on the board, or a tennis player analyzing the trajectory of a ball.

Andrew 2020-09-01 19:38:42

Metaphors are a great way to think of how humans understand things.

Since it seems that most data structures are actually ways of organizing information, rather than understanding information, it can be hard to draw lots of comparisons with how a human might organize information (since humans most often want to organize their information by understanding it, e.g. storing it in their body, as opposed to storing it in a physical location somewhere)

Andrew 2020-09-01 19:43:21

Something about search is really intuitive to me. Scanning a thoughtspace for key terms that might lead to relevant or related information.

Categories or tags also seem intuitive. The equivalent of naming objects like "chair" even though "chair" is actually a category of many specific objects.

Jack Rusher 2020-09-02 08:04:29

In addition to the very good Lakoff recommendation above, I'd add the work that Hofstadter (http://worrydream.com/refs/Hofstadter%20-%20Analogy%20as%20the%20Core%20of%20Cognition.pdf) and Melanie Mitchell (https://melaniemitchell.me) have done around analogies as the basic unit of understanding.

Jack Rusher 2020-09-02 08:22:27

In terms of why trees are ubiquitous and graphs are not, I think there are two things at work here:

  • We have a much easier time visually parsing, internalizing and imagining even very complex trees than heavily linked cyclic graphs. There's an entire sub-discipline of data visualization trying to help people understand the latter kind of data. This is one of the deep challenges of node-and-arrow visual programming systems.
  • Although our internal representations are fuzzy and graph-y, the mechanisms we have to communicate those representations in speech and writing are linear and tree-shaped (sequences of words representing recursive grammatical structures). This makes it very hard to express these kinds of structures using syntax, thus programming languages tied to textual representations are at a huge disadvantage when encoding graphs as literals. This is one of the deepest powers of node-and-arrow programming systems.

/cc Ivan Reese
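The syntax point above is easy to demonstrate: a tree nests directly as a literal, while a cyclic graph cannot nest and forces you into explicit names plus an edge list (a toy illustration with invented names):

```python
# A tree nests directly as a literal: the syntax itself is tree-shaped.
tree = ("root", [("a", [("a1", [])]), ("b", [])])

# A cyclic graph cannot nest; it needs explicit node names plus edges.
nodes = {"a", "b", "c"}
edges = [("a", "b"), ("b", "c"), ("c", "a")]   # the cycle defeats nesting

def children(node):
    return [dst for src, dst in edges if src == node]

assert children("c") == ["a"]   # following the back-edge closes the cycle
```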

Drewverlee 2020-09-02 22:52:06

Jack Rusher

That's brilliant.

William Taysom 2020-09-04 13:41:04

Often a graph has a spanning tree that represents a reasonable way to get around. And in case the tree isn't quite right, you can often split a node so that it appears a few times in the tree.
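Extracting such a spanning tree is a few lines of BFS; edges that would close a cycle are simply dropped (a minimal sketch, with invented example data):

```python
from collections import deque

def spanning_tree(adjacency, root):
    """BFS spanning tree of a graph: a reasonable hierarchy to navigate by.
    Returns a child -> parent map; the root's parent is None."""
    parent = {root: None}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for nxt in adjacency.get(node, []):
            if nxt not in parent:          # drop edges that would close a cycle
                parent[nxt] = node
                queue.append(nxt)
    return parent

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
assert spanning_tree(graph, "a") == {"a": None, "b": "a", "c": "a"}
```

Node splitting, as suggested above, would amount to keeping such a dropped edge by duplicating its target under a second parent.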

Eric Gade 2020-09-02 01:00:59

Earlier today I was thinking about Mithen's "The Singing Neanderthals" (https://www.goodreads.com/book/show/375579.The_Singing_Neanderthals), and how deeply embedded in the human brain musical constructs are. Does anyone know of programming systems (particularly end-user programming systems) that use music as the programming interface, or perhaps a significant part of it?

Ivan Reese 2020-09-02 04:15:42

[moved from top level, originally by @Eric Gade]

A crude example here would be, say, making a loop using a kind of melody or something (though maybe a musical programming environment has no need for loops, or maybe it calls them codas, or whatever)

Cameron King 2020-09-02 18:01:49

I don't know anything in this vein, but as a musician and lover of games like Ocarina of Time and DDR/Guitar Hero/Before the Echo (née Sequence), I've sometimes thought about it. I think music-as-programming has more of a home in video games than general programming, because the great difficulty seems to me to be the invention (and comprehension!) of grammars that map musical features to semantic behavior. The only method I've seen is essentially a complex keyboard shortcut: a specific sequence maps to a specific function, like the song fragments in Ocarina of Time or the spells in Before the Echo.

It's easy to imagine something like musical brainf*ck, or maybe a slightly more sophisticated macro system where the user develops a mapping of musical structure to program structure (e.g., this chord represents this variable; this melodic fragment following this chord represents this method call), but that's just regular programming with extra steps. Could be fun to write a program that you can then perform, though, or to generate mappings aleatorically and use small programs as sight-reading material--or if you really hate yourself, write a JIT compiler that makes random mistakes when you do, so you have to perform the program perfectly to get it to compile correctly. These are all esoteric use cases, though.
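The "complex keyboard shortcut" approach described here boils down to a lookup table from note sequences to actions; a toy sketch (all note patterns and action names invented):

```python
# A toy 'musical macro' table: specific note sequences trigger specific
# actions, in the spirit of the song fragments in Ocarina of Time.
SPELLS = {
    ("C", "E", "G"): "open-door",
    ("A", "F", "A"): "summon-storm",
}

def interpret(notes):
    """Scan a performance for known fragments and emit their actions."""
    actions = []
    for i in range(len(notes)):
        for pattern, action in SPELLS.items():
            if tuple(notes[i:i + len(pattern)]) == pattern:
                actions.append(action)
    return actions

assert interpret(["C", "E", "G", "A", "F", "A"]) == ["open-door", "summon-storm"]
```

As noted above, this is just dispatch with extra steps; the interesting design work would be grammars where musical *structure* (not rote sequences) carries the semantics.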

But consider the limited expressivity of a musical language, like the hmmmmm system described in the link. This is why I think music + video games is a good combination. Imagine the grammar is provided and structures map to interesting behaviors--running, jumping, dashing, guarding, rolling, targeting enemies, attacking, casting spells--a musical interface could be a fun way of "programming" the world, or at least of improvisationally triggering simple scripts written in a user-friendly language. I like to imagine Hollow Knight-esque boss fights where the "score" of the fight is a set of musical+visual telegraphs that tell you what the boss is about to do, and your job as the player is to respond with an appropriate musical phrase (dodge/guard/attack). The record of the fight becomes a piece of music written as a structured improvisation between a computer and a human within the rules provided by the game developer/composer.

But these are just pipe dreams. Like I said, I haven't seen music in programming systems, and for the difficulties of grammar, I don't think we're likely to.

S.M Mukarram Nainar 2020-09-02 19:07:53

@Cameron King Have you played Patapon?

Cameron King 2020-09-02 19:12:51

S.M Mukarram Nainar No, I haven't heard of it before now. It looks pretty cool, and very similar to what I've imagined, if a little lacking in musical complexity.

Charlie Roberts 2020-09-02 18:10:10

Does anyone have tools / processes to recommend for rapidly iterating the design of a language? I'm looking for strategies to produce a document that captures the design, evolution, and potential variations of a language interface separate from implementation concerns. Good examples of this would also be very much appreciated!

Andrew F 2020-09-03 01:51:45

I haven't used it, but PLT Redex is a Racket lang for roughly the same task. There's a series of lectures on how to use it on YouTube from (IIRC) the Oregon programming languages summer school. For anyone who has used it, I'm also very interested in your experience.

Charlie Roberts 2020-09-03 05:49:10

These are both great systems that I hadn't heard of, thanks! But I'm thinking of a high-level document that uses purely speculative code examples (no implementation) to explore language design before diving into defining grammars or informal compiler design. To ask in a different way: if you asked a relatively novice programmer to create example programs informing the design of a new language, what would that document look like? What are effective language "sketches", and how do you show iteration and variation? I tend to just dive in and start implementing after writing down a few small code snippets, but I'm hoping that people here might have experience with other ways to approach design before beginning implementation.

Nick Smith 2020-09-03 07:18:36

I've been working on language design for a few years now, and for design work I've not found any better solution than interlinked textual notes (Roam is so much better at this than anything else) plus digital sketches (I use an iPad + Pencil, and just Apple Notes). I export the sketches into the note app (screenshot -> Airdrop to laptop).

Nick Smith 2020-09-03 07:22:57

The pivotal change for me was making sure I had a means to externalise every thought I was having. I no longer sit around and merely think about things. I find I can think 10x better if I write down every thought, reflect on it, and then keep revising it. Some revisions are copy+paste+archive old version, others are just deletions, because not all thoughts are worth retaining.

larry 2020-09-03 22:27:22

The best I've come up with so far is Markdown, using Typora (WYSIWYG). I write and refine all my random thoughts in text, and use the code blocks for trying out the syntax. It sorta works because whatever syntax I'm working with, I try to pick a language for the code box that matches it. Minimally I can usually get keyword or operator highlighting, and it makes the code easier to read.

Andreas S. 2020-09-03 12:09:59

Hello everyone šŸ‘‹ From time to time I check out this YouTube channel: UnjadedJade. As I'm now 40 years old, it's an interesting experience to see someone's (much younger) perspective on things. So today she released a video on how she would organize her life with Notion. https://www.youtube.com/watch?v=67jFfjwUvRQ As we can see, she works quite fluently with it. Now, do you know by any chance this Apple Knowledge Navigator video from 1987: https://www.youtube.com/watch?v=HGYFEI6uLy0

What do you think about her usage of Notion when comparing it to the Knowledge Navigator? What do you think when comparing your personal knowledge management workflow (Roam, Zettelkasten, Emacs, Vim, ...) with hers? What aspects do you like of her example Notion usage, and what might be missing or completely unthinkable in the Notion representation? Thanks for your thoughts! Ah, a bonus question: do you have a "people database" only for professional contacts, or personal too? Both mixed? If not, I would be curious how you manage/organize that. Thanks!

Roben Kleene 2020-09-03 13:54:24

Huh, when I click the link to the Unjaded Jade video it says it's unavailable/private?

Ivan Reese 2020-09-03 16:21:28

Looks like she probably needed to change something about the video, and thus reposted it ā€” here's the new link: https://www.youtube.com/watch?v=67jFfjwUvRQ

Andreas S. 2020-09-03 16:42:55

Roben Kleene it's as Ivan said, she reposted it

Roben Kleene 2020-09-03 17:08:07

A couple of thoughts about this category. The first is that apps like this went through several eras. I find this interesting because ideas that are very, very old have suddenly become popular seemingly out of nowhere (at least to me). So a question I have is: "why now?"

Here's how I'd outline the "eras" of todo lists and information managers:

-- Niche (2000-2008)

OmniOutliner

Tinderbox

DevonThink

VoodooPad

-- Enthusiast (2008-2019)

OmniFocus

Evernote

Yojimbo

Workflowy

Wunderlist (now Microsoft To Do)

Things

-- Mainstream Inflection Point (2020-)

Notion

Roam

Roben Kleene 2020-09-03 17:22:00

The second thought is that there are generally three types (sorry, these category names aren't great, but they're the best I could come up with):

  1. Custom/Hackable: Org Mode, todo.txt, building your own on Markdown

  2. Straight-Forward Apps: Most apps before the mainstream inflection point fit into this category (one exception is Tinderbox, which I'd actually call a super app). These apps fit into well-defined categories: todo list, notes, or everything bucket.

  3. "Super Apps": Notion and Roam are something new. They have enough of their own concepts that defy categorization.

By far the most popular seem to be #3, and I don't really get what's going on here: why these apps have suddenly become so popular, and especially how they've managed to capture so much imagination.

Roben Kleene 2020-09-03 17:22:12

I guess perhaps it's just a natural evolution of Evernote, but with so many more people on social media now, it ends up feeling so much bigger?

Roben Kleene 2020-09-03 17:26:14

(Regarding the comparison to the Knowledge Navigator, that product seems way more AI-driven than Notion. That's one of the things I find incredible about Notion and Roam: these are very fiddly, manual apps. I always figured that's why information management wasn't more popular before it went mainstream: it takes so much work.)

Andreas S. 2020-09-04 12:49:02

I think I am also interested in the style of the conversation or how the assistant blocks another phonecall vs how notifications work today.

šŸ•°ļø 2020-08-17 14:40:56

...

larry 2020-09-04 20:34:43

It's dated, but I read Nardi's A Small Matter of Programming recently and thought it very good. It's about design for end-user programming.

Kartik Agaram 2020-09-05 03:30:29

After dreaming about just using BIOS for the last few days (https://futureofcoding.slack.com/archives/C0120A3L30R/p1599112907014300), I just noticed this little sentence in a tab I had open all this while:

"BIOS only runs in real mode of the x86 CPU." (https://en.wikipedia.org/wiki/BIOS_interrupt_call)

Well, hell. I'd be stuck in 16-bit 8086 mode. I see why nobody uses BIOS. It's not just wanting performance.

šŸ˜¢

Mariano Guerra 2020-09-05 11:42:41

This message was in a conversation but I think the topic (and the resources linked) are good for a thread on its own.

What do you think of programming by example and programming by demonstration? what's the best implementation/resource/talk you have seen?

[September 4th, 2020 9:28 PM] jack529: Nice! Around 30 years ago there was a movement called Programming by Example (PBE) (https://en.wikipedia.org/wiki/Programming_by_example) that tried to find a generalization of this pattern for a variety of programming tasks. I'd love to see people revisit that work with modern compute power and neural network architectures. (An early-90s history of the work can be found at http://acypher.com/wwid/WWIDToC.html, and a sequel by a different researcher at http://web.media.mit.edu/~lieber/Your-Wish/. Many familiar names contributed essays: Larry Tesler, Brad Myers, &c.)

Kartik Agaram 2020-09-05 18:59:43

[August 30th, 2020 11:07 AM] hamish.todd1: In the thing I am making, you can't have a variable without choosing a specific example value for that variable. This is surely something that's been discussed here before since Bret does it in Inventing On Principle. What do folks think of it?

Jimmy Miller 2020-09-06 04:44:12

I really love the idea of programming by example. It seems that for many of the things I find myself doing day to day, it ought to be possible to have a computer infer what I am doing. I do think our current approaches make this incredibly hard.

One really cool attempt at it is Barliman by Will Byrd. Having played with it, it is definitely not something youā€™d want to use in practice, but still really neat.

https://www.youtube.com/watch?v=er_lLvkklsk

(Shameless self promotion) I also talk a bit about programming by example at the end of my talk on meander: https://www.youtube.com/watch?v=9fhnJpCgtUw&feature=youtu.be&t=2108

I never continued exploring this avenue, but the approach worked for quite a few more examples than I showed and I think could be extended quite a bit.

Jack Rusher 2020-09-06 09:07:38

I feel like the examples mentioned so far, while sharing some kind of kinship, have a different flavor from the old research I mentioned above. Here's an example video from 1994 in the context of charting data:

https://www.youtube.com/watch?v=jiHRCtJCRts

Jack Rusher 2020-09-06 09:21:02

Or a system from 25 years ago that offers interactions resembling Perlin's Chalktalk to develop UIs:

https://www.youtube.com/watch?v=VLQcW6SpJ88&list=PL3856C8FlIWfr_tX8CMUhOJvl34ylClgb

Jack Rusher 2020-09-06 10:11:35

One of my favorite systems from this period was Peridot (plus Garnet, &c -- all the "gemstone" systems that team made), but the only video I can find is very low resolution, which often makes it hard to read the onscreen text:

https://www.youtube.com/watch?v=FsGx7G72V0Q

Andrew Carr 2020-09-06 13:26:45

What is the fundamental difference between TDD and coding by example? Does coding by example just mean every variable has a default value? Or is it deeper than that?

Jack Rusher 2020-09-06 13:54:07

@Andrew Carr This is not "coding by example" but "programming by example" ā€” that is, it's not just a matter of having example values in the code or starting with input/output pairs, but rather a different way of communicating your intent to the computer (often graphically rather than textually). I recommend you watch the video above (https://www.youtube.com/watch?v=jiHRCtJCRts) to get a sense for it.

Jimmy Miller 2020-09-06 15:27:03

@Andrew Carr

What is the fundamental difference between TDD and coding by example?

To put things a different way, TDD is about a human providing examples and then a human coding to make those examples work. Programming by example is about allowing a human to provide examples, and then the program is automatically derived from those examples.

More in line with what Jack Rusher is talking about is this section from Bret Victorā€™s Magic Ink, which goes into more depth with an example. http://worrydream.com/MagicInk/#designing_a_design_tool

In fact in this essay, there is a quote that relates the kinds of things Jack talked about to the kinds of things I was discussing.

Many systems attempt to infer a full computational procedure, and have the most difficulty with computational concepts such as conditionals and iteration. As we will see, this tool mostly has to infer mappings from some set or numerical range to anotherā€”functions in the mathematical sense rather than the (imperative) computational sense. This may be significantly easier.

Barliman is a system that tries to infer a full computational procedure. The systems Jack is talking about are much more in line with what Bret is describing: inferring particular relations given graphical input.

The point of my example in my talk is that there might be some way in which we can take ourselves much further into inferring from examples to computational procedures, by changing up our programming models.

I absolutely love the work that Jack linked, but I would love to see programming by example become a more general technique useful beyond domain specific cases and with some nice underlying model that can be broadly applied. In other words, I just want to work from the ground up to get something that can power the various awesome projects that Jack posted without having to code those interactions specifically.
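
To make the distinction concrete, here is a toy Python sketch (my own illustration, not any of the systems discussed): in a TDD workflow a human writes the code to satisfy the example pairs, while a program-by-example system searches for code that satisfies them automatically. The candidate vocabulary here is hypothetical:

```python
# Toy programming-by-example search: given (input, output) example pairs,
# keep the candidate functions that are consistent with every pair.
# The vocabulary below is illustrative, not taken from any real system.

CANDIDATES = {
    "first": lambda xs: xs[0],
    "last": lambda xs: xs[-1],
    "rest": lambda xs: xs[1:],
    "count": lambda xs: len(xs),
    "reverse": lambda xs: list(reversed(xs)),
}

def synthesize(examples):
    """Return names of candidate functions satisfying all (input, output) pairs."""
    matches = []
    for name, fn in CANDIDATES.items():
        try:
            if all(fn(inp) == out for inp, out in examples):
                matches.append(name)
        except (TypeError, IndexError):
            pass  # candidate doesn't apply to this input shape
    return matches

print(synthesize([([1, 2, 3, 4], 1)]))  # → ['first']
print(synthesize([([1, 2, 3, 4], 4)]))  # → ['last', 'count']
```

Real systems like Barliman search over full program syntax rather than a fixed vocabulary, which is what makes the general problem so hard.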

Andrew Carr 2020-09-06 15:29:49

Ah! Thank you. That's much more clear. It almost feels like the end goal of some of the current parametric learning work (e.g., deep learning) where you give a single example and can generate/extrapolate to a functioning program.

Love it.

I look forward to watching and reading more this afternoon.

Jack Rusher 2020-09-06 16:39:42

Jimmy Miller For the record, I'm also a Barliman superfan. šŸ™‚ We've talked with Byrd a bit about getting something similar working for a subset of Clojure in Maria.cloud as part of a learner's assistant. I've already added a suggest function that does this weaker but still potentially useful "suggest possible code based on a before/after pair":

(suggest [1 2 3 4] :=> 1)
;; (("first" [1 2 3 4]))

(suggest [1 2 3 4] :=> 4)
;; ((last [1 2 3 4])
;;  (peek [1 2 3 4])
;;  (count [1 2 3 4]))

(suggest [1 2 3 4] :=> [2 3 4])
;; ((rest [1 2 3 4]))

(suggest [1 2 3 4] 3 :=> [2 3 4])
;; ((rest [1 2 3 4])
;;  (take-last 3 [1 2 3 4]))

(suggest [1 2 3 4] 1 :=> 2)
;; ((second [1 2 3 4])
;;  (nth [1 2 3 4] 1)
;;  (get [1 2 3 4] 1))

(suggest 1 [1 2 3 4] :=> [2 3 4])
;; ((rest [1 2 3 4])
;;  (drop 1 [1 2 3 4]))

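
As a rough guess at how such a suggest search might operate (a hypothetical Python sketch of the technique, not the actual Maria.cloud implementation), one can try a small vocabulary of one- and two-argument functions over every ordering of the supplied inputs and report whichever produce the expected output:

```python
from itertools import permutations

# Hypothetical sketch: search a small vocabulary of unary and binary
# functions for ones that map the given inputs to the expected output.
# Function names mirror the Clojure examples above but are illustrative.

UNARY = {
    "first": lambda xs: xs[0],
    "rest": lambda xs: xs[1:],
    "count": lambda xs: len(xs),
}
BINARY = {
    "nth": lambda xs, i: xs[i],
    "drop": lambda n, xs: xs[n:],
    "take-last": lambda n, xs: xs[-n:],
}

def suggest(inputs, expected):
    found = []
    # Try each unary candidate on each input on its own.
    for x in inputs:
        for name, fn in UNARY.items():
            try:
                if fn(x) == expected:
                    found.append([name, x])
            except (TypeError, IndexError):
                pass  # candidate doesn't apply to this input shape
    # Try each binary candidate on every ordered pair of inputs.
    for a, b in permutations(inputs, 2):
        for name, fn in BINARY.items():
            try:
                if fn(a, b) == expected:
                    found.append([name, a, b])
            except (TypeError, IndexError):
                pass
    return found

# Mirrors (suggest 1 [1 2 3 4] :=> [2 3 4]) from the message above:
print(suggest([1, [1, 2, 3, 4]], [2, 3, 4]))
# → [['rest', [1, 2, 3, 4]], ['drop', 1, [1, 2, 3, 4]]]
```

Trying every argument ordering is what lets the same search recover both `(rest [1 2 3 4])` and `(drop 1 [1 2 3 4])` from a single before/after pair.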
Jack Rusher 2020-09-06 18:46:09

This is a much better resolution video showing a good flavor of the work (drawing interfaces instead of coding them, using inference to guess the user's intent and asking for confirmation, constraint-based layout systems, and so on):

https://www.youtube.com/watch?v=wc8A0woo0X4

šŸŽ„ Garnet UIDE 1993

nicolas decoster 2020-09-06 17:07:13

I am very interested in programming tools that non-experts can use, i.e. people who didnā€™t learn to program initially but want or need to sometimes.

Last week I discussed this with someone who could be interested in this kind of tool. And during the discussion about her use case, something became very clear: in her journey into programming, there is a good chance that at some point she will need help from more experienced people. My feeling after that discussion is that this will be very common, and that it is very important to take it into account early in the vision and design of such tools, or in the building of the community around them.

I.e. creating tools that allow non-experts to program, making them feel it is normal not to know everything, making it really easy for them to find some help, and making it easy for a more experienced programmer to give help with the programming task.

I guess I'd had this idea/feeling for some time, but I really felt its importance after that discussion.

What do you think of that? Do you have examples of tools/communities where this is taken very seriously? Or any research work on this? Be it for end-user programming or not (in fact even experts need help from ā€œmoreā€ experts).

Kartik Agaram 2020-09-06 17:29:07

This year's Convivial Computing Salon discussed this a fair bit. In particular, Philip Tchernavskij's response to Jun Kato's talk had a lot of pointers to prior work. I can't find the slides, but watch the video at https://junkato.jp/programming-as-communication

nicolas decoster 2020-09-06 17:48:53

Thanks a lot Kartik. In fact I saw it live, but forgot about it, a sign that the subject wasn't important for me at that time. I now recall I liked Jun Kato's talk a lot. I will definitely rewatch it.

Kartik Agaram 2020-09-06 17:58:09

Some keywords from my notes of the talk, that were used in papers in the '90s:

  • participatory design

  • customizable/tailorable/personalizable/adaptable software

Some notes I made of papers:

  • Maclean et al., 1990

  • MacKay 1990 (DIY community)

Just in case this is useful šŸ™‚ Mostly this is an excuse for me to transcribe analog notes to a digital, searchable form.