...
This is such a complicated and cool question and I have so many thoughts I don't even know where to begin. I remember watching your talk about the thing you're making and I think you mentioned something along these lines being one of the big problems. I was surprised you wanted to be able to prove theorems in this system because of course a visual environment and proofs have this tension. I'm curious what kind of proofs you would have in this system. In the video you mentioned a theorem in quantum computing(?) but I couldn't find it. As an elementary but nontrivial example, how would you prove the Pythagorean theorem? I think you would have to do this abstractly, but as long as it is "constructive" you can unfold the abstract proof at various stages and apply it to specific vectors to visualize it. Also, if you are doing proofs I imagine these would be "formal proofs", and isn't that a really tough problem? Or do I misunderstand, or is there some way to get around it? E.g. just have a more expressive and dynamic means to write informal proofs. Anyway, I certainly wouldn't shy away from general variables as long as you have a means to move up and down the ladder of abstraction 🙂
Starting from concrete values, you can approach abstraction by lifting from a single value to many at once. Remember Bret's Ladder of Abstraction. For extra fun, have interactions between the multiple values. For instance, instead of a solution set of values that would work in a given context, have a probability distribution. I was pretty into these propagation networks at one point: https://dspace.mit.edu/handle/1721.1/49525. Don't know if more progress has been made.
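(To make the "many values at once" idea concrete, here's a minimal Clojure sketch of a propagator-style cell that only ever narrows a set of possibilities. The cell shape and names like constrain! are my own invention, not from the thesis linked above.)

;; A cell starts out holding every value considered possible; each
;; constraint propagated into it narrows the set, never widens it.
(def cell (atom #{1 2 3 4 5 6}))

(defn constrain! [c pred]
  (swap! c #(into #{} (filter pred %))))

(constrain! cell even?)    ;; cell is now #{2 4 6}
(constrain! cell #(> % 3)) ;; cell is now #{4 6}

A probability distribution would replace the set with value->weight pairs and renormalize after each constraint.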
In Glamorous Toolkit, examples play a key role. We went so far as to replace classic tests with examples (a test that returns an object). This leads to a nicer way to compose examples, but most importantly, examples offer concrete objects you can program against. As every object in our environment can present itself through custom views, and as these views can also be woven into larger narratives, examples also offer a nice infrastructure for documentation purposes. Here is a short article about them:
https://medium.com/feenk/an-example-of-example-driven-development-4dea0d995920
@Robin Allison When you say that it is "obvious" that visual environments and proofs have a tension, I guess (?) you mean that visual proofs are criticized as being not as rigorous as algebraic ones, because they seem to be more about intuition. This is an interesting philosophical issue and may end up being a problem for me, but I'm willing to bet that it will be sufficiently non-small
The kinds of proofs I am interested in giving are (as you'd expect) fundamentally GA-related: for example, proving that rxr* is a rotation when r is a rotor, r* is its reverse, and x is a vector. Or, expanding a bit, classical mechanics, so proving that ellipses are solutions to the two-body problem. And yes, if I get to it, quantum computing! Subalgebra closure is probably what you were thinking of. It wouldn't be super surprising if the system ends up somewhere in between "strong enough to prove the Pythagorean theorem" and "strong enough to prove QM subalgebra closure"
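(For anyone following along, a sketch of the standard algebraic argument for that first claim, hedged because the system may present it very differently: writing $r^*$ for the reverse of a rotor $r$, with $r r^* = r^* r = 1$,

$(r x r^*)^2 = r x (r^* r) x r^* = r x^2 r^* = x^2 (r r^*) = x^2$

since $x^2$ is a scalar. So the sandwich $r x r^*$ preserves lengths; combined with the fact that it stays a vector, that's the core of showing it's a rotation.)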
"How would you prove the pythagorean theorem?" Thanks a lot for asking me that question, it's a great example to start with and I'd not thought about it :D and now I've thought about it and I know how to prove it. It's a combination of defining a "right angled triangle" by the constraints that are on it (which are visual) and then applying the rules of geometric algebra, which are all visual, to that constraint. I can say more but it might just sound weird/spoil the fun when you eventually see it š
Hamish Todd By the tension I meant just the problem you brought up in the original post in this thread; as I was watching your video, that was the question that crossed my mind. Can't wait to see what you cook up!
incidentally, some kind of geometric algebra conference popped up on my subscribed youtube feed this morning: https://www.youtube.com/user/EnkiOrigami
🎥 enki mute
Yes, it is hoped that that conference will have a significant impact!
let me know when the "GA for absolute morons" lecture comes out and I will become an enthusiastic proponent!
...
Robbie Gleichman I think you misunderstand my interest in natural language. I'm interested in using it strictly as a primary programming interface. Few people would argue that we should have deep learning models interpreting all of our code (plaintext) to take action based on educated guesses about what we wanted to say (if you think that, that's a separate discussion). I'm just exploring a syntax based on logic (i.e. explicitly broken into logical units with a well-defined semantics) but expressed in natural language (so that it is human-readable without having studied a course on logic).
...
I mean making change over time more explicit: more directly observable, manipulatable, and constrainable. This can take many forms. Here's an example.
Step 0: Setup
Imagine a fairly conventional imperative system. We have a bunch of boxes (variables) in which we can put values.
Step 1: Observable
We show which values go in which boxes in what order. Often systems let us check state only in the moment, or keep dubious logs about what happened in the past; forget asking about the future. Imagine lining up the boxes. We might be able to scrub forward and backward through time, or add a timeline showing what values were in each box when.
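(A minimal Clojure sketch of that timeline idea, assuming a single append-only log of assignments; the names put! and value-at are mine, purely illustrative.)

;; Every assignment is appended to a timeline instead of overwriting a box.
(def history (atom []))

(defn put! [box value]
  (swap! history conj {:t (count @history) :box box :value value}))

(defn value-at [box t]
  ;; what the box held at step t: the latest assignment no later than t
  (->> @history
       (filter #(and (= box (:box %)) (<= (:t %) t)))
       last
       :value))

(put! :x 1) (put! :y 10) (put! :x 2)
(value-at :x 0) ;; => 1  (scrubbing backward)
(value-at :x 2) ;; => 2  (the present)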
Step 2: Manipulatable
Good old structured programming is kind of nice on this front. Each assignment statement records how the contents of a box change. If we directly manipulate the boxes in some other way, we can record a script of the assignments made. Glue scripts together. Good, clean fun.
Step 3: Constrainable
Except we don't abstract cleanly from step-by-step manipulation to composable recipes. With functions/procedures, we keep track of arguments and return values, but we don't keep track of which boxes get examined or updated. We can't easily tell whether the ordering of calls matters, and we can't easily require that things always happen in a certain order.
If you try to create a fully declarative, reactive semantics, I think you eventually run into the bind/commit distinction no matter what you do. Fundamentally, it's a question of how the lifetime of an assertion is controlled. Commit says the lifetime of this assertion is unconditional, whereas bind is conditioned on the other information the assertion is derived from. You'll want both, but it's awkward trying to make them play together.
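(A toy Clojure model of that distinction, my own framing rather than Eve's actual semantics: committed facts live in a store unconditionally, while bound facts are recomputed from it and vanish with their premises.)

;; commit: the assertion's lifetime is unconditional
(def committed (atom #{}))
(defn commit! [fact] (swap! committed conj fact))
(commit! [:temp 21])

;; bind: the assertion's lifetime is conditioned on what it derives from
(defn bound [facts]
  (set (for [[k v] facts :when (and (= k :temp) (> v 20))]
         [:warm true])))

(bound @committed)                   ;; => #{[:warm true]}
(bound (disj @committed [:temp 21])) ;; => #{} (premise gone, fact gone)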
Not being able to come up with something better is a big part of what convinced me that we needed to find something in the middle of the declarative/imperative spectrum, rather than constantly looking at the ends.
programming with just rules or just procedures sucks, but being able to freely mix them both together is pretty magical 🙂
our implementation of bind/commit led to a lot of complexity. We thought we had hidden the implications of the different timelines from people, but as @William Taysom said, it turned out there were cases where trying to get the right sequence of things to happen exposed you to that complexity, and it was unequivocally worse than what you would normally do. Eve was much better than conventional languages on some axes, but on the axis of expressing process-like things it was significantly worse.
Part of that comes from bind/commit naturally wanting to happen at different "times"; the other part came from blocks being islands that weren't obviously tied together in any meaningful way. Discovering the forest was pretty hard to do just by looking at the trees.
If you don't mind exposing users to the actual semantics/complexity of time, I would look at Statelog as a better approach to the problem
It makes time fully explicit, though I don't know that it really makes it that much better when compared to something that can just express a procedure cleanly
in any case, I'd read the Dedalus report (https://www2.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-173.html) and the Statelog paper (http://www2.informatik.uni-freiburg.de/~dbis/Publications/98/moc98.pdf) if you want to really dig into this stuff
In case the Statelog link doesn't work http://users.sdsc.edu/~ludaesch/Paper/moc98.pdf.
...
@Andreas S. The version of that concept with which I'm familiar is a sort of "horizontal transfer" of knowledge within orgs. Can you elaborate on how it would fit into your vision here?
I think it's not "my vision", but I can try. So the problem is that too many things are too complex, and educational institutions as well as organisations (even companies) might be unfit at times to provide a context for the individual to learn what they need. So I think this resource here outlines many good aspects of #p2p-learning: https://wiki.p2pfoundation.net/Category:Education
For me it's the simplest possible context in which a knowledge transfer (student-teacher relationship) can occur. It may seem very vague, even unstructured, but I think the concept is just what we need in these times. Many people need to learn about many things on very different levels. But then you also have to acknowledge that attention is finite and, possibly, so are cultural constraints. This already places tough constraints on the, let's call it, "base layer", because I think a lot of learning has to be local. There is also a lot which can be online, but I'm uncertain whether it's more or less than what can only be done offline/locally.
Mariano Guerra hey 🙂 I like this particular thread very much. Do we already have a markdown export of it, so I can link to it statically from outside (without a Slack account) via URL? Ivan Reese Where would be the place to discuss how to organize some of our FoC gems as a Zettelkasten in Markdown? I think I would volunteer a bit of time for that.
since this is a long-lived thread I would have to update the history dump to make it display newer messages, but here it is: https://marianoguerra.github.io/future-of-coding-weekly/history/?fromDate=2020-07-25&toDate=2020-07-26&channel=general#2020-07-25T14:08:04.036Z
Jack Rusher For further context on P2P learning, I'm currently reading "Education in a Time Between Worlds" by Zak Stein, which is related to the Game B movement (Jordan Hall and many others), which is related to John Vervaeke's "Awakening from the Meaning Crisis" / the religion that is not a religion, in which people try to build a culture which has better relationships to meaning and sustainability than our current one. I hope this is not too vague or "woo-woo" for you; there are a lot of scientists working on this, but as you can imagine it's a monumental task and the scientists can only be one layer of it. So I think it's interesting how this plays out in the larger parts of society, and what role "we", the FoC community, play in this infinite game of creating meaning in relation to society.
@Andreas S. My friend Samim https://twitter.com/samim calls this "open understanding", in the spirit of "open source". I'm in fundamental support of the concept.
Yes, it's such a fundamental concept. I mean, really, if you try to re-learn or to make sense of things bottom-up, climbing up the Maslow hierarchy of needs with all the infinity of information available, it's a humbling experience. But it can also be very personal and human. For example, I find joy in learning new things about food, health, and cooking which are unknown to my culture. And yet I enjoy it very much!
...
Jack Rusher Same for me. My biggest hope for a worthy successor to Emacs is https://gtoolkit.com/. It addresses what is for me the biggest limitation of Emacs: the lack of graphics.
🔗 Glamorous Toolkit
Konrad Hinsen I also really like Glamorous Toolkit. It's always good to see any system that embraces the old Smalltalk/D-Lisp philosophy of interaction and malleability! 🙂
...
(i.e., it's not a "problem" to be "solved", it's what most "normals" think of first)
The problem arises when you think of consistency and how the programs change and read state.
Can you give an example of the problem of consistency under read/write, in an end-user application?
As techies we are all aware of the issues with parallel access to replicated databases, but that's about optimisation for speed. What about an end-user-focused programming environment where all that is hidden?
Anything involving collaborative editing of the same document, especially if some collaborators are sporadically online. Possibly a cheap answer, but I'd argue it really just emphasizes the need to build around state.
@Andrew F 🙂 you landed on the exact example I was thinking of as one I hoped no one would pick, which happens to be the one my own solution to end-user state management tends to punt on! 🙂
.. I guess my point is that, for most end-user applications, state doesn't cause issues even when building a distributed system of any sort, but I'm happy to be thrown counter-examples, alongside collaborative editing.
Rich is talking about a general solution to two very different problems. For example, Grace Hopper describes how record keeping and projections are two big problems: she describes how computers help solve the problems of record keeping (database) and projections (planning) for the military, how they need very different solutions, and how one problem is very calculation-heavy and the other is very data-heavy. ... Maybe one day computers will be fast enough that we can create a single general solution to both problems, but for now, we tune solutions to fit the problems.
To what extent do our natural biases leak into the models (data structures, algorithms) we use? E.g. is it possible that a tree data structure is more appealing to both users and developers because it mirrors a hierarchy (everything has a parent/cause), vs. a graph which can have a loop (which came first, the chicken or the egg?) and is considered less intuitive.
It seems a relevant design choice to consider not only the universal truthiness of something but also the cognitive load it takes to use it.
For me it usually comes down to cognitive load. If I can get away with a vector I'll use one, then a map, then a tree, then a graph... I also believe in iterating towards a goal in an agile fashion and not over-engineering a solution. But yeah, I don't lightly use a tree or a graph, because complexity == time
I suspect that material biases wrt existing libraries matter a lot more. I'd prefer a graph for most of my applications, but it's much easier to write hierarchies. Since relational databases depend on slow joins to emulate graph queries, we don't see a lot of many-linked relationships in apps and pages, which I think limits people's imaginations (part of why Roam is taking off, IMO)
This is where it would be good to get takes from psychology, anthropology, and sociology. Hierarchies make sense to people for a lot of reasons, and some of these are deeply cultural (think: organization of societies).
On the other hand, people are generally (though we all know some exceptions) good with spaces. Perhaps what in CS is called a "graph" structure is better thought of as a "map" or "the layout of rooms in a giant house": it's very easy for us to understand in such a case how you can leave a room but somehow take a path that leads back to that room over and over, etc.
Garth Goldwater There are two things about the world of graph structures / databases I still don't understand. One is the failure of OODBs to catch on (something like Gemstone inherently uses graphs), and the other is the whole RDF/Semantic Web community and any datastores they use. You rarely see these things outside of niche or academic contexts, and I don't understand why. Any insight?
with the semantic web I'd say a combination of economic factors (e.g., Google's dominance, lack of a business model) and UX factors (never saw a really appealing end-user app for creating or browsing RDF data). unfortunately I'm at the "read the Wikipedia article five years ago" level of understanding of OODBs so I can't contribute much on that front, but I'd also note that path dependence is really, really hard to overcome, especially for foundations of applications like databases
One thing I find inspiring about the enthusiasm around tools like Roam is not only the return to the memex-like origins of thinking about computing, but an assumption that we expect more from users of computing systems. It is a sign that we might be able to break free from the current era of "expecting the least"
Trees are appealing in some cases because nesting and containment are something we understand metaphorically/cognitively via 'spaces', and they also give a nice coordinate system for 'naming via location' for each node. E.g. earth < solar system < galaxy < alpha quadrant
and second < minute < hour < day
Graphs are appealing when we think of peers and relationships (the set of buildings in a city). You can have arbitrary relationships but you don't get the canonical path to each node. I don't think one is always more intuitive.
IMO the bigger problem is our biases in 'meta models'. We have the idea of what a "data structure" is; we pick exactly one and get 'locked in' permanently to that one view. We may want multiple parallel views of the information based on what we're trying to do, but none of the usual models or type systems do this well. If you look at programming, a lot of it is 'changing the shape' without adding any new information. E.g. transform a JSON object into a struct; load an array of structs into a dict for faster lookup; extract one field of many structs into an array; etc. There's conflation between data structure and information structure, and we haven't figured out how to separate these nicely.
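(A tiny Clojure illustration of "changing the shape without adding information", since it's such a common move:)

;; the same information, reshaped from a sequence into an index
(def people [{:id 1 :name "Ada"} {:id 2 :name "Alan"}])

(def by-id (into {} (map (juxt :id identity) people)))
;; => {1 {:id 1 :name "Ada"}, 2 {:id 2 :name "Alan"}}

(get by-id 2) ;; O(1) lookup, yet nothing new was said about the data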
One issue here is that being "computer people" in 2020 we have kind of already poisoned our thinking. I'm wondering how a regular, non-programmer would organize information with only the barest of tools. Would some spatial/relationship based thing come out of it?
Also, how do we "hijack" those really deep and ancient human instincts in order to create systems that are more intuitive and full of possibility (rather than, say, exploitative)?
The past few years I've leaned hard into the idea that metaphors are the most important aspect of computing for precisely this reason
To build on something small you said above, imagine a computing system described entirely in the metaphor of it being a city
Yes, I think the appeal of 'graphs' as a universal structure is that in the high-level 'informational space', we think our minds organize things as graphs. This is where RDF etc. come in. I too am surprised we don't have mainstream programming languages that deal directly with RDF-like information structures (graphs and various views of them); instead we get low-level data structures (arrays, dicts, lists, structs). This is the idea of separating design from optimization (picking a data structure involves both, which implies conflation). I want to be able to design the information structure separately from the implementation and optimization views.
Re: poisoned our thinking - definitely. We've internalized many invented structures (~ data structures), while there may be fewer cognitive information structures (still rigorous enough to be formalized, if that's what we want).
I'm spitballing the following without real evidence, though I'm imagining research might back it up:
Re metaphors, I found Lakoff's "Metaphors We Live By" very interesting. The idea that we understand via 'metaphor'.
People are fundamentally good at dealing with complex relationships between things, be they institutions or, even and especially, personal relations (families, friends, friends of friends, etc)
Or even, like you say, how to get around a city or how parts of the city relate to each other and function.
We are also good at dealing with ambiguity around those relations, and working around them
The current computing environments are not good with this ambiguity, and that's where some of entry level HCI gets stuck
I do not think this is for purely technical reasons, but rather because most developers ("computer people") do not think socio-technically
Re: ambiguity, yes, it seems the software we build is way too strict.
Wonder if there are good examples to study where user mental models, with their ambiguity, can be fluidly represented in the software UI available to them. Most of us just copy what came before: every definition has fixed, strict fields and so on. Ambiguities are hidden away in a 'comments' field.
There is a book by the anthropologist Lucy Suchman called "Human-Machine Reconfigurations" that might be good on some of this
I have not read it in a decade, and perhaps I should pick it up again (I have learned a lot about computing in the time since)
@Eric Gade
One issue here is that being "computer people" in 2020 we have kind of already poisoned our thinking. I'm wondering how a regular, non-programmer would organize information with only the barest of tools. Would some spatial/relationship-based thing come out of it?
If you know anyone in high school or college who isn't studying CS, try looking at their personal notebooks or journals.
I think people often store information in narrative form. The narrative gives many contact points with the same piece of information.
A particularly adept learner might write about what they learned in many ways: how it applies to their life, where they have seen it in the world, the strict technical definitions, and some metaphors that capture the essence of the subject.
We want to look at the same information in as many ways as possible so we can get an intuitive sense for it. Something that we understand deeply can represent itself as a felt-sense in the body, as opposed to an intellectual thought, like a grandmaster chess player analyzing his position on the board, or a tennis player analyzing the trajectory of a ball.
Metaphors are a great way to think of how humans understand things.
Since it seems that most data structures are actually ways of organizing information rather than understanding information, it can be hard to draw many comparisons with how a human might organize information (since humans most often want to organize their information by understanding it, e.g. storing it in their body, as opposed to storing it in a physical location somewhere)
Something about search is really intuitive to me. Scanning a thoughtspace for key terms that might lead to relevant or related information.
Categories or tags also seem intuitive. The equivalent of naming objects like "chair", even though "chair" is actually a category of many specific objects.
In addition to the very good Lakoff recommendation above, I'd add the work that Hofstadter (http://worrydream.com/refs/Hofstadter%20-%20Analogy%20as%20the%20Core%20of%20Cognition.pdf) and Melanie Mitchell (https://melaniemitchell.me) have done around analogies as the basic unit of understanding.
In terms of why trees are ubiquitous and graphs are not, I think there are two things at work here:
/cc Ivan Reese
Often a graph has a spanning tree that represents a reasonable way to get around. And in case the tree isn't quite right, you can often split a node so that it appears a few times in the tree.
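(A quick Clojure sketch of what "a reasonable way to get around" can mean: pulling a depth-first spanning tree out of a cyclic graph. Toy code, adjacency-map representation assumed.)

(def g {:a [:b :c], :b [:c], :c [:a :d], :d []})

(defn spanning-tree [g root]
  (loop [stack [[nil root]], seen #{}, edges []]
    (if (empty? stack)
      edges
      (let [[from v] (peek stack), stack (pop stack)]
        (if (seen v)
          (recur stack seen edges)   ;; already-visited node: cycle edge, dropped
          (recur (into stack (map (fn [w] [v w]) (g v)))
                 (conj seen v)
                 (if from (conj edges [from v]) edges)))))))

(spanning-tree g :a)
;; => [[:a :c] [:c :d] [:a :b]] -- the loop back to :a never enters the tree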
Earlier today I was thinking about Mithen's "The Singing Neanderthals" (https://www.goodreads.com/book/show/375579.The_Singing_Neanderthals), and how deeply embedded in the human brain musical constructs are. Does anyone know of programming systems (particularly end-user programming systems) that use music as the programming interface, or perhaps as a significant part of it?
[moved from top level, originally by @Eric Gade]
A crude example here would be, say, making a loop using a kind of melody or something (though maybe a musical programming environment has no need for loops, or maybe it calls them codas, or whatever)
I don't know anything in this vein, but as a musician and lover of games like Ocarina of Time and DDR/Guitar Hero/Before the Echo (née Sequence), I've sometimes thought about it. I think music-as-programming has more of a home in video games than general programming, because the great difficulty seems to me to be the invention (and comprehension!) of grammars that map musical features to semantic behavior. The only method I've seen is essentially a complex keyboard shortcut: a specific sequence maps to a specific function, like the song fragments in Ocarina of Time or the spells in Before the Echo.
It's easy to imagine something like musical brainf*ck, or maybe a slightly more sophisticated macro system where the user develops a mapping of musical structure to program structure (e.g., this chord represents this variable; this melodic fragment following this chord represents this method call), but that's just regular programming with extra steps. Could be fun to write a program that you can then perform, though, or to generate mappings aleatorically and use small programs as sight-reading material; or, if you really hate yourself, write a JIT compiler that makes random mistakes when you do, so you have to perform the program perfectly to get it to compile correctly. These are all esoteric use cases, though.
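(Purely for fun, a toy Clojure sketch of that "complex keyboard shortcut" flavor of grammar: note fragments mapping to actions. Everything here is made up.)

(def grammar {[:c4 :e4 :g4] :jump
              [:d4 :f4]     :guard})

(defn interpret [notes]
  ;; greedily match known fragments against the incoming note stream
  (loop [notes notes, actions []]
    (if (empty? notes)
      actions
      (if-let [[frag action]
               (some (fn [[frag action]]
                       (when (= frag (take (count frag) notes))
                         [frag action]))
                     grammar)]
        (recur (drop (count frag) notes) (conj actions action))
        (recur (rest notes) actions)))))  ;; unknown note: skip it

(interpret [:c4 :e4 :g4 :a4 :d4 :f4]) ;; => [:jump :guard]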
But consider the limited expressivity of a musical language, like the hmmmmm system described in the link. This is why I think music + video games is a good combination. Imagine the grammar is provided and structures map to interesting behaviors--running, jumping, dashing, guarding, rolling, targeting enemies, attacking, casting spells--a musical interface could be a fun way of "programming" the world, or at least of improvisationally triggering simple scripts written in a user-friendly language. I like to imagine Hollow Knight-esque boss fights where the "score" of the fight is a set of musical+visual telegraphs that tell you what the boss is about to do, and your job as the player is to respond with an appropriate musical phrase (dodge/guard/attack). The record of the fight becomes a piece of music written as a structured improvisation between a computer and a human within the rules provided by the game developer/composer.
But these are just pipe dreams. Like I said, I haven't seen music in programming systems, and given the difficulties of grammar, I don't think we're likely to.
S.M Mukarram Nainar: No, I haven't heard of it before now. It looks pretty cool, and very similar to what I've imagined, if a little lacking in musical complexity.
Does anyone have tools / processes to recommend for rapidly iterating the design of a language? I'm looking for strategies to produce a document that captures the design, evolution, and potential variations of a language interface separate from implementation concerns. Good examples of this would also be very much appreciated!
There is Gramada: https://github.com/hpi-swa/Gramada
🔗 hpi-swa/Gramada
I haven't used it, but PLT Redex is a Racket lang for roughly the same task. There's a series of lectures on how to use it on YouTube from (IIRC) the Oregon Programming Languages Summer School. For anyone who has used it, I'm also very interested in your experience.
These are both great systems that I hadn't heard of, thanks! But I'm thinking of a high-level document that uses purely speculative code examples (no implementation) to explore language design before diving into defining grammars or informal compiler design. To ask in a different way: if you asked a relatively novice programmer to create example programs informing the design of a new language, what would that document look like? What are effective language "sketches", and how do you show iteration and variation? I tend to just dive in and start implementing after writing down a few small code snippets, but I'm hoping that people here might have experience with other ways to approach design before beginning implementation.
I've been working on language design for a few years now, and for design work I've not found any better solution than interlinked textual notes (Roam is so much better at this than anything else) plus digital sketches (I use an iPad + Pencil, and just Apple Notes). I export the sketches into the note app (screenshot -> Airdrop to laptop).
The pivotal change for me was making sure I had a means to externalise every thought I was having. I no longer sit around and merely think about things. I find I can think 10x better if I write down every thought, reflect on it, and then keep revising it. Some revisions are copy+paste+archive old version, others are just deletions, because not all thoughts are worth retaining.
The best I've come up with so far is Markdown, using Typora (WYSIWYG). I write and refine all my random thoughts in text, and use the code blocks for trying out the syntax. It sorta works because, whatever syntax I'm working with, I try to pick a language for the code box that matches it. Minimally I can usually get keyword or operator highlighting, and it makes the sketches easier to read.
Hello everyone 👋 From time to time I check out this YouTube channel: UnjadedJade. As I'm now 40 years old, it's an interesting experience to see someone's (much younger) perspective on things. Today she released a video on how she organizes her life with Notion: https://www.youtube.com/watch?v=67jFfjwUvRQ As we can see, she works quite fluently with it. Now, do you know by any chance this Apple Knowledge Navigator video from 1987: https://www.youtube.com/watch?v=HGYFEI6uLy0
What do you think about her usage of Notion when comparing it to the Knowledge Navigator? What do you think when comparing your personal knowledge management workflow (Roam, Zettelkasten, Emacs, Vim, ...) with hers? What aspects do you like of her example Notion usage, and what might be missing or completely unthinkable in the Notion representation? Thanks for your thoughts! Ah, a bonus question: do you have a "people database" only for professional contacts, or for personal ones too? Both mixed? If not, I would be curious how you manage/organize that. Thanks!
Huh, when I click the link to the Unjaded Jade video, it says it's unavailable/private?
Looks like she probably needed to change something about the video, and thus reposted it. Here's the new link: https://www.youtube.com/watch?v=67jFfjwUvRQ
A couple of thoughts about this category. The first is that apps like this went through several eras. I find this interesting, because ideas that are very, very old have suddenly become popular seemingly out of nowhere (at least to me). So a question I have is "why now?"
Here's how I'd outline the "eras" of todo lists and information managers:
-- Niche (2000-2008)
OmniOutliner
Tinderbox
DevonThink
VoodooPad
-- Enthusiast (2008-2019)
OmniFocus
Evernote
Yojimbo
Workflowy
Wunderlist (now Microsoft To Do)
Things
-- Mainstream Inflection Point (2020-)
Notion
Roam
The second thought is that there are generally three types (sorry, these category names aren't great, but they're the best I could come up with):
Custom/Hackable: Org Mode, todo.txt, building your own on Markdown
Straight-Forward Apps: Most apps before the mainstream inflection point fit into this category (one exception is Tinderbox, which I'd actually call a super app). These apps fit into well-defined categories: todo list, notes, or everything bucket.
"Super Apps": Notion and Roam are something new. They have enough of their own concepts that defy categorization.
By far the most popular seems to be #3, and I don't really get what's going on here. Why have these apps suddenly become so popular, and especially managed to capture so much imagination?
I guess perhaps it's just a natural evolution of Evernote, but with so many more people on social media now, it ends up feeling so much bigger?
(Regarding the comparison to the Knowledge Navigator, that product seems way more AI-driven than Notion. That's one of the things I find incredible about Notion and Roam: these are very fiddly, manual apps. I always figured that's why this kind of information management wasn't more popular before it went mainstream: it takes so much work.)
I think I am also interested in the style of the conversation, or how the assistant blocks another phone call vs. how notifications work today.
...
It's dated, but I read Nardi's A Small Matter of Programming recently and thought it very good. It's about design for end-user programming.
After dreaming about just using BIOS for the last few days (https://futureofcoding.slack.com/archives/C0120A3L30R/p1599112907014300), I just noticed this little sentence in a tab I had open all this while:
> BIOS only runs in real mode of the x86 CPU. (https://en.wikipedia.org/wiki/BIOS_interrupt_call)
Well, hell. I'd be stuck in 16-bit 8086 mode. I see why nobody uses BIOS. It's not just wanting performance.
😢
This message was in a conversation but I think the topic (and the resources linked) are good for a thread on its own.
What do you think of programming by example and programming by demonstration? what's the best implementation/resource/talk you have seen?
[September 4th, 2020 9:28 PM] jack529: Nice! Around 30 years ago there was a movement called Programming by Example (PBE) (https://en.wikipedia.org/wiki/Programming_by_example) that tried to find a generalization of this pattern for a variety of programming tasks. I'd love to see people revisit that work with modern compute power and neural network architectures. (An early-90s history of the work can be found here: http://acypher.com/wwid/WWIDToC.html, and a sequel by a different researcher here: http://web.media.mit.edu/~lieber/Your-Wish/. Many familiar names contributed essays: Larry Tesler, Brad Myers, &c.)
Cross-link to another recent thread on programming by example: https://futureofcoding.slack.com/archives/C5T9GPWFL/p1598810841008500?thread_ts=1598810841.008500&cid=C5T9GPWFL
[August 30th, 2020 11:07 AM] hamish.todd1: In the thing I am making, you can't have a variable without choosing a specific example value for that variable. This is surely something that's been discussed here before since Bret does it in Inventing On Principle. What do folks think of it?
I really love the idea of programming by example. It seems that for many of the things I find myself doing day to day, it ought to be possible to have a computer infer what I am doing. I do think our current approaches make this incredibly hard.
One really cool attempt at it is Barliman by Will Byrd. Having played with it, it is definitely not something you'd want to use in practice, but still really neat.
https://www.youtube.com/watch?v=er_lLvkklsk
(Shameless self promotion) I also talk a bit about programming by example at the end of my talk on meander: https://www.youtube.com/watch?v=9fhnJpCgtUw&feature=youtu.be&t=2108
I never continued exploring this avenue, but the approach worked for quite a few more examples than I showed and I think could be extended quite a bit.
I feel like the examples mentioned so far, while sharing some kind of kinship, have a different flavor from the old research I mentioned above. Here's an example video from 1994 in the context of charting data:
Or a system that offers these interactions that resemble Perlin's ChalkTalk to develop UIs (25 years ago):
https://www.youtube.com/watch?v=VLQcW6SpJ88&list=PL3856C8FlIWfr_tX8CMUhOJvl34ylClgb
One of my favorite systems from this period was Peridot (plus Garnet, &c -- all the "gemstone" systems that team made), but the only video I can find is very low resolution, which often makes it hard to read the onscreen text:
🎥 Peridot Full 1987
What is the fundamental difference between TDD and coding by example? Does coding by example just mean every variable has a default value? Or is it deeper than that?
@Andrew Carr This is not "coding by example" but "programming by example". That is, it's not just a matter of having example values in the code or starting with input/output pairs, but rather a different way of communicating your intent to the computer (often graphically rather than textually). I recommend you watch the video above (https://www.youtube.com/watch?v=jiHRCtJCRts) to get a sense for it.
@Andrew Carr
What is the fundamental difference between TDD and coding by example?
To put things a different way: TDD is about a human providing examples and then a human coding to make those examples work. Programming by example is about allowing a human to provide examples, from which the program is automatically derived.
More in line with what Jack Rusher is talking about is this section from Bret Victor's Magic Ink, which goes into more depth with an example: http://worrydream.com/MagicInk/#designing_a_design_tool
In fact in this essay, there is a quote that relates the kinds of things Jack talked about to the kinds of things I was discussing.
Many systems attempt to infer a full computational procedure, and have the most difficulty with computational concepts such as conditionals and iteration. As we will see, this tool mostly has to infer mappings from some set or numerical range to another -- functions in the mathematical sense rather than the (imperative) computational sense. This may be significantly easier.
Barliman is a system that tries to infer a full computational procedure. The systems Jack is talking about are much more in line with what Bret is achieving: inferring particular relations given graphical input.
The point of the example in my talk is that there might be some way to take ourselves much further in inferring computational procedures from examples, by changing up our programming models.
I absolutely love the work that Jack linked, but I would love to see programming by example become a more general technique, useful beyond domain-specific cases, with some nice underlying model that can be broadly applied. In other words, I want to work from the ground up to get something that can power the various awesome projects Jack posted without having to code those interactions specifically.
Ah! Thank you. That's much more clear. It almost feels like the end goal of some of the current parametric learning work (e.g., deep learning) where you give a single example and can generate/extrapolate to a functioning program.
Love it.
I look forward to watching and reading more this afternoon.
Jimmy Miller For the record, I'm also a Barliman superfan. 🙂 We've talked with Byrd a bit about getting something similar working for a subset of Clojure in Maria.cloud as part of a learner's assistant. I've already added a suggest function that does this weaker but still potentially useful "suggest possible code based on a before/after pair":
(suggest [1 2 3 4] :=> 1)
;; (("first" [1 2 3 4]))
(suggest [1 2 3 4] :=> 4)
;; ((last [1 2 3 4])
;; (peek [1 2 3 4])
;; (count [1 2 3 4]))
(suggest [1 2 3 4] :=> [2 3 4])
;; ((rest [1 2 3 4]))
(suggest [1 2 3 4] 3 :=> [2 3 4])
;; ((rest [1 2 3 4])
;; (take-last 3 [1 2 3 4]))
(suggest [1 2 3 4] 1 :=> 2)
;; ((second [1 2 3 4])
;; (nth [1 2 3 4] 1)
;; (get [1 2 3 4] 1))
(suggest 1 [1 2 3 4] :=> [2 3 4])
;; ((rest [1 2 3 4])
;; (drop 1 [1 2 3 4]))
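(For the curious: the obvious way to implement something like this is brute-force search over a whitelist of candidate expressions. This is my guess at the general shape, not Maria.cloud's actual code, and it only handles the single-input case.)

(def candidates
  {'first first, 'second second, 'last last,
   'peek peek, 'count count, 'rest rest})

(defn suggest* [input expected]
  ;; try every known unary function on the input and keep the
  ;; candidates whose result matches the expected output
  (for [[sym f] candidates
        :when (= expected (try (f input) (catch Exception _ ::fail)))]
    (list sym input)))

(suggest* [1 2 3 4] 1)       ;; => ((first [1 2 3 4]))
(suggest* [1 2 3 4] [2 3 4]) ;; => ((rest [1 2 3 4]))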
This is a much better resolution video showing a good flavor of the work (drawing interface instead of coding them, using inference to guess the user's intent and asking for confirmation, constraint-based layout systems, and so on):
🎥 Garnet UIDE 1993
I am very interested in programming tools that non-experts can use, i.e. people who didn't learn to program initially but want or need to sometimes.
Last week I discussed with someone who could be interested in this kind of tool, and during the discussion about her use case, something appeared very clearly: in her journey into programming, there is a good chance that at some point she will need help from more experienced people. My feeling after that discussion is that this will be very common, and that it is very important to take this into account early in the vision and the design of such tools, or in the building of the community around them.
I.e. creating tools that allow non-experts to program means making them feel it is normal to not know everything, making it really easy for them to find some help, and making it easy for a more experienced programmer to give help with the programming task.
I guess I had this idea/feeling for some time, but I really feel its importance after that discussion.
What do you think of that? Do you have examples of tools/communities where this is taken very seriously? Or any research work on this? Be it for end-user programming or not (in fact, even experts need help from "more" experts).
This year's Convivial Computing Salon discussed this a fair bit. In particular, Philip Tchernavskij's response to Jun Kato's talk had a lot of pointers to prior work. I can't find the slides, but watch the video at https://junkato.jp/programming-as-communication
Thanks a lot Kartik. In fact I saw it live, but forgot about it; a sign that the subject wasn't important for me at that time. I now recall I liked Jun Kato's talk a lot. I will definitely rewatch it.
Some keywords from my notes on the talk that were used in papers in the '90s:
participatory design
customizable/tailorable/personalizable/adaptable software
Some notes I made of papers:
Maclean et al., 1990
MacKay 1990 (DIY community)
Just in case this is useful 🙂 Mostly this is an excuse for me to transcribe analog notes to a digital, searchable form.