You are viewing archived messages.

nicolas decoster 🕰️ 2020-09-06 17:07:13

I am very interested in programming tools that non-experts can use, i.e. people who didn't initially learn to program but sometimes want or need to.

Last week I talked with someone who could be interested in this kind of tool. During the discussion about her use case, something appeared very clearly: in her journey into programming, there is a good chance that at some point she will need help from more experienced people. My feeling after that discussion is that this will be very common, and that it is very important to take it into account early in the vision and design of such tools, or in the building of the community around them.

That is: creating tools that allow non-experts to program, that make them feel it is normal not to know everything, that make it really easy for them to find help, and that make it easy for a more experienced programmer to help with the programming task.

I guess I have had this idea/feeling for some time, but I really felt its importance after that discussion.

What do you think of that? Do you have examples of tools/communities where this is taken very seriously? Or any research work on this? Be it for end-user programming or not (in fact, even experts need help from "more" experts).

larry 2020-09-06 22:26:35

This is, I think, an excellent book on end-user programming: https://www.amazon.com/Small-Matter-Programming-Perspectives-Computing/dp/026228040X It includes a significant discussion of the role of peer support.

Kartik Agaram 2020-09-06 22:32:19

Indeed. Searching http://history.futureofcoding.org for 'nardi' brings up some good past threads.

William Taysom 2020-09-07 02:45:34

Here's a way to put it... If you're making sufficiently rich software (from the end-user perspective), they are going to need help using it. You can choose to assist them in getting help, or you can hope that users will self-organize a forum for helping each other. Either way, help will be needed in order to get the most out of your software.

Konrad Hinsen 2020-09-07 06:24:21

This is a frequent situation in computational science as well. It's not quite end-user programming, but a mixture of next-to-end-user and end-plus-next-to-end-user programming: typically a collaboration between a domain expert with some computing knowledge and a software professional with some domain knowledge.

Duncan Cragg 2020-09-07 09:54:12

I think there are increasing levels of end-user programmability:

  • adjusting an app's configuration and settings
  • simple rules: email filters, IFTTT
  • spreadsheet formulae; box-n-wire dataflow through function boxes
  • copying someone else's code and changing the obvious parameters, etc
  • doing lots of the above and realising it's become a massive program, then panic!

So I think we're talking about the latter here? Where an EUP system is very easy to get into, but then as a result it's very easy to create a huge blob of dense, opaque programming?

I think it's important that the EUP system offers a model that is structured from the start, and where the normie programmer can draw on the work of others in a structured way, not just copy-pasting.

They should be able to build a complex system incrementally and safely because the structure enables them to see how both their own constructions come alive and also how the work of others operates, and to see the effects of their changes immediately.

Duncan Cragg 2020-09-07 09:55:28

(needless to say, I'm designing Onex this way, in the hope of addressing this issue!)

Duncan Cragg 2020-09-07 10:19:53

Of course, all this tech is irrelevant if you can't nurture a vibrant open source community that is willing to create examples and to help normies!

nicolas decoster 2020-09-07 13:29:57

Thanks Duncan Cragg for your comment! You are right: an important thing I want to address is how to design a system that can manage the "panic" step. And you point out that one way to do it is to offer tools that limit the risk of the panic occurring, by providing a safe environment where complexity doesn't "hit" the programmer too late.

nicolas decoster 2020-09-07 13:37:10

But what I also want to address is that even if the programmer doesn't panic, there will be points where they obviously lack some knowledge/experience/etc. that prevents them from going forward. And to keep going, some external help is needed.

So I would like to take this into account in the design of programming environments. These situations will always happen, and I want to improve the experience both for the one(s) who need/get help and the one(s) who give it.

nicolas decoster 2020-09-07 13:40:29

@William Taysom:

"You can choose to assist them in getting help, or you can hope that users will self-organize a forum for helping each other."

Here, I have already chosen to "assist them in getting help", and moreover, the environment must be "between-users-help aware".

Duncan Cragg 2020-09-07 13:40:49

Well that is perhaps leading to an idea of a distributed, semantic (AST not text) IDE?

Don Abrams 2020-09-08 11:06:02

It's also worth noting that decreasing scope can often reduce essential complexity; for example we could probably write some crazy huge language that can compile to all other languages but it'd likely be more useless than any of the others too. In the past I've seen a simple core, with pattern-based replacement for "optimization" work well, though the replacements often carry inexpressible constraints (leaky abstractions) and can increase complexity :/
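Don's "simple core, with pattern-based replacement" idea can be sketched as term rewriting over expression trees. The tree encoding and the rules below are illustrative inventions, not taken from any system mentioned in the thread:

```python
# A minimal sketch of pattern-based replacement as an "optimizer":
# expressions are nested ('op', left, right) tuples, and each rule
# rewrites one shape into a simpler equivalent, applied bottom-up.
def simplify(expr):
    if not isinstance(expr, tuple):
        return expr  # a leaf: a number or a variable name
    op, l, r = expr
    l, r = simplify(l), simplify(r)  # rewrite children first
    if op == '*' and r == 1: return l       # x * 1 -> x
    if op == '*' and l == 1: return r       # 1 * x -> x
    if op == '*' and 0 in (l, r): return 0  # x * 0 -> 0
    if op == '+' and r == 0: return l       # x + 0 -> x
    if op == '+' and l == 0: return r       # 0 + x -> x
    return (op, l, r)

simplify(('+', ('*', 'x', 1), 0))  # -> 'x'
```

Don's caveat shows up even at this scale: the rules silently assume pure numeric semantics (x * 0 -> 0 is wrong if evaluating x has side effects), which is exactly the kind of inexpressible constraint that leaks.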

Duncan Cragg 2020-09-08 16:32:47

Here's a doc on my site talking about this: http://object.network/onex-app.html

Duncan Cragg 2020-09-08 16:34:33

In Onex, you already get a distributed "AST" graph for both rules and data, both of which can be edited by multiple users

Charlie Roberts 🕰️ 2020-09-02 18:10:10

Does anyone have tools / processes to recommend for rapidly iterating the design of a language? I'm looking for strategies to produce a document that captures the design, evolution, and potential variations of a language interface separate from implementation concerns. Good examples of this would also be very much appreciated!

Jason Brennan 2020-09-07 16:03:38

Reviving this thread 👋

One thing I've found helpful is to consider: What sorts of things do you want people to be able to do with your system? What goals can they accomplish? How will their thinking be changed by using your system? Why should they turn to it, instead of something else? Don't think about how you'll do this yet, just focus on human needs that need meeting.

From there, you can start to sketch out ideas (I'd start on paper, as it's the most free form). Remember the needs and contexts of the people you're trying to help! Are they best met with text files (and things like git, other source code tools) or are they better met with something more graphical? (Or really far out: are they best met with an entirely new kind of computer??)

At this point, I like to move into something more like a drawing tool (I use Sketch on the Mac) that lets me mock things up and write notes, spatially. I like to imagine different kinds of UIs for solving issues, and then I can write notes alongside them, etc.

As far as actually prototyping these things, that's where I struggle to do so rapidly (as far as real working systems go). But I think there's something very powerful (and cheap!) about mocking things up in a drawing tool (or in a text file — but beware, if you do everything in a text file, you'll probably narrow your ability to work on programming UX things provided by your environment)

Kartik Agaram 2020-09-07 16:15:54

@Jason Brennan the original thread seems to be about designing languages. I interpret that as about designing textual programming languages. Does prototyping those still have a limitation with text files? (Earlier phases certainly benefit from the ability to draw lines and boxes and so on.)

In general I'm a bit bemused by this thread. As a programmer it all feels quite waterfall-y. If you try to "mock up" a language before giving it a grammar you're quite likely to end up introducing ambiguities that need modifying the language to resolve. Similarly with many other aspects of the activity. If an activity eventually needs to be mapped on non-linear primitives (as computer programs do), it seems to me that it benefits from lots of feedback loops.
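Kartik's point about ambiguity is easy to make concrete. The toy grammar below is my own example, not from the thread: E -> E '-' E | NUM admits two parses of "1 - 2 - 3" with different meanings, and every downstream tool would have to pick one.

```python
# Toy illustration: the grammar  E -> E '-' E | NUM  is ambiguous.
# `parses` enumerates every parse tree for a flat token list.
def parses(tokens):
    if len(tokens) == 1:
        return [tokens[0]]  # a bare number parses only as itself
    trees = []
    for i, tok in enumerate(tokens):
        if tok == '-':  # try this '-' as the top-level operator
            for left in parses(tokens[:i]):
                for right in parses(tokens[i + 1:]):
                    trees.append(('-', left, right))
    return trees

def evaluate(tree):
    if isinstance(tree, tuple):
        _, l, r = tree
        return evaluate(l) - evaluate(r)
    return tree

trees = parses([1, '-', 2, '-', 3])
# Two distinct parse trees for "1 - 2 - 3", and they disagree:
# (1 - 2) - 3 = -4   vs   1 - (2 - 3) = 2
```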

Funny story: the original waterfall diagram [1] had feedback loops between phases, so even waterfall practiced right can be quite effective. Of course, that's not what anybody means by the term anymore..

[1] http://www-scf.usc.edu/~csci201/lectures/Lecture11/royce1970.pdf, figure 3

Jason Brennan 2020-09-07 16:21:40

Yep! I still think there's a ton of benefit to mocking this stuff up in a drawing of some kind, even if you're doing a textual language.

There's lots of "programming experience" I hope everyone in here aims to include for their languages, like auto-complete, debugging tools, visualizations, etc. In my view, it's better to design these things along with the language itself (not as goodies added on later), as they'll help you steer how you want the language to work.

Of course, as you say, it's still important to have a somewhat working implementation to know the limits of what can be reasonably expressed in your language, but I look at that more as a limitation than a guiding tool.

Kartik Agaram 2020-09-07 17:13:50

The tendency of grammars to become ambiguous is not a limitation of some single tool, it's more like a limitation of the universe you live in. It's a little bit like the speed of light. I suppose you could see it as a technical limitation of current technology rather than a property of the universe, but either way it seems as unfair as asking a carmaker why they haven't managed to add a fusion drive yet to their creation. If you create an ambiguous grammar, all your tools like autocomplete, visualizations, etc. have to now deal with the grammar. And the drag (amount of implementation needed) compounds over time.

Perhaps this is getting off-topic. I'll just reiterate that programming isn't just something you do once you know what you want. It's a tool in the toolbox for arriving at good designs.

"Writing doesn't just communicate ideas. It generates them." -- Paul Graham (http://www.paulgraham.com/writing44.html)

Stefan Lesser 2020-09-07 20:28:32

Ohhh, Kartik Agaram, have you read A Timeless Way of Building? If you haven't, you'd probably enjoy it.

Kartik Agaram 2020-09-07 20:35:52

I've read bits and pieces of it as others have pointed them out. But I really ought to. Particularly since I cited it in http://akkartik.name/akkartik-convivial-20200607.pdf 😕

Stefan Lesser 2020-09-07 20:45:54

I never really got that Alexander had generative grammars in mind for what he calls pattern language. But that becomes quite clear in ATWoB. That elevates the concept to pretty much your description above.

Obviously, I can't make you read a 500 page book, but please do and then tell me (us) what you think. :-)

Chris Pearson šŸ•°ļø 2020-08-29 07:01:21

I'd like to know more about how Eve managed reactivity (e.g. 'commit vs bind' and the idea of tables that contain events). What worked well? Did later iterations/inspired projects tweak this approach? How (if at all) can lazy vs eager reactivity be managed using this approach?

http://docs-next.witheve.com/v0.2/handbook/bind/

📷 image.png

Jamie Brandon 2020-09-07 18:10:58

I think differential dataflow's loops might be useful here. They allow lexically scoped regions of ordered time, rather than trying to mash everything into a single global timestep.

Chris Granger 2020-09-07 18:37:58

Yeah with DD, you're bringing more imperative control to the declarative-ness so you can try to manually resolve some of the semantic issues that arise (via a combination of careful ordering and explicit branching)

Roben Kleene 2020-09-08 18:06:34

Is there any hope for end-user programming when programmers themselves don't use programming to solve their own problems?

For a long time, I've been asking myself why more programmers don't use programming to solve their own problems, but it just occurred to me what that implies for end-user programming. Does end-user programming ever have a chance of succeeding for non-programmers if programmers themselves aren't using programming to solve their own problems?

When I say programmers don't solve their own problems with programming, what I mean is, there is sort of a ladder of useful techniques to use programming to make programming itself easier. It starts with customizing your shell or your text editor by cut-and-pasting code snippets you find online, and progresses to writing your own customizations from scratch, to writing your own shell/text editor extensions, and finally to writing your own full programs to solve your own problems.

I find it so odd that it's so rare for any of the programmers I know personally to progress beyond the first stage (some light shell/text editor customization by cut-and-pasting some code they found online). Since programmers are experts at programming, and they generally choose not to solve their own problems with programming, what hope is there for end-users to use programming to solve theirs? Or is there something wrong with the lens I'm looking through here? Perhaps programmers are using programming to solve their own problems in a way I'm not seeing (i.e., ways that aren't shell and text editor scripting)?

Eric Gade 2020-09-08 18:10:26

We first need to articulate what is meant by "programming." Without a coherent definition, we will be at a loss to come up with some modified form of it called "end user programming." My view on this is that "telling a computer what to do, later" is a reasonable enough definition, and from that it's easy for us to see that most things done using a computer are "programming-like" already.

Eric Gade 2020-09-08 18:10:52

The next stage is then to divorce the contemporary association of "programming" with languages and text editors

Eric Gade 2020-09-08 18:11:18

Using a computer involves using a computing environment, not a language

Eric Gade 2020-09-08 18:11:44

That environment can certainly prefer a language, or even deeply support a single language (see: older systems that boot into BASIC, etc)

Eric Gade 2020-09-08 18:12:07

But what is important for everyone -- whether you are the power user called a "programmer" or the regular user today -- is the environment

Eric Gade 2020-09-08 18:12:41

What I mean by this is that there is not going to be a compelling "end user programming" experience in contemporary computing environments

Srini Kadamati 2020-09-08 18:12:48

I mean, many people work at companies on products they don't themselves use. Part of the rise in the profession of Product Managers and Customer Success is to help bridge the gap to the end-users and their goals.

Programmers ALONE sitting in a cave probably can't create programming tools / interfaces / languages that accidentally empower end-users. I would agree there.

Eric Gade 2020-09-08 18:12:54

They are not designed with that goal in mind, and are in fact actively hostile towards it

Srini Kadamati 2020-09-08 18:13:16

yeah I broadly agree with @Eric Casteleijn

Srini Kadamati 2020-09-08 18:13:30

the current software ecosystem is too complex and instills values that are somewhat counter to the goals of EUP

Srini Kadamati 2020-09-08 18:14:02

even if you built an awesome EUP tool like Excel, it will need to interface with a MySQL database or hosted in a cloud server somewhere or have complex permissions that enterprises need

Eric Gade 2020-09-08 18:14:16

Another way to look at this is to look at compelling examples from the past of end user programmable systems

Srini Kadamati 2020-09-08 18:14:21

that's why entirely new mediums / platforms are interesting, like Dynamicland or VR / AR or some new platform entirely

Srini Kadamati 2020-09-08 18:14:53

there's an opportunity to start greenfield on a hardware + software platform that's uninteresting to existing programmers and build strong values & cultural ideals around observability, understandability, and accessibility

Srini Kadamati 2020-09-08 18:15:01

but that's not easy!

Eric Gade 2020-09-08 18:15:40

Hypercard is a good example. It was great, and extremely popular in its own day. There are clones today -- why are they not as popular? My answer is because HC fit holistically into the OS of its day. It could do a large part of what the whole OS could do, but within its own environment

Eric Gade 2020-09-08 18:15:50

Srini Kadamati yeah I agree

Eric Gade 2020-09-08 18:16:38

Here's an analogy: why is it so hard to navigate the streets of older world cities? Because they've built atop themselves over the centuries, and were designed before mass transit, cars, etc

Eric Gade 2020-09-08 18:17:05

You couldn't build an NYC style grid layout on top of an old European city without razing the whole thing to the ground first

Eric Gade 2020-09-08 18:18:12

Likewise, a teletype-based operating system for time-sharing computing systems isn't going to easily "evolve" towards something that is amenable to end user programming

Srini Kadamati 2020-09-08 18:22:06

all good analogies. Here's a more computing-related one… Microsoft was investigated for anti-trust b/c of Windows dominance and their shenanigans with IE. Google ended up being more important b/c they were kinda the homepage for the internet after 2000. Then, Apple ended up becoming super important b/c of their control of mobile.

But neither of these companies threatened Windows by fighting directly. They would have lost! Google tried to commoditize / trivialize the OS by focusing on the web (Search + Chrome). Apple built a parallel platform that had nothing to do with Windows.

Andrew F 2020-09-08 18:22:10

The main reason I don't solve more of my problems with programming is that the tools for doing so are so clumsy that it's only worth it for really big jobs. There's an aspect of learnability (bash language is a nightmare) and of integration with the system (hooking most applications is a nightmare AFAIK, especially GUI)

Roben Kleene 2020-09-08 18:22:41

Most of these comments are (interesting) observations about why it's hard for end-users to use existing environments for programming. Any comments specific to why programmers don't use them for programming? (E.g., they write code as their job, but don't write much code to do their job more effectively.)

Andrew F 2020-09-08 18:23:33

I'm probably not the only programmer who started off trying to use programming to solve things, but got burned because it was just way more efficient to do it manually.

Eric Gade 2020-09-08 18:23:46

Roben Kleene it's the same reason

Eric Gade 2020-09-08 18:24:12

It's too complicated and takes too much time

Roben Kleene 2020-09-08 18:24:23

Yeah totally makes sense, things like Bash seem incredibly clumsy until you try to use something else, and then realize it's the task itself that's clumsy (my interpretation)

Eric Gade 2020-09-08 18:24:27

If you are a professional programmer, someone is paying you to do all that complicated stuff

Eric Gade 2020-09-08 18:24:32

But personal time is more valuable

Eric Gade 2020-09-08 18:25:25

That said, bash scripting and things of that sort definitely constitute programming, and developers do it for themselves all the time as far as I've seen

Roben Kleene 2020-09-08 18:26:13

I do a ton of this kind of thing on company time, I usually automate all the things everyone is doing manually. But the people who do this work are maybe 1 out of 10 programmers in my experience.

Roben Kleene 2020-09-08 18:27:00

I think Bash is the most popular, but I still find even that to be rare; it would be an interesting data point if others find it less rare.

Andrew F 2020-09-08 18:27:03

The task is usually more burdened by accidental complexity than actually clumsy. That's the system integration aspect a couple of us have mentioned. Doing everything in plain text was cute but causes a lot of friction when working with structured data, especially when you have to convert between formats.
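Andrew's point about format friction shows up even in the simplest case. A sketch in Python (stdlib only, the function name is mine): converting CSV with a header row into JSON records needs real parsing, which line-oriented text tools can't do reliably once fields contain commas or quotes.

```python
import csv, io, json

# Even a trivial CSV -> JSON conversion needs real parsing: a quoted
# field like "Doe, Jane" breaks any naive split-on-comma pipeline.
def csv_to_json(text):
    rows = list(csv.DictReader(io.StringIO(text)))
    return json.dumps(rows)

csv_to_json('name,age\n"Doe, Jane",36\n')
# -> '[{"name": "Doe, Jane", "age": "36"}]'
```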

Srini Kadamati 2020-09-08 18:27:28

Roben Kleene can you clarify what you mean by:

For a long time, I've been asking myself the question about why more programmers don't use programming to solve their own problems,

Eric Gade 2020-09-08 18:27:30

No Roben Kleene I think you are right; most paid people are not doing it

Srini Kadamati 2020-09-08 18:27:33

what do you mean by "their own problems"?

Eric Gade 2020-09-08 18:27:54

But most paid people are working in environments that are clumsy, archaic, and in many ways hostile

Srini Kadamati 2020-09-08 18:27:57

problems b/c of programming / their code? Or like problems in their life (e.g. too much time spent running errands)

Srini Kadamati 2020-09-08 18:28:12

or like "I have too many meetings"

Eric Gade 2020-09-08 18:28:27

Personally I don't write too many bash scripts because I'd rather shove a meat thermometer into my ear and wiggle it around. And if I'd rather do that, then I'm always up for just manually doing whatever the task is.

Srini Kadamati 2020-09-08 18:28:55

yeah I do 0 bash scripting myself

Srini Kadamati 2020-09-08 18:29:02

often end up doing a lazy python script

Roben Kleene 2020-09-08 18:29:37

Writing a bash script to do a huge find and replace programmatically, making scripts to quickly open projects you open over and over again, customizations (e.g., I always add a shortcut to open hyperlinks from my text editor). It's more lots of small things rather than one big thing.
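The "huge find and replace" kind of task Roben describes is only a few lines in any scripting language. A hedged sketch (the function name, defaults, and file pattern are mine, purely for illustration):

```python
from pathlib import Path

def find_replace(root, old, new, pattern="*.txt"):
    """Replace `old` with `new` in every file under `root` whose
    name matches `pattern`; returns how many files were changed."""
    changed = 0
    for path in Path(root).rglob(pattern):
        text = path.read_text()
        if old in text:
            path.write_text(text.replace(old, new))
            changed += 1
    return changed
```

The point of the thread stands either way: the script is trivial, yet most people never reach the point of writing it.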

Eric Gade 2020-09-08 18:29:44

Then I think "maybe there is some linux program that does this" and I'm off to the races trying to find a command name that makes sense, usually dropping vowels for no reason

Eric Gade 2020-09-08 18:30:27

Yeah I think you'll find among Emacs users a higher concentration of people that do what you are suggesting. And I'd say part of the reason for that is the quality of the environment

Kartik Agaram 2020-09-08 18:30:30

Jeez, this thread is going fast. I'll wait for it to slow down, except to say: I don't understand Roben Kleene's original question, and why it assumes customizing is not programming, and why extensions are not programming. Why is only writing new full programs considered programming? Most of us only modify existing programs in our day jobs.

A founding ethos of programming is to do as little of it as possible. That's broadly true even in my vision of utopia.

Roben Kleene 2020-09-08 18:31:02

I always think of @Steve Krouse's example of an "email construction kit". I keep thinking, wait a minute, programmers already have that (Emacs) and they barely use it, and they already know how to program...

Srini Kadamati 2020-09-08 18:31:08

"Writing a bash script to do a huge find and replace programmatically, making scripts to quickly open projects you open over and over again, customizations (e.g., I always add a shortcut to open hyperlinks from my text editor). It's more lots of small things rather than one big thing."

Now I see what you mean, kinda like "automate the boring things".

Eric Gade 2020-09-08 18:31:14

I'm with Kartik Agaram here -- configuration is also "programming"

Kartik Agaram 2020-09-08 18:33:05

These seem worth reflecting on:

Roben Kleene 2020-09-08 18:33:19

Kartik Agaram Those are all programming! My point is mainly that they stay low on the hierarchy, and don't progress to writing their own customizations. E.g., when we talk about end users programming, presumably that's more than cut-and-pasting a few common snippets found online, and that's about as far as most programmers go with their own environment (while I agree that 100% qualifies as programming).

Roben Kleene 2020-09-08 18:35:55

@Eric Gade I agree regarding Emacs; part of my inspiration for this question is why isn't it more popular? And, furthermore, VS Code, which is arguably the most popular text editor ever, specifically makes these types of customizations harder (there's no .emacs equivalent).

Eric Gade 2020-09-08 18:36:59

Emacs is not more popular because it is designed to be a system that is text only

Eric Gade 2020-09-08 18:37:16

But the systems everyone uses -- even programmers -- have robust GUIs

Roben Kleene 2020-09-08 18:37:38

Re the xkcd, if we haven't been able to make automation a worthwhile time investment for experts, then what chance is there we'll be able to make it worthwhile for non-programmers?

Eric Gade 2020-09-08 18:38:44

I think your question is a very good one actually

Eric Gade 2020-09-08 18:39:04

But I'd say that perhaps professional programmers are "experts" in many of the wrong things

Eric Gade 2020-09-08 18:39:56

Frankly if I was building a new computing system from the ground up, I would not want to work with most people trained in CS these days, or anyone who has extensive software development experience

Eric Gade 2020-09-08 18:44:24

My litmus for a computing system in 2020, both as a professional programmer and as a user, is this: can I easily create a button that does some task I want when I click it?

Eric Gade 2020-09-08 18:44:38

The answer is no, both for the programmer and for the user

Eric Gade 2020-09-08 18:44:52

On some personal computing systems this used to be quite easy for both

Jared Windover 2020-09-08 18:54:02

For me the answer is that nothing feels like a stable foundation on which to build. If somebody wants to pay me to spend a majority of my time fixing previous code I and others wrote, that's fine, but there's something deeply frustrating about it when you see the extent to which your tools get in the way of your own vision. I don't want to live in the command line, but that's about the only place I feel confident that what I build can persist.

Andrew F 2020-09-08 19:00:38

... if we haven't been able to make automation a worthwhile time investment for experts, then what chance is there we'll be able to make it worthwhile for non-programmers?

"There never was much hope, just a fool's hope"... but for some reason we're all in this Future of Coding thingy anyway. I see this as the exact problem we're here to solve.

Alexey Shmalko 2020-09-08 19:29:49

I actually do automate a lot of things (and I use Emacs), mainly because Emacs makes it easy to do.

For me, the important aspect is extensibility. In many cases there is an almost-good solution, but with one or two things I need to do differently. And too often the cost of modifying the solution is about as high as reimplementing it from scratch. That is especially true for full-blown GUI and Android apps.

With Emacs, you can always take a half-baked solution, modify its code on the fly, and you're done.

If I were to speculate about why others don't do the same and automate easy tasks, I would say that the tools have a high learning curve. It took me quite a while before I started to feel proficient scripting Emacs. There is also an attitude problem: not all programmers know they can extend their editor and what's possible, so they don't even look for opportunities.

Garth Goldwater 2020-09-08 20:01:01

in particular, i think that the majority of the "old european cities" we're living in were built for batch processes. most of the tasks i'd like to automate have to do with user input (the click-a-button example is really stark and appropriate), and that user input isn't the start or end of the program. i'd really like pretty much any environment that gave me live insight into how e.g. keyboard shortcuts and button presses flow across the system, and similar insight into graphical primitives and responses. i can build data abstractions pretty much anywhere, but gui and input processes always feel like i'm running a marathon with petulant snakes around my ankles

Chris Knott 2020-09-08 20:16:13

Another relevant xkcd;

https://xkcd.com/974/

I think currently automation appeals to people who actually enjoy the problem solving aspect. The fact that it may or may not be massively more efficient is secondary.

I wrote a python script to combine image files into 2x2 combined images of 4, because it was slightly cheaper to print those as "single" images for my wedding than use the company's own quarter size print option. Probably only saved a few quid in total but I enjoyed the task.

This instinct is kinda orthogonal to people who like programming/computers.

I know people who come up with elaborate schemes for making sandwiches/wrapping presents/stuffing envelopes etc en masse. These type of people are the ones who will jump on end user programming when it becomes easier.

A lot of users though, aren't even using copy-paste, select all, find-replace etc yet

Tak Tran 2020-09-08 20:46:49

Maybe because we are programmers, we know the cost of creating/maintaining the things we make, so we'd rather spend our time doing other things. One quote that has stuck with me recently is, "In software, anything is possible, but nothing is free". Even though I can certainly optimize some of the things I want to do, I don't want to pay the time and effort to do so.

Eric Gade 2020-09-08 20:48:47

Garth Goldwater I will add that when I'm working on Smalltalk projects, I tend to make those kinds of buttons all the time, and just pop them out onto the desktop. I wish I could do it in macOS

Eric Gade 2020-09-08 20:50:37

Chris Knott There is lots of evidence from the past that the kind of division you are describing is more of a gradient. Apple used to be really good about this, not only with Hypercard, but also with Applescript, which they have shamefully allowed to die on the vine. Macs were the desktop publishing platform of choice because "users" could write Applescripts for performing tasks across applications like Illustrator and Quark etc

Eric Gade 2020-09-08 20:52:35

What I think is important in future systems is that they should be discoverable in the sense that you can "peel back a layer" and see how things work in some slightly more complicated context, and then when and if you are interested, peel back a subsequent layer. Ideally this would have manifested itself in, for example, a MacOS where the "top layer" was all described in HyperTalk/Applescript

Eric Gade 2020-09-08 20:52:42

But they never went that far

Eric Gade 2020-09-08 20:53:08

Now if you want to make your own buttons in macOS, say, you have to download xcode, learn about unix, build processes, and a full on programming language

Eric Gade 2020-09-08 20:53:33

There is nothing in between, and for no good reason!

Chris Granger 2020-09-08 21:34:55

Roben Kleene I eventually came to believe that it'd take a pretty significant societal shift for it to be realistic now. The time that many of us base our dreams on (the days of HyperCard and VB6) just doesn't exist anymore - there are now endless distractions and more apps to do what you want than you could ever begin to look at. In the earlier days of computing there was just.. less stuff. And it was more obvious that if you wanted something maybe you'd put it together yourself. Even with the perfect toolset, most people aren't aware that they even could change things, and even when you tell them they can, they don't know what to do with it.

Fortunately if you take a longer view, there is one clear exception to that: contexts where you can use programming to build stories/worlds. Games like Minecraft, Roblox, etc. expose kids to programming in a way that isn't "let's do this faster," but instead through the joy of making universes to explore. If I were going at this now, I'd be focused on building a path for the folks who grow up in those worlds to apply what they learned to the more mundane "adult" life.

Chris Granger 2020-09-08 21:39:02

We can definitely continue to lower the floor of programming and open it up to more people, but to achieve the true ā€œend-user programmingā€ goal, weā€™d have to significantly change peopleā€™s relationship to computing. Realistically I think thatā€™d take growing a generation.

Srini Kadamati 2020-09-08 22:00:45

to second Chris Granger: in the early days, there werenā€™t these dominant platforms / models that everyone was already used to having. I think thereā€™s optimism here if you look at other areas entirely, where the experience & tooling is very different for an end-user vs a professional.

Cooking is the best example I keep coming back to. Home chefs / amateurs use scaled-down pots & pans. Industrial kitchens are more like factories and use giant equipment that an ā€˜end-userā€™ would never really consider buying. But both of these are 100% valid industries and crafts. Thereā€™s more content targeting end-user home-chefs than there is for elite chefs. In theory, restaurants could provide 99% of the meals that all humans need, but cost, cultural traditions, etc. are still big barriers.

This has preserved home-cooking for literally centuries! If restaurants / central kitchens could provide all meals at the same cost as you making it, I predict weā€™d see a huge dip in home-cooking. Most people donā€™t make their own furniture anymore but pretty much everyone still cooks to some degree (even if youā€™re just microwaving food). Necessity is really the driver / mother of invention here. Home-cooking is still the best hammer for most peopleā€™s food problems

Shalabh Chaturvedi 2020-09-08 22:14:49

This just showed up on my feed and seems relevant.

https://twitter.com/tayroga/status/1296538378255491072

Essentially programmers (and myself) do some programming for our own problems. But the systems we have aren't fundamentally designed for end user programability. So the burden is large. We do it more where some software has been designed for extension in some respects only - e.g write a plugin for your editor.

šŸ¦ Taylor Rogalski: Alan Kay on end-user programming (WWDC 1990)

Srini Kadamati 2020-09-08 22:18:54

I like Alanā€™s heuristic of fist / hand of code, and then a ā€œpage of codeā€. If adding something takes more than theseā€¦ the burden might be too much. Anyone (even kids) can quickly load code-context into their short-term memory and extend / modify software. Past that, youā€™re closer to a programmer

Konrad Hinsen 2020-09-09 06:38:53

Speculation: The vision that Alan Kay describes in this talk hasn't been realized in mainstream software because it doesn't fit with how industrial societies see production as separate from consumption. We even use the economic link between the two (money flow, GDP) as the measure of collective wealth. People solving their own problems don't contribute to GDP, so their work has no value for the economy.

If there is some truth to this, the good news is that industrial societies are slowly realizing that maximizing GDP isn't such a great idea. And in a way the FOSS movement is a first sign of this in the software world.

Srini Kadamati 2020-09-09 13:09:27

Konrad Hinsen I agree but I think my comment from earlier adds another tidbit here:

https://futureofcoding.slack.com/archives/C5T9GPWFL/p1599602445178200?thread_ts=1599588394.135900&cid=C5T9GPWFL

Thereā€™s no economic story around end user programming. In my cooking analogy I give, making food yourself has pretty much always been cheaper than buying restaurant or mass produced food. Thereā€™s also been strong cultural ideals around cooking. But if it was ONLY the culture, Iā€™d suspect very few people would cook.


Srini Kadamati 2020-09-09 13:10:30

Commercial software is so so cheap because of the 0 distribution cost of software. The ā€˜junk foodā€™ / ā€˜processed foodā€™ of software is good enough for most people most of the time

Srini Kadamati 2020-09-09 13:13:29

I think itā€™s interesting to think about areas, users, personas, use cases, etc. where customization, culture, and cost are aligned towards EUP.

  • Creative art. Sure many folks use Photoshop and what not, but people are willing to experiment with new tools if they can express themselves in new ways.
  • Kids / K-12. Thereā€™s definitely pressure to ā€œteach Python for jobsā€ or w/e but many kids intrinsically donā€™t yet care about jobs and more about creative activities / things that solve problems now for them (e.g. setting up Minecraft servers or making their own games)
  • Non-technical users that dream of making technical things. I found out from this Slack that a lot of the award winning game Hollow Knight was created using Playmaker (no-code / wires & boxes editor for Unity). The creators didnā€™t let their lack of deep coding skills stop them!
  • Data science. This is my world, and the trend here is to make everything work via a SQL interface, literally. SQL is learnable by pretty much anyone IMO (my mom struggled with Java and Python but is better at SQL than me!). Getting analysts to learn Python is a big ask and people worry about mistakes that could be made (plus that type of code is harder to audit). Also, Iā€™ll throw in the obligatory popularity of Excel here! You still got the ML-in-Python people but most analysts and less technical people donā€™t necessarily want Python / R right away.
  • Note taking / personal organization. Thereā€™s a subset of note-taking (2nd brain, Roam, Digital Garden etc) who are interested in adapting their current tools even more to their personal workflows & preferences and donā€™t mind simple scripting.
Eric Gade 2020-09-09 13:18:08

When Chris Granger says we need a cultural change, heā€™s right. Our culture is utilitarian and wrapped in short-termism. This is reflected in computing as well.

Eric Gade 2020-09-09 13:19:01

FOSS is good in some ways, but kind of counterproductive structurally: we know from the 60s and 70s that in order to get truly qualitative leaps in computing, we need to fund people for longer periods of time with not so many restrictions. The current economic thinking precludes that kind of funding model

Eric Gade 2020-09-09 13:19:42

Additionally, as Srini Kadamati has pointed out, our education system has also become a victim of this cultural shift. Students are not educated, but rather trained for jobs. Hence the obsession with ā€œteaching kids to codeā€

Eric Gade 2020-09-09 13:20:59

FOSS kind of reinforces all this by providing increments upon the leaps of the previous generation, without providing a means for future leaps. Corporate sponsorships and people doing projects in their free time isnā€™t going to cut it. They need the means to follow sometimes errant paths for years at a time, and they need to be funded during that time so they can concentrate

Eric Gade 2020-09-09 13:22:42

I respect the whole free software / libre movement, but if there is not a change to the greater political economy, itā€™s going to be a movement that noodles around in unix and other partial techs from the 70s until the end of time

Srini Kadamati 2020-09-09 13:44:34

I share all of your thoughts @Eric Gade and sentiments. Thereā€™s a pessimistic and an optimistic view, but the nature of the change of revolutions seems to be about the niches that adopt these ideas.

The good thing with some of the areas I listed above is that I feel / think that those end-users are a bit more patient. They want to do more with less and theyā€™re willing to try tools that are different but can enhance their workflow.

Thereā€™s still the ā€œintegration with existing tools / workflowā€ problem though. In data science, any fancy EUP tooling ultimately still has to talk to a clunky big database. With art, people expect similar formats as the output (SVG, PNG, MP4, etc.).

Roben Kleene 2020-09-09 15:41:37

Srini Kadamati Great examples! I'm particularly interested in the Hollow Knight story, if you happen to have a link or any other source I'd love to read more about it.

Roben Kleene 2020-09-09 15:52:03

The theme of the responses here seems to be: Only a small percentage of people are interested in using programming to improve their own workflow, but many more people are interested in using programming to build things to share. Which I think resolves my initial conundrum about programmers not using programming to solve their own problems: I was looking at scripting and customization, but what I probably should have been looking at is things like side projects and personal sites. Which to me seem much more popular than scripting/customization?

There might be a lesson for end-using programming here too: It's probably better to focus on tools that let people create things for other people than it is to focus on anything that you'd call "automation". For whatever reason most people aren't interested in automation (maybe just because it's not worth the time, e.g., the relevant xkcds)? But they are interested in building things to share, e.g., see the Hollow Knight example above. This seems consistent with the no code movement going on as well.

Eric Gade 2020-09-09 16:20:25

Perhaps expressiveness > utility

Konrad Hinsen 2020-09-09 16:23:34

Lots of good contributions here... just one more comment on the "most people don't need more than ready-made apps" argument: that's really a cultural issue. People need food and shelter, plus whatever it takes to become a worthy and respected member of their society. Software is new; it appeared when our societies were already well into an industrial mindset of production and consumption. Other DIY activities, including cooking, have been part of our culture since long before the industrial era. One reason people cook at home is that they grew up seeing other people do it, and they have seen the advantages (economic, social, etc.) it brings. In a culture without end-user programming, it's not surprising that few people miss it.

Roben Kleene 2020-09-09 16:28:36

Thanks! Made in Unity with Playmaker (which I'd never heard of) https://assetstore.unity.com/packages/tools/visual-scripting/playmaker-368 Just amazing!

Srini Kadamati 2020-09-09 16:32:11

The theme of the responses here seems to be: Only a small percentage of people are interested in using programming to improve their own workflow, but much more people are interested in using programming to build things to share. Which I think resolves my initial conundrum about programmers not using programming to solve their own problems: I was looking at scripting and customization, but what I probably should have been looking at is things like side projects and personal sites. Which to me seem much more popular than scripting/customization?

I would nuance this more. Clayton Christensen, the Jobs-to-be-Done framework, and all this other stuff from product management land (disclaimer: I used to be a PM) emphasize that people donā€™t care about your product. They care about making progress on a problem they have, and they may hire your product / service or another one based on how well it solves their needs.

Cooking food solves many problems for people (cost to feed family, taste ā€” restaurants canā€™t quite replicate home food taste for many, convenience once you know how to cook, feeling of self-sustenance, social ā€” cooking for others, and probably 5 more). Getting food delivered is a worse proposition in many cases (except when youā€™re busy or money isnā€™t a concern or you want specific cuisine). Right now, most programming doesnā€™t solve problems people have today. Itā€™s not ā€˜economicallyā€™ a better solution for anything for most people.

using programming to improve their own workflow, but much more people are interested in using programming to build things to share.

There ARE people who care about improving their workflow. Bankers and traders šŸ™‚ they learn the shit out of Excel and use it to automate their workflow, check for errors, etc. B/c it literally saves them time and helps them make more money or get an advantage in the market.

Roben Kleene 2020-09-09 16:47:08

Regarding bankers and traders, do you think a higher percentage of those groups customize their workflow with scripts than programmers do? Sort of my starting thesis is that for programmers, using programming to improve their own workflow is niche. I'd be very curious if there are other industries where using programming to improve their workflow is mainstream (for that industry).

Srini Kadamati 2020-09-09 16:48:46

no idea tbh, but I know a large % of them maximize the hell outta Excel. They really learn it well. Maybe programmers do this in other ways (customize their Emacs setup, or their shell setup, or w/e for little wins here / there).

Srini Kadamati 2020-09-09 16:49:23

I mean we have an eng team at work and theyā€™re constantly looking for ways to improve the cloud deployment infrastructure, speeding up test suite, etc. I categorize those as ā€˜workflow improvementsā€™ even though they accrue to all engineers in an org instead of just the engineer themselves

Shalabh Chaturvedi 2020-09-09 16:52:12

Re "FOSS being counterproductive strategically" - this twitter has interesting takes: https://twitter.com/jonathoda/status/1104522585092481024

Hypothetically lets say what's holding us back is poor forms of composition and the existing ontology of computing, so we need to design new forms that scale up better. However to be adopted we must be compatible with the existing world and so "compose" with it - so we perpetuate the existing composition and ontology models. This affects both commercial and free/open source software. Might affect side projects more because of a greater need to 'fit in'.

šŸ¦ Jonathan Edwards: Open source slows progress in software technology by demonetizing it. https://twitter.com/AmarachiAmaechi/status/1104383478483902464

šŸ¦ Amycruz šŸ‘©šŸ½ā€šŸ’»šŸ‘©šŸ½ā€šŸ’»šŸ‘©šŸ½ā€šŸ’»: UNPOPULAR OPINION: TECH EDITION

Srini Kadamati 2020-09-09 16:57:42

yeah I saw Jonathanā€™s tweet when it went out and it really made me ponder! At work, weā€™re commercializing / stewarding this viz / BI tool called Apache Superset - https://github.com/apache/incubator-superset but right now the #1 reason people use Superset and leave PowerBI / Tableau is because ā€œweā€™re freeā€ lol. We havenā€™t leveled up yet on the value front

Eric Gade 2020-09-09 17:01:54

Iā€™ve described it this way. I think of FOSS, and perhaps establishment computing today, as being like the medieval scholastics. They certainly produced a lot of work, even original work in some sense, but it was all confined to Aristotelian thinking, and their contributions were increments on rehashing Aristotle. For centuries.

Eric Gade 2020-09-09 17:02:32

Thereā€™s lots of cool stuff out there in FOSS that makes a lot of different things possible in computing (though we should be honest that most of those things are commercially useful)

Eric Gade 2020-09-09 17:02:47

But that doesnā€™t mean FOSS isnā€™t rehashing unix forever. It seems thatā€™s what is happening

Eric Gade 2020-09-09 17:03:03

Look on HN where any discussion of a ā€œnew operating systemā€ is really just a new linux distribution

Roben Kleene 2020-09-09 17:05:00

Regarding:

I mean we have an eng team at work and theyā€™re constantly looking for ways to improve the cloud deployment infrastructure, speeding up test suite, etc.

Personally, I've struggled to get other engineers interested in working on these things, but of course that's extremely anecdotal. I'd love to hear perspectives from others on this.

Roben Kleene 2020-09-09 17:07:32

Regarding Unix, the general pattern seems to be moving towards Unix systems, e.g., the Windows Subsystem for Linux. My interpretation of that is that consumer computers seem to be going the way of locked-down smartphones, so traditional computer operating systems are mainly for programmers and other "heavy" workflows (media editing, etc...). There's just too much infrastructure on Unix systems supporting those workflows to do anything else.

Tak Tran 2020-09-09 17:12:05

I mean we have an eng team at work and theyā€™re constantly looking for ways to improve the cloud deployment infrastructure, speeding up test suite, etc.

Personally, Iā€™ve struggled to get other engineers interested in working on these things

I think when an org is big enough to have an internal products or platform team, they would work on these optimization/efficiency/infrastructure tasks.

Roben Kleene 2020-09-09 17:44:05

In my experience, once an org reaches a certain size, there will be a dedicated team to maintain this stuff, but until then no one really wants to work on it. (Which I find so odd, because if there are any problems with it, if you're a dev, you're feeling those problems on a daily basis.)

2020-09-09 20:07:20

I agree, based on working at a company with about 200 engineers. In a startup there is a lot of firefighting and pressure to launch / work on user-facing products, and infrastructure can be just good enough. When you have permission and management support (and even incentives) to make infrastructure really solid, then you can fix a lot of systemic problems.

At the opposite end, you can go on infrastructure vision quests, making something that other devs don't want to use, etc.

larry 2020-09-09 23:59:58

It's probably been said in the other 109 comments, but more people would shell script if the tool names and interfaces weren't so incredibly unmemorable. Is anti-memorable a thing? (I just decided it is.) The interface kinda stinks.

Git has an anti-memorable user interface, but by building GitHub and GitHub Desktop on top, it's now pretty easy to use without the fear of never seeing your code again. More people use git (I'm guessing) than shell scripts, even though its command-line interface is irregular, dangerous, and unpredictable.

Garth Goldwater 2020-09-10 01:10:26

was just checking out some of bessemerā€™s recently released investment memos and i think an argument can be made that Shopify is actually a great example of an end-user programming success story: https://www.bvp.com/memos/shopify

šŸ”— Shopify

Srini Kadamati 2020-09-10 01:16:20

yeah I agree. Shopify is a beast of a company and they arenā€™t interested in trying to centralize everyone onto a vague platform like Medium did. They embraced ā€œthe edgesā€

William Taysom 2020-09-10 06:02:14

I might be missing it in the thread here, but one reason even programmers don't script things is that there's no good way to do it, a huge gap between interacting with a GUI and automating the interactions.

Konrad Hinsen 2020-09-10 06:17:48

Coming back to this thread, I am wondering: what is that "programmer" category? Professional software developers, perhaps? Is that a sufficiently homogeneous group in their professional practices to discuss how much they develop software for their own needs?

yoshiki 2020-09-10 23:56:07

My perspective: I did more "end-user programming" before I was a programmer, but knew enough programming to be useful. This was at an office job, where everything ran on Excel. I was doing lots of web scraping, data processing, making small tools for tedious tasks, etc.

Then I got a job doing programming, and didn't have any material problems to solve with programming any more!

yoshiki 2020-09-11 00:00:21

So, to the question:

"Does end-user programming ever have a chance of succeeding for non-programmers to solve their problems if programmers themselves aren't using programming to solve their problems?"

my answer is:

"yes! Non-programmers have way more problems solvable with programming than programmers do"

yoshiki 2020-09-11 00:11:30

To the specific point in the original post about some programmers not customizing their programming tools:

one hypothesis is that programming tooling is already pretty well optimized for the output that the industry wants from programmers. Programmers are customers too: It only takes a small amount of dedicated people making tooling like IDEs, text editors, plugins etc for the majority to benefit from the care and thought put into these tools.

Chris Knott 2020-09-11 11:00:09

My experience definitely matches yoshiki's (although I went the other direction, out of software development).

The amount of menial, manual use of computers that goes on in the world is a travesty. I'm talking about stuff like physically typing filenames into a Word doc.

It's long been recognised that a manager might send emails to her staff, instructing them to do some task on a computer, in a way that is almost pseudocode. e.g. "can you go through every sales report on the K drive for February and check if any of the unit codes have expired?".

End user programming should be looking to eliminate the middle human from this kind of human->human->computer situation.

This is the type of thing that SQL was meant to eliminate, but it didn't. I think the issue is that solutions like SQL demand too much subservience in how information is managed: they want it stored in a way that is different from how humans would naturally do it. They also demand labour up front (at the point of storing), for no immediate reward, which is always a foolishly optimistic thing to require.

I have high hopes for a system that approaches the OS from the same perspective as the user, for example all text OCRed, structure automatically inferred from physical layout etc.
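That manager request really is almost pseudocode; a sketch in Python of what the "middle human" does, where the folder layout, the filename convention, and the expiry list are all invented for illustration:

```python
from pathlib import Path

# Hypothetical expiry list; in a real office this would live wherever
# the unit codes are maintained, not hard-coded here.
EXPIRED_CODES = {"U-1041", "U-2210"}

def expired_codes_in(text: str) -> set[str]:
    """Return every expired unit code mentioned in a report's text."""
    words = text.replace(",", " ").split()
    return {w for w in words if w in EXPIRED_CODES}

def check_february_reports(folder: Path) -> dict[str, set[str]]:
    """'Go through every sales report for February and check if any of
    the unit codes have expired' expressed as a loop over files."""
    findings = {}
    for report in sorted(folder.glob("*-feb-*.txt")):
        hits = expired_codes_in(report.read_text())
        if hits:
            findings[report.name] = hits
    return findings
```

The point is not this particular script but the gap it spans: the manager can already state the task in English, and the middle human exists only to perform this translation.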

Eric Gade 2020-09-11 13:35:11

Chris Knott Totally agree about the manager request example. I did a freelance job a couple of years ago for a non profit. They really just needed a specific way to process bank account information from several online services into an Excel template they used. The people working there knew exactly ā€œwhat needed to happen,ā€ but the tools on their system had no way for them to express it, and the services they were using were siloed from each other

Eric Gade 2020-09-11 13:36:04

In the end I had to make them a quick electron app that did it, and it was really eye opening how complicated the programming task was even though the real task was conceptually simple, and all the ā€œpartsā€ were already there in the system

Eric Gade 2020-09-11 13:36:45

As time has passed, I consider the design that got them (and myself) into that position to be hostile. But it gave me work, just like poor design gives programmers work every day across the world

Roben Kleene 2020-09-11 15:43:54

Regarding:

one hypothesis is that programming tooling is already pretty well optimized for the output that the industry wants from programmers. Programmers are customers too: It only takes a small amount of dedicated people making tooling like IDEs, text editors, plugins etc for the majority to benefit from the care and thought put into these tools.

This is an interesting perspective, and I believe it's true to an extent, e.g., while almost every powerful application from Excel to Photoshop to DAWs has a cottage industry of extensions that surround it, it's certainly true that developer tools dwarf the others in quantity of existing customizations.

But I still see a lot of contrary evidence. E.g., it still seems that most programmers, when they encounter a problem with a programmatic solution, they'll tend to choose a manual solution. The canonical example is automated testing. While automated testing has certainly become popular, it still seems to be in the "eat your vegetables" category, instead of something programmers just do naturally. Which is odd if you think about it, because manual testing is just that, manual, and programmers are master automators, so...?

More personally, what I find most fulfilling about writing my own scripts and customizations is that I can make the software behave the way I want it to. When you're using someone else's customizations you're always at the mercy of the creator's decisions. I don't think that wishing your tools worked differently is rare, that's a sentiment that I feel almost every computer user has, but programmers are the only group that's empowered to change how their tools work today, and for the most part, they still choose not to.

Srini Kadamati 2020-09-11 15:50:00

Iā€™ll challenge some of your assumptions here Roben Kleene not sure if this is useful, but could be another perspective!:

  • Automating isnā€™t always better than manual. Clicking through a new UI sequence can give you the ULTIMATE end-user gut check. Things can still go wrong (caching, what have you) but less is likely to go wrong. You can build a burrito robot, but unless you taste the burrito at the end with your own mouth, you just donā€™t know if everything worked as you thought it would.
  • Automation is often less concrete / tangible. Similar to the first one, but automation is also more abstraction. Abstraction is complexity. Even if the automation script is something simple, thereā€™s overhead now to maintain a list of automations. Perhaps we need better ā€˜automation interfacesā€™ where the overhead is brought to 0. Analogy: I donā€™t think about not eating cookies in my day to day; I just donā€™t keep cookies in the house. 0 overhead!
Roben Kleene 2020-09-11 15:51:55

I agree 100% with both of these points. I'd be the first to admit that I like automating things because I don't like doing them manually, not because I think it's objectively better by any other metric besides my own personal preference.

Srini Kadamati 2020-09-11 15:53:38

this is a long threadā€¦ it could be interesting to fork and start a new thread and this time include like 5-10 concrete examples in your life / things youā€™ve seen other programmers do Roben Kleene etc to spark the discussion!

Roben Kleene 2020-09-11 15:54:31

I'd also say that the question "how can doing things manually still be better than automating them in 2020?" is one of the central questions I'm grappling with, because I believe manual still is better in many, possibly even most, cases.

Kartik Agaram 2020-09-11 16:49:14

Roben Kleene: Nooooooo! šŸ˜„ It seemed like you were moving towards the light here:

I'd be the first to admit that I like automating things because I don't like doing them manually, not because I think it's objectively better by any other metric...

But then you immediately put your blind spot back on in the next comment:

how can doing things manually still be better than automating them in 2020?

That framing is only going to lead you in circles. As Srini Kadamati pointed out above, and as I tried to say in the overflow thread ā†Ŗ:

  • The line between 'manual' and 'automated' is fuzzy on the computer. If I switch windows and type a command on the shell, I'm still making use of automation. Just less of it.

  • Adding levels of automation always has costs. If it seems always a good idea to you, just wait a few years. We can improve lots of things here, but it's just not a reasonable goal to aim for "adding levels of automation should always improve life". There will always be situations where doing something manual is simpler, faster, less alienating. Start developing some warm and fuzzy feeling for doing things manually.

  • Desire for automation is subjective to some extent, as you pointed out in your first comment.

I'd say join me over here where the goal isn't automation but comprehension. Practice throwing kicks not because kicks are always a good idea, but just so you build up judgment on when to use a kick, and so you can do a lot more with a single kick when the moment arrives.

(Movie recommendation: https://www.imdb.com/title/tt0061770)

Roben Kleene 2020-09-11 17:55:32

Adding levels of automation always has costs. If it seems always a good idea to you, just wait a few years.

I think you're misunderstanding me here: I'm specifically saying I don't always think automation is a good idea; I'm saying that it's the way I prefer to solve problems. It's an inclination that has more to do with me than it does the problem. Correspondingly, I tend to choose to solve problems my approach is a good fit for. I do a lot of work with frameworks, where automated testing is more important. I tend to avoid issues in the UI layer, since manual testing is usually more efficient there. (I actually love working on design-system-level UI stuff, but bugs that exist between the UI and the data layer are probably my least favorite thing.)

I would love to hear more about this "love of doing things manually" though.

Eric Gade 2020-09-11 18:57:03

I would love to hear more about this ā€œlove of doing things manuallyā€ though.

Iā€™m thinking that if the environment you are in is introspective and malleable enough, the manual approach is just so easy

Eric Gade 2020-09-11 18:57:16

And the lines between manual / automatic become hazy

Roben Kleene 2020-09-11 19:09:47

Sure... I agree that line can be fuzzy, but I guess I'm not sure how useful that distinction is, e.g., I'd put someone using a malleable environment / using automation on one side of the coin, but most programmers are still on the other side of the coin, where they're not using a malleable environment (or at least leveraging it), and they're not automating.

Eric Gade 2020-09-11 19:12:56

I think someone should do this as a real study

Eric Gade 2020-09-11 19:13:13

And collect background information about the programmers in question

Roben Kleene 2020-09-11 19:19:39

Also, if anyone wants to share their definitions or examples of malleable environments (i.e., environments that are so efficient that they remove the advantages of automation), I'd love to hear about that

S.M Mukarram Nainar 2020-09-11 19:49:20

The most obvious thing that comes to mind is text editor keyboard macros

S.M Mukarram Nainar 2020-09-11 19:51:08

You can record one, use it over and over, and save it if you want.

Importantly, you are personally in control at every step, and it is a very lightweight abstraction.
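A toy version of that record-and-replay loop, just to make the "lightweight abstraction" point concrete (the class and its API are invented for illustration, not any real editor's macro system):

```python
class MacroRecorder:
    """Record editing steps as they are performed, then replay them.

    Each step still runs immediately when recorded, so the user stays
    in control and sees every change as it happens."""

    def __init__(self):
        self.steps = []
        self.recording = False

    def do(self, step):
        if self.recording:
            self.steps.append(step)
        step()  # executed right away, recorded or not

    def replay(self):
        for step in self.steps:
            step()

buf = []
m = MacroRecorder()
m.recording = True
m.do(lambda: buf.append("x"))   # buf == ["x"] immediately
m.recording = False
m.replay()                      # buf == ["x", "x"]
```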

Eric Gade 2020-09-11 19:53:11

Ditto for doing things in Emacs lisp. Once you have the lisp made for it, turning it into a command is easy

Roben Kleene 2020-09-11 19:54:15

Yeah, that's the first thing I thought of too. Keyboard macros seem pretty clearly automation to me? (And correspondingly, would be a feature most programmers don't use). It sounds like there's a concept of a malleable system that's not based in automation here, I'd love to understand what that is. (And I'd love to hear any more examples of both.)

Jared Windover 2020-09-11 20:22:10

While I object to needing your IDE to have thought of the things you want, JetBrains does a pretty good job of thinking of the things that my coworkers and I want, and I think we do leverage it to an extent that it is ā€œautomationā€. Multi-cursor, regexp-replace and structural-replace are some examples that come to mind.

Andrew F 2020-09-12 00:56:11

These text editor examples highlight something: good automation blurs the line between manual and automated action. Specifically, it makes the automation invisible by making it so easy it feels like manual work (this might just be my view of what you folks are talking about re malleable systems). Multi-selections are a great example. You can think of them as automating repetitive identical edits, but they feel like just cutting with a sharper knife, not programming a chopping machine.

Possibly one of the key things that makes them feel manual is that you can see the changes as they're made. Not exactly a new idea (oh hai Bret Victor), but maybe a new perspective on why it's important. If every bulk edit operation on a doc or database ran in a transaction with a live preview of the changes, people might feel more comfortable playing with them.
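The "transaction with a live preview" idea could be sketched as a bulk edit that reports what it would change and touches nothing unless you explicitly commit. The helper name and shape are hypothetical:

```python
import re

def bulk_replace(docs, pattern, repl, commit=False):
    """Dry-run bulk edit: return (preview, new_docs).

    With commit=False (the default) the returned docs are unchanged,
    so you can inspect the preview before deciding to commit.
    """
    preview = []
    new_docs = {}
    for name, text in docs.items():
        new_text, n = re.subn(pattern, repl, text)
        if n:
            preview.append(f"{name}: {n} replacement(s)")
        new_docs[name] = new_text if commit else text
    return preview, new_docs
```

Seeing the full list of affected documents before anything changes is what might make people comfortable playing with bulk operations.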

yoshiki 2020-09-12 01:51:25

but programmers are the only group that's empowered to change how their tools work today, and for the most part, they still choose not to.

Roben Kleene I don't think it's so much that they choose not to, it's often that they don't need to.

Roben Kleene 2020-09-12 11:40:03

yoshiki Do you have any thoughts on preceding part?

When you're using someone else's customizations you're always at the mercy of the creator's decisions. I don't think that wishing your tools worked differently is rare, that's a sentiment that I feel almost every computer user has,

E.g., my observation is that developers tend to be frustrated with their tools (especially more experienced ones). Do you not find this to be the case?

Roben Kleene 2020-09-12 11:41:48

(I guess I should add that VS Code appears to be the first very popular text editor that developers don't seem to be frustrated with, so perhaps my perspective is outdated.)

Kartik Agaram 2020-09-13 03:47:55

Don't you also see experienced developers being more resigned to their tools? In fact, that's one way to diagnose all of us pushing back on you in this thread: we've been living with our chains so long that we've forgotten about them. That would certainly explain why we don't customize more.

I had more customizations in 2001-2005, then I moved and changed a lot of stuff and just restarted my text editor settings from scratch. And they've never grown back to the sort of multi-megabyte state they were in then.

Backing up to your previous comment to me (I wrote a response yesterday that the Slack app ate):

I think you're misunderstanding me here, I'm specifically saying I don't always think automation is a good idea, I'm saying that it's the way I prefer to solve problems. It's an inclination that has more to do with me than it does the problem.

I don't understand the distinction you're making here. At least, what you're saying sounds like it answers your earlier question, "how can doing things manually still be better than automating them in 2020?"

Funny thing is, I do a lot to automate testing. In fact, one way to view my Mu project is as making the UI layer easy to test.

I would love to hear more about this "love of doing things manually" though.

When I do something manually I sleep soundly at night knowing I haven't created new tech debt for myself. I don't have something new I have to maintain, or try to read later to understand just what I was thinking. Manual labor can be therapeutic, like gardening. All these things have nothing to do with the state of the underlying system. They're just about the mess I make, and about taking care of my own state of mind.

Sometimes I do things manually for a few days even when I'm sure I have to automate them eventually. Manual work keeps me close to the data and might give me some new insight. "Being the computer" helps me understand the problem before I try to solve it.

yoshiki 2020-09-13 05:03:03

E.g., my observation is that developers tend to be frustrated with their tools (especially more experienced ones). Do you not find this to be the case?

I see pockets of people feeling frustrated and other pockets of people who aren't. It's difficult to summarize since I think the landscape is complex (like you said, experience plays a part). I agree though that not everyone is satisfied. I'll have to revisit this later when I have more to say.

Konrad Hinsen 2020-09-13 13:06:27

On automation: it has two very different costs that need to be weighed against the benefits. One has already been cited: the initial effort to put automation in place. The other one is less obvious: a fading understanding of what is really happening. It's much more pronounced if you run someone else's automation, but also happens when you run your own code for a long time without studying it from time to time. And if then you have to change something, it can be difficult.

So automation is an obvious win only if the two costs are low. Stuff like renaming hundreds of files: shallow but lengthy. The loop with a few-line body.
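The file-renaming case really is a loop with a few-line body; a dry-run flag keeps both of the costs above low, since the plan is visible before anything runs. A sketch (the function name and flag are illustrative):

```python
import os

def rename_all(directory, old, new, dry_run=True):
    """Rename every file containing `old` so it contains `new` instead.

    Returns the (src, dst) plan. With dry_run=True nothing is touched,
    so you can inspect the plan before committing.
    """
    plan = []
    for name in sorted(os.listdir(directory)):
        if old in name:
            plan.append((name, name.replace(old, new)))
    if not dry_run:
        for src, dst in plan:
            os.rename(os.path.join(directory, src),
                      os.path.join(directory, dst))
    return plan
```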

Roben Kleene 2020-09-13 15:50:50

Kartik Agaram I really appreciate these comments, some responses below:

Don't you also see experienced developers being more resigned to their tools? In fact, that's one way to diagnose all of us pushing back on you in this thread: we've been living with our chains so long that we've forgotten about them. That would certainly explain why we don't customize more.

This seems to be saying the same thing I am, experienced programmers are unhappy with their tools, but accept them (that's the way I'm interpreting "resigned" at least?) The question is just why. You've given some great reasons, but I'm not entirely convinced (not saying I disagree either, just not sure either way), but I think the best response would be to give some concrete examples. That's also something a couple of others have requested too, and I also think it's a good idea, so I'm going to start a new thread soon with examples.

I had more customizations in 2001-2005, then I moved and changed a lot of stuff and just restarted my text editor settings from scratch. And they've never grown back to the sort of multi-megabyte state they were in then.

I used off-the-shelf software for the first half of my career (2002-2010), mainly with the defaults, then started customizing after that. I realized it solved a really really big problem for me: I used to keep changing software because an alternative would solve some problem I have, but after using the new software I'd realize the old software did some other things better. So my total number of problems would always stay the same.

Now that I customize, my total number of problems goes down. Now I think of applications as a shell that I can customize to make them do the things that are important to me very quickly. And I'm so much happier as a computer user this way, because I'm not in a constant state of frustration because nothing works the way I want it to like I was before (this is probably a core trait of customizers/automators, as well as contrarians in general).

I think you're misunderstanding me here, I'm specifically saying I don't always think automation is a good idea, I'm saying that it's the way I prefer to solve problems. It's an inclination that has more to do with me, than it does the problem.

I don't understand the distinction you're making here. At least, what you're saying sounds like it answers your earlier question, "how can doing things manually still be better than automating them in 2020?"

To me these are two separate thoughts: In the first, I'd guesstimate automators are about 1/10 of programmers, I'm just saying I'm in that 1/10. The second part is just surprise that using a computer programmatically is still so difficult in 2020, given it's a problem so many people have worked on. Not sure there's a contradiction here? I think you're saying #2 leads to the 1/10 in #1, which I agree with. But I don't see why that would change my personal preference?

When I do something manually I sleep soundly at night that I haven't created new tech debt for myself. I don't have something new I have to maintain, or try to read later to try to understand just what I was thinking. Manual labor can be therapeutic, like gardening. All these things have nothing to do with the state of the underlying system. They're just about the mess I make, and about taking care of my own state of mind.

I understand this in theory, but I just don't experience using computers this way. This reminds me of how I feel when I watch Gary Bernhardt type, I find him amazing to watch, but he has an affinity for the mechanical act of typing that I just do not share. I find just imagining doing what he does myself exhausting. (Also, doing that many small mechanical motions relative to his output seems like a recipe for RSI to me.) There is just no way I'm ever going to use a computer by typing everything out the way he does, the "cost of a key stroke" is just higher for me than it is for him.

Kartik Agaram 2020-09-13 16:33:22

Roben Kleene That really helps clarify things. I certainly agree that computers today fit better for people with a certain profile (who find certain things like typing cheaper to do), and that there are huge barriers to customizing them for other profiles. I think it's analogous to the direction of a vector rather than its magnitude. The things you want are not that far from the current defaults, it's just that there are huge impassable mountains in the direction you want to take computers, whereas I[1] and Gary Bernhardt have the advantage of living in the direction of the flat plains. (And the poor laypeople are out in outer space with no service.)

[1] Though I see myself moving closer to you over time, what with my recent RSI troubles.

Roben Kleene 2020-09-13 16:55:28

I'll just add that I think both approaches work. John Carmack seems like a use-the-defaults type (https://twitter.com/id_aa_carmack/status/1302651878065475584?s=21), while Linus seems like a customizer (maintaining his own Emacs, and git seems like an explosion of Bash scripts, at least when I've looked at it under the hood).

šŸ¦ John Carmack: I have never been a power editor user; typing just never felt like a bottleneck worth fighting over (unlike exploration). It is interesting watching my kids get excited as they discover various Sublime Text features that I never use.

Kartik Agaram 2020-09-13 17:09:38

I'm noticing an analogy here with lifestyle design. A few years ago there was a movement towards minimalism until people realized that not everyone is rich enough to afford minimalism[1]. Similarly, I think when you see someone who says they just use the defaults, you're really seeing someone privileged to be better suited to their environment.

[1] Arguably this pandemic has made us all poorer by forcing us to run less lean and maintain more inventory of more types.

Kartik Agaram 2020-09-10 06:39:33

I'm going to start an opinionated overflow thread for the previous discussion (https://futureofcoding.slack.com/archives/C5T9GPWFL/p1599588394135900)

Why programmers shouldn't program for themselves (my editorializing)

Focusing on "quantity of programming" feels like the wrong frame. My ideal society of people educated in programming may not involve most people actually doing much programming most days. What matters is the potential. Compounding advantages from programming for one day per year.

  • Impulse to generalize is self-limiting (some maintenance burden may be irreducible). A good end-user computer needs to be extremely parsimonious in out-of-the-box capabilities, and leave lots of space for users to "pollute" it with what they care about. Give people space to make mistakes, raze things and start over. If it's too built-up, it discourages experimentation and customization.

  • Baiting big to catch small. (https://xkcd.com/1319) The long tail of manual tasks are not really economic to automate just for oneself.

  • First-world problems. Until we get good sensors/actuators, programming is kiddie-pool stuff for the most part. "I wrote a script that lets me open projects easily so that I can write more scripts." There's more to life. (Not for me, but for most people šŸ˜„)

Why programmers don't program for themselves (snapshot summary of the previous thread)

  • Interoperability limitations. Between any putative new script and other devices, platforms, programs.

  • GUI limitations.

  • Operational/maintenance burden (Ivan Illich). Keeping up with security advisories, for example (https://mastodon.social/@akkartik/104790515855023278)

  • Programming for employers sucking up all the oxygen. Building for oneself is economically invisible in the current paradigm. (Thanks Konrad Hinsen.)

  • Long-term trend towards locked-down, consumption-oriented devices. Morlocks turning Eloi.

  • Lack of DIY culture. Programming for others may be poor preparation for listening to one's own needs (e.g. https://mastodon.social/@akkartik/103994830568601931). Perhaps the original sin was framing programming as driven by exernal "requirements"? But computers always had to start out expensive; hard to imagine how we could dodge that bullet..

  • Fragmentation in incumbent programming models. High barrier to entry for exploring new programming models.

  • Poor discoverability/unmemorability/anti-memorability.

(Bullets are not disjoint, just interlocking/overlapping frames I've been finding useful.)

Jack Rusher 2020-09-10 08:26:15

As someone who comes from the antediluvian era when personal computers were personal, that was one of the most depressing threads I've read since joining this community. 😄

Stefan Lesser 2020-09-10 10:50:53

Yes! Sadly.

The more I learn about that "antediluvian era" the more I think we have this whole thing backwards. Back then, people like Alan Kay were trying to figure out how to make computers work for everybody. "Programmer" was just a synonym for "computer user".

But then we somehow accepted the idea that it's better to distinguish between so-called "experts" that can create all the software for "not experts" to use, even though these "experts" knew nothing of what the "not experts" really wanted to achieve. Primarily it seems, so we can sell a sh*tload of products and turn a planetary-scale nuclear attack-proof knowledge-sharing computer network into a shopping mall.

The result is that now everything is so complicated that not even the "experts" find it feasible to do most things with a computer themselves, and the "not experts" are now convinced that they can't possibly learn how to do anything useful with computers unless somebody else creates it for them.

We should've never stopped trying to make computers work for everyone.

And we should've kept talking about the "medium" aspect of computing more.

Stefan Lesser 2020-09-10 10:57:39

Ah, sorry, that turned into a not very constructive rant… I do appreciate that there are still a few people trying to make computers work for everyone, and in this community in particular.

This whole "end-user programming" thing (even though I've adapted to use the term because otherwise it just makes communication more complicated than it already is) just feels totally backwards to me -- starting from a programmer mindset and trying to invent an easier way for non-programmers to do programmer things just seems wrong, wrong, wrong to me.

We need better options for non-programmers to teach us how to use computers in ways that are more useful and more relevant.

Ricardo A. Medina 2020-09-10 11:43:42

Stefan Lesser (last paragraph) have you expanded on it elsewhere?

Stefan Lesser 2020-09-10 13:09:15

Ricardo A. Medina I have a lot more thoughts about it than I have written down anywhere (yet?). Every once in a while I keep rambling about it here. What I did write down elsewhere is this: https://stefan-lesser.com/2019/12/13/democratize-programming/

Roben Kleene 2020-09-10 14:31:23

Jack Rusher Regarding the thread being depressing, I personally am more hopeful after the realization I made here https://futureofcoding.slack.com/archives/C5T9GPWFL/p1599666723187800?thread_ts=1599588394.135900&cid=C5T9GPWFL That I think a better place to look at how programmers use programming for themselves is to create things to share with others. E.g., it seems to me that things like personal sites and side projects (sharing) are much more common than using programming to improve personal productivity. Correspondingly, it seems to me that the most fruitful area in end-user programming would likely be in making it as easy as possible for non-programmers to share their creations with others. I think the history of the web also reflects this, with the move towards services that remove the technical hurdles to sharing (e.g., Twitter/Medium, for better or worse) and the current no-code movement.

[September 9th, 2020 8:52 AM] services: The theme of the responses here seems to be: Only a small percentage of people are interested in using programming to improve their own workflow, but much more people are interested in using programming to build things to share. Which I think resolves my initial conundrum about programmers not using programming to solve their own problems: I was looking at scripting and customization, but what I probably should have been looking at is things like side projects and personal sites. Which to me seem much more popular than scripting/customization?

There might be a lesson for end-using programming here too: It's probably better to focus on tools that let people create things for other people than it is to focus on anything that you'd call "automation". For whatever reason most people aren't interested in automation (maybe just because it's not worth the time, e.g., the relevant xkcds)? But they are interested in building things to share, e.g., see the Hollow Knight example above. This seems consistent with the no code movement going on as well.

Konrad Hinsen 2020-09-10 15:36:10

I am from that antediluvian era as well, my first computer (https://en.wikipedia.org/wiki/Colour_Genie) was so personal that it didn't even have any connection to the outside world other than via physical artefacts (cassette tapes). But I never left the universe of end-user programming. All the software I write is for my own use. Often published for sharing with others, but never written exclusively for that purpose. From this perspective, the evolution of the last 30 years looks like computers becoming ever more powerful and at the same time ever more difficult to use. I spend more time today on administrative overhead (software updates, ...) than I did 30 years ago.

Andrew F 2020-09-10 17:24:51

Related to Roben's point, I've always thought it should be easy to share workspace customizations and the like as well as code. Config packages should be easy to create, install, and uninstall.

Kartik Agaram 2020-09-10 17:42:02

@Andrew F I think the right way to share workspace customization is as "naked code". Copy-pasting fragments. One link from my OP above is relevant: https://mastodon.social/@akkartik/103994830568601931. My argument goes like this:

  • Settings often grow in an unruly fashion compared to codebases. We're more careful about organizing modules in a repo than we are in adding knobs to a config file. Even though the modules are internal details and the config file is externally visible.

  • Settings in config files often depend on each other in subtle, hard-to-debug ways.

  • Creating packages to manage customizations requires dealing with these dependencies. That creates a lot of bloat to sense lots of different combinations of settings and behave appropriately.

  • The bloat hinders further customization. People start to rely more on packages created by others, and the muscle of doing your own customization atrophies.

Bottomline: If you make something look polished, people will assume someone else should make it. If you make it look half-assed, it will encourage, even beg for, helping oneself. This is a case where worse is actually better, not just pragmatically but really.

Jack Rusher 2020-09-10 17:43:59

Konrad Hinsen I, too, still write tools for myself constantly, including little 10-15 minute programs meant to be thrown away. All I could think looking at that thread is that if one's normal toolchain makes producing nonce personal code that difficult, something has gone terribly wrong.

Roben Kleene The first time we used Maria.cloud to teach a class of novices we added a feature where any "cell" in a notebook could be shared as an "app" -- a view onto the whole notebook via an interactive graphical cell -- with a single mouse click. At the end of the first day everyone had made a game or some art, and they loved sending around their creations. It was great. So, yes, this is definitely something "non-programmers" (rather, not yet programmers!) respond to quite readily.

Roben Kleene 2020-09-10 17:50:19

Aye aye aye, what a cool idea, a social networking platform based around sharing little interactive bits of code. Twitter x CodePen

Andrew F 2020-09-10 18:05:38

Kartik Agaram your argument is compelling for text config files as they are today. However, looking to the future, blobs of text config can't be the only way people make customizations. The whole hidden-dependency thing especially needs to die (that alone would take you a long way). Basically what I mean by "it should be easy" is that the obstacles in your post should be defeated. :)

Kartik Agaram 2020-09-10 18:31:10

Totally agreed. But it's not really about text. In fact, text helps here because it's still easier in 2020 to throw text into a random text box than it is to attach files to it.

The real problem is the undisciplined dependencies, and it's a devilishly subtle problem. For the obstacles to be defeated we need somehow for everybody to do "the right thing" -- even after we figure out the rules of engagement. I don't know how to even start attacking that social-organization problem.

In the meantime, it seems to me the best way to share customization is to encourage sharing what works for you, with the expectation that others will need to tweak it to get it working for themselves. It's fiddly and annoying, but still on balance better than the current world of pervasive learned helplessness.

Jamie Brandon 2020-09-10 22:43:41

I don't think the explanation even needs to be specific to programming. There has been a long-term trend towards people doing less and less for themselves, and instead satisfying more and more of their desires through market interactions. Cooking, cleaning, basic repairs, altering clothing, music, story-telling etc are all often outsourced now. End-user programming is swimming against that tide.

Maybe there is inspiration to be found by looking at surviving knots of diy culture, where people are still interested in learning new skills. Eg my old housemate used to hang out at a community carpentry workshop.

Konrad Hinsen 2020-09-11 07:11:56

Kartik Agaram I disagree about one point: your argument is all about text. In a universe where people share small pieces of code for inspection and adaptation to their own needs, it is crucial to have a representation that is universally supported by communication media, and that means text. I actually guess we all agree that it means text for now, but my claim is that this is not going to change. Text is the dominant mode of time-delayed communication between humans, with pictures taking the second place and everything else (voice recording etc.) lagging far behind. Therefore text is going to be the preferred medium for the kind of interaction you describe for a long time. Perhaps we will develop picture-based communication with machines to the point that it can complement text, but that's no more than a dream for now.

What made me realize the importance of text-based representations is my recent involvement with Pharo (i.e. Smalltalk). When interacting with a Smalltalk system, text is used only at the most fine-grained level, inside a method. But that means that discussing Smalltalk code by e-mail or in discussion forums is very cumbersome. People end up using constructs that are legal Smalltalk code but which nobody ever uses for code development, such as creating classes by sending a message to another class. This creates a serious disconnect between human-computer interaction and interhuman communication.

Kartik Agaram 2020-09-11 07:23:14

I've been trying to keep an open mind and work against my pro-text bias, but I'll be very happy if that's not needed 🙂

Jack Rusher 2020-09-11 12:50:41

Re: DIY's passing, I sometimes wonder whether having an entire industry competing to build seductive simulations of personal development might be interfering with the actual personal development of human beings. Or, to put it another way, how many music lovers have spent more hours playing GarageBand than it would have taken to learn how to play and form an actual garage band?

Roben Kleene 2020-09-11 15:33:20

The killer features of plain text are collaboration and sharing. One pet theory is that one reason for the incredible popularity of visual programming languages practically everywhere except software development (e.g., https://twitter.com/robenkleene/status/1280182521796399106) is that most of those areas are much less dependent on collaboration.

Jamie Brandon 2020-09-12 04:18:10

Jack Rusher I saw some back of the envelope math recently - if it takes about 1000 hours to become competent (not expert) at a skill, and the average american watches 4 hours of tv per day, then the opportunity cost of tv is 1.5 skills per year.
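Spelled out, that envelope math looks like this (the 1000-hour and 4-hours-of-tv figures are the thread's stated assumptions, not data):

```python
# Opportunity cost of TV, measured in "skills per year".
hours_per_skill = 1000        # assumed hours to become competent (not expert)
tv_hours_per_day = 4          # assumed average TV viewing

tv_hours_per_year = tv_hours_per_day * 365   # 1460 hours
skills_per_year = tv_hours_per_year / hours_per_skill

print(round(skills_per_year, 2))  # 1.46, i.e. roughly 1.5 skills per year
```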

Alexey Shmalko 2020-09-12 06:42:58

Jamie Brandon 1000 hours must be about deliberate practice, which is highly demanding and you can't put too much of it in a day. You can't easily replace 4 hours of watching tv with 4 hours of high-intense practice. But replacing 4 hours of tv with 30 minutes or an hour of practice should be fine.

But on the other hand, I don't believe you need a thousand hours to be competent either. My guess is 100–500 hours depending on the skill.

That should give you 1–3 skills per year. So yeah, tv is a huge waste either way.

Kartik Agaram 2020-09-13 03:52:00

what a cool idea, a social networking platform based around sharing little interactive bits of code. Twitter x CodePen

You mean like http://www.bashoneliners.com?

U01A57MG2HM 2020-09-13 10:41:19

Jamie Brandon It's funny I was actually talking about this with a few friends and we sort of all had the opposite experience. Not sure if we're part of the same generation. We found that the generation of our parents (born in the 50s early 60s) didn't have a DIY culture at all.

From real experience:

  • If you're bike is broken :Parents => bring it to the bike shop

Us => Watch a youtube tutorial

  • You want to organize a trip:Parent => Contact a local travel agent

Us => Do some research, go on Sky Scanner, try to find local guides, http://booking.com...

I was also talking to my step dad about it, who was also leaning to the same conclusion. He was amazed that kids of our generation all seem to cook. He is a city guy from Italy; he never cooked for himself. Being a bit counter-cultural, he associated cooking for yourself with the older, more repressive generation (he was a bit of a hippy). And maybe we are doing the same thing he did: we cook because our parents didn't.

I think there might be some vertical and also horizontal cultural differences. Most of my friends come from cities/big towns and we are Europeans. There might be a different shift depending on your generation/location.

Robin Allison 2020-09-11 06:25:35

Broad question here.

Do people here know of any tools that separate the complexity component of a program from the underlying behavior it would eventually produce, and then let you manipulate code while the behavior stays fixed? By behavior I mean something like the user-facing behavior of a program, or its effect on some data. It's a flexible concept in my mind. A large portion of programming seems to be rewriting code so it maintains the same behavior, but is then also extensible in some way. Factoring code is an example of this activity, but you could also rewrite code to produce the same behavior in a way that is not a factorization of the original code. To be concrete, you could factor in two different ways, so each factorization would produce the same behavior but neither is a factorization of the other. Moving back to the unfactored code and factoring the other way is then a means of transforming the code to produce the same behavior that isn't mere factoring.

(picture: code<--factoring<--code-->factoring-->code)

(this is very reminiscent of factoring in abstract algebra and you could imagine an algebra about manipulating the code in this way, and going down this road you can ask whether two programs will produce the same behavior implies there is a common factorization but this might be another conversation).

I'm curious about this question mostly as a proxy for a related question in math: How can you transform one proof into an equivalent proof? This is a slippery concept because nobody knows how to make precise the idea of "equivalence of proofs". If you know about Hilbert and his 23 problems you might find it interesting that he originally had a 24th problem on the equivalence of proofs! Even though the idea is notoriously difficult to pin down, I think it is intuitive enough to take a pragmatic stance and ask how you could go about implementing technology to carry out these transformations. This is important to me because in math we "factor" proofs all the time and often compare proofs to determine the essential and incidental aspects of each. So what I'm really looking for is any techniques or perspectives in the domain of programming that could be taken back into mathematics. I've seen some approaches down at the level of the lambda calculus but I haven't found them useful. I think a pragmatic/experimental approach is better than a theoretical approach at this point.

Jack Rusher 2020-09-11 07:10:52

The closest thing to what I think you're asking about is using Logic Programming to do program synthesis. Here's a video demoing a system of this kind:

https://www.youtube.com/watch?v=5vtC7WEN76w

Martin Sosic 2020-09-11 07:21:38

At risk of coming across as very naive: what about tests on the practical side and formal verification on the other? Since those allow us to refactor code while ensuring (to varying degrees) its correctness and behaviour, how do they fit into what you are looking for?

U01AN8DFFBN 2020-09-11 08:40:45

I wonder how different proofs for the same theorem compare after resolving all abstractions and breaking them down to ZF. But I guess even proofs using the minimal amount of axioms aren't unambiguous, as two different axioms could share a common idea.

However, it is undecidable whether a set of axioms contains redundant axioms.

In general, deciding whether two programs are semantically equivalent is undecidable too. So there cannot be a universal tool that enumerates all alternative function bodies with the same behavior.

William Taysom 2020-09-11 09:23:24

When dealing with undecidability, it's best to say, "well, we'll just handle the easy cases," and see how far you go. Compiler optimizations are certainly an exercise in factoring as is partial evaluation, which is a good deal more fun. Come to think of it, conventional refactoring is sort of the opposite transformation. Instead of removing indirection, add it so that the rest is more regular.

Robin Allison 2020-09-11 17:19:58

@Martin Sosic I appreciate the answer. Talking about testing is more concrete than talking about an abstract idea of behavior, and as you said the practice of testing is about manipulating code so the same tests (at least) still pass. I guess the picture I have in my mind is that when you are coding you are moving around a space of strings of symbols, and in that huge space is a space of valid programs, and once you write tests, then there is an even smaller space of programs that pass those tests. Is there any means of restricting code rewrites to discrete steps that take place entirely in the space of programs that pass the tests? Rewriting for-loops as while-loops wouldn't change any tests. Changing some variable names. Refactoring. These wouldn't change the tests at all, and in a specific context you could possibly have more. If you have various pieces of code that are interchangeable then you have something like an algebra where you can substitute equivalent expressions. Tests tell you when code is interchangeable so it gives you some algebra-like thing and how can you manipulate code at this level?

So tests are relevant for what I'm looking for because they let you say when two programs are equivalent. I don't think proof verification does the same in math.
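To make that idea concrete, here is a minimal sketch (all names invented for illustration) of two rewrites that live in the same test-passing space: a for-loop and a while-loop version of the same function, plus the tests that pin down their shared behavior.

```python
# Two rewrites of the same function: a for-loop and a while-loop.
# Any test over inputs and outputs cannot tell them apart, so moving
# between them is a "discrete step" inside the test-passing space.

def factorial_for(n: int) -> int:
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_while(n: int) -> int:
    result, i = 1, 2
    while i <= n:
        result *= i
        i += 1
    return result

# The tests that both versions must keep passing:
def check(f):
    assert f(0) == 1
    assert f(1) == 1
    assert f(5) == 120

check(factorial_for)
check(factorial_while)
```

In this picture, any rewrite that keeps `check` passing is a legal move; the tests are what define the equivalence relation.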

Robin Allison 2020-09-11 17:45:24

@U01AN8DFFBN "In general, deciding whether two programs are semantically equivalent is undecidable too. So there cannot be a universal tool that enumerates all alternative function bodies with the same behavior."

Neat! Good to know there isn't a universal tool.

@William Taysom Easy cases is right! I haven't seen compiler optimizations before. I might take a peek into that area, but I have a feeling I might get scarred.

Ray Imber 2020-09-11 19:02:14

I agree with @William Taysom. This very much reminds me of compiler optimization.

Classical compiler optimization makes use of many transformations that maintain equivalence such as https://en.wikipedia.org/wiki/Static_single_assignment_form form. These sorts of transformations combined with heuristic based analysis are the bread and butter of compiler optimization. The GHC Haskell compiler is probably the epitome of the classical compiler optimization approach.
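For readers who haven't met SSA, here is a toy sketch of its renaming step for straight-line code (a real SSA construction also handles control flow by inserting phi-nodes, which this deliberately omits; the list-of-tuples representation is invented for illustration).

```python
# Minimal SSA renaming for straight-line code (no branches, hence no
# phi-nodes): each assignment gets a fresh version of its target, and
# each use refers to the latest version. A sketch, not a compiler pass.

def to_ssa(stmts):
    version = {}  # variable name -> latest version number
    out = []
    for target, expr in stmts:
        # Rewrite uses to the latest versions (operators/literals pass through).
        renamed = [f"{t}_{version[t]}" if t in version else t for t in expr]
        version[target] = version.get(target, -1) + 1
        out.append((f"{target}_{version[target]}", renamed))
    return out

# x = x + 1; x = x * 2  becomes  x_0 = x + 1; x_1 = x_0 * 2
prog = [("x", ["x", "+", "1"]), ("x", ["x", "*", "2"])]
print(to_ssa(prog))
```

Once every variable is assigned exactly once, equivalences like "the second statement only depends on `x_0`" become purely syntactic, which is what makes the classical transformations tractable.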

Then there are the recent results in applying Machine Learning and Genetic algorithms to compiler optimization. These compilers often produce extremely novel, unexpected, even bizarre machine code, yet the results have been shown to be "equivalent" while much more efficient in some dimension than known state-of-the-art classical approaches (often either better runtime or less resulting machine code). AFAIK, the way the ML compilers seem to work today is essentially unit testing and manual inspection against programs compiled with classical compilers.

https://arxiv.org/pdf/1805.03441.pdf

https://www.semanticscholar.org/paper/Compiler-Optimization%3A-A-Genetic-Algorithm-Approach-Ballal-Sarojadevi/6676a5489ced5412fa2ba3ecb76ca3e5ca2723e0

A key idea is that all these algorithms must have some notion of equivalence of programs. An optimization is only useful if the resulting behavior is equivalent.

The real heart of this is the Church-Turing thesis. Program optimization is essentially saying one program is equivalent to another program, just computed in a different way. The fact that such equivalent programs can even exist is a direct result of the Church-Turing thesis.

Your line of reasoning is related to some of the inspiration for the creators of logic programming and proof assistants like Coq and Agda. If you can encode a proof in a programming language, you have essentially shown that the proof is computable. If it's computable, then there are infinitely many equivalent programs (via Church-Turing). You can then apply all the known computable transformations like SSA, or graph pruning analysis, etc...
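As a one-line illustration of the proofs-as-programs idea mentioned above (a sketch in Lean 4 syntax, not taken from the thread):

```lean
-- A proof of A → (B → A) is literally a program:
-- a curried function that ignores its second argument.
theorem const_imp (A B : Prop) : A → (B → A) :=
  fun a _ => a
```

Any other proof term with this type is an "equivalent program" for the same proposition.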

William Taysom 2020-09-12 07:30:01

Wow, GCC flags Ray Imber? I do like the idea of ML being applied at higher-level "moves" than it often is. I mean, for instance, allow only valid applications of inference rules rather than random term rewrites. Let the system play over that space.

Nick Smith šŸ•°ļø 2020-08-24 05:10:56

Why isn't any kind of logic programming considered a https://en.wikipedia.org/wiki/Model_of_computation? Why do we talk about Turing Machines and recursive functions as fundamental, but not inference? I can't find any resources discussing this disparity. It's like there are two classes of academics that don't talk to each other. Am I missing something?

Prathyush 2020-09-11 13:07:35

Nick Smith I think logic programming indeed began by identifying itself as a programming language: http://www-public.imtbs-tsp.eu/~gibson/Teaching/Teaching-ReadingMaterial/Kowalski74.pdf

The extant thread of how this plays out, I think, is in the work of John Reynolds, in the form of logical relations. But the present-day incarnations with the major mindshare happen to be the Prolog variety of languages, and I think it's fertile ground for bringing in novel work.

Shalabh Chaturvedi 2020-09-11 18:29:27

(My comment below generated a bit of interest in another forum so I want to copy it here as a prompt.)

What do folks think about the emphasis on "readability" for programming languages?

Readability in PLs is nice, but ultimately a red herring. It doesn't scale. We may be able to read a small snippet and get it, but once we have more code or more abstractions, we're lost again. The real goal should be to offload all mechanical computation to the machine. IOW, readability gives a very small step. It may be easier to build a "structure in your head" after reading a "more readable" representation (vs a less readable representation of the same thing). However, the work really starts after you have this structure, when we begin "playing computer" in our heads.

Andrew Carr 2020-09-11 21:32:20

Oh, it's a local vs global argument? Interesting. The point being that locally readable code is not the bottleneck, but globally "readable" (comprehensible?) code is.

Nick Smith 2020-09-12 00:24:48

Locally-readable code is definitely a bottleneck. I dare you to try and tell my students otherwise after they've spent 12 weeks trying to learn Python fundamentals :/

But anyone defining "readability" as merely naming plus commenting is missing the obvious: the actual language that those names and comments are embedded within! The grammar and semantics of a language matter at least as much as the names that are used within it.

Nick Smith 2020-09-12 00:29:32

But yes, large-scale understanding is a separate issue requiring its own solutions.

taowen 2020-09-12 07:23:06

Global readability is a by-product of global consensus. It is harder nowadays to achieve global consensus, because skills and platforms drive teams to be more and more narrowly focused. When everybody has their own ideas of how to organize, global readability is hard to achieve.

William Taysom 2020-09-12 07:40:34

Code is pretty low level: it's hard to get an overview of it. Low-level consistency, in naming for instance, will give you insight into what's going on and, I think more importantly, those who write locally readable code are also more likely to have a good architecture.

Cole Lawrence 2020-09-13 10:56:20

I would say observability is equally important. It's nice and all if the code is readable, but many times the code goes slightly out of date with its names. And the way it works makes a lot more sense when you have good tools for inspecting what's going on or what happened.

Prathyush 2020-09-12 10:25:50

I tried to write a narrative of how studies in universal algebra and category theory are providing a deep understanding of how programming languages are united/distinct in the way they treat computational structures: https://github.com/prathyvsh/morphisms-of-control-constructs

I am pretty new to this field and thought the story of this evolution is less told outside academic circles, so I started piecing together parts of the story as I read Twitter feeds / blog posts / papers on it. If anyone here is knowledgeable in this, can you please provide some feedback on what I have missed or what needs polishing? I'm pretty sure there are some significant details I have missed.

Stefan Lesser 2020-09-12 16:52:05

I don't know much about this and certainly a lot less than you do, but I am super interested in this and really appreciate you writing about it and putting together all these resources. So here's my feedback: thank you and please keep going! :)

Prathyush 2020-09-12 18:56:53

Thanks a lot for the encouragement! I just bumped into this after exploring random stuff related to PLT. A recurrent idea here, it seems to me, is that there is something fundamental about computational structures (in a topology like sense) that we are all groping at in our daily lives.

Programming seems to me now to be an activity in creating some kind of <hypothesis>semiotic ground for manipulating Platonic forms</hypothesis>.

I am trying to see if those things can be made first class in a visual fashion, and PLT semantics looked to me like the place where I can dig in, as I have some idea of PLs and a bit of logic knowledge. Also, there is a great community of people studying this deeply and communicating about it in this field. But it is filled with enough jargon to repel most programming practitioners, as the notations are dense and require an understanding of mathematics before approaching it.

Prathyush 2020-09-12 19:04:41

This sort of aligns with a direction of using the computation graph as a first-class entity, which I think some people here and on Twitter are about to embark on. This can be seen in Twitter convos in the form of RoamOS, or as Codex lately proposed: https://twitter.com/codexeditor/status/1303985191912747009

This would result in a computational world of structure, I think, close to the attached image, where each context has a program inside it, like in a graph editing tool, which then becomes the ground for further computation. Think of modules referring to each other and creating intermediate representations of computations at this stage, just before getting visualized as UI elements in a pixel matrix.

šŸ“· image.png

Prathyush 2020-09-12 19:09:17

But I think the links go deeper, and (un)fortunately semanticists, philosophers, and mathematicians working on specialized areas seem to be the ones with intimations of what these general structures could be like. Here is another image from Rocco Gangle's work (he mixes Category Theory with philosophy to propose some kind of "diagrammatic immanence"). This, but for programming, is what I think is lurking inside the morphisms of control constructs.

šŸ“· image.png

Stefan Lesser 2020-09-12 20:52:30

Yes! I understand maybe half of what you're saying, but that half feels true and important to me. I've been looking at linguistics, category theory, bidirectional transformations, parsers/grammars, complexity theory, and Christopher Alexander's work on Wholeness/Life and his generative process of unfolding, which I just recently started to understand as having parallels to Chomsky and grammars (search for Greg Bryant, if that sounds interesting).

All these things are very different disciplines but somehow I started to see strong parallels and connections which I totally believe can be expressed mathematically. And very likely we have already identified all the structures we need, we just need to make the connections explicit. Unfortunately, I often find it hard to describe: it makes total sense in my head, but then I'm struggling to explain it to others.

It looks like you're onto something that ties into this and might help me find ways to explain some of these connections better.

And that's not even touching on all the exciting ways this might help us find better visual representations for programming systems!

Prathyush 2020-09-12 21:18:17

Exactly my thoughts! You put this better than I did. Christopher Alexander's work was a big influence and he is a strong center of this kind of interdisciplinary thought. Much respect to him for his work. I need to look up Greg Bryant, thanks for the pointer!

I am working on and off on my mathematical skills to see how I can represent these ideas well. This is why I started the research on notation, to see how historically notations have helped us in expressing that other realm that we have inside us: https://github.com/prathyvsh/notation I am pursuing it with a feeling that there is a good ROI from having a good fit between the content/context relationship for the constructs you devise to ground these forms.

Will definitely share when I have some clarity on this. Thanks for the good words āœŒ

Stefan Lesser 2020-09-12 21:24:31

Exciting. I'll keep following you here and on Twitter.

Re Greg Bryant start here: https://www.youtube.com/watch?v=X-5KG73fzJ4

Then read this whole blog in chronological order (it's not as much content as it seems): https://chomskyalexander.blogspot.com/?m=1

And feel free to ping me on anything related to this. I still have to dig through all what you posted but maybe there are chances to loosely collaborate.

Garth Goldwater 2020-09-12 22:09:55

i'm going to have to give this sincere focused time, but i wanted to reply with my initial reaction before i lost the references -- i'll come by and fill these out with hyperlinks later. i too feel like all of this programming language research is swimming in a dark cave, and in our hunting around i've got this suspicion that there's a light switch not too far away. and that it has a lot to do with reifying the process of evaluation and letting users interact with it directly. in particular,

...there is something fundamental about computational structures (in a topology like sense) that we are all groping at in our daily lives...

I am trying to see if those things can be made first class in a visual fashion, and PLT semantics looked to me like the place where I can dig in, as I have some idea of PLs and a bit of logic knowledge. Also, there is a great community of people studying this deeply and communicating about it in this field. But it is filled with enough jargon to repel most programming practitioners, as the notations are dense and require an understanding of mathematics before approaching it.

and

All these things are very different disciplines but somehow I started to see strong parallels and connections which I totally believe can be expressed mathematically. And very likely we have already identified all the structures we need, we just need to make the connections explicit. Unfortunately, I often find it hard to describe: it makes total sense in my head, but then I'm struggling to explain it to others.

resonate really strongly with me -- better-worded versions of stuff i'd tried to explain to other people in the past

the aforementioned references i'm working with off the top of my head:

  • call by push value
  • ohm/ometa
  • kernel/vau/f-expressions
  • partial evaluation
  • programming should eat itself
  • the work going on at red planet labs, alluded to by a few talks on the specter library for clojure
  • meander, another clojure library
  • the stuff rich hickey has started saying about functions having some knowledge about what they require to work properly (instead of specifying requirements on data structures directly)
  • towers of interpreters
  • f-algebras, recursion schemes
  • scoping and binding in the rebol programming language, APL
  • defunctionalization and refunctionalization
  • concatenative languages, because: the current continuation for them is always the state of the stack plus the rest of the tokens in the source, so you can always split a program at any point, pause it and restart it, and composition instead of application is the default action of putting two words next to each other. another way of looking at it is that every concatenative program always carries with it the context it's operating in/on. plus there's this cool video i shared before: https://youtu.be/R3MNcA2dpts
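The "continuation = stack + remaining tokens" property of concatenative languages can be shown in a few lines. This is a toy interpreter sketch (the word set and representation are made up for illustration): program state is exactly the pair `(stack, tokens)`, so you can pause after any step and resume later.

```python
# Toy concatenative interpreter: the whole continuation is
# (stack, remaining tokens), so execution can pause/resume anywhere.

WORDS = {
    "+":   lambda s: s[:-2] + [s[-2] + s[-1]],
    "*":   lambda s: s[:-2] + [s[-2] * s[-1]],
    "dup": lambda s: s + [s[-1]],
}

def step(stack, tokens):
    tok, rest = tokens[0], tokens[1:]
    if tok in WORDS:
        return WORDS[tok](stack), rest
    return stack + [int(tok)], rest  # literals push themselves

def run(stack, tokens):
    while tokens:
        stack, tokens = step(stack, tokens)
    return stack

# Pause after two steps, then resume -- the pair IS the program state:
state = ([], "3 dup * 1 +".split())
state = step(*state)
state = step(*state)
print(run(*state))  # [10]
```

Note also the composition point: concatenating two token sequences composes the programs, with no explicit application syntax needed.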

Stefan Lesser 2020-09-13 10:12:04

Here's something I've been looking into for a while that I didn't see you mention specifically: the connection between

  • (embedded) domain specific languages
  • parser combinators
  • abstract algebra
  • transducers (you kind of mention that with generators/iterators, I think)
  • transformation passes (towers of interpreters?)
  • ...it appears I'm going to list all of computer science if I keep going, but there are certainly some more items that belong in that list

There are quite obvious connections between some of them; for instance, parser combinators are more or less directly applied abstract algebra. But there seems to be a more fundamental pattern that is reflected in all of them (and here we get to the part where I'm usually failing at describing it well enough -- probably due to my lack of depth in mathematical understanding):

  • They all involve (or at least can facilitate) transformations from a sequential to hierarchical structure.
  • They all represent a set of well-defined composable entities that together form something like a grammar (some more directly than others).
  • They all in a sense resemble words of a language, which can be combined to describe a lower level thing in more detail (-> Alexander's Pattern Language).
  • They all enable and/or are based on a fundamentally recursive pattern which allows them to be used on various levels of abstraction at the same time; they're something like an abstraction of an abstraction, if that makes sense?

I mean, maybe I'm just looking at lambda calculus shining through in all of them (and in all of computing), and that's that -- and there's nothing more to see here. But I don't think that's it.

I know, this is kind of weird, but maybe to some of you what I just wrote makes some sense and you will have some comments that help me on the path to computational enlightenment... ;-)
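To ground the "parser combinators are directly applied abstract algebra" remark above: a minimal combinator set where alternation and sequencing compose parsers roughly like sum and product (a sketch; the names are ours, not from any particular library).

```python
# Minimal list-of-results parser combinators. A parser is a function
# from an input string to a list of (value, remaining-input) pairs.

def char(c):
    def p(s):
        return [(c, s[1:])] if s[:1] == c else []
    return p

def seq(p, q):
    # "product": run p, then run q on each remainder
    return lambda s: [((a, b), rest2)
                      for a, rest1 in p(s)
                      for b, rest2 in q(rest1)]

def alt(p, q):
    # "sum": union of both alternatives
    return lambda s: p(s) + q(s)

ab_or_ac = alt(seq(char("a"), char("b")), seq(char("a"), char("c")))
print(ab_or_ac("ac"))  # [(('a', 'c'), '')]
```

Because `seq` and `alt` obey algebraic laws (associativity, distributivity of sequencing over alternation), grammars built from them really can be manipulated like algebraic expressions.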

Cole Lawrence 2020-09-13 14:40:06

Garth Goldwater pointed this out to me in a recent conversation: continuations could be seen as the same as monads. I dug up this little thread with more discussion around the idea: https://stackoverflow.com/questions/4525919/continuation-passing-style-vs-monads
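A rough sketch of that correspondence in Python (the `Cont` class here is our invention for illustration): a continuation-passing computation wrapped so that `unit` and `bind` satisfy the monad interface.

```python
# Continuation monad sketch: a Cont wraps a function that takes
# "the rest of the computation" k and eventually calls it.

class Cont:
    def __init__(self, run):
        self.run = run  # run :: (a -> r) -> r

    @staticmethod
    def unit(x):
        # pass x straight to the rest of the computation
        return Cont(lambda k: k(x))

    def bind(self, f):
        # feed our result into f, then continue with k
        return Cont(lambda k: self.run(lambda x: f(x).run(k)))

# (3 + 4) * 2, written in continuation-passing style via bind:
result = (Cont.unit(3)
          .bind(lambda x: Cont.unit(x + 4))
          .bind(lambda y: Cont.unit(y * 2))
          .run(lambda r: r))
print(result)  # 14
```

The `bind` here is exactly CPS sequencing, which is the sense in which continuations "are" a monad in the linked discussion.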

Drewverlee 2020-09-13 01:56:56

What if it's important, for comprehension, that we be able to speak programming languages not just write them.

Cole Lawrence 2020-09-13 10:51:39

I suppose if it's important for comprehension, then the programming language environment might need to be more tolerant of potential error, right? For example, if someone says "with each user online in Slack, I want to ask them when their birthday is", I'd love for the computer to be able to help incrementally qualify that for us, even if there are subtle "grammar fixes" that need to be applied. Then programming with correct syntax would be a bit more similar to writing correct English with Grammarly.

William Taysom 2020-09-13 12:15:32

Even for the professional programmer, a system that comes up with "but have you considered" questions could prove fairly useful. How many bugs are caused by odd interactions from otherwise fairly independent subsystems?

Drewverlee 2020-09-13 12:29:24

Though it would be interesting to consider talking to the computer and having it translate. I was thinking more about just a way to converse with other developers. The lack of humanity in the development process seems to lead to a lot of stress. People enjoy interacting with other people.

Drewverlee 2020-09-13 12:31:57

I do this now in clojure, but I'm curious if we developed shared rules around the speaking patterns.

Don Abrams 2020-09-13 12:48:18

This comes up a lot in mob programming. I'd love research into what gets named and whether there are patterns for it. The GoF book also stresses the reason for naming the patterns, and the follow-up refactoring books use the new vocab with some additional verbs.

Don Abrams 2020-09-13 12:50:56

Also, programming languages hit the same areas in the brain as everyday languages... But the ambiguity restriction is very different (due to shared context)

Cole Lawrence 2020-09-13 14:22:21

Aha, indeed, I misunderstood the original idea.

It's tough to verbally communicate anything involving multiple actors (DB, load balancer, HTTP endpoints; text buffer, RegExp, validation messages; etc.). That's why, when we can't communicate it by writing the code, I tend to try drawing it out and annotating it at different levels. So, if we were communicating how notifications work, I would start by writing out the high-level actors involved in the sequence of a notification being sent. Then I'd incrementally add notes for where our business requirements are applied. Then I might show where the data is stored, etc.

In this situation it is incredibly difficult to share knowledge only orally. It is far easier to write it out in a shared code editor, or through sending back and forth snippets, than it is to try to hold all the info in your brain as the conversation moves forward.

In some of the most complex topics, my peers and I will actually write out the questions we're asking in comments in the shared editor, so we can encode each other's thoughts carefully to save time from having to repeat ourselves and losing context.

Cole Lawrence 2020-09-13 14:25:10

It kinda sounds as if a math teacher quizzed students orally without writing out the problem on the board. It would lead to too much repetition and clarification, just so the student is able to write out the problem the teacher is asking...

Mark Santolucito 2020-09-13 16:13:28

would be really interested to know if there is any MRI research looking at how much overlap there is in which parts of the brain "light up" when speaking/writing/read code vs natural language

Garth Goldwater 2020-09-13 16:37:05

i think @Cole Lawrence is pointing at something important here, which is that the main affordance given by non-spoken language is asynchrony -- being able to look at things in different orders, return to old topics, and read without listening or speaking. i think spoken language (and an interactive conversation) also probably has unexpected side benefits, but i'm struggling to think of them at the moment (probably because i don't use spoken interfaces very often). the only one that comes to mind is that spoken interactions imply an awareness of context -- think of how much more often you get to say things like "what's this?" out loud rather than when you're writing