You are viewing archived messages.
Go here to search the history.

Kartik Agaram 2022-08-29 04:40:33

🧵 on the first 25 minutes of futureofcoding.org/episodes/057

Summary:

  • It would be cool to create a visualizer for low-level software. Assembly language in particular would benefit from visualization to catch errors in register allocation and manual memory management.
  • A retro game console may be a simple test bed for trying out various FoC ideas. You could literally save snapshots of all 64KB of RAM for some machines every single frame, and then perform further analysis on them, diffs and so on.
  • A similar idea: GPU Maker 2000 like RPG Maker 2000 but for GPU programming.
  • The Gameboy has a particularly ideal form factor for a convivial tool for thought. In particular, it has a camera, something I wasn't aware of.
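
The snapshot-and-diff idea is small enough to sketch. This is a hypothetical illustration, not any particular emulator's API; `ram` stands in for whatever 64KB byte blob an emulator exposes each frame:

```python
# Sketch of per-frame RAM snapshots and diffs for a 64KB retro console.
# At 60 FPS, 64KB per frame is under 4 MB/s uncompressed, so keeping
# every single frame for later analysis is entirely feasible.

def snapshot(ram: bytes) -> bytes:
    # Copy the machine's full address space for this frame.
    return bytes(ram)

def diff(a: bytes, b: bytes):
    # Report every address whose contents changed between two frames,
    # as (address, old_value, new_value) tuples.
    return [(addr, x, y) for addr, (x, y) in enumerate(zip(a, b)) if x != y]
```

With a list of such snapshots, `diff(frames[100], frames[101])` answers questions like "which addresses changed when the player jumped?"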

Does anyone have a good sense of the space of Assemblers out there? Surely there exist debuggers for Assembly? Maybe reverse engineering is a space to look at? Are there any debuggers or reverse-engineering tools with command languages? Ghidra does seem to have something: resolute-tech.com/an-introduction-to-ghidra-scripting. Does anyone here have experience with it?

Kartik Agaram 2022-08-29 04:42:27

One reaction I had listening to the podcast: do we really need a visualizer? Mu would check register allocations and raise errors when you got them wrong. Ditto memory allocation errors. A visualizer would add a lot of complexity to it. Is it really worthwhile?

Tom Larkworthy 2022-08-29 06:45:58

I am converging on the idea that the FoC is not the replacement of text with no-code, but the augmentation of text with dataviz, i.e. dataviz-assisted development. This seems to be what you want here (I can't help with the specifics of assembly). You do not need the same set of dataviz for all problems, so I think complexity is not an issue, as you only pick the tools that make sense.

Jan Ruzicka 2022-08-29 11:09:15

Re the idea that retro consoles might be a good testbed, since we can handle the whole system with our current tools: Alan Kay would argue that we ought to do this with our current platforms, with the aid of supercomputers, and that we're limiting the scope of our ideas by not doing it. I agree with him, since on a retro console you won't typically need to deal with concurrency, and certainly not with parallelism.

Andreas S. 2022-08-29 15:32:44

I actually remember RPG Maker 2000. There was a forum from around the 2000s where I spent a lot of time back then, and it had an RPG Maker 2000 subforum.

Chris Knott 2022-08-29 16:20:59

Re: retro consoles. Yes, I would love to make a Gameboy emulator that included all the visualisations from this talk youtu.be/HyzD8pNlpwI (particularly the pixel data related ones from ~29mins on)

William Taysom 2022-08-30 05:36:04

For many years now, I've wanted to make an easy-to-use card game builder. It would be less time-sensitive than a console. The complexity comes in managing state well: phases of a turn, turns combining into rounds, rounds into matches. Some combinatorial trickiness: sets and runs. And I'd feel good about the result if the system could automatically generate AI players of different skill levels and dispositions based off the rules. I mean, given this set of rules, you could have an "aggressive" player who tries to do this and this.

Paul Tarvydas 2022-08-30 09:05:13

do we really need a visualizer?

IMO, the question about "visualization" might be rhetorical - visualization of what?

Debugging belongs to the realm of creativity, not of clockwork engineering.

Debugging is iteration. Debugging is notation-specific. Debugging is paradigm-specific. For example, if you design a system using, say, the OO paradigm, you want a debugger that shows you the OO-iness of your code, not, say, machine-level single-stepping from a completely different paradigm.

Kartik Agaram's Handmade-network video reminds me of a long-standing je ne sais quoi I've had about GVL. Kind of a projectional-editing REPL (based on simplistic-SVG?).

Mariano Guerra 2022-08-31 16:11:40

What are alternative solutions to variables and scopes? Is there a proven abstraction that end users easily understand?

Personal Dynamic Media 2022-08-31 16:36:55

Brian Harvey used to say that dynamic scope is what you get if you don't think about scope, so that's why it is easier for beginners to understand. But it still involves variables and a type of scoping.
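
A minimal sketch of the contrast Harvey is pointing at. Python is lexically scoped, so the dynamic lookup is simulated here with an explicit stack of frames; dynamic scope just means "search the call stack for the most recent binding of the name":

```python
# Dynamic scope resolves a name by walking the call stack at run time;
# lexical scope resolves it from where the function is written.
env_stack = [{"x": "global"}]

def lookup(name):
    # Search from the most recent frame down to the global one.
    for frame in reversed(env_stack):
        if name in frame:
            return frame[name]
    raise NameError(name)

def callee():
    return lookup("x")  # finds whichever binding is live on the stack

def caller():
    env_stack.append({"x": "from caller"})  # shadow x for the duration of the call
    try:
        return callee()
    finally:
        env_stack.pop()

print(callee())   # prints: global
print(caller())   # prints: from caller
```

"What you get if you don't think about scope": the callee sees whatever binding happens to be live when it runs, which is intuitive for beginners and surprising at scale.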

Maybe I'm suffering from a lack of imagination here, but I'm not sure how to easily perform abstraction without giving things names. Names are what help us humans remember the meaning and usage of a thing or a behavior. If one tried to create some form of graphical language where things were displayed but could not be named, I feel like the first thing people would ask for is the ability to use names so they're not stuck thinking about this thing and that thing and the other thing.

It did not take long for early programmers to invent so-called floating labels, allowing them to name pieces of code and data in memory. Even in spreadsheets, the ability to name cells and ranges makes formulas substantially easier to read.

cambridge.org/core/journals/mathematical-proceedings-of-the-cambridge-philosophical-society/article/abs/the-use-of-a-floating-address-system-for-orders-in-an-automatic-digital-computer/66DB2A4ACA578BB871B1B4A75352A6ED

Outside of computers, imagine trying to tell someone how to make a sandwich without using any names.

Jim Meyer 2022-08-31 16:45:48

I think we're bounded by human biology in what kind of scopes we can reason about. We're spatial creatures (2D/3D plus time).

I'm not aware of alternative solutions, but there are variations with important differences in their relationship to end users.

All scopes are essentially a set of nested spatial containers, but the spatial borders in traditional programming are functions and classes, which is where the problems start for non-coders.

The best example of spatial scopes that make sense for end users are spreadsheet rows and columns, which are much more natural. The variables "need a place to hang on the wall" in the end user's mind, and a function doesn't tick that box (it's a position in a text file, but essentially non-spatial).

From here I guess the remaining directions are 3D scopes or Graph scopes which is essentially the input/output model seen with node/flow-based programming.

Konrad Hinsen 2022-08-31 16:59:28

The alternative I am trying out myself in my Leibniz project (github.com/khinsen/leibniz-pharo) is no scopes, or if you prefer a single scope. To make this practical, code units must be kept very small, which actually helps to keep them understandable. That means: no "standard libraries" with tons of definitions that might one day be useful. Small bits of functionality must be explicitly included.

The inspiration for this is mathematical notation in textbooks and research articles. They don't have scopes. Every bit of notation, once introduced, is valid for the whole text.

Jonathan Arnett 2022-08-31 17:31:34

I've thought about creating a single-scope logic/"relational" language where "functions" are sets of rules about how the variables relate to one another. I guess it's not too terribly different from a database, per se, where variables are rows and rules are constraints. It's more inspired by Prolog, except that Prolog rules take explicit arguments.

I honestly have no idea if this is a good idea, and in all probability it's a bad one.

Chris Knott 2022-08-31 19:10:16

Wikipedia titles are globally unique, they just put the scope in brackets afterwards e.g. "Franz Ferdinand (band)".

Nick Smith 2022-09-01 02:58:31

The PL I'm designing doesn't have nested scopes. It's a relational programming language (Datalog-inspired); it's the only paradigm I know of where such a thing is possible (with some hard work!).

As a program gets large, the absence of a syntactic boundary (e.g. a file, or a code block) for limiting the places a definition can be accessed from becomes a problem. But I think it's an easily solvable one.

Variables, on the other hand, will remain essential for as long as humans use natural language.

Konrad Hinsen 2022-09-01 06:40:46

Chris Knott That looks more like an ad-hoc namespace than a scope to me.

Tom Larkworthy 2022-09-01 07:27:42

The old style was to declare all your variables in advance and/or have only a single global scope, which is extremely easy to understand, with the drawback of not scaling to large programs and of not handling temporary internal control-flow variables very well (internal loop variables have to go to the top). Still, pretty good IMHO if you want fast understanding of a snippet of code.

Chris Knott 2022-09-01 08:10:42

Konrad Hinsen yeah you are right.

I think scope in the sense of actively restricting the ability to talk about something from another context is not user friendly. Chris Granger discusses this in one of his talks, where he demoed Eve at a local event (at Dynamicland, I think). There were lots of non-programmers there. They couldn't understand why you could point to a deeply nested variable on the screen, but not just pull that value out and use it where you want.

I think the lack of scope in Excel (and autonaming of variables) is one of the reasons it is user friendly. It still has namespaces but you can refer to anything you can see (even across different files if you use a fully qualified path reference).

Konrad Hinsen 2022-09-01 10:27:41

Smalltalk doesn't have scopes either. Namespaces, yes: a global one (class names etc.), one per class for instance variables, and one per method for local variables, which are not allowed to shadow instance variables. I can't remember anyone complaining about the lack of scopes in Smalltalk.

Tyler Leonhardt 2022-09-01 16:38:00

I find scopes to be a useful abstraction. Not necessarily for the initial creation of a program. I believe they resolve two issues:

  • Single-user error on text entry: You may not intend to use a particular variable in certain contexts. Scopes are a useful way to make sure that a typo doesn't result in unintentional usage.
  • Multi-user idea communication: When designing large systems it is useful to hide certain details of the system, especially if a particular use would largely result in errors. For example, if a variable i is used multiple times in a single method to iterate through multiple lists, it is useful that different i s in different scopes are associated with different lists. It communicates to other developers that the "mental load" introduced by the variable need only relate to the matter at hand and can be ignored outside of that context.
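
The first bullet can be illustrated with a small sketch (all function names here are hypothetical, invented for the example): with function scope, a typo that reaches for a name from another context fails fast instead of silently reusing a stale value.

```python
def shipping_cost(weight_kg):
    rate = 4.5  # visible only inside shipping_cost
    return weight_kg * rate

def handling_fee(items):
    fee = 1.25  # a different, unrelated local
    return len(items) * fee

def buggy_fee(items):
    # Typo: meant `fee`. With function scope this raises NameError at
    # the point of use instead of silently reusing shipping_cost's `rate`.
    return len(items) * rate

try:
    buggy_fee([1, 2])
except NameError:
    print("caught the typo at the point of use")
```

In a single global scope the same typo would quietly compute a wrong answer, which is exactly the "unintentional usage" the bullet describes.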

In a similar way to dynamic vs typed languages you can get away without scopes with a little bit of discipline. Encoding the restrictions seems a useful way to communicate intentions of the code though. In traditional implementations it really doesn't put much burden on the author as types can in some cases.

Jason Morris 2022-09-02 16:10:11

By default, definitions are unbounded, or bounded only by document. Redefinitions are bounded, and only explicitly. Redefinitions can be referred to outside the boundary of the definition but only explicitly. "1. Minister means the Minister of Health." "2. In this section, Minister means the Minister of Revenue." "3. The Minister, as that term is defined in section 2." Is that "scope" or "namespace"? I'm thinking namespace?
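
The statute example maps neatly onto namespaces: a document-wide definition, an explicit section-local redefinition, and an explicit qualified reference from outside the section. A sketch (all names hypothetical):

```python
# 1. Document-wide definition.
document = {"Minister": "Minister of Health"}
# 2. "In this section, Minister means the Minister of Revenue."
section_2 = {"Minister": "Minister of Revenue"}

def resolve(term, section=None):
    # A section's explicit redefinition wins inside that section;
    # everywhere else the document-wide definition applies.
    if section is not None and term in section:
        return section[term]
    return document[term]

resolve("Minister")             # Minister of Health
resolve("Minister", section_2)  # Minister of Revenue
# 3. "The Minister, as that term is defineded in section 2" is just
# an explicit qualified lookup from outside the section:
section_2["Minister"]           # Minister of Revenue
```

The redefinition never makes the document-wide term inaccessible, which is why "namespace" fits better than "scope" here.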

Jason Morris 2022-09-02 16:13:12

I would say that approach is proven.

Chris Knott 2022-09-02 16:13:40

The distinction to me is that something does not exist at all outside of its scope, whereas outside of its namespace it just goes by a different name.

Scope is inherently confusing for an author who has an omniscient view of the program.

Chris Knott 2022-09-02 16:16:56

It can be useful when you are debugging in your head ("playing computer") because it reduces the amount of possible factors affecting the program, but this is fool's gold; the actual solution is to make the computer help with debugging, so people don't have to play computer in their head at all

Personal Dynamic Media 2022-09-02 16:32:06

Konrad Hinsen local variables and arguments in Smalltalk blocks are lexically scoped. That's what makes it possible to implement conditionals and iteration by passing a block to a method.

Tyler Leonhardt 2022-09-02 17:48:44

It can be useful when you are debugging in your head ("playing computer") because it reduces the amount of possible factors affecting the program, but this is fool's gold; the actual solution is to make the computer help with debugging, so people don't have to play computer in their head at all

I disagree with this, specifically the bold part. In almost every scenario the goal should be to get feedback as early as possible. Ideally you can look at a program and know what it does just like you can look at text in a book and know what it says. In many large programs it is difficult to run all of the code through a debugger, sometimes taking double digit numbers of minutes. For example, major games take minutes to compile, run, and load into maps.

There are certainly use cases where you can lean more on a debugger, like scripting. Even in these scenarios most developers prefer to be able to look at code and know what it does rather than have to run it through a debugger.

Chris Knott 2022-09-02 18:16:21

I think your concerns are about the current-of-programming, aren't they?

Yeah, I mean forget minutes - when I last worked in the games industry a full compile had to be done overnight. This is bad. I would be wary of basing philosophical positions on that though.

"If we adopt this language feature, compile times will be faster" is exactly the sort of tradeoff I'd classify as fool's gold.

Konrad Hinsen 2022-09-02 18:56:09

@Personal Dynamic Media In Pharo (the only Smalltalk I have experience with), there are no lexical scopes. Re-declaring an argument or a local variable in a nested block is forbidden (see screenshots).

Blocks passed into a method are a different story. Their local variables are invisible from the method that uses them, so I wouldn't call that lexical scopes either, but that's certainly debatable.

Personal Dynamic Media 2022-09-02 19:01:22

Konrad Hinsen thank you, I was unaware of that limitation/feature. I think I see your point now about how if you just forbid variables from ever being shadowed, programmers don't need to think about scope.

Tyler Leonhardt 2022-09-02 20:16:04

I think your concerns are about the current-of-programming, aren't they?

Yeah, I mean forget minutes - when I last worked in the games industry a full compile had to be done overnight. This is bad. I would be wary of basing philosophical positions on that though.

"If we adopt this language feature, compile times will be faster" is exactly the sort of tradeoff I'd classify as fool's gold.

I feel like this is putting words in my mouth. I'm making this argument for past, present, and future: it was true, it is true, and it will continue to be true. Looking at something and knowing it works is better than having to take extra steps to find out if it works.

Games are only used as an example. I've also done OS development where the same is true. I provided scripting as a counter example where maybe your argument is stronger: it's easier to run and debug scripts. I'd be curious if you have any realistic examples where people would prefer "[making] the computer help with debugging" over being able to "[debug] in your head" (I changed the gerunds in your quotes). I can't think of any. Seems like you always want to look at a program and know it works where possible and debugging only needs to come into the picture when that fails.

Tyler Leonhardt 2022-09-02 20:24:59

It's interesting if you take each argument to its extreme. I don't claim you are making one of these arguments, but they are interesting to think about:

  • A language which is easy to "run in your head" but has no debugger.
  • A language which is hard to "run in your head" but has a great debugger.

I think it's clear people would prefer the first bullet in most contexts. Though obviously a powerful debugger is an incredible tool for building better programs. I don't mean to degrade debuggers or claim they aren't useful. Rather, I think it's worth aspiring to improve what can be done in the compiler/interpreter input before considering improvements provided by a debugger. Truthfully, many of the tradeoffs may simply come out in difficulty of implementation. If it takes weeks to implement a compiler feature vs. days to implement a debugger feature that prevents a similar error, it's probably better to focus on the debugger.

All else being equal though, I believe it is better to "verify things by looking at them", as I put it, even if human brains are lossy. The debugger comes in when the human brain fails... that doesn't mean the human brain should be replaced by it entirely though. The brain is what you are thinking with. Anything else, like a debugger, requires us to use our much slower physical appendages to interact with it.

Jason Morris 2022-09-02 20:42:19

Hard disagree. Programs are complicated. Even if you can understand small parts well by looking at them, you have no possibility of seeing the implications of how they interact once they are beyond toy size. I'll take your second option hands down.

Chris Knott 2022-09-02 20:45:37

I agree that it would be better to be able to do it in your head but I think it's impossible. Even the simplest things are already way beyond un-aided human brain processing power.

Consider the Mario example from Inventing on Principle youtu.be/PUv66718DII (from ~13 min, specifically the feature demoed from 13:55). It's basically just solving a quadratic equation but pretty much impossible (for me at least!) to do in your head.
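
For reference, the piece of the Mario demo that's "just a quadratic": given launch velocity and gravity, landing time is the positive root of h0 + v0*t - g*t^2/2 = 0. A sketch of the arithmetic the tool does instantly and most of us can't do mid-edit (function names are mine, not Victor's):

```python
import math

def landing_time(v0, g, h0=0.0):
    # Positive root of h0 + v0*t - 0.5*g*t^2 = 0: when a jumper launched
    # upward at v0 from height h0 returns to ground level.
    return (v0 + math.sqrt(v0 * v0 + 2 * g * h0)) / g

def landing_x(x0, vx, v0, g):
    # Horizontal landing position: the thing the demo draws as a curve
    # that updates live as you edit the physics constants.
    return x0 + vx * landing_time(v0, g)
```

Trivial for the machine, yet exactly the kind of computation a human "playing computer" has to grind through symbolically.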

Jason Morris's project is a "debugger" of sorts for laws, which are generally less complicated than computer programs.

Tyler Leonhardt 2022-09-02 20:55:26

Hard disagree. Programs are complicated. Even if you can understand small parts well by looking at them, you have no possibility of seeing the implications of how they interact once they are beyond toy size. I'll take your second option hands down.

How would you even know how to write the program in the second bullet? A language could be so hard to use that it's infeasible to get a program which is even debuggable. At least in the extreme case.

I agree that it would be better to be able to do it in your head but I think it's impossible.

I agree that it's impossible in many situations. I even agree that small programs can be difficult to get right. You can't know it's right until you run it. But ideally you can get it as close to right as possible before running it, so that debugging time is minimized.

Again, I'm not against debuggers, and all code written should be run and tested so you can verify it is correct. It's just that the previous claim is too extreme for me to agree with. It certainly isn't "fool's gold" to construct better models that people can "debug in [their] head":

It can be useful when you are debugging in your head ("playing computer") because it reduces the amount of possible factors affecting the program, but this is fool's gold.

Jason Morris 2022-09-02 20:59:53

There is no extreme case. People create languages that are harder to use, on purpose, for fun. Humans are weird like that.

Tyler Leonhardt 2022-09-02 21:03:06

There is for the sake of argument 😛 but I agree it's a weak argument. The reason it's interesting to think about, though, is that it becomes clear that there is some limit on program understandability that is important. It is impossible to ignore the brain. There is no such limit on debuggers, though. You don't need one. You could get by with printf and just running the whole program even if you don't want to.

Chris Knott 2022-09-02 21:11:57

To restate my point: people have been trying to create languages that are easier to write correctly for a long time, with comparatively little success, whereas less effort has been put into omniscient/time-travel debugging, program visualisations, etc.

Perhaps there is a theoretical programming language possible that brings the power of the computer to 99% of people, but I can't even conceive of what that would be like, whereas I can conceive of theoretical (but impossible-at-the-moment) tools which make programming more like building with your hands. Bret Victor's work has (faked) examples of these sorts of tools.

Jason Morris 2022-09-02 21:12:28

I think we disagree about which of these two options is "ignoring the brain". Brains are very good at using language, and very bad at internally modelling the behaviour of complex systems they can't observe.

Tyler Leonhardt 2022-09-02 21:14:28

To restate my point; people have been trying to create languages that are easier to write correctly for a long time, with comparatively little success, whereas less effort has been put into omniscient/time travel debugging/program visualisations etc.

I agree much more with this framing of the point. I really take issue with calling the effort "fool's gold" though.

Separately, I'm not so sure folks have had "little success" in the context of "a long time". I think folks had a lot of success at first, but it slowed down considerably over time. I made a previous point about a tradeoff between verifying by "looking at a program" vs verifying by "debugging", and I think there is a valid argument that we've gotten all the low-hanging fruit from the first and underinvested in the second.

Tyler Leonhardt 2022-09-02 21:16:25

As a specific example, I think structured programming had a pretty considerable impact on understandability, in ways that are more significant than similar debugging improvements made at the time... It's been a while since we've gotten anything as impactful as structured programming though.

Jason Morris 2022-09-02 21:21:38

In looking vs. debugging, which is type safety?

Tyler Leonhardt 2022-09-02 21:26:14

Yeah, I was thinking that was missing from this discussion. It's interesting. Somewhere in between. There are almost three levels you want to consider things at:

  • How easy is it to understand ā€œjust lookingā€ (human only)
  • How easy is it to understand with automated verification (machine only)
  • How easy is it to understand with a debugger (human and machine)

Tyler Leonhardt 2022-09-02 21:27:13

I'd even argue that some of the verification methods impose complications in the program text that make the first harder. Complicated type systems can sometimes place a burden on the programmer.

Jason Morris 2022-09-02 21:32:19

I divide it primarily into things that seek to make errors impossible, things that seek to make errors easier to discover, things that make errors easier to diagnose, and things that make them easier to repair.

Jason Morris 2022-09-02 21:33:31

E.g. type safety, fuzzing, debugging, and clear syntax.

Jason Morris 2022-09-02 21:34:27

I find "impossible" and "easy to repair" to be usually mutually incompatible.

Tyler Leonhardt 2022-09-02 21:37:59

Yeah, I feel like this is true in many contexts. There is definitely a balance between them.

Jim Meyer 2022-09-03 06:27:12

A UX design tool has one job: Materialize the designer's intent as working code.

Everything the tool does is in service of refining that intent (ideation, exploration, validation).

A programming tool has one job: Materialize the user's intent as working code.

UX design tools are programming tools.

UX tools today care very little for code. But they should, and it needs to be at their core.

Garth Goldwater 2022-09-03 16:37:27

I'd also make the argument that programming tools do very little in the opposite direction: they're much worse at communicating intent. And they're obviously visually weak.

Jim Meyer 2022-09-03 17:00:47

Garth Goldwater Trying to fix that 😊 See 💬 #share-your-work@2022-07-28T11:20:10.210Z on how we're able to let designers communicate their intent by visually editing code. Working UI code that uses real production components is a much more precise way for designers to communicate their intent than having their developer counterparts inspect their Figma/Sketch/XD vector-graphics files.

[July 28th, 2022 4:20 AM] jimkyndemeyer: UI code was always meant to be edited visually at 60 FPS on a canvas in a code-native design tool.

Garth Goldwater 2022-09-03 18:08:03

oh yeah, i think i saved that gif lol