
Dalton Banks 2021-11-02 00:04:07

A thought that’s been crystallizing for me is that the essence of ‘coding’ is modeling & simulation (not e.g. data and functions). These themes show up all the time in FoC contexts, but as far as I can tell they’re rarely the ROOT metaphors of a system.

Of course there are plenty of examples in “real engineering,” in what Alan Kay calls a CAD<->SIM->FAB system. Do you know of examples of ‘convivial computing’ projects where modeling and simulation are the main event, or readings on the topic? What do you think of this premise?

Here’s a recent Quora answer for more context: Does Alan Kay see any new ideas in computing?

“New” is not what I look for. “Ideas that make a qualitative difference over past techniques” are what I’d like to see.

Years ago, I’m fairly sure I was aware of pretty much everything regarding computing that was going on in the world. Today, I’m definitely not aware of everything, so it’s reasonably likely that if something really great were being done somewhere, I wouldn’t know about it.

I would be most interested in learning about “qualitatively more expressive” programming that is more in line with top-level engineering practices of the CAD<->SIM->FAB systems found in serious engineering of large complex systems in the physical worlds of civil, electrical, automotive, aeronautical, biological, etc. engineering.

In the CAD<->SIM part I’d like to see the designs understandable at the level of visualizable semantic requirements and specifications that can be automatically simulated (on supercomputers if necessary) in real-time, and then safely optimized in various ways for many targets.

Isolating semantics in the CAD<->SIM part implies that what is represented here is a felicitous combination of “compact and understandable”.

The FAB-part pragmatics are very interesting in their own right, and besides efficiencies, should be able to deal with enormous scaling and various kinds of latencies and errors, etc.

The above would be the minimal visions and goals that I think systems designers within computing and software engineering should be aiming for.

I’m not aware of something like this being worked on at present, but these days this could be just because I haven’t come across it.

Chris Granger 2021-11-02 01:29:21

Simulink, yakindu, and the lifecycle modeling language (LML) folks all come to mind.

Chris Granger 2021-11-02 01:30:05

And of course SysML

Chris Granger 2021-11-02 01:30:23

Though I think LML is a better approach

Dalton Banks 2021-11-02 02:00:38

Thanks Chris! I guess I still see those tools in the CAD/SIM/FAB camp, at least Simulink and yakindu, though I prob need to make another category for “business processes” (which I still don’t think the UML family has done a great job solving). For me ‘convivial computing’ intends something I might use to build/edit my own tools (notes, todos, etc) in a live environment.

Dalton Banks 2021-11-02 02:02:55

Huh i hadn't taken a close look at LML though, that does look worth knowing

Kartik Agaram 2021-11-02 14:11:39

Also Netlogo which is getting discussed at https://futureofcoding.slack.com/archives/C5U3SEW6A/p1635772261029400?thread_ts=1635772261.029400&cid=C5U3SEW6A

Dalton, are you suggesting using simulation metaphors for things like programming email filters? 🤔

Would HyperCard and HyperTalk fit the bill?

[November 1st, 2021 6:11 AM] mornymorny: An interesting little application; from what I can tell written in a combination of Logo and Lisp. You can interactively edit the UI, etc. Hit the 'Setup' then the 'Go' button to run the code (the source itself is hidden by the scrollbar in the main window). https://codap.concord.org/app/static/dg/en/cert/index.html#shared=https://cfm-shared.concord.org/iGCemjqgrcHduADPwtMM/file.json (Oh, and it is a virus/Covid simulator)

Dalton Banks 2021-11-04 15:22:07

this will probably sound overly philosophical but i’m not trying to be.. just what seems to happen when i try to coherently define terms.

first off, science & engineering are almost always based on the assumption that we share a common ‘reality,’ which we come to understand through our band-limited, unreliable faculties of sensing and cognition. i will adopt that assumption throughout. studies and experience show our mental models to be wildly inconsistent, both internally (‘verification’) and when measured against reality (‘validation’) or against other people’s (‘coordination’), but they form the basis of pretty much every decision we make.

my best working def of a model is ‘something that represents a partial world state’

  • ‘something’ = has to exist in some form to be useful, whether encoded in a tangible object, a pencil & paper sketch, computer memory, neural circuitry, etc.

  • ‘represents’ = ultimately in the eye of the beholder; requires some pre-shared bootstrapping model/implementation to be useful (e.g. among humans we have near-universal experiential primitives like ‘dark/light’, ‘hot/cold,’ etc)

  • ‘world’ = all of reality

  • ‘partial world’ = some subset of reality

  • ‘state’ = some configuration of that subset of reality (with implicit or explicit precision/likelihood)

representation is kind of subtle even beyond coordination (alan kay’s ‘communicating with aliens’)... perhaps the most basic representation is a simple ‘reference’, which still encodes the assumption that there’s a ‘something’ that’s persistent and recognizable on the other end (‘object permanence’). i don’t see a flaw in saying ‘pointers’ are the most basic form of stateful cognition (and subject to the same foibles as c pointers... is the referent still there? can it still do the same things? does it still have the same properties? has it been replaced by an evil twin?)

models can serve a bunch of different roles. e.g. a model can communicate an observation of what i claim the current world state to be, an instruction representing the world state i want a system to produce, or an imagined scenario to reason about.

‘simulation’ is a bit slippery; to me the useful primitive to start with is ‘an ordered sequence of models,’ in which case simulation is something like ‘an ordered sequence of models representing the time evolution of a particular model according to some update rule’
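to make that definition concrete, here’s a minimal Python sketch (all names here are made up for illustration, not from any particular system):

```python
# A 'simulation' as an ordered sequence of models produced by
# repeatedly applying an update rule to an initial model.
# A 'model' here is just any value representing a partial world state.

def simulate(model, update_rule, steps):
    """Return the ordered sequence of models over `steps` updates."""
    sequence = [model]
    for _ in range(steps):
        model = update_rule(model)
        sequence.append(model)
    return sequence

# Example: a trivial 'world' whose state is a single population count,
# with a fixed doubling rule as the update rule.
history = simulate(100, lambda population: population * 2, steps=3)
print(history)  # [100, 200, 400, 800]
```

the point of the sketch is just that ‘simulation’ adds nothing beyond a model, a rule, and an ordering.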

one could think of science as ‘a process for finding models that best represent the world and finding rules for updating them that predict future world states’, design as ‘a process for defining models of how we want the world to be,’ and engineering as ‘a process for implementing rules (science) in order to achieve desired world states (design) based on current world states (science).’ ‘programming’ tends to muddle all 3 together, whether explicitly or (usually) implicitly.

all that said, yeah, Kartik Agaram programming email filters is squarely in the realm of what i’m talking about. you start with a model of reality that also includes your computing environment - your machine, its OS, your browser, the mail server, etc - and you can always drill down from whatever abstraction you’re dealing with if needed. you define a model for what unfiltered email is like (‘science’), a model of what you want your filtered email to be like (‘design’), and come up with (science) & implement (engineering) rules you think will achieve that. simulation is a powerful tool to aid in the ‘coming up with and implementing rules’ part (‘what happens if i project this rule on this inbox model over time?’). to close the loop you also want some nice tools to see if it’s working how you want it to. netlogo enables some of this in a very ‘science-y’ not ‘user-y’ context. hypercard gives you some nice tools for very ad hoc experimentation.
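a toy version of that loop, with a hypothetical inbox model and filter rule (everything here is invented for illustration):

```python
# Toy sketch of the email-filter framing: an inbox model ('science'),
# a desired state ('design'), and a candidate rule ('engineering')
# that we try out against the model before touching a real mailbox.

inbox_model = [
    {"sender": "alice@example.com", "subject": "lunch?"},
    {"sender": "deals@spam.example", "subject": "BUY NOW"},
    {"sender": "bob@example.com", "subject": "draft attached"},
]

def looks_like_spam(msg):
    """The candidate rule: flag anything from the spam domain."""
    return msg["sender"].endswith("@spam.example")

# 'Simulate' the rule on the model, then check the result against the
# design goal (no spam left in the filtered inbox).
filtered = [m for m in inbox_model if not looks_like_spam(m)]
assert all(not looks_like_spam(m) for m in filtered)
print([m["subject"] for m in filtered])  # ['lunch?', 'draft attached']
```

the real work is in how faithful `inbox_model` is to your actual mail, which is exactly the ‘science’ part.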

wavelength check?

Duncan Cragg 2021-11-04 16:40:56

written before I saw the above

Absolutely! For me personally, "the essence of ‘coding’ is modeling & simulation" is exactly where I've come from in everything I've done in this space.

I wouldn't just cross out "e.g. data and functions" as a result though - you can't exactly throw those out! Even in the most modelly and simulatey world, you'll need to present data to the user to hold current states. And the behaviours of those states are basically going to boil down to something like functions, even if presented more abstractly in pretty graphics. I mean, even Excel - the financial modeller/simulator - has those!

For me, programming is creating new realities or simulating existing ones. I often refer back to the early Macs which introduced to the world a wordprocessor program that made the page actually look like the printed page you'd end up with, instead of glowing text floating on a black background that bore no relation to it. This is modelling or simulation of printed paper.

In social media or chat you're modelling or simulating the relationships between people: their social graph. You're simulating them talking (or perhaps passing little paper notes to each other!)

Of course, 3D virtual worlds and Augmented Reality are the extreme of this position, as is the programming of IoT devices.

Dalton Banks 2021-11-04 17:58:51

i’m tracking. to try out my terminology (i know you wrote this before reading), i’m def not throwing out data and functions but kind of putting them in their place. ‘data’ is the encoding of models in some reconstructable form. ‘functions’ are rules for transforming models (very useful in simulation or representation). representation is a separate issue.. can be whatever makes the most sense in context: text, diagrams, interactive widgets, spatial audio. a missing concept here is ‘linking models together.’

all good examples of modeling and simulation, though i wouldn’t say 3D VR is necessarily the extreme.. e.g. you can have rich representation capabilities but very poor modeling & simulation capabilities. this is partly why a lot of MMOs despite flashy graphics struggled to achieve the immersive quality of text-based MUDs.

IMO it’s getting easier & easier to hop around these layers while programming, and it can be hard to keep track of where the lines are drawn, to the detriment of comprehension.

Duncan Cragg 2021-11-04 21:51:30

On the new post, I agree with everything up to the email filter bit, which I'm still digesting but it's not resonating as yet!


Felix Kohlgrüber 2021-11-02 20:06:05

Hi folks!

It's been a while since my last post in this group, and it feels good to be back with some new FoC-related thoughts: I've been thinking about the tree structure of file systems recently and it turns out that they're limiting and require workarounds for relatively common use cases. Files contain data, but don't have children. Folders have children, but can't store data themselves. What if a file system had "nodes" that could store data AND have children?
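A sketch of what such a node might look like (a hypothetical in-memory structure, not any real file-system API):

```python
# Hypothetical file-system 'node' that, unlike a plain file or folder,
# can hold data AND have named children at the same time.

class Node:
    def __init__(self, data=b""):
        self.data = data      # the node's own contents (like a file)
        self.children = {}    # name -> Node (like a folder)

# A 'file' is then just a node with no children, and a 'folder' a node
# with empty data -- but nothing forbids having both at once:
root = Node()
root.children["index.html"] = Node(b"<html>...</html>")
blog = Node(b"metadata about the blog as a whole")
blog.children["post1.md"] = Node(b"# First post")
root.children["blog"] = blog

print(sorted(root.children))  # ['blog', 'index.html']
```

In this framing the `index.html` workaround disappears: the `blog` directory can carry its own data directly.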

I've written a blog post about this and would like to hear your thoughts. As English isn't my first language and writing isn't my strong suit, I'd be interested in feedback on the content as well as the general writing style. Thanks in advance and looking forward to interesting discussions!

https://fkohlgrueber.github.io/blog/tree-structure-of-file-systems/

William Taysom 2021-11-03 06:57:18

index.html — not to be facetious, and I see you write about it in the article. Of all filesystem woes, data stored in a directory proper is not one that comes to mind.

One thing that does come to mind is treating a directory as though it were a file. I mean macOS packages. https://en.wikipedia.org/wiki/Package_(macOS)

Felix Kohlgrüber 2021-11-03 08:12:31

I didn't know about macOS packages, thanks for the hint!

Andrew F 2021-11-03 16:21:22

HTML itself is another thing that has this "data at every level" structure, in the form of attributes. It's everywhere.

FWIW I think the more interesting problem with filesystem trees is that they're rigidly tree shaped, when tags or 2d tables might sometimes be more useful. :)

Felix Kohlgrüber 2021-11-03 16:35:45

Andrew F yes, file systems being limited to tree structures is an issue. Often, something more general (e.g. a graph) would make more sense. That's something for another blog post some day 😉

William Taysom 2021-11-04 03:02:33

If only FSs were honest-to-goodness trees. Hard links make for big headaches.
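For anyone who hasn't run into this: a quick POSIX-flavored illustration of why hard links break the tree picture (paths here are temp files created on the spot):

```python
# Hard links mean the 'tree' is really a DAG: two directory entries
# can name the same underlying file (inode).
import os
import tempfile

d = tempfile.mkdtemp()
a = os.path.join(d, "a.txt")
b = os.path.join(d, "b.txt")

with open(a, "w") as f:
    f.write("hello")
os.link(a, b)            # b is now a second name for a's inode

with open(b, "a") as f:  # append through the second name...
    f.write(" world")

print(open(a).read())    # ...and the change shows through the first
assert os.stat(a).st_ino == os.stat(b).st_ino
```

So any tool that walks the "tree" summing sizes or copying contents can count the same file twice, which is the headache.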