2020-05-03 19:41:28 Unknown User:

MSG NOT FOUND

2020-05-04 00:56:09 Kartik Agaram:

But the export didn't have a limit, last time I looked. Mariano Guerra are you using the export or the API?

2020-05-04 14:38:59 Ivan Reese:

Mariano Guerra — I could also make you a Slack admin, since that might help you have access to more history. Let me know.

2020-05-04 16:23:58 Mariano Guerra:

I guess I'm using the API (a Python script); not sure what the difference is 😄

2020-05-04 18:19:13 Kartik Agaram:

The export is a straight-up download of a large-ish zip file. You'll be able to see it on the top-right at https://futureofcoding.slack.com/admin/settings if/when you become admin.

2020-05-04 07:42:36 Shalabh Chaturvedi:

Reminder this starts Monday: https://futureofcoding.slack.com/archives/C5T9GPWFL/p1587478348097300

I copied the events into a public Google calendar: https://calendar.google.com/calendar/embed?src=a5d6rkt4qd1u5je0339c3apib8%40group.calendar.google.com&ctz=America%2FLos_Angeles

2020-05-04 09:12:51 Chris Knott:

I tried to join but can't hear anything

2020-05-04 09:14:45 Chris Knott:

nvm - wrong timezone 🙈

2020-05-04 13:28:44 Konrad Hinsen:

The program says "London". For those not intimately familiar with European time zones, that's currently UTC+1.

2020-05-04 14:08:24 Shalabh Chaturvedi:

D'oh, I got it wrong then. Edit: OK, fixed the calendar now.

2020-05-04 16:24:21 Tom Lieber:

Thanks for the calendar!

2020-05-04 18:49:45 Doug Moen:

Just watched the first session. Are there links posted to the text versions of the papers? I thought I saw a link earlier, but can't find it now. Also, if these presentations later go on youtube, then please post links.

2020-05-04 08:09:46 Mariano Guerra:

📰 The Future of Coding Weekly Newsletter is out, but most importantly, it's the first issue where I link to the conversations in the public static archive. More important than that is that I archived all conversations up to the point where Slack allows me (Oct 11th 2019) and they are searchable/exportable in the new search page: https://marianoguerra.github.io/future-of-coding-weekly/history/

2020-05-04 11:42:22 U01361BMJS0:

Hey everyone. This is Doug from Seattle. I work at AWS, writing Go code for the docs.

2020-05-04 13:11:09 William Taysom:

On the #introductions channel tell us a little about your interests, and we'll be sure to comment. It's one of our conventions.

2020-05-04 16:25:05 Tom Lieber:

Thanks for the Go examples! 🙂

2020-05-04 14:31:09 Brent:

Is anyone aware of a similar community to this; with a robotics && || AI focus?

2020-05-04 14:33:01 Ivan Reese:

I would imagine that there are some good subreddits on those topics, since they're fairly mainstream.

2020-05-04 14:38:37 Brent:

Yes, indeed there seems to be plenty around there. I really like the volume-to-quality ratio in here; I don't quite get the same vibe on the larger forums 🙂

2020-05-04 14:40:50 Ivan Reese:

Thanks! We're making a real effort to reflect on and solidify around good community norms (mostly in #meta), so it's nice to hear that you like it here!

2020-05-04 19:54:59 Roben Kleene:

I agree, one of the things this community is really strong on is depth. Just spitballing for a minute here: I think one of the things almost everyone in this community shares is that almost none of us are happy with the status quo. This makes for great conversation, because in many other discussion forums, it's almost impossible to break the conversation out from "how things are" to "how things could be", whereas that happens very naturally here.

2020-05-04 19:56:08 Roben Kleene:

I'd love to find more communities focused on how things could be. E.g., is there a design community that's not focused on "what's the best design tool today", but "how could we make a better design tool"?

2020-05-04 16:19:23 Doug Moen:

I am only too familiar with internet security flaws. However, it is not the basic protocols of the internet that are at fault, but how they are implemented. In order to understand the cause of internet security bugs, you need to look at the actual bug reports, and at the code where the bug occurs. See cve.mitre.org and the Twitter feed @CVEnew.

The majority of these bugs are the result of coding in an unsafe language. When I was working in the network security industry, the unsafe language was usually C, and the bugs were buffer overflows, use-after-free bugs, etc.: these are vulnerabilities that would be prevented by coding in a safe language. Today, a lot of network-exposed code is written in higher-level languages than C, but the code continues to be full of security holes. It's not commonly understood why these higher-level languages are unsafe.

I'm going to arbitrarily look at today's latest CVE, which is https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-12641. This bug "allows attackers to execute arbitrary code via shell metacharacters in a configuration setting for im_convert_path or im_identify_path". The bug is a result of executing bash code, and bash is an inherently unsafe language (i.e., it is not purely functional). The problem here is that you can embed "$(cmd)" sequences in any bash argument, and this will execute a command that can have arbitrary side effects. In a pure functional language, evaluating a string-valued expression cannot have side effects.
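
Sketched in Python rather than bash (a minimal illustration with made-up paths, not the actual Roundcube code): the injection arises whenever a string is handed to a shell for evaluation, and disappears when arguments are passed around as inert data:

    import subprocess

    # Attacker-controlled configuration value, in the spirit of the
    # CVE's im_convert_path setting (hypothetical):
    im_convert_path = "/usr/bin/convert $(touch /tmp/pwned)"

    # UNSAFE: shell=True hands the whole string to a shell, which
    # performs $(...) command substitution before convert ever runs.
    subprocess.run(im_convert_path + " in.png out.png", shell=True)

    # SAFER: an argument vector goes to the OS directly; no shell
    # parses it, so the metacharacters are just bytes in a filename.
    try:
        subprocess.run([im_convert_path, "in.png", "out.png"])
    except FileNotFoundError:
        print("no such executable; the injected command never ran")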

2020-05-04 20:14:49 Shubhadeep Roychowdhury:

How were dev tools created?

What kinds of tools did the first generation of programmers use?

How did they evolve?

We often have these questions in mind; here is a nice article that gives a little glimpse into them. - https://medium.com/codist-ai/dev-tools-are-dead-long-live-the-dev-tools-66a4a7f0d91a

Feel free to give us any feedback you may have.

2020-04-29 18:53:49 Unknown User:

MSG NOT FOUND

2020-05-04 21:02:11 Niko Autio:

Jared Windover you mean "Layout API"? https://houdini.glitch.me/layout

2020-05-05 12:56:04 Jared Windover:

Thanks Niko Autio! I had found houdini, but I hadn’t seen that layout page.

2020-05-05 03:20:01 Kartik Agaram:

Edwards, Kell, Petricek and Church, "Evaluating programming systems design", Psychology of Programming Interest Group

https://alarmingdevelopment.org/?p=1358

Basically they're asking the following question: Assume we start writing interactive essays as the output of research. How does the review process change? How do we maintain academic rigor?

2020-05-05 10:25:08 Mariano Guerra:

Ask yourself: When’s the last time you used an app, or visited a website, that was made by an actual individual person? How many of the tools you use at work, or apps you spend time on for fun, come from a community that you're part of? If you’re a coder, when’s the last time you just quickly built something to solve a problem for yourself or simply because it was a fun idea?

https://www.linkedin.com/pulse/code-great-heres-why-we-need-yes-anil-dash

2020-05-05 10:25:42 Mariano Guerra:

That human web has disappeared because it got too hard to just create things on the web. Building and sharing an app should be as easy as creating and sharing a video

2020-05-05 10:42:32 Konrad Hinsen:

Last time I visited a Web site made by an individual? Right now, 15 minutes ago (reading a blog post). During the day it's rarely more than a few hours between such visits. Tools made by individuals are a different story. I use a few at work, but those are very specialized research tools.

2020-05-05 13:05:11 Martin Sosic:

Thanks for sharing Mariano Guerra, this got me super excited since I am working on this exact problem 🙂 (with a DSL approach at the moment)! I agree that a lot of the web is already there (Medium, WordPress, Shopify, Wix, Bubble, ...), but as Konrad Hinsen said, if we want to build a tool / actual app, that is a different story. That is in part due to tools/apps just inherently being more complex, which we can't avoid, but there is certainly still a lot of space to make it simpler than it is right now (removing accidental complexity).

2020-05-05 14:37:59 Roben Kleene:

One of the great mysteries about computers is why so few programmers write programs to speed up their own workflows. Sure, Emacs is a thing, and a lot of people do a bit of customization with their shell. But the vast majority of programmers I've met and worked with do not do this, and effectively write zero code to help with their own workflows. You can see this in the numbers by looking at the meteoric success of Visual Studio Code (arguably the most popular GUI programming tool in history), which is much harder to customize than, say, Emacs, Vim, or Atom.

2020-05-05 14:50:01 Roben Kleene:

About web publishing, I don't think that web publishing has gotten more difficult, in fact the opposite is true: It's gotten far easier. The reason that there are so few handmade websites is twofold: One, those websites have been dwarfed by the amount of web content that's now published by people without the technical expertise to do traditional web publishing. So while traditional web publishing has not gotten more difficult, other forms of web publishing have gotten far easier.

2020-05-05 14:51:16 Roben Kleene:

The other reason is simply that in order to promote content today, you need to leverage network effects, and self-published content makes that harder.

2020-05-05 15:43:17 Martin Sosic:

Roben Kleene hm, well the answer regarding writing programs for our own workflows might be pretty simple: it is always lower priority than the project we are working on, and it provides long-term benefits, whereas solving the immediate task provides short-term ones (and we naturally go for the short term).

2020-05-05 16:29:13 Roben Kleene:

I don't disagree that that's a factor, but it feels incomplete to me. For example, you could say the same about learning new programming languages or new frameworks. But I don't find programmers particularly averse to doing that; they often actually find it exciting. If I were to rank programmer preference for how to invest in longer-term gains, I'd put them in this order:

Learn a new language or framework > Learn a new tool (e.g., IDE or text editor) > Write code to solve their own problems

The first thing I'd say that ordering reveals is that branding works: languages, frameworks, and tools all have branding, while writing a script doesn't.

2020-05-05 16:30:47 Roben Kleene:

I actually think that the reason programmers don't write code to solve their own problems, is that broadly speaking, programmers (and people in general) don't feel deficiencies in their tools. Most people don't really care if something takes 30 steps as long as it gets done. They don't really worry about whether that could be done in one step, if they already have a way to get it done.

2020-05-05 21:34:46 Dan Cook:

This article is how I feel generally about programming and programming languages.

What "no code" offers is ready-made components that are easy to compose. But that can be done with (well disciplined) code and libraries all the same. The difference is that you typically have to do that composition in the midst of a lot of programming and language boilerplate / noise. But if that were cleaned up (e.g. provide a clean interface or declarative view) for the components you want, then you can have that "no code" feel on top, but still have all the power to customize and program what goes into it.

It's like creating a collection of functions (a library) that gives you all the basic tools you need for something simple, with nothing stopping you from also writing new functions or selectively mixing some of those with your own code.
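
A minimal sketch of that shape in Python (all names made up): a few ready-made "components" behind a small declarative surface, which stays open to ordinary code:

    def pipeline(*steps):
        """Compose single-argument functions left to right."""
        def run(value):
            for step in steps:
                value = step(value)
            return value
        return run

    # Ready-made components a library might ship:
    strip, lowercase, words = str.strip, str.lower, str.split

    # The "no code" feel: pure declarative composition...
    normalize = pipeline(strip, lowercase, words)

    # ...with nothing stopping you from mixing in your own function:
    def drop_stopwords(ws):
        return [w for w in ws if w not in {"the", "a", "an"}]

    tokenize = pipeline(strip, lowercase, words, drop_stopwords)
    print(tokenize("  The Quick Brown Fox  "))  # ['quick', 'brown', 'fox']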

2020-05-06 08:07:31 Martin Sosic:

Roben Kleene That is a good point, I would also say people prefer learning new stuff to writing their own helper tools.

But besides branding, I think there is another big factor: learning is easier than creating something. While learning, I read the materials, follow the instructions, and play with the tech. There are no hard decisions, and I know I am doing something good: I am improving myself, I am following instructions, and I am making myself more valuable on the job market.

Creating a tool / writing a script, on the other hand, means I have to make a lot of decisions (how will it work, how will I do it), I have to do research on my own without clear guidance, and finally, I don't know what the outcome will be: I might fail in building it, and I am not really sure what the impact on my value on the job market will be.

Not to mention that if you are working in a well-developed ecosystem, the "easy" tools are usually already there, so you feel the need to build tools less, and what is left to be done is harder and more complex, maybe so much so that you don't think it can/should be solved.

2020-05-06 14:52:35 Roben Kleene:

Great points about learning and the career-boosting potential of learning languages and frameworks. I definitely agree those are major factors; I'd say you're right, and career-boosting potential is probably the #1 reason developers learn new languages and frameworks.

About "creating a tool / writing a script", I'm not really talking about projects that are the size and scope of what you're talking about. E.g., there's pretty clear progression for learning to automate:

  1. You might learn about Bash aliases, customizable keyboard shortcuts in an IDE/text editor, using a macro utility like Keyboard Maestro on macOS, or installing a shell utility like ripgrep or rupa's z.
  2. You start writing your own Bash scripts and functions, maybe write something with AppleScript or Automator on macOS.
  3. You write your own text editor extension, browser extension, or shell program.

The problems you're talking about only start to show up in #3, whereas in my experience, the majority of developers never even do #1.
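
For concreteness, a #2-level sketch in Python (a made-up workflow; assumes a main branch and an editor CLI such as VS Code's code on the PATH):

    import subprocess

    # Open every file changed on the current branch in the editor.
    changed = subprocess.run(
        ["git", "diff", "--name-only", "main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    if changed:
        subprocess.run(["code", *changed])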

2020-05-06 16:14:01 Martin Sosic:

Ah, I see what you mean Roben Kleene! You are right, I was actually thinking more about #3, as you nicely analyzed. I actually thought that many/most devs do stuff from #1 and #2, but now that I think about it, I realize my examples come from a limited circle of people and are probably not representative. One thing I did notice with people I worked with is that they are not using advanced keybinding systems like those that Emacs or especially Vim offer, which I always feel is a shame since they are so fun/cool/productive. But OK, I assume that is due to the learning curve plus the perception that Vim and Emacs are old, and so are the keybindings.

2020-05-06 21:21:22 Roben Kleene:

Yeah, I'm definitely basing my points anecdotally on my particular circle of colleagues as well.

2020-05-07 10:35:36 Stefan Lesser:

Interesting discussion! I mainly avoid scripting or customization for this weird reason: it feels like tight coupling to me, which is what I try to avoid in programming and system design as well. If I use certain configuration or scripting mechanisms, it initially feels like I have much more control over the system than I have over a system that doesn’t provide these. You don’t notice that it’s an illusion until the system changes dramatically or goes away. In a way I could also describe it as “I want to use a product that has been well designed in the first place, not a mediocre system that only becomes a great one after I put the effort in to customize it.” There are things I want to be involved in building, but there are also things I just want to work without me having to make them work first.

That’s why I value good defaults over configurability and extensibility. I’d rather use a product that is somewhat limited in features, if it chose good defaults, over a product that is more configurable but starts with poorly chosen defaults, where I have to spend time setting it up the way I want first. I don’t know exactly where this comes from, and I know way more people — especially in our community — who value configurability more and don’t care about defaults that much. So I know I’m closer to the weird end of the spectrum — is it just me, or are there others who know what I’m talking about?

I’m still trying to get to the core of this weird feeling, and it still feels too hard to describe, but it seems obvious to me that it plays a large role in the platform and tool choices we make.

2020-05-07 14:11:49 Martin Sosic:

Stefan Lesser I can back this up from my side for sure! Although I love programming/engineering and tinkering, when I need something to work, I will certainly prefer an opinionated solution with good defaults to configuring stuff myself. One part is certainly the time I would need to spend on configuring (though I enjoy that sometimes), but the biggest thing is certainly maintenance! Yes, if I configured it myself, it might work exactly as I want, but what about when it stops working? At that moment, I might not have the luxury of choosing when I will fix it; it might be urgent and critical. So for me, to put it simply, it is the cost of maintenance: the less I am responsible for, the more I can focus on what I want.

2020-05-07 18:36:30 Nicolas Decoster:

I like it when a system/tool allows you to configure things. But now I also place a lot of value on those with good defaults. And in the end I find myself seeking configurable tools with nice defaults. For example, I use zsh as my interactive shell, in which one can configure lots of things, and I chose it for that. But after installing it, I found the defaults were good enough for me, and I didn't have to tweak it much.

2020-05-07 20:09:19 Martin Sosic:

To add to this, I wouldn't be so sure that the majority of people prefer customization/extensibility vs good defaults/ease of use; it is just that most experts prefer customization/extensibility, due to them being experts who need more power/control, and experts are also much louder in the community since they have more to say, are building tools, and so on. It is hard to say to what degree this goes, but I am pretty sure there is some bias / skewed perception there. I remember Haskellers discussing something similar: Haskell appears harder than it is because most of the discussion online happens between senior Haskellers, so it covers rather complex concepts, and if you read Reddit posts, for example, you quickly get the feeling you will never be productive. But allegedly we are seeing just the top of the pyramid of Haskellers.

2020-05-07 20:24:53 Stefan Lesser:

Martin Sosic Not sure if that was in response to my comment, but just to clarify: what I meant, according to my anecdotal experience, is that I would assume more people in this particular community, and probably among programmers in general, prefer customization, while I also firmly believe that end-users prefer good defaults. Although I’d also argue that many end-users are not making that choice as consciously as programmers/people here do.

2020-05-05 11:12:05 Grigory Hatsevich:

Maybe it is a good idea to use zooming (ctrl + mouse wheel) to transition between various visual representations of the program. Zoom out — and you'll see the whole picture. Zoom in — and you'll see the code. Zoom in even further — and you'll see some details. Etc. I wonder how many different useful layers in between we can think of.

Are there programming environments which somehow implement this idea? Do you like this idea?

2020-05-05 11:20:55 Mariano Guerra:

a long time ago I saw a demo similar to that, I can't find it but I found this one which may be a good starting point to explore related work: https://www.youtube.com/watch?v=5JzaEUJ7IbE

2020-05-05 11:21:46 Jared Windover:

I believe TouchDesigner Pro uses this to support nested dataflow programs (at a low zoom level you see a node, and at a higher zoom level the node becomes its own graph). I think it's a good idea for visualizing recursive things.

2020-05-05 11:23:36 Mariano Guerra:

https://www.youtube.com/watch?v=62KcJ09k7cE

2020-05-05 11:23:45 Leonard Pauli:

YES! I really like this idea (in fact, I'm working on it :) ). Node-and-noodle software usually has the concept of double-clicking a node to open it up, revealing its subnodes/implementation. Though zooming is a bit more natural in certain cases. I have a combination of outliner/mindmap on an infinite canvas (translation + zoom), in which a node may be expanded as a list in the outline, or exploded to the side as a mindmap/graph, or fully moved into (becoming the current ctx/workspace). When expanded in the current ctx, its nodes become a bit smaller, thus allowing infinite expansion in the same view, resulting in a fractal-like zoom experience. The subnodes may be static logic or a dynamic result, and in the latter case they may define their rendering to a certain degree themselves, so making an actual fractal out of this wouldn't be too far off :)

2020-05-05 11:25:10 Mariano Guerra:

https://www.youtube.com/watch?v=G6yPQKt3mBA <- this one is close to the one I remember

2020-05-05 11:29:32 Chris Knott:

I really think ZUIs are underexplored.

If I ever get round to designing a UI it will be a ZUI combined with "search anywhere". I think these two together can combine the best of CLI and GUI.

2020-05-05 11:55:37 William Taysom:

For programs, semantic zoom (of various sorts) is what you want. And yes, ZUIs are underexplored.

2020-05-05 12:10:02 Stefan Lesser:

I really think ZUIs are underexplored.

Absolutely! For me this falls into the category of interesting opportunities in the space between 2D and 3D interfaces. It’s easy to be distracted by the possibilities of AR and VR in “true” 3D, although there is still so much more to explore in 2D (2.5D?) space.

Zooming is one of those interactions deeply rooted in the way we think. It has an embodied physicality to it, which provides a powerful metaphor that most people understand intuitively. This is why the touch gestures for it (pinch to zoom) are so easily remembered.

I’m sure there are many more ways for us to build upon the patterns (kinesthetic image schemas) that provide the underpinnings for most of our linguistic reasoning and build interfaces with them that can truly be “intuitive”.

2020-05-05 12:38:57 Grigory Hatsevich:

Mariano Guerra Oh my God, this Eagle Mode is so amazing. In this way you can keep so much more information in your working memory when transitions are continuous than in the usual way, where you discretely jump between contexts (windows, files, webpages, ...). I feel like I retain everything that was shown, because nothing disappeared abruptly; it just smoothly diminished, so I remember where to find it.

Thank you so much for showing it.

Why doesn't everybody here use something like this every day?

The other half of my amazement is at the experience of this question-answer interface: I imagined what I wanted to see, posted a question in the right place, and in minutes I got something wonderful and relevant. How much better an experience it is than googling. Can it be scaled so that thousands or millions of people can have such super-high-quality prompt answers to their questions?

2020-05-05 13:02:33 Grigory Hatsevich:

Maybe web browser should work like this? Zoom into a link instead of opening the link in a new tab.

That is, we can combine zooming with hyper-linking. Similar to when you browse a tree object with circular references: every object can have connections to other objects, and you can explore each link right here, in the current context.

2020-05-05 13:07:58 jeff tang:

I’m also a huge fan of ZUIs. And I do think the question "Why doesn’t everybody here use something like this every day?" is a good one. My guess as to why ZUIs are not more mainstream is that, though they can be quite effective for readers by leveraging spatial perception, they are rather difficult for makers of ZUIs. Essentially, making a ZUI is akin to cartography, but the digital world is ultimately unbounded (infinite canvas). Parent-child relationships can be mapped to containers and relative size, so something like Eagle Mode (which is awesome!!!) works really well since filesystems are a tree. But even then, there are many display and layout questions, like how many columns and rows per container.

On the application side, something like Project Xanadu uses a ZUI for relating documents and text to one another, but once you get to a certain number of documents, I think this seems pretty intractable as well. Spatial perception is a lot harder when the X-Y-Z dimensions are unbounded. This is just my guess though! What does everyone think?

2020-05-05 15:06:36 Grigory Hatsevich:

jeff tang It seems to me that an object tree with circular references is a pretty nice (and scalable) structure. One giant tree can hold all your data. Any object (node) can have links to all other objects (nodes) related to it in the form of key-value pairs (object properties). You can know and use the "real/absolute address" (e.g. shortest path from the root) of a node if you want, but you as well can start from any node and go "only down" — following the links which relate a node to all other things meaningfully related to it. And so on recursively.
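
As a sketch (Python dicts, made-up field names): nested key-value nodes whose values may point back at existing nodes, so every object is reachable both by an absolute path from the root and by local links:

    root = {"name": "root"}
    james = {"name": "James"}
    john = {"name": "John"}

    root["people"] = [james, john]
    james["son"] = john        # link "down" the tree
    john["father"] = james     # circular reference back "up"

    # Absolute address from the root...
    assert root["people"][0]["son"] is john
    # ...or start at any node and just follow links:
    assert john["father"]["son"] is john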

2020-05-05 15:40:26 Grigory Hatsevich:

A challenge with ZUI is how to see several nodes from different places of the same tree at once and select and manipulate them collectively. This could be a solution: imagine you can draw an arbitrary shape on your screen and this shape becomes an independent view into your tree: you just continue zooming in this shape, and only this shape changes its zoom level, while all other shapes stay at their current zoom levels. So in this way you split your screen into several independent views, zoom each shape to a needed level so that you see all the objects you need at once, then you can select these objects at once (ctrl+click) and then do some action on them collectively. It could also be convenient to automatically magnify (slightly resize) the area in which you are currently zooming — while other areas are still visible, just diminished slightly.

2020-05-05 15:40:59 Jared Windover:

Is a tree with circular references the same thing as a graph with a root?

2020-05-05 15:47:01 Grigory Hatsevich:

I mean object tree, like this:

2020-05-06 03:17:54 William Taysom:

Jared Windover yes, an object graph! Though Grigory Hatsevich is driving at a canonical spanning tree and the idea of zooming parts of it more than others.

2020-05-06 06:22:46 Ian Rumac:

"One giant tree can hold all your data": Grigory Hatsevich, yes, this! I call it a multidimensional tree, but this is what my project is all about, one single tree that is connected back and forth to everything. Though having it directly circular like the example means you'll have a hard time serializing and updating it; the real backing structure should not be circular by reference like that, but it's more or less it. Great to see people thinking similarly!

2020-05-06 08:24:18 Josh Cho:

Yeah, we need more non-binary input besides mouse position (e.g. a keypress is binary)

2020-05-06 08:42:32 Ian Rumac:

Adding a depth sensor to trackpad would be fun, getting an extra dimension in work.

2020-05-06 08:48:50 Mariano Guerra:

I think some already have a pressure sensor, or do you mean more like Project Soli? https://www.youtube.com/watch?v=Db9nDOCahO0

2020-05-07 00:07:23 Ian Rumac:

Yup, meant more like project soli

2020-05-07 00:07:34 Ian Rumac:

damn, forgot that exists in Pixels

2020-05-07 01:47:27 jeff tang:

Reading "Building the Memex Sixty Years Later: Trends and Directions in Personal Knowledge Bases" and they mention tree + graph as a data structure to consider 🙂

2020-05-07 07:53:13 Grigory Hatsevich:

jeff tang "Although graphs are a strict superset of trees...", object trees with circular references are actually no less general than "general graphs" because 1) each node can relate to any other node or to a group of nodes ("children:[child1,child2,child3]"), 2) there can be multiple kinds of relations ("son: value", "daughter: value", etc.). So I cannot imagine any relation which could not be expressed with object tree structure. Object tree is a graph which just has a default convenient structure, but is not limited by it. Please correct me if I am wrong in substence or in terminology.

2020-05-07 13:44:12 jeff tang:

"Object tree is a graph which just has a default convenient structure": I'm not an expert on this, but yes, I would say that an object tree is a kind of graph, since you can have cycles and bi-directionality

2020-05-07 13:58:18 Leonard Pauli:

Wait, soli wasn't an april fools joke??! AWESOME!

2020-05-07 13:59:59 Ian Rumac:

Leonard Pauli no, it’s running in prod 😄

2020-05-07 14:00:51 Ian Rumac:

If you have more than one "parent", it's a graph. A graph is just a perspective on a tree.

2020-05-07 14:01:02 Ian Rumac:

or vice versa 🤔

2020-05-07 15:27:54 Grigory Hatsevich:

Ian Rumac "if you have more than 1 parent" Thanks, I forgot to consider this case. Though I think it still can be handled with an object tree. There are two options:

  1. You can simply know that "child"/"son"/"daughter" relation is by its nature a multy-parent relation, so each time you see "child"/"son"/"daughter" property you should look at the referenced object and get information about reciprocal relation from there: in this case you look at "parents" property of the referenced object and get all other parents from there. Thus you got info about this multy-parent relation.

  2. If you require that it should be semantically indicated that a relation is multi-parent each time you look at this relation, you can implement this with object tree through the following convention: a relation value can contain not only the referenced object, but also some meta-information about this relation besides its name: John={name:'John', son:{multy-parent:true, referenceObject:James}}

Not sure if I am correct with terminology.
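
A rough Python rendering of the two options, following the John/James example (field names hypothetical):

    james = {"name": "James"}
    mary = {"name": "Mary"}
    john = {"name": "John", "parents": [james, mary]}

    # Option 2: the relation value carries meta-information marking it
    # as multi-parent, alongside the referenced object.
    james["son"] = {"multi_parent": True, "reference_object": john}
    mary["son"] = {"multi_parent": True, "reference_object": john}

    # Option 1: follow the reference and read the reciprocal
    # "parents" relation to discover the other parents.
    others = [p for p in james["son"]["reference_object"]["parents"]
              if p is not james]
    assert others == [mary]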

2020-05-08 00:26:36 Ian Rumac:

Yours all seem to involve muddling with the tree as an abstraction to support an array of parents, as far as I can tell. There are more ways to implement this: one would involve what I guess is a wrapper parent, but that dilutes the data; one would be to make a full circle with copies, but that expands the depth recursively; one would be to cross dimensions, but that is not for a normal tree. I assume you could do it in a parser by checking if a child of this node exists at depths above it, but that would/could get quite expensive and screw with bloom filters for basic operations. Graphs are a different perspective on a tree only because of the indication of what's a parent and what's a child.

2020-05-08 00:27:25 Ian Rumac:

Not even sure where I'm going with this; it's 2:30 AM and I was just reimplementing my multitree in JS, so I remembered to check this

2020-05-05 17:49:04 Ivan Reese:

Announcement:

🎄 I've spun up a new channel for #functional-programming. If that's an area of interest for you then be sure to join.

Note that this channel, like our other subject-specific channels (#end-user-programming, #graphics, #music, maybe others someday — see #meta) are intended for discussion by and for people who are sincerely enthusiastic about the subject. These are positive spaces, focused on studying and critiquing the ideas within the field, not questioning the field itself. If you'd like to call into question the merits of an entire field or practice, that belongs in #general...

...or perhaps in another community entirely if it's just a rehash of the same old tabs-spaces debate. After all, we all know that tabs are the supe-💨

2020-05-05 18:06:59 Chris Maughan:

On the subject of tabs/spaces, I heard recently that screen readers are easier to use with tab-indented documents vs spaces. It is the first argument I've heard for the merits of one over the other. Personally, I'm a spaces man, but if it is genuinely true, then I will have to reconsider... For this reason, I got around to adding tab support to my text editor.

2020-05-05 18:17:25 Duncan Cragg:

Spaces FTW. And Vim.

2020-05-05 18:17:53 Duncan Cragg:

We earn more apparently. Studies have shown.

2020-05-05 18:56:52 Ivan Reese:

The benefit of saying you use tabs is that people look at you funny, which is delightful. Meanwhile, you can actually use spac-💨

2020-05-05 19:02:34 Jared Windover:

I like to think that your use of 💨 means you just indent with dashes and have customized an entire workflow around this. As somebody who unironically turns on visual whitespace, I could get behind that.

2020-05-05 19:59:05 Edward de Jong:

In Beads I solved this long-running debate by preventing the use of spaces at the beginning of a line. You can only use tabs at the beginning, which solves a big problem Python has with interchanging code formatted with differing numbers of spaces per tab. After the first alphabetic character you can use spaces, of course. Works great, highly recommended.

2020-05-05 20:02:52 Edward de Jong:

Since I am the only one in this entire community who ever met John Backus (1973), I will leave the ancient field of FP to the fanatics. If they think a 50-year-old concept is state of the art, they are drinking the Kool-Aid. But he couldn't get anyone to switch out of either COBOL or his prior invention FORTRAN, so there are obvious problems with pure FP. I see a lot of languages adopting FP-like features to blunt its penetration. Deduction and declarative style trump FP anyway; that's where I am headed.

2020-05-05 21:19:36 S.M Mukarram Nainar:

tabs for indentation, spaces for alignment; though I'd like to see elastic tabstops used more

2020-05-05 22:10:26 Ivan Reese:

Also, vertical tabs. Also, diagonal tabs.

2020-05-05 23:10:23 Duncan Cragg:

I vertically-align my code cos I've got a touch of OCD. Plus it's easier to read 😄

2020-05-06 03:22:17 William Taysom:

Tabs inline, so that

    x              = f(whatever)
    x_             = g(x)
    something_else = q(x, x_)

lines up nicely. 🐺

2020-05-06 03:28:41 Shalabh Chaturvedi:

Folks we need a tabs v spaces channel.

2020-05-06 04:00:02 Doug Moen:

On spaces vs tabs: how about neither? I'd prefer a projectional editor that automatically formats my program to fit the current window width. Word processors can do this, and more: you can even insert diagrams and images into a document. Imagine the possibilities if we could use this 1980's technology for writing code, instead of editing virtual punch-card decks on our 4K retina displays.

2020-05-06 04:07:50 Shalabh Chaturvedi:

Totally on board with projectional editing! Inline diagram and image 'literals' sound pretty good to me.

2020-05-06 04:21:53 Edward de Jong:

I am planning for the world of proportional fonts. Fixed width is archaic; however, there is no default OS font that is proportional and any good with punctuation.

2020-05-06 05:02:37 Ivan Reese:

I'd like it to be possible to program in Hest without seeing any text whatsoever. That's one reason why it's a drawing tool.

2020-05-06 05:41:20 Edward de Jong:

Without text, complex expressions will be harder to understand. There are limits to comprehension without text. We learned this in the old days, because we had our trusty IBM flowcharting templates and their pads of flowcharting paper, and once the program got past a certain point of complexity, the flowchart became fairly useless. They tried making FORTRAN auto-flowcharters, but the output became a tangle of wires. Scratch shows that you can avoid textual forms, but once programs get larger, Scratch isn't so great. I do think that with zooming and encapsulation of sub-areas a reasonable compromise can be reached.

I had this template. So classy, with the sleeve and all, and the legend on the back. There has never been a classier company than IBM to my knowledge. HP was very good, but IBM at its peak was the pinnacle of technological excellence in the smallest details.

2020-05-06 05:50:25 Shalabh Chaturvedi:

I think the problem is often trying to extract a graphical model from a textual one. It might be interesting to see what we get if we start with a graphical canvas and visuals and see what kinds of things can be expressed (graphical first). I'm not particularly against text, but I'm interested to see where Hest goes. I like hybrid models, which are text arranged in various ways outside the 'linear sequence file' limitation. A good example is Subtext 2: the 'table' layout is much neater and easier to follow than a long series of nested conditionals.

2020-05-06 08:18:10 Duncan Cragg:

In Onex I have 2D layout with nested boxes for the outside structure of things; then inside the boxes there is text for labels, values, and rewrite rules, which themselves are nested lists that can/will be rendered as nested boxes (that bit I haven't done yet). So you end up with text only being used for single values or identifiers or symbols in the language. Everything else is structured as nested boxes.

2020-05-06 08:19:07 Duncan Cragg:

It's all saved and transmitted as normal structured text

2020-05-06 08:23:39 Chris Maughan:

My focus is on building graphical models from textual ones; but I am certainly interested in the reverse process, and thinking about it actively. Duncan Cragg are you planning on doing a 2 minute week video?

2020-05-06 08:33:15 Duncan Cragg:

Of course, but I'm not quite ready yet, plus I haven't ported my language from Java to C yet! So that may be a while. I can show the Object Network navigation GUI at least

2020-05-06 08:36:51 Chris Maughan:

OK, I'll look forward to it. My focus is on giving live coders visual feedback/information on the music/graphics that they are generating in code; hopefully to help understand why certain inputs give certain outputs, and enable greater creativity. But a logical next step is to let users tweak the visual representation of their code and have the full feedback loop.

2020-05-06 09:57:23 Duncan Cragg:

I have seen live coding generating music, and even though it's an amazing skill, I always thought a graphical UI would make more sense!

2020-05-06 16:49:53 Ivan Reese:

"Without text complex expressions will be harder to understand. There are limits to comprehension without text. We learned this in the old days."

I think there's a good reason that real engineering disciplines, like mechanical or electrical engineering, use symbol-based schematic diagrams to design and document their systems. In programming, the systems we design are universally small and simple (once you take out the 99.9% accidental complexity), so we should be able to get by with doing all our coding as static 2D colorless drawings on a drafting table. We should then be able to submit those programs to a compiler or archivist labeled, of course, using colored tabs.

2020-05-06 17:43:40 Stefan Lesser:

It’s just symbols, whether text or schematic diagrams (or mathematical formulas). I’d wager that which kinds of symbols are used within which community has a lot more to do with environmental factors like who gets to tell the stories and build the products that become popular and what sets of tools are available than with any property of the symbols themselves.

2020-05-07 00:02:46 Garth Goldwater:

i’d like to congratulate this community on successfully derailing the tabs vs spaces debate that started fomenting with the (IMO) much more interesting textual vs graphical/projectional debate

2020-05-07 00:54:55 Ivan Reese:

Yeah but.. coloured tabs vs coloured spaces. Syntax highlighting has had a blind spot for too long. And don't give me that "rainbow parens" nonsense. We all know lisp isn't a real art form.

2020-05-03 00:01:15 Unknown User:

MSG NOT FOUND

2020-05-06 04:59:33 Zubair Quraishi:

I think to be fair to Chris Lattner, he did say quite early on that he wanted Swift to be "everywhere", so Swift may be bad, but at least it is holding true to its promise of "trying" to be everywhere. The same could be said about a lot of other languages which are ubiquitous and have accumulated a lot of junk over the years. I guess languages are like all software products: everyone uses only 20% of the features, but it is always a different 20% than everyone else's, so the software "appears" to gain bloat

2020-05-06 05:22:21 Nick Smith:

Warning: shower thoughts lie ahead.

I've been struck with the nagging thought that maybe "FoC" general-purpose languages (targeting extreme accessibility) have to be designed in such a way that they can run on (modern) GPUs rather than just CPUs. Even integrated GPUs nowadays have a minimum of 200-400 general purpose cores (albeit with a slant towards certain operations). We don't really know how to use those cores effectively for general purpose tasks because we're normally trying to program them using a C dialect (e.g. via CUDA, OpenCL, GLSL...). It's the same problem we have with multicore CPUs: writing massively-parallelizable code in an imperative language is (too) challenging, and we've known for decades that we'll need to solve the problem eventually, since parallelization is the only way to scale computation once we're building circuits out of individual atoms.

Only languages based on constructs that are implicitly parallelizable are going to be able to target GPUs effectively whilst remaining highly accessible. The alternative is to ask the user to explicitly divide their computation up into parallelizable work units (threads/actors), which is an immediate complexity trap. Programmer-led task division doesn't scale, and it's a deep rabbit hole that can require a PhD to be done effectively.

Some people might argue that parallelization is a performance optimization and that most end-user apps don't need it, but I think there are many occasions where the ceiling of what's possible is just too low to offer a bright future. There are always occasions where someone comes up with a need like "I want to process this entire spreadsheet / note collection / webpage" or "I want to make a picture" or "I want to do a simulation / animation of my idea", and they want that processing to be interactive (implying instantaneous), at which point most serial languages can't handle what's being asked for.

So, must a new generation of accessible programming languages be based on implicitly parallelizable constructs and 400 cores? The hardware APIs we need (Vulkan, WebGPU...) are finally becoming available. We just need to utilize them half-decently.

2020-05-06 05:26:51 Nick Smith:

Non-parallelizable language constructs include call stacks/top-down recursion, and hierarchical data structures (anything based on unidirectional pointers and singular access paths).

2020-05-06 05:43:18 Shalabh Chaturvedi:

I agree. One problem we have is that our languages end up expressing a lot more than we need to. E.g. sequential imperative languages specify the order of execution even when it is not necessary (and then compilers get more complicated trying to unravel the data flow to optimize). Even with parallelizable constructs, I think we'd want to minimize the inter-'cell' communication for practical reasons.

2020-05-06 06:32:13 Nick Smith:

Yeah, sequences of instructions are a non-starter. I'm skeptical of the concept of "cells" too, though I'm not sure what you're thinking of. Cells = objects = actors (executing independently) are a form of explicit parallelization, even if you can justify them as representing parts of the problem domain (as OOP has always tried to do).

2020-05-06 06:49:14 William Taysom:

I'll agree, but maybe for a funny reason. At the level of most FoC projects, the execution model of the hardware isn't of primary concern. Most of us aren't aiming for portable assembly, but rather something confluent with people's ways of thinking about problems. Many things we want to express are non-strict: you want the computer to help you figure something out, but you don't care what order it computes things in, so long as it remains responsive while working.

2020-05-06 06:52:19 Nick Smith:

My perspective is: you can't be confluent with someone's way of thinking if you're asking them to express their thinking in terms of 400 parallelizable units (in order for their idea to be feasible to execute).

2020-05-06 06:52:45 William Taysom:

On the other hand, there are domains where you are describing a step-by-step process, be it card games or an assembly line. However, in these cases, the domain-specific imperative steps probably should not line up with some sort of CPU threaded execution mechanism.

2020-05-06 06:54:57 William Taysom:

Nick Smith the kinds of things that I've done which parallelize to 400 units are all of the form, "check all the combinations and tell me the best fit." And you don't even want the computers to check all the combinations because there's way too many.

2020-05-06 06:55:18 William Taysom:

I mean to actually check them all.

2020-05-06 07:29:04 Stefan Lesser:

Nick Smith Why do you see the solution to this problem at the language-design level and not at the library level?

2020-05-06 07:32:57 Nick Smith:

Stefan Lesser What’s the difference between a library and a language? If the library is offering parallel computation, then the library will need to offer a language (API) to express the computation. Libraries don’t help solve the problem.

2020-05-06 07:34:54 Nick Smith:

Also, libraries are usually designed to solve a specific problem (e.g. graphics, or matrix multiplication). If you want your arbitrary domain problems to be parallelised, you need to express them in terms of more general constructs. Those constructs are something a programming language is supposed to provide.

2020-05-06 07:48:04 Stefan Lesser:

Hmm… not sure I can follow. How can you solve an arbitrary problem with parallelization? Wouldn’t you have to know enough about it so it becomes domain-specific? At least specific enough to know if it makes sense to run it on GPU cores such that the setup costs are amortized?

2020-05-06 07:52:43 Stefan Lesser:

There are libraries today that do exactly that transparently based on context like the amount of data to be crunched, so you as the developer don’t have to make that distinction… so I guess my question is — slightly reworded — what benefits would a language offer over a library for an already existing language? It seems a lot of work to invent a new language if you can just import a library…

2020-05-06 07:56:12 Nick Smith:

If you run a whole app on the GPU then setup costs are less of a problem (from what I know). Setup costs are a problem when you try and do some tasks on the GPU and some on the CPU, and they have to constantly communicate. Setup costs are also higher in outdated APIs like OpenGL, and very low in newer APIs since the GPU program is compiled at app startup rather than mid-frame.

2020-05-06 08:00:31 Stefan Lesser:

Or at program compile time even. So do I understand you correctly that you want a programming language that does everything on the GPU?

2020-05-06 08:00:32 Nick Smith:

To have everyday computations be massively parallelizable they firstly need to be large enough that it matters. If your app is just a digital clock (incrementing a counter), then yeah, it can't be parallelized and it doesn't need to be. Once you have a computation large enough to be worth parallelizing, then automatic parallelization depends on having the problem expressed in a form that does not introduce artificial sequentiality (i.e. avoids instruction sequences, top-down recursion, and hierarchical data structures). What these constructs should be is an open problem. I have a hunch that relational/logic languages provide a good foundation and we need some novel ideas atop that foundation. That's what I've been focusing on for the last six months. I'm hoping to be able to share more at some point, if I find a promising path.

2020-05-06 08:02:25 Nick Smith:

Yes, I want to consider a language that does everything on the GPU that is possible with today's hardware.

2020-05-06 08:05:55 Stefan Lesser:

What these constructs should be is an open problem.

Sounds like the distinction between applicative functors and monads might be relevant to your investigations.

2020-05-06 08:08:09 Nick Smith:

I've been down the route of FP, and I know about those concepts, but I don't think they're applicable in any substantial way (pun intended).

2020-05-06 08:20:12 Stefan Lesser:

I was more talking about math than FP here, but it seems that might just make it even less attractive to you…

Regardless, there are (imperative) languages that take advantage of the programmer specifying something as applicative, i.e. "sequence of operations not important", to run it in parallel. And at the call site it looks just like your usual map over an array, just that it runs faster.

On the other hand I do think a functional language like Haskell is a good example to see what design questions are raised when a language restricts specifying operations where order is important in general, and only allows that with additional effort in form of monads and syntactic sugar like do-notation.

I’m genuinely interested in what you are thinking about and look forward to learn more about what you’ve been working on.

2020-05-06 08:23:47 Nick Smith:

It's definitely necessary to exploit associativity and commutativity of operations for parallelization, which I think covers what you're referring to.
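
As a toy illustration in Python: a sum can be split into chunks, reduced in parallel, and recombined only because + is associative and commutative, so no execution order needs to be specified:

    from multiprocessing import Pool

    def chunk_sum(chunk):
        total = 0
        for x in chunk:
            total += x
        return total

    if __name__ == "__main__":
        data = list(range(1_000_000))
        step = 100_000
        chunks = [data[i:i + step] for i in range(0, len(data), step)]
        with Pool() as pool:
            partials = pool.map(chunk_sum, chunks)  # chunks reduce in parallel
        assert sum(partials) == sum(data)           # order never mattered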

2020-05-06 08:25:40 Nick Smith:

Haskell isn't really orderless. It's built atop a foundation of recursion and hierarchical data structures. Even if its operations aren't specified as a strict sequence (a total order), those constructs impose (inessential) order.

2020-05-06 08:26:47 Nick Smith:

Thank you 🙂 I'll be reporting progress when I think I have a story to tell! (will try #two-minute-week at some point too).

2020-05-06 08:28:07 Duncan Cragg:

In Onex there are two forms of parallelism that can be exploited. There's the coarse-grained parallelism that you'd probably understand as the actors or agents or live objects. And there's the finer-grained term reduction familiar to FP folk, where you can reduce a tree by rewriting many branches at once.

Neither of which the programmer needs to be conscious of!

I mean, they will get the sense that the coarse objects seem to have their own animation, but that's what they'd expect given that that's how reality also works!

2020-05-06 08:29:52 Duncan Cragg:

So to return to your point, Onex programs can be parallelised without end user involvement, including on GPUs

2020-05-06 08:30:38 Duncan Cragg:

Oh, and I use Vulkan but haven't got a clue what I'm doing.. 😄

2020-05-06 08:33:11 Nick Smith:

Have you actually got it (the whole language) running on GPUs? GPUs don't handle tree-like data very well, from what I know.

2020-05-06 08:34:13 Duncan Cragg:

Soz, I meant that I use Vulkan for the UI, so in theory it would be easy to use the API for processing, but that's a long way off..

2020-05-06 08:36:01 Nick Smith:

Seems like a stretch to say that you can compute on GPUs then 🙂. Let me know if you make progress in that direction.

2020-05-06 10:31:03 William Taysom:

Friends, let's be clear about the old difference between parallelism (doing two things at once) and concurrency (coordinating activities that are happening at about the same time).

2020-05-06 13:01:09 Doug Moen:

There's been a lot of activity over the past few years in defining high-level GPU languages that address these issues.

• co-dfns is a data-parallel and functional dialect of APL that runs on GPUs. What blew my mind is that the compiler itself is written in co-dfns, and so you can compile your co-dfns programs into GPU code using the GPU. Prior to seeing this, I did not think that a compiler was the kind of program suitable for parallel execution on a GPU. Turns out that the choice of data structures is very important.

• Taichi is a DSL for defining fixed-height hierarchical data structures (which behave like sparse arrays). Thousands of lines of CUDA that only one guy in your organization understands can be replaced by tens of lines of Taichi code. It's very instructive to look at the GPU-specific data types and compiler optimizations exposed by the Taichi language.

• TensorFlow is implemented as a library that you call from C++ or Python. Using this library, you construct what is essentially a parse tree for a program using APL-like data-parallel operations. The library compiles this parse tree into GPU code and executes it.

Stefan asked why this is a language issue, not a library issue. Well, it's easier to write code directly in a language than to use a library interface that consumes source code written in another language and compiles it.
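
For example, the TensorFlow pattern above, sketched with the TF 1.x-style graph API (via tf.compat.v1; TF 2 hides the same machinery behind eager execution). The library calls only build the program graph; the session then compiles and runs it, on a GPU if one is available:

    import tensorflow.compat.v1 as tf
    tf.disable_eager_execution()

    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.matmul(a, a)       # nothing is multiplied here;
    c = tf.reduce_sum(b)      # we are only building the "parse tree"

    with tf.Session() as sess:  # compile and execute the graph
        print(sess.run(c))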

2020-05-06 13:05:33 Doug Moen:

My Curv language already compiles into GPU code, but it's not general enough to do everything I want. Support for hierarchical data structures is the next big thing (along with the ability to automatically generate compute shader pipelines). The use case will involve hundreds of cores traversing the same hierarchical data structure in parallel.

2020-05-06 13:35:51 Stefan Lesser:

Doug Moen co-dfns looks super interesting — thanks for sharing that! I hadn’t heard about it, although Raph Levien’s A taste of GPU compute is sitting at the top spot of my to-watch list and it seems I would’ve picked it up there soon… fascinating. I’d love to dig deeper into the data structures part — why is it that every project you’re currently not working on looks more attractive than the one you are working on?

2020-05-06 13:38:54 Stefan Lesser:

I also have a feeling that the C++ superset approach used for Apple’s Metal Shading Language isn’t the end of that story…

2020-05-06 19:31:23 Jamie Brandon:

Database folks have been attacking sql-on-gpu for a long time but there are no real successes yet. The core problem at the moment is memory bandwidth and latency. Getting data in and out of gpus is slow, and gpus aren't very good at branchy workloads so you inevitably need to do some stuff on the cpu. Often the result is that actual query execution is somewhat faster but the speedup is dwarfed by the time spent fetching the data.

Looking at games gives a good idea of the division of work - game programmers are among the most experienced at writing gpu-friendly code but they still typically choose to put game logic, AI, pathing etc on the cpu.

Also modern cpus are actually pretty wide if you write code that is friendly to out-of-order execution and memory pre-fetching, but modern high-level languages go almost out of their way to be hostile to both. My bet is that designing a language to reduce false data dependencies and allow for more sequential memory access is a more viable target than designing a general purpose language for the gpu.

This might change though in the next decade as new gpu designs offer to share general memory with the cpu, so switching back and forth becomes more practical.

2020-05-06 20:06:56 Scott Anderson:

One thing is that many programming models that are friendly to GPUs also run significantly faster on CPUs

2020-05-06 20:12:52 Scott Anderson:

Game code runs on CPUs, sure, but if you look at what Unity is doing with DOTS, or at ISPC, you'll see that on the CPU you can get orders of magnitude better performance. Most game code doesn't run on the GPU because the GPU is budgeted for graphics work, but more and more graphics (culling, sorting) and graphics-adjacent (animation, VFX) work that traditionally happened on the CPU is happening on the GPU

2020-05-07 00:12:24 Garth Goldwater:

the co-dfns author did a workshop on how he made trees parallel by construction that i will probably have to digest for a full year: https://youtu.be/lc4IjR1iJTg

2020-05-07 01:22:01 Nick Smith:

Doug Moen Thanks for all those links! You seem like you know what's going on in the GPU space, I'll have to tap your brain 🙂. And wow, I've never looked into Tensorflow because my eyes glaze over when people start talking about machine learning. I didn't realise it was a more general platform. Time to pay attention!

2020-05-07 01:42:16 Nick Smith:

Jamie Brandon Couldn't the fact that game programmers still do a lot of work on the CPU be explained by the following?

• It's hard to write massively parallel code in traditional languages
• GPUs are very hard to program, especially with pre-2016 APIs
• Not all work is slow enough to benefit from extensive parallelization
• Everyone thinks CPUs are "normal" and GPUs are "special"

None of those factors imply that GPUs would be a performance regression; they just imply that CPUs are the easy or "good enough" path. I mean... I've made some small video games before, and I never decided to use a CPU because a GPU wouldn't work. It's more that CPUs are the default choice. On the other hand, I can see how an SQL database would be a problem on the GPU, since you have to shuffle a ton of data around and disk/memory speed would limit any potential gains. I'm not personally interested in "big data" apps, thankfully, so I'm happy to presume my language only works with <~2 GB workloads. And yeah, I think "the future" is integration of CPU and GPU cores such that there is no perceivable communication overhead. It might be reasonable to take that as an assumption, if helpful.

2020-05-07 02:23:43 Doug Moen:

Nick Smith "I think the future is integration of CPU and GPU cores such that there is no perceivable communication overhead." So AMD has been pushing this idea for 8 years with their HSA architecture. If it is such a great idea, why are Intel and nVidia not doing it? An honest question: I know very little about HSA or about any engineering tradeoffs that might be involved. https://en.wikipedia.org/wiki/Heterogeneous_System_Architecture

2020-05-07 02:26:03 Nick Smith:

My first guess would be Nvidia doesn’t see itself as a CPU company and Intel doesn’t see itself as a GPU company. They’re focused on their own niches. AMD has an equal focus on both technologies.

2020-05-07 02:26:46 Nick Smith:

Of course all companies have crossed into both technologies, but not deeply.

2020-05-07 17:38:09 Scott Anderson:

Nick Smith see my response above: none of those things are true for the current state of the AAA game industry. Game engines are using (less) massively parallel code on the CPU by default; the main reason now is resource allocation, and once again many workloads that were CPU-only are moving to the GPU now

2020-05-07 17:38:24 Scott Anderson:

sure there is a lot of game code that is still single threaded

2020-05-07 17:39:16 Scott Anderson:

Also one thing about the memory issue is game consoles already almost universally have unified memory

2020-05-07 17:39:21 Scott Anderson:

and have had it for years

2020-05-07 17:52:03 Scott Anderson:

on consoles there often is very little communication overhead, consoles use AMD (or Nvidia with Switch) SOCs, there are still many reasons why you can't or don't want the CPU and GPU operating on the exact same data and you don't want the CPU waiting for the GPU or vice a versa. GPU and CPU don't share caches, GPU prefers large pipelined workloads to hide memory latency and wants to saturate bandwidth, etc. but many of those same rules apply to any multiprocessor system and are valid if you want good performance with multithreaded CPU code

2020-05-07 17:57:23 Scott Anderson:

Nick Smith Intel is working on a discrete GPU (that will actually ship, I think), so maybe that will change. I think Intel is more of a GPU company than Nvidia is a CPU company

2020-05-07 17:59:08 Scott Anderson:

There is also this idea that everyone has an integrated GPU sitting in their Intel CPU, and that thing is often idle if a discrete GPU is active

2020-05-07 18:02:22 Scott Anderson:

and sure it's under-powered, but it still has some compute capability that's non-trivial

2020-05-07 18:02:40 Scott Anderson:

it's under-powered compared to high-end discrete GPUs

2020-05-07 18:03:03 Scott Anderson:

also integrated GPUs on PC are UMA (unified memory architecture)

2020-05-07 23:14:23 David Piepgrass:

I haven't seen anyone mention Halide ... I don't know how to build a programming language where massive parallelism is easy, but Halide is the obvious starting point. https://halide-lang.org/

To a large extent I think the ball is in the hardware people's court. I remember seeing a proposal for a CPU architecture, can't remember where I saw it or what it is called... it was similar to SIMD but instead of the concept being "provide a bunch of instructions and hope developers use them" it was "run arbitrary C loops in parallel" - it was an architecture designed specifically to allow the vast majority of loops to "implicitly" run in parallel (meaning, the compiler would have to emit "vector" instructions as in standard SIMD, but the instructions themselves were more powerful than standard SIMD, enabling most loops to be automatically parallelized instead of the status quo where only a fraction of all loops can be automatically converted to SIMD form.) So, like, this needs to be a standard.

Meanwhile on the GPU side, there is a large physical distance and slow bus separating it from the CPU, as well as a separate memory pool... and GPUs are bad at running code that is serial in nature. If AMD/NVIDIA can come up with a hybrid architecture that is capable of running both parallel code and mostly-serial code efficiently, then it will become possible to "just compile your code for the GPU" (or in the case of JIT languages, "just flip a switch and it runs on the GPU"), and then the GPU will be a more popular target.

2020-05-08 03:17:07 Dan Cook:

This is a long thread I can't read through yet, so I risk repeating ideas, but here are my thoughts:

There are many projects here that consider using a DAG representation for code or logic. That's something that can be automatically parallelized (where the only blocks are dependencies). Except that each branch is probably not doing the same thing, so not SIMD.

Most loops can be replaced with map / reduce / filter / join / etc. (this is becoming a big thing in JavaScript, or LINQ in C#). If we keep heading that way, a lot of that stuff can be replaced with SIMD. Might not be possible wherever there are side effects, but some might be reducible to a formula?
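For example, the kind of rewrite being described, sketched in TypeScript (the data here is made up): the loop hides its structure in control flow and mutation, while the pipeline makes explicit that filter and map have no cross-iteration dependencies and that the reduce combiner is associative.

```
const orders = [
  { total: 120, shipped: true },
  { total: 80, shipped: false },
  { total: 45, shipped: true },
]

// Loop version: the parallelism is hidden inside control flow and mutation.
let shippedRevenue = 0
for (const o of orders) {
  if (o.shipped) shippedRevenue += o.total
}

// Pipeline version: filter and map are independent per element, and the
// reduce combiner (+) is associative, so the whole thing is a SIMD/parallel
// candidate in principle.
const shippedRevenue2 = orders
  .filter((o) => o.shipped)
  .map((o) => o.total)
  .reduce((sum, t) => sum + t, 0)
```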

For things that are sequential, maybe a model more like fields + particles, like Alan Kay has suggested before. Maybe that's actors all over again though? But maybe there's a way to make that easier to deal with than in traditional general purpose languages.

2020-05-08 05:47:41 Nick Smith:

Scott Anderson I just stalked your LinkedIn so I believe you know what you're talking about 🙂. It's good to hear that AAA studios are taking GPU compute seriously.

2020-05-08 15:16:56 David Piepgrass:

Dan Cook Well, Parallel LINQ (parallel map / reduce / filter / join / etc for C#) has been available for many years, but I don't generally feel like just using it all the time instead of standard serial LINQ. The problem is that often we're operating on a small collection, and the overhead of coordinating the behavior of even two threads is not worthwhile if the collection is smaller than, I don't know, 100 elements or something. Throughout my career I've spent most of my time dealing with many small collections with less than 5 or 50 elements each, and when I'm dealing with medium-size collections of 100-1000 elements, it's often a hashtable, sorted list or array that I'm using as a lookup table in a bigger calculation, rather than something I'm directly using with map/filter/reduce. (I do also process larger lists, it's just that the fraction of code doing highly parallelizable work is small.)

Sometimes one can rearrange lots of small collections into big arrays to allow more parallelism, but sometimes I can't think of a way to parallelize a lot of this work, e.g. algorithms on tree structures like Loyc trees (code) don't seem parallelizable, generally, although I did design my LES language intentionally to have a "context-insensitive" syntax, so at least you can parse all your files in parallel. For these kinds of workloads I really want hardware like I described, which can parallelize short loops efficiently.

2020-05-06 13:28:08 Stefan Lesser:

I’m still working my way through Crafting Interpreters and just came across a nice piece of content that many here might get a kick out of: at the end of chapter 23 hides a design note, which is a fascinatingly deep critique of Dijkstra’s Goto considered harmful. There’s nothing really surprising or substantially new in there, but I really like how Bob Nystrom argues about the complexities that hide in a paper that practically ended a certain language feature. It’s relatively short and you don’t need to read any other part of the book; it totally stands on its own. Here’s a teaser:

I guess what I really don’t like is that we’re making language design and engineering decisions today based on fear. Few people today have any subtle understanding of the problems and benefits of goto. Instead, we just think it’s “considered harmful”. Personally, I’ve never found dogma a good starting place for quality creative work.

http://craftinginterpreters.com/jumping-back-and-forth.html#design-note

2020-05-06 13:42:34 Chris Knott:

For imperative languages with side effects, a gosub is often clearer and more "honest" than function calls. There's some value in distinguishing at the language level between "instruction reuse" (subroutine) and "abstracted calculation" (pure function).

I've never considered a labelled goto an anti-pattern. Unlabelled line jumps, on the other hand, are definitely less clear.

2020-05-06 14:48:07 William Taysom:

"one of our tribe’s ancestral songs" — I often reflect on "Goto Considered Harmful" for the part: have the code of the program match the concept of what the program does, and the particular example of structured code (sequential statements, conditionals, and loops) for imperative (step-by-step) programs.

I think of that part so much, in fact, that I forget about the part that mentions GOTO at all. And in view of the GOTO bit, I suppose callbacks are more harmful in that you are now making the structure dynamic. (To say nothing of continuations.)

2020-05-06 15:22:21 Kartik Agaram:

This was great. Dijkstra's paper makes a reasonable case, but on balance may have actually done more harm than good with the flood of imitators and style guides seeking to ban language features purely out of fear. Many of the imitators even copied the 'considered harmful' phrasing, as an offering to be taken more seriously by the masses.

Then again, maybe I'm being unfair. Ever were humans prone to following rules without understanding why they exist. If it wasn't Dijkstra we'd find someone else to imitate.

Here's a little case I made for goto a few years ago: http://akkartik.name/post/swamp

2020-05-06 16:23:11 Martin Sosic:

While I agree that we shouldn't blindly forbid language features / programming techniques and therefore limit ourselves, it is great to have these kinds of pointers / style guides when you are a beginner in programming and need some kind of instruction on how to behave. Reasonable defaults / restrictions, so it is harder to hurt yourself. On the other hand, I would be very surprised to see an experienced developer who is still convinced that all those rules are untouchable -> naturally, as you grow, you start testing those boundaries and reevaluating them. I think it is up to us to learn to question everything, from time to time, at least a little bit.

2020-05-06 16:46:23 U010328JA1E:

I highly recommend Knuth's "response", "Structured Programming with go to Statements", which I find a way better piece than Dijkstra's. About dogma: fascinating how things could have been different if Niklaus Wirth (!) hadn't changed the title from "A case against the goto statement" for publication.

2020-05-06 16:48:35 U010328JA1E:

Dijkstra himself wrote something like (paraphrase): "regrettably, it became the cornerstone of my fame even just by the article's title". But imo, I don't think he was bothered too much about it 🙂

2020-05-06 19:35:37 Jamie Brandon:

Labelled continue/break are basically goto anyway and trying to fit certain control flows into that pattern just ends up being harder to read than a real goto.

I like Julia's approach of allowing labelled goto but only within a function.
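For reference, labelled break as it already exists in JavaScript/TypeScript; a minimal sketch (names made up) of the common "goto done" exit from nested loops:

```
// Scan a grid for `target`; labelled break exits both loops at once,
// covering the usual "goto done" pattern.
function findPair(rows: number[][], target: number): [number, number] | null {
  let found: [number, number] | null = null
  outer: for (let i = 0; i < rows.length; i++) {
    for (let j = 0; j < rows[i].length; j++) {
      if (rows[i][j] === target) {
        found = [i, j]
        break outer
      }
    }
  }
  // ...any cleanup common to both outcomes would go here...
  return found
}
```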

2020-05-06 20:18:47 Kartik Agaram:

Is there any extant language that allows goto into another function? Even C has that guardrail. Which makes it hard for us moderns to appreciate just what Dijkstra was arguing against.

Personally I think labeled break/continue is great! Is there any pattern it doesn't support?

2020-05-06 20:36:30 U010328JA1E:

Quickly checked https://riptutorial.com/julia-lang/example/15206/input-validation

States: "Although both examples do the same thing, the second (with recursion as opposed to GoTo) is easier to understand."

But I find it to be completely the opposite, and I'd think beginners would too. What do y'all think?

2020-05-07 00:21:35 Garth Goldwater:

goto reminds me of hyperlinks, so i’m partial to it. i also think the first example is clearer—but i’m also prone to liking things like the not-quite-deprecated-but-very-frowned-upon with statement in javascript, so i may just favor underdogs

2020-05-07 00:23:02 Garth Goldwater:

goto’s confusion IMO is a classic example of plaintext failing programming—extremely complicated hypertext fiction created by nontechnical authors suggests that better interfaces may make it relatively intuitive

2020-05-06 17:55:21 Stefan Lesser:

This seems to have the potential for quite some impact on the future of coding, doesn’t it? What do you think? https://github.com/features/codespaces

2020-05-06 18:36:11 Roben Kleene:

Frankly, it’s seeming more and more by the day that local development environments are living on borrowed time.

2020-05-06 18:36:48 Roben Kleene:

Obviously still a long ways to go, but movement in that direction seems to be picking up speed.

2020-05-06 19:10:22 Konrad Hinsen:

I wonder how much of a dependency on GitHub this new feature introduces. Can you still work 100% outside of GitHub on your projects? After all, GitHub could close down in a few years, like others did in the past.

2020-05-06 19:38:44 Roben Kleene:

While I agree 100% that that’s something worth worrying about, it also seems to me that the market in general has decided that they don’t care about that kind of risk.

2020-05-06 20:30:48 Kartik Agaram:

The market is made of people. All of us get a vote! And we get to change it as much as we want. So it's certainly worth discussing scenarios, so that we can influence and be influenced by each other's adoption.

2020-05-07 14:13:21 U012WT6NP2N:

Apparently it's not just github but a microsoft product : https://visualstudio.microsoft.com/fr/services/visual-studio-codespaces/?rr=https%3A%2F%2Fonline.visualstudio.com%2Flogin

2020-05-07 14:14:13 U012WT6NP2N:

But yes, as you said, your local dev environment has a bleak future.

2020-05-07 14:30:26 Mariano Guerra:

instead of local/remote we could push for a standard for "roaming" environments, of which this is one instance, and then local and remote are basically the same

2020-05-07 14:31:05 Mariano Guerra:

I would like to isolate my local environments as much as those in GitHub Codespaces; I want to version them, share them with coworkers, and fetch them on a new machine

2020-05-07 14:43:04 Roben Kleene:

I really like the framing of the goal as "roaming"/isolated development environments. Regarding Codespaces, does anyone know if you can ssh into your Codespace? And tangentially, does this model allow using any text editor (or any other program) besides Visual Studio Code? E.g., can you edit an image in Photoshop in the Codespace? I'm sure not today, but ever? Does the model allow for that?

2020-05-07 14:45:56 Mariano Guerra:

VS Code has a "remote" protocol built in; it uses that to connect to a container, as far as I understand. If the protocol were a standard, others could implement it (since VS Code is open source it shouldn't be hard to reuse), and since the env is a full environment it should be possible to ssh into it; I don't know if this service allows it

2020-05-07 14:47:18 Mariano Guerra:

https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.vscode-remote-extensionpack

2020-05-07 14:47:52 Mariano Guerra:

https://code.visualstudio.com/docs/remote/ssh

2020-05-07 16:02:54 Konrad Hinsen:

I like the "roam" label as well. Local vs. cloud is not really the important point. What matters to me is being in control. If next week I need to work offline, I want to be able to do that. If my cloud IDE provider goes bankrupt, I want to be able to move to a different one without loss or major effort.

2020-05-07 16:48:56 Roben Kleene:

I'd add to control also being able to use any apps with your source code. I love the idea of being able to spin up a development environment from anywhere, but I don't want to be forced to use VSCode to do so. (I use and love VSCode, but I still don't want to be forced to use it.)

2020-05-09 02:29:12 Doug Moen:

I use github, but I am wary of being tightly coupled to it. My project doesn't have a wiki, it has a 'docs' directory. It also has an 'issues' directory, although casual users are free to create github issues. For many years, I have done 'development from anywhere' by either bringing along a laptop with a dev environment, or by ssh-ing into my main dev machine from anywhere. If github were to instantly vanish with no warning, I would be fine, and that's due to the distributed nature of git, and the fact that my dev environment contains a local copy of all my code. Github would surely benefit by turning into a walled garden with so much crucial functionality that only exists in their servers, not on your local machine, that you are locked in and cannot escape. The more it looks like Github is heading in this direction, the more that some people in the dev community will resist, create alternatives, and migrate elsewhere. My personal future of coding is decentralized. Git is an amazing decentralized tool, but more can be done.

2020-05-09 06:46:43 Konrad Hinsen:

The best tool for a local-first approach is probably Fossil, which keeps issues in the repository itself, and therefore in every local copy. But being non-git, it will probably remain a niche tool.

2020-05-07 17:44:17 Tom Lieber:

Stephen Kell during Convivial Computing Salon Q&A: "[C's] concept of memory is bigger than the process… avoiding the denigration of the outside."

2020-05-07 17:46:23 Chris Knott:

Is it on Youtube? Annoyed I missed it

2020-05-07 17:48:34 Tom Lieber:

I heard Jonathan Edwards saying that he was updating the agenda spreadsheet to replace the Zoom links with recording links as they go along.

2020-05-07 17:56:20 S.M Mukarram Nainar:

Any chance we could get them uploaded to youtube or some other video site?

2020-05-07 17:56:36 Kartik Agaram:

Chris Knott I missed the first talk today because I transferred the time wrong to my calendar 😭 Extremely disheartening after spending much of my week thinking about this schedule.

2020-05-07 18:06:55 Shalabh Chaturvedi:

I have a counterpoint to Stephen's. Is the C memory model not an imposition of the C paradigm on anyone who wishes to interoperate?

2020-05-07 18:08:49 Shalabh Chaturvedi:

BTW, Kartik's talk coming up soon, right? I have 12pm Pacific.

2020-05-07 18:14:11 Tom Lieber:

I think it's the other way around. Memory exists, so we have "systems" programming languages that use it directly. But your point feeds into a thought I was composing…

Memory extending beyond a single process enables mostly just low-level bit munging. It seems to me that one mechanism for "connection over containment" (also from Stephen Kell's talk) would be rethinking what's "outside."

Standard "outside" today has std streams, the filesystem, the clock. But your language's objects probably don't extend beyond a single process. Your types probably don't (unless you're targeting a Lisp machine). BeOS's rich file attributes encourage you to put most of your program on the "outside."

2020-05-07 18:17:14 Tom Lieber:

Maybe everyone already knows this, and it's just that nobody wants to make an OS. 🙂

2020-05-07 18:25:08 Chris Knott:

Was it similar material to this? https://www.youtube.com/watch?v=LwicN2u6Dro

2020-05-07 18:36:51 Shalabh Chaturvedi:

Different. Here's the abstract. I think the slides and recording will be posted at some point: https://2020.programming-conference.org/details/salon-2020-papers/9/Convivial-design-heuristics-for-software-systems

2020-05-07 18:37:12 Shalabh Chaturvedi:

The C comment was in the discussion, afair.

2020-05-07 18:38:47 Shalabh Chaturvedi:

To follow Tom Lieber's thread - yes, inside vs outside is very important. All interplay is within some homogeneous and agreed-upon model. You can say this is 'bytes' but really that's also an imposition.

2020-05-07 18:50:57 Christopher Galtenberg:

Love this conversation - keep adding relevant materials on inside/outside as you can think of them

2020-05-07 19:13:41 Tom Lieber:

I agree, but I feel like being agreed-upon is still just part of the picture. C denigrates memory less than, say, filesystems, because everything's in memory: every variable is in memory, and even the code itself is in memory, so nearly the whole program is on the "outside," and it just happens to be an annoying outside for interplay. Filesystems extend beyond a single process, but the boundary between one process and another, by way of the filesystem, is nearly absolute.

2020-05-07 19:18:50 Tom Lieber:

I keep coming back to "connection over containment" too. Via memory, you can connect to any process, whereas a filesystem interface or network protocol or whatever is clearly containment.

2020-05-07 19:37:51 Shalabh Chaturvedi:

Could you elaborate on connection over containment? Is it about emphasizing composition?

2020-05-07 19:42:54 Tom Lieber:

IIRC, yes, as a design heuristic for convivial computing.

2020-05-07 20:02:40 Shalabh Chaturvedi:

Yeah I agree with that idea in general. I didn't see the whole talk, but did follow the questions so I missed some of the context.

2020-05-07 20:03:50 Shalabh Chaturvedi:

I think composition is one of the main issues, and it's very poorly solved. We have composition via byte formats, APIs, and other higher-level models. But it all seems too brittle.

2020-05-07 20:33:08 Shalabh Chaturvedi:

I would like to make a distinction between 'can connect' and the 'domain of connection'. At the lowest level we have electricity flying over connected wires and various interconnected micro devices which are compatible because they use the same voltage, etc. Just because we can connect two micro devices doesn't mean it will do anything useful.

Once we implement 'bytes' it becomes totally irrelevant what voltage and shape of wires are used to represent 'bytes as ram'. So you can switch out the voltage and entire circuitry with something very different and, as long as you can simulate 'bytes', it's still all good - because your program is dealing with 'bytes' and not 'voltage'. (In fact we have gone through many iterations in hardware, from vacuum tubes onward.)

But what about bytes and byte formats? To me they are just like voltage - not very important to the real goal. Just because one program can 'see the bytes' of another program (analogous to just wiring up random micro devices) doesn't mean it's useful. They have to agree on the byte formats. What do bytes transmit? Data structures which represent information. So for composition between two entities, they must agree, at least partly, on the shape of the information. If this is possible, they could negotiate and converge on one of many possible byte representations (or perhaps forgo bytes altogether in some other future computer).

Basically I'm arguing that byte formats are irrelevant and information models are where it's at; however, most of our composition models are deeply coupled with irrelevant details. The entrenched bytearray orientation via Unix/C is one culprit here.

2020-05-07 23:53:15 Tom Lieber:

I think it's important for conviviality, though, that if I want to make a debugger for your program (one of Stephen's examples), that it not be required for your program to implement a Debugging Communications Protocol. The convenient thing about the "memory" concept is that your program and all its state is in memory no matter how you write it. I don't mean to minimize the importance of information design; I'm just ruminating on how it's irrelevant without access.

Anyway, I guess this is why people like Smalltalk images so much. 😆 It's been dawning on me today that, while my usual gripe is how awful most desktop software APIs are relative to Acme's 9P interface, uniform access to all internal state by putting all internal state "outside" may be more important in the long run—with or without (ideally with!) your representation negotiation tech. Barely any web sites are extension-friendly, but they gotta use the DOM and that's where we get 'em!

2020-05-08 00:35:43 Shalabh Chaturvedi:

I think Basman asked about how it is possible to have a universal debugger, and the answer points to having pre-shared the concepts and formats (e.g. what a call stack is and how debugging symbols are encoded). I would go further and ask: what does debugging even look like if you don't use call/return functions, but constraint connectors (like Marcel's talk, not sure if you watched it)? Could GDB or a universal debugger even hope to make sense of bits arranged according to a completely different paradigm?

2020-05-08 00:39:46 Shalabh Chaturvedi:

That said, I strongly agree that if something maps to bits, those must be visible for conviviality and transparency. Further, appropriate lenses to view those bits more meaningfully must also be readily available.

2020-05-08 15:32:08 Tom Lieber:

No, I'm so bummed I missed it and that the recording's not up yet.

2020-05-09 04:10:45 Shalabh Chaturvedi:

Recording's up now. The question starts here: https://youtu.be/1ql__-f4rB4?t=2235

2020-05-07 18:38:43 Mariano Guerra:

This post seems to show the limitations of malleable systems: https://medium.com/diesdas-direct/notion-encourages-busy-work-and-im-tired-of-it-b1e049edb663

2020-05-07 18:55:25 Andy F:

interesting, I feel like a takeaway is that even when we have tools that remove as much accidental complexity as possible from software creation, there’s still the essential complexity of how to actually design the app well. Whether everyone can/should be an app creator is an open question

2020-05-07 18:56:11 Andy F:

Some people don’t have the cooking skill to cook a hot dog and that’s okay

2020-05-07 18:56:27 Jared Windover:

I read this before and had a different take. Caveat: I haven't used notion, and my project is a malleability project, so I'm bought in. The first two points (notifications and lack of design) actually seem like notion just isn't malleable enough. Notifications should be customizable based on the meaning of the way something is being used. Same with the graphical components. I want a sane default appearance and the ability to relayout things according to my preferences. As for the endless tweaking, I have two thoughts: it's good that people are able to customize things and explore new workflows, and it's bad that existing work isn't being discovered and repurposed. It sounds like reinvention (and some amount of incompatibility/churn) is the problem, more than the inclination to tweak things. Which is why I think any kind of malleability project should also have, like, an app store or component library, or some good way of sharing the work that has been accomplished.

2020-05-07 19:13:42 Steve Dekorte:

“Too often people can’t handle the great power Notion gives them: they frequently build the most sophisticated, most complex solution for a problem because that makes them look smart. “ A good description of the problem of software in general.

2020-05-07 19:38:35 Roben Kleene:

There are three levels to information management:

  1. Information is lost
  2. Information is disorganized
  3. Information is usable

The writer appears to be looking at the growing pains between 2 and 3, and asking why can’t we just go back to 0?

2020-05-08 13:51:20 Ryan King:

I'm trying to take my rigid project management system and make it something much more malleable, so the last point hits home with me. My biggest fear is that I'm dumping too much work on the user, and designing these systems is not their area of expertise. I'm hoping that I can provide enough structure to guide the user but also keep the system flexible enough for them to fulfill their needs.

2020-05-07 18:43:51 Chris Knott:

There's an (IMO) utterly cringe-worthy mobbing going on at the moment regarding a Github repo for the epidemiologists who produced an influential paper regarding Covid-19 https://github.com/mrc-ide/covid-sim/issues/165

2020-05-07 18:45:24 Chris Knott:

If you can wade through the insufferable brogrammers there's an interesting conflict going on between scientists, for whom code is a temporary tool, and software engineers, who are upset that they are doing it wrong

2020-05-07 18:48:00 Ivan Reese:

What's next? We'll learn that it's unethical for any software with an SLA to be programmed in a dynamically-typed language?

2020-05-07 18:49:14 Chris Knott:

One point of contention is that due to fairly complicated technical reasons, the code is not deterministic. The scientists point out that the entire simulation is stochastic, so what does it matter? When they are preparing their results they run it 1000s of times and just note the results somewhere (like - gasp - on paper maybe?!).

2020-05-07 18:51:04 Ivan Reese:

Wait 'til these programmers learn the meaning of the word "simulation". Oh, to see the looks on their faces!

What is a programmer? A miserable pile of secrets.

2020-05-07 18:51:30 Chris Knott:

Upsets me no end to see these guys basically bullied by trolls during what must be the most anxiety inducing period of their lives.

Very heartened to see John Carmack take a stand against the mob - https://twitter.com/ID_AA_Carmack/status/1254872368763277313

2020-05-07 18:53:08 Ivan Reese:

That was my fear — what if the code turned out to be a horror show, making all the simulations questionable? I can’t vouch for the actual algorithms, but the software engineering seems fine.

2020-05-07 19:02:58 Jared Windover:

The implication that all of these folks see their tests pass and “know” they’ve written correct code is also horrifying.

2020-05-07 19:13:24 Edward de Jong:

Unfortunately these models were weaponized as instruments of politics; the run of the model that predicted hundreds of thousands of deaths caused the UK to pivot abruptly into major lockdown, and on the day the lockdown was announced the model was rerun and the death estimate lowered to 20k. That is what I read somewhere. With 280 parameters to tweak, this model can be coerced into yielding any desired result, and as a predictive tool it is frankly worthless. Models like this are better for helping build understanding of the factors than for generating predictions. This isn't fluid dynamics, which at this point is extremely accurate.

The fallout from this politicization of modeling - releasing a subsequently retracted, non-peer-reviewed paper that had major consequences, with no chance for others to validate or double-check before decisions were made upon it - was a derailing of the normally calm scientific process. And the outrageous overestimates in the original paper, which were calculated to spur action, probably with foreknowledge that they were exaggerated, will tarnish computer modeling for a long time to come. Per The Body Victorious, a virus can "create 10^72 copies in 12 hours", so how does one model a process that is like an atomic bomb?

2020-05-07 19:16:32 Justin Blank:

I think there are a lot of interesting questions about the reliability of scientific code that responsible people in scientific computing struggle with, and make valiant efforts to improve.

And I think this github issue contributes nothing of value to that effort, public understanding, or anything other than some commentators’ sense of superiority.

2020-05-07 19:44:37 Ivan Reese:

"will tarnish computer modeling for a long time to come" - Or to frame it in a positive way: it'll help the public gradually learn that models are not fortune tellers, but rather tools to assist reasoning, and that the models themselves are more valuable than the individual results they produce. All people are going to gradually learn this, and that sort of societal literacy takes a long time to develop, and is not a clean and straightforward process. This is a good thing. We're going to have to go through the same process with ML, and it's going to be even messier. That's not a knock against ML or the people misusing ML. The cybernetic human is in its infancy.

2020-05-07 19:59:33 Max Kreminski:

from an educator’s perspective, this is one of the reasons i’ve been thinking a lot lately about how to teach reflective simulation design & simulation epistemology. one of the weird holes in computer science education right now is the lack of a critical pedagogy around modeling real-world phenomena in code. it’s my hope that giving people tools to craft simulations of phenomena they care about for themselves, and putting them in a workshop or studio environment where they can collectively critique one another’s models, might be useful for building this kind of simulation literacy – but i haven’t gotten a chance to put any of this into practice yet

2020-05-07 20:25:03 Chris Knott:

Edward de Jong This is somewhat off topic but just to correct some misinformation, their report was not changed nor retracted. You can read it here https://www.imperial.ac.uk/media/imperial-college/medicine/sph/ide/gida-fellowships/Imperial-College-COVID19-NPI-modelling-16-03-2020.pdf (see pg 13) - one of the team addresses this misconception here - https://github.com/mrc-ide/covid-sim/issues/175#issuecomment-625377867

2020-05-07 23:35:11 William Taysom:

Edward de Jong sounds like you're describing (what is it called?) math washing - using equations and algorithms to give the appearance of certainty. I don't see how we can ever have any math without the potential of misusing it. I guess it's just that the sooner we all know this, the better.

2020-05-08 00:14:52 Edward de Jong:

Ivan Reese made a terrific point; maybe the fallout from this lockdown, costing $20 billion a day in the USA, will be that people stop blindly obeying unverified computer models. People have always been gullible and believed soothsayers; the computer is the modern equivalent.

That they are claiming that 500k would have died had Britain not gone full lockdown is an unproven statement. In fact, social distancing as it has been practiced has very little evidence that it affects final outcomes. This 2020 study published in Nature (https://www.nature.com/articles/s41598-020-58588-1) on the flu virus shows that viruses become so prevalent in the atmosphere that they are basically unavoidable. That they overestimated deaths in the UK by more than a factor of 10 is, to the modelers, excusable, because it spurred action. But is that responsible science?

My objection is that exponential functions buried inside a formula almost always result in a runaway result, and from the Club of Rome onward, people have been predicting global catastrophes from famine, seas rising, ice ages, ice melting. Soothsaying merged with the computer holding the sign saying "the end is near".

2020-05-08 03:50:52 William Taysom:

Edward de Jong we need a term that incorporates "soothsaying".... Digital Soothsaying... Data Soothsaying... Algorithmic Soothsaying?

2020-05-08 07:33:18 Edward de Jong:

Machine learning soothsaying MLS, or Artificial Soothsaying, or Augmented Prediction

2020-05-08 13:24:22 Konrad Hinsen:

The one conclusion I draw from this event is that a crisis is not a good time for a serene discussion of how to do science, or how to make science-based policy decisions.

There is an ongoing discussion in many scientific disciplines about how to do better computational science. Software engineering is one aspect of this discussion, but only one among many. So far, different disciplines have adopted different priorities, for good reasons. I don't expect the debate to be settled any time soon, and I don't expect mobbing to be helpful in any way.

2020-05-08 21:54:43 Duncan Cragg:

https://twitter.com/duncancragg/status/1258876608758403072

2020-05-10 19:17:08 Mark Dewing:

Scientific results are usually more reliable than the code that produced them. This seems like a paradox, but it relates to code being a model that is just one input to the process of understanding.

2020-05-10 21:51:06 Ivan Reese:

Or the code is a measurement of a model (or of a model of a model). The measurement can be approximate or noisy while still confirming the clarity of the fundamental model. Consider Newtonian mechanics (a clear, but imperfect, symbolic model of nature) vs observations of the motions of planets (noisy) vs a miniature-scale physical model (approximate).

Good science can be built of rough data and imprecise recreation. Good science also doesn't need to be perfectly predictive, just helpfully predictive.

2020-05-07 20:00:06 Tom Lieber:

Kartik Agaram in his Convivial Computing Salon Q&A: Mu has a "barbarian ethos" in the sense that barbarians trade with nearby settlements for technology they don't have, similar to how Mu minimizes dependencies, but he's creating it on a Mac, in a text editor, etc.

2020-05-07 20:27:19 Ivan Reese:

That rocks.

(Sad I missed the live stream, but I only realized it was happening 15 minutes after it started. I hope there will be videos soon)

2020-05-07 20:30:59 Kartik Agaram:

Here are the slides.

I'll also paste the links I posted into the conference Slack: Paper: http://akkartik.name/akkartik-convivial-20200315.pdf Repo: https://github.com/akkartik/mu Compiler summary: http://akkartik.github.io/mu/html/mu_instructions.html Me: http://akkartik.name

You've seen them all, but it may be helpful to lay them out in one place.

2020-05-07 20:40:52 Chris Knott:

Why is it necessary to have SubX and Mu as different languages? Can't you just have a restriction of Mu with the same syntax?

2020-05-07 20:42:01 Chris Knott:

I understand the benefit in terms of making the implementation of the compiler/translator easier, but I don't see this as being a part of the system that users will really care about understanding

2020-05-07 20:42:24 Kartik Agaram:

Since Mu has to be safe, I have to disallow a lot of stuff that is allowed in SubX. Like goto 🙂

2020-05-07 20:44:06 Chris Knott:

What is the advantage of this representation: 8b/copy 0/mod/indirect 1/rm32/ecx 8/disp8 0/r32/eax

2020-05-07 20:45:25 Chris Knott:

why not registers['eax'] = memory[registers['ecx']]

2020-05-07 20:46:20 Kartik Agaram:

Relative to the binary: 1. It's text. 2. It raises errors immediately, and with error messages rather than just a segmentation fault.

Relative to conventional syntax: 1. It's easy to translate, and that means less stuff for you to understand if/when you end up wanting to understand the internals. Which is the whole point of Mu, to not leave you without a paddle.

2020-05-07 20:46:40 Chris Knott:

I used Python but hopefully you get the idea. I'm trying to draw a distinction between the physical fact that "registers" is an 8-entry dictionary with certain keys, and a particular way of writing information

2020-05-07 20:47:21 Chris Knott:

One is a compromise we have to make to the machine; with the other, I don't see why you "give way" so much to the binary representation, which only the machine likes, not humans

2020-05-07 20:47:40 Kartik Agaram:

I'm not sure I'm quite following you. If you like I'd love to do a quick call sometime. I'm on vacation all this week and weekend, so more flexible than usual.

2020-05-07 20:51:00 Kartik Agaram:

Mu's hypothesis is that trying to cater to humans (in the short term) has costs. So at lower levels it might reduce total cost to cater less to humans in the short term, and cater more to machines and to humans in the long term (by being easy to learn albeit with some initial hump).

For example, this notation is trying to avoid creating a new standard. It's so close to x86 that you can use an existing standard. And I can implement it with just machine code too, and the implementation can be relatively ergonomic. Perhaps a Python-like syntax can be implemented in x86, but it feels like it would be larger and more complex.

2020-05-07 21:14:11 Chris Knott:

I will compose my thoughts better then get back to you properly later in the weekend

2020-05-08 01:51:42 Kartik Agaram:

Video: https://us02web.zoom.us/rec/share/wpVrdKus2WNOfLPpxmrYYIATGtXuT6a80CIW_PoLnxkDirZvo7wNRE60OdEdT6np

2020-05-08 04:39:22 Ivan Reese:

You should do a top-level post, Kartik!

2020-05-08 21:32:20 Ivan Reese:

Anyone here work at Cycling74, or work on Pure Data?

2020-05-08 21:34:42 Cameron Yick:

I don't, but Cassie (maintains the p5 web editor) does: https://github.com/catarak

https://cassietarakajian.com/current-projects

2020-05-09 03:13:23 David Piepgrass:

Is anyone aware of a programming language or library that efficiently solves the problem of incremental reactive recalculations involving collections? I'm a fan of libraries that support reactive updates, such as Assisticant, KnockoutJS, MobX, Vue.js and SwiftUI, but I don't know of one that contains the algorithm I want. I'll explain the problem by example. Suppose you have:

  1. an "observable" list of a million items, and you insert or remove an item somewhere in the list
  2. a filtered list based on the million items showing perhaps a thousand of the items
  3. a projection of the filtered list (map/select)

So, when you insert or remove the item, the library should efficiently (and automatically!) propagate the change through the filtered list to the projected list. If the new or removed item is filtered out anyway, propagation should stop so the projected list is not notified of a change. Ideally, change notifications should be deferred in some way so that if several changes are made to the same list item in rapid succession, the derived items (2 and 3) would only be notified once.

2020-05-09 03:27:41 Ryan King:

I’m trying to solve this exact problem right now! I’m very close to trying to roll my own solution. I’m thinking of shifting from observables to a more decoupled event system. I find they are creating unintended side effects and are difficult to debug (and I need events for undo functionality). Anyway, let me know if you find a solution - or would like to help build one!

2020-05-09 04:09:02 Chet Corcos:

Are you looking for something in-memory or a database?

2020-05-09 04:36:12 Ryan King:

In memory

2020-05-09 06:42:49 Edward de Jong:

I am sure that in Excel, which can handle about a million rows before it tops out, they are using some paginated system, whereby if something changes in a chunk the whole chunk is rebuilt, but otherwise intermediate chunks with their subtotals are maintained. A lot of the code in Excel is about handling the scale of the databases that people are throwing at Excel. The lower-scale functionality is pretty easy, having written a spreadsheet before. It is very important to cluster the data, otherwise your CPU will almost come to a halt. This is a very tricky area that takes lots of testing to verify that your algorithm interacts well with the CPU. Since you have a very specific set of requirements you will need to roll your own; the conventional systems are not built for this kind of load.
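A minimal sketch of that chunked-subtotal scheme (an assumption about the general technique, not Excel's actual code; all names are made up): an edit rebuilds only the chunk it touches, and the grand total sums cached subtotals.

```
class ChunkedSum {
  private subtotals: number[] = []

  constructor(private rows: number[], private chunkSize = 1024) {
    for (let c = 0; c * chunkSize < rows.length; c++) {
      this.subtotals.push(this.rebuildChunk(c))
    }
  }

  // Recompute one chunk's cached subtotal from scratch.
  private rebuildChunk(c: number): number {
    let sum = 0
    const end = Math.min((c + 1) * this.chunkSize, this.rows.length)
    for (let i = c * this.chunkSize; i < end; i++) sum += this.rows[i]
    return sum
  }

  // An edit touches exactly one chunk; the others keep their cached subtotals.
  set(i: number, value: number) {
    this.rows[i] = value
    const c = Math.floor(i / this.chunkSize)
    this.subtotals[c] = this.rebuildChunk(c)
  }

  total(): number {
    return this.subtotals.reduce((a, b) => a + b, 0)
  }
}
```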

2020-05-09 11:46:41 Doug Moen:

I'm starting to investigate the use of Lenses to address this problem, but I don't have any concrete solutions yet. I just discovered this week the existence of a long running "bx" community (bidirectional transformation) who work on these problems. EDIT: I checked, and the BX people mostly use Haskell, so their code probably won't help you, due to their library dependencies.

2020-05-09 15:02:51 Ivan Reese:

I believe transducers in Clojure were designed to allow you to build data structures and operations that can be composed in this way.

But if perf is your main concern, it probably should be easy enough to roll your own in any language. Us front end devs have to do very similar things to efficiently render large table views in the browser.

2020-05-09 15:25:01 Dan Cook:

In BV's "Future of Programming", he talks about how memory and processors are made of the same material (transistors), so eventually it should all just be a bunch of processors.

I think this is the kind of application where something like that would work: where a bunch of units are constantly a function of others. So like if you could allocate arbitrary chunks of memory and "assign" an operation relative to some other chunk.

Maybe that's more like PLA than a bunch of "processors", but maybe that's the goal: being able to reduce pure functional operations down to programmed logic on the hardware that's constantly updating, without all having to be cycled single-file through a CPU.

2020-05-09 16:21:13 David Piepgrass:

Ivan Reese If it's easy enough, could you enlighten us? I've been thinking about this problem occasionally for several years and never fully cracked it

2020-05-09 17:20:12 Doug Moen:

transducers in Clojure are unidirectional. In David's context, you can transform the model to the view, but you cannot modify the view and propagate the changes back to the model.

Lenses are like transducers, except that they are bidirectional, which is needed for David's "reactive updates". That's why I was talking about "bx" or bidirectional transformation.

Lenses look easy enough (I'm going to try my first implementation of them soon enough). But the requirement is for efficient bidirectional transformation of a million items. Surely that requires more thought than the usual simple Lens implementation?
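For concreteness, a minimal get/set lens in TypeScript: just the standard formulation from the bx literature, with none of the law-checking or performance machinery a real library (or the million-item case) would need. The types and names here are illustrative.

```
// A lens pairs a getter with a non-destructive setter.
type Lens<S, A> = {
  get: (s: S) => A
  set: (s: S, a: A) => S
}

type Person = { first: string; last: string }

const firstName: Lens<Person, string> = {
  get: (p) => p.first,
  set: (p, first) => ({ ...p, first }),
}

// Composition is what lets a view lens reach into a nested model.
function compose<S, A, B>(outer: Lens<S, A>, inner: Lens<A, B>): Lens<S, B> {
  return {
    get: (s) => inner.get(outer.get(s)),
    set: (s, b) => outer.set(s, inner.set(outer.get(s), b)),
  }
}
```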

2020-05-09 18:38:34 Chet Corcos:

I mocked something out that should effectively do what you're looking for:

```
// Note: addToIndex/removeFromIndex are assumed helpers (binary-search
// insert/remove that keep the index sorted and return the entry's position).
declare function addToIndex(index: [string, string, string][], entry: [string, string, string]): number
declare function removeFromIndex(index: [string, string, string][], entry: [string, string, string]): number

type Paper = { id: string; subject: string; date: string /* ISO */ }

const collection: Record<string, Paper> = {}

// This is the query I want to index:
// filter for Nutrition items this year, range 20-40.
Object.values(collection)
  .filter((item) => item.subject === "Nutrition" && item.date > "2020-01-01")
  .slice(20, 40)

// First, let's translate this into a composite index.
const filterIndex: [string, string, string][] = [] // [subject, date, id]
for (const item of Object.values(collection)) {
  // uses binary search to insert in sorted order.
  addToIndex(filterIndex, [item.subject, item.date, item.id])
}

// Translate your query into subscriptions.
const subscriptions: any[][] = [
  ["date", "2020-01-01", () => { /* Update callback */ }],
  ["filterIndex", 20, 40, () => { /* Update callback */ }],
  ["subject", "Nutrition", () => { /* Update callback */ }],
]

function updateItem(id: string, update: Partial<Paper>) {
  // Emit on the old key-value because this will be removed from the result set.
  for (const key in update) {
    subscriptions
      .filter(([a, b]) => a === key && b === (collection[id] as any)[key])
      .forEach(([_a, _b, callback]) => callback())
  }

  const beforeIndex = removeFromIndex(filterIndex, [collection[id].subject, collection[id].date, id])
  Object.assign(collection[id], update)
  const afterIndex = addToIndex(filterIndex, [collection[id].subject, collection[id].date, id])

  // Emit on the new key-value because this will be added to the result set.
  for (const key in update) {
    subscriptions
      .filter(([a, b]) => a === key && b === (collection[id] as any)[key])
      .forEach(([_a, _b, callback]) => callback())
  }

  if (beforeIndex !== afterIndex) {
    // Emit an update for all listeners on filterIndex between before and after.
  }
}
```

2020-05-09 18:39:31 Chet Corcos:

This pattern is something I discovered when I was building a datalog prototype. Using reified indexes on your queries makes subscriptions a lot easier.

2020-05-09 18:39:34 Chet Corcos:

https://github.com/ccorcos/datalog-prototype/tree/master/src/shared/database

2020-05-10 01:04:48 Chet Corcos:

I guess my point is: it sounds like you want a reactive database.

2020-05-10 02:53:52 Edward de Jong:

In general it is not easy to make controls bidirectional. There are a host of ergonomic decisions that are asymmetrical, and I don't believe any general system could handle a wide range of widgets. I just posted a dual simultaneous temperature control sample, where you can set the temp in either of two ways, and it feeds back to the model, which then automatically re-renders the widgets. But you still have to manually code how you want the cursor to behave, the scaling factors for the mouse, and how you want to clip movement. User input is rather different from rendering a control. It is good, however, to try and make things as bidirectional as possible, as it would save a ton of code.

2020-05-10 03:54:17 Jamie Brandon:

https://github.com/TimelyDataflow/differential-dataflow/ is state of the art, although the UX can be iffy if you aren't up to speed with Rust.

https://opensource.janestreet.com/incremental/ is also good. It can handle updates to nested collections, unlike differential, but can't easily handle maintenance of nested loops.

I wrote a bit about applying similar ideas to UI - https://scattered-thoughts.net/writing/relational-ui/ - and a friend of mine is building something in the same family for javascript/react, not sure how mature it is yet - https://datalogui.dev/

I'm also working on a language that is designed to be efficiently incrementally maintained, although I haven't actually mapped it to the incremental layer yet - https://scattered-thoughts.net/writing/imp-intro/

2020-05-10 03:56:26 Jamie Brandon:

I also did a bunch of work for materialize.io, which is a proprietary SQL database that compiles down to differential dataflow. There's some interesting stuff on the blog about e.g. incremental maintenance of non-abelian aggregates.

2020-05-10 04:23:01 Ivan Reese:

David Piepgrass You're not actually looking for bidirectionality, are you? That seems like an additional wrinkle introduced by Doug Moen.

In the case of unidirectionality, you'd have some function responsible for insertion into the large list. That function also runs the filter on each newly inserted item, and if an item passes the filter, adds it to the filtered list too. If you're doing this in terms of some reactive library, the library should (caveat emptor) be smart enough to not re-filter the entire list on every change, and instead just apply the filter to new items.

To batch together rapid updates, you'd want a debounce or a throttle — high likelihood your reactive library of choice has those. Otherwise, you can just roll this yourself — you just need some way to schedule code to run after a certain amount of time has passed, and a place to store some intermediate state.

As for recalculating the projection, mapping should be just as easy as filtering — just do it to each new item as it comes in. Again, all the Rx libraries I've seen are smart about this.

Finally, for reducing, efficiency will depend on the properties of the operation: is it associative? Commutative? Approximate? Can you build a small intermediate result that's easy to incrementally modify and recompute? Etc. This talk gives a nice summary of some good strategies: https://www.infoq.com/presentations/abstract-algebra-analytics/

It's entirely possible I'm missing details that make this problem far harder than I'm imagining. But I feel like I've done this exact thing a handful of times, both with and without reactive libraries, so hopefully this helps somewhat.

One more bonus link — all the features you've asked for are documented visually/interactively here: https://rxmarbles.com/
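A minimal sketch of the scheme described above, with all names made up (not from any particular library): the filter and projection run once per inserted item, propagation stops when the filter rejects, and notifications are debounced so rapid changes collapse into one. Handling removal or in-place updates would additionally need a key-to-position index, along the lines of the reified index sketched earlier in the thread.

```
class FilteredView<T, U> {
  readonly items: U[] = []
  private timer: ReturnType<typeof setTimeout> | null = null

  constructor(
    private pred: (x: T) => boolean,
    private project: (x: T) => U,
    private onChange: () => void,
    private delayMs = 16,
  ) {}

  insert(item: T) {
    if (!this.pred(item)) return // filtered out: propagation stops here
    this.items.push(this.project(item)) // map applied to the new item only
    this.notify()
  }

  // Debounce: several inserts in quick succession yield one notification.
  private notify() {
    if (this.timer) clearTimeout(this.timer)
    this.timer = setTimeout(() => {
      this.timer = null
      this.onChange()
    }, this.delayMs)
  }
}

// Usage: a filtered + projected view over a large stream of inserts.
const view = new FilteredView<number, string>(
  (n) => n % 2 === 0,
  (n) => `item ${n}`,
  () => console.log(`view now has ${view.items.length} items`),
)
for (let i = 0; i < 1_000_000; i++) view.insert(i) // one log, after the burst
```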

2020-05-10 12:03:34 William Taysom:

Associative? Commutative? Approximate? — In other words, the qualities of the aggregation make the difference. Filtering, though a special kind of aggregation, can be easy or hard to recalculate, depending.

2020-04-17 18:21:07 Unknown User:

MSG NOT FOUND

2020-05-09 19:42:12 David Piepgrass:

Hmm, anyone else here always forget the syntax of INSERT and UPDATE because they are so different from each other? Doesn't seem like great attention to human factors in that case.

2020-05-09 21:35:08 Tom Lieber:

Yes! I’d be curious to know why they are different.

2020-05-10 01:06:32 Tom Lieber:

I’m finally reaching the end of the podcast episode about the community survey results and heard the suggestion for a “book club” where the books are FoC prototypes. Um… yes please?? Is anybody already organizing such a thing?

I ran a mass in-person user study of my JavaScript debugger Theseus in the form of a class to teach JavaScript, and though I mostly remember the bugs and technical gotchas of installing my largely untested software on many strangers' machines, it also led to incredible discussion and muuuuch better documentation, and I'd do it again in a heartbeat.

2020-05-10 08:03:58 andrew blinn:

I made the suggestion, and I've slowly been working up a reading list which is very specifically scope-limited to the structured-editing microcosm. I've been wrestling a bit with the issue of coordinating what I assume will be a sequence of baroque set-up processes around the non-web-facing projects. In that sense it might inevitably become more of a support group than a book club...

2020-05-10 11:40:06 Chris Knott:

Could you upload a ready to go VM to Google Drive or something?

2020-05-10 21:11:08 Tom Lieber:

Let us know if you need anything!

2020-05-10 18:47:29 Scott Anderson:

Nice block based environment for building simulations https://github.com/EiichiroIto/NovaStelo

2020-05-10 18:47:54 Scott Anderson:

It reminds me a lot of the work I was doing at Facebook for Horizon

2020-05-10 19:27:51 Christopher Galtenberg:

http://a9.io/glue-comic/ - bravo Max Krieger

2020-05-10 20:09:44 Max Krieger:

thanks! Props to everyone else in the Convivial Computing Salon and those who made it possible :) Jonathan Edwards Colin Clark