2020-05-10 22:08:19 Ivan Reese:

One as yet unimplemented plan is that abstraction will come from building "symbols", which are a collection of graphics / logic wrapped up into an isolated unit with a clearly defined interface — typical patcher stuff.

Each symbol instance will need its own sense of time. When looking at a particular instance, all child instances will execute "infinitely fast" to simulate continuous time semantics. All parent instances will execute discretely, only when necessary to satisfy the needs of the currently observed instance.
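
Roughly, the shape I have in mind (a sketch only; SymbolInstance, settle, and stepOnce are made-up names, since none of this is implemented yet):

```typescript
// Each symbol instance carries its own local clock.
interface SymbolInstance {
  children: SymbolInstance[];
  stable: boolean;   // true when this instance has nothing left to do
  stepOnce(): void;  // advance this instance's local time by one tick
}

// While an instance is being observed, its children run "infinitely fast":
// keep stepping them until they settle, simulating continuous time.
function settle(instance: SymbolInstance): void {
  for (const child of instance.children) settle(child);
  while (!instance.stable) instance.stepOnce();
}

// Parents, by contrast, would only be stepped discretely, on demand,
// when the observed instance needs something from them.
```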

The GUI, including things like pulling a point off an edge, would be the ur-parent of the simulation hierarchy. So if it were possible to run time backwards outside the GUI level, your editing interactions would gradually be undone, and that pulled point would hop back onto the edge right at the moment you had pulled it off (since the flow of time of the points-and-edges simulation happens inside the GUI).

Not sure I'll end up going with this plan, though. I have a stack of imagined "it would feel nice to __" experiences that guide my development. I'm trying each of them and keeping the ones that feel most simpatico. So we'll see!

2020-05-10 22:10:51 Ivan Reese:

Also —
> Moving time backwards no longer includes that point.
Maybe. Some rewind strategies I've played with have this behaviour. Some don't. Still trying to find the approach (likely a hybrid of strategies) that feels nicest to use.

2020-05-11 00:15:16 Kartik Agaram:

The major problem encountered in time travel is not that of accidentally becoming your own father or mother. There is no problem involved in becoming your own father or mother that a broadminded and well-adjusted family can't cope with. There is also no problem in changing the course of history; the course of history does not change because it all fits together like a jigsaw. All the important changes have happened before the things they were supposed to change and it all sorts itself out in the end.

No, the major problem is quite simply one of grammar, and the main work to consult in this matter is Dr Dan Streetmentioner's Time Traveller's Handbook of 1001 Tense Formations. It will tell you for instance how to describe something that was about to happen to you in the past before you avoided it by time-jumping forward two days in order to avoid it. The event will be described differently according to whether you are talking about it from the standpoint of your own natural time, from a time in the further future, or a time in the further past and is further complicated by the possibility of conducting conversations whilst you are actually travelling from one time to another with the intention of becoming your own father or mother.

Most readers get as far as the Future Semi-Conditionally Modified Subinverted Plagal Past Subjunctive Intentional before giving up: and in fact in later editions of the book all the pages beyond this point have been left blank to save on printing costs. -- "The Hitchhiker's Guide to the Galaxy"

2020-05-11 12:42:53 Chris Maughan:

It was a slow week this week. I found I didn't have my usual energy for after-hours projects, and I needed a rest this weekend. As is usually the case in such circumstances, I spent some time cleaning up code and refactoring; I often find such activities help keep the project momentum up while requiring less time and effort.

I have spent some time recently thinking about how to manage note events and synchronize beats and timing. It seems like a good idea to integrate Ableton Link at the outset, because on some level it helps drive towards cleaner management of such things. It is also really nice to have a built-in way for musicians to work together with a live coder; a USP, perhaps. I don't say it in the video, but the tempo of the audio is being controlled by the iPad drumming app, which the sample app and the synth are aligning with.

Another piece of ongoing work is the integration of Orca-c, which I've shown previously. This is an embedded Orca inside my text editor, and it forms one way in which a coder can drive the synthesizer (and later, I hope, some geometry/graphics too). Orca isn't the only way I intend to generate music, but it represents a relatively easy integration compared to music languages, etc. As I say in the video, there is still work to do to complete this feature.

2020-05-12 15:12:56 Garth Goldwater:

http://calca.io/ for any curious folks

2020-05-15 10:59:54 U0123H7JRDM:

Hi, in this very short 2nd introduction video I'll show you how to create a body mass index calculator with my flow-editor step by step. It also shows (very briefly) the new debug functionality to help "follow" the flow. https://youtu.be/oVNdm3JWE4g

2020-05-15 13:24:40 William Taysom:

Here I see one slider feeding into another feeding into an expression... so what do the arrows mean in this editor?

2020-05-15 13:30:47 U0123H7JRDM:

Do you mean the arrows of the connections between the nodes? That's the path and direction the flow follows when it is executed. In this example both sliders trigger the weightSliderTask node: each slider node has an "onChange" property, which here is assigned to the weightSliderTask node.
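
Roughly, the wiring looks like this (a sketch; apart from "onChange" and weightSliderTask, the names are made up for illustration):

```typescript
type NodeName = string;

interface FlowNode {
  name: NodeName;
  onChange?: NodeName; // node to trigger when this node's value changes
}

// Both sliders point their onChange at the same task node, so moving
// either slider re-runs the flow starting from weightSliderTask.
const weightSlider: FlowNode = { name: "weightSlider", onChange: "weightSliderTask" };
const heightSlider: FlowNode = { name: "heightSlider", onChange: "weightSliderTask" };
```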

2020-05-15 14:02:26 U0123H7JRDM:

The tech stack on the frontend, for those interested: mainly react, redux, react-konva, victor, rxjs, bootstrap, material ui slider, immer and a custom service worker running the flowrunner

2020-05-15 16:06:20 Ryan King:

I find it a bit strange that weight flows into height when no data flowing from weight will change height in any way. Do you think it would make more sense for both sliders to feed into the expression in parallel?

2020-05-15 16:21:23 U0123H7JRDM:

I see your point. From my perspective the current approach makes sense because you let the payload grow as it progresses through the flow; from a user's perspective that might make less sense. I've got support for parallel flows already, so I'll see if that will work here. Thanks for the feedback!

2020-05-15 16:28:51 Chris Maughan:

My synthesiser project works in a similar way - multiple notes and generated channels of audio data flow along the pipeline, sometimes combining and splitting (like when the notes are combined into a stereo stream or when a pair of oscillators go into a mixer). There is a definite ‘flow direction’.

2020-05-15 16:32:11 Ryan King:

Maybe there's a way to indicate the data being collected as it flows down the pipeline. If we could see it, I believe it would make more sense.

2020-05-15 16:39:51 Garth Goldwater:

i understand that the visualization is a work in progress—I’m more excited by exactly how little you have to type to get these things connected together. so much of traditional UI programming depends on essentially “watch” statements

2020-05-15 16:46:09 U0123H7JRDM:

Yes, it's a work in progress, but there's already much more to show than I have so far (2 minutes is very short, of course ;-) ). Maybe I'll prepare a longer video and post it in #feedback; I'll think about it.

2020-05-15 16:50:14 U0123H7JRDM:

The way parallel flows currently work: there's a special start node and a parallel-resolve node. The start node can split into multiple flows, and the resolve node waits until they have all run and then passes all payloads through to the next node.
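
In TypeScript-ish pseudocode, something like this (illustrative only, not the flowrunner's real API):

```typescript
type Payload = Record<string, unknown>;
type Branch = (p: Payload) => Promise<Payload>;

// The start node splits the flow into parallel branches; the parallel-resolve
// node waits until they have all run, then passes every payload to the next node.
async function startAndResolve(
  payload: Payload,
  branches: Branch[],
  next: (results: Payload[]) => void
): Promise<void> {
  const results = await Promise.all(branches.map(branch => branch({ ...payload })));
  next(results); // resolve: all branches finished, forward all payloads
}
```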

2020-05-15 17:02:36 Chris Maughan:

Have you considered a directed graph where you 'pull data' from the far end? So that parallel resolve will happen because the dependent node will evaluate the nodes further back until all inputs are up to date.

2020-05-15 17:08:00 U0123H7JRDM:

I've got a special type of connection called "injection" which sounds like that, and I was actually thinking about it as an alternative. The current implementation of it is rather limited, though; it's just one level deep instead of going further if there are more nodes connected.

2020-05-15 17:12:28 U0123H7JRDM:

This is the current parallel solution. Both sliders trigger the first node when a slider is changed in the UI; this is specified in the 'onChange' property of both slider nodes (you can see it in the video).

2020-05-15 17:53:59 Chris Maughan:

I see, so does your diagram imply that you only have a single input/output for each node?

2020-05-15 17:54:13 Chris Maughan:

(at least in terms of flow...?)

2020-05-15 18:18:23 U0123H7JRDM:

Each node can be triggered by multiple nodes independently (the parallel example is an exception to that). The payload the node receives determines the input parameters/properties, and not all properties need to be handled by the node; most of them are usually just passed on to the next node. A node can also trigger multiple output nodes.
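
A sketch of that pass-through behaviour, using the BMI example from the video (names illustrative):

```typescript
type Payload = Record<string, unknown>;

// A node reads only the properties it needs, adds its own result, and
// forwards everything else untouched, so the payload grows as it
// progresses through the flow.
function bmiExpressionNode(payload: Payload): Payload {
  const weight = payload["weight"] as number; // kg
  const height = payload["height"] as number; // m
  return { ...payload, bmi: weight / (height * height) };
}
```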

2020-05-16 10:05:24 U0123H7JRDM:

The current system is probably limited in its applications because it doesn't have "control inputs". I think I could implement these by adding observable properties in a task; I am going to explore that idea further. Chris Maughan, how is this implemented in your application?

2020-05-16 11:15:53 U0123H7JRDM:

In the meantime I want to share another screenshot to show a bit more of the other applications of the flow-editor: here the flow-editor is used to create a flow which controls a react-native app. When you press save in the editor, a json file is stored on the local file system, which triggers a rebuild of the react-native app. The flow contains the navigational structure, the form definitions, and the calculations. The flow-editor runs locally here (using node.js).

2020-05-16 20:42:51 Chris Maughan:

Hi U0123H7JRDM, I think my approach is quite different to yours. Essentially I have a directed graph. Once per frame, I read the output node(s) and they try to 'compute'. First they look at their inputs, and if any of them are 'flow data' or 'control data', they are checked to see if they are current. If not, the graph walks further up and evaluates until the node inputs are current. Then it can be computed. So it's very much a 'pull' architecture. I have 3 basic types of 'pins' on my nodes:
• Flow Data. This is like a big bundle of data arrays. The node is expected to work on all data streams it receives in each compute step. For music, these arrays are typically buffers of audio data, often with 1 buffer per note of polyphony.
• Control Data. These are also bundles of arrays, but typically contain control information, such as modulation curves. It is just convenient for me to separate the concept of control from data.
• Parameters. These are just data values such as float, int, etc. They can be connected to other nodes, but they are not 'evaluated' to satisfy graph compute. They are considered 'always valid', but that doesn't mean that the node won't walk back up the connection chain of the parameter to find the source value.
It should be considered prototype/research code, but you can see the graph Compute function here: https://github.com/cmaughan/nodegraph/blob/master/src/model/graph.cpp
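
In TypeScript-ish pseudocode, the pull step looks roughly like this (the real version is the C++ Compute function linked above; names here are approximate):

```typescript
interface Pin {
  source?: GraphNode; // node this pin is connected to, if any
  current: boolean;   // is the value on this pin up to date?
}

interface GraphNode {
  flowInputs: Pin[];
  controlInputs: Pin[];
  compute(): void; // fills this node's outputs from its inputs
}

// Reading an output node pulls evaluation back up the graph: any flow or
// control input that isn't current forces its source node to evaluate first.
function evaluate(node: GraphNode): void {
  for (const pin of [...node.flowInputs, ...node.controlInputs]) {
    if (!pin.current && pin.source) {
      evaluate(pin.source);
      pin.current = true;
    }
  }
  node.compute();
}
```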

2020-05-17 08:13:19 U0123H7JRDM:

Hi, thanks for sharing your approach! Summarizing: the main difference between our implementations is that in yours the whole graph gets evaluated each frame, while in mine a node only gets triggered by an event (which can be external). That can be dependent on time, but certainly not necessarily; most of the time it's user input that triggers a node, or other nodes triggering other nodes. Both our graphs are directed, I think. The way the graph gets evaluated is also different: the pull architecture you describe is very different from mine, which is forward-directed. This discussion about our approaches is very interesting; I am going to think about it some more.

2020-05-17 09:09:31 Chris Maughan:

Yes, you have it right I think. Except to say that nodes in my graph that aren't required won't wind up getting evaluated; i.e. if they aren't part of the dependency chain of the output node, then they will not evaluate, and if a node is 'current', it doesn't need to do any work. I don't allow loops in the network; i.e. a node will not evaluate twice per graph evaluation. For this reason, I have a generation number on each node which can track the global generation and avoid repeat computation. It makes the evaluation logic quite clean I think.
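
The generation guard is roughly this (an illustrative sketch, not the real code):

```typescript
let globalGeneration = 0;

interface GenNode {
  generation: number; // generation in which this node last computed
  inputs: GenNode[];
  compute(): void;
}

// A node that already computed in this pass is skipped, so nothing
// evaluates twice per graph evaluation.
function evaluateNode(node: GenNode): void {
  if (node.generation === globalGeneration) return; // already current
  node.generation = globalGeneration;
  node.inputs.forEach(evaluateNode);
  node.compute();
}

function evaluateGraph(outputs: GenNode[]): void {
  globalGeneration++; // new pass: every node is stale until it computes
  outputs.forEach(evaluateNode);
}
```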

2020-05-17 09:12:11 Chris Maughan:

I am probably quite influenced by the Maya directed graph approach, which was my first exposure to such an idea; though I don't implement the push/pull architecture that they do.

2020-05-15 12:20:33 Mariano Guerra:

Instadeq Week in Two Minutes #5: Form builder cell to create forms that emit output on change and submit, register and reload providers dynamically and allow services to specify configuration parameters that appear in the UI: https://youtu.be/Bk8SBjTRJLg

2020-05-15 13:05:17 U0123H7JRDM:

Do you work on this full-time? Great progress in a week's time, I think!

2020-05-15 13:50:47 Mariano Guerra:

yes, I work full time

2020-05-15 13:53:56 U0123H7JRDM:

That's awesome! My own project is still a side project. What's your tech stack on the frontend?

2020-05-15 13:55:07 Mariano Guerra:

react, bootstrap, immutablejs, voca, draftjs, echarts

2020-05-15 13:59:20 U0123H7JRDM:

Cool! I don't know voca and echarts so I'll look into those

2020-05-15 14:00:53 Mariano Guerra:

voca is just string functions, echarts is just if you want charts 😄

2020-05-15 16:21:52 Ryan King:

I don't have much input on this besides that it looks like you're making good progress. I do know of a few smaller companies that need a super simple self-setup data querying system like this, but a big thing for them is collating the data into nice reports for upper management. Do you plan on helping people produce reports/documents from their queries, or are you just looking at the data analysis side of things?

2020-05-15 22:26:31 Mariano Guerra:

reports and documents are the end goal; it's just that we know how to solve that part, so I'm not showing it much and am leaving the final touches to the end 🙂

2020-05-15 22:27:17 Mariano Guerra:

this is our current product: https://www.youtube.com/watch?v=LK9Lfo4dFLU

2020-05-15 15:59:32 Ryan King:

Hey gang, not much of an update this week. Just documenting my struggles between wanting observable state and needing a centralised system to manage events. I tried to build my own system in a few days but soon realised there's a lot more I need to think about - and maybe there are some existing approaches you all know about that I could look into?

In general, I'm trying to update state solely via a centralised event manager, but also to use observables to automatically trigger those events (if that makes sense).
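
Roughly what I'm picturing (names are illustrative):

```typescript
type AppEvent = { type: string; payload?: unknown };
type Listener = (e: AppEvent) => void;

// The single, centralised place all state changes flow through.
class EventManager {
  private listeners: Listener[] = [];
  subscribe(listener: Listener): void { this.listeners.push(listener); }
  dispatch(event: AppEvent): void { this.listeners.forEach(l => l(event)); }
}

const events = new EventManager();

// An "observable" field never mutates state directly; its setter just
// dispatches, so the event manager remains the single source of change.
function setField(name: string, value: unknown): void {
  events.dispatch({ type: "field/set", payload: { name, value } });
}
```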

A messy sandbox you can play with can be found here: https://codesandbox.io/s/eve-test-04e75?file=/src/App.js

https://youtu.be/Hw4BgyWRAAA

2020-05-15 16:13:17 Mariano Guerra:

have you considered the elm architecture or similar? https://guide.elm-lang.org/architecture/

2020-05-15 16:16:11 Mariano Guerra:

your demo has really fine-grained event generation, maybe a coarse-grained one would do? like transactions?

2020-05-15 16:29:36 Ryan King:

Yeah, it looks like I'm going for an elm-ish messaging architecture. I think I'm creating issues by having objects both emit and listen to events. Maybe all updates should be dispatched via message...

I was also wondering how much detail to go into in the events. I suppose it's a matter of exposing what data is most likely to be listened to; otherwise you'd just need to filter it out later, right?

Will definitely look into transactions! I've never even heard of them!

2020-05-15 16:38:36 Mariano Guerra:

what I mean by that is that you bundle a group of mutations into a named thing and emit an event with the name of the thing and some extra parameters, much more semantic than emitting events every time you mutate a field
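
something like this (an illustrative sketch):

```typescript
type Mutation = { field: string; value: unknown };
type TransactionEvent = { type: string; mutations: Mutation[] };

const listeners: Array<(e: TransactionEvent) => void> = [];

// Bundle a group of mutations into one named, semantic event rather than
// emitting a separate event for every mutated field.
function commit(name: string, mutations: Mutation[]): void {
  listeners.forEach(l => l({ type: name, mutations }));
}

// One "measurements/updated" event instead of two field-level events:
commit("measurements/updated", [
  { field: "weight", value: 80 },
  { field: "height", value: 1.8 },
]);
```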

2020-05-15 16:42:07 Garth Goldwater:

you might be interested in microstates: https://github.com/thefrontside/microstates

2020-05-15 17:32:54 Ryan King:

Mariano Guerra oh yes, I think something like that will be needed, but it might need to be in combination with the microevents so derived values know when to update.

Garth Goldwater 🙌 looks super interesting, thanks! I look forward to seeing what it can do!

2020-05-15 17:35:13 Garth Goldwater:

the youtube talks listed on the readme are super clear (to me at least)—might be worth checking out

2020-05-15 17:35:36 Garth Goldwater:

fairly react-specific most of the time but i linked to the framework without the react integrations

2020-05-15 20:17:35 U0123H7JRDM:

What's different in your scenario from the problems redux solves? Mobx is also a widely used alternative

2020-05-15 20:59:52 Ryan King:

Yeah, I basically want a cross between the two. I currently use something like mobx (https://github.com/alloc/wana) and I really like that API, but something redux-like would be easier to manage as the app gets larger. However, I don't enjoy using redux at all; there's a lot of boilerplate, and I find it a bit cumbersome. So this is an attempt at a happy medium between the two.

2020-05-15 21:07:05 Ryan King:

It's all trying to solve the same problem - I'm just playing with ideas to find what requires the least cognitive effort for me as a programmer.

2020-05-17 07:55:21 Edward de Jong:

Ryan King you might consider looking at my Beads language. It has three aspects that relate to your work. It is a clean-sheet approach and emits raw JS with no external dependencies such as React, etc. You declare a model as a graph database schema that will be filled in with data later. Then you write chunks of code that draw your screen using the model data in a pure manner (the view). The controller chunks are blocks of code appended to the view drawing subroutines. There is also the notion of a derived quantity, which is lazily evaluated when it is referenced.

The key advantage over your method is that if a view drawing function uses model variables a, b, and c, and any of those 3 variables change their value, then the draw chunk is re-executed. This is all tracked automatically without the programmer having to declare dependencies, similar to how spreadsheets work, except that this is about re-executing drawing functions with memoized parameters, which is quite different from simple formulas being executed in topological order. It has a central event system called the Loom, which combines publish/subscribe, network, keyboard, mouse, and timer events into one unified event stream that is fed to the constellation of code chunks, based on their appropriateness for that event.
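
In TypeScript terms (not Beads syntax, just an analogy of the automatic dependency tracking):

```typescript
const model: Record<string, unknown> = {};
const dependents = new Map<string, Set<() => void>>();
let currentDraw: (() => void) | null = null;

// Reading a model variable inside a draw chunk records the dependency.
function read(name: string): unknown {
  if (currentDraw) {
    if (!dependents.has(name)) dependents.set(name, new Set());
    dependents.get(name)!.add(currentDraw);
  }
  return model[name];
}

// Writing a model variable re-executes every draw chunk that read it.
function write(name: string, value: unknown): void {
  model[name] = value;
  dependents.get(name)?.forEach(draw => draw());
}

// The first run of a draw chunk records which variables it reads.
function track(draw: () => void): void {
  currentDraw = draw;
  draw();
  currentDraw = null;
}
```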

2020-05-16 20:27:22 Chris Maughan:

Here is my weekly update. Apart from some work on the internals to better represent time events, I spent some time writing a little visualiser to help me understand scheduling. I hope it will also end up being a nice additional tool for the end user to see what's going on. I really need some full days to work on some of the more interesting problems, and an hour or so a day (my usual schedule) is not cutting it at the moment :(

2020-05-17 06:49:12 U0123H7JRDM:

What is your end goal? Do you want to be able to develop VSTs with your engine which can be used in DAWs like Ableton, or something more standalone?