Where FoC and TFT overlap... passionate care for "little beginnings"
adjacentpossible.substack.com/p/little-beginnings-everywhere
YimingWu @ChengduLittleA@mastodon.art: #OurPaint The node-based brush engine!
:D
Hello everyone
I found a very interesting session from 1998 that delves into the territory of "humanistic computing", featuring Terry Winograd
and Jaron Lanier: youtube.com/watch?v=Rns8_e2JiKw. Thoughts and feedback are appreciated :)
And another one, since we have some long holidays ahead of us. There are at least a few interesting points (to me, anyway) in how Dave Snowden talks about our relationship to technology, and how we can still scale making better decisions: youtube.com/watch?v=eAWzj6jucL4
Stumbled upon this from the Dial Box wikipedia page: nngroup.com/articles/noncommand
[I]t may be one of the defining characteristics of next-generation user interfaces that they abandon the principle of conforming to a canonical interface style and instead become more radically tailored to the requirements of individual tasks
Not quite how it happened (this article is from the late 90s, i.e. pre-iPhone)
Maxwell's equations of software? Or rather, Feynman diagrams of software? Crossing fingers that some of this is true
Twitter thread explaining it
High-order Virtual Machine (HVM) is a pure functional runtime that is lazy, non-garbage-collected, and massively parallel. It is also beta-optimal, meaning that, for higher-order computations, it can be exponentially faster than alternatives, including Haskell's GHC.
That is possible due to a new model of computation, the Interaction Net, which supersedes the Turing Machine and the Lambda Calculus. Previous implementations of this model have been inefficient in practice; however, a recent breakthrough has drastically improved its efficiency, resulting in the HVM. Despite being relatively new, it already beats mature compilers in many cases, and is set to scale towards uncharted levels of performance.
Welcome to the massively parallel future of computers!
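For context on the "exponentially faster for higher-order computations" claim: the canonical examples are workloads built from repeated function composition, where an optimal (sharing) evaluator avoids re-doing work that a conventional runtime duplicates. A minimal Python sketch of such a workload, using Church-encoded numbers (the names here are illustrative, not part of HVM; plain Python of course gets none of the sharing benefits, this only shows the *shape* of program HVM targets):

```python
# Church numerals: the number n is "apply f, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    # Decode a Church numeral by counting applications.
    return n(lambda k: k + 1)(0)

# Exponentiation is just composition of composition:
# exp(m)(n) applies m to itself n times, yielding m^n.
exp = lambda m: lambda n: n(m)

two = succ(succ(zero))
three = succ(two)

print(to_int(exp(two)(three)))  # 2^3 = 8
```

Programs like this are pure towers of higher-order application, which is exactly where the beta-optimality claim bites.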
It is shown that a very simple system of interaction combinators, with only three symbols and six rules, is a universal model of distributed computation, in a sense that will be made precise. This paper is the continuation of the author's work on interaction nets, inspired by Girard's proof nets for linear logic, but no preliminary knowledge of these topics is required for its reading.
Interaction nets are a graphical model of computation devised by Yves Lafont in 1990 as a generalisation of the proof structures of linear logic. An interaction net system is specified by a set of agent types and a set of interaction rules. Interaction nets are an inherently distributed model of computation in the sense that computations can take place simultaneously in many parts of an interaction net, and no synchronisation is needed. The latter is guaranteed by the strong confluence property of reduction in this model of computation. Thus interaction nets provide a natural language for massive parallelism.
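To make the "local rewrites, no synchronisation" idea concrete, here is a toy interaction-net reducer in Python (this is my own sketch, not HVM's implementation; the agents Z/S/Add for unary addition and the wiring scheme are the standard textbook example, with illustrative names). The only redexes are *active pairs*, two cells wired principal-port-to-principal-port, and each rule rewrites one pair locally; strong confluence is why the pairs could fire in any order, or in parallel:

```python
from itertools import count

# Arity (number of ports) per agent; port 0 is always the principal port.
ARITY = {"Z": 1, "S": 2, "Add": 3}

class Net:
    """A tiny interaction-net reducer for unary addition (agents Z, S, Add)."""
    def __init__(self):
        self.peer = {}   # symmetric wiring: port -> port
        self.kind = {}   # cell id -> agent name
        self.ids = count()

    def cell(self, kind):
        cid = next(self.ids)
        self.kind[cid] = kind
        return cid

    def link(self, a, b):
        self.peer[a] = b
        self.peer[b] = a

    def erase(self, cid):
        for i in range(ARITY[self.kind[cid]]):
            self.peer.pop((cid, i), None)
        del self.kind[cid]

    def active_pairs(self):
        # An active pair is two live cells wired principal-to-principal.
        return {tuple(sorted((a[0], b[0])))
                for a, b in self.peer.items()
                if a[0] in self.kind and b[0] in self.kind
                and a[1] == 0 and b[1] == 0}

    def reduce(self):
        # Each rewrite is purely local; strong confluence means any
        # scheduling of the pairs yields the same normal form.
        pairs = self.active_pairs()
        while pairs:
            c1, c2 = pairs.pop()
            a, b = sorted((c1, c2), key=lambda c: self.kind[c])
            if self.kind[a] == "Add" and self.kind[b] == "Z":
                # 0 + y = y: wire the addend straight to the result.
                y, r = self.peer[(a, 1)], self.peer[(a, 2)]
                self.erase(a); self.erase(b)
                self.link(y, r)
            elif self.kind[a] == "Add" and self.kind[b] == "S":
                # S(x) + y = S(x + y): emit one S, keep adding underneath.
                y, r = self.peer[(a, 1)], self.peer[(a, 2)]
                xp = self.peer[(b, 1)]
                self.erase(a); self.erase(b)
                s, add = self.cell("S"), self.cell("Add")
                self.link((s, 0), r)
                self.link((add, 2), (s, 1))
                self.link((add, 1), y)
                self.link((add, 0), xp)
            pairs = self.active_pairs()

def number(net, n):
    """Build S^n(Z); return the principal port of the outermost cell."""
    top = (net.cell("Z"), 0)
    for _ in range(n):
        s = net.cell("S")
        net.link((s, 1), top)
        top = (s, 0)
    return top

def decode(net, p):
    """Count S cells from a free port down to Z."""
    n = 0
    while True:
        c, _ = net.peer[p]
        if net.kind[c] == "Z":
            return n
        n, p = n + 1, (c, 1)

net = Net()
add = net.cell("Add")
net.link((add, 0), number(net, 2))   # x = 2 (Add's principal faces x)
net.link((add, 1), number(net, 1))   # y = 1
out = ("free", "out")
net.link((add, 2), out)
net.reduce()
print(decode(net, out))              # 2 + 1 = 3
```

Note how each rule only ever reads and rewrites the two cells of its own active pair: that locality is what the "no synchronisation needed" claim refers to.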
Fica, vai ter monads: Since I've been receiving a bunch of DMs asking what HVM, Kindelia and Kind are, and since it is still so early (we don't even have a pitch deck yet!), let me make some tweets to present our work, and a sneak peek at the crazy things we're building. GIANT TECH THREAD INCOMING
This is the page that has helped me the most in making sense of all this: github.com/Kindelia/HVM/blob/master/guide/HOW.md
I've been trying to work through the paper, but the math is pretty dense.
that explanation is much easier to understand, thanks for the pointer Kartik Agaram
From the amount of hype that the creator is trying to generate, I'm concerned that he's a snake oil salesman. After all, he's trying to turn it into a blockchain project.
Can anyone confirm that this execution model is actually novel and promising?
that's the "crossing fingers that some of this is true" part of my post
I know a few smart FP people who do blockchain projects more as an excuse. If this has any "massively parallel" capability, I imagine it must track data dependencies. My second thought is that any practical single node speed will come from playing well with CPU caches and any practical cross node speed will come from efficient communication protocols.
after digging in a bit, I'd say that this passes an initial sniff test. The properties he mentions are theoretically available given how it works, and I can plausibly imagine a low-constant-factor runtime that'd take advantage of it. Don't know about "novel", but I haven't personally seen this particular approach before.
Just so everyone knows, the author hasn't decided whether this implementation will be free to use yet. There's no LICENSE file, and he isn't willing to add one.
the author hasn't decided whether this implementation will be free to use yet.
This doesn't seem like a fair characterization of his comment.
This also seems excessive. The issue is still open. That's usually a sign a decision hasn't been made. How about we give him more than 5 minutes?
I think it's pretty fair. The repository has been up for a year, and he's trying to commercialise his work via a blockchain project. It's quite possible he'll license it under the GPL to stop others from using it freely in practice.
I just thought it's worth pointing that out here before people spend too much time looking into the project under the assumption that it is free to use.
(There's nothing wrong with protecting your work, but IMO you should make it clear what the conditions are before you start sharing it. I don't want to be sued for forking the codebase.)
Wait, you consider GPL to be non-free ?! That seems worth calling out more explicitly.
(I'll stop debating this particular sub-thread. I don't have any love for blockchain projects, but the possibility of getting sued for GPL violations doesn't seem like the most interesting thing here.)