
🕰️ 2020-08-08 17:31:02

...

Maikel van de Lisdonk 2020-08-10 10:36:46

Are there people here who have experience with crowdfunding a FoC project? When would that be a good fit?

J. Ryan Stinnett 2020-08-11 11:22:26

This is definitely a tricky thing to balance, and I have not yet seen what I would consider a "great" answer. For my own projects of this shape, I am planning to use an open, self-hostable model, as I think it's critical for user control to be able to host it all yourself if you want. A subset of premium features would rely on a hosted service, which people could pay for or host themselves.

Here are some useful resources related to this:

Brian Hempel 2020-08-10 11:35:35

Hello all! Please consider submitting to LIVE 2020, the Workshop on Live Programming. Traditionally, most submissions to LIVE are demos of novel programming systems. We hope LIVE can be an opportunity to polish up your work a little and present your progress to the world, by video or PDF or web essay—just be sure to situate the work within the history of programming environments. The submission deadline is Sept 18, and the workshop itself will be online, tentatively Nov 17. The attached Call for Submissions has details, or visit the website: https://liveprog.org/

nicolas decoster 2020-08-10 12:17:48

Hi @Brian Hempel. What kind of work should be submitted? Mainly academic work? Other kinds too? In particular, I am not sure I will be able to "situate the work within the history of programming environments" precisely. I have never conducted any real research about it, and the most I can do is note some inspirations and counter examples for my vision, not a real academic comparison.

Brian Hempel 2020-08-10 15:14:38

Nicolas Decoster Don’t let that keep you from submitting! Part of LIVE’s goal is to provide a place for work that doesn’t quite fit the academic mold. Noting “inspirations and counter examples” is a good start for an “academic” related work discussion—particularly counter examples! Also because this is an eclectic workshop we’re not expecting quite as much thoroughness, something like 5-10 references is probably reasonable if you’re not coming from an academic background. If you ping me with your idea and those inspirations and counter examples I might be able to point you in a profitable direction so you can spend an afternoon following references and filling in some gaps. But, again, don’t let that keep you from submitting!

Nick Smith 2020-08-10 12:16:50

Has anyone ever come across (or thought about) a tagging system for data where nouns and adjectives play different roles? For example, you might search for a "user", but then refine that search to a "banned user". Note that "banned" and "user" are not necessarily independent tags. Just because someone is a "banned user" and a "father" doesn't mean they're a "banned father" as well. The adjective "banned" could specifically relate to the noun "user".

Or a slightly clearer example: someone who is both a "skilled baker" and a "writer" is not necessarily a "skilled writer".

I'm now wondering whether an understanding of (basic) linguistics is necessary to develop a good tagging system. After all, our mission is to adapt programming languages to the human mind 🤔.

This is part of my search for "an ideal model for information" that readers might remember from a few weeks ago.

Andrew McNutt 2020-08-10 12:54:16

That reminds me of https://www.hempuli.com/baba/

Chris Knott 2020-08-10 13:13:45

This is a very interesting point. Definitely one of those brilliant examples which makes you think you've got more thinking to do...

My first reaction is that every tag is a kind of adjective, i.e. they are descriptive, and it might be more that "banned" is a verb/gerundive that caused the distinction.

I think implicitly the #banned tag applies only to the "father tagged object" and not directly to the underlying object.

You could represent this with a kind of nesting:

person #father#banned #blond

person #father #blond#banned

(person #father #blond)#banned

These could be used to represent: a person banned for being a father, banned for being blond, and banned for being a blond father. The top one would match a query of #banned, #father, #blond, and #banned#father, but not #banned#blond.

Or you could remember what filter the object is being considered under at the time the tag is applied.
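A minimal sketch of how this nesting might be matched, assuming tag chains are stored as tuples and order within a chain is ignored for queries. Everything here is illustrative, not from the thread.

```python
# Each object carries tag *chains*; "#father#banned" means #banned
# qualifies the #father tag, not the object directly.

def matches(chains, query):
    """A query chain matches if some stored chain contains all its parts."""
    return all(
        any(set(q).issubset(set(chain)) for chain in chains)
        for q in query
    )

# person #father#banned #blond
person = [("father", "banned"), ("blond",)]

print(matches(person, [("banned",)]))             # True
print(matches(person, [("father",), ("blond",)])) # True
print(matches(person, [("banned", "father")]))    # True  (chain order ignored)
print(matches(person, [("banned", "blond")]))     # False (banned isn't on #blond)
```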

Jack Rusher 2020-08-10 14:18:36

It's not clear to me how you mean "banned father" that doesn't mean the intersection of "banned" and "father". Could you elaborate?

Chris Knott 2020-08-10 14:24:49

The skilled baker/writer example is much clearer. One intersection is valid, one isn't

Nick Smith 2020-08-10 14:27:35

Jack Rusher

What I meant to convey is:

In natural language, adjectives are sometimes ascribed to specific nouns, i.e. they are not transferrable between the different nouns that an entity can be named by. I added a second example to my original post that hopefully clarifies: someone who is both a "skilled baker" and a "writer" is not necessarily a "skilled writer".

I've not yet seen a tagging system that accounts for this linguistic phenomenon. That said, I'm still figuring out the extent to which supporting it could be useful.

Nick Smith 2020-08-10 14:37:34

Chris Knott I started thinking about linguistics after realising that hierarchy may not be a sufficiently powerful basis for a tagging system. So I'm skeptical that nesting is the right way to encode a reason for banning. (Aside: I didn't intend the discussion to be about verbs/actions, but it's an interesting side question)

Chris Knott 2020-08-10 14:41:31

Yeah, it's not a strict nesting; I think it's a semilattice or DAG.

Nick Smith 2020-08-10 14:54:06

Chris Knott I'm thinking it should be something more like:

person (#father) (#blonde) (#banned #user)

The tags here are grouped into unordered sets. The main purpose of the grouping is to handle the case where an adjective should be associated with a specific noun. Under a different interpretation of my initial prompt, you might get:

person (#father) (#blonde) (#user) (#banned)

as an answer. This can be encoded in an unsophisticated tagging system. I think this was Jack Rusher’s interpretation.

My second (less ambiguous) example would look like:

person (#skilled #baker) (#writer)
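A small sketch of how such grouped tags might be queried, assuming a query matches only when a single group contains all of its tags. The names and the matching rule are illustrative, not a spec.

```python
# Each entity carries unordered *groups* of tags; an adjective
# only applies within its own group.

def matches(groups, query):
    """True if some single group contains every tag in the query."""
    return any(query <= group for group in groups)

bob = [{"skilled", "baker"}, {"writer"}]

print(matches(bob, {"skilled", "baker"}))   # True
print(matches(bob, {"writer"}))             # True
print(matches(bob, {"skilled", "writer"}))  # False: skill is scoped to baking
```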

S.M Mukarram Nainar 2020-08-10 15:30:57

I think a cleaner way there is to make "skilled" a function of sorts that takes an argument to give you a concrete tag

Jack Rusher 2020-08-10 16:09:56

The new example makes what you mean much clearer. In the banned + user case, a user is an entity and banned is a property that an entity can have. In the second example you have a person (entity) who has the role of skilled baker and (perhaps) the role of writer. The usual ways to encode this sort of thing in information retrieval include:

  • (folksonomy) #baker and #skilled_baker are different tags

  • (semantic) person hasRole Baker with a property on this instance of the hasRole property that indicates mastery

  • (statistical 1) mastery is inferred from another property (years baking, number of awards won)

  • (statistical 2) mastery is inferred from the position of the entity in some vector space (think Word2Vec and friends)
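As an illustration of the second (semantic) option, here is a rough sketch with the hasRole relation reified, so the mastery qualifier attaches to a specific role instance rather than to the person. The triple vocabulary is invented for this example, not any particular RDF schema.

```python
# Reifying hasRole: the relation gets its own node ("role1"), so a
# qualifier like mastery applies to that role only.

triples = [
    ("role1", "type", "hasRole"),
    ("role1", "subject", "person1"),
    ("role1", "object", "Baker"),
    ("role1", "mastery", "skilled"),   # qualifies only this role
    ("role2", "type", "hasRole"),
    ("role2", "subject", "person1"),
    ("role2", "object", "Writer"),     # no mastery claim here
]

def skilled_roles(person, triples):
    """Roles of `person` that carry a mastery qualifier."""
    by_id = {}
    for s, p, o in triples:
        by_id.setdefault(s, {})[p] = o
    return [r["object"] for r in by_id.values()
            if r.get("subject") == person and "mastery" in r]

print(skilled_roles("person1", triples))  # ['Baker']
```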

Jack Rusher 2020-08-10 16:11:31

Enter auto-submitted this answer before I was done, so I edited it in my editor of choice, using markdown that Slack would have understood if I had typed it inline, but which it has presented unmodified after copy-and-paste. 🤬

Andrew F 2020-08-10 17:04:01

Hierarchy alone might not be enough, but I don't think it can be avoided either. If you want a "skilled writer", that "skilled" is necessarily interpreted in the context of "writer".

Stefan Lesser 2020-08-10 17:07:17

Nick Smith What are you hoping to find within linguistics that you think would be helpful here?

Andrew F 2020-08-10 17:15:39

I don't think linguistics is the right approach at all. The thing wanted is logic, and trying to import the semantics of messy evolved natural languages will only make the logic harder.

In general I don't entirely agree with the idea that "our mission is to adapt programming languages to the human mind". That might have been a good thought when Grace Hopper tried it, but we know better now. It just has to actually make sense, and people will come around.

Andrew F 2020-08-10 17:22:16

My idea for tagging the writer: bob (#baker) (#writer (#skilled)), where you can think of the nesting as expanding to #writer #writer/skilled for a hierarchy-oblivious system. From a logic perspective it might be more accurate to call it #writer->skilled, i.e. invoking implication, but I haven't quite figured out how that works, either...
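A tiny sketch of that expansion, assuming nested tags flatten into path tags that a hierarchy-oblivious index can store. The names are invented.

```python
# Flatten nested tags into "/"-separated path tags, so
# (#writer (#skilled)) becomes #writer and #writer/skilled.

def flatten(tag, prefix=""):
    name, children = tag
    path = f"{prefix}{name}"
    yield path
    for child in children:
        yield from flatten(child, path + "/")

bob = [("baker", []), ("writer", [("skilled", [])])]
print([t for tag in bob for t in flatten(tag)])
# ['baker', 'writer', 'writer/skilled']
```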

Robert Butler 2020-08-10 19:39:07

Your question took my back to my RDF and OWL days where we were building ontologies to basically do this and reason about it. This has long been the promise of semantic web technologies. However, they never broke the usability barrier. Tags are amazing because of their simplicity. As soon as you have skilled writers you have increasingly complex relationships where the computer is trying to model what is in the user's head.

It's doable, but complex and you run into all sorts of strange cases. Can you be a "skilled hobbyist" or a "skilled thinker"? As soon as you start applying meaning to the adjectives, you lose the ability for the definitions to adapt well based on the community using tags. The more precision you want, the more work you have to do. The more work you have to do, the less users want to do to maintain the taxonomy. IMO, this is one reason machine learning has been so successful. It can basically infer the relationships between tags based on how they are used. But you need a lot of data and you are at a decided disadvantage if you are searching using overloaded terms (I used to use a software package called "Chef" that had "cookbooks" and "recipes" - nightmare to find good docs for answers to questions).

Nick Smith 2020-08-11 01:21:20

Jack Rusher To respond to each of your four points:

  • (folksonomy) My belief that a simple tagging system is inadequate is what led me to make this post. If #skilled_baker is an atomic tag, then you can't add qualifiers (extra adjectives) programmatically, unless you feel like playing with string parsing. I feel like qualifiers could be really important in a tag-based programming system: you can use them to organise/sort information within a specific context (noun), i.e. you don't pollute the global tag space and thereby risk the accidental inclusion of an entity into some faraway dataset that happened to use the same qualifier. You can use qualifiers to programmatically sort your #users into #free #users and #premium #users.
  • (semantic) To me, this is the "graph view" of the problem. A graph-based model is an alternative to a tag-based model, in my mind (though some might argue you can combine them or switch between them). Some people are doing graphs, but I'm personally trying to avoid graphs where possible.
  • (statistical 1) This is about code (inference), not merely data! I think code is an important layer, but we can (and should) keep the layers separate. Code is not a tag, but code can apply tags. That still leaves us with the original question of what the tagging model should be.
  • (statistical 2) Ditto.
Nick Smith 2020-08-11 01:27:31

Stefan Lesser In linguistics, I'm hoping to better understand the means by which we classify, qualify, and describe (etc.) things using natural language, and determine whether it is possible and useful to construct a simplified and reduced version of that as a means to classify, qualify, and describe digital information.

As I said, this is all for the purpose of finding, to the degree possible, "an ideal model for information".

Nick Smith 2020-08-11 01:36:40

Robert Butler I didn't mention it in my original post, but I'm focused on an information model for a user to develop programs within, such that they can easily understand how program data is organised and manipulated.

I'm not looking for a "machine-readable" information model to be used as an input for AI. I think that was the problem of the whole "semantic web" thing: it was about AI, and AI isn't easy.

William Taysom 2020-08-11 05:28:49

More commonly I have mutually exclusive tags (states really). Like, I don't know: #one_star, #two_star, #three_star.

Jack Rusher 2020-08-11 06:37:38

Nick Smith Tags are also a graph, just with a constrained set of semantics. Code is data and vice versa. @Andrew F’s observation that "logic is what you want" isn't wrong, but it's worth remembering that logic and language are co-travelers -- both spandrels evolved to serialize graph-structured human internals for transmission to other humans. If you follow linguistics -> logics to tease out the meaning of the transmissions, you end up with First Order Logic-based knowledge representation systems, like -- as Robert Butler said -- the semantic web stack.

Nick Smith 2020-08-11 07:06:42

Jack Rusher I think we should be careful in uses of the word "are" and "is" when comparing concepts. It often sweeps away deeper understandings. Saying "tags are a graph" irks me, because a graph is defined as a set of nodes and edges (perhaps with attributes), and tags obviously aren't defined in terms of the same concepts. So perhaps the true relationship to be illuminated is: graphs can model anything that tags can model, i.e. graphs are a strictly more powerful model? This seems true when by "tag" we mean a traditional tagging system (hash tags), but that doesn't mean that all conceivable tagging/annotation systems are strictly less powerful than graphs. I'm worried that graphs are a conceptual cage that, once you've committed to thinking in terms of them, deny you the ability to discover new models.

Nick Smith 2020-08-11 07:14:22

Of course if you find something more powerful than graphs, you could then extend the definition of a graph, adding new "features" until it reaches power parity, but at that point you've bent and distorted the original definition (of which there are already many variants). So it might not hurt to just start from a clean slate, rather than bolt on new features to an old model.

Nick Smith 2020-08-11 07:21:23

@Andrew F Jack Rusher Yes, predicate logic is relevant, but the answer obviously isn't "just use logic". If it were, then we would already have found an ideal information model and/or programming language decades ago. Original ideas are needed. I spent a few years trying to do exactly as you suggest: take existing concepts/models/theories and just apply and combine them in the right way. But it's never worked out. Mathematicians invented predicate logic, and they invented graph theory. For my project I'm taking these ideas as inspiration but I truly think I need to invent a new model, which will obviously have some relationship/mapping to old ones.

Stefan Lesser 2020-08-11 09:41:26

Nick Smith Seems like we're thinking about similar things — would love to hear/read more about your project.

I went down the linguistics route (I'm two years in now), and as much as I'm happy to find people interested in it, from the previous discussion I can only sense that it would be more of a distraction for you than it would actually help. What you seem to be looking for is what linguists and cognitive scientists call "categorization". When I found out about it, I was shocked how much linguists ask the same questions we ask when we try to design a good data model.

If you stay on the "classic" generative linguistics path (Chomsky et al.), you'd better get extremely comfortable around hierarchies and taxonomies. It seems most of the work here in the last few decades is basically trying to fit into the classic models what we learn about how the brain works, which diverges more and more from what can be meaningfully expressed with those models that are primarily based on set theory, perhaps with some fuzzy logic applied to it. It looks a lot like the frustration you voiced earlier — we can make it work by stretching the current models and making them much more complex, but it's anything but satisfying.

There are linguists who realized that and who went looking for new approaches. I've been looking deeply into cognitive linguistics and think that's a great field to look at for inspiration. There's Rosch's prototype theory, which is still pretty close to classic categorization with a little bit of fuzzy logic sprinkled in, but that's just the gateway drug into metaphorical structuring and image schemas, and then all your set-theory based logic goes out the window, and you're left with embodied cognition and stuff that is way too ambiguous for what we'd like for static modeling. It seems you have extremely specific use cases in mind, and so I don't feel good about recommending you look into this.

When you say you're looking for more powerful models, "something more powerful than graphs", I wonder what that means to you?

If you want more flexibility in modeling, you need to take into account that this always comes in the form of trade-offs. Mathematics has this pretty much figured out though. If you look at algebraic structures you can pick one with few rules, which gives you a lot of flexibility of what you can model with them, but then you can't do much with these models — if you can't make any assumptions about structure, you can't run algorithms against it. And you will see that graphs are already pretty high up there in terms of flexibility, but to do anything useful with them, you'd have to look at slightly more restrictive (semi-)lattices or… tada… trees again. There's a reason why almost anything can be represented as a graph. Look at Category Theory as an extreme (not to be confused with the "categories" from above). I guess that's the only branch left that qualifies as "more powerful than graphs", but then all its useful applications are only useful once you pull them back down into the land of graphs, sets, and trees.

I'm currently working under the assumption that "a better model" doesn't exist, or at least that I'm certainly not clever enough to invent it, and that the models we have are more than what's needed for progress — the problem is just that we need different models at different times. That's why I'm betting on homomorphic representations and (bi-directional) transformations between them to solve some of the problems of static models that lack the flexibility we need to model dynamic systems.

Nick Smith 2020-08-11 10:15:51

Stefan Lesser My project is an enormous soup of concepts, each sitting somewhere on axes of acceptance/rejection, understanding/ignorance, and integration/isolation (from other concepts). I've been spending a few years (first casually, but now seriously) mapping out what the "ideal programming system" might look like, and some aspects (like a model for code/change/time/distribution) are starting to become clear, but fundamental aspects like the structure of information are still unclear. I've been saying for a long while that I will write a blog post when a sufficient fraction of my project crystallises, but until then I'm just going to keep churning on ideas. Essentially, I need to understand enough that I can build a non-trivial prototype. It's hard to do that without an information model!

I'm not willing to believe that linguistics is a distraction yet. I'm only interested in the basics: I guess I want to understand what word classes, syntax, and grammar from a natural language might be useful as a basis for describing and manipulating the type of information found within a programming system. I probably don't want to go much further down the rabbit hole.

By "more powerful model", I mean giving power to the programmer. I mean a model that makes it easy to express the things that a programmer might want to express, i.e. being possible is not enough. It's possible to model anything with a (souped-up) graph, I'm sure, but that doesn't help much.

I definitely think a better model exists. As I said before, the models we know about today were just invented by some creative people a few decades or centuries ago, usually as a tool to explore a particular domain. Nobody discovered graph theory in the Amazon rainforest or on the surface of Venus. There are some fundamental patterns that result from the physical laws of our universe (e.g. as reified in geometry), but the models we build around these patterns are typically layered with additional constraints. We have a new domain now: programming, and I think it's very possible to invent new models which are appropriate for the domain.

Nick Smith 2020-08-11 10:20:32

I'm believing more and more that "fixed mindsets" are endemic in 21st century Western society — there is a disappointingly low threshold for what we are willing to accept can be newly created. We're willing to accept that we can create new dinner recipes, music, apps, programming languages, etc, but we're not willing to accept we can invent fundamentally new and groundbreaking things, especially foundational models (see: the mess that is modern physics). Besides the domain-specific knowledge required, what’s the difference? Reminds me of Peter Thiel's thesis on Western society's recent inability to invent new technologies.

I've definitely been trapped in fixed mindsets myself. The hardest part is noticing the illusory walls, and the next-hardest part is figuring out an alternative.

Andrew F 2020-08-11 18:15:46

I'm definitely not saying just use predicate logic (note that's not what my proposal did). Even if you do invent a totally new system, I still think existing logics are a better starting point than natural languages. AFAIK formal logic evolved over centuries starting from natural language. Predicate logic in particular arises from similar questions to the ones you're asking. If you ignore all that experience, you're likely to end up tearing your hair out eventually. A logic database is already pretty close to an ideal data format anyway.

You might even want some nice computable fragment of higher order logic rather than first-order to capture the relationships between predicates/tags, though that may be overkill. I definitely think the innovation will be in the right way to cut a logic system down to fit the relatively simple problem of tagging.

Robert Butler 2020-08-11 20:10:23

Nick Smith Your thoughts triggered a line of thinking for me. What makes a groundbreaking thing groundbreaking? I think that discovering groundbreaking new technologies is more a function of connecting a set of normally unrelated ideas together. Most breakthroughs and new technologies have been a slow iteration of new ideas based on old ones, experimentation, and connecting unrelated ones. Take flying, for example. Powered flight actually arose nearly simultaneously in several places because the necessary supporting ideas were available; we just had to put them together. This is true of almost everything. I'm not sure if I agree with this, but maybe you could argue that we don't have groundbreaking technologies anymore because everything is groundbreaking. In the age of the internet, we iterate so fast and can so easily connect so many different ideas together that it has become utterly commonplace.

Truly groundbreaking technologies need to seem to come from nowhere. That is, there needs to be something mysterious in the source of it and it needs to fundamentally change the course of history. By the second measure, the internet, the iPhone, reusable space program, electric vehicles, batteries, cloud computing and so much more are in a very real sense changing nearly everything, except it's all mixed up and happening so fast that nothing looks groundbreaking anymore. Further, the mystery is gone in some sense. We have a really high quality record of where the innovations are coming from, what ideas are being combined and how. The internet is insane.

Take Newton, for example. Newton did a lot of utterly absurdly groundbreaking work. But in large part, that is because 1) we have limited records of the thought processes and ideas that were connected to get there, 2) he had access to much of the knowledge of his time, and 3) it stood out from its contemporary work. I also think there is a component of groundbreaking work coming from a single mind: it has to be conceivable in a single mind, and not a lot of minds were doing the conceiving. That is, Newton lived in an age where a single human mind could "discover gravity" as it were. I would argue that we have minds today that are able to conceive of, understand, and "break ground" on vastly more complex and strange things than Newton ever did. What has changed is the environment, frequency, and number of minds conceiving.

I highly recommend "How to Fly a Horse" by Kevin Ashton: https://en.wikipedia.org/wiki/How_to_Fly_a_Horse

Robert Butler 2020-08-11 20:27:19

Also keep in mind the timescales here. Peter Thiel's "extended technological stagnation" is honestly laughable to me. It's been a sensational idea for a long time to figure out how to get the "world of tomorrow, today". His "extended" time period is a blip compared to the time scales things have taken historically. "We were promised flying cars and got 140 characters" is a straw man argument. Things are moving so fast right now that we literally can't keep up with the groundbreaking consequences of it. We are swimming in so much groundbreaking that we have forgotten it's the air we breathe.

Also, who said 140 characters wasn't groundbreaking? Social media has fundamentally changed the world in so many ways for good or ill. Also, we have flying cars. Whether they are useful or just a novelty remains to be seen.

Robert Butler 2020-08-11 20:55:29

Also the productivity discussion around Thiel's ideas is not specifically technology related. Productivity is an economic output, which is basically based on how much energy you can tap into. Jeremy Rifkin has done a lot of writing about that, and if he is right we are about to tap into the next jump in productivity as we start to unlock solar energy.

Robert Butler 2020-08-11 21:01:58

So... in summary, what I'm trying to say Nick Smith is that you should definitely avoid a fixed mindset and keep pushing through to see what you can uncover while shooting for your ideal programming language. I believe it's a worthwhile endeavor regardless of whether I can see past my RDF and graph work. I've definitely long believed there is something there, but experience taught me that you end up getting too caught up in formal language as a substitute for logical formalism and it becomes awkward.

That said, it doesn't mean that you won't be able to crack it or at least discover something new and interesting. Keep an open mind to look at what you might think are completely unrelated fields or areas of thought. Often they hold the keys to innovation.

Robert Butler 2020-08-11 21:04:20

Also, maybe there isn't a single "ideal programming language". There is probably an ideal one for what you are trying to do. That might be helpful to focus on exactly what you want to do with it.

Nick Smith 2020-08-12 00:06:50

@Andrew F I am using formal logic as an inspiration, e.g. Datalog is my inspiration for a code model. It sucks as a data model though. You're probably right about there being a logic that could be invented and layered onto first-order logic to effectively describe the tagging system I'm looking for. Seems like I'm going to discover what that logic is after I invent a system though? As in, the properties I want will define/induce a logic, even if I'm not looking for one.
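For readers unfamiliar with Datalog, here is a toy forward-chaining evaluation to a fixed point, in the spirit of what Nick mentions. The relations and the rule are invented for illustration and say nothing about his actual design.

```python
# A minimal sketch of Datalog-style evaluation in Python.
# Facts are tuples; a rule derives new facts from existing ones.

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def ancestor_rule(facts):
    """ancestor(X, Y) :- parent(X, Y).
       ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z)."""
    derived = set()
    for rel, x, y in facts:
        if rel == "parent":
            derived.add(("ancestor", x, y))
    for rel1, x, y in facts:
        for rel2, y2, z in facts:
            if rel1 == "parent" and rel2 == "ancestor" and y == y2:
                derived.add(("ancestor", x, z))
    return derived

# Iterate to a fixed point: apply the rule until nothing new appears.
while True:
    new = ancestor_rule(facts) - facts
    if not new:
        break
    facts |= new

print(sorted(f for f in facts if f[0] == "ancestor"))
# [('ancestor', 'alice', 'bob'), ('ancestor', 'alice', 'carol'),
#  ('ancestor', 'bob', 'carol')]
```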

Nick Smith 2020-08-12 01:05:19

Robert Butler This definitely deserves to be in its own thread. We're not talking about tagging systems any more! (Warning: a very personal perspective follows)

Perhaps the word "groundbreaking" was too strong — it has too many connotations. My point is that I refuse to dismiss the possibility that I can invent something wholly new. I get the vibe from people sometimes that they believe they can invent new products but not new models. At best, they take some model that already exists and tweak it, and I'm given advice to do the same. There's a big difference between products and models, and Silicon Valley only knows how to make software products (and occasionally hardware products), and even then... most software products have poor utility and/or ergonomics. Silicon Valley isn't pumping out much new computer science (relative to their size), except in very specific niches like machine learning. And don't get me started on academia: they stick to very specific niches that are "accepted" within their circles — it's a game of conformity, or "micro-innovation".

New products are great, but there's a limit of how much we can achieve by building incrementally upon the ideas and the models that are already known to "work". You cited stuff like the iPhone and Elon's stuff: those are the exceptions, not the norm. Those were/are situations where people asked "what if we try designing from scratch?". The existing notions of what a "phone" or a "car" entailed were thrown away, even the core fundamentals like having buttons or (soon) having a driver. You can treat math the same way: there's nothing sacred about existing mathematical models. You can use them as inspiration whilst being fully prepared to move beyond what exists.

Nick Smith 2020-08-12 01:15:17

Robert Butler I'm sticking with the "ideal programming system" notion for (again) the reason of not being willing to close any doors. I'm not willing to accept the conventional narratives on what is "realistic", because they're never based in fact: there's no real evidence to suggest that a (broadly) general purpose system (as opposed to a language) isn't possible. Domain-specific languages are always going to be "ideal" for their domains, so there is no universal language for computing, but a system is able to host these DSLs. The inbuilt language of the system itself is more about providing non-domain-specific notions like change. It's about providing a platform for these DSLs; something for them to be "compiled to", something universal.

Also, the term "ideal" is more about taking software to the "next level", the next stratum. I think there's room for more strata, but we can't even dream about what they'll be like yet. Horse riders probably imagined mechanical horses, but not self-driving (galloping?) pedestrian-aware mechanical horse taxis.

Robert Butler 2020-08-15 00:02:08

Nick Smith For what it's worth, I applaud your efforts. I think new models are rare, but they do happen. They are hard, fraught with "peril" if you will, but can really shake things up. I agree with you on going back to the fundamentals. I have often found my most paradigm-shifting moves have been when I start to feel stuck in the mire of all the complexity. Life is complex, but mostly we tend to tack on artificial complexity. If you can step back and look at the fundamentals, there is often a new model begging to get out of the problem space, but we were just too far down the path we started down to notice.

Robert Butler 2020-08-15 00:06:24

Personally, I'm more interested in the future of programming as a way to find new models or rediscover new usefulness in old models. I'm personally not so interested in the new cool toys, but prefer the more fundamental type stuff. That said, I do often find that insight comes from criticism. Someone will say something to me that doesn't sit right but I can't immediately explain why. In the end, if I can really dive into that critique there is something waiting for me to discover it there. At any rate, I hope you take any criticism from my end as me trying to help move toward an answer rather than stop or slow you down or a belief that you won't be successful.

Robert Butler 2020-08-15 00:09:22

Truthfully, I'm also attempting to develop a new model for programming by going back to the fundamentals of how processors work and seeing if we can go back to an early time and branch in a different direction than the one we took to get where we are today. Who knows if I'm going to be successful in developing anything relevant or useful but I'm trying my best anyway.

Robert Butler 2020-08-15 00:09:44

Nick Smith is there somewhere I can go to read more about what you are trying to do?

Nick Smith 2020-08-15 00:20:59

Yeah, I agree that constructive criticism can be helpful. In fact, I'm my own biggest critic. I keep tearing down my plans and prototypes as rubbish (because they are), and every time I do, I learn a little more. Those past failures led me to beliefs such as "graphs are neither the right interface, nor a helpful underlying model", upon which I base a lot of the assertions I make. Feedback from others is important too: hence I enjoy starting all these conversations here.

Nick Smith 2020-08-15 00:25:25

Robert Butler Hardware design sounds 100x harder than PL design. Intel and AMD have armies of people whose career is exactly that 😬, though yes, with strong biases towards incrementalism.

Nick Smith 2020-08-15 00:26:51

Robert Butler Unfortunately when people ask me this, the answer is no. There is no good place to read about what I'm doing. The reason? As mentioned above, aspects of my plan and vision keep churning significantly. I don't have a story I'm willing to commit to yet. But I'm working towards it!

Nick Smith 2020-08-15 00:28:59

My goal is pretty easy to state though: I'm trying to vastly simplify software development. I want to make it 100x simpler. I want to find the simplest programming model & infrastructure that can possibly exist. I've not been willing to make any compromises on that quest.

Nick Smith 2020-08-15 00:30:59

And unlike with projects like "Scratch", I don't think simplicity entails coming up with a cute cat avatar and adding lots of colours to the IDE. 😕

Nick Smith 2020-08-15 00:33:09

Though of course, the Scratch people would probably argue their goal is "onboarding", not discovering new simplicity.

Nick Smith 2020-08-15 00:35:34

I will know I've succeeded if children and experienced developers both use the same interface.

Roben Kleene 2020-08-10 21:27:53

Interesting Twitter thread proposing game engines are becoming more popular for non-gaming use cases. These were the most interesting examples he gave to me:

The Mandalorian and The Lion King were shot almost exclusively using these tools.

Even Hong Kong International Airport uses a "digital twin" built on Unity to simulate changes in passenger volume

He doesn't share many references or links with more details. I'd love to hear from anyone if they have more examples of game engines being used for non-gaming use cases like these. Or any links or other information to share related to this topic. https://twitter.com/aaronzlewis/status/1291889682788253696

🐦 🅐🅩🅛: Until recently I had no idea that game engines are basically eating the world. Urban planning, architecture, automotive engineering firms, live music and events, filmmaking, etc. have all shifted a lot of their workflows/design processes to Unreal Engine and Unity

Ivan Reese 2020-08-10 21:47:42

Here's a good article (I think this is the one I read) discussing the techniques used in The Mandalorian.

https://www.fxguide.com/fxfeatured/art-of-led-wall-virtual-production-part-one-lessons-from-the-mandalorian/

Ivan Reese 2020-08-10 21:48:35

I found this particular bit really interesting:

Latency and Lag

From the time Profile’s system received camera-position information to Unreal’s rendering of the new position on the LED wall, there was about an 8 frame delay. This delay could be slightly longer if the signal had to also then be fed back to say a Steadicam operator. To allow for this when the team was rendering high resolution, camera-specific patches behind the actor (showing the correct parallax from the camera’s point of view) the team would actually render a 40% larger oversized patch. This additional error margin gave a camera operator room to pan or tilt and not see the edge of the patch before the system could catch up with the correct field of view.

Robert Butler 2020-08-10 23:51:49

Wow, this pretty much blew me away. We really aren't far from at least the visual aspects of the holodeck.

William Taysom 2020-08-11 05:29:45

Ivan Reese Got to love the hacks!

Ivan Reese 2020-08-11 13:19:58

@William Taysom It's the 8-frame latency, specifically, that caught my attention. That's a lot of lag. Between that and the LED pixel pitch forcing them to shoot soft, it seems like they still would need to digitally replace the background in post. It makes me wonder if they'll ever not need to replace the background.

Scott Anderson 2020-08-12 02:49:33

I'm biased, but I think a lot of software that doesn't look like dynamic documents/forms (which is a lot of software: text editors, social networks, etc.) and looks more like simulation or dynamic worlds will be made in game engines. This includes current industries that can utilize real-time now, as well as new industries.

Ivan Reese 2020-08-12 03:27:32

Do you mean "game engines" generally, including things that run on the web, or Unity and Unreal specifically, which don't really have a web story?

Scott Anderson 2020-08-12 03:37:04

Generally in that Godot or something new could take market share from Unity or Unreal or at least be a serious player in the space, also it's possible that higher level tools and environments that look like games (Dreams for example) will be more important in this space

Scott Anderson 2020-08-12 03:37:27

Unity and Unreal both run on the web, and Unity's web support isn't bad

Scott Anderson 2020-08-12 03:37:52

I mean people deploy games that run in the browser all the time

Scott Anderson 2020-08-12 03:38:34

I guess I make a distinction there because it means people (probably) won't be using web tech to make game like things, there's not a real path to it

Scott Anderson 2020-08-12 03:38:44

Three.js is pretty nice

Scott Anderson 2020-08-12 03:40:17

But I don't see a future where most spatial applications are made with Three.js, it seems to have a niche where you want to embed 3D models in a mostly traditional web site

Scott Anderson 2020-08-12 03:40:54

But there aren't a ton of games made with Three.js in the grand scheme of things

Scott Anderson 2020-08-12 03:44:14

So because the browser doesn't answer the runtime or authoring environment question for spatial applications (3D, 2D or XR) it will still be around but mostly as a deployment mechanism and OS sandbox (WebGL/WebGPU and Wasm)

Scott Anderson 2020-08-12 03:59:16

We could see a browser like platform with better support for 3D, but that seems hard to get off the ground right now, the browser is kind of too important for that kind of experimentation (although Mozilla does some) and everyone else wants to make walled garden platforms because it's a better business model than open standards

Scott Anderson 2020-08-12 04:09:09

Also I don't think traditional web apps will go away, I only think as more game like applications become important the web won't be the primary platform for implementing them, it will be something like a game engine, even if the web is a deployment target

🕰️ 2020-08-09 08:54:16

...

Chet Corcos 2020-08-10 21:35:56

@Tim Babb what do you think?

Tim Babb 2020-08-10 21:42:40

oo, very cool

Tim Babb 2020-08-10 21:47:21

I think this is getting closer to the right idea. Their initial market seems to be for webdev (?), which I'd consider risky— devs are hard to win over for a whole host of reasons

Tim Babb 2020-08-10 21:51:07

They're doing type safety which I approve of. And it looks like they're trying to offer fairly general primitives, which I also approve of.

It looks like you have to jump out of the system and write JSON/JS to define new functions, though? And I'd be curious how a user would express iteration.

Chet Corcos 2020-08-11 01:15:13

Hmm yeah. Definitely a good start, but it's lacking some customization.

Chet Corcos 2020-08-11 01:15:29

I'd really like to build a custom UI inside the component

Robert Butler 2020-08-10 23:43:27

I just stumbled across this article. It left me feeling like I stumbled onto something very profound. It's not directly related to coding, but this article gets at the very core of what this community is about if we are honest. I know my 16-bit processor, assembly language and ultimately the high level language I'm trying to build are me trying to say something, to be consulted about the future of coding. https://ftrain.com/wwic

Kartik Agaram 2020-08-10 23:52:55

Blast from the past! Great article.

Andrew F 2020-08-11 00:47:00

“Intense moderation” in a customer service medium is what “editing” was for publishing.

Dang, that's good.

I must confess I also loved the part where he called out four sets of programming language zealots in three words.

Kartik Agaram 2020-08-11 07:18:52

Also invented the tilde movement, iirc.

Robert Butler 2020-08-11 19:27:19

Just read that what is code article. Delightfully self-aware, thought provoking and entertaining.

Andrew F 2020-08-11 22:01:30

For anyone else wondering what Kartik Agaram is talking about: https://medium.com/message/tilde-club-i-had-a-couple-drinks-and-woke-up-with-1-000-nerds-a8904f0a2ebf The first google result for "tilde movement" was some League of Legends movement command. :D

Kartik Agaram 2020-08-11 22:40:58

Here's my homepage: http://tilde.club/~akkartik

Garth Goldwater 2020-08-11 03:19:24

https://twitter.com/workingdog_/status/1292940516548640774?s=21 this thread, i think, points to what we lose when our systems and databases aren’t modifiable by everyday users. it’s about the diversity of human experience, and a love of pen and paper, but i think the most important thing to note about the way pen and paper gets talked about here is that the people filling out the forms are free to scratch things out and file reasonable requests and modify things as really existing people and events shake up the ontologies of the processes they are tasked with using

🐦 Shel🐶🐾: When you work in civil service, social services, or even just bureaucracy, you just become very aware of the vast diversity of experiences and life circumstances that exist in the world and how you have to adjust your systems to accommodate.

Kartik Agaram 2020-08-11 03:40:29

On the flip side, it rarely works out for me to scratch out a field on a form. And I've done forms on paper a good amount. Somehow the database schema doesn't feel like the bottleneck in dealing with bureaucracies.

I'll certainly cop to computers perpetuating existing power structures, but it feels a bit much to claim things were all kumbaya before them.

Tim Lipp 2020-08-11 04:40:18

Great lateral thought to apply this to the world of databases! At the very least it is another case-in-point for the importance of "localism" for administrative purposes. Some of the best shelters I know of really typify this.

Garth Goldwater 2020-08-11 10:45:31

Kartik Agaram definitely not my intention to argue in support of bureaucracies—just pointing out that you can get a lot of leeway forced into a system when users know in their hands that they can change it

JP Posma 2020-08-11 05:49:24

I haven’t shown this before, but I figured that it might be interesting to this group. For 2 years I was the tech lead on a robotics visualization tool called Webviz (I’ve moved to a different team now though). Not really looking for feedback since the intended audience is robotics engineers, and the onboarding experience is.. super rough. 😅 But we can get away with that since it’s incredibly useful; it’s the most used internal tool at our company. Which goes to show how important interactive programming visualizations are — especially in a complex domain like robotics! Anyway, figured some of you might find this interesting. Happy to answer any questions. https://webviz.io/

🔗 webviz

JP Posma 2020-08-11 05:53:09

Also there’s an article with some more background: https://medium.com/cruise/webviz-fb5f77ebe52b

Mariano Guerra 2020-08-11 13:17:21

what's the future of error handling?

Rahul Goma Phulore 2020-08-11 13:27:20

Interfaces should be respectful of the work that the user has done up to the point an error occurs. The computer should freeze/salvage as much of the state as makes sense for the context, and provide the ability to resume from there. Kinda like Common Lisp’s condition system, but potentially spanning multiple network nodes, and also surfaced meaningfully in the user land. 

It should include contextual suggestions to help “fix” the error, and carry on. Loosely mapping to the Common Lisp restarts.

It should speak the user’s language. It shouldn’t gratuitously expose them to the semantic layers beneath. Unless they want to! Which means you need some sort of “moldability” in your error framework.
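A loose sketch of what Common Lisp-style restarts might look like translated into Python terms. Unlike real CL restarts, this unwinds the stack before offering choices, and every name here is invented.

```python
# An error carries named recovery options the handler (or, in a
# richer UI, the user) can pick from, instead of just failing.

class RecoverableError(Exception):
    def __init__(self, message, restarts):
        super().__init__(message)
        self.restarts = restarts  # name -> callable that resumes work

def parse_config(text):
    if not text.strip():
        raise RecoverableError(
            "config is empty",
            restarts={
                "use-defaults": lambda: {"theme": "light"},
                "retry-with": lambda replacement: parse_config(replacement),
            },
        )
    return dict(line.split("=") for line in text.splitlines())

try:
    config = parse_config("")
except RecoverableError as e:
    # In a real interface this choice would be surfaced to the user.
    config = e.restarts["use-defaults"]()

print(config)  # {'theme': 'light'}
```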

Garth Goldwater 2020-08-11 13:31:57

i agree with the above ^ every error should result in a question to the user about how to proceed, with the opportunity to fix the issue using any required data from a humane interface for finding it

Mariano Guerra 2020-08-11 13:34:00

and what if the error corrupted state or left it in an inconsistent state?

William Taysom 2020-08-11 13:40:40
Mariano Guerra 2020-08-11 13:43:28

so the future is immutable? 🙂

Duncan Cragg 2020-08-11 14:45:32

I don't believe in errors or exceptions, because I believe programming is creating models of reality, and you don't get errors or exceptions in reality.

Mariano Guerra 2020-08-11 14:48:38

how is parsing a CSV when given a PDF a model of reality?

Duncan Cragg 2020-08-11 14:58:48

There are two ways of answering that: from the point of view of (a) syntax as type in a programming model, or (b) interfacing between a clean programming model and the dirty realities of legacy tech

Duncan Cragg 2020-08-11 15:00:32

(a) If you expect some shape and don't see it, you don't see it, no errors, just no behaviour. Hopefully your programming environment will help you play with the stuff of your model to enable you to feel and experiment your way forwards

Duncan Cragg 2020-08-11 15:01:01

(b) yes, something tells you the data import didn't work, in a log or something; so maybe you can have "error as type": an object in the clean programming environment that was meant to be populated with the CSV data has an "error message"; fair enough! 😄

Kartik Agaram 2020-08-11 15:04:27

Depends on the future of everything else! It makes no sense to design features separately and then bolt them on.

Depends on where in the utopian stack you're talking about. Different places will need different, ahem, trade-offs.

Duncan Cragg 2020-08-11 15:04:54

↑ and what Kartik sez

Andreas S. 2020-08-11 15:57:57

Robust Computing

Garth Goldwater 2020-08-11 17:59:24

yeah, ideally the error percolates up as a mismatch—for example, something's undefined. the tool rewinds the execution trace to where that symbol became undefined—gives you the process and lets you edit it such that it doesn't happen again

Garth Goldwater 2020-08-11 17:59:37

similar to how TDD works in a smalltalk image

Garth Goldwater 2020-08-11 18:10:35

demonstrated at 17:30-21:00 in this video (tldr at 19:30-20:45): https://youtu.be/eGaKZBr0ga4

Duncan Cragg 2020-08-11 18:15:12

.. * tl;dw you mean 😄

Kartik Agaram 2020-08-11 18:33:50

I assume @Andreas S. is referring to https://www.youtube.com/playlist?list=PLm5k2NUmpIP-4ekppm6JoAqZ1BLXZOztE

Very nice internally consistent answer. Fixing error handling this way affects the entire programming model.

Andrew F 2020-08-11 18:37:34

I do think Duncan Cragg is on to something. "Errors" are fundamentally just another control flow path that don't necessarily deserve special treatment. From this I conclude that any shenanigans we want to do with exceptions should also be available for happy-path code as well. I'm pretty sure this means pervasive use of delimited continuations, possibly in the form of algebraic effects.

Kartik Agaram 2020-08-11 18:45:21

Or anonymous sum types that are structurally rather than nominally matched. Go's pattern but with a single result.

Andrew F 2020-08-11 19:10:45

Kartik Agaram can you elaborate a bit on what you mean by "structurally matched"? A doc link to a language that does it would be fine. I'm assuming "nominally matched" is the usual ML-style pattern match...

Kartik Agaram 2020-08-11 19:17:07

Right. The closest language is https://ceylon-lang.org/documentation/1.3/introduction. Search for 'union type'.

Basically structural typing addresses Rich Hickey's criticism of Haskell. You don't always want int to match some union type containing ints. But it's good to give programmers the choice. Particularly for errors, it's good to be able to just return an int and have it automatically convert to int|err.

This approach to error checking feels more parsimonious than first class delimited continuations and algebraic effects.
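A sketch of this pattern in Python 3.10+ syntax, as an analogy to Ceylon's union types rather than a port of them: a plain int is already a member of int | ParseError, so the happy path needs no wrapping. Names are invented.

```python
from dataclasses import dataclass

@dataclass
class ParseError:
    message: str

def parse_port(s: str) -> int | ParseError:
    if not s.isdigit():
        return ParseError(f"not a number: {s!r}")
    return int(s)  # a bare int is already in int | ParseError

# The caller matches on the structure of the result, not a wrapper.
match parse_port("8080"):
    case int(port):
        print("listening on", port)
    case ParseError(message=m):
        print("error:", m)
```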

🕰️ 2020-08-09 09:52:12

...

Ope 2020-08-11 13:22:36

https://www.usenix.org/legacy/events/sec99/full_papers/whitten/whitten_html/index.html This was about how a new field, usable security, came to be. The point being that it's not just about stuff technically working; it's about making sure it's usable. People in security used to be like: if it's not cryptography, it doesn't belong.

Ope 2020-08-11 13:25:06

Sorry, just got to these; they're the ones that were still in my browser tabs 🙂.

Ope 2020-08-11 13:26:24

And yeah, one of the biggest problems is that papers are behind a paywall (I know, Sci-Hub, I know :) ) but it meant I closed some links cos I couldn't be bothered to check. TMI already

Ope 2020-08-11 13:27:39

Just updated this with some links from the summer school.. cc Will Crichton any other ones to add?

S.M Mukarram Nainar 2020-08-12 14:50:31
Garth Goldwater 2020-08-12 15:01:36

wow this is amazing!!

Steve Peak 2020-08-12 15:01:58

Somewhat reminds me of https://www.mercuryos.com/

Garth Goldwater 2020-08-12 15:05:01

Kartik Agaram for some subconscious reason, Principle 11: Defer Composition reminds me of your project

Steve Peak 2020-08-12 15:05:43

I hope that the future of software is more about interoperability and context-aware flows (both of which are essentially absent in today's products due to the desktop paradigm and web page paradigm). It's as if every app/company needs to become its own bespoke marketplace and spread very horizontally with its feature set to accommodate edge cases. I see a future where "features" are democratized and broken down into smaller units, and the new operating system, like Mercury for example, is the binding agent, applying some, if not all, of the principles outlined in that wonderful article.

Kartik Agaram 2020-08-13 01:32:11

Garth Goldwater I don't understand why, but I'll take it 😆 It reminds me more of Alan Kay on late binding.

Garth Goldwater 2020-08-13 01:33:21

i think i was thinking about you pointing out that you didn’t actually need to use ncurses. sounded like a similar kind of situation emerging from your prototyping

Kartik Agaram 2020-08-13 01:41:14

Yeah that makes sense. I hope to one day reach his desired state: add graphics, subtract terminal escape sequences.

Kartik Agaram 2020-08-13 02:04:11

Persistent state stores should not be some assumed default, but rather require interactive/user-initiated automation. Lifespace should be coupled to namespace.

I ❤ this much more than @François-René Rideau's ideal of orthogonal persistence: https://ngnghm.github.io/blog/2015/08/03/chapter-2-save-our-souls. Yes, the two are not mutually exclusive. But they are both hard, and they point us in opposite directions. If I had to choose one, I know which way I'd go.

Andreas S. 2020-08-12 16:42:29

🐦 Conor White-Sullivan: Keep thinking about how our long term goal is to invert this

@worrydream is right https://twitter.com/MeadowsRichard/status/1248767679583977473

🐦 Richard Meadows: keep thinking of this gif in relation to @RoamResearch every week something disappears into the maw: • journaling • GTD + project management • blog posts, essays • goals and reviews • contact management • zettelkasten • reading list • training log • frickin groceries

Christopher Galtenberg 2020-08-12 17:01:51

Don't need to invert it if we get the right computer - for instance, the right computer for a human may be that the whole desk is a display, and that there are physical props as interface elements

Steve Peak 2020-08-12 17:05:13

Yea… I tend to think this narrative is narrow-minded too. On one side: I would love to see more "real-world" interactive design, but on the other, I want the computer to just do shit for me without having a bunch of paper on my desk. Nonetheless, IMO the computer gets in our way most of the time, and in the future we will have something like Jarvis in Ironman (which I hope Dynamicland is ultimately heading towards). All I want is: "Computer, do my taxes" and "Computer, earl grey hot."

Steve Peak 2020-08-12 17:06:41

Also… you cannot carry around physical props on the go… I could see the phones being embedded into the arm, but no way will you catch me with a pocket full of neatly wrapped paper and a tripod on my back for the camera to watch my paper move.

Steve Peak 2020-08-12 17:07:49

Mobile is the future. To me, mobile is not a phone it’s computers getting out of the way and more assisting my life and working with me so that I can be more mobile 😉

Christopher Galtenberg 2020-08-12 17:12:49

Jarvis was neat because the idea wasn't "computer go do this for me", it was "let's work on this together"

Steve Peak 2020-08-12 17:14:06

💯 I’m crying with love for that statement.

Steve Peak 2020-08-12 17:29:43

🐦 Steve^: The future is not about the two extremes of devices vs physical things as expressed here https://twitter.com/Conaw/status/1283434727584620544 it's about computers fully getting out of the way and augmenting tasks on a human level: think Jarvis from Ironman: working with you not for you.

Jack Rusher 2020-08-13 12:38:47

A CS professor thirty years ago: 'In the future when we talk about computers people will ask, "you mean those little things we put under our tongues to make us smarter?"'

Shubhadeep Roychowdhury 2020-08-12 20:39:45

Rob Pike's 5 Rules of Programming

http://users.ece.utexas.edu/~adnan/pike.html

Kartik Agaram 2020-08-12 21:33:25

minor pedantic note on 'premature optimization':

This was originally attributed to Tony Hoare, but tracking down the origin of this quote I found that it was actually Knuth who said it first. Although Knuth did call it "Hoare's dictum" 15 years later, Hoare himself disclaimed it. See http://shreevatsa.wordpress.com/2008/05/16/premature-optimization-is-the-root-of-all-evil and https://wiki.c2.com/?PrematureOptimization

Chris Maughan 2020-08-13 06:30:55

This is basically my approach to coding, good stuff. The first question I ask anyone who tries to check in an optimization is "What was the measured time before and after?" If they can't answer that question, they are doing the wrong kind of programming.....

Shubhadeep Roychowdhury 2020-08-13 07:50:51

I did not know of these rules, nor had I formalised my approach the way Mr. Pike did. However, I do agree with it and I follow it most of the time. Kartik Agaram Also, I had a faint memory of the "Premature Optimisation" part being attributed to Knuth, thanks for confirming. This is a very important lesson and many people learn it the hard way.

Shubhadeep Roychowdhury 2020-08-13 07:51:25

Can we do better than our C compiler?

https://briancallahan.net/blog/20200812.html

Felix Kohlgrüber 2020-08-13 11:33:51

Most people here probably use terminals / CLIs quite often and like them for their conceptual simplicity: Send a command -> Receive a response. I'm thinking about improving CLIs while keeping what I like about them. One thing I'd like to improve is interactivity in the output. Different use cases require different levels of detail and currently, there's no way to change the level of detail once the result of a command is printed. For example, a lot of commands include --verbose options that perform the same action, but print additional output. So what if you could interactively decide what information to show in the output? This would allow more flexible / readable output and wouldn't require re-running commands to get additional information. In the tweet below, I mocked a simple example where a stack trace is hidden by default, but can be shown when needed. What do you think of this small idea?

https://twitter.com/FKohlgrueber/status/1293865745420615680

🐦 Felix Kohlgrüber: Collapse / Expand is a feature I've been wanting in terminal output for a long time. Thinking about this, a lot of --verbose / -v options could probably be replaced by this very simple form of interactivity. What do you think?
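
(Illustrative sketch, not from the thread: one way to model this is for the program to emit structured nodes with a summary and optional detail, and for the terminal to re-render them on demand. Python is used here purely as pseudocode-that-runs; all names are made up.)

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    summary: str                                 # always shown
    detail: list = field(default_factory=list)   # shown only when expanded
    expanded: bool = False

def render(nodes):
    for i, n in enumerate(nodes):
        marker = "[-]" if n.expanded else ("[+]" if n.detail else "   ")
        print(f"{i} {marker} {n.summary}")
        if n.expanded:
            for line in n.detail:
                print(f"      {line}")

output = [
    Node("Compiling project... ok"),
    Node("Error: task failed", detail=[
        "Traceback (most recent call last):",
        '  File "main.py", line 3, in <module>',
    ]),
]
render(output)            # stack trace hidden by default, like the mock
output[1].expanded = True
render(output)            # after the user toggles node 1, the trace is visible
```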

Kartik Agaram 2020-08-13 14:30:59

I like it, but what's the next step past mock? Ship terminal+shell?

Felix Kohlgrüber 2020-08-13 16:32:09

Kartik Agaram My main intention was to get the idea out and gather feedback. I'm currently not planning to actually implement this. Shells and terminals are a rabbit hole I don't want to get into right now. If I were to implement it, though, I'd probably take the chance to rethink terminal interfaces from the ground up. Interactivity, rich formatting, multimedia content, terminal-program communication, etc.; I feel like there's still a lot of unused potential. And most importantly, I'd replace all the old stuff like ANSI escapes, \r, etc. with portable and clean 21st-century interfaces.

Kartik Agaram 2020-08-13 16:35:30

Mocks are great for decision making. But only if you're planning on doing something based on the feedback.

I think I'm just frustrated because I have lots of plans for Mu in this vein, but I HAVE NO HANDS. Stuck with RSI.

Felix Kohlgrüber 2020-08-13 16:59:11

Oh, sorry to hear that. I wish you all the best. What are your ideas regarding this in Mu?

Another reason for me to do these small demonstrations is to show others that even things as stable as terminals can still be improved.

Kartik Agaram 2020-08-13 16:59:45

Oh you know I agree with that 🙂

Kartik Agaram 2020-08-13 17:45:19

I don't know if you've been following this group, but you can see some of my demos at https://archive.org/details/@kartik_agaram and https://mastodon.social/@akkartik/104256362334830835. They're less exciting because I'm proceeding bottom-up. But they should also be easy for anyone else to reproduce.

Jeff Mickey 2020-08-13 17:47:41

I actually really like using emacs' eshell for some weird output shenanigans. Because it internally uses buffers for every pipe (doesn't have real pipes) you can go back and forth between the intermediate buffers it creates, cat them, etc.

This also points to a tree-editor direction: output trees of info, and then have some numbering for previous trees (like Lisp REPLs that let you refer back with $1, $2, $3, etc.)

Jeff Mickey 2020-08-13 17:48:12

though eshell is so slow, this actually always ends up annoying the crap out of me, so then I go back to rc in shell-mode

Steve Dekorte 2020-08-13 19:58:33

I love the idea of structured input and output for everything. If --help had a sufficiently structured representation, could we auto-generate UIs for CLI tools? (See the sketch below.)

Garth Goldwater 2020-08-13 20:29:49

yes! nushell does the beginning of that, as does xiki. reminds me of naked objects
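
(Illustrative sketch: the spec format below is invented; no real tool emits it. The point is just that once --help is machine-readable, rendering a form and assembling the command line back from the answers is mechanical.)

```python
import shlex

spec = {  # imagined structured output of something like `grep --help --json`
    "command": "grep",
    "options": [
        {"flag": "-i", "label": "Case insensitive", "type": "bool"},
        {"flag": "-n", "label": "Show line numbers", "type": "bool"},
    ],
    "positional": [{"name": "pattern"}, {"name": "file"}],
}

def build_command(spec, answers):
    """Turn a filled-in 'form' (answers) back into a shell command."""
    parts = [spec["command"]]
    parts += [o["flag"] for o in spec["options"] if answers.get(o["flag"])]
    parts += [shlex.quote(answers[p["name"]]) for p in spec["positional"]]
    return " ".join(parts)

print(build_command(spec, {"-i": True, "pattern": "todo", "file": "notes.txt"}))
# -> grep -i todo notes.txt
```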

Doug Moen 2020-08-13 20:53:58

This feature would be quite useful in the REPL for my own language project. Here are some other features I need:

  • Animated text. If you type time (which is a numeric expression), then you should see an animated floating point number that counts up, measuring the number of seconds since the command was typed. (See the sketch just after this list.)
  • Colour swatches. If you type red, which is a Colour expression, you should see a colour swatch, in full 24 bit RGB.
  • Animated 3D shapes.
  • Visualizations of data structures (e.g. lists and records), where data structure elements can be animated numbers, colour swatches, or 3D animated shapes.

I doubt I can find off-the-shelf tech that does everything I need, so I expect I'll need to implement this more or less from scratch. ☹
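
(Illustrative sketch of the first bullet, assuming nothing fancier than a plain ANSI terminal: the REPL redraws the value in place each frame rather than printing a static result. A real implementation would tie this into the REPL's event loop.)

```python
import time

start = time.monotonic()
try:
    while True:
        elapsed = time.monotonic() - start
        # \r returns to the start of the line so the number updates in place.
        print(f"\rtime = {elapsed:8.3f}", end="", flush=True)
        time.sleep(1 / 30)   # ~30 frames per second
except KeyboardInterrupt:
    print()
```
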
Konrad Hinsen 2020-08-14 08:13:00

My plan for a better shell environment (purely theoretical, I have no time left for such pleasures at the moment) would be to start with Tudor Girba's GToolkit (https://gtoolkit.com/) and implement a playground plugin for running shell commands, capturing the output in a command-specific object that does some basic parsing. It would then be a small job to add customized views on the parsed output - small enough that you could do it on the spot while working on a specific problem.

I actually expect the implementation of this idea to be a small project. The big one is providing enough output parsers for popular CLI tools to make it practically useful.

Tudor Girba 2020-08-14 08:16:44

Indeed 🙂
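
(Illustrative sketch of Konrad's parser-registry idea, in Python rather than GToolkit's Pharo. The df parsing assumes the usual column layout and would need per-platform variants, which is exactly the "big project" part.)

```python
import subprocess

PARSERS = {}  # command name -> function that turns raw output into objects

def parser(name):
    def register(fn):
        PARSERS[name] = fn
        return fn
    return register

@parser("df")
def parse_df(text):
    rows = [line.split() for line in text.splitlines()[1:]]  # skip header
    return [{"fs": r[0], "avail_kb": r[3], "mount": r[-1]} for r in rows]

def run(cmd):
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    parse = PARSERS.get(cmd[0], lambda t: t)   # unknown commands stay raw text
    return parse(out)

# One possible custom "view" over the parsed objects:
for row in run(["df", "-k"]):
    print(f"{row['mount']:<24} {row['avail_kb']} KB free")
```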

Felix Kohlgrüber 2020-08-14 07:47:09

Next mock: This time it's about resizing and how that breaks layout in state-of-the-art terminals.

https://twitter.com/FKohlgrueber/status/1294177496729030656

🐦 Felix Kohlgrüber: I'd love to have a terminal that correctly wraps text when resizing the terminal window. This is such a simple feature and has been implemented in browsers / office programs for decades, but terminals still don't support it. (1/2)

Felix Kohlgrüber 2020-08-14 07:50:32

The underlying problem here is that for console programs, the layout is determined by the client program, not the terminal. This implies that the terminal doesn't have the information to correctly re-layout the content on resize. The solution to this would be to pass structured data to the terminal and have it do the layout.
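
(Minimal sketch of that solution: if the program hands over paragraphs instead of pre-wrapped lines, the terminal can re-wrap them itself whenever the width changes. Here the resize is simulated by calling render twice.)

```python
import shutil
import textwrap

paragraphs = [  # structured content: the program never hard-wraps these
    "error: expected one of `,` or `}` while parsing the configuration file",
    "hint: run the command again with --verbose to see the full parser state",
]

def render(width):
    for p in paragraphs:
        print(textwrap.fill(p, width=width))
        print()

render(shutil.get_terminal_size().columns)  # current window width
render(40)  # user narrows the window: a real terminal would re-render on resize
```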

Mariano Guerra 2020-08-14 09:49:01

"A good science fiction story should be able to predict not the automobile but the traffic jam." ―  Frederik Pohl

What's the traffic jam of the future of coding?

Chris Knott 2020-08-14 11:18:15

No software tool will change fundamental brain chemistry. There is a limit to how fast humans can learn, because learning and thinking are physical processes that involve the actual movement of atoms and the building of biological cells. There are potentially huge, not well understood benefits to effort, toil, and failure in long-term knowledge acquisition and design.

There are potential catastrophes around superpowering one part of our abilities and not others, a bit like early attempts to eliminate sleep. I think Chris Granger has written before about the benefits of pointless busywork in coding (manually balancing brackets, manual renames of variables, etc.). Sometimes confronting a problem in its purest conceptual form is just too big a chunk to chew.

So I would guess the "traffic jam" feels like those times when you actually have loads of time on your hands but you don't seem to be able to start anything because there are no little problems.

I linked this Ted Chiang story before; it highlights some of the arguably negative effects of the previous literacy revolution - http://web.archive.org/web/20131204053806/http://subterraneanpress.com/magazine/fall_2013/the_truth_of_fact_the_truth_of_feeling_by_ted_chiang

Stefan Lesser 2020-08-14 12:11:14

Funny, I’ve never thought about it like this, but now when you say “traffic jam” and coding, my brain immediately goes to package management, libraries, and projects — too many people driving their own cars and too few people interested in car sharing.

Clemens Klokmose 2020-08-14 12:40:42

From Alan Kay’s “A personal computer for children of all ages” from 1972.

📷 image.png

Steve Peak 2020-08-14 14:06:10

The future of coding is that people won’t be coding at all 🤣

Steve Peak 2020-08-14 14:07:23

Depends on how far in the future, perhaps. But IMO the first traffic jam is accessibility being pretty terrible.

Ope 2020-08-14 14:21:49

Requires knowing what the automobile equivalent is first 😅. One way to answer is to assume what the automobile is and think of what happens when there is an abundance of it. Say the automobile is "everyone can code". Do software engineers go extinct, or almost?

nicolas decoster 2020-08-14 14:35:19

I don't think the future of programming will be that people don't practice it anymore. The core of the programming activity is managing the complexity of automatic behaviors, whatever the underlying system is.

Progress in programming only means simplifying things that were complex, and at the same time making impossible things feasible but complex. So there will always be something complex that needs people to work on.

Ope 2020-08-14 14:37:42

Oh, by that I mean it becomes ubiquitous, in an "everyone does some maths but we aren't all mathematicians" sort of way. Or like how human computers don't have jobs anymore: it's not because we are no longer computing. And things that used to require software experts might just need a domain expert (to get something basic, at least).

Christopher Galtenberg 2020-08-15 05:10:44

The traffic jam would be everyone flinging code the way everyone is now a writer... in the micro-form of posts and tweets that are now swamping the earth

I think a lot about the micro-form of coding that people use to amuse themselves and get something small done - maybe if it's done right, there'll be less writing/venting out in the world and more getting results

🕰️ 2020-07-30 16:01:41

...

hamish todd 2020-08-14 11:12:15

Maybe worth saying that the thing I am working on is planned to be AR/VR https://www.youtube.com/watch?v=hR-MQm3c13Q 😅 lonely furrow I guess

Ivan Reese 2020-08-15 04:19:43

📢 https://futureofcoding.slack.com/archives/CEXED56UR/p1596524159412300. Here's hoping I get it all done before my toddler wakes up 🤞

EDIT: Done for the night. More updates to come, likely on Sunday rather than Saturday. Cheers 🍻

Ivan Reese 2020-08-15 05:05:53

Interesting wrinkle — when you use the #hashtag name for a channel in a message, and then rename the channel, your message is updated to reflect the new name. Most curious, and likely carries a number of second order consequences.

Shalabh Chaturvedi 2020-08-15 05:19:58

yeah interesting

Ivan Reese 2020-08-15 05:21:40

wonder if they store it all as text and just rewrite the messages as-persisted, or if the channel name is somehow transcluded into the message at render time, or something else.
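
(A hedged guess, consistent with the channel-mention format in Slack's public API: the raw message stores an ID token like <#C123ABC> rather than the name, and the channel's current name is looked up at render time, which would explain why renames propagate. Sketch below; the IDs and names are made up.)

```python
import re

channels = {"C123ABC": "share-your-work"}   # channel id -> current name

def render(raw):
    # Replace <#ID> or <#ID|old-name> with the channel's *current* name.
    return re.sub(r"<#(\w+)(?:\|[^>]*)?>",
                  lambda m: "#" + channels.get(m.group(1), "unknown-channel"),
                  raw)

msg = "Post it in <#C123ABC>!"
print(render(msg))                   # -> Post it in #share-your-work!
channels["C123ABC"] = "your-work"    # the channel gets renamed
print(render(msg))                   # -> Post it in #your-work!
```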

Shalabh Chaturvedi 2020-08-15 05:24:06

yeah more importantly how do you refer to the old name, let me try.. #general

Shalabh Chaturvedi 2020-08-15 05:24:49

ok you could just edit the msg I guess.

Shalabh Chaturvedi 2020-08-15 05:27:47

another instance

📷 image.png

Ivan Reese 2020-08-15 05:47:54

^ Yeah, that's wild. I'm half-tempted to.. okay, gonna do it. Back in a sec.

Ivan Reese 2020-08-15 05:49:32

Renamed it to #your-work to see what would happen. So it looks like they're not doing a totally naive regex rename.

Shalabh Chaturvedi 2020-08-15 05:50:18

wait wat.

Shalabh Chaturvedi 2020-08-15 05:50:36

shouldn't you change it to # feedback?

Ivan Reese 2020-08-15 06:01:16

I could have changed it to anything — I just wanted to see what happened to my message (where the text #share-your-work wasn't an automatic channel link). But just in case Slack had some sort of dumb limit about naming channels back to the names they previously had, I went with something I'd have been happy with if we were to end up stuck with it.

Ivan Reese 2020-08-15 06:02:37

(I just spent 20 minutes manually adding everyone to #announcements and cleaning up errors due to their crummy add-people-to-channel UI, so.. can't blame me for being suspicious about Slack's dusty UX corners and hidden cliffs)

Ivan Reese 2020-08-15 04:57:18

Ivan Reese has renamed the channel from "general" to "thinking-together"

Ivan Reese 2020-08-15 06:16:50

Welcome to #thinking-together!

This is the new channel for discussions based on our own thoughts and questions about the future of computing.

  • While you can add reference links to your posts, please take any discussions that center around external links to #linking-together
  • While you can discuss the future in light of the present, please take any discussions that are mostly about computing (or the world) of today to #present-company
  • If you'd like to discuss your own work, (A) please do that a lot it's great I love it (B) over in #share-your-work
  • If you'd like to discuss the future of a particular subject, don't miss our subject-specific channels about the future #of-end-user-programming, #of-graphics, #of-functional-programming, and #of-music.

I am working on a new Member's Handbook that will be the canonical home for our cultural norms (how we use Slack messages, the channels, etc) — so stay tuned for that if you aren't quite sure about how to make the most of the new organizational scheme.

One final note — I attempted to add everyone in the Slack to #announcements and #present-company, but the Slack UI for this is not great. I know for certain I didn't get everyone, and I can't easily tell who I missed, so if you aren't in those channels then I encourage you to join them.