2020-05-29 00:38:03 Unknown User:

MSG NOT FOUND

2020-05-31 23:48:17 S.M Mukarram Nainar:

thanks for all the pointers after reading up some more, and trying out some of the proposals, I'm back to just regular S-expressions, though I must admit the infix rule Konrad Hinsen mentioned is very elegant and is something I will remember. I suppose there's still always room for macros that change the syntax (I like LOOP, please no flames), and going even further, stuff like racket's #lang.

My main issue with syntactic extensions and changes, is that they usually break tooling support—usually macros don't provide too useful error messages, and static analysis tools don't work on the files anymore. Even basic stuff like syntax highlighting. Does anyone know of work in this area? I know of racket's syntax objects and still need to dive deeper into how they're used, but I'm not aware of much else.

2020-06-01 07:57:48 Konrad Hinsen:

S.M Mukarram Nainar Racket is where all the action happens right now on this topic. The PLT group is seriously thinking about a non-s-expression-based language for their ecosystem (codename "Rhombus": https://github.com/racket/rhombus-brainstorming).

Racket's syntax objects are extensions of Scheme syntax objects, which are lists plus metadata for tracing the provenance of transformed code back to the source code files. This metadata is required for implementing correct lexical scoping for identifiers, something that Common Lisp (and others) don't care about. It's a double-edged sword: correct scoping is certainly necessary to scale up macro processing to the point of supporting complex language implementations, but it also increases the learning curve significantly. Common Lisp macros are much simpler, and much easier to understand, but not at all easy to use for complex transformations.

2020-06-01 10:00:47 S.M Mukarram Nainar:

Konrad Hinsen interesting. I will have to spend some time learning about them. Is Beautiful Racket still the recommended resource?

On another note, github.com/mflatt/rhombus-brainstorming/blob/shrubbery/shrubbery/0000-shrubbery.md is quite relevant to the earlier discussion.

2020-06-01 10:01:49 S.M Mukarram Nainar:

(How do I make slack not expand the whole thing?)

2020-06-01 14:39:23 Kartik Agaram:

I just delete the attachment in these situations. There should be an 'x' for you up top on the web interface.

2020-06-02 01:47:47 Konrad Hinsen:

S.M Mukarram Nainar Beautiful Racket is a very good entry point. There are also many little languages by now in the Racket ecosystem that are good examples to study.

2020-06-01 02:09:08 Nick Smith:

Has anyone thought about, implemented, or encountered higher-level abstractions of ALUs? a.k.a. the part of hardware where actual computations are performed (as opposed to the miles of hardware dedicated to control flow management). Almost every programming language has an ALU abstraction based upon fixed-width chunks of binary digits (32 or 64-wide), arithmetic operations (interpreting the chunks as integers or IEEE floats), and bitwise and bitshift operations. Those fixed-width chunks are grouped into "allocations", which are either larger fixed-width containers (structs etc) or dynamically-sized arrays.

Recently I've been thinking about a "clean slate" abstraction that still exposes the basic operations that ALUs are good at (integer arithmetic and bit manipulations) but without the fixed-width chunk limitations. Fixed-width chunks are purely a hardware design limitation and have no inherent value to a programmer's mental model; they just add complexity to data modelling. What DOES have value is the notion of a dynamically-sized bit sequence that can be manipulated via splicing operations (take, drop, insert, replace) that generalize bit shifts, bitwise operations (the same old &,|,^ operations), and the familiar arithmetic operations (add, sub, mul, div...). This is a natural foundation for arbitrary-size integers and sequences, but also for general computations that want an efficient mapping to hardware capabilities. I want to take an ALU abstraction like this and build my way-out-there logic programming environment on top of it, so that you still have a conceptual bridge to hardware, and thus you can still reason about the efficiency of basic operations and use them to create efficient user-defined data types.
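A minimal sketch of what such an abstraction could look like, using Python ints as the backing store; the class name and method signatures are illustrative, not a committed design:

```
# Illustrative sketch only: a dynamically-sized bit sequence backed by a
# Python int (bit 0 = least significant bit). Not an efficient implementation.
class BitSeq:
    def __init__(self, bits=0, length=0):
        self.bits = bits & ((1 << length) - 1)
        self.length = length

    def take(self, n):                    # first n bits
        return BitSeq(self.bits, min(n, self.length))

    def drop(self, n):                    # everything after the first n bits
        return BitSeq(self.bits >> n, max(self.length - n, 0))

    def insert(self, i, other):           # splice `other` in at position i
        low, high = self.take(i), self.drop(i)
        bits = low.bits | (other.bits << i) | (high.bits << (i + other.length))
        return BitSeq(bits, self.length + other.length)

    def replace(self, i, other):          # overwrite bits starting at position i
        mask = ((1 << other.length) - 1) << i
        return BitSeq((self.bits & ~mask) | (other.bits << i),
                      max(self.length, i + other.length))

    def __xor__(self, other):             # bitwise ops pad to the longer operand
        return BitSeq(self.bits ^ other.bits, max(self.length, other.length))

    def add(self, other):                 # integer addition; the result may grow
        total = self.bits + other.bits
        return BitSeq(total, max(total.bit_length(), 1))
```

Under this reading, a left shift by k is just inserting k zero bits at position 0, and a right shift by k is drop(k), so the usual fixed-width shift/mask vocabulary falls out as special cases.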

2020-06-01 02:11:03 Nick Smith:

And yes, arbitrary-sized bit sequences have overhead, because you have to perform length checks before every operation. But I'm not too worried about this constant-factor overhead. I'm not competing with C. Also, many of those checks should be able to be removed with the help of some static analysis. I want to make this a language implementation problem, not a user's problem (and my users aren't compute-bound).

2020-06-01 03:41:50 Edward de Jong:

In Beads I have a variable length bit string type (bits), and a variable length byte type (bytes), which are very useful for packing binary things, or for doing byte manipulation, both common low-level operations. Some CPUs such as the Motorola 68000 had variable length bit and byte strings. Intel has kinda sorta byte manipulation with the REP MOVSB instruction, but the Motorola had extremely handy arbitrary bit string stuff.

As for arithmetic, IEEE floating point is downright stupid and causes all sorts of problems. Some propose using DEC64, a superior methodology, but some languages support BCD with a specified number of decimal digits, which can be helpful in financial applications.

No question that thinking about 32 vs 64 is mostly a waste of time unless you have an ungodly amount of data to process, which is why AS3, JS, and many other languages have a single numeric type based on 64 bit floating point.

2020-06-01 07:21:07 Nick Smith:

Yes I'm planning to see how I go banning IEEE floats and instead exposing an opaque rational number type in my environment. As far as bit manipulation hardware goes, Intel's parallel bit deposit and extract for x86 seems really cool, but unfortunately isn't efficient on an AMD Zen, since they've inexplicably implemented it in microcode rather than as a native capability. I'm also saddened by the absence of a bit reversal instruction in x86... it seems to exist on every other major hardware platform!

2020-06-01 17:11:36 Robert Butler:

A few thoughts from my instruction set design.

1. My uCISC instructions have an increment flag in them (see https://github.com/grokthis/ucisc/blob/master/docs/07_ALU.md#arguments). This allows you to chain arbitrarily long ALU functions back to back in increments of 16 bits, since that is my word size. This works for addition, shifts and similar operations, for example. The increment points to the next address and the carry flags tell the ALU how to adjust the next op (see the sketch below).
2. You can generalize this by adding a repeater flag to repeat the same operation N times (see the repetition factor here: https://github.com/grokthis/ucisc/blob/master/docs/05_Instruction_Behaviors.md#flags-register).
3. I banned floating point math from my ALU. You can always attach custom hardware to speed up these cases if needed. The problem is that these operations tend to be highly bit-width dependent and also orders of magnitude slower in software.
4. However, repetition does NOT work for ALU operations where the first bit and the last bit affect each other. For example, in multiplication, each bit is effectively multiplied against every other bit. So, for arbitrarily sized numbers you'll need to make multiple passes. Using something like the Karatsuba algorithm (https://en.wikipedia.org/wiki/Karatsuba_algorithm) you could arbitrarily decompose larger operations into multiple sub-operations, but it scales non-linearly. I haven't done the math, but the computation gets out of control very quickly.
5. Verilog has ALU operations built in, with semantics controlling bit width and signedness and their effects on the operation. I have found I need to be ultra careful with the results. Adding two 8-bit numbers results in a 9th bit. Multiplying two n-bit numbers results in a 2n-bit number. You also have to be careful with what happens when you operate on unequal bit lengths. Verilog's handling of these things might provide some inspiration on how to do what you want.
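A rough illustration of the carry chaining in point 1, in Python rather than uCISC: add two arbitrarily long operands 16 bits at a time, feeding each word's carry into the next. The word size and helper names are illustrative:

```
# Illustrative only: chain 16-bit adds over arbitrarily long operands,
# propagating the carry from one word to the next (mirroring point 1 above).
WORD = 16
MASK = (1 << WORD) - 1

def add_chained(a_words, b_words):
    """a_words / b_words: little-endian lists of 16-bit words, equal length."""
    result, carry = [], 0
    for a, b in zip(a_words, b_words):
        total = a + b + carry
        result.append(total & MASK)   # the word the ALU writes back
        carry = total >> WORD         # the carry flag consumed by the next op
    if carry:
        result.append(carry)          # the chain grew by one word
    return result

# 0xFFFF + 0x0001 -> [0x0000, 0x0001]: the carry rippled into a new word.
print(add_chained([0xFFFF], [0x0001]))
```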

2020-06-01 19:53:31 Doug Moen:

LLVM supports this: http://blog.llvm.org/2020/04/the-new-clang-extint-feature-provides.html

2020-06-01 19:54:50 Doug Moen:

There's a proposal to put this into the C language: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2472.pdf

2020-06-02 07:09:42 Nick Smith:

Robert Butler Seems like a cool project, but beyond the chaining you still seem like you're sticking to the classical ALU operations. I'm interested in seeing a re-think of bit manipulation, and for that reason I'm planning on having more powerful primitives than just adds and shifts (but which ultimately can be compiled back to them).

2020-06-02 07:11:00 Nick Smith:

Doug Moen Yeah LLVM's support for this is really cool, but having integer size be statically-specified seems like it's more limited than having all bit strings be dynamically-sized. In other words, LLVM doesn't seem to have BigInts built in.

2020-06-02 07:14:15 Nick Smith:

Once you've decided to ban overflow, size annotations should be a performance optimisation (computed through static analysis), not a requirement for compilation.

2020-06-02 07:56:45 Robert Butler:

I'm curious, why is banning overflow a useful abstraction? At the end of the day, it has to run on hardware and making use of arbitrary bit widths on each operation will work often (with a performance overhead) and sometimes crater the performance. Unless you limit yourself to ALU operations that scale O(n) where n is the bit width, you'll end up with a lot of edge cases.

2020-06-02 07:59:10 Robert Butler:

There are plenty of languages (mostly scripting languages) in which you are safe to ignore bit widths in most cases, since 64 bits and double-precision numbers can hold a lot of range. Most programmers don't think about those things anyway.

2020-06-02 08:02:39 Robert Butler:

If you do start to break out of 64 bits, you very much need to think about the hardware, because performance is going to kill the program pretty quickly on any computation that isn't O(n) with respect to bit width. I guess my question is: what target audience is this for?

2020-06-03 13:02:28 Nick Smith:

Integer wrapping is almost always a bug, and it’s one that can occur years or decades after a program is written. On the other hand, reduced performance is at best a mild nuisance and at worst a bug (if drastic). But (drastically) reduced performance will only occur in situations where wrapping would have occurred anyway. So the question really comes down to: when overflow DOES occur (if it ever does), do you want to produce a correct result (a bigint) or crash/produce garbage numbers? That’s the choice I’m making. And I’m sure that 99.9% of integer overflows in most programs would not cause any noticeable performance drop, and thus are an “obviously correct” choice rather than a tradeoff.
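A small sketch of the semantics being argued for, contrasting C-style 64-bit wrapping with promotion to the correct (arbitrary-precision) result; the function names and 64-bit bound are illustrative:

```
# Illustrative only: contrast fixed-width wrapping with promote-on-overflow.
BITS = 64

def wrapping_add(a, b):
    """What a fixed-width ALU does: keep the low 64 bits (two's complement)."""
    total = (a + b) & ((1 << BITS) - 1)
    return total - (1 << BITS) if total >= (1 << (BITS - 1)) else total

def promoting_add(a, b):
    """What is proposed instead: always return the mathematically correct sum."""
    return a + b  # Python ints are already arbitrary precision

big = 2**63 - 1
print(wrapping_add(big, 1))    # -9223372036854775808  (wrapped: the classic bug)
print(promoting_add(big, 1))   #  9223372036854775808  (promoted: correct)
```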

2020-06-03 13:03:37 Nick Smith:

I did mention in my original post that the target audience is not people expecting uncompromised C-like performance. My primary goal is to extend the accessibility of developing nontrivial apps to those who cannot create them today.

2020-06-03 16:09:18 Doug Moen:

Nick, you started talking about an ALU abstraction (bit strings with low level ALU-like operations), but then you say that integer wrapping is almost always a bug, so now the abstraction we are talking about is integers, not bit strings. From my perspective, the confusion between bit strings and integers began in C (where it was appropriate) but has infected many modern high level languages (where it is not appropriate). The two ideas should be kept separate, because the operations are distinct and the concepts are distinct.

In my language (Curv), I started with a "number" abstraction, which only supports numeric operations, not bit string operations. Later, I found a need for bit strings (to write hash functions), so I added a separate bit string abstraction, which is actually an array of booleans. So bitwise and, or, xor, and not are just those boolean operations extended to work on boolean arrays. For hash functions, I also needed operations to convert between numbers and bit strings, and I needed a bitwise version of integer addition that wraps around (ie, it has ALU semantics). Unlike C, Python or Javascript, my "bit add" operation, which only works on bit strings, is distinct from the "+" operator that only works on numbers, and they have different semantics.
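A minimal sketch of that separation in Python (not actual Curv code): bit strings as boolean arrays with their own operations, including a wrap-around "bit add" that is distinct from numeric "+". The function names and MSB-first convention are illustrative:

```
# Illustrative only: bit strings as equal-length boolean lists (MSB first),
# kept separate from the "number" type, as in the Curv design described above.
def bit_xor(a, b):
    return [x != y for x, y in zip(a, b)]

def bit_and(a, b):
    return [x and y for x, y in zip(a, b)]

def bit_add(a, b):
    """Wrap-around (ALU-style) addition of two equal-length bit strings."""
    out, carry = [], False
    for x, y in zip(reversed(a), reversed(b)):
        out.append(x ^ y ^ carry)
        carry = (x and y) or (carry and (x ^ y))
    return list(reversed(out))        # the final carry out is dropped: it wraps

def to_bits(n, width):                # conversions between numbers and bit strings
    return [bool((n >> i) & 1) for i in reversed(range(width))]

def to_num(bits):
    return sum(b << i for i, b in enumerate(reversed(bits)))

print(to_num(bit_add(to_bits(200, 8), to_bits(100, 8))))   # 44, i.e. (200 + 100) mod 256
```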

2020-06-04 01:22:23 Robert Butler:

Nick Smith that's a really interesting perspective and I can see where you are going. I see the same base information and come to different conclusions, at least in part because all of the situations where I have had to deal with overflows that I can think of are precisely the ones where performance matters. I write a lot of Ruby in my day job, and every time I've had to think about precision or int vs. bigint, performance has been one of the walls I was up against. That said, that's just my experience and I'm curious to see where it goes and see if it opens up new ways of approaching these issues that change the situation.

2020-06-04 01:39:47 Nick Smith:

Doug Moen You’re right, I’ve mixed discussion of bit strings with that of integers, and I agree that bit strings should be considered something beyond integers. That said, I don’t think they should be separate concepts, rather I think integers should be considered one use case of bit strings. Addition is just xor with carry, and can be useful as a bit string operation, amongst general sequence operations like drop, insert, replace etc.

2020-06-04 03:11:48 Nick Smith:

The benefit of keeping the concepts unified is that there is less complexity in the ALU model, and you can mix integers into bit string encodings. However I expect a programming environment to default to a separate (opaque) abstraction for integers that hides their bit string representation and bitwise operations, exposing just arithmetic. The “raw” bit string representation of integers can be reserved for those wanting to do low-level coding with an ALU.

2020-05-05 17:49:04 Unknown User:

MSG NOT FOUND

2020-06-01 10:18:33 S.M Mukarram Nainar:

Thinking about it, aren't coloured spaces basically what you get with org-mode outlines? I mean, they aren't literally spaces, but presentation-wise.

2020-06-01 15:02:50 Maeliza:

Our Future of Coding project is taking shape 🤩 We are building a Machine Learning Model that can learn how to code

🏆 With Shubhadeep Roychowdhury we are excited to release codeBERT. It's a Masked Language Model trained over Python source code thanks to Hugging Face.

🕹 You can easily load the model and its weights (code below):
from transformers import *
tokenizer = AutoTokenizer.from_pretrained("codistai/codeBERT-small-v2")
model = AutoModelWithLMHead.from_pretrained("codistai/codeBERT-small-v2")
📈 Examples of the results are in the thread below
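(A minimal usage sketch, assuming the checkpoint loads as a standard Hugging Face fill-mask pipeline; the prompt and output handling below are illustrative, not from the original post:)

```
# Hypothetical usage sketch: load the released checkpoint as a fill-mask
# pipeline and ask it to complete a masked token in a line of Python.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="codistai/codeBERT-small-v2",
    tokenizer="codistai/codeBERT-small-v2",
)

# Use whatever mask token the tokenizer defines rather than hard-coding one.
mask = fill_mask.tokenizer.mask_token
for prediction in fill_mask(f"def add(a, b): return a {mask} b"):
    print(prediction["sequence"], prediction["score"])
```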

📚 Full tutorial on how to load and fine-tune the model for downstream tasks is coming!

2020-06-01 15:05:47 Maeliza:

codeBERT results over Python code

2020-06-01 19:59:49 Steve Dekorte:

If you feel (as I’m guessing most of us here do) that software development hasn’t progressed very much in the last several decades, can you share any thoughts/theories you might have on why that has happened?

2020-06-01 20:13:39 Ivan Reese:

In my personal experience, Macromedia Flash was a remarkable tool that took a lot of brilliance with it when it died. It was a good enough programming environment (arguably on par with modern JS, or Lua), but that programming environment was embedded within and subservient to a powerful graphics program. People could (and would) use it just to draw pictures or make animations, never touching the programming features.

So the absence of a modern equivalent is, I think, one way in which we've actually lost progress. We have Unity and Unreal, but while they do have basic 3d modelling features, they are by no means 3d art tools themselves — you still need to bring your own Blender, so to speak.

This is why I see potential for visual programming — it's not just a way to make the programming experience more live, it's actually a way to make the programming experience embedded within and subservient to the arts. Like Flash, but more so.

I think this sort of "embed the programming within the domain" is a useful, subtle inversion of the idea of a DSL. Spreadsheets are this. So I think some of the stagnation has been the relative lack of new initiatives of this type.

2020-06-01 20:16:11 Tim Babb:

Short answer: We're at a local maximum. There is a distinct valley of utility between where we are (text programming) and where we could be (fully interactive computing systems) which is deep and difficult to cross, because there isn't really a good incremental way to turn text programming into a fully interactive visual language. Any in-between would be too constrained and clumsy for a fluent programmer working with mature text-based systems, and too difficult to use for a non-programmer who needs programming.

There's not a lot of need or market in that valley between the peaks. The mountain has to be climbed from the bottom, and it's counterintuitive to make a "programming language" that "isn't for programmers" (at first).

2020-06-01 20:19:56 Tim Babb:

I could see an alternate universe where Engelbart's demo happened a handful of years later (when it would have been more possible for others to emulate), when many of its ideas could take off immediately, and programming evolved directly from SketchPad, completely leaving text programming in the dust of the 1970s

2020-06-01 20:22:32 Tim Babb:

There could have been a world where we chose the trail to the Interactive Programming peak from the very beginning.

2020-06-01 23:10:05 Christopher Galtenberg:

imo, there's a "medium is the message" / "you have to become the medium to process information" aspect at play - computing has offered us two primary modes to work in, text/CLI and windows, and that's dictated the terms - things haven't really moved on, because we don't experience other frames, truly other platforms.

The usual forms of programming in both modes are obvious, but even the "novel" modes that we've seen — browsers, notebooks — are really just conversations in the same terms.

Even the above is too pedestrian to say in a forum of this quality. But it still feels like it's the case. We aren't informed by other modes yet, such as audio/oral programming (eg, your computer is your AirPods; modify Alexa in a meaningful way with your voice), or true spatial/temporal (as AR will offer with pseudo-holography), beyond pilot & novelty projects. And even once we have those, will we still think primarily in terms of desktop and mobile for "reach", and hence be back / constrained in the 2D plane?

It's entirely possible that 200 years from now programmers, artists, and tool-builders may still think "yeah, a single window is really all I need". Captain Picard's crew didn't open a holodeck to get shit done, they remained at their stations on the bridge. (OK, they did open a holodeck a couple of times - also they went back in time.)

We may need a whole new mode beyond "information" to really feel out an alternative paradigm. Information itself may not need much more than what we have.

2020-06-01 23:16:51 Christopher Galtenberg:

On another line of argument, it's always worth re-reading https://www.dreamsongs.com/RiseOfWorseIsBetter.html

2020-06-02 00:12:07 Tom MacWright:

I think programming has gotten a lot better in the last few decades: there are tons and tons more people who can do it, the web has been the most amazing distribution system so far, and tooling/languages have gotten good enough that people can operate with a minimum of low-level knowledge or engineering background.

I think programming has gotten slightly worse in the last 10-15 years because languages and techniques have been dominated by what FAANG companies have created, and those approaches have scaled down catastrophically poorly.

2020-06-02 01:45:45 Konrad Hinsen:

Ivan Reese That’s also the argument I made about Emacs in https://malleable.systems/blog/2020/04/01/the-most-successful-malleable-system-in-history/. I really like your framing of it being the dual of DSLs. Maybe the fact that CS started out from considerations about universal properties of symbolic manipulation has been haunting us all the time. “General purpose” programming is not something many people see a need for.

2020-06-02 05:47:54 William Taysom:

Yes friends, I'm going to go with better and worse. To narrow focus, consider the progression from basically no source control to CVS to SVN to several distributed systems to Git.

2020-06-02 05:53:38 Konrad Hinsen:

In view of Tom MacWright’s and William Taysom’s comments, maybe we should first define what we consider “better”.

2020-06-02 06:40:09 William Taysom:

Good point Konrad Hinsen. One pattern we tend to see is that as the rough edges in a system get ground down toward being easier to use, more "features" are added or at least more complex tasks are attempted.

2020-06-02 06:52:09 Jamie Brandon:

To add to Tom MacWright’s comment, my impression is that programming got much better over the last few decades for large industrial users, at the direct expense of more convivial uses. More layers of abstraction to understand, more constant overheads for starting a project, more moving parts to maintain. But dramatically better distribution, tooling and scaling.

In many ways, malleability is the opposite of what a big company wants. When you have ten thousand engineers you're more concerned with trying to pin the damn thing down.

2020-06-02 07:10:39 Jamie Brandon:

On a totally different tangent, I think the latency of a lot of tools has gotten steadily worse over the last few decades. Too many layers?

2020-06-02 07:34:06 Ivan Reese:

Yes, latency is to the point that I'm infuriated by it daily. macos 10.15 feels like Mac OS 8 to me (and that was one of the bad ones). Though I think that's not a fault of programming tools directly, but rather how we've decided to employ them not being a good fit with their strengths. Eg: improving battery life was a top priority at Apple there for a few years, which meant things like timer coalescing, which meant making everything asynchronous, which meant new pauses and hiccups, new race conditions, new latency, new places to inject network calls that delay UI handling, new failure modes. None of that was caused by programming tools, but programming tools haven't helped us adequately address it.

2020-06-02 16:00:40 Konrad Hinsen:

Ivan Reese Check out https://sigpipe.macromates.com/2020/macos-catalina-slow-by-design/ for the background on macOS' slowing down. I am sticking to 10.14 as long as I can.

2020-06-02 16:05:27 Tom MacWright:

Konrad Hinsen Absolutely: "better" is a hidden value judgment. I'd say that my definition of better is "the total combined increase of people's abilities unlocked by programming" or some sort of "# programmers X leverage that programming gives them"

2020-06-01 20:07:18 Ivan Reese:

Because I think it's going to devolve… if you instead want to share thoughts about how software dev has made good progress in the past few decades, join my thread.

2020-06-01 20:07:41 Ivan Reese:

🦗 🦗 🦗

2020-06-01 20:26:20 Tim Babb:

You gotta give it more than 15 minutes, Ivan Reese 😄

2020-06-01 20:26:48 Tim Babb:

I think notebooks, the in-browser dev tools, and compiler-connected text editors are notable leaps in interactivity

2020-06-01 20:27:44 Tim Babb:

Also, the ability to imbue passive real-world objects with computational "magic" at Dynamicland fairly well blew my mind. There's definitely something revolutionary there.

2020-06-01 20:29:09 Ivan Reese:

There are a number of tools we've built to do jobs that used to require bespoke from-scratch programming. I'm thinking of things like Rails-style web frameworks, Unity-style game engines, Maya-style 3d software. These are all amazing to me. They show that programming can be a good way to bootstrap from "requires bespoke code per project" to "can use a substrate that covers up to the 99th percentile before bespoke solutions make more sense". Having those things now might make programming progress look slower. In a number of important areas — and, perhaps, all the low-hanging fruit areas — we've already used programming to obviate the need for programming.

2020-06-01 20:32:57 Tim Babb:

Obviously I'm very bullish on node graphs 😀— the best examples of which we owe to VFX and audio "DSLs"

2020-06-01 23:10:00 Chet Corcos:

The proliferation of JavaScript has made it vastly easier to build software interfaces! 🙂

2020-06-01 23:10:13 Chet Corcos:

On all platforms too!

2020-06-01 23:17:41 Ivan Reese:

Even spaceships!

2020-06-02 07:05:22 Jamie Brandon:

The recent renaissance in systems languages is a relief. I expect we'll see future software taking much better advantage of the hardware available.

Wasm is the first cross-platform standard that doesn't require managed memory, which gives it a huge range and allows much more interoperation between very different languages. (The JVM and JavaScript both made it very hard to efficiently implement languages that don't look a lot like Java/JavaScript.) It's also much, much easier to implement well than any of its predecessors, which should encourage more competition and experimentation.

Not to mention the web itself, which only really became standardized in the last two decades.

There have been amazing algorithmic improvements in constraint solving and query languages. In the former case, they've outpaced the improvements in hardware ie running an old constraint solver on modern hardware would be slower than running a modern constraint solver on old hardware.

Nix/guix are a new idea that simplifies the process of building and combining software packages.

Sandboxing and capabilities have come a long way. Modern phones and browsers primarily run untrusted code, which would have been suicidally stupid on the operating systems of the 90s.

2020-06-02 07:07:23 Jamie Brandon:

The availability of high-quality open source libraries with easy installation is much higher than it used to be.

2020-06-02 07:09:13 Jamie Brandon:

I propose a programming competition where people who believe programming hasn't improved much in the last few decades get to use only tools that shipped before 1990 and everyone else gets to use modern tools.

2020-06-02 07:40:54 Ivan Reese:

Wonderful thoughts, Jamie. To your last point, I don't think anyone is saying programming hasn't progressed at all. But that the pace of improvement has slowed, or that issues we might have solved well long ago are still rampant.

That aside, I think your challenge would actually be fun. Of course, the judgment criteria would have to somehow account for the hardware difference — it's harder to make something that'd compete with a modern GUI when you only have 512 black and white pixels.

2020-06-02 10:44:40 Maikel van de Lisdonk:

Before 1990 you could do nice GUI and beautiful graphics programming in color if you wanted: https://en.m.wikipedia.org/wiki/Amiga Surely it would be a nice challenge 😎

2020-06-02 10:51:17 Maikel van de Lisdonk:

But what I notice nowadays is that you still need a lot of code to achieve things. Yes, you can use npm or NuGet packages, if you want to depend on them. I currently write websites using the Kentico CMS with MVC... it surprises me how much code we need to add custom logic, views, proper forms, custom styling, etc. Don't get me wrong, I like writing code, but I would expect a lot of these problems to be solved already by the CMS itself in our case

2020-06-02 10:59:39 Maikel van de Lisdonk:

And then there are tools like Mendix, the so-called low-code platforms for building (web) apps in business domains. I don't use them myself but they seem rather successful; the demand for those tools is increasing, I think.

2020-06-02 18:16:37 Garth Goldwater:

  1. for me at least, the absolute best thing that’s happened is the rise and improvement in the built-in javascript console on modern web browsers
  2. graph databases are taking off, which i think is a really good thing
  3. i really appreciate the rise in declarative data languages: graphql, datalog, object structuring and destructuring, pattern matching
  4. although all these no/low/oops code projects are mostly crap, at least there’s SOME interest (and a lot of money) in making programming more accessible
  5. inventing on principle really did light a fire for more interactive programming and debugging models even if they didn’t go far enough
  6. minecraft. that’s it. that’s the whole bullet point

2020-06-02 18:24:10 Tim Babb:

I forgot to mention Python. It's just a fantastically designed text language, with a healthy, vibrant, ecosystem, and generally good/easy-to-use standards and practices. If I had to go back to 1990 and use Perl... shudder.

2020-06-01 23:07:40 Vitorio Miliano:

Anyone familiar with the work Synthetic Minds is doing? They've got an HN jobs post:

> Synthetic Minds is building program synthesizers, i.e., automation that can write code. We have a working prototype in stealth and are currently in the process of doing user studies.

Their site and Twitter look like it's about automated generation of smart contracts, and they held a summit on program synthesis last fall? https://synthetic-minds.com/

2020-06-01 23:08:23 Chet Corcos:

Looks interesting

2020-06-01 23:08:58 Jean-Louis Villecroze:

Cool name. Not sure I buy the automated part of writing software ...

2020-06-02 00:38:58 Temirlan Nugmanov:

So I have my consulting website on Notion made public with a tool called super.so… today, I got an email from a genuinely helpful person who alerted me that I had my projects/todos data available publicly. I told him it was on purpose for transparency's sake (idk if it's helpful for biz but that's a separate conversation). He sends:

```"As a DBA (database architect) and developer for 23+ years, managing million dollar+ projects, I wouldn't use Notion in this public way. It's possible to run into sensitive info issues down the road when you forget to lock things down. You can't scale and maintain security because Notion simply doesn't have the features to provide scalable security for public facing Pages. I would instead create a fake project that you can demo and make public. And since the personal level is now free with unlimited blocks, you could create a separate "Demo" workspace without fear of ever showing sensitive data or running into block limits.

Best of luck... I'm seeing an increasing problem in the Notion community with non-programmers and those with little or no real-world Project Management or database experience setting up shop as Notion Experts. We were building database-backed websites in 1996-7 and by the early 2000's most of our jobs for large companies and State agencies and governments consisted of cleaning up the messes the early "web designers" created ... I fear we're seeing a similar situation with Notion and in 1-3 years there are going to be clients who have unusable systems that will have to be rebuilt in order to scale and handle new functionality..."``` I feel like this could be a good discussion. What are your thoughts?

2020-06-02 06:36:02 William Taysom:

Two topics are going on here...

(1) If you are well aware that everything is public, that may just be the way you roll.

(2) Are we just talking about the age old problem that the inexperienced and incompetent can sometimes hack things together? The question for Notion as a particular tool is whether it incentivizes bad habits.

2020-06-02 07:01:50 Michael Dubakov:

One problem I see here is the good/bad choice of abstractions. Notion is not a pure relational database, so you can neither design the data layer in a familiar way nor tune it for performance. It is easy to start and build something, but when complexity grows it will become harder and harder to maintain. This is true for any serious system, but if you don't pick good abstractions from the start, it becomes impossible.

2020-06-02 14:58:25 Ivan Reese:

If these notion-wielding people are complete novices, and they're building systems that let a business run more effectively at their current scale, I think this is a phenomenal success. The "notion experts" have found a path that will eventually lead them to learn more programming. The businesses running on these notion setups have a working solution that likely didn't cost them nearly as much as hiring a traditional developer. And, best of all, another stuffy traditionalist developer who is obsessed with "scaling" and their own credentials has been frustrated, and some egg has landed on their face.

2020-06-02 19:54:50 Tim Babb:

Ivan Reese, this reminds me of a debate I had with someone at Pixar.

Internally, engineering is largely divided between "production" and "tools". The former are artists and technical people who are actually building the movie itself, and the latter are software engineers who are building and maintaining the high-powered software it takes to make the movie.

Production often cobbles things together with hastily-coded scripts or node networks, and it's often fragile, ugly, duplicated— most complaints you could level against "bad engineering". Generally it's done under a deadline, though, in service of getting the best possible picture on the screen, and knowing the audience won't care about how the sausage is made.

This other person was griping about a script they had to maintain (and— probably with cause— had several of those complaints), and was arguing that production TDs shouldn't be allowed to write code if they couldn't engineer it properly, and that this stuff should be given to the tools department.

But the thing is, the tools engineers (and there are ~100 of them) all have their hands completely full on large-scale, long-term projects that are necessary to hit the highest level technical goals of a film, and beyond that, to keep the entire studio on the technological cutting edge. New projects are careful negotiations between all the films' needs and the tools department's limited resources. Bidding these projects takes time, and often they wait in line for several weeks (or months) behind critical bugs and features.

If tools engineers had to do all "production" engineering, that work would simply never get done— or the studio would need an engineering department three times as big, and for the exact same result on the screen.

Bashed-together scripts allow technical artists to solve a problem quickly and get back to making the movie. If they had to bid their project to an engineering team, prioritize it, get leadership support, and so on... they'd be waiting for weeks. They'd probably just tediously solve their problem by hand instead, and spend less time making the movie look good.

The dichotomy frequently isn't "engineering things cleanly vs. cobbling things messily", it's "cobbling things messily vs. not doing them at all".

If put into practice, that abstract complaint about the tidiness of how something is done— or gatekeeping about who gets to do it— would result in a worse movie.

I don't have deep experience with Notion, but in general I think this principle extends to the outside world— there are lots of people who are spending hours doing things that should be automated, but there are way more one-off problems than there are engineers to architect tight solutions for each of them. "Messy and done" is better than "not done" or "done tediously, at the expense of solving more central problems".

2020-06-02 21:31:24 Ivan Reese:

Your Pixar anecdote would map perfectly to many (most?) video game teams, too.

2020-06-03 06:33:27 yoshiki:

Reminds me of the discussion in this great thread: https://twitter.com/geoffreylitt/status/1177607448682582016

2020-06-06 17:32:46 Andrew:

My naive impression is that any tool that gives someone leverage is a good tool.

If you did theoretically have to migrate from Notion to a more performant RDBMS, then you ran out of rope. But that’s fine, it’s what engineers are for.

The question is just “does migrating notion data cost more than the total leverage you get by using notion”.

Since the person using notion is probably not technical, I imagine they get a ton of leverage by having an easy to use DB system that “just works”.

It looks pretty, it’s easy to edit, easy to add metadata, easy to change schema, and best of all integrated directly with your entire wiki, and all the institutional knowledge therein.

2020-06-02 14:28:15 Christopher Galtenberg:

Minimum viable browser quine https://twitter.com/Jermolene/status/1267801378430373890

2020-06-02 17:34:08 Chris Maughan:

This looks interesting, and may make me look at TiddlyWiki again. I keep my notes in a vimwiki, but it always annoys me that they are not as easily published, or viewable on my phone.

2020-06-02 18:21:01 Jamie Brandon:

I'm going to branch this off from the discussion on programming not progressing.

> ...my impression is that programming got much better over the last few decades for large industrial users, at the direct expense of more convivial uses. More layers of abstraction to understand, more constant overheads for starting a project, more moving parts to maintain. But dramatically better distribution, tooling and scaling.

It's notable though that the arc in most other technologies has been towards increasing scale and centralization at the cost of individual capacity. A few hundred years ago a single village could probably build their own carts. Now we're down to a dozen or so car companies per country. Most countries don't have the ability to manufacture their own CPUs.

I doubt anything in my house was created within 100 miles of here. There is certainly very little that I could make myself. In "The Toaster Project" the author couldn't even find a modern text on iron production that didn't assume you had an entire factory - he had to work off a 16th-century textbook instead. And when it came to more modern materials, he completely failed to produce plastic.

I can't think of many exceptions to this arc. Writing comes to mind because we use it as a comparison to programming so often - used to be a centralized profession and now is practiced everywhere.

I'm not even sure to what extent this is a problem. Specialization and economies of scale have been some of our main tools for progress. Maybe the future where programming only happens in thousand-person teams is the one where software works better?

Either way, I think it might be important for this community to understand why most technologies progress in that direction and what makes the few exceptions different.

One plausible answer is that, like writing, programming has some anti-economies of scale too. If we think of the trend in industrial software as trying to constrain software so that complexity scales linearly with size, then maybe we can find points of leverage in tools that abandon those constraints in exchange for more power in the small-software-niche.

2020-06-02 19:04:13 Tim Babb:

I could see a future where programming is more centralized, and it's mostly a good thing for many of the reasons you describe. Much more is possible when you can harmonize the work of thousands of people.

For example, your programming editor might be a thin client over a mountain of cloud infrastructure. Very little software would run "locally", but that's ok, because software shouldn't care what device it's running on; your various devices should all be portals into a single, coherent world where all your data and your tools are accessible anywhere, and computational resources appear bottomless.

In that world, when you open an app or press "compile", a small slice of a datacenter springs to life for you, and it takes millions of lines of code and thousands of engineers to maintain all that. But that's invisible to you, because you shouldn't have to worry about it, the same way you don't have to know how a fuel injector works (let alone build one) in order to drive your car.

Modern web tech might be analogous to the complexity of a car, but using it is more like getting a chassis, engine, drivetrain, seats, etc. from various factories and assembling them yourself, and less like buying a complete working car and simply driving it where you want to go. As the "car" gets more complex and advanced, it's increasingly inaccessible for an individual to wrangle, but the assembled product can go further than something you could build from scratch in your garage.

If we're operating under the assumption that user has to do this assembly, you'll tend to want the simplest possible "car" you can design. But perhaps there's a second local maximum, where a "car" is a polished, well-encapsulated product (with extreme internal complexity and power) that's easy for anyone to pick up and use.

So we have advanced "cars"— complex, economy-of-scale-optimized engineering marvels— but individuals can't really buy and drive them. But we could get there. (And none of this should be to the exclusion of building go-karts in your garage if you want to).

2020-06-02 19:09:50 Jamie Brandon:

Would you say something like airtable or notion is an example?

2020-06-02 19:14:57 Tim Babb:

I think they're doing good work in that direction, though more for "databases" than "programming". (My aim is to make Lynx fill the programming hole).

2020-06-03 08:19:04 Konrad Hinsen:

I agree that for most people, technology that "just works" is what they want. As Tim Babb said, that shouldn't exclude the possibility of people building their own stuff. I'd take that one step further: it shouldn't exclude the possibility of people modifying industrial products they use, nor the possibility of delegating such modifications to a competent expert of their choice. The "enemy" is not industrial product, but lock-in and dependencies on a single supplier.

With cars, this enemy has just been arriving over the last decade, in the form of computerization. Every modern car has parts that even an independent professional mechanic can no longer fix. You need special equipment available only under strict licensing conditions.

In computing, that's already the norm rather than the exception. It's dependencies everywhere. Whatever technical progress there may be in Airtable or Notion, the real problem with them is that they are silos that make you dependent on the companies running them. We have almost completely lost interoperability except for the lowest common denominator, which is text files.

2020-06-03 23:21:57 Timwithlip:

For our company, we are wrestling through many of these same challenges, but from the logistics end of the user's home as it relates to their carbon footprint. I find it immensely helpful to ground in a simple understanding of what it means to be human, and build everything up from there. So while it might be possible to build vast code integrating the fridge to scan the barcode on a jug of milk and alert the user on their phone when it is close to expiry, it will still be more socially desirable in some contexts to simply ask a neighbour for a cup of milk.

2020-06-04 00:42:41 Robert Butler:

Concerning: "For example, your programming editor might be a thin client over a mountain of cloud infrastructure. Very little software would run "locally", but that's ok, because software shouldn't care what device it's running on"

You are correct that such a world achieves economies of scale, but only along a certain dimension. Consider any web application that can also be a local application. The local application requires less:
• overall compute (an app in the browser typically takes vastly more resources on a local machine than a native app with the same functionality, and then there is the added compute in the cloud)
• network infrastructure
• code complexity (network latency, error states, etc.)

2020-06-04 00:48:03 Robert Butler:

You also give up software ownership and the benefits and disadvantages of that. For example, there is an entire generation of cloud based games that will simply vanish when the companies shut down the servers. Retro gaming communities can emulate/reconstruct game consoles, CPU hardware and peripherals needed to keep games alive. You can't do that with software that is destroyed when the company loses interest and that you never actually had a copy of. Same thing applies to other applications.

I'm not saying it's inherently "bad" to run things in the cloud, but I think it's really important to understand what you are optimizing for.

2020-06-04 00:54:56 Robert Butler:

I think we also keep building layers upon layers of abstraction to achieve a certain type of scale. Any such shared resource is optimized for the most widely recognized use cases. That is, we lose out on the economies of scale that would let us provide more customized software (not to be confused with personalized) for each community's needs.

Bottom line, yes, the trend to big cloud and centralized compute is strong and clear. I think there is a place in the future, however, for a more maker type software environment. Something akin to what 3D printing is doing, bringing manufacturing back out of the cloud (factories) and onto the desk (personal computer). Yes it is more expensive from a certain perspective to 3D print something than to mass manufacture it, but you miss out on all sorts of other benefits.

2020-06-04 20:19:43 Tim Babb:

More or less agree. Generally I don't think centralization and customization are inherently at odds with each other. To echo Ivan and others, you could have a platform which is fairly opaque/blackbox, but where the data and software that lives within its ecosystem is totally fluid, customizable, and controlled by users.

A common platform might actually aid customization by offering easy and standard ways for software-pieces to talk to each other. Right now that's a pretty hard/ugly problem.

Or put another way, it's much easier to customize a piece of software that's made of blocks which you can take apart. But that first requires buying into a platform where everything is made of blocks.

2020-06-03 20:07:53 Paul Butler:

What are people's thoughts on the future of no-code platforms with regards to open source? These days as a developer I see that languages where the only compiler is proprietary are increasingly niche, while widely adopted languages have an open source stack. Will the same thing happen with no-code platforms, or are the dynamics different?

2020-06-03 20:25:45 Chris Knott:

I think the closer you are to the end user probably the more you can get away with that type of lock in

2020-06-03 20:29:11 Ivan Reese:

My hunch is that the model of Unity and Unreal — where the runtime is a closed source tool but the code and data you're working with are open and portable — strikes a better balance than what no-code is now, where everything, code and data, is locked up inside the silo.

2020-06-03 20:43:05 Paul Butler:

Interesting, Ivan Reese when you say code is portable do you envision competing runtimes that can run the same (no-code) code?

2020-06-03 20:44:28 Paul Butler:

I could definitely imagine a world where the IDEs are proprietary and the runtimes public, which is approximately where most (proprietary) code IDEs are now

2020-06-03 20:47:14 Andy F:

it definitely seems like a concern, most no-code products are SaaS products where the business model is to encourage lock-in.

2020-06-03 20:50:51 Jamie Brandon:

This is my main reason for not using tools like airtable and notion. I can imagine a sourcehut-like model where the code is open source but most people pay for hosting anyway because it's convenient and there is no lock in.

2020-06-03 21:31:42 Steve Peak:

Microsoft, Google, GitHub, Amazon, Salesforce, Heroku, Apple, …. See a trend? Open source has its place in any business, some more than others, but it has zero effect on your business, and customers of the product do not care, generally speaking. Don't get me wrong, I love open source and promote it; however, business-wise it possesses little to no value. IMO most, if not all, no-code and low-code will have little to no open source software.

2020-06-03 21:42:59 Steve Peak:

Andy F I’m sure those companies you speak of are not advertising or internally encouraging lock-in… they are more likely encouraging empowerment of their user base. The feature of adding portability to the source of truth comes at a cost. Many tools in no/low-code provide a level of abstraction that has significant depth. Exporting that depth would result in wildly unusable serializations of data which are useless outside the system. That is the point: the more you abstract away, the tighter the “lock-in” becomes, which, in all fairness, (I believe) 99% of people are more than comfortable with because they do not have the ability or desire to create things outside of that environment. It’s all about value at the end of the day.

2020-06-03 23:03:53 Paul Butler:

Steve Peak good point re. the list of companies. I guess it comes down to: will companies bringing on no-code platforms evaluate them like platforms (Salesforce/Heroku/AWS), where it doesn't matter whether they are open source, or like programming languages? I'd point out that even from that list of companies, TypeScript, Dart, Go, and Swift are all open sourced.

2020-06-03 23:05:09 Paul Butler:

Having been involved in those sorts of discussions before, I think that lock-in can be an important detail on platforms. And despite things like AWS and Heroku being proprietary, there are drop-in replacements for S3 and things like Dokku with Heroku compatibility

2020-06-03 23:46:04 Steve Peak:

Paul Butler however, the usage of Dokku and Heroku is embarrassingly insignificant when compared to AWS and GCP. I'm proud of Heroku nonetheless.

2020-06-03 23:55:06 Paul Butler:

sure, but I'd be surprised if I'm the only one who feels more comfortable building for Heroku knowing that Dokku exists as a fallback

2020-06-03 23:56:13 Paul Butler:

(I happen to run a service that I migrated from Amazon Lambda to Dokku because I designed it generically enough to work on both)

2020-06-03 23:56:21 Steve Peak:

The market is massive. Room for many. I would feel confident and proud to work on Heroku. I personally love the product

2020-06-04 00:01:34 Tom MacWright:

Open sourcing the application side of no-code solutions is likely all downside for most companies - it only makes ripoffs easier and makes customers constantly wonder whether they'd be better off DIY. Kinda more interested in whether any of the no-code solutions have export & import with specifications and file formats

2020-06-04 00:05:20 Steve Peak:

I don't think import/export like that would be possible... The abstraction and domain primitives would prohibit it. The amount of work to translate between products would become a product itself. Turtles all the way down.

2020-06-04 00:27:34 Paul Butler:

An interesting approach would be for a no-code tool to transpile the no-code into a programming language to run, and to allow the user to download that exported code. Similar to how some desktop UI builders (for C++, etc.) used to work.

This only makes sense if the transpiling step is a natural part of the no-code tool's design though; the further from code the tool is (e.g. Airtable) the less this way of thinking even makes sense.

2020-06-04 01:53:49 Tom MacWright:

I think interchange formats kind of have to be the place to start, otherwise you're just as locked into an open source no-code solution as you are into a proprietary one

2020-06-04 04:38:21 Zubair Quraishi:

Working at Red Hat I have some unique insights into open source and no-code tools. Embracing open source culture makes for a really good developer story and makes it much easier to attract good talent, as long as the reason you are doing open source is genuinely open source, of course, and not just saying you support open source to attract new talent, as many companies seem to do these days.

Open source is also a marketing tool now, for sure. However the term is so big now that it has actually become meaningless and ask 100 different people what open source is and you will get 100 different answers.

Anyway, on the customer side there will be many developers who say they prefer open source for lock-in reasons, even when open source locks them in even more. These developers are mostly advising the budget holders, who do not care one bit about open source, only about the benefits your software provides.

2020-06-04 04:40:09 Zubair Quraishi:

I think the best thing open source provides is that it makes it easier for customers to try your software, as they think open source is basically free, even if there is a restrictive license. They can say to their boss, "we can use it, it is open source". If the software ends up being used, management will buy a support license anyway.

2020-06-04 04:43:48 Zubair Quraishi:

I think that in the future, being fully open source will be pretty much the ONLY way to get your project deployed on premise in a large enterprise, IF the install is manual. The easier you make your product to install without needing developers, the less important open source becomes.

2020-06-04 04:45:15 Zubair Quraishi:

However, if your product only runs in the cloud or with local cloud providers and doesn't need to be installed manually by developers on an enterprise server or someone's PC, then open source does not matter one bit.

2020-06-04 04:49:09 Zubair Quraishi:

So for me, I think whether you believe in open source personally should determine your open source strategy as to whether to make your product open source or not.

2020-06-04 04:52:34 Zubair Quraishi:

But the issue of open source and no-code tools and whether it matters is more related to whether you think on-premise installations done by hand will win, or fully managed downloadable software on prem or in the cloud will win. Just look at the iPhone App Store as an example: hardly any of those apps are open source, yet enterprises spend huge amounts of money on them because the install is very smooth. If it took a developer to install an iPhone app, then you would see a lot more open source iPhone apps.

2020-06-04 05:01:55 Zubair Quraishi:

Overall though, I see that open source no-code tools only work if there is a nice polished installation and UI on top, like OutSystems and Mendix and Airtable and Notion and similar tools. Yes, if you look inside any proprietary no-code tool you will see that 99% of the code is open source anyway, but that 1% proprietary layer on top, which makes it usable by non-techies, is the secret sauce of those companies.

2020-06-04 13:56:53 Steve Peak:

Well stated Zubair 👏

2020-06-04 14:00:36 Steve Peak:

Paul Butler What about data? You cannot transpile no/low-code that includes a data store. What about asynchronous interactions? What about frontend domain requirements? The list goes on and on. You may have fallen into the idea trap of “everything eventually ends at C”, but that is not true. There are things that no/low-code does that simply cannot be captured in a transpile/export — I would argue they are the exact things that make no/low-code appealing. This statement is also true for languages like Eve, Dark, Scratch, and other structured editor tools. You cannot export the experience, which is the most important part.

2020-06-04 14:02:09 Steve Peak:

I’m working on a business, a no-code solution, and can confidently say that you cannot export/transpile the product in any way. It’s technically impossible, and if not impossible, more work than building our entire business.

2020-06-04 14:03:51 Steve Peak:

Last point: if any low/no-code product can be exported/transpiled then they did not take it far enough. The abstraction therefore would be incremental at best.

2020-06-04 14:10:23 Tyler Adams:

Non techie users are already locked in by their lack of technical ability, so vendor lock in isn't an additional cost. Techies are not and so vendor lock in is a much larger cost

2020-06-04 14:28:19 Paul Butler:

Steve Peak sure, I don't think there's a one-size-fits-all model here, but I think there is a space of no-code apps that could be transpiled to code. Excel is arguably the most successful low/no-code/end-user-programming tool of all time, and there are tools to export spreadsheets into code.

2020-06-04 14:29:54 Paul Butler:

(I am not saying that there is no place for fully proprietary no-code tools, in case I was unclear about that.)

2020-06-04 14:34:19 Steve Peak:

We are on the same page here 🙂 I’m not reading your comments as all-or-nothing; and I hope vice versa. I would still bark up the tree that the level of domain abstraction and exportability follow a common curve: there is a point where transpile/export is no longer possible (in full, not in part). I agree Excel is as you say it is, and indeed you can export the spreadsheet into code. This is made possible because the domain abstraction is not very high in the end. The line becomes really, really fuzzy when the domain requires multiple systems in seamless collaboration. Excel does not have this.

2020-06-05 09:26:54 Konrad Hinsen:

Here's a story of lock-in in Open Source: https://blog.khinsen.net/posts/2020/02/26/the-rise-of-community-owned-monopolies/ Open Source is a safeguard against lock-in only for those who can themselves assume maintenance of all the Open Source tools they depend on. Which is roughly Google and Microsoft, plus perhaps Apple. The real question is not Open Source or proprietary, but how much a user's application profile matters for the producer of the infrastructure, and of course the long-term survival chances of the producer.

2020-06-05 09:55:00 Chris Knott:

While I agree people underestimate the maintenance burden of OS, there is a meaningful distinction in the degree of lock in. It's not about migrating, it's about tying your viability to another organisation.

If everything is open then it is possible to keep running an old version of something, potentially on old hardware, almost indefinitely. There are countless enterprises out there stuck on Java 6.

Obviously this slow death is not desirable, but IMO it is a completely different risk to say, Dark getting bought out and shuttered.

2020-06-05 09:57:21 Chris Knott:

(Dark is perhaps a bad example because they actually are quite open about what would happen in such a scenario and how they would release everything)

2020-06-05 12:17:31 Konrad Hinsen:

Chris Knott I'd say that's more the difference between local vs. cloud, not open vs. proprietary. I know lots of people who still run Windows XP on old hardware, usually because they have drivers for exotic lab instruments for XP that were never ported to later versions.

2020-06-05 17:39:12 Jamie Brandon:

Or even just the difference between open, well understood data models and deliberate lock-in. I used Sublime Text for a while, even though it was proprietary, because I would still be able to use it if the company folded and I would still be able to read my files if I stopped using it. Plus the extension API etc. were well understood enough that someone would probably just clone it if it died. Similarly for using a proprietary service to host my email, because I can still back up my email with a standard protocol and easily migrate it somewhere else, or use third-party software to read it if I don't like the interface. It's still a cloud service but I don't feel locked in.

I guess I'm happy to use proprietary tools when they are to some extent fungible.
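
For concreteness, the "standard protocol" escape hatch for email looks roughly like this with Python's built-in imaplib; the host and credentials are placeholders:

```python
# Sketch of backing up mail from any IMAP-speaking host with the
# standard library only; server name and login are hypothetical.
import imaplib

with imaplib.IMAP4_SSL("imap.example.com") as mail:
    mail.login("me@example.com", "app-password")
    mail.select("INBOX", readonly=True)
    _, data = mail.search(None, "ALL")
    for num in data[0].split()[:10]:          # first ten messages
        _, msg_data = mail.fetch(num, "(RFC822)")
        raw = msg_data[0][1]                  # full RFC 822 bytes, ready to archive
        print(num, len(raw), "bytes")
```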

2020-06-05 17:40:16 Jamie Brandon:

I wonder if the airtable api is sufficiently powerful to allow writing a third party interface.
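
For what it's worth, the read side of that API is at least easy to script against. A hedged sketch follows; the base ID, table name, and key are placeholders, and the endpoint shape is Airtable's documented public REST API as I recall it:

```python
# Rough sketch of pulling records out of an Airtable base over its
# REST API; all identifiers below are hypothetical.
import requests

BASE_ID = "appXXXXXXXXXXXXXX"   # hypothetical
TABLE = "Projects"              # hypothetical
API_KEY = "keyXXXXXXXXXXXXXX"   # hypothetical

resp = requests.get(
    f"https://api.airtable.com/v0/{BASE_ID}/{TABLE}",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
for record in resp.json().get("records", []):
    print(record["id"], record["fields"])
```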

2020-06-05 18:03:02 Tom MacWright:

Yeah, the question of whether no-code tools have a clearly describable data model that can be disentangled from their one implementation is pretty important. Like: software is the lines of code that define the software. Even Dark, you can imagine, because the text code is a 1:1 representation of the internal magic, that you could create another backend. Zapier-style tools, you can imagine a YAML/declarative definition of the workflow, like GitHub Workflows' YAML definitions. But then once things start getting visual… once you can move around nodes & boxes, is that ineffable? Is it position metadata on top of a graph that could be exported? Somewhere in between?
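
To make the "position metadata on top of a graph" idea concrete, a hypothetical export format might separate the workflow graph from its layout like this (all names and node types are invented for illustration):

```python
# The workflow itself is a plain graph; the visual editor's layout is a
# separate, discardable layer that an exporter could keep or drop.
workflow = {
    "nodes": [
        {"id": "trigger", "type": "webhook"},
        {"id": "filter",  "type": "condition", "expr": "amount > 100"},
        {"id": "notify",  "type": "slack_message", "channel": "#sales"},
    ],
    "edges": [
        {"from": "trigger", "to": "filter"},
        {"from": "filter",  "to": "notify", "when": True},
    ],
}

layout = {  # purely presentational metadata
    "trigger": {"x": 40,  "y": 80},
    "filter":  {"x": 220, "y": 80},
    "notify":  {"x": 400, "y": 80},
}
```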

2020-06-05 18:17:49 Jamie Brandon:

Everything is just bits in the end. I don't think there is some ineffable magic that can't be exported in the data model. The important distinction imo is whether they are willing to define and commit to a data model. One of the defining features of the rise of cloud services is the end of backwards compatibility. Most cloud services own all the user's data and hide the internal details, allowing them to arbitrarily migrate code and data whenever they feel like it. It certainly makes development easier, and the lock-in it generates is just icing on the cake.

Compare this to e.g. apps which store their data in SQLite, which has a very stable serialization format and meta-model (SQL schema), making it easy to access and understand that data in third-party tools. Some cloud services do expose the same underlying API that all their front-end code goes through, which has a similar effect but still takes more work to interact with than a standard interface like SQL. Perhaps the rise of GraphQL will lead to more of this kind of portability.
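
As an example of the kind of third-party access a stable format allows, here is a minimal sketch that opens some app's SQLite file with nothing but the standard library; the file name and the "notes" table are hypothetical:

```python
# Inspect an app's SQLite database: list its schema, then peek at data.
import sqlite3

conn = sqlite3.connect("notes_app.db")  # hypothetical app database
for (name, sql) in conn.execute(
    "SELECT name, sql FROM sqlite_master WHERE type = 'table'"
):
    print(name)
    print(" ", sql)

# Assuming the app keeps a 'notes' table, any SQL tool can read it.
for row in conn.execute("SELECT * FROM notes LIMIT 5"):
    print(row)
conn.close()
```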

2020-06-06 04:38:11 Konrad Hinsen:

There's intentional lock-in, creeping lock-in, and explicit design to prevent lock-in, which involves in particular well-documented data models, storage formats, and APIs. It's much like code complexity: you have to fight it actively to prevent it from sneaking in.

2020-06-06 13:21:14 Stefan Lesser:

Lock-in “creeps in”, because a lot of technology is commercially driven these days, and building on open standards just doesn’t make economic sense in a world where you need to own a platform to make money, or at least make it look like you will at some point in the future.

The groundbreaking technologies we still have as foundations, the internet, TCP/IP, HTTP, email, etc., have all been invented without business models in mind. What’s locking us in, and what’s holding us back from inventing the future, are the incentives set by business models and what is considered successful today.

2020-06-06 17:41:11 Andrew:

On the bright side, the desire to own your data is a very real one.

Which means there’s economic incentives to helping people own or duplicate their data from “locked in” services.

Companies like https://fivetran.com/ help.

2020-06-06 19:00:30 Konrad Hinsen:

Stefan Lesser All true, but as I try to explain in my blog post, creeping lock-in can also happen in an Open Source project such as Python, with no business model. It is sufficient that the developers have some interest in increasing their user base, even if it's for glory rather than money.

Ultimately the issue is competitive development, rather than collaborative.

2020-06-06 20:53:23 Stefan Lesser:

Konrad Hinsen Sure, strong communities have strong opinions, and it starts to look a lot like a cult, religion, or whatever we want to call it. I would think that this is almost necessary to create a successful community.

Overall, I think the aspect of a project being open source is perceived as more important than it really is. Many successful open source projects are practically driven by commercial organizations, and their values and economic incentives are reflected in the direction these projects take.

At the end of the day the real issue seems to be about trust — do I trust the organization not to screw me over or go out of business, and to stay aligned with my values? At least with the current commercial landscape you can look at the business model and get a good grasp of what to expect.

2020-06-07 06:55:24 Konrad Hinsen:

Stefan Lesser Yes, trust is a big issue. In small communities, it is based on personal relations. For business, the business model is a good indicator. For big Open Source communities, there don't seem to be good criteria for outsiders to develop trust.

2020-06-03 23:01:35 Chris Knott:

https://twitter.com/hypotext/status/1268218080993386497

2020-06-03 23:15:55 Christopher Galtenberg:

mermaid for math, I like!

(I know... there are many more equivalents, and this is even more powerful - just love these declarative mini-languages)

2020-05-28 03:52:59 Unknown User:

MSG NOT FOUND

2020-06-04 09:04:26 Shubhadeep Roychowdhury:

Hey, Alex Bzz just a curious question. I know that source{d} had collected a huge Public Git Archive, and you also had a small tool written in Go to explore and download the files. I was wondering if there is any way to get that archive anymore; it does not seem possible using the tool. Please let me know if you have any idea.

2020-06-04 12:30:00 Alex Bzz:

Indeed, several TB of archives are gone from GCS and the company servers by now, but the procedure for collecting the data still works: https://github.com/src-d/datasets/tree/master/PublicGitArchive/pga-create#pga-create

2020-06-04 21:22:51 Shubhadeep Roychowdhury:

Thanks a lot

2020-06-04 18:11:19 Chris Martens:

Hey everyone, I’m chairing the AIIDE 2020 Playable Experiences track!

Please send us your games, weird art, and interactive widgets that are informed in some way by AI, by Friday, June 12! Details are available here.

Please consider submitting, especially if you are not working in academia and/or if you are working with an arts/humanities focus. This track is a huge part of what makes AIIDE special: we value many different forms of contribution.

2020-06-04 18:30:32 Ivan Reese:

U Alberta is right near me! I'm curious what their relation to this conf is (since the page is hosted on their site). (My thinking is: after COVID is over — one can dream — maybe there's a good community for this sort of AI weirdness in my area.)

2020-06-04 19:05:28 Chris Martens:

One of their faculty members, David Thue, is the program chair for AIIDE this year.

2020-06-04 19:05:49 Chris Martens:

There are definitely U Alberta folks active in this area!

2020-06-05 04:33:49 Ivan Reese:

Ah, I believe I've had some contact with David Thue in the past. Thanks!

2020-06-04 23:26:14 Will Crichton:

I’m researching the influence of working memory in program comprehension. Question for the community:

When you’re reading or writing a program, are there specific tasks/examples/etc. where you found it hard to remember things? Maybe you were flipping back and forth between documents, or you kept looking back to the definition of something.

2020-06-04 23:32:38 Christopher Galtenberg:

It's possible to remember things?

2020-06-05 01:41:25 Michael Coblenz:

Order of arguments is the worst.

2020-06-05 04:37:02 Ivan Reese:

Does keeping track of the order of operations / events in a complex system (eg: which subsystems are invoked in which order in which circumstances) count? Because that's probably my biggest struggle.

2020-06-05 06:00:51 Edward de Jong:

The biggest improvement that I cite in my Beads language design is a 10-to-1 reduction in the number of APIs you have to learn in order to build a product. Instead of having 100 drawing functions I have 10 with many parameters, all keyword-type parameters, so that through repetition you eventually learn those 10 functions and then you can build your products without consulting any documentation or using autocomplete. Autocomplete is a crutch that sometimes covers for a complex design. It became very popular in the Java world because of the ridiculous number of function names that one ended up with. Ivan Reese is correct that the biggest source of error in programming is trying to make sure things are done in the correct order; almost 50% of all programming is sequencing the operations. This is why I used deduction to automatically sequence as much as possible. It is the one thing that Prolog had that was not copied by other languages after it. I traced the evolution of languages back to the 70s, and there was a big funding battle between two groups, one based on Prolog and the other based on Lisp. Because the Prolog group was based in France, they of course lost the funding battle, and after MIT failed to produce any tangible results from 10 years of high levels of funding for automatic programming, the term AI was poison for another 10 years after that. It has finally been long enough that people have forgotten the overblown claims of AI and now we are back with an AI fetish. This time, however, machine learning is delivering some good results, and in vision and language recognition, among other areas, they're doing great work, and this time it won't blow up in their face. However, as Conway has a proof that consciousness cannot be the result of computation, there are limits to the achievements we are going to get from gradient-descent ML
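
A minimal sketch of the API-shape contrast Edward is describing, with invented function names (these are not actual Beads APIs):

```python
# Style 1 (implied by the 100-function approach): one function per variation.
#   draw_filled_rect(x, y, w, h)
#   draw_outlined_rect(x, y, w, h, thickness)
#   draw_rounded_rect(x, y, w, h, radius)
#   ...dozens more names to memorize.

# Style 2: a handful of functions with keyword parameters and defaults.
def draw_rect(x, y, w, h, *, fill=None, outline=None,
              thickness=1, corner_radius=0):
    """One entry point; the keywords document themselves at the call site."""
    ...

draw_rect(10, 10, 100, 50, fill="red", corner_radius=4)
```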

2020-06-05 06:13:56 Ivan Reese:

Edward — can you also offer an example of somewhere in programming you've found it hard to remember things, specifically? I can't tell if the example of reducing 100 APIs to 10 with many params is about a struggle you've personally faced, or something you've done just to solve an issue others have faced (especially since the rest of your comment drifts way off topic, rather than, say, offering more relevant context). I'm interested in hearing about your personal struggle with remembering, if you have anything to share there.

2020-06-05 10:21:09 Cole Lawrence:

At the moment, I'm having a hard time remembering which files I defined core logic in. I have many entry points due to the complexity of bundling and reusing my own library code. Then, a semi involved multi step build process with WASM in the middle. So, yeah, I'm currently hard at work to reduce that complexity

2020-06-05 13:16:30 Edward de Jong:

The better your memory, the more obtuse you can be in your work and get away with it when you yourself are the reader, but it will punish any other person who comes along later and tries to understand your code. You can see the bad effects of programmers who have good memories in many examples of code, where variable names are very short and non-descriptive, and where there are excessive numbers of modules with very complex inheritance systems. People with great memories gravitate towards languages which are known to be hard to read, but because of their phenomenal memories it does not stress them. Languages where you have to remember exactly how many parameters are being consumed on the stack are highly bifurcated in terms of their user base, because people with poor memories find those languages rough going. Forth and PostScript both require you to know how many operands a function is going to absorb from the stack. That is a tremendous, omnipresent memory load. Languages and APIs which have long sequences of required positional parameters in functions also present a heavy burden. In fact, almost any function that has more than one positional parameter starts to create a memory burden. The Lego system proves that it is better to have a small number of primitives that are repeated many times than to have a huge set of complicated pieces to connect together.

2020-06-05 17:19:24 Will Crichton:

Ivan Reese can you elaborate? What’s the higher level task that requires you to understand the order of operations? (debugging, performance optimization, etc.)

2020-06-05 17:21:22 Will Crichton:

Also, for APIs I think Matplotlib vs. Seaborn is a great example of what Edward de Jong is talking about. MPL gives 100s of knobs each with its own API function. Seaborn gives maybe a dozen top-level functions with many parameters, along with many smart defaults.
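
Roughly what that contrast looks like in code, using seaborn's bundled "tips" example dataset (fetched on first use); the details are simplified:

```python
import matplotlib.pyplot as plt
import seaborn as sns

tips = sns.load_dataset("tips")  # example dataset shipped with seaborn

# Matplotlib: assemble the plot from many separate calls and knobs.
fig, ax = plt.subplots()
for day, group in tips.groupby("day"):
    ax.scatter(group["total_bill"], group["tip"], label=day)
ax.set_xlabel("total_bill")
ax.set_ylabel("tip")
ax.legend(title="day")

# Seaborn: one function, many keyword parameters, smart defaults.
sns.scatterplot(data=tips, x="total_bill", y="tip", hue="day")
plt.show()
```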

2020-06-05 17:26:18 Ivan Reese:

My OoO difficulty usually occurs when acclimatizing myself to a new codebase, needing to learn what all the pieces are and how they fit together. Alternatively, returning to a familiar codebase after a time away from it, needing to recall or reacquaint myself with the workings.

Debugging too, sure, but I think that has less to do with memory and more to do with visibility. The period of honing-in on the cause of a bug (subjectively) feels more like following a scent than making a map. Once the cause is found, it's usually a methodical process to determine the root cause.

Optimization is almost the opposite of learning / reacquainting — by the time I'm ready to do optimization, I will have loaded the entire program into my head, so to speak (or at least the relevant bits), so it's all in working memory and is easy to recall.

2020-06-05 17:30:25 Will Crichton:

In what cases do you need to understand OoO to understand a codebase? Perhaps put another way: for what kinds of pieces do you need OoO to reason about their composition?

For example, if I’m understanding how Seaborn draws a graph, it might choose to draw the axis labels before the data points, but that’s an arbitrary choice. Understanding the OoO doesn’t give insight to the system design.

2020-06-06 03:30:55 Ivan Reese:

One example would be a video game, full of subsystems that all operate with different notions of time — networking code working in terms of packets with variable ping, physics locked at 60hz, gameplay logic happening at various rates (some stuff is every frame, some stuff is once every few frames, some stuff goes into a low-priority queue, some stuff happens at specific moments), rendering synced to the display refresh interval, audio happening both in sync with the gameplay logic but also at the audio sampling rate, and on and on. These subsystems are kinda isolated, but they're also kinda interdependent. There could be a lot of shared state, or a lot of dynamism in how these subsystems affect one another, or a lot of design decisions that prioritize performance at all costs. Ultimately, the code needs to be quite deterministic and very well understood in order to ensure that the game runs quickly and correctly, and you don't (can't?) have automated tests or static verification, so you generally have to work on it by loading it all into your head.

(I hope I'm understanding your question correctly. Sorry if this is not what you had in mind.)
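
A stripped-down sketch of the kind of loop Ivan is describing, with each subsystem advancing on its own clock; the function names and rates are placeholders, not taken from any particular engine:

```python
# Skeleton of a mixed-rate game loop: fixed-step physics, per-frame
# gameplay, rendering interpolated between physics steps.
import time

PHYSICS_DT = 1 / 60          # physics locked at 60 Hz
accumulator = 0.0
previous = time.perf_counter()

def pump_network(): ...      # per-packet, whenever data arrives
def step_physics(dt): ...    # fixed timestep
def update_gameplay(dt): ... # variable rate, once per frame
def render(alpha): ...       # synced to the display refresh

while True:                  # runs until the game quits
    now = time.perf_counter()
    frame_dt = now - previous
    previous = now

    pump_network()
    accumulator += frame_dt
    while accumulator >= PHYSICS_DT:   # catch physics up in fixed steps
        step_physics(PHYSICS_DT)
        accumulator -= PHYSICS_DT
    update_gameplay(frame_dt)
    render(accumulator / PHYSICS_DT)   # interpolation factor for drawing
```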

2020-06-07 03:26:38 Will Crichton:

Ivan Reese that helps, thanks for elaborating. I think what I’m getting at is — what is a specific task involving this kind of understanding? e.g. a hypothetical task is “I need to understand how many ms elapse from the start of the tick to when my gameplay logic runs”, which requires understanding OoO/subsystem dependencies, presumably.

I’m being a little pedantic here because understanding how to provide cognitive support has to start with a task. e.g. when evaluating whether to use a bar chart vs. pie chart to display some data, you don’t ask “which is better”, you ask “which is better when a person is trying to find the maximum value in my dataset”.

Here’s a high-level task taxonomy from “Program Comprehension During Software Maintenance and Evolution”. These are a bit vague, but a useful starting point.

2020-06-05 01:46:55 S.M Mukarram Nainar:

http://okmij.org/ftp/Prolog/Soutei.pdf Delightful application of PLT principles to solve an actual problem. Debugging unix permissions problems is going to be more painful in the future because I'm always going to think back to this.

2020-06-05 13:01:08 Michael Donatz:

This is what I want to see from a programming language: https://s.ai/nlws/

2020-06-05 19:28:48 Prathyush:

First brush: Can't compute.

2020-06-05 22:05:27 Doug Moen:

Do it.

2020-06-06 05:40:07 Dan Cook:

This might not be quite the same thing, but here's an idea I had:

A diagram consisting of data (and/or labeled placeholders for data), some of which can be visually nested (lists, key-value maps), and connections (e.g. arrows) that show operations between them.

Copy/assign A to B is an arrow from A to B

Conditionals connect a condition to an operation(s). Either a bubble around the operations, or an indicator next to the line representing the operation (and all other operations that stem from it).

A map/select operation where one end is a collection, and the other represents each element. Either some other connector "down the line" that "collects" it all, or a bubble around the whole map. In either case, the output is the new collection.

Similar symbols for filter, reduce/aggregate, sort, etc.

Some sort of Haskell-style pattern match. For example, an arrow from A to some (partially specified) nested structure, and then connectors from parts of that nested structure to further operations (which only happen iff the match succeeded in the first place).

There's no inherent order to anything, other than by dependency. It's a DAG that you can trace forward or back.
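
One hypothetical way to encode a diagram like this as data, just to make the dependency-ordered DAG idea concrete; all node names and operations are invented:

```python
# Nodes are data sources or operations, edges carry dependencies, and
# evaluation order falls out of the DAG rather than being written down.
from functools import reduce

graph = {
    "orders":  {"op": "input"},
    "large":   {"op": "filter", "from": "orders", "pred": lambda o: o["total"] > 100},
    "totals":  {"op": "map",    "from": "large",  "fn": lambda o: o["total"]},
    "revenue": {"op": "reduce", "from": "totals", "fn": lambda a, b: a + b, "init": 0},
}

def evaluate(node, inputs):
    spec = graph[node]
    if spec["op"] == "input":
        return inputs[node]
    upstream = evaluate(spec["from"], inputs)   # dependency order, not textual order
    if spec["op"] == "filter":
        return [x for x in upstream if spec["pred"](x)]
    if spec["op"] == "map":
        return [spec["fn"](x) for x in upstream]
    if spec["op"] == "reduce":
        return reduce(spec["fn"], upstream, spec["init"])

orders = [{"total": 50}, {"total": 150}, {"total": 300}]
print(evaluate("revenue", {"orders": orders}))  # 450
```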

2020-06-06 15:38:52 William Taysom:

Frege mentioned right up front: it at least knows what to reference. Will look further. Might have a fun idea or three. Of course, it also has a feel of "watched Arrival, let's do this!"

2020-06-06 15:48:59 William Taysom:

Dan Cook Similar to how mechanisms can perform calculations (think of a coin-sorting machine). I've been playing for a while with how to represent interesting data transformations (map, select, order, group, flatten, etc., etc.) non-symbolically, in the sense that the geometry of the representation corresponds directly to the semantics without the use of arbitrary symbols. Since symbols make certain things so easy that there's a gravitational design pull toward familiar programming constructs, this charades game yields some interesting ideas.

2020-06-06 16:02:43 William Taysom:

So review time: cute. Makes for nice swirly pictograms, which seems to be the real goal. These people need to be introduced to string diagrams. Would benefit from a type-system or, more linguistically, agreement features. 😉

2020-06-06 16:05:01 William Taysom:

And using Pac-Man for "to eat" — fun times.

2020-06-07 12:01:39 Cole Lawrence:

noahtren would a different notation like this help guide the training process in your experiments?

2020-05-26 19:41:42 Unknown User:

MSG NOT FOUND

2020-06-06 17:05:06 Andrew:

I think it’s easier than ever actually — ever used netlify drag and drop?

2020-06-06 17:05:52 Andrew:

create-react-app -> Netlify stack with no web server feels like being on a rocketship.

You can deploy a functional web app in less than an hour if you know what you want!

2020-06-06 17:16:29 Jamie Brandon:

https://futureofcoding.slack.com/archives/C5T9GPWFL/p1591449674001000?thread_ts=1591214873.399000&cid=C5T9GPWFL

What are people's thoughts on funding?

2020-06-06 17:54:40 Jamie Brandon:

I'm mostly in agreement with Stefan that typical business models tend to lead to worse outcomes overall. I can think of a few reasons:

My own approach at the moment is to take advantage of how ridiculously overpaid programming is, where 1-2 months of unrelated consulting is enough to fund a year of working on my own projects. But it's also worth looking at how existing projects manage:

I particularly like this little note on the sqlite page:

"Hwaci is a small company but it is also closely held and debt-free and has low fixed costs, which means that it is largely immune to buy-outs, take-overs, and market down-turns. Hwaci intends to continue operating in its current form, and at roughly its current size until at least the year 2050. We expect to be here when you need us, even if that need is many years in the future."

It's a reminder that these problems aren't inherent to businesses in general, but just to a particular set of ideas about how business should be run that seems to be in the water these days. It seems like small businesses that intend to stay small are better at resisting them. A more recent example is Sourcehut, which is funded by paid subscriptions but still releases all the code under AGPL and exposes simple APIs and data exports for each individual service.

2020-06-06 18:52:28 Duncan Cragg:

My plan is a mix of:
• overpaid contract work followed by periods of working on Onex (I'm currently in one of those full-time Onex phases)
• drawing on my pension pot 😮
• doing a Kickstarter to sell cheap OnexOS smartwatches - like Espruino, make money on hardware not software.

This is an age-old problem: just think what progress the human race could make if there was a source of funding for innovations that didn't rely on (a) a business model or (b) an academic publishing model, just (c) a value-for-humanity model.

2020-06-06 20:28:24 Tom MacWright:

I'm not sure how comparable most 'future of programming' projects are to programming language projects. Like definitely, it's 5-10 years till languages are generally applicable but Rust, Zig, and Clojure were useful for basic experiments within the 1-2 year timeline (as far as I can grok from the informal histories)

2020-06-06 20:30:14 Tom MacWright:

The sort of future-of-programming vision quest things like dynamicland or, sort of, ink & switch, are longer bets and more likely to produce prototypes with no practical use before they potentially invent the future

2020-06-06 23:59:07 Jamie Brandon:

"Rust, Zig, and Clojure were useful for basic experiments within the 1-2 year timeline (as far as I can grok from the informal histories)"

"The language grew out of a personal project begun in 2006 by Mozilla employee Graydon Hoare,[16] who stated that the project was possibly named after the rust family of fungi.[35] Mozilla began sponsoring the project in 2009[16] and announced it in 2010.[36][37] The same year, work shifted from the initial compiler (written in OCaml) to the self-hosting compiler written in Rust.[38] Named rustc, it successfully compiled itself in 2011.[39] rustc uses LLVM as its back end. The first numbered pre-alpha release of the Rust compiler occurred in January 2012.[40] Rust 1.0, the first stable release, was released on May 15, 2015.[41][42] Following 1.0, stable point releases are delivered every six weeks, while features are developed in nightly Rust and then tested with alpha and beta releases that last six weeks.[43]"

I started using it in 2014 - at that point it clearly had promise but the compiler still regularly crashed and there were very few libraries. I think it still had GC and green threads as late as 2012, so at least 6 years to figure out the right combination of features to omit the runtime, becoming recognizably the language that it is today. I wouldn't have paid money for Rust in 2007.

Zig is only 5 years old and is surprisingly usable (although I still crash the compiler). I think the timeline for Clojure was similar. But both are much simpler languages, sticking to fairly well understood design spaces. I suspect most foc projects are more like Rust in that they're tackling completely new areas of the design space and will need a lot of shaking out before it's clear whether or not they're going to work out.

2020-06-07 02:57:16 Will Crichton:

Given recent events (as well as related discussion of the lack of diversity in FoC), I just wanted to mention — now is a good time to reflect on what role people of color can and should play in the future of coding. That starts with understanding the relationship of technology and race. I can recommend several great examinations of this topic in the HCI community:
• Does Technology Have Race? https://dl.acm.org/doi/pdf/10.1145/2851581.2892578
• Critical Race Theory for HCI https://dl.acm.org/doi/pdf/10.1145/3313831.3376392
Not strictly race-related, but Morgan Ames also has some great work in critically analyzing hacker culture and techno-utopianism in education policy. https://dl.acm.org/doi/pdf/10.1145/3274287

2020-06-07 03:06:37 Edward de Jong:

One of the wonderful things about computers is that they don't care who is programming them: old or young, male or female, whatever shade of color, rich or poor. There is no more egalitarian field than computers, and I have worked with people of every shape and size from all over the world. Everyone else wishes they had our level playing field. I've worked on projects with people I've never seen, so that is the ultimate in freedom of opportunity and lack of bias.

2020-06-07 03:53:04 Kartik Agaram:

And yet, we do know who's on this group. And we see the same disparities as elsewhere.

You're right that our field has advantages. The question is what we have done with them.

2020-06-07 03:53:06 Shriya Nevatia:

Edward de Jong I love that too but unfortunately the people and communities around technology are not always as welcoming as the technology alone, so there is still much work to be done 🙂

Many young programmers get their start finding friends and collaborators online; if you are the only Black or Female or Latinx (etc) person learning about the world of software, it does feel isolating and can hamper your learning.

I think we can appreciate the amazing potential of technology while also acknowledging that there’s still work to be done.

2020-06-07 03:58:05 Kartik Agaram:

Among all the stuff I've learned this past week, this thread stands out:

https://twitter.com/michaelharriot/status/1186468302400507904

Not on topic for this group, but this is a thread and we can all damn well adjust.

2020-06-07 11:36:18 Mariano Guerra:

https://blokdots.com/ blokdots is simple-to-use software for building interactive hardware prototypes without writing a line of code.

2020-06-07 11:52:58 Cole Lawrence:

Oooh... I like the layout design in that thumbnail. Thanks for sharing!

2020-06-07 11:59:11 Cole Lawrence:

I really like their landing page. It reminds me of how Phaidon would design a book on the "history of hardware". Steve Peak this looks a bit like Storyscript with a slightly more technical audience focus. It's interesting that they are also calling out a Figma Plugin integration (which makes for good demos, I'm sure!)