
🕰️ 2020-08-22 19:42:02

...

Doug Moen 2020-08-23 22:19:34

In the 1950s, Kenneth Iverson was a math professor at Harvard. He designed APL as an unambiguous, expressive mathematical notation, for teaching math to undergrads, and for his personal use in solving research problems and writing books and research papers. Iverson joined IBM as a mathematician, where he used his notation to formalize and specify the instruction sets of the 7090 and 360 computers. Only after that did the project to implement APL as a programming language begin at IBM.

APL doesn't usually get much credit for its influence on modern programming systems (although Mathematica, NumPy and TensorFlow are APL dialects), and it isn't usually credited as an early functional language, even though it was the first such language (to my knowledge) to have map/reduce primitives (although under different names). APL now seems to be remembered for its syntax.

Finally, my point. Re: Sussman, APL is a much better mathematical notation than Scheme.

Konrad Hinsen 2020-08-24 05:07:15

Historical note: NumPy started out with a focus on implementing much of APL as a Python library. The function names in Numerical Python (as NumPy was originally called) are the names of APL operators. Later on, there was a movement to make NumPy more Matlab-like to win over Matlab users, so the APL heritage is no longer as clear as it used to be.
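
For a concrete taste of that lineage, here is a minimal sketch (the NumPy calls below are real; the APL correspondences are the commonly cited ones):

```python
import numpy as np

# APL's +/ (plus reduction) over ⍳5
np.add.reduce(np.arange(1, 6))                   # -> 15

# APL's ∘.× (outer product)
np.multiply.outer([1, 2, 3], [10, 20, 30])       # -> 3x3 multiplication table

# Primitives whose APL names survive in NumPy: take, compress, reshape
np.take([10, 20, 30], [0, 2])                    # -> array([10, 30])
np.compress([True, False, True], [10, 20, 30])   # -> array([10, 30])
np.reshape(np.arange(6), (2, 3))                 # -> 2x3 matrix
```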

As for APL vs. Scheme: that really depends on which aspect of mathematical notation you focus on. Sussman comes from a symbolic computation background, with an application focus on calculus (check his "Structure and Interpretation of Classical Mechanics" as the prime example). APL has a focus on numerics and algebra. The only decent attempt I know to unify both perspectives is Mathematica.

Jack Rusher 2020-08-24 07:06:24

I have fond feelings for APL. It was the language in which I was taught a range of numerical methods by one of my favorite professors (here's his solution to the N-Queens problem: http://archive.vector.org.uk/art10007880), and I agree completely that it is under-appreciated for its place in CS history. However, I find the syntax leaves a good deal to be desired for work outside of its original niche, whereas the vector nature it embodies can quite naturally be expressed in any other functional programming language. It is for this reason that I prefer "APL-as-library", "PROLOG-as-library", &c.

Regarding the history of Mathematica, it was inspired almost entirely by Wolfram's extensive use of https://en.wikipedia.org/wiki/Macsyma and written in C rather than Lisp only because Rob Pike had convinced him (https://writings.stephenwolfram.com/2013/06/there-was-a-time-before-mathematica/) that C was "the language of the future" while they were both at Caltech. Wolfram was aware of APL, but to call Mathematica an APL dialect is in no sense correct.

Re: APL v Scheme as notation, I will say first that we disagree and second that even though we've only (virtually) known each other for a few weeks, your habit of expressing your subjective aesthetic opinions about Lisp family languages as if they were objective truths is already quite tiresome.

The Journal of the British APL Association. The BAA promotes the APLs, terse programming languages derived from Iverson’s mathematical notation.

🔗 There Was a Time before Mathematica…—Stephen Wolfram Writings

Emmanuel Oga 2020-08-25 07:13:30

Papers alone are not great. It's not a matter of functional vs imperative, it is a matter of completeness. 99% of papers are missing implementation details that could be incredibly hard to supply for the uninitiated. Papers with artifacts are great. Start from a running template, play with the parameters, see what happens... if you really want to start from scratch, you can, but you can always go back to an actual working artifact for reference.

Emmanuel Oga 2020-08-25 07:16:51

my comment taps into the "reproducibility" conversation. See: https://ctuning.org/ae/

Don Abrams 2020-08-25 19:26:21

Good paper covering a lot of recent CS edu research in this area: https://scholarship.tricolib.brynmawr.edu/bitstream/handle/10066/22621/2020LowensteinS.pdf

Ray Imber 2020-08-25 21:03:45

Emmanuel Oga My thoughts were more in regards to pedagogy than reproducibility, but there is probably overlap here (it's difficult to reproduce what you don't understand).

My thesis is that you gain a deeper understanding by building your own model up from scratch. I learn far more by building from scratch than I do from just "playing with parameters."

That being said, reproducibility is an important goal in itself. I applaud any efforts to improve that area!

Kartik Agaram 2020-08-25 22:56:07

At least for my personal style, reproducing is often the first step to understanding or learning. Hot take: reproducibility matters more for teaching than research.

Konrad Hinsen 2020-08-26 05:51:25

Reproducibility matters for learning, which is the common aspect of teaching and research. Learning is attaching new knowledge to an ever-growing edifice of solidly acquired knowledge. Reproducibility is about the solidity of that edifice. You have to be able to question your old knowledge and check if it is as solid as you thought it was, in the light of new knowledge. And that is true as much at the individual level (what we usually call "learning") as at the collective level (research, which is learning at the level of society).

Don Abrams 2020-08-26 06:56:04

This seems to describe making conceptions and discovering misconceptions. Pedagogically, reproducibility would concretely imply using approaches that minimize the appearance or lifespan of misconceptions and minimize time to build conceptions? Research-wise, that would mean cataloging conceptions and misconceptions and measuring their presence over time across large groups? Then, we could compare different pedagogies?

nicolas decoster 2020-08-26 09:14:44

I agree that implementing something helps you understand/feel it better. In fact that is Seymour Papert's idea of the microworld: creating microworlds helps in understanding some concepts. In the case of LOGO programming, the turtle helps children feel (if not understand) concepts like angles.

And I am convinced that this implementing-microworld idea can extend to adults to help them understand some science better, even without great programming knowledge. And as a fan of visual programming, I think tools like Scratch are perfect for this: you can implement a dynamic/interactive visualisation or microworld to help you grasp some concept.

One example. During a two-day workshop where I was teaching Scratch to some middle-school teachers, one wanted to use Scratch to create a small interactive animation to explain a plate tectonics concept. She had never programmed before, but by the end of the workshop she was able to ask her students to implement this concept with Scratch.

Ray Imber 2020-08-27 18:24:19

I think my comment on "reproducibility" needs to be clarified a bit with some definitions.

There is the denotative, dictionary definition of "reproducibility": the ability to "reproduce" something (note, this is generic; it can be anything, not just a research paper); to build a copy or simulacrum that exhibits the same behavior or results as the original.

From what I see in the thread, we all basically agree that this is a good and useful thing, particularly for learning and understanding.

But there is a connotative definition of "reproducibility" often associated with scientific research. This definition has more specific context associated with it. In this sense, "reproducibility" is the ability to make a copy or simulacrum that exhibits the same behavior or results of a research paper specifically for the purpose of peer review and validation.

This definition often implies that the reproduction should be created completely independently from the original result, using only the information found in the paper (and associated references and domain knowledge), with no contact with the original authors.

This second definition is what I believe Emmanuel Oga was referring to, based on the context of his post and associated links. My response to that post was an attempt to separate the two concepts. I think I did a bad job initially, and hopefully this is more clear.

Kartik Agaram 2020-08-27 19:50:27

Yeah. Even if 99% of papers have reproducibility problems, I suspect the graphics papers that the (excellent!) demo link up top uses don't have that problem.

Perhaps we need a separate thread on the academic notion of reproducibility.

Konrad Hinsen 2020-08-27 20:18:08

Ray Imber The terminology around this topic is a mess. There's people who have written articles just on what should be called what, and of course they disagree. That's life.

One initiative worth mentioning in this context is https://reproducible-builds.org/. It's about making (Linux) executables reproducible from source code, so that you can be sure you are actually running the program whose source code you are reading. It's an answer to Ken Thompson's Turing Award speech "Reflections on Trusting Trust" (https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf).

Emmanuel Oga 2020-08-28 06:20:36

fwiw I was thinking of reproducible more as in "reproducible builds" than as "replicable experiment". The idea is that papers should come with their artifacts: a "one click" implementation that processes some provided example inputs and produces some example outputs, as described in the paper.

This is maybe independent of the conversation about whether the learner should build their own version from scratch or not (personally I do think "building from scratch" is the best way to learn). Even when building from scratch, comparing against a reference implementation is incredibly useful as a learning tool.

🕰️ 2020-08-21 23:15:13

...

Doug Moen 2020-08-23 22:31:02

Maikel van de Lisdonk I implemented an endless slider in my Curv project, but I'm not in love with the UI. I'd appreciate some more specific references to good UIs for this, so that I can see what a better UI might look like.

yoshiki 2020-08-24 03:24:24

Daniel Garcia curious what you mean by solving the problem of magic numbers. For physics params in games at least, to me everything is arbitrary and relative to other values, so I'm not sure what the problem to be solved is. (Maybe "magic numbers" isn't the right word? Sorry if that was confusing)

Maikel van de Lisdonk 2020-08-24 05:51:16

Doug Moen The Nord Lead line of hardware synths (https://www.wikiwand.com/en/Nord_Lead) has endless knobs with a ring of LEDs to indicate the value. I'll look for other examples in software and will post them here

Maikel van de Lisdonk 2020-08-24 07:24:16

Doug Moen Ableton Live does the following: when changing a knob, the mouse pointer disappears and you control the knob directly. After you let go, the mouse pointer reappears at the position where you started controlling the knob. I am using Ableton on a Mac with the touchpad.

Chris Maughan 2020-08-24 07:39:20

I do the same thing with control knobs, along with a tooltip that appears to show you the units and current value. The orange arc gives you a sense of where you are around the knob, but the knob may scale exponentially depending on what it is for; and there is no indication for that, other than 'playing' with it. I don't think that's a big deal, but I guess it isn't discoverable.

📷 image.png

Maikel van de Lisdonk 2020-08-24 08:37:11

Chris Maughan in Ableton you can still type in the value when needed; is this also possible in your solution?

Chris Maughan 2020-08-24 08:38:05

No, but it probably should be 😉

Maikel van de Lisdonk 2020-08-24 09:26:36

I want these kinds of controls in the browser as well, and they exist of course, but I think the biggest problem in getting the native feel is that you cannot hide/show/control the mouse pointer and its position like you can in a desktop application. On a mobile touch device this might be less of a problem

Gregg Tavares 2020-08-25 01:36:46

Do they need to be discoverable? There is often a tension between being efficient and discoverable. Imagine you wanted values in a spreadsheet to be draggable and discoverable. If you show some UI you've probably just decreased the information density by 50% or more. People who are actually trying to enter data will probably tab, type, tab, type, tab, type, so the UI is just wasted. Holding alt and dragging to slide a value when you want to explore is a reasonable trade off. Sure, a new user won't know they can alt-drag a value, but training that new user is over in 30 seconds, and the discoverable UI is possibly not worth the loss in efficiency a few days later when they start trying to be productive.

I'm not saying you shouldn't have visible sliders, period. Rather, it probably depends on your expected use case. If you only ever expect less than, say, 20 inputs then sliders might be great, but if you expect 300 inputs, maybe not so much? Or maybe at least there's a tension between targeting noobs and once-in-a-while users versus targeting experts/pros. There's a reason we take the training wheels off the bike at some point.

Daniel Garcia 2020-08-30 17:19:10

yoshiki By "magic numbers" I mean this critique by Bret Victor. But I prefer just being explicit in the language/API and having labels on arguments, like: ellipse(xPosition: 161, yPosition: 219, width: 114)

Nick Smith 2020-08-24 05:10:56

Why isn't any kind of logic programming considered a https://en.wikipedia.org/wiki/Model_of_computation? Why do we talk about Turing Machines and recursive functions as fundamental, but not inference? I can't find any resources discussing this disparity. It's like there are two classes of academics that don't talk to each other. Am I missing something?

S.M Mukarram Nainar 2020-08-24 05:33:41

I'm not an expert, but logic programming à la Prolog etc. is usually based fundamentally on term rewriting, which is essentially what the untyped lambda calculus is. (Please correct me if I'm wrong; this is something I've been studying recently and I'm still quite new to the field)

Nick Smith 2020-08-24 05:40:41

Isn't term rewriting destructive, i.e. it consumes an input to produce an output?

S.M Mukarram Nainar 2020-08-24 05:50:38

I suppose? I'm afraid I'm not sure what the relation is and what you are getting at here.

Would you consider beta-reduction "destructive"?

Nick Smith 2020-08-24 05:58:37

I guess a better question is: what is the "term" that is getting re-written in evaluating a logic program? Is it the entire program? As a conjunction of clauses? I guess the destructivity doesn't manifest in such a case because you're producing a "new program" (term) which you guarantee is going to be strictly larger than the last one (adding new deductions). That suggests term rewriting might be "too powerful" for logic programming though, because it can also model the deletion of facts (unless you want that capability, for some reason).

But yeah, I'm now seeing how you could map inference to term rewriting!
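
A minimal sketch of that reading (toy rule and names invented): the "term" being rewritten is the whole fact set, and each rewrite step yields a strictly larger one, so deductions accumulate monotonically.

```python
# A logic program as a fact set; inference as rewriting the whole set
# into a strictly larger one until a fixed point is reached.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def step(facts):
    """One rewrite: derive grandparent facts from pairs of parent facts."""
    new = {("grandparent", a, c)
           for (p1, a, b) in facts if p1 == "parent"
           for (p2, b2, c) in facts if p2 == "parent" and b2 == b}
    return facts | new  # monotone: the new "term" contains the old one

while True:
    bigger = step(facts)
    if bigger == facts:  # fixed point: nothing new left to deduce
        break
    facts = bigger

print(("grandparent", "alice", "carol") in facts)  # True
```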

Andrew F 2020-08-24 08:45:29

Interesting. When logic shows up in the foundations of computing, it's usually in the context of the Curry-Howard isomorphism. From that perspective, logical inference is sort of... program synthesis, I think? More broadly, finding elements that occupy a type that represents a proposition.

It could be that logical inference is (IIUC) inherently search-oriented, so people who make "models of computation" don't see it as a good primitive operation: too many steps and decisions behind the scenes. If that's the case, I think there's still some good mileage to be had from making a model where logical inference is stupid easy to write and tractable for an optimizer as well. (Uncoincidentally, this is an approach I'd like to take, but I don't know enough yet)

Duncan Cragg 2020-08-24 09:48:18

There probably are two classes of academics not talking, yes, and the logic folk seem to have lost the popularity race to the functional folk.

You can implement functional programming with term rewriting or reduction and similarly you can use rewriting or reduction in proof systems. My memory of all this has faded since my university days mind you.

Duncan Cragg 2020-08-24 10:04:15

Horn clauses are Turing Complete with tail recursion IIRC

S.M Mukarram Nainar 2020-08-24 10:42:01

http://maude.cs.illinois.edu/w/images/0/0d/Maude-book.pdf

This may be of interest. I was pointed to it recently, and have been meaning to read it (though I haven't yet). As I understand it, it's a platform along the lines of Racket, except based on term rewriting rather than syntactic macros.

Nick Smith 2020-08-24 11:29:10

S.M Mukarram Nainar I read some pages from that book just now, and the whole system looks overly complicated. Meanwhile the authors are using the words “simple”, “easy”, and “natural” everywhere, but I can’t find any good justification for why I should keep reading. The Wikipedia article is the same. It’s raising a lot of red flags.

Nick Smith 2020-08-24 12:03:58

@Andrew F Not all logic programming involves search. Each logic language comes with its own “standard” evaluation strategies, and by my understanding, each strategy effectively yields a different model of computation (including different time and space complexities).

S.M Mukarram Nainar 2020-08-24 12:06:08

Well, it is an academic reference work. While unfortunate, I think that's pretty normal. I posted it because it is one of the few "production" systems based on term rewriting that I am aware of.

Andreas S. 2020-08-24 11:12:08

Hi #thinking-together! Could someone explain or give links to resources that compare MVVM and the React approach? How do they compare and relate?

Duncan Cragg 2020-08-24 11:44:27

React is quite thin and one-way, so not really comparable! It's a View of a (View?)Model in some ways.

Duncan Cragg 2020-08-24 11:49:47

This could be a #present-company question?

Ivan Reese 2020-08-24 14:18:14

Yeah, this is a #present-company question. @Andreas S., kindly repost it there.

I'm going to leave this thread here so that other folks can calibrate their sense of what should go where.

No more comments on this thread, thanks!

🕰️ 2020-08-17 14:40:56

...

Mariano Montone 2020-08-24 11:56:38

I suggest The Humane Interface: New Directions for Designing Interactive Systems. I don't agree with everything, but it has interesting concepts.

Orion Reed 2020-08-24 16:26:27

@Mariano Montone it’s already on its way!

🕰️ 2020-08-19 04:28:15

...

Andrew Reece 2020-08-24 13:16:50

This isn't a critique, just a stray thought about visual representation:

The transition from rhetorical algebra (verbal descriptions of relationships) to symbolic language is commonly lauded as a large improvement in the UI for maths.

Do you disagree with this in general/think it only applies to "low-level" concerns/think it only applies to symbolically-inclined individuals?

The inverse approach might be to try to extend mathematical notation to better represent the "high-level" concepts end-users want to manipulate. Can this be done without being immediately off-putting to people who aren't big fans of maths? (i.e. normal people!) 😄

Nick Smith 2020-08-26 12:45:22

@Andrew Reece I think symbolic representations are useful when you need to do algebra (symbolic manipulation/evaluation). But programmers don’t want to do this: they simply want to write something down, and be able to easily read it later. We make the computer do the algebra when we hit “Run”.

Mike Cann 2020-08-26 03:52:58

Anyone played with Motoko yet? https://stackoverflow.blog/2020/08/24/motoko-the-language-that-turns-the-web-into-a-computer/ looks really interesting, kind of reminds me of Unison in some ways

Cameron King 2020-08-26 19:28:40

I've been trying to get it running, but the local development environment isn't Windows-friendly, and doesn't seem to play well with Windows Subsystem for Linux. When I run dfx start, I keep getting "Permissions too open for path" errors with a reference to a Rust file whose source I can't find. Changing the permissions of the path in question alternates the original error with a "Permission Denied" error. Google turns up nothing.

I think the Internet Computer as a whole is a fascinating idea, Motoko aside, and I'd love to work with it because it fits well with the philosophy of a project I've had planned for a few years, but I guess I'll have to wait until the dev experience is more stable/I have energy to install a full unix distro.

Mike Cann 2020-08-26 23:51:21

@Cameron King ah, that's a shame you are having issues on Windows. I'm a Windows user too and was planning on having a tinker with it when I get a spare min

Tyler Adams 2020-08-26 08:57:17

Does anybody have experiences with non-standard keyboard layouts like dvorak? I'm writing a blogpost about keyboard layouts and would love to hear about your experiences

Nick Smith 2020-08-26 12:57:18

I once used a QWERTY keyboard with a split space bar and the keys aligned into columns. It was allegedly “ergonomic”, but it was not worth it. My typing was perhaps 10% better, but it was suddenly hard for me to jump between different devices. Given that typing speed is completely irrelevant to productivity for everyone except perhaps professional writers, I’ve never tried any weird keyboards since. There are 100 things a person can optimize to receive better returns than a new keyboard layout. For example: a note-taking methodology, diet and exercise, or social media rationing.

Chris Maughan 2020-08-26 13:11:26

I tried one of these, but found the mental effort to learn it too much; and it was very slow compared to touch typing on my favourite MS Sculpt keyboard: https://twiddler.tekgear.com/

I also recently tried an ergodox EZ, but the layout of keys was again quite a high effort to learn (and I found it awkward that many of the symbol keys I expected to find on the right, or beside the Z were not there). Lots of folks make up their own keyboard layouts for it (but I imagine this is easier if you aren't a touch typist to start with): https://ergodox-ez.com/

I have a Matias half keyboard too: http://matias.ca/halfkeyboard/ This is a weird one; your brain really does fill in the gap and lets you type with one hand, 'mirroring' the other half. I've found it occasionally useful with CAD tools, but it's more of a novelty than something I would seriously use all day (unless I didn't have the use of one hand).

Finally, I tried a Kinesis FreeStyle Edge:

https://gaming.kinesis-ergo.com/product/freestyle-edge/

This was OK, but eventually I had to admit to myself that my typing speed was significantly slower using Cherry/Mechanical keys; it just takes so much more effort to press them than the short travel of the MS Sculpt. I was also finding my hands were far more tired by the end of the day. I haven't measured my output, but I type a lot during a given day. One of the reasons I'm really interested in input devices!

My dream keyboard is the MS Sculpt, separated into 2 halves that I can spread apart on the desk.... I believe I am on my 6th Sculpt. I probably kill one in less than a year.

https://www.microsoft.com/accessories/en-gb/products/keyboards/sculpt-ergonomic-desktop/l5v-00006

Ivan Reese 2020-08-26 14:26:12

I recently bought a Microsoft Sculpt too, to ward off RSI that might be looming in my future. (Trying to "listen to my body", you know?)

Since the layout is different enough, I've decided that this keyboard is going to be my home-row Dvorak instrument, and my regular Apple keyboards are going to be my bad-habit two-finger QWERTY instrument.

(I use "instrument" here because, to me, the idea of having different layouts for differently shaped devices feels just like switching between, say, clarinet and guitar. Totally normal.)

I've been learning Dvorak using https://www.typingclub.com, and I'm up to about 20 WPM after 2 weeks, practicing about 15 minutes a day. Learning all the keys was pretty quick for me, and now I'm enjoying just building up my speed.

Unlike what Nick said above, I do feel like my productivity — and my enjoyment — while using the computer is restricted by my typing speed. This is my medium for thought. I'm writing right now to express my thinking. I can think of these words far faster than I can type them (and I currently type something like 70 WPM). So I think it's totally worth it for me to attend to my discipline.

Ivan Reese 2020-08-26 14:39:33

(On a meta note, I'd love to know how you see the matter of keyboard layouts as part of the future of programming. Or if that's not really relevant to your blog post, Tyler, that's fine — but #present-company would be the best channel for this sort of conversation. When your post is done, you should share it there!)

Chris Maughan 2020-08-26 15:25:54

Totally agree on the speed; for me optimal typing makes me a better/more productive programmer. It's one of the reasons I use Vim, and regularly research keystrokes/combinations to improve.

Ivan Reese Did you get a Dvorak Sculpt (if such a thing exists) or are you using a regular one with keystrokes remapped?

Ivan Reese 2020-08-26 15:34:17

Standard US-layout Sculpt. I'm on Mac, so I remapped alt -> command and win -> option. I just use the Dvorak layout included with Mac OS, and switch layouts with control-space. I never look at my keyboard, so the stuff printed on the keys doesn't matter to me :)

Kartik Agaram 2020-08-26 17:07:49

In my experience there's a correlation between typing faster and RSI. I used to use hjkl for arrow keys, but I stopped when my body started telling me to use the arrow keys.

Tyler Adams 2020-08-26 18:37:38

Fascinating, thank you everybody, keep it coming! Ivan Reese - I think the most interesting innovation will be in the field of RSI prevention, since that's what everybody's worried about. Although my post is focused on speed, speed is a very minor concern for many people compared to RSI. Both on the HW front with more comfortable keyboards, and on the SW front with different ways of letting users stay on their home row more

Chris Maughan 2020-08-26 18:40:56

Regarding RSI, I've always favoured low-travel, non-clicky keys. After 26+ years of programming full time, I've never had RSI problems or pains in my hands - except when I've tried to use a cherry keyboard. Now back-pain, that's another story 😉

Lukas Schwab 2020-08-26 19:16:44

Bought an ergodox recently to improve my posture. Getting used to ortholinear QWERTY wasn't too bad (except for learning to strike B with my left hand).

The rearrangement of modified keys has made writing code pretty slow; brackets/braces have been muscle memory for me for so long (and I have a habit of typing the opening bracket, then the closing bracket, and then the back arrow to fill the contents), but both brackets/braces and the arrow keys are displaced.

tl;dr: I prefer the split keyboard for typing text, even though I'm somewhat slower, because the improved posture is good. I do not prefer it for writing code.

Alexey Shmalko 2020-08-26 21:57:04

I've been a Workman user for ~4 years now. I tried Dvorak before but couldn't switch, and I like Workman better now. A couple of differences are: (1) it taxes the middle columns (and has better priorities overall), so hands rarely move sideways and there's less pinky load; (2) less emphasis on hand alternation and more "rolling" combos (e.g., t/h, s/t, d/r are next to each other, so it feels nice and easy typing them).

I also use Vim-like keybindings and they work surprisingly well. The only remapping I do is to swap j and k (because j is on the top row but means "down," and k is on the bottom row but means "up"; I couldn't make my mind adjust to that). Other than that, Vim bindings on Workman work great. As a bonus, you learn to use hjkl less and rely on better movement commands more.

As a downside, when I have to type on other people's keyboards, that's awful. I have to look at the keys or I type trash otherwise.

Andrew F 2020-08-27 06:17:30

Chris Maughan I've never seen that half keyboard before. That's wild!

William Taysom 2020-08-27 07:56:16

Advising a friend on a Sci-Fi story, I imagined what a keyboard a few decades hence might be. I recommended the character point in the direction where she wants the text to appear in the world, wiggle her finger as though swiping on a phone (yes, QWERTY style), and mumble what it is that she means to write (for the audience). We could imagine subvocalizations as well. Moral being that the noisy signals (subvocalizations and finger wiggles) get fused by the dictation software. Just a thought.

Chris Maughan 2020-08-27 09:55:17

@Alexey Shmalko I thought you were going to say you remapped 'jk' to escape, which is what I do. It's a real typing saver in vim: never reaching for escape again. Easy to learn, with only the minor inconvenience that you have to remap every vim editor you use with :inoremap jk <Esc>

Chris Maughan 2020-08-27 09:58:42

@William Taysom I always thought that a futuristic (or maybe not so future) keyboard would just involve wiggling fingers. I think it should be possible to learn to gesture stenographically in space. Perhaps the new iPhone will enable more experiments like this. I'm fascinated by this whole field of efficient gesture input; I even learned Teeline shorthand when I was a kid, for 'fun' 🙂 We have been hamstrung by keyboards for far too long.

The other reason we need gestures for typing is of course VR.

Alexey Shmalko 2020-08-27 10:03:36

Chris Maughan this trick doesn't actually work with Workman. Because of the rolling combos, most of the keys that are close together also occur often together in text. I mapped Esc to the Caps Lock position though

William Taysom 2020-08-27 14:22:55

Stenography! Now we’re talking!

Andrew Reece 2020-08-27 15:05:11

I've been using Colemak for a good few years now. I recently made some of the more common mods ("Angle", "Wide", "Curl-DH": https://colemakmods.github.io/ergonomic-mods/) because I started to notice RSI symptoms in my right pinky and realized how much of the keyboard it normally controls.

I've also remapped tapping caps lock to be escape and holding it to be ctrl, both very useful for vim.

(Another change I made recently was to entirely remap vim, e.g. making the equivalent of IJKL act as arrow keys instead of HJKL).

I'm on a normal keyboard atm (Microsoft Sidewinder X4), but I've been planning to make my own split ergo keyboard for a while, maybe the Dactyl (pictured)...

Happy to answer any questions.

📷 image.png

Andrew Reece 2020-08-27 15:07:08

When I occasionally move back to QWERTY it reminds me what a terrible layout it is - lots of stretching for digrams, and common letters out of easy reach. I changed more to reduce finger strain than for speed

Chris Maughan 2020-08-27 15:08:54

I'm curious if you have measured your speed? On QWERTY or otherwise.

Tyler Adams 2020-08-27 15:11:48

Andrew Reece how did you remap caps lock to tap vs hold?

Robbie Gleichman 2020-08-30 19:58:50

I have been using one of the Carpalx layouts for 10 years. http://mkweb.bcgsc.ca/carpalx/

Nick Smith 2020-08-27 03:37:49

I'd like to apologise to those with whom I've had recent conversations about linguistics, logic, and ontologies, for brushing off the relevance of the latter topics to the former. It seems these topics are more deeply intertwined than I had previously realised.

  • The field of logic emerged as a means of studying how humans reason using natural language. Logic is irrevocably tied to the structure of natural language, and therefore it seems foolish to try to add natural language to a programming system without basing it on logic.
  • Ontology is the study of categories and relationships. Ontology languages like OWL are actually based on Description Logic (I was surprised!), which could be perceived as a type system based on first-order logic. This gives us a formal way of conceptualising entities and having the computer check that a program is going to respect that conceptualisation.

I had been dissuaded from reading into the use of ontologies in information systems because most resources I encountered about them are about the "Semantic Web"; I had equated the two. I'm interested merely in programming systems, and accordingly I have no interest in trying to make or support global standards for the categorisation of information. But the fundamental idea of ontology is sound: it seems like a bridge between natural language, logic, and type systems. I'd be a fool to ignore it given I'm designing a logic programming system.

Next up: Several weeks studying the interplay between logic, natural language, ontologies, and type systems!
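
A toy illustration of that ontology-as-type-system intuition (a sketch with invented names, not actual OWL or Description Logic): subclass axioms plus a domain/range constraint on a relation, checked the way a type checker would check a program.

```python
# Toy ontology: subclass axioms plus a domain/range constraint on a relation.
subclass = {"Dog": "Animal", "Animal": "Thing", "Person": "Thing"}

def is_a(cls, ancestor):
    """Walk the subclass chain upward, like subtype checking."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = subclass.get(cls)
    return False

# "owns" relates a Person (domain) to a Thing (range)
relations = {"owns": ("Person", "Thing")}

def well_typed(subject_cls, relation, object_cls):
    domain, range_ = relations[relation]
    return is_a(subject_cls, domain) and is_a(object_cls, range_)

print(well_typed("Person", "owns", "Dog"))  # True: a Dog is a Thing
print(well_typed("Dog", "owns", "Person"))  # False: a Dog is not a Person
```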

William Taysom 2020-08-27 08:16:20

I have some familiarity with OWL. Worked with Pat Hayes. He has a short article from a while back expressing his thoughts: https://www.ihmc.us/users/phayes/CatchingTheDreams2003.html. I think this stuff didn't do much anywhere because Google and Wikipedia did instead. Maybe.

Ivan Reese 2020-08-27 15:41:56

(On a meta level — this was a fantastic #thinking-together post, Nick. Thank you for writing it.)

Robbie Gleichman 2020-08-30 20:07:09

I'm skeptical that logic is needed to understand natural language. For example, I don't believe Transformer translation systems explicitly model logic http://jalammar.github.io/illustrated-transformer/

Harry Brundage 2020-08-27 13:18:23

I was rewatching Are We There Yet the other day (https://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey/), which is Rich Hickey (Mr. Clojure himself)'s treatise on how he sees the world: how objects are lies, and how to model whole applications as streams of new states produced by pure functions, with structural sharing to keep performance good. super interesting to see him talking about this in the Java community in 2009 and see it percolate through to the JS community with immer and redux and whatnot, but i was struck by a certain feeling that i'd love to know if y'all share.

The whole OOP/functional debate, this talk, and frankly a lot of my thinking seem to principally be about modelling logic, and striving to get to some place where state is abstracted away. I/we seem to want to get to a world where I/we think mostly about computation and not about state change over time. I had a very strange experience switching from working on a backend team where state is the devil and ruins the performance of everything (it'd be so fast if it didn't store anything!) to working in analytics, where data is everything and where the code is a tiny little ever-changing bit of glue that manipulates this massive, permanent, far more important artifact. I found it nasty. It's nasty because the data captures every mistake ever made, which pile up and force every user to care about them until they're fixed. It's nasty because it's big and hard to make development responsive. It's nasty because it feels wrong to write "poor" code you run once to fix something then delete, and it is really hard to get a handle on the shape, or quality, or meaning of real big datasets.

I think I had (have) data-phobia, and it took getting immersed in a data-heavy product to realize that I think I/we have it backwards, and that the data is more important than the instructions for manipulating it, and deserves to be the focus, not the nagging feeling at the back of your head. What I was struck with in Rich's talk is that the epochal time model and FP writ large seem like they are born of the same phobia, trying to escape the shackles of state management in order to get back to some pure world of computation that doesn't actually exist. A bunch of Bret Victor's work circles around this too, where the instructions to run the code are way less emphasized than the (often visual) data created by what the instructions are actually doing. All the hover-to-see-the-value or watch-this-expression debugging tools are us being forced to go back from pure computation to look at the actual data flowing through once more. Airtable/spreadsheets are counter examples of non-data-phobic tools that seem to be easier to use, maybe because they put the data first.

So ... am I off my rocker? Is data-phobia a real thing, a force that has shaped our tools to demote a super important piece of our lives? Is there an antidote?

Duncan Cragg 2020-08-27 14:18:28

Yus data/state-phobia is real, as evidenced by both imperative (OO) and declarative (FP/LP) approaches, which deprecate or hide it.

Duncan Cragg 2020-08-27 14:19:21

and state -> f() -> state -> f() -> state is a good model for programming.
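
A minimal sketch of that state -> f() -> state picture (an invented bank-account example): each state is a new value produced by a pure function, and older epochs remain observable snapshots.

```python
# Epochal time model: every state transition is a pure function; old
# states are untouched values, kept as history (Datomic-style).
def apply_event(state, event):
    kind, amount = event
    delta = amount if kind == "deposit" else -amount
    return {**state, "balance": state["balance"] + delta}  # new value

events = [("deposit", 100), ("withdraw", 30), ("deposit", 5)]

epochs = [{"balance": 0}]
for e in events:
    epochs.append(apply_event(epochs[-1], e))

print(epochs[-1])  # {'balance': 75} -- the present
print(epochs[1])   # {'balance': 100} -- the past is still a valid observation
```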

William Taysom 2020-08-27 14:19:38

Thinking how structured programming shows the control flow of your program, the steps, but not the... yeah, what Duncan Cragg said.

Duncan Cragg 2020-08-27 14:20:25

IMAO (In My Arrogant Opinion) 😄

Paul W Homer 2020-08-27 15:13:20

I think it’s an artifact of the way we learn to program: http://theprogrammersparadox.blogspot.com/2020/08/duality.html

🔗 Duality

Konrad Hinsen 2020-08-27 15:15:52

Not sure if phobia is the right term, but I also perceive an aversion to dealing with data, particularly in the academic CS world, which would much prefer to concentrate on pure computations.

At the highest level, computing is always about that big mass of data that is sitting on your computer's disk. All that stuff that accumulated over time as the result of lots of computations and equally many user interactions. There is no way around this fact. That mass of data is the reason why we use computers at all.

On the other hand, that mass of data is also what we mess up all the time, so we have been looking for ways to do data updates in some principled way. In the end, that's what both OO and FP are about. OO divides the state into compartments so that we can look at small pieces at a time. FP focuses on data transformations, which it divides into compartments so that we can better reason about them.

Ivan Reese 2020-08-27 15:45:41

What Konrad said, but in joke form: that there's an "expressionless" emoji 😑 but there's no "stateless" emoji should tell you everything you need to know about the world.

Jack Rusher 2020-08-27 15:51:34

Rich's position seems to be not "state is bad" so much as "shared mutable state is bad" and "change occurs over time." An example of the former in the small is that your code will be easier for everyone, including you, to understand if it's made of functions that take inputs and return outputs rather than ones that change a shared scratch pad. The latter is why Datomic keeps all previous versions of what's been written to it and treats a handle to a database as pertaining to a snapshot at a particular moment in time.

Shalabh Chaturvedi 2020-08-27 16:21:34

Instead of "data" I like to think of the information expressed via "data" and really the "meaning" induced in our minds - which is what we really care about in the end. Yes I agree there isn't enough study of these aspects of computing. Many pieces of data can mean the same thing (xml, json, in-memory struct, db, object...). They use different mediums, but how do these mediums affect the representations? How do we determine equivalence? What can be directly represented in the medium vs what needs to be simulated (e.g. Objects can represent "identity", or you can simulate it via 'ids' in a system that doesn't have them. "Time" is built-in to some models, but may be simulated via version_ids, etags etc.).

Importantly how do these mediums affect the description of computations (programs)?

nicolas decoster 2020-08-27 16:30:22

It is interesting to note that in many languages Computer Science is called Informati(que|k|ca|...), focusing more on the data.

Harry Brundage 2020-08-27 18:39:44

I agree that the epochal time model for programming makes a lot of sense, and the small amount of work i've done with Datomic was nice. But it seems strange how much work Datomic has to go through to present to us a consistent snapshot of the world so our programs can pretend it isn't changing. The databases bend over backwards to present to us an unrealistic model of data because we want to program in this way that pretends things aren't changing or messy or big. i feel like i get intuitively why things have evolved that way, but i feel like there's a whole branch of research that i don't know about or hasn't been done yet for the alternative model of embracing state and change and the mess

Harry Brundage 2020-08-27 18:42:00

Rich went to great lengths to explain why observations are always out of date and why we should be designing for real latency between event, observation, and reaction, and i think the epochal time model fits that, but it seems to twist the data to fit the code when perhaps it could be the other way around

Harry Brundage 2020-08-27 18:43:14

(i feel it terribly heretical to disagree with anything Rich Hickey says and definitely don't know what I'm talking about, I just have this nagging feeling that we're missing something)

Shalabh Chaturvedi 2020-08-27 18:44:04

There are some branches of research that preserve 'identity'. E.g. see NAMOS/pseudo time from David Reed (1978) and a related work on Virtual Time. Here's a nice collection of links: https://prabros.com/readings-on-time

Harry Brundage 2020-08-27 18:45:11

like using Rich's analogy of the people watching a baseball field, i'm more interested in the players playing the baseball game, the ones who have to make decisions and affect outcomes, and they are perceiving data but reacting to it and changing it. for example, what's the programming equivalent of fast twitch muscle fibres vs slow twitch? or pre-game visualization? or 10000 practice swings of a bat? our bodies are extreme examples of perceptors that participate effectively in a highly complex, dynamic situation, i want to build things that can do that

Harry Brundage 2020-08-27 18:45:38

the audience is easy to build, they just eat pretzels and hoot and holler

Shalabh Chaturvedi 2020-08-27 18:45:50

So if you look at NAMOS/pseudo-time: it doesn't give up the idea of 'objects' or identity, and instead of putting the 'timestamp' into one database (a corner of your system), the timestamps are pervasively spread out all across the system (each message carries a timestamp identifying which 'version' it is from; these are pseudo (virtual) timestamps).

Andrew F 2020-08-27 18:51:13

Stateful computations are hard to analyze, for humans or machines, which means they're hard to get correct. Large masses of data are hard to understand. I think this is the basic cause of data phobia, to the extent it exists, and I don't think it's entirely unreasonable to program in a way that works around it.

Drewverlee 2020-08-27 22:42:32

Harry Brundage

It seems what you're talking about is presenting the probabilities. Which is a much more honest way of viewing reality. It's also one we humans are terrible at, a result that has been repeatedly shown in studies.

Most customers are much happier to see that there is one shirt in their size vs some function that accounts for the chance someone else purchased it by the time they get this message.

Nothing about the storage of observations at a given time prevents presenting a more robust model, though. It just needs to justify itself against the cognitive overhead it causes.

Garth Goldwater 2020-08-27 23:16:35

it’s almost like state is the ur-impurity—in the same way that functional programming languages still have to eventually interact with the outside world, the vast majority of user applications etc are only useful insofar as they modify or save some state (eg: a word document). the thing that frustrates me personally is that they never go so far as to eliminate state entirely. as in: a word document is a function of character and mouse presses. your saved file happens to be a snapshot of the result of running that function at with the timeline of inputs as it’s only parameter. if you really want to be data blind please go all the way

on the other hand, i really sympathize with this idea. most of what i want to do personally is get really good at modifying ASTs so i can write code quickly in a flow state. the bottom of all programming is a state change from no program source to more program source

Jack Rusher 2020-08-28 08:04:29

Harry Brundage It's interesting to see the phrase "alternative model of embracing state and change" when that's exactly what Rich is presenting in his various projects, relative to the normative approach taken by (say) most Java or C++ programs. I wonder if there's a better way to communicate that perspective. 🤔

Some of the problems here are a consequence of the kind of universe in which we happen to live: there's no central clock and all observations are local to the observer. Lambda calculus is a great way to model computation, but it is serial and operates within a single frame of reference. When we want to compute with multiple observers, which we very often do in a networked/multicore world, additional theory is needed to make things sensible.

Most approaches one encounters at the end of that road start to look more like biology, where there are -- using the terms in quotes because they're familiar, though not exactly correct -- "objects" with local "threads" that communicate via "messages". This can look like the https://en.wikipedia.org/wiki/Actor_model, https://dspace.mit.edu/handle/1721.1/44215, https://en.wikipedia.org/wiki/Π-calculus, or any number of other things, but they all share the idea that we're performing dataflow between "processors". (N.B. Nodes in a dataflow system can be thought of as lazy functions from inputs to outputs, possibly with local state, called incrementally by whatever execution engine is at work.)
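
A minimal sketch of that closing parenthetical (invented names; a push-based toy rather than any particular actor or dataflow library): a node is a function with local state, fed messages by the engine, emitting outputs downstream.

```python
# A dataflow "processor": local state plus a function from (state, msg)
# to (new_state, output), wired to downstream nodes. Push-based for brevity.
class Node:
    def __init__(self, fn, state=None):
        self.fn, self.state, self.outs = fn, state, []

    def send(self, msg):
        self.state, out = self.fn(self.state, msg)
        for downstream in self.outs:
            downstream.send(out)

def running_avg(state, x):
    total, n = state or (0, 0)
    return (total + x, n + 1), (total + x) / (n + 1)

avg = Node(running_avg)
printer = Node(lambda s, x: (s, print("avg:", x)))
avg.outs.append(printer)

for x in [1, 2, 3]:
    avg.send(x)  # prints avg: 1.0, avg: 1.5, avg: 2.0
```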

Stefan Lesser 2020-08-28 10:11:52

I’m sure there’s some zen-like state(!) of enlightenment one can eventually reach, where it becomes totally obvious that computation and state are one and the same. Something like the particle-wave duality in physics or so…

Dennis Heihoff 2020-08-28 17:38:31

My favorite way to think about this is data-phobia as a symptom/consequence of our tools. Rich also once noted that programming language and database designers are rarely the same people. No major programming language has much of a notion of real-world state: beyond mutable variables and data structures, ideas about persistence, querying data, etc. are missing. Arguably every programming language transforms data, but it rarely has a rich idea of where this data comes from and where it will go. Real-world information is an afterthought for all major programming languages, and if we buy into the medium-is-the-message notion, then the medium (code) only ever transforms some data that we usually can't see, because it's maybe too big to see or in a format that we can't (usually) view in our tools (images, video, animation). Beyond that, code is static. It transforms data, but the data can't be seen being transformed, so again the tool, the language, carries an implicit motivator for the programmer to write transformation code, not visualizations or comprehension tools.

Jack Rusher 2020-08-29 07:52:38

Stefan Lesser If we dig into this particular Buddha nature we find it at every level. 😊

Although we colloquially divide state from the algorithm that specifies transformations of that state, the algorithm itself exists as state encoded in the registers, stack, instruction pointer, and so on. Sometimes we modify the behavior of the system by having the algorithm change its own code while running. Okay, code is also data.

If we implement the naive B-tree algorithm, the shape (and thus performance) of the constructed B-tree depends entirely on the entropy present in the sequence of keys we insert into it. In this situation the B-Tree compiles a tiny state machine from the "code" of the input keys. Okay, data is also code.

This old Rob Pike quote (https://zoo.cs.yale.edu/classes/cs112/2012-spring/helpdoc/pike.html) also regards this matter:

"Code and data are the same, or at least they can be. How else can you explain how a compiler works?"

Everything is state and computation is just state over time.
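
Jack's B-tree point above, shrunk to a runnable sketch. A plain unbalanced binary search tree stands in for the B-tree here (my simplification); the same keys, inserted in different orders, "compile" into very different shapes.

```python
import random

def insert(tree, key):
    """Insert into an unbalanced BST represented as (key, left, right)."""
    if tree is None:
        return (key, None, None)
    k, left, right = tree
    return (k, insert(left, key), right) if key < k else (k, left, insert(right, key))

def depth(tree):
    return 0 if tree is None else 1 + max(depth(tree[1]), depth(tree[2]))

keys = list(range(64))
sorted_tree = None
for k in keys:
    sorted_tree = insert(sorted_tree, k)

random.shuffle(keys)
shuffled_tree = None
for k in keys:
    shuffled_tree = insert(shuffled_tree, k)

print(depth(sorted_tree))    # 64: a linked list in disguise
print(depth(shuffled_tree))  # ~12 typically: the keys' entropy did the balancing
```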

Steve Dekorte 2020-08-29 16:15:06

The functions vs objects and static vs dynamic typing wars seem to come from the assumption that all-or-nothing is the only reasonable option. Why can’t each of these have their appropriate use cases and the best system be one that can use each where the trade offs make the most sense for a given project’s goals?

Jonathan Edwards 2020-08-30 17:45:37

Harry Brundage I agree. State is essential to many problems, and the central concern of many users, yet is the source of much difficulty in programming. So we have tried valiantly to make it go away, or make it someone else's problem (the database). Some people peddle cures for state like snake oil, so don't believe everything you hear. State is still an unsolved problem.

Duncan Cragg 2020-08-30 18:32:45

I'd say that anyone creating an end user system should start with state and build around it.

🕰️ 2020-07-25 14:08:04

...

Jack Rusher 2020-08-28 07:38:53

Both because it's germane to this topic and because Tristan asked me to share it with anyone who might be interested, here's a trailer for an upcoming Netflix film that's trying to spread the good word about the dangers of the attention economy paperclip maximizer:

https://www.youtube.com/watch?v=uaaC57tcci0

Andreas S. 2020-08-30 17:37:27

I think Jack Rusher you hinted at the problem that it’s difficult to talk about the problem. I would agree in general with this but of course I’m still thinking about ways how to change the status quo. This documentary is a possibility in education about how the current system works. One of the more promising actual practices that I have found goes by the name of P2P learning. Has anyone heard of that or is interested in it?

Cameron Yick 2020-08-28 21:25:49

Pondering: how important is it for a making environment to be made from the same medium you’re making with if your main goal isn’t making interfaces? The Jupyter ecosystem has come quite far despite relatively few people using it to write JS: https://twitter.com/cmastication/status/1299366037402587137?s=21

🐦 JD Long: Observation from Jupyter Land: The Jupyter ecosystem has a big headwind because the initial target audience for the tool (Julia, Python, R) has a small overlap with the tool/skills needed to expand the ecosystem, namely Javascript.

That's not a criticism, just an observation.

Cameron Yick 2020-08-28 21:29:16

People have gone quite far with making R/Python libs to generate JavaScript, e.g. Streamlit/Plotly/Shiny (for R)

Ivan Reese 2020-08-28 21:52:12

I'm not entirely sure what meaning you have for "made from the same medium". If it helps, some of the specific terms of art for making the thing in itself are: meta-circularity (which is where a compiler is written in the language it compiles, like ClojureScript and gcc) and bootstrapping (which is, IIRC, where an interactive system is implemented entirely using things that can be edited from within that system at runtime, like some Smalltalk implementations). I'm going to assume you're referring to that sort of thing, perhaps in a slightly broader sense, which in my mind would include Jupyter being used to edit the same language it's implemented in.

As for how important it is — I would say it depends on what the goals of the environment are. It's a tradeoff.

Without knowing for sure what sense of "interfaces" you mean (GUIs? FFIs?), there are plenty of good reasons to do it, like the one mentioned in the tweet, or like wanting to prove that the system you're building is expressive enough to express itself (eg: if Jupyter could only handle ASCII, that would be a lack of expressive power needed to be a general JavaScript editor). On the other hand, there are a ton of reasons not to do it — perf, often being the #1 cited reason I've seen, but also simplicity, in that you might not want to spend the effort to make your environment so expressive that it can express itself.

Christopher Galtenberg 2020-08-28 22:19:55

That was PL302 in a four-minute lesson, nicely done

Cameron Yick 2020-08-28 22:28:15

Thanks for the very clear answer in spite of the slightly muddled question Ivan Reese ! I feel like a big part of why I was drawn to VSCode (from sublime text) was knowing I could write extensions in JS instead of Python.

Realizing that not every language/ecosystem aspires to, or has good ergonomics for, GUI/editor toolmaking makes me think that if someone wanted to change the Jupyter situation, it may be more efficient to get JS devs curious about plugin-making than to modify Jupyter so that plugins could be written using the science scripting languages, e.g. Python/Julia/R.

Jack Rusher 2020-08-29 07:17:28

In the best case, the environment is "made of something you want to make with" and is completely interactive. The biggest failing of projects like Atom and VSCode is that although they run inside a live environment they do not provide a REPL that allows code execution in the context of the editor itself. This makes plugin development a static compile-test-run sort of affair rather than an interactive "build up from small pieces" experience.

William Taysom 2020-08-29 09:28:55

Sort of ruins the fun.

Cameron King 2020-08-29 14:56:21

I hate to bring up the tired old distinction of "product vs process", but let me see if it has new life in this context. Is self-bootstrapping important? Is the software at hand supposed to be a concrete product providing a reliable experience, or is it a fluid process for experimenting toward experiences we haven't imagined yet? In "The Computer Revolution Hasn't Happened Yet", Alan Kay talked about Squeak as a tool to help you build the next version of itself. In contrast, he says that commercial Smalltalk didn't change much once it was released. It's difficult to build software products with a moving foundation. I think there's a fundamental tension between (r)evolutionary software-as-process and commercial software-as-product. The balance between progress and stability is tough to strike.

On a different note, one thing I like about self-bootstrapping systems is the conceptual unity. Once you "get it", you get a LOT. That's one reason I too jumped to VSCode--writing plugins using the same tech those plugins are supposed to work with reduced cognitive load. But then I got annoyed with VSCode because of the compile-test-run cycle. VSCode is a product, not a process. There is an essential staticness to it. You can't tinker with it to create something fundamentally other than what it is.

That's why I started experimenting with a self-bootstrapping JS editor. Ideally, application development and editor live-tinkering should be the same fundamental process. Ideally, I think end-user development abstractions should be built directly on top of developer constructs (e.g., hadron.app). I think self-bootstrapping, if taken to its limit, offers the chance to build a deep ubiquitous language for communication between users and developers in building pliable software.

Konrad Hinsen 2020-08-29 15:34:36

"How important..." for reaching which goal? Jupyter is a development environment like most others, in that it is a tool designed by tool designed for end users who are not supposed to modify it. So the toolchain used for Jupyter development is something Jupyter users don't care much about - otherwise they probably wouldn't use Jupyter.

Personally, I find tools that I can adapt myself much more empowering, and for reaching that goal it helps a lot to have a single medium. But such tools are rare, most people have never experienced one, and so they are not asking for it.

Jack Rusher 2020-08-30 09:34:29

@Cameron King The thing you're feeling here is why I've been using emacs for 35 years. While it's easy for me to imagine a new (especially more graphically capable) environment built around some language I like more than elisp, such a thing would absolutely need to be "emacsy" in this sense.

Chris Pearson 2020-08-29 07:01:21

I'd like to know more about how Eve managed reactivity (eg 'commit vs bind' and the idea of tables that contain events). What worked well? Did later iterations/inspired projects tweak this approach? How (if at all) can lazy vs eager reactivity be managed using this approach?

http://docs-next.witheve.com/v0.2/handbook/bind/

📷 image.png
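
As a rough mental model of commit vs bind from that handbook page (Python toy, not Eve syntax; all names invented): bound records exist only while their supporting search still matches, while committed records persist.

```python
# Toy model: bound facts are recomputed each tick and vanish when their
# supporting search no longer matches; committed facts persist.
def tick(base_facts, rules, committed):
    bound = set()
    for kind, matches, derive in rules:
        for fact in set(base_facts) | committed:
            if matches(fact):
                (bound if kind == "bind" else committed).add(derive(fact))
    return base_facts | committed | bound

# Whenever there's a click event, bind a highlight and commit a log entry
rules = [
    ("bind",   lambda f: f[0] == "click", lambda f: ("highlight", f[1])),
    ("commit", lambda f: f[0] == "click", lambda f: ("log", f[1])),
]

committed = set()
print(tick({("click", "button1")}, rules, committed))
# both ('highlight', 'button1') and ('log', 'button1') are present

print(tick(set(), rules, committed))
# the click is gone: the bound highlight disappears, the committed log remains
```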

William Taysom 2020-08-29 09:27:07

The quote sounds about right. I forget why, but I remember getting sequences of commits to work was tricky in Eve. Stepping back, first-class time is a really interesting idea that I'd like to see more systems explore.

Chris Pearson 2020-08-29 13:00:21

@William Taysom yes - Chris Granger made similar comments in 2018 here: https://news.ycombinator.com/item?id=16631333

Chris Pearson 2020-08-29 13:00:56

when you say 'first-class time', do you mean having a ticker (of varying granularities) that can be observed?

hamish todd 2020-08-30 18:07:21

In the thing I am making, you can't have a variable without choosing a specific example value for that variable. This is surely something that's been discussed here before since Bret does it in Inventing On Principle. What do folks think of it?

hamish todd 2020-08-30 18:08:54

With the way things are going I could embrace a more abstract way of doing things and be like "yeah don't worry, you can program without example values". I am pretty sure this is the right way to go though. My favourite mathematician/mathematics teacher, Tadashi Tokieda, says you should have lots and lots of examples

hamish todd 2020-08-30 18:09:42

It's not necessarily Bret-Victor-ian to have things be this way either; in theory, every time you use a debugger you are "programming by example"

hamish todd 2020-08-30 18:10:59

But I can certainly conceive of an experienced programmer/mathematician saying "no, the 'abstract view'/'general case', wherein a variable can take on many values, is fundamental and should be tackled first because [something]"

Mariano Guerra 2020-08-30 18:47:41

I came here to say programming by example, but I see you already mentioned it 🙂 as long as it works for your tool, I find it really nice, you can't build invalid things since you are always referencing a valid example