...

In the 1950s, Kenneth Iverson was a math professor at Harvard. He designed APL as an unambiguous, expressive mathematical notation: for teaching math to undergrads, and for his personal use in solving research problems and writing books and papers. Iverson then joined IBM as a mathematician, where he used his notation to formalize and specify the instruction sets of the 7090 and 360 computers. Only after that did IBM start the project to implement APL as a programming language.

APL doesn't usually get much credit for its influence on modern programming systems (although Mathematica, NumPy and TensorFlow are APL dialects), and it isn't usually credited as an early functional language, even though it was, to my knowledge, the first such language to have map/reduce primitives (under different names). APL now seems to be remembered mostly for its syntax.
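To make the "map/reduce under different names" claim concrete: APL's `+/xs` ("plus reduce") is an ordinary fold, and its scalar functions apply elementwise, which is the role `map` plays elsewhere. A minimal sketch in Python (the variable names are mine, not from any source):

```python
from functools import reduce
import operator

xs = [3, 1, 4, 1, 5]

# APL's +/xs ("plus reduce") is an ordinary fold over addition:
total = reduce(operator.add, xs)

# APL applies scalar functions elementwise; map plays that role here,
# doubling each element the way 2×xs would in APL:
doubled = list(map(lambda x: 2 * x, xs))

print(total)    # sum of xs
print(doubled)  # each element doubled
```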

Finally, my point. Re: Sussman, APL is a much better mathematical notation than Scheme.

Historical note: NumPy started out with a focus on implementing much of APL as a Python library. The function names in Numerical Python (as NumPy was originally called) are the names of APL operators. Later on, there was a movement to make NumPy more Matlab-like to win over Matlab users, so the APL heritage is no longer as clear as it used to be.

As for APL vs. Scheme: that really depends on which aspect of mathematical notation you focus on. Sussman comes from a symbolic-computation background, with an application focus on calculus (see his "Structure and Interpretation of Classical Mechanics" as the prime example). APL focuses on numerics and algebra. The only decent attempt I know of to unify both perspectives is Mathematica.

I have fond feelings for APL. It was the language in which I was taught a range of numerical methods by one of my favorite professors (here's his approach to the N-Queens problem: http://archive.vector.org.uk/art10007880), and I agree completely that it is under-appreciated for its place in CS history. However, I find the syntax leaves a good deal to be desired for work outside of its original niche, whereas the vector nature it embodies can be expressed quite naturally in any other functional programming language. It is for this reason that I prefer "APL-as-library", "PROLOG-as-library", &c.
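As an illustration of the "APL-as-library" idea: the vector operations can live behind ordinary functions built from standard functional primitives. A minimal sketch in Python, with hypothetical helper names loosely after APL's iota (⍳), compress (/) and reduce:

```python
from functools import reduce

def iota(n):
    # Like APL's iota: the integers 1..n as a vector.
    return list(range(1, n + 1))

def compress(mask, xs):
    # Like APL's compress: keep the elements of xs where mask is true.
    return [x for m, x in zip(mask, xs) if m]

def fold(f, xs):
    # Like APL's f/xs: reduce the vector with a binary function.
    return reduce(f, xs)

# "The even numbers up to 6, and their sum" in APL-as-library style:
v = iota(6)
evens = compress([x % 2 == 0 for x in v], v)
print(evens)
print(fold(lambda a, b: a + b, v))
```

The point is not that this is efficient (a real library would vectorize), only that the APL vocabulary ports over with no special syntax.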

Regarding the history of Mathematica, it was inspired almost entirely by Wolfram's extensive use of https://en.wikipedia.org/wiki/Macsyma and written in C rather than Lisp only because a colleague convinced him, while they were both at Caltech, that C was "the language of the future" (https://writings.stephenwolfram.com/2013/06/there-was-a-time-before-mathematica/). Wolfram was aware of APL, but to call Mathematica an APL dialect is in no sense correct.

Re: APL v Scheme as notation, I will say first that we disagree and second that even though we've only (virtually) known each other for a few weeks, your habit of expressing your subjective aesthetic opinions about Lisp family languages as if they were objective truths is already quite tiresome.



Papers alone are not great. It's not a matter of functional vs. imperative; it's a matter of completeness. 99% of papers are missing implementation details that can be incredibly hard for the uninitiated to fill in. Papers with artifacts are great: start from a running template, play with the parameters, see what happens. If you really want to start from scratch, you can, but you can always go back to an actual working artifact for reference.

My comment taps into the "reproducibility" conversation. See: https://ctuning.org/ae/

Good paper covering a lot of recent CS edu research in this area: https://scholarship.tricolib.brynmawr.edu/bitstream/handle/10066/22621/2020LowensteinS.pdf

Emmanuel Oga My thoughts were more in regards to pedagogy than reproducibility, but there is probably overlap here (it's difficult to reproduce what you don't understand).

My thesis is that you gain a deeper understanding by building your own model up from scratch. I learn far more by building from scratch than I do from just "playing with parameters."

That being said, reproducibility is an important goal in itself. I applaud any efforts to improve that area!

At least for my personal style, reproducing is often the first step to understanding or learning. Hot take: reproducibility matters more for teaching than research.

Reproducibility matters for learning, which is the common aspect of teaching and research. Learning is attaching new knowledge to an ever-growing edifice of solidly acquired knowledge. Reproducibility is about the solidity of that edifice. You have to be able to question your old knowledge and check if it is as solid as you thought it was, in the light of new knowledge. And that is true as much at the individual level (what we usually call "learning") as at the collective level (research, which is learning at the level of society).

This seems to describe making conceptions and discovering misconceptions. Pedagogically, reproducibility would concretely imply using approaches that minimize the appearance or lifespan of misconceptions and minimize time to build conceptions? Research-wise, that would mean cataloging conceptions and misconceptions and measuring their presence over time across large groups? Then, we could compare different pedagogies?

I agree that implementing something helps you understand/feel it better. In fact, that is Seymour Papert's idea of the microworld: creating microworlds helps in understanding certain concepts. In the case of LOGO, programming the turtle helps children feel (if not understand) concepts like angles.
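The classic angle fact children bump into with the turtle: to draw a regular n-sided shape, the turtle turns 360/n degrees at each corner, and the turns always total one full revolution. A graphics-free sketch of that arithmetic (function names are mine):

```python
def turn_per_corner(n):
    """Exterior angle a turtle turns at each vertex of a regular n-gon."""
    return 360 / n

def total_turning(n):
    """Sum of all the turns over one full trip around the polygon."""
    return sum(turn_per_corner(n) for _ in range(n))

# Triangle, square, hexagon: the per-corner turn shrinks,
# but the total turning is always a full 360 degrees.
for n in (3, 4, 6):
    print(n, turn_per_corner(n), total_turning(n))
```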

And I am convinced that this implementing-microworlds idea can extend to adults, to help them better understand some science even without great programming knowledge. As a fan of visual programming, I think tools like Scratch are perfect for this: you can implement a dynamic/interactive visualisation or microworld to help you grasp some concept.

One example: during a two-day workshop where I was teaching Scratch to some middle-school teachers, one wanted to use Scratch to create a small interactive animation to explain a plate-tectonics concept. She had never programmed before, but by the end of the workshop she was able to ask her students to implement this concept with Scratch.

I think my comment on "reproducibility" needs to be clarified a bit with some definitions.

There is the denotative, dictionary definition of "reproducibility": the ability to "reproduce" something (note that this is generic; it can be anything, not just a research paper), i.e. to build a copy or simulacrum that exhibits the same behavior or results as the original.

From what I see in the thread, we all basically agree that this is a good and useful thing, particularly for learning and understanding.

But there is a connotative definition of "reproducibility" often associated with scientific research. This definition has more specific context associated with it. In this sense, "reproducibility" is the ability to make a copy or simulacrum that exhibits the same behavior or results of a research paper specifically for the purpose of peer review and validation.

This definition often implies that the reproduction should be created completely independently from the original result, using only the information found in the paper (and associated references and domain knowledge), with no contact with the original authors.

This second definition is what I believe Emmanuel Oga was referring to, based on the context of his post and associated links. My response to that post was an attempt to separate the two concepts. I think I did a bad job initially, and hopefully this is more clear.

Yeah. Even if 99% of papers have reproducibility problems, I suspect the graphics papers that the (excellent!) demo link up top uses don't have that problem.

Perhaps we need a separate thread on the academic notion of reproducibility.

Ray Imber The terminology around this topic is a mess. There are people who have written articles just on what should be called what, and of course they disagree. That's life.

One initiative worth mentioning in this context is https://reproducible-builds.org/. It's about making (Linux) executables reproducible from source code, so that you can be sure you are actually running the program whose source code you are reading. It's an answer to Ken Thompson's Turing Award lecture "Reflections on Trusting Trust" (https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf).
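The core check behind reproducible builds is bit-for-bit identity: independent builders compare cryptographic digests of their outputs. A toy sketch of that comparison (real builds must also pin toolchains and scrub timestamps, paths, and other nondeterminism; the byte strings here are stand-ins):

```python
import hashlib

def digest(artifact: bytes) -> str:
    """SHA-256 of a build output; identical bytes give an identical digest."""
    return hashlib.sha256(artifact).hexdigest()

# Two independent "builds" from the same source, assuming the build
# process is deterministic (no embedded timestamps or randomness):
build_a = b"...same deterministic output..."
build_b = b"...same deterministic output..."

matches = digest(build_a) == digest(build_b)
print(matches)  # True means bit-for-bit identical
```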

fwiw I was thinking of reproducibility more as in "reproducible builds" than as "replicable experiment". The idea is that papers should come with their artifacts: a "one click" implementation that processes some provided example inputs and produces some example outputs, as described in the paper.
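The "one click" shape might look like the sketch below: run the paper's method on the provided example inputs and check against the provided expected outputs. Everything here is hypothetical (the stand-in `method` and the inline examples would really be the paper's algorithm and its shipped input/output files):

```python
def method(x):
    # Stand-in for the paper's algorithm; here it just sorts.
    return sorted(x)

# Provided (input, expected output) pairs; in a real artifact these
# would be read from files shipped alongside the paper.
examples = [
    ([3, 1, 2], [1, 2, 3]),
    ([],        []),
]

# The "one click" check: run the method and diff against expectations.
results = []
for given, expected in examples:
    got = method(given)
    results.append("OK" if got == expected else "FAIL")
    print(results[-1], given, "->", got)
```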

This is maybe independent of the conversation about whether the learner should build their own version from scratch or not (personally I do think "building from scratch" is the best way to learn). Even when building from scratch, comparing against a reference implementation is incredibly useful as a learning tool.