Been thinking of getting a Remarkable / Onyx Boox / general note-taking/drawing tablet, intended as a serious replacement for my mechanical-pencil-and-paper notebook setup. I'm using it as an opportunity to think about these devices as thinking/coding tools.
Remarkable 2 has replaced notepads for me. I love it and I would recommend it but I believe they recently went to some kind of subscription model. I was grandfathered in without paying because I was an early pre-order. I think they are all pretty similar. They all use the same e-ink screen.
I have an Onyx Boox Note 2. No longer available but the successors are overall similar. For me it's mostly a reading device which I appreciate enormously because of reduced eye strain compared to standard tablets. I use the stylus mainly for annotating stuff I read, not so much for note taking. It may be very good for that, but I didn't try. Habits...
I got myself a Samsung Tab S6 Lite, which comes with a stylus; still not using it properly as a thinking tool though
I've been (slowly) trying to implement a Smalltalk-78 interpreter (enough to run one of the images on the Smalltalk-zoo - smalltalkzoo.thechm.org/HOPL-St78.html) for my PineNote, and then I plan to redesign the core UI to work with pen input. It feels like an almost bare-metal Smalltalk (I am just leveraging a minimal linux system to handle the screenbuffer and the basic hardware) will give me a 'small-enough' system to be able to experiment with new 'digital-paper' like interactions
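(for the curious, the screenbuffer part is roughly this shape - a hypothetical Python sketch, not my actual code, assuming /dev/fb0 in an 8-bit grayscale mode at 1404x1872; in practice you'd query the geometry via the fbdev ioctls)
```python
# hypothetical sketch: paint gray bands straight to the framebuffer,
# assuming /dev/fb0, 8 bits per pixel, and a 1404x1872 panel
W, H = 1404, 1872
with open("/dev/fb0", "wb") as fb:
    for y in range(H):
        shade = 0xFF if (y // 128) % 2 else 0x00
        fb.write(bytes([shade] * W))  # one byte per pixel in 8-bit grayscale
```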
It feels like a lot of the ideas from Alex Obenauer's Lab notes (alexanderobenauer.com/labnotes/000) provide a design space for building this kind of full-object based personal system
I love the thinking behind Ink&Switch's crosscut system, but it feels hampered by the fact that experimenting with any new ideas needs the full edit/compile/upload cycle, instead of being able to make changes directly to the system live. One comment from Dan Ingalls that has stuck with me is that while the PARC systems that Alan Kay's LRG group built were slower than software from other groups (like the Bravo editor, which was written directly in BCPL), their turnaround time for experimentation was so quick that it more than made up for the slower system speed. With computers as powerful as they are now (even the PineNote boasts a 64-bit 1.8 GHz quad-core chip with 4 GB of memory and a GPU), there is no excuse not to have live editable systems directly on the devices we use.
The risk of using a live programming system is that once you are used to one, you just don't want to get back to edit-compile-run cycles.
And yes, I'd love to have Smalltalk-on-e-ink with a good touch interface!
i work on software for the remarkable tablet, mostly focused on using handwriting as a ui element or input. so far i've released github.com/bkirwi/folly, but also have a text editor and other things in various stages of completion
folly is a text adventure / interactive fiction interpreter, so not exactly FoC's bread and butter, but i started it as a way to experiment with some adjacent ideas like the read-eval-print loop, and using handwriting both as a visual element and as a command for the machine.
personally i find i think differently with a pen+paper metaphor than on a laptop / similar, and i'm curious about better ways to leverage that - most existing software for these tablets uses either a paper metaphor (with no interactivity aside from drawing lines etc.) or a mobile-device metaphor (very interactive, but doesn't take advantage of the text/ink-heavy visuals the hardware is good at)
@Naveen Michaud-Agrawal - i'd be curious how your system differs from existing smalltalk implementations / why one might need a new interpreter for e-ink stuff and not just a new ui
i've thought a bit about the interface side of programming on e-ink (eg. where the syntax doesn't need to be ASCII, but does need to be maximally visually distinct for the handwriting recognizer) but almost not at all about the implementation
@Ben Kirwin Folly looks interesting, thanks. The PineNote doesn't quite have the level of support for 3rd-party applications yet, so I've been prototyping using an interpreter on my desktop.
My approach is less about needing a new interpreter, but more that the Smalltalk-76/78 system feels small enough to be fully approachable (compared to Squeak/Pharo/GT). In addition, those systems all start with a UX system built around keyboard and mouse, and I'd like to explore UI ideas where the pen input is first class
Although I do vacillate between using my own interpreter and trying to get smalltalk-vm running on the quartz64 chip
@Naveen Michaud-Agrawal You might also check out Cuis Smalltalk, which is more modern than 76/78, but has the explicit goal of remaining small and understandable. cuis-smalltalk.org
Konrad Hinsen thanks, I forgot about Cuis. Although I'm looking for non-modern, the current smalltalk vms feel like they are way beyond my comprehension
that makes sense! i've never really spent much time with smalltalk; i forget sometimes how deeply intertwined the language and ui are.
From the user's point of view, they are, but fundamentally, they are not. You could create a Smalltalk with no UI other than an input text stream. In fact, GNU Smalltalk is almost at that level. Given such a Smalltalk, you could then build different UIs on top of it. It's not fundamentally different from other dynamic languages. The real difference is that the Smalltalk community has since the beginning valued good UIs and development tools.
oh, also related to taking advantage of the medium, if anyone hasn't seen: omar.website/posts/against-recognition
(i don't 100% agree with it, but found it thought-provoking)
@Ben Kirwin yes it's a good read. What parts don't you agree with?
mostly: i like recognition! it's why i'm using a computer instead of paper.
i really agree with a lot of the specific criticisms of the tablet ui, and how it goes too far to make the input legible to the computer at the expense of expressivity etc.
but there are other places where i wish the ui did a little bit more recognition... for example, searching for a particular phrase in my notes
and i think you really do end up wanting to use the whole spectrum of legible-to-human/legible-to-machine for different tasks / in different contexts
Have you ever seen the Grail demo from Rand Corp? Even the UI was constructed by hand (as a flowchart) - youtu.be/2Cq8S3jzJiQ
I do like Omar's point that the recognition shouldn't erase the original input, instead just annotating with the computer readable information
i've seen a little bit about the rand tablet, but not a proper demo - thanks for the link!
and yeah, that was definitely an inspiration for how folly does it
though really it could take it much farther - eg. letting users take notes and things in the margins, or allow editing old commands
in the next version maybe!
I actually just tried Folly now so here are some snap reactions:
hey, thanks for the feedback!
in case it's helpful as you mess around: the "folly tutorial" will repeat what it thought you were saying if it wasn't able to make sense of it.
understandable! one thing that's been tricky to balance is that gesture-/handwriting-based ui feels less discoverable, since there are fewer affordances on-screen. right now i'm leaning on the tutorial to fill some of the gaps, but it feels like there should be a better way to integrate that into the app itself...
I've always thought it would be interesting to do a "Marauder's Map" (Harry Potter reference) type UI for a story game on an ePaper device
Like the device is some sort of magical artifact that you interact with
If anyone has a writeup/previous discussion on here of this kind of thing, I'd love to see it. But essentially I was thinking about this kind of thing: youtube.com/watch?v=nqx2RKYH2VU&t=6s. Of course, Bret demo'd Stop Drawing Dead Fish and, I think, Drawing Dynamic Visualizations on touchscreens. I think a stylus is a good addition to that. It's slightly unpleasant to put one's finger on a touchscreen and try to do very precise manipulations of potentially pixel-sized things below it using one's relatively fat fingers.
Multitouch is nice in theory, but genuinely, aside from pinch-to-zoom/rotate, I know of nothing else good that uses it
we have a long running thread about programmable ink at ink&switch, with recent publication here: inkandswitch.com/crosscut -- I'd be up to connect and chat some more if that's interesting/relevant
Thanks, I read a lot of this page when you put it out but I didn't recall that it was intended for multitouch tablets
Ivan Reese in #administrivia @ 2022-05-16T16:42:37.709Z:
what even is a computer and what will we do with it?
I've actually been struggling with this question a whole lot outside this community. Perhaps I should bring y'all in:
The conclusion I've currently arrived at is:
This is a surprising, even shocking, conclusion for me to arrive at. I've always slightly looked down my nose at all the "tools for thought" conversations in this Slack. It's felt too close to productivity porn, best suited to avoiding doing anything productive. Suddenly they're super relevant.
But it's not enough to build more tools for thought. We have to think also about the process by which they're built. We have to ensure that the people-factory generating the convivial iPhone is also convivial. Because if it isn't, the conviviality will be short-lived as organizations kill or coopt it for their needs. The most important property to preserve, IMO, is to keep the raw code as naked and free from packaging as possible. It should be literally begging to be opened, inspected, tinkered with. Most software today fails to fit this bill. C programs come without source code by default. Browsers require big honking machines to be built from source. We write lovely naked Ruby code but then package it up into gems that go hide in some system directory where nobody can go look inside them.
This is what my future of software looks like.
Alan Kay had another answer. Paraphrasing here, the computer is a communications medium that allows us to share simulations of our ideas with other people which they can then run, modify, and test in order to better understand and question the ideas being communicated. It is personal in the sense that anyone can use it to create simulations of their ideas and share them. It is dynamic in that the simulation can respond to the recipient of the message who is attempting to understand it. The recipient can ask the simulation "what if?" questions of the sort that have only historically been possible when conversing directly with the person who has the idea.
It can also allow others to respond to and critique messages by pointing out flaws in a simulation and publishing an improved version.
Ultimately, by allowing us to think and communicate more deeply about complex issues, this could help bring about another enlightenment in the same way that the printing press helped bring about the last one by allowing people to communicate ideas and arguments that were too long to remember all at once.
I can't think of a cohesive final thought because there's so much here.
When someone builds a tool, it's because their brain is attempting to do what you said: externalize a thought. They're building a representation of their brain's wiring. It's amazing. It also means there are trillion-billion-billion ways to organize thoughts and ideas, which is also amazing but daunting.
This is why I'm building my project. My version of "tech" is that everything is a cog in someone else's machine. Everything. I used to flip boolean flags in iOS world hoping for views to come out differently, and would be surprised when they did - because there was no mental map I could rely on to help me understand what the hell this was actually doing. I wanted - need - the ability to look for myself, as arbitrarily deep or as shallow as a system allows. Big corp hates that, because then you understand what they did, and then you don't buy it anymore. Boohoo.
I want to click a file and see code. Then, knowing that every single token has a meaning - either as an identifier defined locally or by someone else - understand it in whatever context I wish. If you've seen the movie Arrival, with the Heptapod ink language, this is what I mean. Every curve and contour of your software does something and has a direct cause and effect. That's what all the computer science research did for us, and is doing for us. Codifying cause and effect and strengthening the guarantees that paradigm offers.
I want "the mechanic that grows up around a family shop" to be in exactly the same space as "the techie that grows up around a computer store". These people are not different - they're experimentalists.
The difference is the tools they have and can create, and who - in this current world - owns and allows new ones to be created.
It should be literally begging to be opened, inspected, tinkered with.
The first moment I ran my project and saw the code that was running it fly up in space and just... stare back at me, I had a feeling I didn't understand. I still don't understand what it was. It was kind of an accomplishment, but my brain just... did something. I felt a click. I saw every single individual glyph and line of code all at once and just went, "...huh".
My brain had never before that day used its optical nervous system to simultaneously process the entire visual representation of a codebase other than as a list of "files" and "directories", already abstractions I had to come up with visual metaphors for.
Kartik Agaram when you say "large organizations," what you're referring to specifically is capitalist corporations. The development of computing in modern times has followed the same driving force as all other industrial technologies. From the absurd keurig coffee maker to the ever-present WalMart (in the US) to the roadways that consume land to the very design of cities themselves. It all serves one primary purpose: profit. The only reason PARC and Vannevar Bush were able to get as much amazing work done in so many creative directions was a relatively absent profit motive (read: mostly unhindered research). I shudder to imagine what the computing world would look like if early development were dominated almost exclusively by the profit motive via corporations, as it is now.
I feel like there are mainly: 1. the scientific use of the computer - a tool for thought or an instrument for massive number crunching; 2. the general tool for automation - that's the business stuff we get paid for. It's not cognitive extension, it's free labour or a helper. It's just a very convenient tool for stuff.
You see the first case much more in universities; it's there, but it's not connected to business. You get random software that a prof made for enumerating mathematical objects in such and such space. That's the computer as a mind extender. It's lovely. I prefer programming computers for science, but society as a whole finds more use in the second case, and rewards accordingly.
I'll also throw out Ted Nelson's definition, that a computer is a general device for dealing with symbols and following plans. A generalized form of writing and paper. He argues that we only call it a computer because as a historical accident the first people to create one were using it to compute by manipulating symbols that stood for numbers.
"That's why it's called a computer. It's for computation."
Some people just refuse to see outside themselves. They sound and present a willingness only for small- and closed-mindedness.
"Don't you have file folders?"
"Yes."
"Isn't that enough?"
"... no!"
@David Brooks Jack Goldman, who helped run Xerox during the PARC days, once said that if it were only up to the whims of the profit motive, we would never have gotten a vaccine for polio. Instead, as he put it, we would have "gotten the best iron lungs you ever saw."
I always thought that summed it up nicely.
Are today's computers the polio vaccine or the advanced iron lungs?
I just remembered after many years this old website made by a friend of mine: whatarecomputersfor.net
I can't find a reference for it right now, but I seem to recall Seymour Papert saying something like "all adults are learning disabled." I also seem to recall this being part of Alan Kay's reason for focusing his research on children, because they were still able to learn new ways of thinking.
Maybe Planck's principle applies to more than just science?
I actually like profit. There's nothing wrong with profit. The trouble arises when our assessments of potential profit lack imagination. In particular, as group size increases, groups have a tendency to focus on short-term, stable profits. They're easier to defend in debate, and probabilistic investments become risky.
It's not just that ARPA gave us the internet while corporations couldn't. PARC at its peak was similar, and it came out of a for-profit company. I have a hard time imagining DARPA today accomplishing as much, though I'm not an expert.
Power = Resources - Accountability. Sometimes people with power do amazing things. Sometimes they don't.
(Sometimes they're Robert Moses who we thought for decades did amazing things, until Jane Jacobs opened our eyes.)
(I also saw something recently that the rhetoric about companies having to maximize profit is a recent thing, going back perhaps to the 70s: nytimes.com/1970/09/13/archives/a-friedman-doctrine-the-social-responsibility-of-business-is-to.html. So while maximizing profit is always a difficult problem, perhaps we've made it harder for ourselves in recent decades by forgetting the value of long-term vision.)
My response to all this is to avoid trying to decide what "we" should "all" do. Universal basic computation for all. Then make of it what you will.
what even is a computer and what will we do with it?
Collect, Question, Communicate - we eventually changed Collect to Gather.
If you honestly believe there's nothing wrong with profit Kartik Agaram , I would encourage you to research some voices that have a lot to say about that. Suffice it to say for this conversation that the profit motive has given us an extremely twisted major use for computing: social media. If profit above all else is the motive, then Facebook/Twitter et al will do whatever it takes to increase their profits. They do this by way of advertising money. The suggestion / content algorithms maximize "user engagement." And since we humans are hard-wired to pay attention to extremes ("if it bleeds, it leads"), the algorithm has no choice but to suggest more and more extreme content. We have witnessed this again and again in mass shootings and in general political divisiveness around the world. This barely scratches the surface.
And if you wonder "why doesn't Facebook do something about it," an internal FB group was instructed to research the extent of FB's influence on extremist activity and came to the conclusion that yes, FB is contributing to global instability and that the best course of action for the benefit of humanity was to fundamentally change FB's business model. Mark Zuckerberg waved the warning aside and told them to never bring it up again.
You're preaching to the choir there! Reread what I wrote. "Profit" != "maximizing profit" or "profit above all". I explicitly called out "maximizing profit" as harmful rhetoric.
Partly, I think the reason we've arrived at such an obviously local optimum is that consumers of computers and software typically go for the cheapest, fastest device that can run the most stuff, and leave out the more qualitative aspects of valuation to their own detriment. B2B sales certainly doesn't lend itself to optimal purchasing decisions either, so we are all left waiting for revolutions while the products we're stuck with get moderately better over the decades that they're around. More than anything else, the discipline of engineering struggles to find a foothold in computer software, partially because it's so lucrative and powerful that "agile" unscalable tinkering wins out, and partially because making well-engineered software is especially hard in a rapidly transforming medium - at least now it seems Moore's law has migrated to GPUs!
It seems we're stuck with Capital and all that brings, including the systemic drift to low performance that is only corrected by aperiodic paradigm shifts. What's exciting is this community seems as poised as any to offer one! I, and I think most others here, agree with your axiom to make software as free as possible - an important part of that, to me, is to extend that freedom to the end-user even if they aren't a "programmer" in the traditional sense. There are performance and complexity costs associated with that as well as the standard ugliness of proprietary systems, but things seem to be moving in the right direction, and the idea of such a system now isn't pure fantasy. Now, we just have to compete with AI-fueled black boxes that seek to forever murkify precious computing.
Time to dredge up Out of the Tar Pit again? It's the only thing I've ever seen in ~25 years that looks like it might actually simplify real-world apps
I'm trying not to be the greybeard who pours cold water over everything, but... gestures vaguely at decades of failure to create declarative systems
So I loved out of the tar pit when I first read it. On revisits, I'm not so sure about the promise.
I've worked on a number of different systems that you could call a realization of the kind of functional relational programming talked about in Out of the Tar Pit and it definitely didn't seem to be that much better.
The system I worked on most recently was in clojure, with a datalog database for the relational part.
Without going into details, the system has not been a success. Trying to achieve our goals, with reasonable performance was hard. The mapping between the inner frp world and the world around us was messy and lossy.
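(for anyone who hasn't read the paper, the shape it proposes is roughly this - a toy Python sketch with made-up data, nothing like the real system: essential state lives in bare relations, everything else is derived by pure functions)
```python
# toy sketch of the functional-relational shape, not the real system:
# essential state is bare relations (sets of tuples)...
orders = {(1, "widget", 2), (2, "gadget", 1)}
prices = {("widget", 10), ("gadget", 25)}

# ...and all other state is *derived*, by pure functions over them
def order_totals(orders, prices):
    price_of = dict(prices)
    return {(oid, qty * price_of[item]) for oid, item, qty in orders}

assert order_totals(orders, prices) == {(1, 20), (2, 25)}
```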
I think there are many great insights in that paper. But having lived in the clojure space for a while, (a space I love) I've cooled on the paper a bit.
Thinking about this some more overnight, I think the addition of first-class state machines into the model will really help
On paper it seems so good. We had lots of really smart talented people building these parts. But honestly, it just wasn't that great. Definitely not orders of magnitude better. Probably quite a bit worse
Layering violations are often necessary at scale, and when use cases get complex
I think there's potential but it needs a more comprehensive approach by baking the principles into compiler infrastructure and data stores
Every PL paradigm is better as a library/module used to code the part(s) of your program that it makes easy than as an ideology or all encompassing model.
@Alex Cruise have you used it? if so, can you say something about it that's more informative than their horrible enterprise marketing website?
I've been keeping an eye on its predecessors, "cloudstate" and "akka serverless"
Making event sourcing relatively easy, while also supporting CRUD, is nice... Having everything be based on protobuf is annoying, but IDLs that anyone is willing to actually use aren't exactly thick on the ground
If you can confine your appās needs for statefulness to one of those two models, and everything imperative you need to build is stateless, that can be a big win
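(to make the event-sourced model concrete - not Kalix's actual API, just a Python sketch of the idea: current state is a pure fold over an append-only event log)
```python
# sketch of event sourcing, not Kalix's API: state is never mutated
# directly, it's recomputed by folding over the event log
from dataclasses import dataclass

@dataclass(frozen=True)
class Deposited: amount: int

@dataclass(frozen=True)
class Withdrawn: amount: int

def apply(balance: int, event) -> int:
    # the only place state "changes": a pure function of (state, event)
    if isinstance(event, Deposited):
        return balance + event.amount
    if isinstance(event, Withdrawn):
        return balance - event.amount
    return balance

log = [Deposited(100), Withdrawn(30), Deposited(5)]
balance = 0
for e in log:
    balance = apply(balance, e)
assert balance == 75  # replaying the log reconstructs current state
```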
I wonder how Kalix specifically will handle very heavy load, but the Akka folks have been at it for a long time, and done some very large-scale testing
(not to mention being used in prod in many, many large-scale deployments)
Don't get me wrong, I think actors generally, and Akka specifically, are amazing, but they're very low-level and require significant brain rewiring
RelationalAI are quietly successful - hytradboi.com/2022/experience-report-building-enterprise-applications-using-logiql-and-rel
What would you put in an Information Management/Data Modelling version of 7GUIs?
That is, what difficult-to-model scenarios would you use to "stress test" different data formats/information systems? (I'm struggling for the words here, but I'm talking broadly about stuff like relational databases, JSON, XML, Java classes, etc.)
Here's one example:
Alice, Bob and Charles live at 123 Fake St.
Alice works as an Accountant and earns $20,000
Bob works as a Baker and earns $25,000
Charles works part-time as a Carer earning $10,000 and part-time as a Carpenter earning $12,000.
We want to be able to calculate the total household income easily.
In most systems the most natural way to model the first two people would be as a class/schema with a salary field/column, but this makes it hard to do the third person.
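(the relational fix is to pull employment out into its own relation, which is easy once you see it but tends not to be the first schema anyone writes - a Python sketch with made-up structures:)
```python
# sketch: model jobs as their own relation instead of a salary field,
# so Charles's two part-time jobs are just two rows
people = {"Alice": "123 Fake St", "Bob": "123 Fake St", "Charles": "123 Fake St"}
jobs = [("Alice", "Accountant", 20000),
        ("Bob", "Baker", 25000),
        ("Charles", "Carer", 10000),
        ("Charles", "Carpenter", 12000)]

def household_income(address):
    members = {p for p, a in people.items() if a == address}
    return sum(salary for person, _, salary in jobs if person in members)

assert household_income("123 Fake St") == 67000
```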
Some others:
Yeah, these expose quite a few limitations in my current design.
I think exceptions need nesting/inheritance + overwriting. CSS is actually pretty natural and intuitive for exceptions.
Meta is not easy. You need a way to point to the notation itself. My first thought would be citations/footnotes somehow.
The third one is actually best for me, I can limit attributes/relationships to a certain context, but it's not really queryable. It wouldn't allow you to get students on a particular day.
I am mainly influenced by JSON + this paper researchgate.net/publication/200043248_Conceptual_Graphs_for_a_Data_Base_Interface
yeah, this stuff is pretty hard, e.g. how do you model "Joe knows that Tim lives in Seattle"?
A signed document arrives on Tuesday. The person who signed it is found dead on the following Thursday. When might they have died?
I'm currently working with constraint answer set programming to deal with a lot of these sorts of problems. Exceptions are dealt with in the Bird Act demo at dev.blawx.com. I have a plan for causality and temporality using Event Calculus, but haven't gotten there, yet. Meta-data is currently being done in a sketchy way for the source of legal conclusions, but what I would prefer to do is use a higher-order logic representation for it, like you can do in Flora-2.
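(a made-up sketch of the interval view of that puzzle - not the ASP/Event Calculus encoding - where each fact is just a constraint on the unknown time of death:)
```python
# toy timeline, days as numbers
TUE, THU = 2, 4
death_lo, death_hi = float("-inf"), float("inf")

death_hi = min(death_hi, THU)  # found dead Thursday: died by then
signed_hi = TUE                # document arrived Tuesday: signed by then
# alive at signing: death is after the signing time -- but signing has no
# lower bound, so this adds no useful lower bound on the death
print((death_lo, death_hi))    # (-inf, 4): they may have died *before* Tuesday
```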
I have a doc somewhere about a short-lived pet project called "a metamodel for unreliable information", I wonder if I could find it
There was a striking comment that the semantic web style "triples all the way down" turned out to be insufficient, lemme see if I can find it
Here's the Semantic Web chapter, it should be in there youtube.com/watch?v=3wMKoSRbGVs&t=4003s
You can model everything he talks about with triples. I'm not sure I'd want to, but it's doable. In some sense, 6th normal form (of which triples is a crappier version) is the atomic form of data - all other structures fall out of it. What's useful at a practical level for modeling very complex things, like knowledge of someone else's knowledge, is still an open question.
Yes, I think ultimately every format is essentially "Turing Complete". You can model everything with everything, in the same way you can translate any algorithm into a mountain of NAND gates. That doesn't make NAND gates a good way to express algorithms.
You can come up with crappy ways of representing ordered lists in XML, at the same time I think it's fair to say XML doesn't really support ordered lists.
It's more about having a sort of Theory of Mind for the computer, so you can express yourself in a language fluently, and also be completely confident in how the computer is understanding what you are saying.
To use one of Chris's examples, we can say:
(Joe) (KNOWS) (Tim lives in Seattle)
Where (Tim lives in Seattle) (INSTANCE OF) (Fact about Tim)
But that clearly just doesn't count because the computer isn't understanding it in the same way I am.
This example is solved as a design problem when "Joe knows Tim lives in Seattle" is naturally related to the representation of "Tim lives in Seattle". THAT's the thing that RDF is going to struggle with
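(a made-up sketch of what I mean by "naturally related": the belief wraps the same value that represents the base fact, rather than a second, disconnected encoding of it)
```python
# made-up sketch: the inner fact is shared structure, not a re-encoding
fact = ("Tim", "lives_in", "Seattle")  # the base fact
belief = ("Joe", "knows", fact)        # a fact *about* that fact

# because the representation is shared, queries can relate the two:
def facts_known_by(who, beliefs):
    return [obj for subj, pred, obj in beliefs
            if subj == who and pred == "knows"]

assert facts_known_by("Joe", [belief]) == [fact]
```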
But anyway @Alex Cruise your title there has certainly whetted my appetite so if you do find it please share! Or whatever you can remember
My personal data modeling hell is when a 1-1 relationship becomes 1-n. What things stay the same (say across all the n), what should be aggregated, and what now needs to be different?
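(a tiny made-up Python illustration: once the field becomes a list, every old consumer of it has to pick a policy)
```python
# made-up illustration of the 1-1 -> 1-n migration headache
person_v1 = {"name": "Ada", "email": "ada@example.com"}
person_v2 = {"name": "Ada", "emails": ["ada@example.com", "ada@work.example"]}

def contact_line(p):
    # which one did the old code mean? the first? all of them? a designated
    # "primary"? each caller may need a different answer now
    return f'{p["name"]} <{p["emails"][0]}>'

assert contact_line(person_v2) == "Ada <ada@example.com>"
```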
Hello friends, recently Konrad Hinsen tweeted that we should document our "deep goals" and that has me thinking about how to formulate such things. If any of you have formulated your "deep goals", not just for your strategy or project, but the underlying motivating goal of your research, could you please share?
Since we have overlap as well as divergence in what each of us is working towards, I think formulating the goals may help us see where we overlap and diverge?
Konrad Hinsen (@khinsen@scholar.social): @NickSmit_ @jonathoda @chatur_shalabh @jackrusher Same for me. And I think it would help a lot if people made a better effort to document their goals, not just their work. Also "deep goals" and not just superficial ones.
As I replied on Twitter, my deep goal is creating computational media for research in physics and chemistry (plus possibly beyond). Media that encode observations, models, and their relations, relegating tools (code) to the background. In terms of "the classics", this is very close to Terry Winograd's ideas from "Beyond programming languages", specialized to a particular domain.
My idea of a deep goal is a goal expressed in terms of what people can do with computers. As opposed to technical goals, such as "getting rid of text files", or "making X faster".
One deep goal I often see here is to make programming easier, perhaps to non-programmers. Call this the "lower the floor" goal.
A second deep goal I see here is to reduce the effort of programming. That can help either beginners or experts, but for contrast with the previous one, call it the "raise the ceiling" goal.
My preferred deep goal is to make programs easier for anyone to look inside. "Lower the floor for reading." I'm less concerned about the writing experience.
Obviously there's tons of overlap between these goals. But I'm starting to realize that that's a problem rather than a good thing. It makes it harder to see true potential collaborators.
An orthogonal axis is means towards these goals. Some people build research projects, some people build commercial products for others to use. By the nature of my goal, I have to try to build finished (open source) products that people can use. They're stable, they last a long time, and oh if you eventually want to look inside them, hopefully that's easier than with most other products you use.
Making programming more accessible is definitely a worthy goal, but I think this goal requires more precision. In fact, I don't believe there is a universal abstract activity called "programming". You need to pick an audience and an application context, and maybe more. And as you note, reading and writing are not the same problem either.
More precision may help gain adoption, but it might shrink your tent too much if the goal is to find collaborators.
The way I think about it is programming is a skill that can support any activity. Bending the computer to help with whatever you do. So while some tools are more specific, it seems reasonable to be general-purpose. I'm fairly flexible on the initial product. My plan has always been a small "empire" of little products. (It's been slow going but will hopefully be faster now.) If one product gets me an opportunity to collaborate I'd be happy to prioritize it.
I'm kind of surprised to see myself say this. But I don't have a deep goal, or if I do, it is a deep meta goal.
In programming and in my other hobby (philosophy), I've found that as I learn more, my tastes change, morph, and expand. What I see in philosophy that I think is wonderful is the incredible diversity of views and yet persistent dialogue going on between these views. Philosophy becomes this rich discussion of trade-offs, of clear conversations driving to find where the true disagreement lies. Programming, on the other hand, seems to lack this depth.
So if I have a deep goal, it is a meta goal. To explore many different deep goals. To see many different approaches flourish. To push and understand. To find where real disagreement is rather than superficial. To find connections where others see difference.
I think the things that make us different here are much less important than the things we have in common. What I think matters in those differences is to understand them and to find out the strengths in each approach and to find a way, not to combine them all into some multi-paradigm mess, but to see them all from their perspectives.
It'll take some effort to write this up the way I want, so it'll probably end up as a blog post rather than a Slack message.
Thanks Konrad Hinsen, what you wrote above and science-in-the-digital-era.khinsen.net/#Computational%20media is clear and certainly overlaps with stuff that interests me.
Being better at choosing, writing, and following rules.
Jason Morris - could you elaborate a bit on how you think what you wrote overlaps with computing? Specifically what kinds of rules are we talking about?
Jimmy Miller - loved your reply. I do agree that with programming we can't say where the true disagreement lies. I mean "static types vs dynamic" and "functional vs OO" seem somewhat narrow and fuzzy debates that aren't quite getting at something rich. Perhaps there are some key notions that could make these discussions feel more crisp? Often I feel I am in this boat as well... "seeking perspectives" that connect and sever other ideas. What came to mind after reading your reply was
Not All Who Wander Are Lost
(I mean.. I know I am lost, but not all who wander are.)
Perhaps it is useful to have both kinds of goals be explicit? The meta goals as well as the currently-being-explored perspectives/goals.
Kartik Agaram I do think your website paints a fairly clear picture around your goals and approaches. I got the idea about "lower the floor for reading" (but not by the usual readability).
Finding useful collaboration (and avoiding unproductive collaboration) is one reason I'm thinking about this goals stuff. Another (ulterior) reason is just so I have ideas on how to formulate goals and approaches for myself. Say each one of us in the community puts the answer to this question on our site... it may make it easier to know even what one might want to read more of. Since there are many levels of goals, I'm also curious to see how people think about this.
A more structured webring? Random site from the same city/state/country/solar system.
Kartik Agaram I agree that it is possible and in certain ways advantageous not to specialize too early on some application domain. But I doubt that any concrete project (real, working code) can aim to be general purpose. The people whose lives we want to make easier come from various backgrounds and have widely different goals. Often those goals are defined by existing technology. And I doubt many people have the wish to "learn programming". It's nerds like us who have such strange ideas
Shalabh my use case is "Rules as Code" which is an approach to policy development, public admin, regulatory drafting, and compliance. So laws, mostly. The tech is symbolic AI (specifically, stable model constraint answer set programming) applied to legal knowledge representation and reasoning, with a user-friendly interface. Combining "user-friendly" with "symbolic AI" is the FoC part.
I keep drafting a reply to this thread but keep giving up. I guess it's a good exercise.
I think my overall goal is to shorten the development loop. Remove everything that is not strictly necessary (this includes build tools), and switch to reactive hot-code reloading and other technologies that make iterating much faster. I build on Observable because it solved most of the main issues already, but I was thinking along these lines when I tried to develop a functional reactive animation system called Animaxe in 2016 github.com/tomlarkworthy/animaxe
I see Observable as the proper realization of what Animaxe was ineffectively grasping at, so I just need to upgrade Observable to be able to program the things I am interested in, i.e. backend programming, hence webcode.run
Tom Larkworthy Have you used any systems with faster feedback than hot-reloading?
Tudor Girba: Our goal at @feenkcom is to make systems explainable. In particular, their inside.
https://twitter.com/khinsen/status/1526075007050334208
I started Awake with the publicly stated goal (March 2019) of co-creating 100M new jobs through 10M digital entrepreneurs, by building a more fair Internet
Building In Public is a great way to go
New guest post on the blog from my friend Anton (twitter.com/atroyn): Visual Debugging Now!
nickarner.com/notes/visual-debugging-now-may-20-2022
Includes a project proposal that some in this community may find interesting to take up
This post is making my pulse race. I'm actually a bit emotional, heh. I have been struggling so.. damn.. hard to find people that can succinctly put into words the different ways the sides of the programming and development worlds are all trying to converge on one thing: seeing what the hell you're doing.
If I may be allowed a minor plug, #share-your-work has a link to my project where I try to tackle this very thing. It's specifically the idea of viewing code, but I'm working on finding and building the tools that let you do the analysis - Swift is really, really lacking in this respect so far. That means things like runtime state poking, type lookups and definitions, etc., are all gated by the big barrier between the high-level language and all the goodies that C provides.
In Python there are readily available viz modules, and people use them. They suck because they're mostly not interactive and are effectively stuck in the 70s.
This is the sentence that does it for me. I feel it viscerally, and I just can't write, think, or learn fast enough to get my ideas into code.
Absolutely interested in chatting more ❤ I'm in the code now even, haha
(I split this out of the deep goals thread) Jack Rusher asks:
Tom Larkworthy Have you used any systems with faster feedback than hot-reloading?
maybe?
Just to clarify what I mean by hot-code reload vs live reload: live reload is a full restart of a program on change, which loses state, vs hot-code reload, where only changed code is reloaded, which maintains program state between partial restarts.
But I would say Observable is a little beyond hot reload, because it's also a notebook format, so program output and code are interleaved; that removes a context switch between IDE and program output and gives you a REPL vibe inline too. But that's as advanced as I have got so far for fast feedback. Can I do better?!
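(a toy Python sketch of the hot-reload half of that distinction, assuming a hypothetical sibling module logic.py with a handle(state) function that you edit while the loop runs:)
```python
# toy sketch of hot-code reload: re-execute only the changed module,
# keeping program state alive across reloads (a live reload -- full
# restart -- would reset `state` every time)
import importlib
import logic           # hypothetical sibling module defining handle(state)

state = {"clicks": 3}  # lives outside the reloaded module, so it survives

while True:
    input("edit logic.py, then press enter> ")
    importlib.reload(logic)  # hot reload: re-runs logic.py's top level only
    logic.handle(state)      # new code, old state
```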